Last Update 1:30 PM January 13, 2026 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Tuesday, 13. January 2026

Ockto

How do you set up a digital-only application process that really works for everyone?

In the social domain, the need for efficient and scalable application processes is growing. Digitalization is the obvious route, but it also raises concerns. What if the target group consists precisely of people who struggle with digital tools, do not speak the language, or are simply hard to reach? The Noodfonds Energie shows that it can be done differently. By deploying technology smartly and carefully, supplemented with the right support, a digital-only application process can be made accessible, even for the most vulnerable groups.


Noodfonds Energie: High volumes in just a few days

Sharing income data quickly and securely, even under high pressure

When energy bills became unaffordable for hundreds of thousands of Dutch households, the Noodfonds Energie sprang into action. In the 2025 edition, more than 224,000 households submitted an application in just one week. How do you organize such a process quickly, digitally, and reliably, without vulnerable applicants dropping out?


Spherical Cow Consulting

A Field Guide to Digital Identity Standards Bodies

In this episode of The Digital Identity Digest, Heather Flanagan offers a practical field guide to digital identity standards, explaining how organizations like the OpenID Foundation, W3C, IETF, and FIDO Alliance shape specifications, drafts, and published standards through very different processes and cultures. Discover how to interpret standards maturity, understand what a draft really means,

“This is not a thrilling blog post. There are no hot takes, no predictions, and no urgent calls to action. What it is is a reference.”

I spend a lot of time following and participating in standards development across a handful of open standards development organizations (SDOs): the OpenID Foundation, the W3C, the FIDO Alliance, and the IETF. These groups all shape the future of technology in general, and digital identity in particular, in different ways. They all have their own processes, norms, and decision-making structures. It’s a big learning curve, no doubt about it, to figure out where and how to participate.

Over the coming months, I plan to do deeper dives into specific working groups and specifications that matter to the digital identity ecosystem. When I do, I’ll often be talking about where a piece of work is in its lifecycle, what kind of feedback is being sought, and how close (or far) it is from becoming something implementers can rely on.

Rather than re-explaining each organization’s process every time, this post serves as a standing reference. It lays out, at a high level, how the SDOs I follow most closely turn ideas into published standards, where exploration occurs, who can participate, and how decisions are made.

If you’re already steeped in standards work, much of this will feel familiar. If you’re newer to it, this should help explain why two “drafts” from different organizations can mean very different things. Either way, I’ll be pointing back here throughout the year as a shared baseline for understanding how this work moves forward.

Grab a beverage; this is going to be dry.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

tl;dr: an SDO comparison chart

How major standards bodies develop specifications.

| Dimension | OpenID Foundation (OIDF) | W3C | FIDO Alliance | IETF |
| --- | --- | --- | --- | --- |
| Participation model | Membership-based; contributors must agree to OIDF IPR policy | Membership-based, with invited experts | Membership-based; participation tied to membership class | Open to anyone; participation is individual, not organizational |
| Who can join WGs | Any individual or organization agreeing to IPR policy | W3C Members, invited experts, and staff | Sponsor & Board Members by default; others by invitation | Anyone may participate via mailing lists and meetings |
| Decision-making style | Consensus-first; formal votes when needed | Consensus-first; Director and AC oversight | Formal voting with defined thresholds | Rough consensus; no formal voting |
| Primary standards body unit | Working Group | Working Group | Working Group | Working Group |
| Open / exploratory groups (non-standards) | Community Groups — open participation, no standards output | Community Groups — open participation, incubation only | No direct equivalent; exploratory work typically happens within member-only groups | Birds-of-a-Feather (BoF) sessions |
| Can exploratory groups produce standards? | No | No | n/a | No |
| Purpose of exploratory groups | Idea incubation, coordination, early collaboration | Incubation, cross-community discussion, early drafting | Exploration typically occurs inside formal structures | Test interest, scope problems, and decide whether to charter a WG |
| Path from exploration to WG | CG output may inform a WG proposal | CG work may feed into WG chartering | WG chartered directly by Board | BoF may lead to a WG charter proposal |
| Early-stage documents | Contributions → Work Group Drafts | Working Drafts | Pre-Draft → Working Draft | Internet-Drafts |
| Mid-stage signal to implementers | Implementers Draft | Candidate Recommendation | Implementation Draft Specification | Proposed Standard |
| Highest internal status | Final Specification | W3C Recommendation | Proposed Standard Specification | Internet Standard |
| IPR participation trigger | Explicit contributor agreement | Membership / invited expert agreement | Membership agreement | Participation governed by the Note Well |
| Document mutability | Final specs fixed; errata via revised versions | New versions replace prior Recs | New versions required | RFCs are immutable; updates via new RFCs |
| Core emphasis | Interoperability and ecosystem alignment | Web-wide interoperability and horizontal review | Deployability and security | Deployability, operational reality, “running code” |

How OpenID Foundation specifications are developed

OpenID Foundation specifications are developed through an open, consensus-driven process (described in detail here) designed to balance innovation with stability.

- Working Group formation: New work begins in a formally chartered Working Group (WG), created through a public proposal that defines its scope, goals, and intended deliverables. Participation is open to individuals and organizations that agree to the OpenID Foundation’s intellectual property policy; only approved contributors can submit technical contributions or participate in formal decision-making, though drafts and discussions are publicly visible for broader review and feedback.
- Contributions and Work Group Drafts: Technical work starts with contributions from approved WG participants. Once adopted by the group, these become a Work Group Draft, which is iterated openly as the WG resolves feedback and builds consensus.
- Implementers Draft: When a draft is considered stable enough for real-world testing, the WG may advance it to an Implementers Draft. This signals that implementers can begin building against the specification while providing feedback.
- Final Specification: After additional review and approval—both within the Working Group and across the OpenID Foundation membership—a draft may be approved as a Final Specification. At this stage, the specification is considered stable and ready for broad adoption.
- Errata and maintenance: Final Specifications may later receive limited errata corrections to fix errors or clarify language, without changing core functionality.

Throughout this process, decisions prioritize consensus, transparency, and documented review, ensuring specifications are both technically sound and suitable for long-term interoperability.

How W3C specifications are developed

W3C specifications are developed through an open, consensus-driven process designed to support global interoperability on the Web, while balancing technical rigor, broad review, and implementer feedback. The full details regarding their process are here.

- Working Group formation: New work at the W3C often begins with exploratory discussion in a Community Group, where participation is open and ideas can be tested without committing to a standards track. When the work is ready to advance, it may move into an existing Working Group or prompt the creation of a new WG, which is chartered by the W3C based on a public proposal defining scope, goals, deliverables, and success criteria. Participation in a WG is open to W3C Members, invited experts, and staff; while formal decision-making is limited to group participants, technical discussions, drafts, and issues are publicly visible to enable wide review and input.
- Draft development (Working Drafts): Technical work progresses through a series of Working Drafts, which reflect ongoing development and refinement. These drafts are explicitly incomplete and may change substantially as feedback is incorporated and consensus evolves. Drafts are reviewed not just by the working group participants but also, as they get closer to Candidate Recommendation, via horizontal reviews that consider potential issues around internationalization, accessibility, security, privacy, and overall architecture for the web.
- Wide review and Candidate Recommendation: When a specification is considered feature-complete, it advances to Candidate Recommendation. This stage focuses on implementation experience—confirming that the specification can be implemented interoperably and that its requirements are clear, testable, and practical.
- Recommendation track: After demonstrating sufficient implementation experience and resolving outstanding issues, a specification may advance through Proposed Recommendation and ultimately become a W3C Recommendation. At this point, it is considered a stable Web standard suitable for broad adoption.
- Maintenance and updates: W3C Recommendations may later be updated, clarified, or superseded as the Web evolves. Substantive changes typically require progressing through the Recommendation track again, rather than being applied directly to an existing standard.

Throughout this process, the W3C emphasizes transparency, documented consensus, horizontal review, and real-world implementation as prerequisites for standardization. It’s also worth noting that the W3C publishes more than just specifications. Their process docs describe several different types of publications that inform the architecture and design of the World Wide Web.

How FIDO Alliance specifications are developed

FIDO Alliance specifications are developed through a member-driven Working Group process that emphasizes implementation readiness, formal voting, and strong board oversight—reflecting FIDO’s focus on deployable authentication standards. Their process is described in Section 4.4.2 of their Membership Agreement.

- Working Group formation: All FIDO deliverables are developed within Board-chartered Working Groups. Only Sponsor Members and Board Members have full participation and voting rights; Associate and Government Members may participate without voting, and in some cases by invitation. Each Working Group is led by a Board-appointed Chair, with Editors responsible for managing technical drafts.
- Pre-Draft and Working Draft: Work begins when a Working Group participant submits an initial draft, which is acknowledged as a Pre-Draft. With approval by a simple majority of the Working Group, it becomes a Working Draft and serves as the active basis for technical development and iteration.
- Review Draft: When the Working Group believes a deliverable is sufficiently mature, it may advance the document to Review Draft status by a supermajority vote. Review Drafts are shared with the full FIDO membership for feedback and, for specifications, trigger a formal intellectual property review period.
- Implementation Draft (Specifications Only): After the review period, the Working Group may recommend a specification for advancement to Implementation Draft status. This step requires approval by a supermajority vote of the FIDO Board and signals that the specification is ready for implementation and interoperability testing.
- Proposed Standard Specification: Specifications intended for broad deployment or submission to external standards bodies (such as the IETF) may advance to Proposed Standard status. This requires a full supermajority vote of the Board and represents the highest level of maturity within the FIDO Alliance.
- Publication and external submission: The Board controls when and how FIDO deliverables are published or shared outside the membership, and may authorize submission of Proposed Standards to external standards organizations with the appropriate intellectual property commitments.

Throughout this process, FIDO emphasizes formal voting thresholds, defined intellectual property commitments, and Board-level approval to ensure specifications are both implementable and safe for widespread adoption.

How IETF specifications are developed

The Internet Engineering Task Force (IETF) develops technical specifications through an open, individual-driven process that emphasizes broad participation, rough consensus, and real-world implementation. Unlike membership-based standards bodies, the IETF operates as an open community: participation is based on contribution, not organizational affiliation. There are a lot of process documents for that SDO, in part because it’s been around a very long time. This website is probably the best place to start if you want to dig into the details. 

- Working Group formation and open participation: Most IETF work happens in chartered Working Groups (WGs), which are approved by the IETF leadership and scoped to solve a specific technical problem. Before a WG is formed, individuals may request up to three Birds-of-a-Feather (BoF) sessions to either explore a topic informally or, in the case of a working group–forming BoF, to develop and test support for a proposed WG charter. Anyone may participate; in fact, everyone nominally participates as an individual, not as a corporate representative. Discussions take place primarily on public mailing lists, with in-person and virtual meetings used to support progress.
- Internet-Drafts (work in progress): All IETF specifications begin as Internet-Drafts (I-Ds). These are openly published, time-limited documents that can be revised frequently. Drafts may be authored by individuals or adopted by a Working Group; adoption signals that the WG agrees to work on the document but does not imply approval.
- Rough consensus and Working Group review: Technical decisions in the IETF are made by rough consensus, not formal voting. WG chairs assess consensus based on discussion and sustained agreement, prioritizing technical merit and deployability over unanimity. Silence does not automatically equal consent, and unresolved objections must be addressed before work can advance.
- IESG review and publication approval: When a WG believes a draft is ready, it is submitted for review by the Internet Engineering Steering Group (IESG). This review includes technical, security, and process checks, as well as broader community review. Approval by the IESG authorizes the document to move forward for publication as an RFC.
- RFC publication and document types: Approved documents are published by the RFC Editor as RFCs (which used to stand for “Requests for Comment” but they gave up on that years ago). RFCs are immutable once published. Not all RFCs are standards; common categories include:
  - Standards Track (intended for interoperable implementation)
  - Best Current Practice (BCP) (operational or procedural guidance)
  - Informational (background or descriptive material)
  - Experimental (early or exploratory work)
- Standards Track maturity (current practice): Standards Track documents typically progress from Proposed Standard to Internet Standard, based on demonstrated interoperability, implementation experience, and operational stability. While earlier versions of the process defined additional maturity levels, current practice focuses on these two stages.
- Maintenance, updates, and obsolescence: RFCs are never revised in place. Errors are addressed through published errata, and substantive changes require a new RFC that updates or obsoletes the earlier document. This preserves a permanent, auditable technical record.

Throughout the process, the IETF prioritizes openness, transparency, and deployability—favoring specifications that can be implemented and operated at Internet scale over theoretically complete designs.

I do want to add a quick reminder here that not all RFCs are produced by the IETF; the Internet Research Task Force (IRTF), the Internet Architecture Board (IAB), and the Independent Submission stream also work with the RFC Editor to publish RFCs. What I’ve described in this post is the most common way to get an RFC published, and the only way to get an IETF standard published, but never forget that the RFC Series is more than just IETF standards.

Choosing where to take the work (it’s not just about process)

Understanding how different standards bodies operate is useful, but it’s rarely the decisive factor in where work actually lands. In practice, choosing an SDO is less about picking the “right” process and more about understanding where the conversation will be most productive.

There are no clean lines dividing one SDO from another. Authorization specifications might reasonably belong in the IETF or the OpenID Foundation. User-facing authentication, account selection, or credential presentation could sit in the W3C or the FIDO Alliance. In some cases, the work spans multiple domains — and sometimes, as with WebAuthn, it spans multiple organizations at once.

A few practical considerations tend to matter more than formal process:

Who needs to be in the room

Standards succeed when the right mix of implementers, platform vendors, and relying parties are engaged early. Some SDOs attract browser vendors and web platform architects; others draw identity providers, protocol designers, or security hardware vendors. If the people who will ultimately implement or deploy the work aren’t participating, progress will stall no matter how elegant the process.

Where the problem naturally lives

Some problems are inherently infrastructure-level: wire protocols, token formats, cryptographic mechanisms, or authorization semantics. Others are user-facing: browser behavior, consent flows, account selection, or device interactions. While there’s overlap, different SDOs have different instincts about where responsibility should sit: in the protocol, the platform, or the application layer.

Maturity and risk tolerance

Early, exploratory work often benefits from looser structures and faster iteration, while mature, widely deployed technologies demand stronger guarantees and clearer intellectual property commitments. An idea that starts as a discussion or prototype may move between organizations as it solidifies and its audience becomes clearer.

Existing momentum

Work rarely starts from scratch. A related working group may already exist, or a community may already be circling the same problem under a different name. In those cases, joining an existing venue, even if it’s not a perfect fit, is often more effective than trying to bootstrap something new.

Willingness to collaborate across boundaries

Some of the most impactful identity standards have emerged through coordination across SDOs, not strict ownership by one. WebAuthn is a clear example: developed jointly by the W3C and the FIDO Alliance, it benefited from both web platform alignment and deep authentication expertise. That kind of collaboration is harder, but sometimes necessary.

Ultimately, standards work follows people and problems more than org charts. The most successful efforts tend to start where the conversation is already happening, evolve as understanding improves, and remain open to shifting venues when the work, or the ecosystem around it, demands it.

It’s kind of messy

Standards development is rarely linear, and it’s almost never confined neatly to a single organization. The processes outlined here provide structure, guardrails, and shared expectations, but they don’t determine where good ideas come from or where meaningful progress happens.

As I dig into specific working groups and specifications over the coming months, I’ll refer back to this post to help frame what stage the work is in, what kind of feedback is being sought, and what “progress” actually means in that context. If nothing else, this should make it easier to follow along without needing to decode each standards body’s process from scratch every time.

This isn’t meant to be exhaustive, and it won’t age perfectly. That’s fine. Standards evolve, processes get revised, and communities shift. Think of this as a shared reference point, something to return to when the details start to blur. It helps to zoom out and remember how the machinery fits together.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

[00:00:00] Welcome to the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. I’m Heather Flanagan, and every week I break down interesting topics in the field of digital identity—from credentials and standards to browser weirdness and policy twists.

If you work with digital identity but don’t have time to follow every specification or hype cycle, you’re in the right place.

Setting Expectations for This Episode

[00:00:26] Let’s be honest right up front: this episode is not the most exciting thing you’re going to hear today.

[00:00:36] There are no bold predictions here.
No hot takes.
No countdown to the “next big thing” in authorization.

[00:00:45] So if that’s what you’re listening for, you may want to bookmark this and come back later.

Why This Is a Reference Episode

[00:00:50] What this episode actually is, instead, is a reference.

I spend a lot of time following—and participating in—standards development across several standards development organizations (SDOs), including:

- The OpenID Foundation
- The W3C
- The IETF
- The FIDO Alliance (which I follow closely, though I don’t participate directly)

[00:01:11] These groups shape the future of digital identity and technology in very real ways.

However, they all do it differently.

Why Standards Feel So Different Across Organizations

[00:01:19] Each organization has:

- Its own culture
- Its own processes
- Its own instincts about how ideas become usable specifications

This is why a “draft” in one organization can feel stable and real, while a “draft” somewhere else feels tentative and exploratory.

[00:01:48] If you’ve ever wondered why that is, this episode is designed to explain exactly why.

How This Episode Fits Into Future Content

[00:01:57] Over the coming months, I’ll be diving deeper into specific working groups and specifications that matter to the digital identity ecosystem.

Rather than re-explaining standards processes every time, I’ll be pointing back to this episode—and its companion blog post—as a baseline reference.

[00:02:18] This makes it easier to talk about:

- Where work sits in its lifecycle
- What kind of feedback is being sought
- How close something is to real-world implementation

Who This Is For

[00:02:44] If you live and breathe standards work, much of this will feel familiar.

If you’re newer—or if you’ve only worked in one standards organization—this should help explain why similar labels can mean very different things depending on where the work lives.

[00:03:04] So yes—grab a beverage. This one is dry, but useful.

The Big Picture: What “Standards” Really Means

[00:03:08] All four organizations we’re discussing produce things we casually call “standards.”

But that doesn’t mean:

- The same process
- The same expectations
- Or the same outcomes

[00:03:25] Some organizations are membership-based.
Some are open to anyone.
Some rely on voting, while others rely on consensus.

None of these models is inherently better—they’re simply optimized for different kinds of problems.

The OpenID Foundation Process

[00:04:19] The OpenID Foundation is membership-based, but participation in working groups is relatively open once intellectual property policies are accepted.

Key characteristics of the OpenID Foundation process include:

- Formally chartered working groups
- Clearly defined scopes and deliverables
- Iterative draft development

[00:04:49] When contributions are adopted, they become working group drafts, which may iterate for quite some time.

[00:05:00] A draft may advance to an Implementer’s Draft, signaling readiness for real-world testing—but not finality.

[00:05:14] Only after additional reviews does a specification become final, at which point changes are limited to errata.

The W3C Standards Model

[00:05:50] The W3C process often begins before a working group exists.

Early exploration typically happens in Community Groups, which are:

- Open to anyone
- Explicitly not on the standards track

[00:06:31] Once work enters a formal working group, drafts progress through well-defined maturity stages.

[00:06:56] Horizontal reviews assess:

- Accessibility
- Internationalization
- Privacy
- Security
- Overall web architecture

[00:07:04] Implementation experience becomes critical at the Candidate Recommendation stage, where interoperability is tested.

How the FIDO Alliance Approaches Standards

[00:07:59] The FIDO Alliance is intentionally different.

It is strongly member-driven, with:

- Board-chartered working groups
- Board-appointed chairs
- Voting rights tied to membership class

[00:08:38] Advancing a specification requires supermajority board approval, particularly at the implementation and proposed standard stages.

This conservative approach reflects the high stakes of authentication and security work.

Understanding the IETF Mental Model

[00:09:41] The IETF requires an entirely different mindset.

Participation is:

- Open to anyone
- Individual-based (not company-based)
- Governed by the IETF Note Well

[00:10:44] All work starts as Internet-Drafts, which are:

- Public
- Time-limited
- Expected to change frequently

[00:11:07] Decisions are made by rough consensus, not voting.

[00:11:50] Once published as RFCs, documents are immutable—updates happen through new RFCs, not revisions.

Choosing the Right Standards Venue

[00:12:08] A common question is: Which process is right?

The answer is simple—it depends.

What matters most are practical considerations, such as:

- Who needs to be involved
- Where the problem naturally lives
- How mature the idea is
- How much ecosystem risk is acceptable

[00:13:07] Standards work tends to follow people and problems more than organizational charts.

Final Thoughts and Looking Ahead

[00:13:43] This episode isn’t exhaustive, and it won’t age perfectly—and that’s okay.

Standards evolve.
Processes change.
Communities shift.

[00:14:06] Think of this as a shared reference point—a way to zoom out when the details start to blur.

Closing the Episode

[00:14:16] If you made it this far, thank you for sticking with a very dry but very practical episode.

Next time, I promise something a little more opinionated.

[00:14:30] And that’s it for this week’s episode of the Digital Identity Digest. Stay curious, stay engaged, and let’s keep these conversations going.

The post A Field Guide to Digital Identity Standards Bodies appeared first on Spherical Cow Consulting.

Monday, 12. January 2026

myLaminin

Are You Compliant? The Overlapping Rules Governing Research Data Today

Researchers today must navigate a complex landscape of privacy, security, and compliance requirements. This compliance primer breaks down five key frameworks—HIPAA, PHIPA, PIPEDA, GDPR, and NIST 800-171—highlighting what they cover, how they differ, and why they matter. Learn how consent, data protection, technical security, and cross-border responsibilities intersect, and what research teams need to stay compliant in a global, data-driven environment.

Friday, 09. January 2026

liminal (was OWI)

Liminal Demo Day: Application Security in the Age of AI

The post Liminal Demo Day: Application Security in the Age of AI appeared first on Liminal.co.

Dock

The Missing Identity Layer for AI Agents (And Why OAuth and KYA Aren’t Enough)

AI agents can already search for flights, build shopping lists, and compare insurance quotes. What they can't do safely is complete the purchase. Today, the only way for an agent to actually book that flight or buy that item is for the user to share their account credentials. This creates obvious security risks for users and liability problems for merchants who have no way to verify the agent's authority.

To address this, we’ve seen two approaches starting to emerge. One extends OAuth-style delegated access into agent workflows. The other focuses on Know Your Agent (KYA) verification frameworks that certify agent developers.

Both are valuable. But neither was designed to solve the core problem of AI agent identity.

OAuth answers the question: “Can this software access an account?” 

KYA answers the question: “Who built this agent?”

But agentic commerce requires something more fundamental:

Who authorized this agent, what is it allowed to do, for how long, and who is accountable for its actions?

Without cryptographic proof of delegation, scope, and human accountability, merchants can’t safely transact with autonomous agents, and users can’t confidently delegate real purchasing power. The result is a trust gap on both sides of the transaction.

This isn’t a model capability problem. AI agents are already capable enough. It’s an identity infrastructure problem, and approaches based purely on OAuth and KYA don’t fully address it. In this article, we’ll break down where they fall short and what’s actually required to enable trusted autonomous agent transactions.
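To make that requirement concrete, here is a minimal, hypothetical sketch of a delegation credential that binds an agent to an accountable person, a scope, and an expiry. It is not Dock's product, an OAuth extension, or any published standard; the field names are illustrative assumptions, and the HMAC signature stands in for the asymmetric signatures a real credential format would use.

```python
# Illustrative sketch of an agent delegation credential.
# Field names and the HMAC signature are assumptions for demonstration only;
# a real deployment would use asymmetric keys and a standard credential format.
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # placeholder; a real issuer would hold a proper signing key

def issue_delegation(user_id: str, agent_id: str, scope: list, ttl_seconds: int) -> dict:
    """Create a signed statement of who delegated what to which agent, and until when."""
    payload = {
        "delegator": user_id,                       # the accountable human
        "agent": agent_id,                          # the agent acting on their behalf
        "scope": scope,                             # actions the agent is allowed to take
        "expires_at": int(time.time()) + ttl_seconds,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return payload

def verify_delegation(credential: dict, required_action: str) -> bool:
    """Merchant-side check: valid signature, not expired, and the action is in scope."""
    claims = {k: v for k, v in credential.items() if k != "signature"}
    body = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(credential.get("signature", ""), expected)
            and claims["expires_at"] > time.time()
            and required_action in claims["scope"])

cred = issue_delegation("user:alice", "agent:travel-bot", ["book_flight"], ttl_seconds=3600)
print(verify_delegation(cred, "book_flight"))     # True: delegated, in scope, not expired
print(verify_delegation(cred, "transfer_funds"))  # False: outside the delegated scope
```

The point of the sketch is the shape of the check: a merchant can verify delegation, scope, and expiry before accepting an agent-initiated purchase, which is exactly the gap the article describes.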


Elliptic

The de-risking paradox: why cutting off digital asset businesses increases systemic risk

A December 2025 report from the U.S. House Financial Services Committee has reignited debate over whether American banks are systematically denying services to digital asset businesses. The 51-page document details what critics call "Operation Choke Point 2.0" and arrives just as high-profile account closures are making headlines.


ComplyCube

What Is Digital Identity Verification and How eID Transforms It for Fintechs

Electronic identification (eID) aims to transform the digital identity verification space. It provides a government-backed and rapid method for validating and onboarding customers. eID is crucial for fast-growing global fintechs.

The post What Is Digital Identity Verification and How eID Transforms It for Fintechs first appeared on ComplyCube.


Spruce Systems

How to Build a Digital ID People Actually Want to Use: 5 Lessons From the Field

The success of digital ID isn’t measured by credentials issued, but by how often people reach for it in real life.

Digital identity programs succeed or fail based on one key factor: whether people actually use them. The technology can be flawless. The security can be exceptional. But if residents don't adopt it, none of that matters.

The most significant barrier to success for any digital identity program will not be technical feasibility but adoption. The industry's experience with mobile driver's licenses shows that uptake can be slow without clear incentives for both residents and verifiers.

The lessons below come from real-world deployments, programs that have issued millions of credentials and learned what drives adoption in practice.

Lesson 1: Anchor on Everyday Use Cases and Make Them Visible

The fastest path to adoption is relevance. Prioritize scenarios people already encounter: proof-of-age checks, traffic stops, airport screening, campus or health check-in, and benefits access.

When residents see their credentials accepted at TSA checkpoints, by a bartender checking age, during a routine traffic stop, at a pharmacy counter when picking up a prescription, or at a university or clinic check-in desk, digital ID stops feeling novel. 

Repeated exposure in familiar, low-friction moments builds trust and habit. Over time, people stop thinking of digital ID as an experiment or pilot and start recognizing it as everyday infrastructure they can rely on.

Lesson 2: Get Big Names on Board Early

Secure and publicize acceptance at TSA checkpoints and major employers/retailers recognized by consumers, such as Amazon, CVS, and others. TSA now lists participating states and checkpoints where digital IDs are accepted with CAT-2 readers; being on that list materially boosts perceived legitimacy and travel utility.

This creates a flywheel: visible acceptance drives resident enrollment, which in turn drives more verifier adoption, which leads to further enrollment. Foster adoption with verifiers by demonstrating how credential acceptance enhances and streamlines compliance, improves fraud prevention, and ultimately can lead to savings.

Don't stop at high-profile acceptance. Enlist relying parties through a formal partner program. Colorado runs a Partner Program and publishes participating businesses and law-enforcement agencies; it also provides a simple "how to verify" guide for front-line staff (visual checks plus tech options). States could launch partner programs with logos, window decals, training one-pagers, and a searchable directory, plus an RP sandbox and sample code.

Lesson 3: Reduce Friction for Everyone

Adoption stalls when any part of the ecosystem faces unnecessary barriers.

For verifiers: Offer a lightweight verifier certification, reference SDKs, and test suites so banks, hospitals, universities, and retailers can integrate quickly and confidently. The most significant barrier for relying parties is complexity or cost in implementation. If verifier solutions are hard or expensive to deploy, organizations, especially smaller businesses or local agencies, will be reluctant to participate.

For residents: Making digital ID easy to use safely is a key prerequisite to adoption. The more complicated it is to set up and use, the higher the risk of compromised credentials and a loss of trust. Design for the real world: with robust account recovery for lost devices, intuitive interfaces, and clear prompts when something requires attention.

Incentivize participation via rewards, not just mandates. Offer small integration grants or fee waivers for early verifiers; pre-certify point-of-sale and age-check vendors; and recognize partners in a public directory to drive foot traffic.

Lesson 4: Use Policy to Unlock Adoption

Technology alone won't drive adoption; policy clears the path.

Executive or administrative directives can authorize acceptance across state agencies, while clarifying that physical ID remains a fallback until digital ID is ubiquitous. States should pair statewide acceptance guidance with updates to ABC/bona-fide-ID rules so retailers can accept digital credentials legally.

Communicate clearly and often. Provide resident-facing guidance that digital ID complements, not replaces, physical ID during the transition, including plain-language FAQs. Use simple explainer pages and in-app education to demystify where credentials are accepted and why privacy protections are stronger online than showing a full physical card.

Design for accessibility and inclusion. States should ensure wallets/verifiers meet accessibility requirements from day one to broaden eligible users and avoid exclusion.

Lesson 5: Earn the Trust Dividend

Privacy protections aren't just ethical requirements; they're adoption accelerators.

A public education campaign highlighting these privacy-preserving attributes builds confidence and counters perceptions that the program represents increased government surveillance. When residents trust that their data stays under their control, they engage more willingly.

This is the trust dividend: by enforcing privacy through technology rather than policy alone, you reduce the friction of adoption. The system proves itself trustworthy through its design, which features no centralized data storage, no hidden tracking, and no phone-home behavior that allows issuers to monitor usage. Educate users regularly about security risks and how to use their credentials safely, promoting trust and transparency.

The Business Case

Streamlined digital service delivery decreases the costs of manual verification and paperwork processing, freeing resources for higher-value activities. Stronger identity assurance makes it easier for residents to securely access services they're entitled to, and measurably cuts fraud in benefits programs.

Private-sector reliance on government-backed credentials (particularly in finance, healthcare, and regulated industries) amplifies the return on investment by lowering compliance costs for businesses and reinforcing the government's role as a trustworthy anchor of digital trust.

But these benefits only materialize with adoption.

The Bottom Line

Digital identity in the U.S. is at a turning point. Wallets are maturing, verifiers are ready, and policymakers are acting. The question is no longer whether to build digital identity programs, but how to build ones that people actually want to use.

The answer starts with the resident: their daily transactions, their privacy concerns, their tolerance for complexity. Design for them first, and adoption follows.

If you're building a digital identity program and would like to discuss your adoption strategy, please get in touch. We've helped states navigate these challenges and would love to share what we've learned.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


auth0

Taming P99s in OpenFGA: How We Built a Self-Tuning Strategy Planner

Learn how OpenFGA used Thompson Sampling to reduce P99 latency by 98%, moving from static rules to a dynamic, self-tuning strategy planner for graph traversals.
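The technique named in that summary can be illustrated with a small, self-contained sketch. This is not OpenFGA's actual planner (OpenFGA is written in Go and tracks real latency signals); it is a generic Thompson Sampling loop under the assumption that each query reports whether a chosen traversal strategy met a latency target, with hypothetical strategy names.

```python
import random

# Beta(1, 1) priors over "this strategy meets the latency target" for each
# hypothetical traversal strategy; the names are illustrative, not OpenFGA's.
strategies = {name: {"success": 1, "failure": 1}
              for name in ("forward_expand", "reverse_expand")}

def choose_strategy() -> str:
    """Thompson Sampling: sample a plausible success rate per strategy, pick the best draw."""
    return max(strategies,
               key=lambda s: random.betavariate(strategies[s]["success"],
                                                strategies[s]["failure"]))

def record_outcome(name: str, fast_enough: bool) -> None:
    """Update the chosen strategy's posterior with the observed outcome."""
    strategies[name]["success" if fast_enough else "failure"] += 1

# Simulated feedback: assume reverse_expand meets the target 90% of the time, forward_expand 40%.
true_fast_rate = {"forward_expand": 0.4, "reverse_expand": 0.9}
for _ in range(2000):
    picked = choose_strategy()
    record_outcome(picked, random.random() < true_fast_rate[picked])

print(strategies)  # the better strategy accumulates far more trials over time
```

The appeal for tail latency is that exploration never fully stops, so a planner built this way can keep adapting if the relative cost of strategies shifts with data shape or load.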

Thursday, 08. January 2026

liminal (was OWI)

The Intelligence Gap (And How to Close it For 2026)

The difference between an average year and a breakout year isn’t luck. It’s signal-to-noise ratio.
For leaders in Fraud, Compliance, Cybersecurity, and Trust & Safety, the old way of planning with annual reports and backward-looking autopsies is too slow for the threat landscape you operate in.

High-performing teams work differently. They build an always-on intelligence layer that delivers real-time market insights and sharpens every decision, from budgeting to roadmap to GTM. This guide breaks down how they do it and how you can operationalize the same approach inside Link.

1. Automate your radar

If you cannot see the market move, you cannot respond to it.

The first step in building an always-on intelligence layer is creating a clean, reliable view of everything happening around you. Inside Link, this begins with the tools that reveal how your market is shifting in real time and help you filter signal from noise before decisions stack up.

What this solves

Teams often discover critical shifts only after they become headlines.

What this is in Link

Company & Product Discovery, Use Case Explorer, and Monitor

How top teams use it
- Create watchlists for regulators, competitors, fraud vectors, and market themes
- Follow categories to uncover new vendors and capability movement
- Filter Signal by intent, including regulatory actions, funding, product changes, and GTM moves

The Result

A real early-warning system that improves awareness and response time.

2. Align your strategy to a single source of truth

When every team works off a different map, no one moves forward.

A strong intelligence layer depends on a shared understanding of the market. Link gives teams one consistent model of vendors, capabilities, and categories so decisions are grounded in the same foundation rather than scattered assumptions. This alignment ensures teams are basing their intelligent decision-making on the same data, not competing versions of reality.

What this solves

Conflicting spreadsheets and misaligned interpretations across teams.

What this is in Link

Liminal’s market taxonomy, Use Case Explorer, and Capability Rankings

How top teams use it
- Anchor product, GTM, and risk teams to the same taxonomy
- Use Link’s market model to center planning discussions
- Replace static spreadsheets with always-current shared boards

The Result

Teams make faster decisions with fewer disagreements.

3. Outlearn the market (and the competition)

Teams that learn the fastest gain the advantage.

Every decision improves when you understand where your competitors are going and where buyers are shifting. Link helps teams stay ahead by revealing capability trends, white space, and competitive movement across the market.

What this solves

Guesswork around where to compete and how to differentiate.

What this is in Link

Persona Explorer, Capability Rankings, and Segment Filters

How top teams use it
- Identify which features are becoming table stakes
- Spot capability gaps and underserved customer needs
- Track competitor direction through funding, hiring, and product updates
- Analyze shifts in GTM messaging to refine your own narrative

The Result

Clearer positioning, sharper product decisions, and more confident sales teams.

4. Pre-empt the headlines

The cheapest lessons are the ones you learn from someone else’s mistakes.

Risk teams gain leverage when they can study failures across the ecosystem before similar vulnerabilities surface internally. Link surfaces real-time events and breakdowns so leaders can learn from external patterns without waiting for internal impact.

What this solves

Teams reacting only after internal failures occur.

What this is in Link

Monitor, Company & Product Discovery, and Use Case Explorer

How top teams use it
- Follow regulators and key categories to monitor developing risks
- Save and tag incidents connected to policy or product failures
- Build boards that track patterns and inform audits or readiness reviews

The Result

Better foresight and stronger internal controls.

5. Compound your intelligence

Intelligence should improve with time, not reset each quarter.

An effective intelligence layer is cumulative, and Link gets smarter the more you use it. As you save, tag, and organize data, the platform becomes a bespoke engine for your specific risk appetite, providing teams with a centralized place to capture and build upon the insights that would otherwise be lost in emails and siloed documents.

The Result: 

By Q1, you aren’t just working harder; you are navigating with a higher-resolution map than anyone else.

Ready to upgrade your intelligence layer?

If you want to enter the new year with clarity rather than guesses, let’s get you set up.

LINK: Book a 20-Minute Strategy Walkthrough (We’ll even build your first set of custom watchlists for you on the call.)

The post The Intelligence Gap (And How to Close it For 2026) appeared first on Liminal.co.


Elliptic

Venezuela sanctions: 3 steps for crypto compliance teams

Key takeaway: Maduro's arrest doesn't mean Venezuela sanctions are going away. Restrictions could be relaxed, tightened or reimposed as events unfold. Compliance teams should use this moment to review policies, assess Venezuela-related exposure and ensure screening systems are ready to adapt.


The Maduro indictment: a blockchain intelligence perspective

Key takeaway: The capture of Nicolás Maduro has renewed attention on crypto and state-level illicit finance. When sanctioned actors try to convert digital assets to fiat currencies, they must interact with services that can identify them. Blockchain intelligence reveals these off-ramps, giving enforcement agencies the advantage.


Thales Group

New SkyBridge Alliance selects Thales to accelerate Europe’s Air Traffic Management modernisation

Air Navigation Service Providers (ANSPs) from the Czech Republic (ANS CR), Estonia (EANS) and Finland (FTANS) launch the new SkyBridge Alliance to strengthen cooperation, harmonise operations, and drive a new era of efficiency and innovation in European air traffic management. The SkyBridge Alliance has selected an upgrade to Thales’s TopSky – ATC solution and joins nine other European ANSPs in the TopSky - ATC Partners group, which seeks to jointly develop an ATC system for safe and seamless operations that accelerates modernization and ensures interoperability across borders. Through this collaboration, SkyBridge will achieve substantial cost efficiencies and operational synergies, while aligning with the EU ATM Master Plan.

From left to right: Jan Klas, CEO Air Navigation Services of the Czech Republic (ANS CR); Youzec Kurp, Vice President, Airspace Mobility Solutions, Thales; Emeline Lopez, Head of Sales Europe, Airspace Mobility Solutions, Thales; Ivar Värk, CEO Estonian Air Navigation Services AS (EANS), and Raine Luojus, CEO Fintraffic Air Navigation Services Ltd (FTANS). © Pierre Olivier / CAPA Pictures.

In a landmark move for European aviation, ANS CR (Czech Republic), EANS (Estonia) and FTANS (Finland) have come together to launch the SkyBridge Alliance. This alliance reflects a shared ambition to harmonise systems and operations in line with the Single European Sky and the EU ATM Master Plan, advancing Europe’s vision of a safer, greener, and more connected sky.

“The creation of the SkyBridge Alliance marks an important milestone in cross-border ATM cooperation. By uniting technological and operational strategies, we are building a stronger foundation for a more sustainable, interoperable and efficient European airspace. This strategic investment not only supports our day-to-day operational reliability but will also ensure the long-term sustainability of air traffic management services.” said Jan Klas, CEO Air Navigation Services of the Czech Republic (ANS CR), Ivar Värk, CEO Estonian Air Navigation Services AS (EANS), and Raine Luojus, CEO Fintraffic Air Navigation Services Ltd (FTANS) for the SkyBridge Alliance.

To power Europe’s ATM modernisation, the SkyBridge Alliance has selected an upgrade of Thales’s cybersecured, AI-powered TopSky - ATC, the world’s most advanced air traffic control solution. This choice marks a significant milestone, bringing the three ANSPs into the TopSky – ATC Partners group, which unites 12 European Air Navigation Service Providers. As members of this collaborative community, the SkyBridge partners will take part in the upgraded version of TopSky – ATC product roadmap governance, ensuring alignment of system evolution with operational priorities and European regulatory developments. This approach will enable faster deployment, lower total lifecycle costs, and seamless cross-border interoperability.

The deployment of Thales’s TopSky – ATC will provide cutting-edge controller tools and a latest-generation, open, safe, and secure software architecture, allowing the alliance to introduce advanced functionalities, while ensuring continuity of operations and data integrity. This system is fully compliant with CP1 regulation1 and EASA’s DPO (Design or Production Organisation) framework, providing future-proof readiness for the evolving ATM environment.

The three ANSPs have also committed to a long-term evolutive and corrective maintenance agreement with Thales, ensuring continuous innovation, scalability, and resilience in readiness for the inevitable evolution of ATM. Additionally, FTANS and EANS will continue benefiting from the MonoFIR/Bi-FIR (Flight Information Region) functionality delivered to Finland and Estonia, a proven step toward seamless cross-border air traffic control.

“We are honored to support the SkyBridge Alliance in this bold initiative. With TopSky – ATC, we bring a proven, future-ready solution that empowers ANSPs to deliver safer and more efficient air traffic services, perfectly aligned with the goals of the Single European Sky.” Youzec Kurp, Vice President Airspace Mobility Solutions at Thales.

1 The Common Project 1 is a regulation developed under the Single European Sky ATM Research (SESAR) programme, focused on modernizing the ATM infrastructure and services across Europe, aiming for improved efficiency of air traffic flow, better environmental performance (lower emissions, more direct flight routes) and increased safety through new technologies and procedures.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About the SkyBridge Alliance

The SkyBridge Alliance is a newly formed partnership made up of Air Navigation Services of the Czech Republic (ANS CR), Estonian Air Navigation Services AS (EANS) and Fintraffic Air Navigation Services Ltd (FTANS). The alliance reflects a shared ambition to harmonise systems and operations in line with the Single European Sky and the EU ATM Master Plan, advancing Europe’s vision of a safer, greener, and more connected sky.

Press contacts

Thales, Media relations: pressroom@thalesgroup.com

ANS CR: Richard Klima, Head of Communication Department, klima@ans.cz, + 420 724 172 186

EANS: Annika Koppel, Communications Manager, annika.koppel@eans.ee, + 372 5335 9153

FTANS: Leena Huhtamaa, leena.huhtamaa@fintraffic.fr, + 358 40 756 3819


Radiant Logic

NYDFS Cybersecurity Regulation: Why Identity Is the New Compliance Battleground

Discover how a new AI-driven, data-centric approach turns identity visibility into real-time action, closing the gap between detection and remediation to continuously shrink your attack surface. The post NYDFS Cybersecurity Regulation: Why Identity Is the New Compliance Battleground appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.

Background on NYDFS Cybersecurity Regulation (23 NYCRR Part 500)

New York’s Department of Financial Services (NYDFS) has quietly reset the bar for financial sector cybersecurity. Its Cybersecurity Regulation, 23 NYCRR Part 500, was already influential when it first took effect in 2017. With the Second Amendment finalized in November 2023 and the deadline for final requirements on November 1, 2025, it is becoming one of the most demanding cyber regulations in the world, especially around identity.   

Core Requirements of NYDFS Part 500

At a high level, Part 500 requires covered entities to run a risk-based cybersecurity program, appoint a CISO, implement written policies and procedures, and conduct regular risk assessments. It mandates technical controls such as access control, encryption of nonpublic information, continuous monitoring or periodic penetration testing, and an incident response plan. Covered entities must notify NYDFS within 72 hours of certain cybersecurity incidents and file annual certifications of compliance – now with personal liability exposure for CEOs and CISOs under the Second Amendment.   

The 2025 Amendments: MFA and Asset Inventory

What has changed is the level of specificity and scrutiny. The final set of amended requirements that took effect on November 1, 2025, focuses on two pillars:  

- Universal multi-factor authentication (MFA)
- Formal information system asset inventory policies

NYDFS has made it clear that weaknesses in MFA and basic asset hygiene are some of the most common root causes behind real world breaches, so these are now top enforcement priorities.   

Universal MFA Requirements Under NYDFS

For identity and access, the signal is loud and clear. As of November 1, 2025, covered entities must use MFA for any individual accessing any information system of the covered entity, regardless of location, user type, or the sensitivity of the data. There are only narrow exemptions, and even those require compensating controls approved by the CISO and reviewed at least annually. NYDFS FAQ guidance explicitly clarifies that internal networks include cloud email and document platforms such as Office 365 and Google Workspace, not just on premises systems.

At the same time, Part 500 expects firms to maintain an accurate, up-to-date inventory of information systems and to manage third-party access with policies and contracts that enforce equivalent controls, including MFA. When you combine these expectations with the annual certification requirement and expanded penalties – NYDFS has now levied more than $100 million in fines for cybersecurity violations, including a recent $19 million enforcement against eight auto insurers – it is obvious that “best effort” identity governance is no longer enough. Guidance from NYDFS and multiple legal analyses state that missing the November 1, 2025 deadlines will put entities out of compliance and at risk of multi-million dollar fines, and they explicitly link these risks to the pattern of recent enforcement actions.

From Identity Chaos to Compliance Clarity

For most financial institutions, identity is where this all becomes challenging. You cannot prove universal MFA coverage, effective access control, or timely revocation of access if you do not have a complete, accurate picture of every human and non-human identity, their entitlements, and their relationship to business services. Mergers and acquisitions, hybrid cloud migrations, and third-party platforms create a tangle of overlapping directories, local accounts, and “shadow” identities that are invisible to traditional tools but fully within NYDFS’s scope.   

Meeting NYDFS expectations requires moving from point-in-time control checks to identity observability and continuous identity security posture management. In practice, that looks like three phases. 

First, organizations need a single, authoritative view of identities across Active Directory, cloud identity providers, HR systems, core banking and trading platforms, SaaS applications, and service accounts. This unified identity data fabric should normalize identities, correlate duplicates, and map non-human and machine identities back to accountable owners. Without this foundation, MFA rollout, privileged access management, and third-party access reviews remain spreadsheet exercises.

Once the data is unified, firms can continuously observe their posture against NYDFS requirements. That means answering questions such as: Which accounts are still not covered by MFA? Where do privileged accounts bypass centralized controls? Which third parties have persistent access to nonpublic information? Where are entitlements out of alignment with policy or business needs? This kind of observability allows CISOs and boards to see identity risk the same way they see market or credit risk, rather than relying on static, point-in-time audit snapshots.

Finally, the program must be able to act on those insights. That includes orchestrating remediation through IAM and IGA systems, ticketing workflows, and security tools to close orphaned accounts, enforce MFA everywhere it is required, tighten overly broad permissions, and standardize controls across subsidiaries and affiliates. It also means generating evidence for auditors and regulators automatically, instead of mobilizing ad hoc teams every time a certification or examination is due.
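To make the “observe” phase concrete, here is a minimal Python sketch of the kind of posture check described above. It assumes a hypothetical unified identity dataset; the record fields, thresholds, and finding labels are illustrative only and are not tied to any specific product or API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical unified identity record; field names are illustrative only.
@dataclass
class IdentityRecord:
    account_id: str
    source: str            # e.g. "AD", "Okta", "local"
    is_privileged: bool
    mfa_enrolled: bool
    is_third_party: bool
    last_access_review: date

def posture_findings(records, review_max_age_days=365):
    """Flag accounts that would surface as posture gaps under the questions above."""
    today = date.today()
    findings = []
    for r in records:
        if not r.mfa_enrolled:
            findings.append((r.account_id, "no MFA coverage"))
        if r.is_privileged and r.source == "local":
            findings.append((r.account_id, "privileged account outside centralized control"))
        if r.is_third_party and (today - r.last_access_review).days > review_max_age_days:
            findings.append((r.account_id, "third-party access review overdue"))
    return findings

if __name__ == "__main__":
    sample = [
        IdentityRecord("svc-backup", "local", True, False, False, date(2024, 3, 1)),
        IdentityRecord("jdoe", "Okta", False, True, False, date(2025, 6, 15)),
    ]
    for account, issue in posture_findings(sample):
        print(f"{account}: {issue}")
```

In a real program these records would be fed by the unified identity data fabric and the findings routed into remediation workflows rather than printed.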

NYDFS is also turning its attention to artificial intelligence and AI-enabled attacks, issuing guidance on AI-related cybersecurity risks and recommending controls that include stronger access controls, risk assessments, and data management practices. Once again, identity and unified identity data sit at the center, since AI systems both depend on sensitive datasets and introduce a new class of non-human identities and privileged service accounts.

Transforming Mandates into Measurable Security Gains 

For covered entities, the path forward is clear. Treat NYDFS not as a checklist, but as a catalyst to modernize identity: enforce least privilege, limit and control privileged accounts, and perform regular access reviews and timely offboarding. MFA must be implemented across all remote access, all privileged accounts, and effectively all system access where nonpublic information is involved. And you must maintain centralized, auditable identity data so you can prove who has access to what, with what protections, and why. By unifying identity data, observing identity risk in real time, and acting quickly on what you see, financial institutions can turn a challenging regulation into an opportunity to build a more resilient, measurable, and trustworthy security posture.

The post NYDFS Cybersecurity Regulation: Why Identity Is the New Compliance Battleground appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


auth0

Fact or Fiction: Eight Myths About Auth0 For B2B

Explore eight common myths about B2B multi-tenancy, SSO implementation, MFA, and Auth0's authorization model.

Wednesday, 07. January 2026

LISNR

Operationalizing the IAB’s In-Store Framework

LISNR Helps Retailers Execute the IAB’s New “Verified Impression” standard using a deterministic proof-of-presence Signal

Radius® and Quest help retailers put the IAB’s new In-Store Measurement Framework into practice using a deterministic proof-of-presence signal without cameras or new hardware.

In-store media has reached an inflection point.

Retailers and brands are investing heavily in physical-world media, but the confidence behind that spend has not kept pace. As budgets grow, so do questions: Did the ad actually reach a shopper? Was it seen in the right place, at the right time? And most important: Can the ad performance be tied to results? In an era of heightened scrutiny and margin pressure, “estimated lift” is no longer enough.

That pressure is exactly what the Interactive Advertising Bureau (IAB) addressed with its new Viable Framework for Maturing In-Store Media Measurement. The framework establishes a higher bar for accountability, replacing inferred “opportunity to see” models with a standard for verified impressions built on proof: proof that an ad played, that a shopper was physically present, and that those events were time-aligned.

LISNR enables retailers to meet that bar in practice.

Through its Radius® ultrasonic proximity SDK and Quest engagement platform, LISNR provides a deterministic, privacy-forward proof-of-presence signal that works within existing store environments. Without cameras or new hardware, retailers can verify in-store exposure and connect it to engagement and transaction data they already control, making in-store media both measurable and outcome-ready while aligning with the IAB framework’s emphasis on verified impressions and outcome readiness.

Source: IAB: A Viable Framework for Maturing In-Store Media Measurement (2026).

The IAB framework outlines a phased approach to increase transparency, comparability, and privacy-forward measurement, starting with a baseline definition for “verified impressions” anchored in the Three Ps (Play, Presence, and Pairing), alongside additional layers for Insights and Outcomes.

The guidance explicitly moves the industry away from “opportunity to see” estimates and toward a stricter standard of Verified Impressions.

With the IAB’s viable framework, the industry now has shared expectations for how in-store exposures should be verified and governed.

Radius and Quest were built for exactly this moment: proving proximity-based presence in real retail environments and turning that signal into measurable customer actions while using the infrastructure retailers already have.

Solving the IAB Baseline: How Radius Delivers “Verified Impressions”

The IAB Baseline requires proof that an ad played, proof that a shopper was physically present within the “zone of influence,” and proof that these two events were time-aligned (“paired”).

Radius establishes a direct communication channel from in-store speakers to consumer devices, where a retailer app can detect a broadcasted ultrasonic identifier, thus enabling proximity-based presence detection without requiring cameras. Radius also supports configurable transmission range (from inches to 30+ feet, depending on implementation), helping retailers define and publish measurable proximity thresholds that align to how the IAB expects “presence” assumptions to be documented.
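The time-alignment (“Pairing”) part of the baseline can be illustrated with a minimal Python sketch that joins ad-play events with presence detections on the same screen within a time tolerance. The event shapes, field names, and the ten-second tolerance are assumptions for illustration, not LISNR’s actual data model or API.

```python
from datetime import datetime, timedelta

# Hypothetical event shapes; in practice these would come from the player's
# proof-of-play log and the app's ultrasonic detection events.
play_events = [
    {"screen_id": "endcap-07", "creative_id": "ad-123",
     "played_at": datetime(2026, 1, 6, 14, 30, 5)},
]
presence_events = [
    {"screen_id": "endcap-07", "device_token": "anon-42",
     "detected_at": datetime(2026, 1, 6, 14, 30, 9)},
]

def pair_verified_impressions(plays, presences, tolerance=timedelta(seconds=10)):
    """Pair ad-play events with presence detections in the same zone that fall
    within the time-alignment tolerance (Play + Presence + Pairing)."""
    verified = []
    for play in plays:
        for presence in presences:
            same_zone = play["screen_id"] == presence["screen_id"]
            time_aligned = abs(presence["detected_at"] - play["played_at"]) <= tolerance
            if same_zone and time_aligned:
                verified.append({
                    "creative_id": play["creative_id"],
                    "device_token": presence["device_token"],
                    "screen_id": play["screen_id"],
                })
    return verified

print(pair_verified_impressions(play_events, presence_events))
```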

Legacy systems fail at “Presence” because they rely on signals like GPS and BLE that suffer from drift, spoofing, and bleed-through. This is what we identified as the “Spray-and-Pray” problem in our analysis “Trust Me, Bro” is Not Proof of Presence. If a signal cannot distinguish between a shopper viewing a screen and a shopper near a screen, it cannot meet the IAB’s verification standard.

Translating IAB standards into measurable signals: Radius was architected to eliminate ambiguity

Radius eliminates this ambiguity by establishing a direct, deterministic communication channel between in-store speakers and consumer devices.

The IAB guidelines explicitly state that verified presence requires a shopper to be “detected within a measurable radius.” As detailed in our technical breakdown Signal to Sale, Radius utilizes our Zone66 tone profile to define these precise zones. We do not infer traffic; Radius validates that a device, and therefore its human owner, is within the specific viewing cone of the asset.

Unlike “spatial AI” solutions that require expensive camera installations, Radius uses standard audio waves. This allows retailers to define and publish measurable proximity thresholds, a key IAB transparency requirement, without deploying new hardware.

Most attribution systems fail because they lack a single, continuous signal between the digital and physical world. Radius provides this missing substrate, turning presence from a “probabilistic guess” into a verifiable event that stabilizes the entire marketing stack.

From Exposure to Outcomes: How Quest Closes the Loop

The apex of the IAB framework is the “Tip of the Spear,” the ability to attribute outcomes by closing the loop between exposure and transaction. To achieve this, the IAB explicitly prefers “deterministic linkage” (knowing a specific verified user bought a product) over probabilistic modeling (guessing), and requires that any probabilistic approaches be disclosed.

This demand for certainty reveals the industry’s core engineering challenge: How do you connect a physical ad impression to a digital transaction log without breaking the data chain?

The answer is Proof of Presence.

While Radius provides the raw verification signal (the physics of presence), your loyalty app acts as the logic layer (the software of presence) that ingests those signals and maps them to trackable engagement and point-of-sale data.

As described in “Your Omnichannel Promise Has a Presence Problem,” true attribution requires connecting the Marketing Layer (the ad) to the Transaction Layer (the sale).

This is where LISNR’s own Quest bridges this gap, creating a competitive moat for retailers by turning ephemeral “store visits” into hard, transactional data. Quest is designed to ingest the proximity events generated by Radius and map them to trackable engagement and point-of-sale data.

From Insights to Action

The IAB’s “Insights” layer asks for reach and frequency context, while the “Outcomes” layer asks for transaction linkage. Quest solves both simultaneously by creating a single, unbroken data path: Exposure → Presence → Purchase. Once Radius verifies the impression via ultrasonic handshake, Quest instantly maps that event to a persistent, privacy-safe identifier, ensuring the attribution signal is never lost.

POS Integration

For a retailer to move beyond “estimated lift,” they need to see the receipt. Quest supports commerce-event ingestion and direct POS integrations (including Clover). This allows the software to match a verified ultrasonic impression to a specific basket ID deterministically—fulfilling the IAB’s highest standard for attribution while keeping data control strictly within the retailer’s environment.

Talk with Our Team about Retail Media Solutions
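As a rough illustration of that deterministic linkage, the sketch below matches a verified impression to a POS basket that shares the same privacy-safe identifier within an attribution window. The record shapes, the loyalty_token field, and the two-hour window are hypothetical and are not the Quest API.

```python
from datetime import datetime, timedelta

# Hypothetical records: a verified impression (from the pairing step) and
# POS baskets keyed by the same privacy-safe loyalty identifier.
verified_impressions = [
    {"loyalty_token": "anon-42", "creative_id": "ad-123",
     "seen_at": datetime(2026, 1, 6, 14, 30, 9)},
]
pos_baskets = [
    {"loyalty_token": "anon-42", "basket_id": "B-9981",
     "closed_at": datetime(2026, 1, 6, 14, 52, 0), "total": 37.18},
]

def attribute_outcomes(impressions, baskets, window=timedelta(hours=2)):
    """Deterministically link each verified impression to a basket closed by
    the same identifier within the attribution window."""
    attributed = []
    for imp in impressions:
        for basket in baskets:
            same_shopper = imp["loyalty_token"] == basket["loyalty_token"]
            in_window = timedelta(0) <= basket["closed_at"] - imp["seen_at"] <= window
            if same_shopper and in_window:
                attributed.append((imp["creative_id"], basket["basket_id"], basket["total"]))
    return attributed

print(attribute_outcomes(verified_impressions, pos_baskets))
```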

By turning proximity events into trackable engagement, Quest gives retailers a practical path to store-level outcomes without rewriting their entire stack. For a technical walkthrough of how this software layer supports attribution across touchpoints (at-home audio, in-store zones, checkout), read Signal to Sale: How Ultrasonic Tech is Solving Retail’s Attribution Problem.

The Privacy & Interoperability Advantage

The IAB framework explicitly elevates “Privacy by Design,” calling for measurement practices that protect consumer identity via anonymization and data minimization.

This is the decisive advantage of an ultrasonic approach over computer vision. Camera-based solutions attempt to achieve privacy by redaction (recording faces and then blurring them). LISNR achieves compliance while never recording any data in the first place.

Radius is designed to exchange data using speakers and microphones with no camera requirement, supporting privacy-forward in-store activation and measurement designs, as LISNR does not record or store any audio.

Because Radius exchanges data via an opt-in channel, it is inherently consumer-friendly, devoid of biometric risk, and fully interoperable with the loyalty data retailers already possess. The IAB measurement framework also highlights interoperability as a core goal for in-store media measurement, and a standardized, verifiable proximity signal can help retailers align in-store reporting with broader commerce measurement expectations.

Alignment is a marketing term. Execution is an engineering reality. The IAB framework asks for precise fields of view, tight time-pairing tolerances, and deterministic attribution. LISNR Radius and Quest were purpose-built to meet those demands. Instead of just inferring an opportunity to see, we verify that the engagement occurred.

For retailers ready to move their networks from “estimated” to “verified,” the infrastructure is here.

The post Operationalizing the IAB’s In-Store Framework appeared first on LISNR.


Elliptic

Elliptic's 2026 regulatory and policy outlook: 5 crypto trends to expect in 2026

As always at Elliptic, we like to kick off the new year by taking a look at what lies ahead in the world of cryptoasset regulation and policy. After an eventful 2025 full of highly impactful regulatory and policy developments, 2026 is primed to be another groundbreaking year for the industry.



Indicio

Why Verifiable Credentials will power real-world AI in 2026

The post Why Verifiable Credentials will power real-world AI in 2026 appeared first on Indicio.
Verifiable Credentials provide a portable trust layer for AI systems, allowing them to consume structured, authenticated data and act in a dynamic way with provable authority — all without the need for custom integrations or centralized trust. This is what Indicio provides with Indicio ProvenAI.

By Trevor Butterworth and Will Groah

If AI systems are going to deliver business success in 2026, they must be able to safely access high-value data and act across organizational boundaries. To do this, systems and agents need to be:

Identifiable

Able to access structured and accurate data in a permissioned and auditable way

Able to demonstrate that they have the delegated authority to share this data with other identifiable AI agents and systems.

Three years ago, Indicio identified the need to assign verifiable identity to AI agents using the same decentralized identity technology we created for people, organizations, and devices. This led us to develop Indicio ProvenAI, an infrastructural solution that could be layered into existing systems.

ProvenAI provides seamless authentication and permissioned access to the structured and authenticated data contained in a Verifiable Credential, enabling AI agents and agentic AI to respond dynamically to user requests.

Verifiable Credentials are a container-like solution for sharing structured, trusted data. A Verifiable Credential issuer digitally signs a set of data or claims and sends them to a person, organization, or device in the form of a digital credential to hold in a digital wallet. A digital signature is an electronic stamp that is cryptographically provable. This means that anyone can mathematically confirm who created the stamp and that the stamped data hasn’t been changed, without calling back to the issuer or checking another system. There is no need for direct, custom integrations; instead, you have information that can be instantly verified and trusted on the spot.
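The signing-and-verification principle can be sketched in a few lines of Python using the widely available cryptography package. This is a deliberately simplified stand-in for a real W3C Verifiable Credential, which also carries proof metadata, contexts, and identifiers; the claim names and DIDs below are made up for illustration.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer signs a set of claims (a much-simplified stand-in for a real
# W3C Verifiable Credential).
issuer_key = Ed25519PrivateKey.generate()
claims = {"credentialSubject": {"id": "did:example:holder123", "employeeOf": "Example Corp"}}
payload = json.dumps(claims, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Any verifier holding the issuer's public key can check the credential
# locally -- no callback to the issuer, no shared integration.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, payload)
    print("Credential verified: claims are intact and issuer-signed.")
except InvalidSignature:
    print("Verification failed: data altered or wrong issuer.")
```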

This combination of portability and  provability means that trusted data can be easily and inexpensively used across disparate systems. As long as the credentials follow open standards and are interoperable, they can be verified and understood by any system, not just the one that created them.

The interoperability part matters. Some well-known identity verification solutions may issue ‘credentials’ that only work inside their own ecosystem, which limits how useful the data really is. When credentials are built to work across platforms, trusted data can move freely and be reused wherever it’s needed.

AI companies have already been fined under data protection regulations

Businesses and organizations are going to have to contend with the privacy and security implications of AI agents and systems accessing and processing vast amounts of personal and other high-value, high-risk data.

Regulators are often slow to act on new technology, but in this case, GDPR provides a clear template for how personal data needs to be treated. The EU has already issued multi-million dollar fines to AI companies for misusing personal data. The recently proposed changes to GDPR around AI won’t alter fundamental data privacy and protection requirements.

The beauty of Verifiable Credentials, especially the way Indicio implements them, is how easily they handle compliance and trust. They do this in ways that are clear, auditable, and easy to explain to regulators and users alike.

First, credentials only work when the person or organization holding them explicitly gives permission to access the data. That simple mechanism is integral to meeting a key requirement of GDPR and other data protection regulations: consent.

This means they can also provide a way for the credential holder to consent to their data being shared by an AI agent with other AI agents and systems. This is often called delegated authority: it  means clear, provable consent that can travel with the data. 

Verifiable Credentials also provide ways to share only what’s necessary. You can reveal specific pieces of information, or even prove something is true without revealing the underlying data. These techniques, known as selective disclosure and zero-knowledge proofs, help meet the data and purpose minimization requirements of data protection regulations and give users real data privacy assurance.
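A rough sketch of how salted-hash selective disclosure can work is shown below, loosely inspired by the SD-JWT style of disclosure. It omits the issuer’s signature over the digests and is not a conformant implementation; the claim names and values are illustrative only.

```python
import hashlib
import json
import secrets

# Simplified selective-disclosure sketch (salted hashes per claim; not a
# conformant SD-JWT implementation).
claims = {"name": "Alex Doe", "birthdate": "1998-04-02", "over_21": True, "state": "UT"}

# Issuance: each claim is salted and hashed; only the digests would go into
# the signed credential, so a verifier can't learn undisclosed values.
disclosures = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
signed_digests = {
    k: hashlib.sha256(json.dumps([salt, k, v]).encode()).hexdigest()
    for k, (salt, v) in disclosures.items()
}

# Presentation: the holder reveals only the claim the verifier needs.
revealed = {"over_21": disclosures["over_21"]}

# Verification: recompute the digest for each revealed claim and compare.
for k, (salt, v) in revealed.items():
    recomputed = hashlib.sha256(json.dumps([salt, k, v]).encode()).hexdigest()
    assert recomputed == signed_digests[k], "disclosure does not match signed digest"
print("Verifier learns only:", {k: v for k, (_, v) in revealed.items()})
```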

Finally, they provide a seamless way to handle agent-to-user and agent-to-agent authentication, so both users and AI agents can verify who they are interacting with. That helps prevent fake AI agents and fake users from entering the system, a growing risk as AI use expands.

The move from deterministic to dynamic interaction

As a recent article by Vatsal Gupta for ISACA points out, the infrastructural ground has shifted with AI:

“The identity and access management (IAM) infrastructures that organizations rely upon today were built for human beings and fixed service accounts. They were not designed to manage autonomous AI systems that can reason about goals, make independent decisions, and dynamically adapt their actions. However, that is precisely the management agentic artificial intelligence (AI) demands.”

And a recent paper (Singh et al.) points to the scale of transformation and what that entails:

“Autonomous AI agents are rapidly becoming foundational across domains from cloud-native assistants and robotics to decentralized systems and edge-based IoT controllers. These agents act independently, make decisions, and collaborate at scale. As agent populations grow into the billions across heterogeneous platforms and administrative boundaries, the ability to identify, discover, and trust agents in real time has emerged as a critical infrastructure challenge.”

Companies and organizations can respond to these opportunities and challenges in two ways: They can attempt to mitigate these problems after they have built these systems or build in accountability and operational interoperability from the start.  

Solving these problems now mitigates the risk of a retrofit being costly, time-consuming, and complex. But it’s not just about risk. One of decentralized identity’s most powerful features is decentralized governance, a simple way to establish trust in a network and prescribe workflows.

Indicio is a pioneer in decentralized governance, helping to develop the Decentralized Identity Foundation’s Credential Trust Establishment specification. Decentralized governance means that the natural authority for a given credential use case publishes a machine-readable file for every participant in the network. The file contains a “trust list” of approved credential issuers and, if needed, verifying parties. It also establishes what information needs to be presented by a credential holder to a verifier. 

In this way, users and AI agents can instantly recognize and verify each other as legitimate credential holders, because the credentials come from issuers listed in their governance files.
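To show the shape of that check, here is a minimal Python sketch of a verifier consulting a governance file before accepting a presentation. The file schema, DIDs, and claim names are hypothetical simplifications; the Credential Trust Establishment specification defines a richer structure than this.

```python
import json

# Hypothetical machine-readable governance file published by the ecosystem's
# governance authority (simplified for illustration).
governance_file = json.loads("""
{
  "name": "Example Travel Credential Ecosystem",
  "trusted_issuers": ["did:example:airline-a", "did:example:border-agency"],
  "required_claims": ["passengerName", "flightNumber"]
}
""")

def accept_presentation(issuer_did: str, presented_claims: dict) -> bool:
    """Accept a credential only if its issuer is on the governance trust list
    and it presents every claim the governance file requires."""
    if issuer_did not in governance_file["trusted_issuers"]:
        return False
    return all(claim in presented_claims for claim in governance_file["required_claims"])

print(accept_presentation("did:example:airline-a",
                          {"passengerName": "A. Doe", "flightNumber": "EX123"}))  # True
print(accept_presentation("did:example:unknown", {"passengerName": "A. Doe"}))    # False
```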

This is an enormously powerful feature. Decentralized governance enables interoperable credential systems to work with each other and scale into larger trust networks. It removes the need for, and friction of, an independent trust registry, and in doing so, it provides a powerful foundation for scripting trust in fully automated systems and trusted interaction with other fully automated systems. 

Indicio ProvenAI’s governance solution makes it easy to build and scale interoperable trust networks for AI agents and systems so that they can reliably identify each other, share permissions, and operate across organizations.

Indicio + NVIDIA Inception

Indicio is a market-leader in Verifiable Credential technology. In 2025, we joined NVIDIA’s Inception program to develop Indicio ProvenAI, a practical, easy-to-deploy trust and compliance layer for AI systems. 

We are providing the foundation that lets autonomous AI networks and systems operate efficiently and effectively, with clear identity, permissions, and governance, while working seamlessly alongside the Verifiable Credential systems already used by people and organizations. Interoperability is where we’ve led the market, and it’s what makes us the bridge to a seamless, secure human-AI world.

If you’re building or deploying AI today, this is the moment to put decentralized identity and verifiable data in place before scale makes it harder and more expensive. A ProvenAI license from Indicio will give you a fast, standards-based path to move forward with confidence. 

Join us.

 

The post Why Verifiable Credentials will power real-world AI in 2026 appeared first on Indicio.


Trinsic Podcast: Future of ID

Ross Freiman-Mendel – Networked Identity, Reusable Personas, and the Future of KYC


In this episode of The Future of Identity Podcast, I’m joined by Ross Freiman-Mendel, Head of Product Growth at Persona, to explore the shift from one-off, siloed KYC toward network-based and reusable identity products. Ross walks through Persona’s consumer and enterprise identity networks, including Reusable Personas, Persona Connect, and emerging work on Know Your Agent (KYA), and explains why redundant verification has become one of the biggest unsolved problems in identity.

Our conversation goes deep on the practical realities of building reusable identity at scale. Ross shares concrete adoption metrics, including 2× higher conversion rates, significantly faster completion times, and why over 90% of Persona customers are now activated on the network. We also unpack how network-based identity improves fraud detection while simultaneously reducing user friction - one of the rare win-win scenarios in identity.

In this episode we explore:

Why redundant KYC is breaking onboarding experiences and how transferable identity solves it.

The difference between consumer-owned identity wallets (Reusable Personas) and enterprise-to-enterprise sharing (Persona Connect).

How network-based identity acts as a trust signal, improving fraud outcomes without global blocklists.

Where mobile driver’s licenses and digital wallets fit into Persona’s platform-agnostic acceptance strategy.

Why AI agents are accelerating the need for identity portability and what Know Your Agent could look like in practice.

This episode is essential listening for anyone building or buying identity verification technology. Ross offers a grounded, metrics-driven perspective on how reusable identity is finally moving from theory to production and why networks, not standalone checks, will define the next era of digital trust.

Enjoy the episode, and don’t forget to share it with others who are passionate about the future of identity!

Learn more about Persona.

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.

Listen to the full episode on Apple Podcasts or Spotify, or find all ways to listen at trinsic.id/podcast.


ComplyCube

Simplify AML With No-Code Compliance Workflow Software

Low-code or no-code compliance workflow software utilizes visual drag-and-drop interfaces, eliminating the need for custom coding. Compliance teams can adopt stronger governance, agility, and scale rapidly with no-code software. The post Simplify AML With No-Code Compliance Workflow Software first appeared on ComplyCube.



PingTalk

What is Zero-Knowledge Biometric Authentication? A Simple Guide for Security Teams

Explore how Zero-Knowledge Biometrics uses privacy-preserving cryptography to deliver secure, frictionless authentication. A guide for security teams.

Biometric authentication is now part of everyday digital life. But more often than not, biometric systems force security, fraud, product, and digital transformation teams to make trade-offs between security, privacy, and usability - with cost and integration also factored in.

 

Zero-Knowledge Biometrics™ (ZKB) is a novel approach to authentication developed by Keyless (now part of Ping Identity) that aims to put an end to having to choose. By combining advanced cryptography with biometrics, ZKB allows organizations to deliver strong, user-friendly authentication without compromising user privacy.

 

In this guide, we’ll explain what Zero-Knowledge Biometrics is, how it works, and why organizations are using it to offer a more secure, private, and user-friendly alternative to conventional authentication models.


Spruce Systems

Digital Identity In America: Series Overview

An executive guide to SpruceID’s multi-part series on the future of digital identity in the U.S.

Digital identity is no longer an abstract debate. It is unfolding right now in America and abroad, across DMVs, airports, banks, legislatures, and smartphones. The question is no longer if digital identity will become a cornerstone of our lives, but how it will be built, who it will serve, and whether it will earn trust.

We believe digital identity must be usable, secure, private, and decentralized. To make that case, we’ve published an eight-part series exploring the foundations, policy momentum, technical underpinnings, privacy implications, and practical pathways for digital identity in the United States.

This overview serves as an executive guide to the series, highlighting key themes for policymakers, financial institutions, technologists, and civil society.

Why Digital Identity Matters Now

Adoption is accelerating. Seventeen states now issue mobile driver’s licenses, TSA accepts them at over 250 airports, and private-sector services are expanding into identity verification.

Policy is moving. From Utah’s SB 260 to the EU’s eIDAS 2.0 to the U.S. Supreme Court’s age verification ruling, governments are setting new expectations for digital identity.

Technology is mature. Standards like W3C Verifiable Credentials, ISO 18013 series, and NIST SP 800-63-4 are converging into a workable ecosystem.

AI is raising the stakes. Generative AI threatens to undermine legacy verification methods, making cryptographically anchored identity the only defensible model.

Trust is fragile. Public confidence in both government and technology is low. Unless identity is designed for privacy and user control, adoption will stall.

Series at a Glance

1. Foundations of Digital Identity

We start with first principles. What is digital identity? Why has the internet historically relied on “borrowed” logins from big tech platforms?

Drawing from economic thinkers like Adam Smith, Milton Friedman, and Friedrich Hayek, we explore how identity functions as critical infrastructure and why decentralization offers a path to resilience, competition, and individual empowerment.

Read now

2. Policy Momentum: Why Governments Care Now

From California’s DMV Wallet to Utah’s SB 260, U.S. states are leading. Federal agencies like DHS and NIST are piloting. The EU’s eIDAS 2.0 sets a global benchmark.

Policy is treating identity as infrastructure, embedding privacy and interoperability as public goods.

Read now

3. The Technology Powering Digital Identity

In this post, we explore the technical backbone: verifiable credentials, decentralized identifiers, selective disclosure, and zero-knowledge proofs.

Real-world examples show how cryptography solves problems that passwords and scanned documents cannot.

Read now

4. Privacy and User Control

Privacy is not a feature; it is the heart of digital identity. Groups like the ACLU and EFF warn that without safeguards, digital IDs could become tools of surveillance and exclusion.

This post humanizes the stakes, showing how people overshare today and how the combination of strong policy and technology together can protect autonomy.

Read now

5. Practical Digital Identity in America

A wide-angle look at adoption, policy, and technology in the U.S. today. We explore the rise of mDLs, the impact of AI fraud, financial sector reforms, and the shared goals of usable, secure, private, and decentralized systems.

Read now

6. Enabling Issuers of U.S. Digital Identity

Issuers are the foundation. DMVs lead, but vital records offices, veterans’ agencies, municipalities, and libraries also matter.

We outline what issuers need: new NIST guidance, wallet certification, independent audits, and governance to ensure cross-state trust.

Read now

7. Verifiers: Building Trust at the Point of Use

Banks, agencies, and businesses are the gatekeepers. They need assurance of authenticity, revocation checks, and regulatory clarity.

We recommend plug-and-play standards, revocation infrastructure, and explicit regulatory endorsement.

Read now

8. Digital Identity: End User Experience

The most important actor is the individual. Holders need identity that is convenient, private, and empowering. Wallets must be intuitive, inclusive, and transparent. Without holders’ trust and adoption, the entire ecosystem fails.

Read now

Unifying Themes

Across all eight posts, a set of clear themes emerges:

Identity is infrastructure. Like roads or power grids, digital identity underpins everything else in society and the economy.

Trust is fragile. Without privacy, voluntariness, and transparency, adoption will fail.

AI raises urgency. Legacy verification methods are collapsing. Cryptographic credentials are the only sustainable path.

Compliance drives adoption. Regulators must endorse digital identity as valid for KYC, AML, sanctions, and age verification.

People must remain at the center. If digital identity doesn’t work for holders, it doesn’t work at all.

SpruceID’s Perspective

At SpruceID, we work at the intersection of technology, policy, and human experience. We help issuers launch wallets, verifiers adopt credentials, and holders take control of their data. We contribute to open standards and advocate for frameworks that are interoperable, privacy-preserving, and aligned with democratic values.

Our view is simple: digital identity is inevitable, but how it’s built is a choice. If we focus on usability, security, privacy, and decentralization, digital identity can strengthen democracy, modernize financial systems, and empower individuals. If we don’t, it risks becoming another tool of surveillance and exclusion.

The Call to Action

Digital identity is no longer a pilot. It is becoming public infrastructure. The U.S. now faces a choice:

Enable issuers with clear requirements and certifications.

Empower verifiers with standards and regulatory clarity.

Center holders with wallets that deliver usability and privacy.

These are shared goals. They require coordination across states, federal agencies, private industry, and civil society. The time to align is now.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


auth0

Model Context Protocol Nov 2025 Specification Update: CIMD, XAA, and Security

The November 2025 Model Context Protocol (MCP) update introduces Client ID Metadata Documents (CIMD) and Cross App Access (XAA). Learn how these changes improve AI agent security.

Tuesday, 06. January 2026

Spruce Systems

Digital Identity: End User Experience

Part 8 of SpruceID’s Digital Identity in America Series

This article is part of SpruceID’s series on the future of digital identity in America. Start with the first installment here.

When people talk about digital identity, they often focus on the issuers - the DMVs, vital records offices, and municipal agencies that create credentials - or the verifiers, like banks and government agencies, that check them. But in between sits the most important actor of all: the holder.

Holders are the individuals who carry credentials in their wallets and present them when needed. Without them, digital identity is just infrastructure with no purpose. They are not passive participants. They are the ones whose trust, adoption, and daily choices will determine whether digital identity succeeds or fails in America.

Designing for holders means confronting a hard truth: people are skeptical. They worry about surveillance, data breaches, and usability nightmares. At the same time, they expect consumer-grade experiences. If digital identity wallets aren’t as seamless as Apple Pay or Google Maps, adoption will stall. For all the cryptography and compliance frameworks under the hood, the success of digital identity ultimately rests on the experience of the holder.

Why Holders Matter

The entire promise of decentralized identity rests on one principle: people, not platforms, should control their own identity data. This vision collapses if people don’t actually feel in control.

Every stakeholder depends on holders. Issuers need them to accept and store credentials. Verifiers need them to present proofs on demand. Policymakers need them to trust that rights and protections are being respected. If holders reject the system, the rest of the ecosystem crumbles.

This is why the “middle actor” is in fact the central one. Identity systems succeed or fail based on whether they deliver value to holders in their daily lives.

What People Want from Digital Identity

Research and experience show that people want three things from identity systems: convenience, security, and privacy. These are not luxuries; they are the baseline.

Convenience: People expect identity to be easy. A digital credential should be as simple to use as tapping your phone to pay. If proving your age or your residency takes more than a few clicks, users will default to old methods.

Security: People are tired of data breaches. They know their Social Security numbers, addresses, and account information are floating around the dark web. They want identity systems that are harder to hack, and they want to feel safe using them.

Privacy: Oversharing is the core problem of today’s identity model. A bartender doesn’t need your address, yet your driver’s license shows it. An online retailer doesn’t need your birthdate, yet it collects it anyway. Holders want identity systems that minimize exposure—sharing just what’s necessary, nothing more.

If digital identity wallets deliver on these three expectations, holders will adopt them. If they don’t, the entire system risks rejection.

The Problem of Identity Fatigue

People are already suffering from “identity fatigue.” The average American manages over 100 online accounts, each with its own login, password, and recovery process. Two-factor authentication, while more secure, adds more steps. Meanwhile, fraud continues to rise despite all the friction.

This fatigue leads to shortcuts - password reuse, insecure recovery methods, reliance on federated logins like “Sign in with Google.” These shortcuts may be convenient, but they erode privacy and security.

A practical digital identity wallet can relieve this fatigue. Instead of dozens of accounts and logins, people would carry a small set of credentials that prove who they are across contexts. But if wallets themselves add friction or confusion, they risk becoming just another layer of fatigue.

A Day in the Life of a Holder

To see what digital identity means in practice, let’s follow a person through a day.

In the morning, they log into their health portal. Instead of juggling usernames and passwords, they present a credential from their wallet confirming their patient ID. The hospital accepts it instantly.

At lunch, they stop at a liquor store. Instead of handing over a plastic license that reveals their address, they share a proof that says only: “Over 21.” The cashier sees a green checkmark, nothing more.

That afternoon, they apply for a new bank account. Instead of scanning utility bills and uploading selfies, they share a DMV-issued credential. The bank checks authenticity against the DMV’s key, verifies revocation status, and completes onboarding in minutes.

Later, they log into a government portal to renew their passport. The system accepts their verifiable credential from the DMV, eliminating the need for them to re-enter biographical data.

Finally, they attend an evening concert. Their ticket is tied to a credential in their wallet, ensuring it can’t be counterfeited or scalped.

In each case, the holder presents just what’s needed, nothing more. The experience is faster, safer, and less invasive than the alternatives. That’s the promise of digital identity - when designed for the holder.

Privacy as a User Expectation

Privacy is often discussed as a policy principle, but for holders it is an expectation. They may not use the phrase “zero-knowledge proof,” but they know they don’t want to overshare.

Selective disclosure is the feature that makes this possible. Holders should be able to prove they are over 21 without revealing their exact birthdate, or prove they live in a certain state without exposing their full address. Zero-knowledge proofs take it further, allowing holders to prove they are not on a sanctions list, or that they meet an eligibility requirement, without exposing anything else.

For holders, the language is simple: “I show only what I need to.” If wallets don’t deliver this privacy by default, people will assume the system is just another surveillance tool.

Trust and Transparency

Holders will not adopt digital identity unless they trust it. Trust comes from transparency. People want to know:

Who issued this credential?

Who can see it when I use it?

What data am I sharing?

Can I revoke it if I need to?

Can I choose not to use it at all?

Utah’s SB 260 is a good example of policy supporting holder trust. It enshrines voluntariness, bans tracking, and ensures physical IDs remain valid. The EU’s eIDAS 2.0 regulation does the same, mandating free wallets for citizens, selective disclosure, and a ban on central databases.

Technology must align with these protections. Wallets should show holders exactly what they’re sharing and with whom. They should allow easy revocation. They should make it clear that people, not platforms, are in control.

The Risk of Exclusion

One of the biggest risks in digital identity is exclusion. Holders who lack access to smartphones, reliable internet, or government-issued IDs risk being left behind. Millions of Americans fall into these categories.

For digital identity to succeed, it must be inclusive. That means supporting low-tech alternatives, ensuring physical IDs remain valid, and expanding the set of issuers beyond DMVs to include vital records, veterans’ agencies, and municipal IDs. It also means ensuring wallets are accessible for people with disabilities and designed for multilingual populations.

If holders don’t see themselves in the system, they won’t use it. Worse, they’ll be excluded from services that go digital-only. Designing for inclusivity is not optional; it is essential.

What Holders Need from Wallets

From the holder’s perspective, the wallet is the face of digital identity. It must embody the values of usability, security, privacy, and decentralization.

Usability: Wallets should feel like familiar apps: clean, intuitive, and frictionless. Issuing and presenting credentials should be no harder than downloading a boarding pass.

Security: Wallets must use strong cryptography, but they should hide that complexity from users. People shouldn’t need to understand keys or signatures. They should just trust that the wallet is safe.

Privacy: Wallets must default to minimal disclosure and make it obvious what data is being shared. Transparency is critical.

Decentralization: Wallets must avoid lock-in. Holders should be able to move credentials between wallets, just as they can move SIM cards between phones. If wallets become walled gardens, trust collapses.

Wallet certification programs, like those proposed through FIDO and Kantara, can give holders assurance that their wallet meets these standards. But the ultimate test will be the experience itself.

The Opportunity for Holders

For holders, digital identity offers a new kind of empowerment. No longer do they have to overshare, entrust sensitive documents to strangers, or juggle dozens of logins. Instead, they can carry their identity with them, on their terms, and use it across contexts.

This empowerment is not abstract. It means a mother can update her last name after marriage without mailing in stacks of documents. A veteran can prove their service instantly for benefits. A refugee can present credentials safely across borders. A teenager can prove their age online without exposing personal information.

Digital identity, when designed for holders, is not just more efficient. It is more human. Designing for holders means making digital identity usable, secure, private, and inclusive. It means respecting choice and protecting against surveillance. It means recognizing that identity is not just a technical artifact, but a deeply personal part of who people are.

The lesson is clear: if digital identity doesn’t work for holders, it doesn’t work at all.

This article is part of SpruceID’s series on the future of digital identity in America. Read more in the series:

SpruceID Digital Identity in America Series

Foundations of Decentralized Identity

Digital Identity Policy Momentum

The Technology of Digital Identity

Privacy and User Control

Practical Digital Identity in America

Enabling U.S. Identity Issuers

Verifiers at the Point of Use

Holders and the User Experience

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


From Paper to Structured Data: The Missing Link in Government Digital Services

Governments already have the information they need; it’s just locked inside documents. This post explains how modern document capture, OCR, and validation turn paperwork into structured data that systems can actually use, enabling faster decisions and fewer errors.

Governments already have the information they need. Proof of identity, eligibility, licensure, income, residence, and compliance all exist today. The problem is not a lack of data. It is that this information is locked inside documents that systems cannot easily read, trust, or reuse.

Paper forms, scanned PDFs, and uploaded images remain the primary way information enters many government systems. Even when a service is labeled digital, the intake process often relies on residents submitting documents that must be reviewed, interpreted, and re-entered by humans. This creates friction at the very first step of a digital service and limits how effective any downstream system can be.

Modern document capture changes this equation. By combining secure upload, image recognition, optical character recognition, and automated validation, governments can turn documents into structured data at the moment they are submitted. This shift is the missing link between digital front ends and systems that actually work.

The real bottleneck in digital government

Much of government modernization focuses on portals, dashboards, and workflow tools. These investments matter, but they often overlook the weakest link in the chain. Intake.

When information enters a system as an unstructured document, everything that follows becomes harder. Staff must manually review submissions. Data must be re-keyed into legacy systems. Errors are introduced through interpretation and transcription. Different programs collect the same information in slightly different ways, making reuse nearly impossible.

This is why many digital services feel slow and fragmented even after modernization efforts. The interface may be new, but the data foundation is not.

The result is a persistent gap between what residents submit and what systems can actually use.

Documents are not data

A scanned document is visually digital, but functionally opaque. A PDF or image file does not tell a system which fields matter, how values should be validated, or whether information is complete and consistent.

Consider a common example like proof of address. A resident uploads a utility bill. A human reviewer looks for a name, an address, and a date. They decide whether it meets policy requirements. That judgment is rarely captured in a structured way. The system only knows that a file was uploaded and approved.

Multiply this across millions of submissions and dozens of programs and the limitations become clear. Systems cannot easily share information. Analytics are unreliable. Automation is constrained because the data never becomes machine readable in a meaningful way.

Treating documents as data sources instead of static files is the necessary shift.

How modern document capture works

Modern document capture starts by assuming that documents contain valuable data, not just evidence.

When a resident uploads a document or takes a photo, image recognition and OCR extract text and key attributes. Layout analysis identifies fields like names, dates, identifiers, and issuing authorities. Validation rules check for completeness, consistency, and format in real time.

For example, a license document can be checked to ensure the expiration date is valid, the issuing authority matches expectations, and required fields are present. A benefits document can be validated against program rules before it ever reaches a caseworker.
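As a concrete illustration of intake-time validation, the sketch below applies a few such rules to hypothetical OCR output for a license document. The field names, expected authorities, and date format are assumptions for the example, not a reference to any particular capture product.

```python
from datetime import date, datetime

# Hypothetical output of the OCR / layout-analysis step for a license document.
extracted = {
    "licensee_name": "Jordan Smith",
    "license_number": "PL-20394",
    "issuing_authority": "State Board of Plumbing",
    "expiration_date": "2027-03-31",
}

REQUIRED_FIELDS = ["licensee_name", "license_number", "issuing_authority", "expiration_date"]
EXPECTED_AUTHORITIES = {"State Board of Plumbing", "State Board of Electricians"}

def validate_license(fields: dict) -> list[str]:
    """Run intake-time validation rules; return a list of issues (empty = pass)."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if not fields.get(f)]
    if fields.get("issuing_authority") not in EXPECTED_AUTHORITIES:
        issues.append("unexpected issuing authority")
    try:
        expires = datetime.strptime(fields.get("expiration_date", ""), "%Y-%m-%d").date()
        if expires < date.today():
            issues.append("license is expired")
    except ValueError:
        issues.append("expiration date is missing or malformed")
    return issues

print(validate_license(extracted) or "Document passes intake validation")
```

Exceptions flagged this way can be routed to a reviewer immediately, while clean submissions flow straight into downstream systems as structured data.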

Crucially, the original document is preserved for audit and legal purposes, while the extracted data becomes structured input that systems can use immediately.

This approach reduces manual review, shortens processing timelines, and improves accuracy without requiring agencies to replace existing backend systems.

Trust starts at intake

Data quality is not just a technical concern. It is a trust issue.

When intake is inconsistent or error prone, agencies lose confidence in their own systems. Staff rely on workarounds. Programs build parallel processes. Leaders hesitate to automate decisions because the inputs are unreliable.

Structured document capture creates a clearer chain of custody for information. Data is extracted, validated, and recorded at the moment of submission. Errors are caught early. Exceptions are flagged explicitly rather than discovered weeks later.

This makes it easier to explain decisions, audit outcomes, and demonstrate compliance. It also creates the conditions for responsible automation, where systems assist humans instead of creating new risks.

Faster services without sacrificing control

One concern agencies often raise is whether automation reduces oversight. In practice, modern document capture does the opposite.

By standardizing how data is extracted and validated, agencies gain more visibility into what was submitted and why it was accepted or rejected. Staff spend less time on routine checks and more time on true exceptions and complex cases.

Residents benefit as well. Submissions are clearer. Errors are caught immediately instead of triggering follow-up requests. Processing moves faster because data arrives ready to use.

Speed improves not because corners are cut, but because unnecessary manual steps are removed.

Laying the groundwork for reuse and interoperability

Once data is captured in structured form, new possibilities open up.

Information can be reused across programs with appropriate consent. Verification can happen digitally instead of through phone calls or mailed letters. Analytics become more reliable because fields are consistent and well defined.

In some cases, structured data can be packaged into verifiable digital credentials that residents can present again without re-uploading documents. This is particularly valuable for information that must be shown repeatedly, such as licenses, permits, or eligibility determinations. Credentials work best when they are built on strong intake practices that ensure the underlying data is accurate from the start.

Standards based approaches, including those developed by the World Wide Web Consortium, help ensure that structured data remains portable and interoperable as systems evolve.

Why this matters now

Governments are under pressure to do more with limited resources while improving service quality and security. At the same time, interest in analytics and artificial intelligence is growing. Without a clean, structured data foundation, ambitions like predictive analytics and artificial intelligence remain out of reach. That is why 71% of federal agencies indicate their data is not yet ready for AI.

AI systems trained on inconsistent or poorly structured information produce unreliable results. Automation built on fragile inputs creates risk rather than efficiency. The path to smarter systems runs directly through better intake.

Modern document capture is not a future state technology. It is a practical step agencies can take today to improve accuracy, speed, and trust without waiting for wholesale system replacement.

Turning paperwork into progress

Paperwork has always been part of government. What needs to change is how that paperwork is handled.

By treating documents as sources of structured data, agencies can bridge the gap between resident submissions and digital systems that actually work. Secure capture, OCR, and validation transform intake from a bottleneck into a foundation.

This shift does not require abandoning existing systems or policies. It requires rethinking the first mile of digital services and investing where the impact is greatest.

The information governments need is already there. Unlocking it starts with turning documents into data that systems can trust and use.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Digital Identity Beyond Credentials: What Governments Actually Need

Digital identity is more than issuing credentials. This piece explains how identity underpins access, fraud prevention, and service delivery — and why governments need flexible, standards-based identity infrastructure instead of single-purpose tools.

Digital identity is often discussed in terms of credentials. A mobile ID. A digital license. A verifiable certificate stored in a wallet. These tools matter, but they represent only a small part of what identity must do for government digital services.

In practice, digital identity is infrastructure, not a single product. It underpins who can access services, how fraud is detected, how data is shared, and how systems make decisions across agencies and programs. When identity is treated as a single-purpose solution instead of a flexible layer, governments end up with fragmented systems that solve one problem while creating many others.

What governments actually need is identity infrastructure that supports many use cases, adapts over time, and integrates cleanly with existing systems as part of long-term government modernization.

Identity is the control plane for digital services

Every digital service depends on identity, whether or not it is visible to the end user.

When a resident applies for benefits, identity determines whether they are eligible to submit the application and what information they must provide. When a business renews a license, identity governs access to records and approval workflows. When a staff member reviews a case, identity defines what data they are allowed to see and what actions they can take.

These decisions are not made once. They happen continuously throughout the lifecycle of a service as context, risk, and policy change.

Identity is therefore not just about proving who someone is. It is about access control, policy enforcement, and risk management that allow systems to operate safely at scale.

Credentials are outputs, not the system

Digital credentials are often the most visible expression of digital identity, but they are not the system itself and should not be treated as the foundation.

A credential represents a moment in time. It asserts that certain information was verified under specific conditions. On its own, it does not manage access, handle revocation, or adapt to changing policy requirements or fraud signals.

When governments focus solely on issuing credentials, they risk building brittle solutions that work only for narrow use cases. Each new program requires a new credential type. Each integration becomes bespoke. Reuse is limited and interoperability suffers.

Credentials are most effective when they sit on top of a broader identity and access management infrastructure that handles authentication, authorization, lifecycle management, and auditability across digital services.

Identity enables access without friction

One of the most important roles of digital identity is enabling access without creating unnecessary barriers for residents or staff.

Not every interaction requires the same level of assurance. Browsing information, submitting a form, updating a profile, and approving a decision all carry different levels of risk within government systems.

Flexible identity infrastructure allows agencies to apply the right level of verification for each action. Stronger checks are used when risk is high. Lighter-touch methods are used when it is not. This risk-based identity approach improves security while preserving usability.
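To make "the right level of verification for each action" concrete, here is a minimal sketch that maps actions to required assurance tiers. The action names and tiers are illustrative assumptions, loosely echoing the risk-based idea in NIST SP 800-63 rather than implementing it.

```python
# Minimal sketch: choosing an assurance level per action instead of one
# blanket requirement. Action names and tiers are illustrative assumptions.
from enum import IntEnum


class Assurance(IntEnum):
    LOW = 1      # e.g., browsing public information
    MEDIUM = 2   # e.g., submitting a form or updating a profile
    HIGH = 3     # e.g., approving a decision


ACTION_RISK = {
    "view_public_info": Assurance.LOW,
    "submit_application": Assurance.MEDIUM,
    "approve_decision": Assurance.HIGH,
}


def is_sufficient(session_assurance: Assurance, action: str) -> bool:
    """Allow the action only if the session meets the action's required tier."""
    required = ACTION_RISK.get(action, Assurance.HIGH)  # default to the strictest tier
    return session_assurance >= required


# Example: a MEDIUM-assurance session can submit a form but not approve decisions.
assert is_sufficient(Assurance.MEDIUM, "submit_application")
assert not is_sufficient(Assurance.MEDIUM, "approve_decision")
```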

When identity systems are rigid or single-purpose, agencies often default to the highest common denominator. Users are over-verified. Services slow down. Trust erodes and digital transformation stalls.

Identity is central to fraud prevention

Fraud prevention is not just about detecting bad actors. It is about reducing uncertainty in digital services.

Strong identity signals help systems distinguish between legitimate variation and suspicious behavior. They allow agencies to correlate activity across programs, detect anomalies, and apply controls proportionally without over-collection of data.

When identity infrastructure is fragmented, fraud prevention tools operate in silos. Signals are incomplete. False positives increase. Legitimate users are caught in unnecessary reviews that increase cost and delay.

A unified identity layer provides consistent signals that can be used across services without exposing sensitive data or centralizing risk unnecessarily through shared databases.

Standards matter more than tools

Governments operate in complex ecosystems. Multiple agencies, vendors, and partners must work together over long time horizons. In this environment, proprietary identity solutions become liabilities rather than accelerators.

Standards-based identity infrastructure allows components to evolve independently. Authentication methods can improve. Credential formats can change. New services can be added without breaking existing integrations or reissuing identity artifacts.

Standards such as World Wide Web Consortium Verifiable Credentials define interoperable credential formats, while identity assurance models in National Institute of Standards and Technology SP 800-63 provide a framework for applying different levels of identity proofing and authentication based on risk.
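For orientation, a W3C Verifiable Credential is essentially a signed JSON document with a small set of standard fields. The sketch below shows the general shape under the v1.1 data model; all values are placeholders, and the proof is omitted because it is produced by the issuer's signing library rather than written by hand.

```python
# General shape of a W3C Verifiable Credential (v1.1 data model).
# All values below are placeholders; a real credential is signed by the issuer.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "ExampleResidencyCredential"],  # example type
    "issuer": "did:example:agency",
    "issuanceDate": "2026-01-13T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:resident",
        "residency": "CA",
    },
    # "proof" is added by the issuer's signature suite (for example, a Data
    # Integrity proof or a JWT envelope); omitted here because it cannot be
    # hand-written.
}
```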

This standards-based approach is what allows identity infrastructure to support many use cases without locking agencies into a single tool or vendor.

Identity as infrastructure, not a project

One reason identity initiatives struggle is that they are treated as projects with a fixed end state rather than long-lived infrastructure.

In reality, identity requirements evolve continuously. Policies change. Threats change. User expectations change. Infrastructure must adapt without disruption.

When identity is designed as a modular layer, agencies can add capabilities incrementally. New credentials can be issued without redesigning access control. New services can rely on existing identity signals. Risk models can improve over time as more data becomes available.

This approach reduces disruption and increases return on investment across modernization efforts.

Supporting modern service delivery

Modern digital services depend on reliable identity in ways that are not always obvious.

Secure document intake relies on identity to link submissions to the correct user. Data sharing relies on identity to enforce purpose and authorization. Automation relies on identity to ensure decisions are explainable, auditable, and defensible.

Credentials can play an important role in these flows, but only when they are part of a broader system that manages trust end to end across services and systems.

What governments actually need

Governments do not need more point solutions. They need digital identity infrastructure that is flexible, standards-based, and designed to support real-world complexity.

This means treating credentials as one tool among many. It means prioritizing access control, fraud prevention, and interoperability. It means building systems that can grow without being replaced every few years as policies and programs evolve.

Digital identity works best when it is mostly invisible, quietly enabling services to be secure, efficient, and trustworthy. When identity is done right, residents experience simpler interactions, staff gain better tools, and agencies gain confidence in how digital services are delivered.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Radiant Logic

The Next Era of Identity Security Starts With Action

Discover how a new AI-driven, data-centric approach turns identity visibility into real-time action, closing the gap between detection and remediation to continuously shrink your attack surface. The post The Next Era of Identity Security Starts With Action appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
How Today’s AI-Driven Remediation Launch Signals a Shift From Visibility to Proven Risk Reduction 

For years, security leaders have invested heavily in visibility. We expanded observability platforms, centralized telemetry, and populated dashboards designed to reassure boards that we were finally gaining control over sprawling identity environments. Yet one question still lingers: Has visibility alone made us safer? 

Today’s announcement marks a shift towards a more operational identity model — one where detection is paired with real-time, measurable action. With AI-driven remediation, collaborative investigation capabilities, MCP-enabled identity context services, and real-time enforcement through SSF CAEP, identity security enters a new phase, defined not by what we can see, but by what we can resolve and how quickly we can act. 

This is the beginning of an accountability era for identity security. 

The Visibility Plateau Is Real 

Enterprises are saturated with signals. Cloud expansion, the rise of Non-Human Identities, parallel IAM stacks, and inconsistent directory architectures generate more findings than teams can meaningfully address. We surface privileged access anomalies, orphaned accounts, and misaligned entitlements every day — yet fragmentation across IAM, PAM, IGA, HR, and ITSM slows remediation to a crawl. 

We detect. 

We discuss. 

But we resolve far too little. 

This is why Gartner’s Outcome-Driven Metrics (ODMs) resonate. The framing is clear — visibility only matters when it connects to action, and action must tie directly to measurable risk reduction. The problem is that ODMs break down without unified identity data and a fast, consistent remediation engine behind them. 

Today’s announcement is the first major step toward that operational model. 

Why Today’s Launch Represents an Industry Shift 

Radiant Logic’s new AI-driven remediation closes the most persistent gap in identity security: the distance between insight and action. When the platform detects a complex identity anomaly, it now initiates a real-time investigation channel in collaboration spaces like Slack or Microsoft Teams. RadiantOne’s AI Data Assistant (AIDA) steps in with the full identity lineage, policy context, and recommended remediation paths. Instead of waiting for tickets to climb through queues, stakeholders resolve issues where they already work. 

This approach doesn’t just accelerate action — it finally makes outcomes measurable. 

Mean time to remediate identity risks drops dramatically.
Ownership becomes clear and distributed.
The attack surface shrinks continuously rather than periodically.

This is the practical foundation ODMs require.

Unified Identity Data Is the Control Plane for Outcomes 

The industry has long underestimated a simple truth — meaningful automation and measurement require clean, consistent, unified identity data. Without it, AI is non-deterministic in nature, workflows break, and signals contradict each other. 

Radiant Logic’s identity data fabric provides that missing layer, consolidating all human and non-human identities into a single, governed source of truth. With this as the base, continuous observability and AI-assisted remediation become not only possible, but dependable. 

It transforms identity from a fragmented set of tools into a coherent operational system. 

Preparing for Agentic AI 

The rise of agentic AI introduces immense opportunity but also unprecedented identity risk. For AI agents to make safe, governed decisions, they need real-time, trusted identity context. 

Support for the Model Context Protocol (MCP) enables that. Through MCP, AI agents — including AIDA — gain secure access to unified identity data and live observations. This is the architecture required for autonomous identity operations that remain transparent, auditable, and aligned with Zero Trust principles in any agentic AI orchestration environment. 

This release positions enterprises for the next operational model: identity controls that operate at machine speed, not ticket speed. 

Real-Time Enforcement Through Shared Signals 

Detection only matters when downstream systems respond instantly. With support for the Shared Signals Framework and Continuous Access Evaluation Profile (SSF CAEP), RadiantOne can now trigger real-time signals that adjust access and enforce controls dynamically. 

This means: 

Session revocation in response to identity anomalies
Immediate risk-based access adjustments
Continuous policy alignment across distributed IAM stacks

Identity security shifts from episodic, batch-based controls to continuous enforcement. 
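As an illustration only, the sketch below outlines the general shape of a CAEP session-revoked event carried as the claims of a Security Event Token. The issuer, audience, and subject values are placeholders, and the exact claims a given transmitter emits may differ.

```python
# Rough shape of a CAEP "session revoked" event, expressed as the claims of a
# Security Event Token (SET). All values are placeholders for illustration.
set_claims = {
    "iss": "https://idp.example.gov",   # transmitter of the signal
    "aud": "https://app.example.gov",   # receiver that should act on it
    "iat": 1767312000,
    "jti": "a8c2f1e0-placeholder",
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {
                "format": "email",
                "email": "user@example.gov",
            },
            "event_timestamp": 1767312000,
        }
    },
}
# In practice this payload is signed as a JWT and delivered over an SSF push or
# poll stream; the receiving system then terminates the affected session.
```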

The Accountability Era Is Here 

Boards, regulators, insurers, and CISOs are all converging on the same expectation — security investments must show measurable reductions in risk. Dashboards no longer satisfy that requirement. 

Today’s Radiant Logic advancements represent more than a feature release. They mark a shift toward identity programs rooted in outcomes: unified data, continuous observability, AI-driven remediation, and real-time enforcement. This is how organizations finally move from reactive monitoring to proactive attack surface reduction. 

The age of visibility for visibility’s sake is ending. 

The age of identity security that proves its impact has begun. 

The post The Next Era of Identity Security Starts With Action appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Spruce Systems

Applying Zero Trust to Government Data Flows

Zero Trust isn’t just a network model; it’s a way of handling data safely as it moves through government systems. This post explains how Zero Trust principles apply to document intake, identity checks, and service delivery, ensuring access is verified at every step without slowing users down.

Zero Trust is often discussed as a network security model, but its real impact extends far beyond firewalls and infrastructure. At its core, Zero Trust is a way of handling data safely as it moves through government digital services, systems, users, and agencies.

For government digital services, this perspective matters. Data rarely stays in one place. Documents are submitted by residents, reviewed by staff, shared across programs, and stored for long periods of time across multiple systems of record. Each step introduces risk if trust is assumed rather than verified.

Applying Zero Trust to data flows means verifying access and integrity at every stage, from document intake through system integration, without adding friction for the people using the service.

Why data flow matters more than system boundaries

Traditional security models focused on protecting systems. If a user or application was inside the network, it was trusted. Once data crossed that boundary, controls were often relaxed.

Modern government services do not operate this way. Cloud platforms, third-party integrations, mobile access, and cross-agency collaboration have dissolved clear perimeters as part of government modernization and digital transformation. Data moves constantly between systems with different owners, security postures, and access rules.

In this environment, protecting the network is not enough. Trust must travel with the data itself across workflows and integrations.

Zero Trust shifts the question from where a request originates to whether it should be trusted right now, given the identity, context, and policy governing that interaction at the moment it occurs.

Zero Trust starts at document intake

The first point of contact for most government data is document intake. Forms, images, and uploads are how residents provide information that drives decisions across digital services.

Applying Zero Trust here means not assuming that submitted data is valid simply because it arrived through an official channel. Each submission is treated as untrusted until it is verified against defined policy requirements.

Modern intake systems apply validation at the moment of capture. Documents are analyzed using OCR and structured data extraction. Required fields are checked. Formats and expiration dates are validated. Submissions that do not meet policy requirements are flagged immediately rather than passing silently into downstream systems and workflows.

This approach reduces manual review while improving confidence in the data. Trust is established explicitly instead of implicitly at the edge of the service.
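A minimal sketch of validation at the moment of capture might look like the following; the field names and policy rules are hypothetical.

```python
# Sketch: treating each submission as untrusted until it passes explicit checks.
# Field names and policy rules are hypothetical.
from datetime import date


def validate_submission(fields: dict, required: set[str]) -> list[str]:
    """Return a list of policy findings; an empty list means the submission passes."""
    findings = [f"missing required field: {name}" for name in sorted(required - fields.keys())]

    expiry = fields.get("document_expiration")
    if expiry and date.fromisoformat(expiry) < date.today():
        findings.append("document is expired")

    return findings


# Problems are flagged at the edge, before anything reaches downstream systems.
issues = validate_submission({"name": "A. Resident"}, {"name", "document_expiration"})
assert issues == ["missing required field: document_expiration"]
```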

Identity checks as continuous verification

Identity is not a one-time event in a Zero Trust model. It is continuously assessed throughout service interactions.

When residents or staff interact with a service, systems evaluate identity signals based on the sensitivity of the action being taken. Accessing general information may require minimal assurance. Submitting an application, updating records, or approving decisions requires stronger verification and clearer identity signals.

Zero Trust architectures rely on verified identity, authentication strength, device context, and behavior rather than network location. These signals inform access decisions dynamically across digital services and systems.

This approach aligns with identity assurance models defined in NIST SP 800-63, which separates identity proofing, authentication, and federation and supports applying different assurance levels based on risk rather than treating identity as a binary gate.

Securing data as it moves between systems

Once data enters a system, it rarely stays there. Information is routed to case management tools, analytics platforms, payment systems, and partner agencies through system integration and APIs.

Applying Zero Trust to these data flows means every system verifies the identity and authorization of the requester before sharing data. APIs enforce least privilege access. Data is encrypted in transit and at rest. Access is logged and auditable at each interaction.

Importantly, systems do not assume trust based on prior interactions. Each request is evaluated independently, even if it comes from another government system within the same agency.

This allows agencies to scale interoperability without creating implicit trust relationships that become long-term liabilities.
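As a sketch of what evaluating every request independently can look like in code, the example below checks the caller's token, scope, and stated purpose on each call and logs the access. The helper, scope names, and claim layout are assumptions for illustration.

```python
# Sketch: evaluating each data-sharing request on its own, with no implicit
# trust carried over from earlier calls. Helpers and scopes are hypothetical.
import logging

logger = logging.getLogger("data-exchange")


def verify_token(token: str) -> dict:
    """Placeholder: validate signature and expiry, then return the caller's claims."""
    raise NotImplementedError


def handle_record_request(token: str, record_id: str, purpose: str) -> dict:
    claims = verify_token(token)  # who is asking, right now

    if "records:read" not in claims.get("scopes", []):
        raise PermissionError("caller lacks the least-privilege scope for this data")

    if purpose not in claims.get("allowed_purposes", []):
        raise PermissionError("request purpose is not permitted for this caller")

    # Every access is logged so the exchange is auditable after the fact.
    logger.info("record %s released to %s for %s", record_id, claims["sub"], purpose)
    return {"record_id": record_id, "released_to": claims["sub"]}
```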

Delivering services without slowing users down

A common misconception is that Zero Trust adds friction. In practice, the opposite is true when it is implemented as part of the architecture.

By verifying trust continuously and automatically, systems avoid the need for broad, disruptive checks. Users are not repeatedly asked for information that has already been validated. Staff are not forced into manual review for routine actions that can be handled through policy-driven workflows.

For example, once a document has been validated at intake, downstream systems can rely on that validation rather than rechecking it. Once identity assurance is established at the appropriate level, services can proceed smoothly within defined policy boundaries.

The result is a faster experience that remains secure across the full service lifecycle.

Enabling reuse and interoperability safely

One of the biggest challenges in government is safely reusing data across programs. Zero Trust provides a framework for doing this without expanding risk.

Because access decisions are made at each interaction, data can be shared selectively based on purpose, role, and policy. Systems do not need to fully trust one another in order to interoperate.

In some cases, verified data can be packaged into reusable artifacts such as verifiable digital credentials. These allow information to be presented and verified without exposing underlying systems or duplicating records. Their effectiveness depends on strong identity assurance and data integrity at the source, starting with intake.

Why Zero Trust is essential for modern service delivery

As government digital services become more interconnected, automated, and data-driven, the cost of implicit trust grows. A single weak link can compromise multiple programs through shared integrations.

Zero Trust addresses this by making verification routine, automated, and largely invisible to end users. Trust is never assumed and never permanent. It is continuously evaluated based on identity, context, and policy at every step in the data flow.

This approach aligns security with how modern digital services actually operate.

Designing data flows for trust from the start

Applying Zero Trust to government data flows is not about adding more controls. It is about placing the right controls in the right places across intake, identity, and system integration.

When document intake, identity verification, and service delivery are designed with Zero Trust principles from the beginning, systems become easier to integrate, safer to scale, and more resilient over time.

Most importantly, services remain usable. Residents get faster decisions. Staff get clearer signals. Agencies gain confidence that data is handled responsibly at every step.

That is how Zero Trust moves from a security concept to a practical foundation for modern government digital services.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Modernizing Government Systems Without Replacing Them

Most agencies can’t rip and replace legacy systems — and shouldn’t have to. This post explains how modern APIs, identity layers, and workflow tools can extend existing systems without disrupting operations or increasing risk.

Government agencies rely on systems that were built to last. Many of these platforms have been in place for decades, supporting critical programs, large user bases, and complex policy rules. While they are often labeled legacy systems, they continue to perform essential functions reliably.

The challenge is that expectations around digital services have changed. Residents expect faster interactions and simpler workflows. Staff need better tools to move information between systems. Leaders want data that can be shared securely, analyzed across programs, and integrated without manual work. Replacing core systems outright is rarely feasible and often introduces unacceptable risk.

The good news is that government modernization does not require a rip and replace strategy. In most cases, the fastest and safest path forward is legacy system modernization through extension. Existing systems remain systems of record, while modern layers improve how they connect, interact, and evolve.

Why rip and replace rarely works

Large-scale system replacement is appealing in theory. Start fresh. Eliminate technical debt. Build something modern from the ground up.

In practice, these projects are costly, disruptive, and slow. Core systems encode years of policy decisions, edge cases, and operational knowledge. Recreating that logic accurately takes time and introduces risk. During long replacement cycles, agencies must operate old systems while building new ones in parallel, stretching budgets, staff, and governance capacity.

Even when replacements succeed, they often deliver less flexibility than expected. The underlying challenges are not just technical. They are organizational, regulatory, and operational.

Digital transformation efforts that work with existing systems instead of against them are far more likely to deliver value quickly and sustainably.

Shifting the focus from systems to connections

A more effective modernization strategy starts by changing where modernization happens.

Instead of modifying core systems directly, agencies modernize the layers around them. This includes identity infrastructure, document intake, workflow automation, and system integration.

By improving how systems connect and exchange information, agencies unlock new capabilities without destabilizing what already works. Modernization becomes additive rather than destructive.

This is where APIs, identity layers, and workflow orchestration play a critical role.

Using APIs to extend legacy systems safely

APIs provide a controlled way to expose specific functions and data from legacy systems without opening them up entirely.

Rather than building fragile point-to-point integrations, agencies can define stable, standards-based interfaces that allow modern digital services to interact with existing platforms. This makes it possible to automate processes, integrate third-party tools, and launch new services without rewriting core logic.

APIs also improve governance and security. Access can be scoped and monitored. Usage can be audited. Changes can be versioned. Systems remain decoupled so updates in one area do not cascade into failures elsewhere.

Over time, APIs transform rigid legacy systems into flexible building blocks for modernization.
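The pattern can be sketched as a thin, versioned facade that exposes one narrow capability of a legacy system while hiding everything else. The class names and the legacy record layout below are hypothetical.

```python
# Sketch of an API facade over a legacy system of record. The legacy client
# and record layout are hypothetical; the point is the narrow, versioned surface.
from dataclasses import dataclass


class LegacyLicensingSystem:
    """Stand-in for an existing system of record (mainframe, COTS package, etc.)."""

    def lookup(self, license_id: str) -> dict:
        raise NotImplementedError


@dataclass
class LicenseFacadeV1:
    """Exposes one read-only capability; nothing else in the legacy system is reachable."""

    legacy: LegacyLicensingSystem

    def get_license_status(self, license_id: str) -> dict:
        raw = self.legacy.lookup(license_id)
        # Return a stable, documented shape so the legacy record layout can
        # change without breaking integrators.
        return {
            "license_id": license_id,
            "status": raw.get("STATUS_CD", "unknown"),
            "expires": raw.get("EXP_DT"),
        }
```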

Adding an identity layer instead of rewriting authentication

Many legacy systems were not designed for modern identity requirements. They rely on siloed user accounts, outdated authentication models, or inconsistent access controls.

Rather than modifying each system individually, agencies can introduce a shared digital identity layer that handles authentication and authorization consistently across services.

This layer establishes who a user is, what level of identity assurance applies, and what actions they are permitted to take. Existing systems consume these signals instead of managing identity themselves.

The result is stronger security and simpler user experiences. Residents and staff interact with a consistent identity framework even as they move between multiple systems. Identity becomes infrastructure rather than application logic.
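A small sketch of what consuming identity signals can mean in practice: rather than keeping its own accounts, a legacy application accepts claims already verified by the shared identity layer and maps them to a local role. The claim names, assurance threshold, and role map are illustrative assumptions.

```python
# Sketch: a legacy application trusting the shared identity layer instead of
# managing accounts itself. Claim names and the role map are illustrative.
ROLE_MAP = {
    "benefits:caseworker": "CASE_REVIEWER",
    "benefits:supervisor": "CASE_APPROVER",
}


def session_from_assertion(claims: dict) -> dict:
    """Build a legacy session from claims already verified by the identity layer."""
    if claims.get("assurance_level", 0) < 2:
        raise PermissionError("insufficient identity assurance for this system")

    roles = [ROLE_MAP[g] for g in claims.get("groups", []) if g in ROLE_MAP]
    if not roles:
        raise PermissionError("no mapped role for this user")

    return {"user_id": claims["sub"], "roles": roles}
```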

Modernizing workflows without touching core logic

Workflow rigidity is another common barrier to modernization. Legacy systems often enforce linear processes that no longer reflect how work actually happens.

Modern workflow automation tools can sit alongside existing platforms to orchestrate processes across systems. They handle routing, approvals, notifications, and exception management while leaving systems of record untouched.

For example, a workflow layer can coordinate document intake, validation, review, and decisioning across multiple backend systems. Each system continues to do what it does best, while the workflow layer manages the end-to-end digital service.

This makes it far easier to adapt processes as policies change without rewriting underlying systems.
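To illustrate the orchestration idea, here is a minimal sketch of a workflow layer that runs steps in sequence across existing systems and routes failures to exception handling. The step names and error-handling policy are assumptions, not a reference to any particular tool.

```python
# Sketch: an orchestration layer sequencing intake, validation, review, and
# decisioning across existing systems. Step and client names are hypothetical.
from typing import Callable

Step = Callable[[dict], dict]


def run_workflow(case: dict, steps: list[tuple[str, Step]]) -> dict:
    """Run each step in order; stop and route to exception handling on failure."""
    for name, step in steps:
        try:
            case = step(case)
            case.setdefault("history", []).append(f"{name}: ok")
        except Exception as exc:  # failed steps become routed work items
            case.setdefault("history", []).append(f"{name}: failed ({exc})")
            case["status"] = "needs_attention"
            return case

    case["status"] = "complete"
    return case


# Each step calls an existing system (intake, case management, notifications)
# through its API; the systems of record themselves are untouched.
```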

Reducing risk through incremental modernization

One of the biggest advantages of a layered approach is risk reduction.

Changes can be introduced incrementally. New digital services can be piloted with limited scope. Rollbacks are simpler because core systems remain stable. Staff can adapt gradually instead of being forced into a single, disruptive transition.

Security improves as well. Modern layers bring clearer access controls, better observability, and more consistent auditability. Risk is reduced not by freezing systems in place, but by isolating responsibilities and making trust explicit.

Preparing systems for the future

Extending existing systems is not a stopgap. It is how agencies prepare for future capabilities.

Once APIs, identity infrastructure, and workflow orchestration are in place, agencies are better positioned to adopt analytics, automation, and AI responsibly. New tools can be integrated without deep changes to systems of record. Data can be reused safely across programs. Innovation becomes incremental instead of disruptive.

Over time, agencies can modernize or replace individual components based on evidence and outcomes, not pressure or deadlines.

Modernization that respects reality

Government systems exist for a reason. They support critical missions, complex rules, and real-world constraints. Modernization efforts that ignore this reality often fail.

By focusing on legacy system modernization through extension, agencies can improve digital services, interoperability, and security without putting operations at risk. APIs, identity layers, and workflow automation create a bridge between what exists today and what is possible tomorrow.

This is how government digital transformation becomes practical, sustainable, and aligned with how government actually works.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Thales Group

Thales powers CES-winning post-quantum chip from Samsung Electronics

06 Jan 2026

Thales’ secure operating system (OS) supports Samsung's security chip, winner of the CES 2026 ‘Best Cybersecurity Innovation’ Award.
The chip is the first embedded Secure Element (eSE) to integrate post-quantum cryptography (PQC), protecting devices against tomorrow’s quantum-enabled cyber threats.
Thales’ quantum-resistant software and OS enable unmatched performance, energy efficiency and long-term data protection.

Quantum computers, with their unprecedented processing power, will ultimately challenge today’s encryption standards. This is why Thales welcomes the CES 2026 recognition awarded to the new post-quantum–ready security chip from Samsung Electronics' System LSI Business, which embeds Thales’ secure operating system and quantum-resistant cryptographic libraries. This breakthrough represents a major step forward in protecting connected devices against both current cyberattacks and tomorrow’s quantum-era threats.

Thales’ hardened OS enables Samsung's award-winning security chip to deliver hardware-based, quantum-resistant protection from the moment devices power on. Ultimately, it ensures that encrypted data and device credentials remain secure against both classical and quantum attacks, preserving confidentiality, integrity and long-term trust, even in a post-quantum world.

The risk is not only future-oriented: malicious actors can already intercept and store encrypted data today, waiting for the moment quantum capability arrives to decrypt it later (“harvest now, decrypt later”). Indeed, with the expected power of quantum computing, anything protected by current standards (personal identities, sensitive credentials and even the cryptographic keys embedded in connected devices) could be exposed.

Thales’ OS and PQC libraries enable the Samsung chip to perform next-generation cryptography at high speed with reduced power and memory consumption. This ensures:

Quantum-resistant encryption and authentication.
High-performance cryptographic operations on the smallest footprint.
Long-term confidentiality against “harvest now, decrypt later” attacks.
“We are very proud to partner with Samsung System LSI on this pioneering achievement. The S3SSE2A chip is a game-changer, offering robust, future-proof security in an energy-efficient design. This breakthrough confirms that post-quantum security is not just for high-end systems, it is essential for all connected devices, from consumer electronics to vast IoT ecosystems. Together, our companies have redefined what is possible for embedded cybersecurity, setting a new benchmark for the industry.”

Eva Rudin, Vice President, Mobile Connectivity Solutions at Thales
Hwa Yeal Yu, vice president and head of the System LSI Security & Power Product Development Team at Samsung Electronics, added: "Samsung and Thales have built a long-standing collaboration in security, and we are pleased to introduce the S3SSE2A, the industry’s first PQC total solution. Developed jointly from the outset to integrate hardware and software, this solution delivers an exceptional level of security. We look forward to continuing our collaboration with Thales to advance security solutions for the next generation of connected devices."

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.


Elliptic

Crypto regulatory affairs: Hong Kong progresses cryptoasset regulatory activities

In this first January edition of crypto regulatory affairs, we will cover:



Spherical Cow Consulting

Process, Standards, and the AI Rogue Wave: Notes from Gartner IAM

In this episode of The Digital Identity Digest, Heather Flanagan reflects on Gartner IAM and what it reveals about digital identity decision-making, identity access management priorities, and enterprise buying behavior. The conversation explores how process, not product, often drives outcomes in real-world IAM programs. Learn why overlooked process maturity, invisible identity standards, and int

Gartner IAM is a strange kind of conference, at least compared to the other events I generally attend in a year. It’s an event hosted by one of the world’s largest analyst firms.

Attending as an individual means either shelling out a LOT of money or being invited as a speaker. The conference is geared towards Gartner subscribers, who receive passes as part of their company’s Gartner subscription.

Gartner IAM is not where you go to learn the latest implementation tricks or debate protocol edge cases. It’s a buyer-seller-enterprise architect event that’s been optimized for evaluations, shortlists, and early-stage decisions. For someone like me, an independent industry consultant, the most useful information was not in the sessions as much as it was in the booth conversations, the side comments, and the quiet conversations before I even arrived at the event.

Walking the floor, talking to vendors, and listening to how buyers describe their constraints offers a view into how identity decisions are actually being made right now, rather than how people (hello, my fellow standards geeks!) working at a different layer in the stack hope they’re made, and not how they’re described in marketing material.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

tl;dr: three patterns

This year, three patterns stood out to me. None of them was entirely new, but seeing them in this different format gave me quite a bit to think about.

First, many organizations don’t really need a better product; they need a process they can sustain. Second, standards work is happening, but it’s largely invisible in the conversations where buying decisions take shape. And third, even as Gartner itself warned that the current AI hype is spiraling, the market continues to reward vendors for leaning into it anyway.

Taken together, these observations say less about individual products and more about the pressures shaping the identity market right now. And those pressures, more than any single feature announcement, are what will determine how well today’s IAM decisions hold up over the next few years.

Lesson 1: Customers need process more than they need product

This lesson didn’t come from a session or a slide deck. It came from a quiet, matter-of-fact conversation with a potential buyer whom I met on the ferry as I was heading to the airport.

They had budget. Plenty of it, at least when it came to capital spend. What they didn’t have was much in the way of operations and maintenance budget. I remember this tension from when I worked in research and education, but it’s not something I’ve had to think about in over 15 years.

What stood out wasn’t their indecision about vendors or features. In fact, they were almost indifferent to the product itself. That wasn’t because the tools were bad, or interchangeable, or “good enough.” It was because the product wasn’t the problem they were trying to solve.

Their real challenge was designing a process that actually worked in their environment.

They needed to understand who owned which decisions, how access changes would be handled over time, what could reasonably be automated, and what still required human judgment. They needed workflows that matched their staffing model, risk tolerance, regulatory requirements, and existing operational constraints. Without that, any product they chose would eventually become shelfware or, worse, another brittle system propped up by manual workarounds.

Capital vs O&M

This is where the capital-versus-operations gap becomes painfully visible. Buying software is often easier than committing to the ongoing work of running it well. Capital budgets can be approved as a one-time event. Operational maturity takes sustained investment, clear accountability, and a willingness to change how teams actually work.

Most IAM products assume a certain level of process maturity. They assume you know who owns approvals, how exceptions are handled, how failures are detected, and how changes ripple across systems. When those assumptions don’t hold, the tool can’t compensate, no matter how modern or feature-rich it is.

What this buyer needed wasn’t a better product demo. They needed help designing a process that fit their reality, one they could actually sustain once the implementation team packed up and moved on.

Walking away from that conversation, it was hard not to notice how many booths were still selling features, dashboards, and AI-assisted capabilities, when what many buyers are quietly struggling with is something far more fundamental: turning identity into an operational practice, not just a purchased solution. I’m still thinking about what that means for identity standards and the problems we’re trying to solve, but I think there’s something there for us to take to heart.

Lesson 2: Standards are rarely part of buyer–seller conversations

To be clear up front: standards were present at Gartner IAM, just not where most buyers would encounter them.

There were some genuinely strong signals during the conference itself. Atul Tulshibagwale at SGNL ran a full session focused on authorization standards, grounding the discussion in real-world needs rather than abstract theory. Members of the OpenID Foundation’s AuthZEN working group organized an interoperability event during the conference, doing the hard, unglamorous work of making things actually line up across implementations. That work matters, and it deserves recognition.

But then there was the show floor.

In conversation after conversation, standards barely came up. When they did, it was usually as a vague assurance—“we support standards,” which is a ridiculously vague statement that makes my eyes twitch—rather than a concrete explanation of which standards were used, where, or why that choice mattered. Many people staffing booths couldn’t answer even basic questions about how their products interacted with standards at all.

Some of that is likely structural. Booths are often staffed by marketing teams rather than engineers or architects. Messaging is optimized for clarity and differentiation, not nuance. Still, the absence bothered me. Even if the people on the floor aren’t expected to whiteboard protocol flows, it’s not unreasonable to expect some visible signal that interoperability is a design goal rather than an afterthought.

Interoperability matters, even in marketing

What I kept coming back to was a simple question: where is the interoperability?

Nothing worked full-stack. Products might integrate into something else, or expose an API, or “support” a standard in isolation, but very little of it connected cleanly end-to-end across vendors. Interop existed in pockets and side events, disconnected from the narratives shaping purchasing decisions just a few aisles away.

That gap matters. Buyers don’t necessarily need a deep understanding of authorization models or protocol tradeoffs to make good decisions. But when standards are invisible in sales conversations, you’re actually missing something incredibly powerful about how any given product interacts with the world.

What made this especially frustrating was knowing that the work is happening. Standards groups are active. Interop events are running. Technical alignment is improving. Yet none of that showed up where buyers were forming first impressions and shortlists.

If standards are the foundation that makes long-term flexibility and portability possible, hiding them from the buying conversation doesn’t make them irrelevant; it just makes their absence someone else’s problem later.

Lesson 3: The AI hype cycle irony

Let’s step back to the very start of the conference.

Gartner executives kicked things off by acknowledging that the current AI hype isn’t just excessive, it’s downright insane. What we’re seeing isn’t a single hype cycle, but three converging at once, crashing together like a rogue wave. Their message was explicit: expectations are inflated, capabilities are uneven, and caution is warranted. They were not subtle in their warning.

And then I walked the expo floor.

Almost every booth had something about AI in its materials. “AI-powered.” “AI-driven.” “AI-enhanced.” Sometimes the connection to the actual product was clear. Often, it wasn’t. AI had become less a description of capability and more a signaling mechanism. It’s a way to reassure buyers that a vendor wasn’t falling behind.

So. Much. Irony. The same event that warned about runaway expectations was surrounded by messaging designed to feed them.

Say it like you mean it. Please.

One private conversation made that tension especially clear. A vendor admitted, off the record, that their product doesn’t actually have anything to do with AI in any meaningful way. But their CEO had insisted it appear in the marketing anyway. Not because it was accurate, but because it was expected.

That moment captured something important about the current market dynamic. This isn’t just hype driven by ignorance or bad faith. It’s driven by fear: fear of being perceived as outdated, irrelevant, or uncompetitive in a moment when “AI” has become shorthand for innovation itself.

The risk, of course, is that this kind of signaling erodes trust. Buyers are left to sort out which AI claims reflect real capability, which reflect roadmap aspirations, and which are little more than branding exercises. Meanwhile, genuinely useful applications of AI risk being lost in the noise.

What made this especially unsettling in an IAM context is that identity systems are foundational infrastructure. They don’t benefit from magical thinking. Overpromising here doesn’t just disappoint, it complicates operations, inflates risk, and makes already-hard problems harder to unwind later.

If Gartner’s warning was meant to encourage sobriety, the show floor illustrated just how difficult that is when market incentives reward saying the expected thing rather than the accurate one.

So what do we do with all of this?

None of these observations is about any single vendor doing something wrong. In fact, what struck me most at Gartner IAM was how rational all of this looks when you step back.

Buyers are constrained by budgets that favor capital spend over operational change. Vendors are competing in a crowded market where attention is scarce and signaling matters. Marketing teams are asked to simplify complex systems into messages that can be absorbed in a five-minute booth conversation. And AI, for better or worse, has become the shorthand everyone believes they must use to be taken seriously.

Given those pressures, it’s not surprising that process design gets sidelined, standards fade into the background, and hype fills the gaps.

What’s harder to ignore is where that leaves us.

Identity systems don’t live in slides or demos. They live in operations, span vendors, and generally persist long after the initial buying decision. When process is underdesigned, interoperability is accidental, and capabilities are oversold, the cost doesn’t show up immediately. It shows up later, when teams are stuck maintaining systems they can’t easily change or explain.

Identity is about incentives

That’s why this conference stuck with me more than I expected; I’d never been to a Gartner event and didn’t have high expectations, as I am neither a buyer nor a seller. Seeing these patterns side by side, in a setting optimized for real purchasing decisions, was a reminder that the hardest identity problems aren’t technical in the narrow sense. They’re about incentives, visibility, and the gap between how systems are sold and how they’re actually lived with.

For those of us who spend our time thinking about standards, architectures, and long-term outcomes, that gap is uncomfortable, but it’s also instructive. If the work we’re doing isn’t visible or legible where decisions are made, then we shouldn’t be surprised when it gets deprioritized.

Gartner IAM didn’t change my views so much as sharpen them. The pressure shaping the identity market right now isn’t going away. The question is whether we adapt by making process, interoperability, and honesty easier to see, or whether we keep letting those concerns become someone else’s problem down the line.

And yes, I’m still thinking about that ferry conversation.

A note on perspective

The lens I bring to conferences like Gartner IAM is shaped by the work I do every day.

Through Spherical Cow Consulting, I spend most of my time tracking and interpreting changes in digital identity standards, browser behavior, and policy discussions, and helping organizations understand how those shifts affect real systems and real decisions. That often means looking past product claims to the assumptions underneath them: about process maturity, interoperability, operational ownership, and long-term sustainability. If you’d like product recommendations, you might want to check out Gartner, RedMonk, KuppingerCole, or some other analyst firm.

I work primarily as an advisor and analyst, not an implementer. My role is to help teams make sense of where standards are actually doing work, where incentives are misaligned, and where today’s “reasonable” decisions may create friction later. That vantage point is what informed the observations in this post.

So when I say that buyers need process more than product, that standards are invisible where decisions are made, or that AI hype is distorting conversations, those aren’t abstract critiques. They’re patterns I see repeatedly when technical realities, organizational constraints, and market pressure collide.

If this post raised questions for you—or felt uncomfortably familiar—that’s probably not a coincidence. Feel free to reach out if you’d like to continue the discussion directly!

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

Setting the Stage at Gartner IAM

00:00:30
Gartner IAM is a strange conference—at least compared to most industry events I attend. It’s run by one of the largest analyst firms in the world, and attending usually means either paying a significant fee or being invited as a speaker.

00:01:05
Most attendees are Gartner subscribers using passes tied to their company subscriptions. That reality shapes everything about the event.

This is not a conference for debating protocol edge cases or swapping implementation tricks.

Instead, Gartner IAM is optimized for:

Buyer–seller conversations
Enterprise architecture evaluations
Shortlists and early purchasing decisions

00:01:45
As an independent consultant, the most useful insights weren’t in the sessions. They came from conversations on the show floor, side comments at booths, and even discussions before the conference began.

Lesson One: Customers Need Process More Than Product

00:02:35
The first lesson stood out clearly: customers need process more than they need a product.

This insight didn’t come from a keynote or slide deck. It came from a quiet conversation with a buyer I met on a ferry on the way to the airport.

00:02:55
This person had capital budget—plenty of it. What they lacked was operational and maintenance funding.

That tension may sound familiar.

00:03:20
What stood out wasn’t indecision about vendors or features. In fact, they were almost indifferent to the product itself.

Why?

Because the product wasn’t the problem they were trying to solve.

00:03:45
Their real challenge was designing a process that actually worked in their environment, including:

Who owns which decisions
How access changes over time
What can realistically be automated
Where human judgment is still required

00:04:20
They needed workflows aligned with staffing models, risk tolerance, regulatory requirements, and operational constraints.

Without that foundation, any product would eventually become:

Shelfware
Technical debt
A brittle system held together by manual workarounds

00:05:10
This is where the capital versus operations gap becomes painfully visible. Buying software is often easier than committing to the ongoing work of running it well.

00:05:45
Most IAM products assume a level of process maturity that simply doesn’t exist in many organizations.

When those assumptions fail, even the most modern tools can’t compensate.

Lesson Two: Where Standards Disappear

00:06:30
As a standards person, the second lesson was hard to ignore: standards rarely show up in buyer–seller conversations.

To be clear, standards were present at Gartner IAM—just not where most buyers encountered them.

00:06:55
There were encouraging signals:

SGNL ran a session grounded in real-world authorization standards
The OpenID Foundation’s AuthZEN Working Group hosted an interoperability event
Real technical alignment work was happening

That work matters. It deserves recognition.

00:07:20
But on the show floor, standards barely came up.

When they did, it was usually reduced to a vague statement:
“Yes, we support standards.”

00:07:50
Rarely was there clarity about:

Which standards were supported
Where they were implemented
Why those choices mattered

Many booth staff couldn’t answer basic interoperability questions.

00:08:30
Even acknowledging conference dynamics—marketing over engineering—the absence was still striking.

Interoperability existed in pockets and side conversations, disconnected from the narratives shaping purchasing decisions.

00:08:55
That gap matters.

Standards don’t need to dominate sales conversations, but when they’re invisible, buyers lose insight into how products interact with the broader ecosystem.

Lesson Three: The AI Rogue Wave

00:09:20
The third lesson involved an impressive amount of AI irony.

Gartner executives opened the conference by acknowledging that AI hype isn’t just excessive—it’s downright insane.

00:09:40
They described not one, but three hype cycles converging into a single rogue wave:

Inflated expectations
Uneven capabilities
A strong need for caution

They were not subtle.

00:10:00
Then I walked onto the show floor.

Nearly every booth featured AI messaging:
AI-powered. AI-driven. AI-enhanced.

00:10:20
In some cases, the connection to real capability was clear. More often, it wasn’t.

AI had become less a feature and more a signal—reassurance that a vendor wasn’t falling behind.

00:10:50
One off-the-record conversation captured the moment perfectly.

A vendor admitted their product didn’t meaningfully use AI, but leadership insisted it appear in marketing materials because it was expected.

00:11:15
This isn’t hype driven solely by bad faith. It’s driven by fear:

Fear of being seen as outdated
Fear of appearing irrelevant
Fear of missing the innovation narrative

00:11:40
Over time, this erodes trust.

Buyers are left sorting out which AI claims reflect real capability and which are branding exercises.

In IAM, that’s especially risky.

Identity systems are foundational infrastructure. Overpromising doesn’t just disappoint—it complicates operations and inflates long-term risk.

Where This Leaves Us

00:11:55
None of these observations point to any single vendor doing something wrong.

In fact, what stood out was how rational all of this looks when you step back.

Buyers face budget structures favoring capital over operations
Vendors compete in crowded markets with limited attention
Marketing simplifies complex systems into five-minute conversations

00:12:25
Given those incentives, it’s not surprising that:

Process gets sidelined
Standards fade into the background
Hype fills the gaps

What’s harder to ignore is the long-term cost.

Identity systems don’t live in demos. They live in operations. They span vendors and persist long after purchasing decisions are made.

Final Reflections

00:12:55
Gartner IAM didn’t change my views so much as sharpen them.

Seeing these patterns side by side was a reminder that the hardest identity problems aren’t narrowly technical.

They’re about:

Incentives
Visibility
The gap between how systems are sold and how they’re lived with

00:13:15
That first ferry conversation is going to stick with me for a long time.

Before I wrap up, it’s worth noting where this perspective comes from.

My work sits at the intersection of standards development, policy conversations, and real-world procurement decisions.

00:13:35
That vantage point shapes everything I’ve shared here.

This isn’t critique from the sidelines—it’s grounded in helping organizations understand the trade-offs they’re making, whether they realize it or not.

Closing

00:13:35
Thanks for listening to this week’s episode of the Digital Identity Digest.

If this helped make things a little clearer—or at least more interesting—share it with a colleague and connect with me on LinkedIn.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Process, Standards, and the AI Rogue Wave: Notes from Gartner IAM appeared first on Spherical Cow Consulting.

Monday, 05. January 2026

Northern Block

How IATA is Building Trust for Nearly 400 Airlines (with Gabriel Marquie)

How do airlines trust travel agencies, partners, and passengers across borders in real time? In this episode, IATA’s Gabriel Marquie explains how digital identity is reshaping airline distribution and the future of contactless travel. The post How IATA is Building Trust for Nearly 400 Airlines (with Gabriel Marquie) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

🎥 Watch on YouTube 🎥
🎧 Listen On Spotify 🎧
🎧 Listen On Apple Podcasts 🎧

How does an airline instantly trust a travel agency on the other side of the world? And what does it take to enable a future where passengers move through airports with a seamless, contactless identity experience?

In the latest episode of The SSI Orbit Podcast, host Mathieu Glaude speaks with Gabriel Marquie, Digital Identity Lead at the International Air Transport Association, about how digital identity is becoming foundational infrastructure for modern aviation.

Trust Is the Backbone of Aviation

Aviation has always depended on trust. Long before the internet, airlines relied on shared rules and accreditation systems to sell tickets across borders and partner with carriers they had never met. Those systems worked well in closed networks, but the industry has since moved to open, API driven distribution models.

As Gabriel explains, modern airline retailing through NDC dramatically expands reach and innovation, but it also introduces new risk. Airlines must be able to verify who is selling their seats, not just once, but continuously across increasingly complex distribution channels.

Digital Identity for Airline Distribution

A central focus of the episode is the B2B identity challenge. Airlines remain responsible for the passenger experience even when tickets are sold through intermediaries. When something goes wrong, the airline bears the cost.

Digital identity and verifiable credentials allow airlines to build on existing accreditation frameworks and add real-time verification of travel agencies and even individual agency employees. This reduces friction, lowers operational cost, and creates a shared trust layer that benefits the entire ecosystem.

The episode also highlights practical progress, including industry pilots and demonstrations with leading airlines, showing how these identity frameworks can work in real operational environments.

From Airline Operations to the Passenger Journey

The conversation then expands to the passenger experience. Air travel involves airlines, airports, border agencies, and many other stakeholders, each with its own systems and requirements.

Gabriel explains how digital passports, biometrics, and tap-and-go experiences can reduce friction while supporting strong security and data minimization. Rather than repeatedly presenting full documents, travellers share only what is needed at each step of the journey.

This work requires close coordination between industry and governments, including alignment with International Civil Aviation Organization standards and large-scale pilots in regions like Europe.

What Comes Next

Air travel is expected to double over the next decade, making manual and paper-based processes increasingly unsustainable. Gabriel outlines IATA’s roadmap, including the Contactless Travel Directory and a shift from isolated pilots to coordinated global rollout.

Digital identity is no longer experimental in aviation. It is becoming essential infrastructure for scaling the industry while improving security, efficiency, and the passenger experience.

🎧 Listen to the full episode to hear how digital trust is being built across one of the world’s most complex global systems.

Additional resources:

Episode Transcript
IATA One ID Initiative
IATA New Distribution Capability (NDC)

About Guest

Gabriel Marquie is the Head of Digital Identity, Innovation, and Identity Services at the International Air Transport Association (IATA), where he leads the exploration, standardization, and adoption of digital identity across the global airline industry. Based in Geneva, he works closely with airlines, governments, and technology partners to modernize trust frameworks spanning airline distribution, passenger eligibility, and airport biometric processing. With a background in delivering large-scale, customer-facing SaaS solutions, Gabriel has played a key role in developing verifiable credential frameworks for airline distribution and advancing passenger digital identity standards. His work has supported solutions used by millions of travellers and adopted by leading airlines worldwide. Motivated by solving complex problems through technology, Gabriel focuses on translating emerging digital identity capabilities into practical, scalable systems that improve business operations while enhancing the passenger experience.

LinkedIn

The post How IATA is Building Trust for Nearly 400 Airlines (with Gabriel Marquie) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

paray

CA Gets the DROP on Data Brokers

Beginning in 2026, California residents can access the Delete Request and Opt-out Platform (DROP) and request data brokers delete their personal information – information collected from third parties or from consumers in a non-“first party” capacity (i.e., through an interaction where the consumer did not intend or expect to interact with the data broker). Under … Continue reading CA Gets the DROP on Data Brokers →

Spruce Systems

Why Privacy-Preserving Design Matters in Public Services

Public trust depends on how data is handled. This post explains privacy-by-design principles, selective disclosure, and why minimizing data exposure is just as important as securing it.

Public trust in government digital services is shaped less by what agencies say and more by how they handle data. Every form submission, document upload, and identity check is a moment where residents decide whether a service deserves their confidence.

Security is essential, but it is not sufficient. A system can be technically secure and still feel invasive if it collects more data than necessary or shares information too broadly. Privacy-preserving design addresses this gap by focusing not just on protecting data, but on limiting exposure in the first place.

For modern public services, privacy is not an optional feature. It is a core design requirement.

Privacy is about restraint, not secrecy

Privacy is often misunderstood as keeping data hidden. In practice, it is about restraint.

Privacy-preserving systems ask clear questions before collecting or sharing information. What data is required to complete this task? Who needs access to it? For how long? For what purpose?

When systems are designed without these constraints, data accumulates by default. Copies proliferate. Access expands beyond original intent. Even well-secured systems become risky simply because too much information is available in too many places.

Designing for privacy means building limits into the architecture so data use stays aligned with purpose over time.

Privacy by design starts at intake

The moment data enters a system is where privacy risks begin.

Many public services collect entire documents when only a small portion of the information is actually needed. A full document may include names, addresses, identifiers, or unrelated personal details that are not required for the transaction at hand.

Privacy by design shifts intake toward capturing only what is necessary. Modern document capture and validation tools can extract specific fields while preserving the original record for audit. Systems work with structured data instead of passing full documents through every downstream process.

This reduces exposure without reducing capability.
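As a rough sketch of this intake pattern, the snippet below copies only the fields a downstream workflow actually needs and leaves the rest behind at intake. The document shape and field names are hypothetical and not tied to any specific SpruceID product.

```typescript
// Illustrative sketch of privacy-preserving intake: keep only the fields a
// workflow needs, rather than passing the whole document downstream.
// Field names are hypothetical.

interface ParsedDocument {
  fullName: string;
  dateOfBirth: string;
  address: string;
  documentNumber: string;
  issuingAuthority: string;
}

// The downstream eligibility check only needs two fields.
type EligibilityRecord = Pick<ParsedDocument, "fullName" | "dateOfBirth">;

function toEligibilityRecord(doc: ParsedDocument): EligibilityRecord {
  // Explicitly copy the allowed fields; everything else stays at intake.
  return { fullName: doc.fullName, dateOfBirth: doc.dateOfBirth };
}

// Example usage with made-up values.
const intake: ParsedDocument = {
  fullName: "Jane Doe",
  dateOfBirth: "1990-05-01",
  address: "123 Main St",
  documentNumber: "X1234567",
  issuingAuthority: "DMV",
};
console.log(toEligibilityRecord(intake)); // { fullName: "Jane Doe", dateOfBirth: "1990-05-01" }
```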

Minimizing data reduces risk and complexity

Data minimization is one of the most effective privacy controls available. It is also one of the simplest in concept.

When systems collect less data, there is less to secure, less to audit, and less to misuse. Breach impact is reduced. Compliance becomes easier. User trust improves because interactions feel proportionate rather than intrusive.

Importantly, minimization also improves system performance. Smaller data sets are easier to share responsibly. Decisions can be automated more safely. Analytics become clearer when fields are well defined and intentional.

Privacy-preserving design aligns operational efficiency with ethical responsibility.

Selective disclosure supports better service delivery

Selective disclosure allows individuals to share only the information required for a specific interaction, rather than exposing an entire record.

For example, a resident may need to prove eligibility or age without revealing additional personal details. A business may need to demonstrate compliance without sharing unrelated internal data.

When systems support selective disclosure, services become easier to use and harder to abuse. Agencies receive exactly what they need, no more and no less. Residents gain confidence that participation does not mean loss of control.

Standards developed by organizations like the World Wide Web Consortium (W3C) support these patterns by enabling data to be shared in granular, verifiable ways that respect user privacy.
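To make the idea concrete, here is a minimal sketch assuming a simple over-18 check: the holder derives and shares a single boolean rather than the underlying birth date. This is a simplification for illustration, not a spec-compliant implementation of W3C Verifiable Credentials or any particular selective disclosure format; the type and function names are hypothetical.

```typescript
// Minimal sketch of selective disclosure: answer a request with only the claim
// that was asked for (here, an over-18 check) instead of the full record.

interface CredentialClaims {
  name: string;
  birthDate: string; // ISO 8601
  address: string;
}

interface AgePresentation {
  over18: boolean;   // derived claim; the birth date itself is never shared
}

function presentOver18(claims: CredentialClaims, now: Date = new Date()): AgePresentation {
  const birth = new Date(claims.birthDate);
  // Anyone born on or before this date is at least 18 today.
  const cutoff = new Date(now.getFullYear() - 18, now.getMonth(), now.getDate());
  return { over18: birth.getTime() <= cutoff.getTime() };
}

// The verifier receives only { over18: true }, nothing else.
console.log(presentOver18({ name: "Jane Doe", birthDate: "1990-05-01", address: "123 Main St" }));
```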

Privacy and security are complementary

Privacy-preserving design is sometimes framed as being in tension with security. In reality, they reinforce each other. Security focuses on protecting data from unauthorized access. Privacy focuses on limiting how much data is exposed even to authorized systems.

A system that minimizes data exposure reduces its attack surface. Fewer copies mean fewer points of failure. Narrow access scopes make misuse easier to detect.

Guidance from bodies such as the National Institute of Standards and Technology emphasizes that strong privacy controls are a key component of resilient security architectures, particularly in distributed and cloud-based environments.

Trust is built through predictable handling of data

Residents care deeply about how their information is used after submission. Uncertainty erodes trust quickly.

Privacy-preserving systems behave predictably. Data is collected for a stated purpose. Access is governed by clear rules. Retention is limited. Reuse requires consent or policy justification.

When people understand what will happen to their data and see that systems behave consistently, trust grows. When they are surprised by reuse or over collection, trust breaks down even if no breach occurs.

Predictability is as important as protection.

Enabling modern services without overreach

Modern public services depend on data sharing, automation, and interoperability. Privacy-preserving design makes these goals achievable without overreach.

By relying on structured data, selective disclosure, and purpose-based access controls, agencies can enable cross-program workflows while respecting boundaries. Services can be faster and more integrated without becoming surveillance-oriented.

This balance is essential as governments expand digital offerings and adopt more advanced technologies.

Privacy as a foundation for public trust

Public services exist to serve people, not to extract data from them. Privacy-preserving design reflects this principle in system architecture.

When agencies minimize data exposure, support selective disclosure, and design with purpose in mind, they send a clear signal of respect. That respect translates directly into trust.

Trust is what allows digital services to succeed at scale. It is earned not through statements or policies, but through systems that consistently handle data with care. Privacy-preserving design is how public services demonstrate that care from the inside out.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Elliptic

Stablecoins aren't the challenge. Bank readiness is

Following the passing of the GENIUS ACT, the path to stablecoin adoption continues to spark debate within the US banking sector. Some fear it will trigger unprecedented outflows, while others are embracing the technology and building their own stablecoin infrastructure or seeking new partnerships or collaborations.



Recognito Vision

Why 3D Spoofing Is No Longer the Top Facial Recognition Risk

For years, 3D face spoofing was seen as the biggest threat to facial recognition systems. Security teams worried about masks, molded faces, and printed replicas tricking biometric scanners. At the time, those concerns were valid because early systems relied on limited visual checks. Today, the risk landscape looks very different. Facial recognition technology has matured,...

For years, 3D face spoofing was seen as the biggest threat to facial recognition systems. Security teams worried about masks, molded faces, and printed replicas tricking biometric scanners. At the time, those concerns were valid because early systems relied on limited visual checks.

Today, the risk landscape looks very different. Facial recognition technology has matured, and attackers have shifted their focus. The most serious threats are no longer physical masks but digital manipulation, enrollment fraud, and system-level weaknesses that target identity workflows rather than cameras.

This shift is exactly why modern biometric security needs a new way of thinking. Recognito helps organizations move beyond outdated threat models by providing advanced facial recognition and anti spoofing technology built for today’s real risks.

In this blog, we explain what 3D face spoofing is, why it is no longer the primary concern, and what security teams should focus on instead when evaluating facial recognition systems.

What Is 3D Face Spoofing

3D face spoofing refers to attempts to trick facial recognition systems using physical replicas of a person’s face. These attacks aim to mimic depth and structure, making them more convincing than flat photos.

3D Masks

Wearable masks made from silicone or latex that attempt to replicate a real person’s facial features. Early biometric systems struggled to detect them due to limited depth analysis.

3D Printed Faces

Rigid facial models created using 3D printers based on stolen or scraped images. These replicas often fail to replicate natural movement and skin behavior.

Silicone Models

High-quality silicone faces designed to simulate realistic texture and contours. While visually convincing, they lack natural biological responses.

3D spoofing gained attention in early biometric systems because those systems focused mainly on facial shape rather than behavior, motion, or biological signals.

Why 3D Spoofing Is Less Effective Today

3D spoofing is becoming increasingly difficult as facial recognition technology evolves. Here are some modern techniques that advanced systems use to prevent 3D spoofing.

Advances in Anti Spoofing Technology

Modern anti-spoofing systems now confirm real human presence by analyzing subtle cues like micro-movements, depth variations, and natural facial responses. Liveness detection ensures that only genuine faces are recognized, blocking most mask- or model-based attacks.

Advanced models also perform depth and texture analysis to detect inconsistencies in light reflection, material composition, and skin texture that spoofing attempts cannot replicate. Additionally, motion and behavioral cues, such as facial dynamics, head movement, and response timing, reveal unnatural patterns, further strengthening system security.
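As a rough sketch of how such signals might be combined, the snippet below mixes depth, motion, and texture scores into a single liveness decision. The signal names, weights, and threshold are assumptions for illustration; production systems such as Recognito's rely on trained models rather than hand-tuned weights.

```typescript
// Hypothetical sketch of combining liveness signals into one decision.
// Signals are assumed to be normalized scores in the range 0..1.

interface LivenessSignals {
  depthConsistency: number; // from depth analysis
  microMovement: number;    // from frame-to-frame motion
  textureScore: number;     // skin vs. silicone/print texture
}

function isLiveFace(s: LivenessSignals, threshold = 0.7): boolean {
  // Weighted average of the individual signals; weights would be tuned per deployment.
  const score = 0.4 * s.depthConsistency + 0.3 * s.microMovement + 0.3 * s.textureScore;
  return score >= threshold;
}

// Example: a printed replica typically scores low on movement and texture.
console.log(isLiveFace({ depthConsistency: 0.9, microMovement: 0.1, textureScore: 0.2 })); // false
console.log(isLiveFace({ depthConsistency: 0.9, microMovement: 0.8, textureScore: 0.85 })); // true
```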

Smarter AI Security Models

Facial recognition today relies on adaptive AI rather than static rules. Models are trained on real spoofing attempts and continuously update as new techniques emerge, reducing false acceptance rates and improving detection accuracy over time. Independent testing, like NIST FRVT evaluations, ensures these AI-driven systems remain reliable and robust.

The Real Security Risks Facing Facial Recognition Today

The biggest threats to facial recognition no longer come from physical spoofing. Today, risks arise from digital manipulation and system-level vulnerabilities that can bypass traditional safeguards.

Presentation Attacks Beyond 3D Masks

Attackers are using increasingly sophisticated methods to fool systems. High-resolution video injection attempts to feed manipulated video streams into verification systems, while synthetic media and replay attacks reuse captured biometric data to bypass weak implementations.

Identity Injection and Enrollment Fraud

Many attacks now target the onboarding stage. Fraudulent identities can be injected during enrollment, and weak verification workflows allow fake profiles to become trusted system identities.

Infrastructure and Integration Weaknesses

Even the most accurate algorithms fail if systems are poorly designed. Exposed APIs, misconfigured deployments, and improper handling of biometric data create vulnerabilities that attackers can exploit.

How Modern Facial Recognition Addresses Today’s Risks

Modern facial recognition systems tackle today’s threats by combining advanced biometric checks, adaptive verification, and AI-driven fraud prevention to ensure stronger and smarter security.

Layered Biometric Security

Liveness detection combined with face scanning confirms both presence and identity. Contextual risk signals, like device behavior and session patterns, add an extra layer of protection.

Adaptive Identity Verification

Risk-based authentication adjusts security levels based on user behavior. Ongoing verification monitors activity throughout the session to detect misuse.

AI-Driven Fraud Prevention

Pattern detection identifies suspicious behavior across sessions. Cross-session analysis helps stop long-term fraud strategies.

Key Considerations When Evaluating Facial Recognition Security

Choosing the right system goes beyond feature comparison. You need to consider anti-spoofing, system design, and privacy safeguards.

Anti Spoofing Depth

Check if the system uses multiple liveness detection methods, like depth sensing, micro-movements, or behavioral cues, to prevent spoofing.

System Architecture

Understand how on-device versus server-side processing works and how data flows, ensuring security and efficiency.

Privacy and Compliance

Systems must respect consent and regulations like GDPR. ENISA guidance also highlights layered identity security for stronger protection.

Facial Recognition Security by Recognito

Recognito delivers facial recognition solutions designed for modern threat environments. Our technology focuses on liveness detection, adaptive verification, and real fraud prevention rather than surface-level matching.

You can explore technical resources and implementation examples through our GitHub repository or contact us to discuss secure deployment tailored to your needs.

Conclusion

The facial recognition threat landscape has evolved. While 3D face spoofing once dominated concerns, it is no longer the primary risk. Today’s threats target enrollment processes, digital workflows, and system integration.

A modern biometric security approach requires layered defenses, adaptive AI, and informed risk assessment. By focusing on real threats rather than outdated fears, organizations can deploy facial recognition with confidence.

Frequently Asked Questions

Is 3D face spoofing still a threat

It exists, but modern liveness detection can reliably spot spoof attempts using depth, motion, and facial cues.

What is the biggest risk in facial recognition today

Enrollment fraud and system-level vulnerabilities pose greater risk than physical spoofing.

How does anti spoofing prevent fraud

It confirms real human presence using depth, motion, and behavioral signals.

Is facial recognition secure for identity verification

Yes, when supported by strong architecture and compliance controls.

What industries face the highest facial recognition risk

Finance, travel, education, and digital platforms handling remote identity verification.


Herond Browser

Introducing Mobile Herond Browser: Redesigned for Modern Browsing

Mobile Herond Browser introduces a fresh way to browse privately and securely from your phone, wherever you are. The post Introducing Mobile Herond Browser: Redesigned for Modern Browsing appeared first on Herond Blog.

Mobile Herond Browser introduces a fresh way to browse privately and securely from your phone, wherever you are. Built as a privacy‑first browser, Mobile Herond Browser automatically blocks intrusive ads, harmful trackers, and profiling scripts that follow you across the web, helping keep your personal data safe by default. With features like encrypted connections, warnings for untrusted sites, and secure password management, it turns your everyday browsing, whether on public Wi‑Fi or mobile data, into a safer, more protected experience.

This new mobile concept is designed for users who want both strong privacy and smooth performance on the go. A lightweight interface, fast page loading, and tools like incognito mode and IP masking make it easy to stay anonymous without sacrificing speed or convenience. Combined with Mobile Herond Browser’s Web3‑ready foundation, the mobile browser aims to be your secure gateway not only to traditional websites but also to the next generation of internet apps and services, so you can explore more while exposing less.

What’s New

Redesigned Navigation & Address Bar

Our new navigation and address bar is cleaner, more responsive, and intelligently adapts to your browsing behavior. Finding what you need and getting where you want to go is now faster and more intuitive than ever.

New Tab Page: Your Fresh Start

Every new tab now opens to a beautifully redesigned page that helps you jump right into what matters. With a clean, modern layout and quick access to your most-visited sites, bookmarks, and suggestions, you’ll spend less time searching and more time browsing.

Enhanced Tab Manager

Managing multiple tabs on mobile has never been easier. Our redesigned tab manager features visual improvements that make it simple to see all your open tabs at a glance, organize them logically, and switch between them seamlessly, even when you have dozens open.

Modernized Menu & Settings

We’ve simplified our menu and settings interface to put your most-used tools front and center. The new layout reduces clutter, speeds up access to key features, and makes customizing your browser experience straightforward and intuitive.

Refreshed Onboarding Experience

First impressions matter. New users will now enjoy a smoother, more engaging onboarding process that introduces Mobile Herond Browser’s privacy features and customization options without overwhelming them. Getting started with Mobile Herond Browser has never been easier.

The Result: Mobile Browsing That Just Works

These updates represent our commitment to delivering a mobile browser that respects your time, protects your privacy, and adapts to how you actually use your phone. Every design decision was made with one goal in mind: making your browsing experience better.

Download or update Mobile Herond Browser today and experience the difference. As always, we’d love to hear your feedback as you explore the new features.

The redesigned Mobile Herond Browser represents our vision of what mobile browsing should be – intuitive, efficient, and built around you. Every improvement, from redesigned navigation to enhanced tab manager, was crafted with one goal: making your daily browsing experience smoother and more enjoyable. Update today and discover how much better mobile browsing can be. As always, your feedback drives our innovation, so share your thoughts as you explore the new features.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Introducing Mobile Herond Browser: Redesigned for Modern Browsing appeared first on Herond Blog.


Mobile Herond Browser: A New Privacy‑First Concept Coming Soon

The new Mobile Herond Browser will block intrusive ads and trackers and safeguard your personal data The post Mobile Herond Browser: A New Privacy‑First Concept Coming Soon appeared first on Herond Blog.

Herond Browser is preparing to launch a brand‑new mobile browser concept built from the ground up for privacy‑first browsing on Android and iOS. The new Mobile Herond Browser will block intrusive ads and trackers and safeguard your personal data. With a clean, modern interface and performance‑optimized engine, it aims to deliver fast, smooth, and secure browsing for everyday use, Web3 dApps, and everything in between.

Unlike traditional browsers that treat privacy as an optional add‑on, Mobile Herond Browser puts protection at the center of the experience. The upcoming release will integrate advanced tracking prevention, built‑in ad blocking, and smart security alerts to help you avoid malicious or suspicious sites. Combined with features like private tabs, cross‑device sync, and tools tailored for the next generation of the internet, the new Mobile Herond Browser is set to redefine safe, private browsing.

Mobile Herond Browser is set to bring a new standard of privacy, speed, and control to everyday browsing. By combining powerful tracker‑blocking, built‑in ad protection, and a modern, performance‑focused design, it gives you a cleaner, safer way to explore the web wherever you are. As this privacy‑first concept rolls out to mobile, users will be able to enjoy a seamless experience that protects their data by default, so staying secure online becomes effortless rather than an extra chore.

The future of mobile browsing is privacy by design, not privacy by compromise. Mobile Herond Browser represents more than just another app in the store. It’s a commitment to giving users back control over their digital lives. As we prepare for launch on Android and iOS, we’re building a browser that doesn’t ask you to choose between convenience and security. Stay tuned for updates as we get ready to put true privacy-first browsing in the palm of your hand. Your data belongs to you – it’s time your mobile browser reflects that.

Conclusion

The future of mobile browsing isn’t about choosing between privacy and convenience; it’s about having both, effortlessly. Herond Mobile Browser is built to prove that you shouldn’t have to sacrifice security for speed, or accept tracking as the cost of staying connected. As we approach launch, we’re committed to delivering a mobile experience that puts you first, protects you by default, and works the way you deserve. Your phone is personal. Your browser should be too. Stay tuned – privacy-first mobile browsing is coming soon.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Mobile Herond Browser: A New Privacy‑First Concept Coming Soon appeared first on Herond Blog.


PingTalk

The Road to PSD3/PSR1 Compliance and the Role of Identity

Learn how PSD3 transforms payments and why modern identity, fraud prevention and API security are essential for financial institutions to achieve compliance.  

PSD3 represents the most significant overhaul of Europe’s payments regime since PSD2, with fraud prevention and identity assurance now central regulatory pillars.

Enforcement begins from 2026–2028, meaning 2026–2027 is the critical window for architectural decisions and capability consolidation.

Compliance scope widens substantially: stronger SCA, real-time fraud monitoring, API hardening, recovery assurance, delegated entitlements, and alignment with eIDAS 2.0.

Converged, identity-centric IAM is essential to achieving PSD3/PSR1 readiness at speed and scale.

Financial institutions that treat PSD3 as an innovation catalyst, not a box-ticking exercise, will gain lasting competitive advantage.


FastID

Beyond CRUD: Advanced Features of Fastly’s KV Store

Go beyond CRUD with Fastly’s KV Store. Use metadata, generation markers and TTL to build faster, safer edge applications.

Saturday, 03. January 2026

Thales Group

The Third COSMO-SkyMed Second Generation Satellite Successfully Launched


This state-of-the-art Italian dual-use radar Earth observation constellation, owned by the Italian Space Agency and the Italian Ministry of Defense, offers the highest level of performance in terms of security and sustainability
 

Rome, January 3, 2026 – The third satellite part of the COSMO-SkyMed Second Generation (CSG) constellation, owned by the Italian Space Agency and the Italian Ministry of Defense,  built by Thales Alenia Space, a joint venture between Thales (67%) and Leonardo (33%) and operated in orbit by Telespazio, a joint venture between Leonardo (67%) and Thales (33%), has successfully been launched from Vandenberg Space Force Base in California (USA), aboard a SpaceX Falcon 9 rocket.

59 minutes after separation from its launcher, the satellite’s signal was acquired and controlled by Telespazio's Fucino Space Center, located in Abruzzo, Italy. The nominal LEOP (launch and early orbit phase) will last approximately 9 days.

© Thales Alenia Space

Massimo Claudio Comparini, Managing Director of Leonardo’s Space Division, stated: “Each COSMO-SkyMed launch represents a significant achievement for the Italian national space system and its supply chain. The program, developed to meet the requirements of the Italian Space Agency and the Italian Ministry of Defense, reflects the technological and industrial excellence led by Leonardo together with the joint ventures Thales Alenia Space, Telespazio and e-GEOS. Earth observation and the data it provides are a strategic asset for security and sustainability, enabling increasingly targeted and timely services and interventions. This commitment strengthens Italy’s role in space and helps generate value for the country and the international community.”

Giampiero Di Paolo, CEO of Thales Alenia Space in Italy and Senior Vice President Earth Observation, Exploration and Navigation, commented: “Being responsible for the overall COSMO-SkyMed Second Generation program, Thales Alenia Space is extremely proud of this successful launch, which further demonstrates the company’s excellence in radar technology and highlights the dedication of our teams. Once fully deployed with its four satellites, it will provide substantial technological and performance progress, strengthening Thales Alenia Space’s global leadership in space-based Earth observation infrastructure.”

COSMO-SkyMed (Constellation of Satellites for the Mediterranean basin Observation) is a dual-use Earth observation constellation owned by the Italian Space Agency (ASI) and the Italian Ministry of Defense. Regarding the development of the constellation, the Italian industry plays a leading role with Leonardo and the joint ventures Thales Alenia Space, Telespazio and e-GEOS, plus with a significant number of small and medium-sized enterprises.

This third Second Generation satellite, built by Thales Alenia Space like the other satellites in the constellation, will guarantee the operational continuity of radar (SAR, Synthetic-Aperture Radar) services, further enhancing the already high performance of the system in terms of image quality and area coverage. With a third satellite in orbit, Cosmo SkyMed Second Generation, using the latest technologies and engineering solutions, is progressively replacing the first-generation system, which features four satellites including two operational to date. The new generation increases the overall performance of the system and significantly expands the range of applications offered, given the final configuration of four satellites. The entire system, including the ground segment, is setting the performance standard for space-based radar observation systems in terms of accuracy, image quality and flexible user services.

Over the years, the data obtained by the COSMO-SkyMed system have provided fundamental information for environmental and territorial monitoring, security, and emergency management. Since the launch of the first COSMO-SkyMed satellite in 2007, about 4.3 million images have been acquired by the satellites and archived.

As a mission participating in the European Copernicus program, COSMO-SkyMed provides images of great importance to the European Commission’s Emergency Rapid Mapping service, also operated by e-GEOS, which delivers satellite maps of areas affected by natural disasters or humanitarian crises within a matter of hours.

Industry role

Regarding COSMO-SkyMed, Italian industry plays a leading role, with Leonardo, Thales Alenia Space and Telespazio, together with a significant number of small and medium-sized enterprises.

Thales Alenia Space is responsible for the entire Second-Generation COSMO-SkyMed program, including satellite development and manufacturing, as well as the design, integration and commissioning of the end-to-end system.  

Telespazio is responsible for the design and development of the CSG ground segment and the provision of Integrated Logistics and Operations Services. Telespazio's Fucino-based Space Centre, from where the first telemetry data sent by the satellite has been acquired, will manage all the satellite's Launch and Early Orbit Phase (LEOP) up to IOT, Commissioning and Routine Phases.

Leonardo contributes to the program by providing attitude control equipment, as well as state-of-the-art units for the management and distribution of electrical power.

COSMO-SkyMed data is marketed worldwide by e-GEOS - a company jointly owned by the Italian Space Agency (20%) and Telespazio (80%) - which holds exclusive commercialization rights.
e-GEOS processes COSMO-SkyMed data for the development of applications and operational services including support for emergency management, security, infrastructure monitoring, maritime traffic management, precision agriculture, and the monitoring of natural resources and ecosystems.

About Leonardo

Leonardo is an international industrial group, among the main global companies in Aerospace, Defence, and Security (AD&S). With 60,000 employees worldwide, the company approaches global security through the Helicopters, Electronics, Aeronautics, Cyber & Security and Space sectors, and is a partner on the most important international programmes such as Eurofighter, JSF, NH-90, FREMM, GCAP, and Eurodrone. Leonardo has significant production capabilities in Italy, the UK, Poland, and the USA. Leonardo utilises its subsidiaries, joint ventures, and shareholdings, which include Leonardo DRS (71.4%), MBDA (25%), ATR (50%), Hensoldt (22.8%), Telespazio (67%), Thales Alenia Space (33%), and Avio (19.3%). Listed on the Milan Stock Exchange (LDO), in 2024 Leonardo recorded new orders for €20.9 billion, with an order book of €44.2 billion and consolidated revenues of €17.8 billion. Included in the MIB ESG index, the company has also been part of the Dow Jones Sustainability Indices (DJSI) since 2010. www.leonardo.com

About Thales Alenia Space

Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental monitoring, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the Space Alliance, which offers a complete range of solutions including services. Thales Alenia Space posted consolidated revenues of €2.23 billion in 2024 and has more than 8,100 employees in 7 countries with 14 sites in Europe.  www.thalesaleniaspace.com

About Telespazio

Telespazio, a Leonardo and Thales 67:33 joint venture, is one of the world’s leading operators in space services. Its activities range from the design and development of space systems to the management of launch services and in-orbit satellite control, from Earth observation, integrated communications, satellite navigation and localisation services to scientific programmes. The company plays a leading role in the reference markets, supported by its infrastructure and the technological experience acquired in over 60 years of activity, which include participation in space programmes such as Galileo, EGNOS, Copernicus, COSMO-SkyMed and Moonlight. Telespazio, which is Thales Alenia Space’s partner in the “Space Alliance”, generated sales of EUR 750 million in 2024 while employing 3,300 people in 15 different countries. www.telespazio.com


Friday, 02. January 2026

Spruce Systems

Secure by Design: Building Systems That Assume Breach

Modern government systems must assume compromise and design accordingly. This article covers encryption, device trust, least-privilege access, and how to build systems that remain safe even when parts fail.

For decades, government systems were designed around a simple assumption: keep attackers out and everything inside will be safe. Strong perimeters, trusted networks, and restricted access were the primary defenses.

That assumption no longer holds.

Modern government systems operate across cloud platforms, mobile devices, third-party services, and interagency integrations. Users connect from anywhere. Data moves constantly. In this environment, compromise is not a question of if, but when.

Secure by design starts from this reality. Instead of trying to prevent every breach, systems are built to remain safe even when parts fail.

What it means to assume breach

Assuming breach does not mean giving up on security. It means designing systems that expect components to be compromised and limiting the damage when that happens.

This mindset changes how systems are built. Trust is not implicit. Access is not permanent. Sensitive data is protected even from systems that appear legitimate.

When breach is assumed, security becomes a continuous property of the system rather than a single control at the edge.

Encryption as a baseline, not a feature

Encryption is one of the most fundamental tools in an assume breach model.

Data must be encrypted in transit so it cannot be intercepted or altered as it moves between systems. It must also be encrypted at rest so that a compromised database or storage service does not immediately expose sensitive information.

Crucially, encryption should be applied consistently and automatically. It should not depend on individual teams remembering to enable it or on special handling for sensitive records.

When encryption is treated as a baseline, breaches become containment events rather than catastrophic failures.
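As a minimal sketch of what baseline encryption at rest can look like in application code, the snippet below uses Node's built-in crypto module with AES-256-GCM. Key management, rotation, and envelope encryption are deliberately out of scope here; in practice those duties usually sit with a KMS, and this is not a description of any specific agency architecture.

```typescript
// Minimal sketch of encryption at rest with Node's built-in crypto (AES-256-GCM).
// Key handling is simplified for illustration; real systems delegate it to a KMS.

import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encryptRecord(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // unique per record
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() }; // store all three together
}

function decryptRecord(record: { iv: Buffer; ciphertext: Buffer; tag: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, record.iv);
  decipher.setAuthTag(record.tag); // authenticate as well as decrypt
  return Buffer.concat([decipher.update(record.ciphertext), decipher.final()]).toString("utf8");
}

// Example usage with a randomly generated 256-bit key.
const key = randomBytes(32);
const stored = encryptRecord("sensitive field value", key);
console.log(decryptRecord(stored, key)); // "sensitive field value"
```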

Least privilege limits blast radius

Assume breach design recognizes that credentials will be stolen, accounts will be misused, and systems will behave unexpectedly.

Least privilege access limits what an attacker can do when that happens.

Each user, service, and application is granted only the permissions required to perform its specific function. Access is scoped narrowly and reviewed regularly. Temporary access is preferred over permanent entitlements.

This approach reduces blast radius. A compromised account cannot access unrelated systems. A misconfigured service cannot read more data than necessary. Investigations are easier because access patterns are clearer.

Least privilege is one of the most effective ways to reduce risk without impacting usability when implemented thoughtfully.
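A small sketch of the deny-by-default pattern this describes, with scope strings and expiry dates chosen purely for illustration:

```typescript
// Illustrative sketch of least-privilege checks: each principal carries a
// narrow, explicit scope list, and anything not granted is denied by default.
// Scope names and identifiers are hypothetical.

interface Principal {
  id: string;
  scopes: Set<string>; // e.g. "records:read:benefits"
  expiresAt?: Date;    // temporary access is preferred over permanent entitlements
}

function isAllowed(principal: Principal, requiredScope: string, now: Date = new Date()): boolean {
  if (principal.expiresAt && principal.expiresAt.getTime() <= now.getTime()) return false; // access lapsed
  return principal.scopes.has(requiredScope); // deny anything not explicitly granted
}

const eligibilityService: Principal = {
  id: "svc-eligibility",
  scopes: new Set(["records:read:benefits"]),
  expiresAt: new Date("2026-06-30T00:00:00Z"),
};

console.log(isAllowed(eligibilityService, "records:read:benefits"));  // true
console.log(isAllowed(eligibilityService, "records:write:benefits")); // false
```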

Device trust and context matter

In modern systems, identity alone is not enough. Context matters.

Assume breach architectures consider the device and environment from which a request is made. Is the device managed or unmanaged? Is it running current security updates? Is the access pattern consistent with normal behavior?

By incorporating device trust and contextual signals, systems can adapt dynamically. High-risk requests can trigger additional verification. Low-risk interactions can proceed smoothly.

This reduces reliance on static rules and makes systems more resilient to evolving threats.
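To illustrate, here is a hedged sketch of how contextual signals might feed an allow, step-up, or deny decision. The signal names and thresholds are assumptions for illustration, not a reference policy.

```typescript
// Hypothetical sketch of context-aware access decisions: combine device and
// behavioral signals, and step up verification when risk is elevated.

interface RequestContext {
  deviceManaged: boolean;
  osPatched: boolean;
  unusualLocation: boolean;
  unusualTime: boolean;
}

type Decision = "allow" | "step-up" | "deny";

function evaluate(ctx: RequestContext): Decision {
  let risk = 0;
  if (!ctx.deviceManaged) risk += 2;
  if (!ctx.osPatched) risk += 1;
  if (ctx.unusualLocation) risk += 2;
  if (ctx.unusualTime) risk += 1;

  if (risk >= 4) return "deny";    // too many red flags
  if (risk >= 2) return "step-up"; // require additional verification
  return "allow";
}

// A managed, patched device behaving normally passes without interruption.
console.log(evaluate({ deviceManaged: true, osPatched: true, unusualLocation: false, unusualTime: false }));
```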

Designing for failure, not perfection

Failures will happen. Networks will drop. Services will misbehave. Credentials will be compromised. Secure by design systems are built to degrade safely.

This means isolating components so failures do not cascade. It means validating inputs even from internal systems. It means logging and monitoring activity so anomalies are detected quickly.

Rather than assuming every component behaves correctly, systems verify continuously and recover gracefully.

Why this matters for government services

Government systems support critical functions and handle highly sensitive data. The impact of breaches extends beyond financial loss to public trust and safety.

Assume breach design aligns directly with established federal security models, including Zero Trust architectures defined in NIST SP 800-207, identity assurance frameworks in NIST SP 800-63, and system control requirements in NIST SP 800-53. Together, these frameworks emphasize continuous verification, least privilege, and explicit trust boundaries rather than perimeter-based security.

For government systems, this approach prioritizes resilience and accountability over superficial controls.

Security that supports, not blocks, delivery

One fear agencies often have is that stronger security will slow services down. In practice, secure-by-design systems often enable faster delivery.

When trust decisions are automated and embedded into the architecture, users are not repeatedly interrupted. Staff are not forced into manual checks for routine actions. Systems can integrate more easily because trust boundaries are explicit.

Security becomes an enabler rather than an obstacle.

Building confidence through resilience

Public trust depends not just on preventing incidents, but on how systems behave when things go wrong.

Systems that assume breach protect data even under stress. They fail in predictable ways. They recover quickly. They provide clear audit trails and accountability.

This resilience builds confidence among users, staff, and leaders alike.

Secure by design as a long-term strategy

Secure by design is not a checklist. It is a philosophy that guides decisions across architecture, development, and operations.

By assuming breach and designing accordingly, government agencies can build systems that are safer, more adaptable, and better aligned with how digital services actually operate today.

In a world where compromise is inevitable, resilience is what makes modern government systems trustworthy.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


IDnow

Why 10% of European bank brands have either closed or consolidated in 2 years, and why even more will be closing by 2027.

A patchwork of regulations is putting huge economic pressure on financial services. Will a more harmonized, pan-European approach to identity verification and AML make operations easier? Or will the associated costs of compliance make it worse?

As of June 2025, there were approximately 4,752 bank brands operating in the European Union; 10% fewer than in 2023 (5,304), which itself saw a 2.9% decline on the previous year.  

The reduction in numbers can be explained by several factors, but is often linked to a bank’s resilience in remaining efficient and/or profitable amid market shifts and technological changes, as well as its ability to offer the services – and experiences – that consumers expect.

“The challenges associated with operating in Europe include the need to provide a wide range of identity verification methods, process hundreds of different identity document types, and comply with national interpretations of the most robust of regulatory frameworks,” said Uwe Pfizenmaier, Director of Product at IDnow.

To remain profitable and efficient in 2026 and beyond is no easy feat, which is why many banks stay in an almost constant flux of structural or digital transformation or decide to consolidate with other brands.

Under [regulatory] pressure.

While regulations are designed to perform several key functions, including protecting consumers, and maintaining the proper functioning of financial markets, they also cause major economic pressures for banks.  

For example, Europe’s largest institutions spend, on average, €14.5 million per year to remain compliant with AML and KYC requirements. Plus, with 2024’s introduction of AMLD 6, costs have increased dramatically. In fact, a PWC study found that over half of financial institutions have seen their AML compliance costs rise by more than 10% over the last two years. 

Failure to comply with AML can be catastrophic, with minimum fines for serious breaches doubling from €5 million to €10 million. Some banks escape with fines; others aren’t as lucky and can even have their licences revoked. 

Alongside long-established regulations like the Anti-Money Laundering Directives (AMLD), there are also newly created ones, such as the Digital Operational Resilience Act (DORA), which aims to strengthen the IT security of financial entities such as banks, insurance companies and investment firms, and to make sure that the financial sector in Europe stays resilient in the event of a severe operational disruption. While compliance with DORA can cost a bank up to €1 million, the cost of non-compliance far outweighs that, with financial sanctions of 2% of annual global turnover or 1% of average daily global turnover.

Considering the financial risk, compliance with new and existing regulations is understandably a chief focus for European banks right now. In fact, on average, many financial services allocate over 10% of their budgets to new compliance technologies and tools. 

While for the consumer it has never been easier to register and start their customer journey, via a range of identity verification methods, for banks, the road to ensuring that experience remains compliant throughout Europe is far from straightforward, and one only likely to become bumpier and more expensive – for the next two years anyway. After 2027, however, operating in the EU will become decidedly easier and more efficient – in theory. But buckle up banks, it is likely to get worse before it gets better…

How did financial services become so fragmented?

Before we look ahead, let us first consider how Europe’s financial services became so fragmented. In 2014, in a bid to facilitate secure, cross-border online financial transactions, the European Union enacted the Electronic IDentification, Authentication and Trust Services (eIDAS) regulation. 

However, despite the best of intentions, it was widely accepted that the first iteration of the regulation failed in its mission; by 2020, only 60% of EU citizens had access to a trusted identification system, while adoption and usage were even lower. It also became apparent that the interoperability of national services and infrastructures was not sufficient. As such, it proved difficult for banks to use it as an international Know Your Customer system. 

In 2025, not only does each EU member state have their own national interpretation of AML rules and therefore enforcement standards, but they also invariably adhere to different regulatory requirements for remote identity verification processes. For example, while some European regulatory bodies accept fully automated identity verification, others such as the German BaFin have historically required all new financial services customers to undergo either in-person or video verification as part of the onboarding process. However, a 2024 draft bill has laid the way for more automation in the identity verification process, which would bring German identity verification processes more in line with other EU member states. 

Clearly, consistency is key to any meaningful impact in the European fight against fraud and money laundering. However, the fact remains that in 2025, the EU is effectively stitched together with a patchwork of national regulations and processes that threaten the fabric of financial services.


Bringing the EU together in regulatory harmony by 2027.

The EU has devised three key regulations, bodies, and initiatives that it hopes will create a more harmonized approach to AML efforts and remote identity verification. Each promotes cross-border digital identity verification and trust services, simplifies customer onboarding and KYC, and boosts secure digital transactions for banks and consumers. All will be effective across the EU by late 2027/early 2028.

eIDAS 2.0: The second iteration of eIDAS, eIDAS 2.0 establishes the European Digital Identity (EUDI) framework and mandates all member states to offer EUDI Wallets to citizens by November 2026. It also addresses weaknesses in the original regulation, such as data protection, and introduces new trust services, including data preservation/archiving services and Qualified Electronic Attestations of Attributes (QEAA), which will allow users to share verified credentials across a range of use cases. Banks will need to accept said Wallets for user onboarding and authentication by November 2027.

Anti‑Money Laundering Authority (AMLA): By creating a coherent AML framework for member states to follow, the newly formed EU authority hopes to strengthen the fight against money laundering and terrorism financing. It also creates a common supervisory culture at EU level. The AMLA will be fully operational (with a staff of 430) and begin direct supervision by 2028.

The Anti-Money Laundering Regulation (AMLR): Perhaps the boldest of all upcoming regulations and initiatives, AMLR creates one set of rules across the EU and establishes stronger checks and rules for KYC, monitoring, and Customer Due Diligence. It also essentially standardizes and distils compliant identity verification into three methods:
 
a) EUDI Wallet: Consumers can use their EUDI Wallet for authentication for KYC purposes and compliant AML onboarding. 
 
b) Notified eID Schemes: Consumers will be able to use their national electronic identification (eID) scheme (including smartcards, mobile and log-in) along with all other eID Schemes in Europe. 
 
c) Qualified Trust Services: Here, consumers use Qualified Electronic Signatures (QES) and Qualified Electronic Attestations of Attributes (QEAA) for compliant automated and hybrid identity verification. 
 
AMLR will replace the current national directive-based approach to AML efforts by July 2027. 

Based on the European Commission’s impact assessment study, streamlined, consistent onboarding procedures for financial services could generate annual savings of between €860 million and €1.7 billion, while enhanced fraud prevention measures could yield additional savings ranging from €1.1 billion to €4.3 billion per year.

But wait, there’s a costly catch.

However, it’s worth noting that the same study also predicts significant implementation costs to ensure compliance with AMLR and eIDAS 2.0. While it’s difficult to know the exact amount that individual banks will need to outlay to become compliant (as it depends on size and scale), the EU Commission estimates eIDAS 2.0 implementation costs to be north of €3.2 billion.

Top 5 financial services compliance challenges.

To prepare for AMLR and eIDAS 2.0, European banks will encounter substantial costs for system upgrades, staff training, and compliance preparation. As such, banks must rethink their technology stacks and risk models, compliance procedures, and customer experience strategies. Here are the top costs that banks need to prepare for.

1. Banks will need to integrate 27+ national identity wallets, each requiring separate registration.

2. Each Wallet will have its own diverse APIs and data formats, which will increase integration complexity and associated costs (one way to contain this is sketched after this list).

3. There will also be significant development and maintenance costs, regardless of whether systems are built in-house or supplied by multiple different vendors.

4. Although only the EUDI Wallet is mandatory, for banks that wish to provide each of the three compliant identification methods (Qualified Trust Services, eID Schemes, Wallets), they will face a considerable cost.

5. Ongoing data checks and controls.
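One common way to contain the integration complexity mentioned above (an assumption of this sketch, not something prescribed by AMLR or eIDAS 2.0) is to hide each wallet's API behind a single internal interface and register one adapter per wallet. The interface, class, and method names below are hypothetical and not based on any specific wallet's published API.

```typescript
// Hedged sketch of a wallet-adapter pattern for onboarding flows.
// Each national EUDI Wallet gets one adapter; the rest of the bank's
// systems only ever talk to the registry.

interface WalletAdapter {
  walletId: string; // e.g. "de-eudi-wallet" (illustrative identifier)
  requestIdentityAttributes(attributes: string[]): Promise<Record<string, string>>;
}

class WalletRegistry {
  private adapters = new Map<string, WalletAdapter>();

  register(adapter: WalletAdapter): void {
    this.adapters.set(adapter.walletId, adapter);
  }

  // Onboarding calls this one method regardless of which member state's
  // wallet the customer presents.
  async verifyCustomer(walletId: string, attributes: string[]): Promise<Record<string, string>> {
    const adapter = this.adapters.get(walletId);
    if (!adapter) throw new Error(`No adapter registered for ${walletId}`);
    return adapter.requestIdentityAttributes(attributes);
  }
}
```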

“Whereas in the past, European banks faced challenges associated with fragmented regulations they now face significant technical and user experience challenges, which can be effectively addressed with the right solutions that needn’t cost the earth. As a Qualified Trust Service Provider, IDnow uniquely supports all three required identification methods through our AI-driven platform — enabling businesses to transform compliance from a challenge into a trust-building opportunity that drives sustainable growth,” said Uwe. 

To learn more about how IDnow can support your AMLR compliance journey, contact our team of experts today.

By

Jody Houton
Senior PR & Content Manager at IDnow
Connect with Jody on LinkedIn

Wednesday, 31. December 2025

Holochain

2025 at a Glance: Landing Reliability

Blog

Looking back at 2025, I find myself reflecting on a year that was less about dramatic new features and more about something arguably harder: making what we have actually work reliably. This was the year of focusing on stability for Holochain.

The Foundation Underneath

It's easy to forget now, but we only integrated  Kitsune2 this year. If you experienced Holochain before this transition, you'll remember the frustration: peer discovery that didn't quite discover, sync that sometimes just... didn't. DHT synchronization that could take 30 minutes or more—if it completed at all.

Kitsune2 changed that. Not through clever optimization tricks, but through a fundamental rethinking of how we approach network reliability. The result: synchronization dropped to about a minute or faster in most cases. More importantly, it became predictable. Data shows up. Peers find each other. The network does what a network should do.

This might just seem like baseline functionality, but anyone who has tried to build distributed systems knows that reliable peer-to-peer networking is genuinely hard. Getting it right matters, getting it fast comes next.

Validation That Validates

Alongside Kitsune2, we completed a first pass on our workflow updates, particularly around validation and integration. The validation pipeline had accumulated enough edge cases and failure modes that behavior was, to put it charitably, inconsistent. We went through it systematically—not adding features, but making the existing logic do what it was supposed to do.

The warrants feature represents the logical completion of this work. Holochain has always validated data, but until now there were no real consequences when validation failed. With warrants, bad actors get blocked at the network transport level, and anyone querying a compromised agent's public key receives the warrant. The immune system, as we're calling it, is now functional. It’s not complete (membrane proof enforcement during network handshaking is still coming) but it’s functional.

Seeing What We're Building

Three pieces of infrastructure came into their own this year, each essential for building with confidence:

Wind Tunnel reached production readiness. For the first time, we can run real-world application tests across as many nodes as we need, with comprehensive metrics from both the OS and conductor levels. The framework now runs scheduled automated tests, executes scenarios in parallel, and gives us actual data about how Holochain behaves under stress. Several bugs we caught this year only manifested under load—the kind of issues that would have been mysterious failures in production. Now we see them before you do.

The Build Guide fills a gap that, honestly, we should have addressed sooner. A year ago, Holochain documentation consisted of API references in cargo docs and READMEs scattered across repositories. Now there's a comprehensive, beginner-friendly guide that walks through every feature of the framework with working code snippets. If you haven't looked at it recently, take another look. It's substantively different from what came before.

The Roadmap represents a commitment we made this year to transparency about where we are and where we're going. It's not a marketing document—it's a live view into our backlog, our in-progress work, and our predictions about completion based on actual velocity data. You can see what we're working on, what's queued up, and how our estimates compare to reality over time. If you want to understand our priorities or hold us accountable to them, this is where to look.

The Long List

Beyond the headline items, there's been a steady stream of fixes across tx5, K2, Holochain core, and our tooling. Connection handling that gracefully re-establishes after failure. DHT sync sessions that correctly calculate all missing data. Request management that handles high volume gracefully. WebRTC library flexibility for deployment scenarios that need it.

None of these individually sounds dramatic. Collectively, they represent a codebase that behaves the way developers expect it to.

The People Who Make This Real

No framework matters without people building on it, and I want to acknowledge some of the ecosystem partners whose work this year demonstrates what Holochain makes possible.

Volla continues pushing toward privacy-respecting mobile computing. Unyt is building value accounting infrastructure for decentralized organizations. Coasys is developing ADAM and Flux, showing what semantic interoperability looks like when you take it seriously. Sensorica keeps advancing open value network infrastructure, bringing Valueflows and hREA to bear on real peer production challenges. The Carbon Farm Network is applying these tools to regenerative agriculture and climate-beneficial fiber sourcing. Arkology Studio is building a Data Commons Stack for enhanced collective sensemaking. HummHive continues to build on Holochain and deploy on Holo to help people create and share stories with choice. And DADA explores the edge where art, value exchange, and the collective meet.

On the media and discourse side, hAppenings serves as a crucial connector and information resource for the broader community, making the ecosystem legible to newcomers and advocates alike. The Entangled Futures podcast has been exploring questions of mutuality and collective action—the philosophical territory that gives this technical work its meaning. And this year we participated in the DWeb Seminar, engaging with researchers and builders across the peer-to-peer space on the challenges that still lie ahead.

These are a few among many. The value of what we're building comes from the network of people using it, extending it, and imagining what it could become.

What This Means

We made a deliberate choice this year to limit Holochain to features known to be stable and to fix known issues in core functionality. The temptation in any project is to keep adding—new capabilities feel like progress. But there's a different kind of progress in making existing capabilities reliable enough that people can build on them without worrying about the foundation shifting.

That's what 2025 was about. The networking layer works. Validation is consistent. We can test at scale. Documentation significantly improved. These aren't the kinds of things that generate headlines, but they're the kinds of things that let you actually build applications.

We're entering 2026 with a more solid foundation than we've ever had. What gets built on it is the next chapter.

---

*Eric Harris-Braun is Executive Director of the Holochain Foundation.*


Aergo

[2025 Recap] Aergo to HPP: From Proven Enterprise to an AI-Native Infrastructure

2025 wasn’t just a “rebrand year.” It was the year the ecosystem became structurally clear: HPP became the public-facing network built to scale for what’s next, built on Aergo’s enterprise-proven foundation. 1) The direction became official (AIP-21) In early April 2025, the community approved AIP-21, formalizing the unified ecosystem direction under House Party Protocol (HPP) and setting th

2025 wasn’t just a “rebrand year.” It was the year the ecosystem became structurally clear: HPP became the public-facing network built to scale for what’s next, built on Aergo’s enterprise-proven foundation.

1) The direction became official (AIP-21)

In early April 2025, the community approved AIP-21, formalizing the unified ecosystem direction under House Party Protocol (HPP) and setting the foundation for the token/network transition.

2) Aergo’s strongest proof point kept compounding: National healthcare at scale

While narratives shifted, Aergo’s enterprise credibility stayed grounded in production. A flagship example highlighted during the year was Korea’s National Health Insurance Service (NHIS), which deployed an Aergo Enterprise-based TSA system processing roughly 400,000 transactions per day, peaking around 700,000. It stands as one of the largest and most successful enterprise blockchain implementations.

3) HPP Public Mainnet made the transition real

In August 2025, HPP Public Mainnet launched, turning the ecosystem’s evolution into something builders and integrators could actually work with. This was the moment HPP moved from a roadmap narrative to a live network where teams could deploy contracts, test real user flows, and begin integrating infrastructure components end-to-end.

Just as importantly, the launch established a concrete base for everything that followed, including migration flows, liquidity planning, and exchange coordination.

4) Q4 was about operations: migration and exchange readiness

The year closed with the hard work that makes transitions succeed: migration rails, exchange coordination, stability checks, and rebrand/swap execution. It wasn’t glamorous, but it’s exactly where real ecosystems prove they can ship. By the end of 2025, HPP was also listed on three DAXA-member exchanges, marking a key step in strengthening access.

2026 and forward

Aergo ended 2025 with its enterprise-grade credibility still intact, backed by real deployments and operational discipline.

HPP ended 2025 as the ecosystem’s public execution layer: launched, operational, and positioned to scale in 2026. From here, the focus shifts from proving the transition to expanding real usage, strengthening integrations, and converting infrastructure momentum into a broader builder and user ecosystem.

Thank you to everyone who supported Aergo and HPP throughout 2025. Your support helped turn a transition into real infrastructure, and we’re excited to build the next chapter together in 2026!

[2025 Recap] Aergo to HPP: From Proven Enterprise to an AI-Native Infrastructure was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


Spruce Systems

Why Document Intake Is the Weakest Link in Digital Services

Even the best backend systems fail if intake is unreliable. This post explores why document capture, OCR accuracy, and secure storage are critical — and how improving intake dramatically reduces downstream cost and fraud.

Digital government systems are often evaluated by what happens after data is submitted. Eligibility engines, case management platforms, analytics tools, and fraud detection systems receive the most attention. Yet many digital services struggle not because of what happens downstream, but because of how information enters the system in the first place through outdated document intake processes.

Document intake is the weakest link in most government digital services. When intake is unreliable, even the most sophisticated backend systems are forced to compensate. Errors propagate, costs rise, and fraud prevention becomes harder. Fixing intake is one of the highest-leverage improvements agencies can make as part of government modernization and digital transformation.

Intake sets the ceiling for system performance

Every digital service depends on the quality of its inputs. If documents are incomplete, inconsistent, or difficult to interpret, systems must slow down or fall back on manual review instead of workflow automation.

Many government services still rely on residents uploading scanned documents, photos, or PDFs that were never designed for machine use as part of legacy form workflows. Staff then interpret those documents, extract key details, and re-enter information into systems of record. Each step introduces delay and risk and degrades data quality.

No amount of backend optimization can fully overcome unreliable intake. At best, systems spend resources correcting errors. At worst, decisions are made using flawed data that undermines digital service delivery.

Documents arrive unstructured and ambiguous

A document upload does not tell a system what matters or how the data should be used.

A single file may contain multiple data points, some relevant and some not. Names may appear in different formats. Dates may be ambiguous. Key fields may be missing or illegible. Determining whether a document meets policy requirements often depends on human judgment rather than consistent, policy-driven validation.

This ambiguity is costly. It limits automation. It creates backlogs. It increases variation across reviewers and programs and complicates system integration.

Until documents are transformed into structured, validated data, they remain a bottleneck rather than an asset for modern digital services.

OCR accuracy is foundational, not optional

Optical character recognition is often treated as a convenience feature. In reality, it is foundational to reliable document intake and document processing.

Low-quality OCR introduces subtle errors that are hard to detect. A single incorrect character in an identifier or date can cause downstream mismatches, false flags, or incorrect decisions. When accuracy is inconsistent, systems must rely on manual checks that negate the benefits of form modernization and workflow automation.

Modern OCR combined with layout analysis, validation rules, and AI document processing dramatically improves reliability. Fields are identified explicitly. Values are checked against expected formats and policies. Errors are surfaced immediately instead of being discovered later in the process.

Accuracy at this stage directly determines how much automation and system integration is possible downstream.
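
To make that concrete, here is a minimal, illustrative Python sketch (not SpruceID's implementation; the field names and patterns are hypothetical) of checking OCR-extracted fields against explicit format rules so that problems surface at intake rather than downstream:

import re
from datetime import datetime

# Hypothetical rules for fields extracted by OCR from a submitted document.
def is_past_date(value):
    try:
        return datetime.strptime(value, "%Y-%m-%d") <= datetime.now()
    except (TypeError, ValueError):
        return False

RULES = {
    "applicant_name": lambda v: bool(v and v.strip()),
    "case_id": lambda v: bool(re.fullmatch(r"[A-Z]{2}-\d{6}", v or "")),
    "issue_date": is_past_date,
}

def validate_extracted_fields(fields):
    # Return the names of fields that are missing or malformed, instead of silently accepting the record.
    return [name for name, check in RULES.items() if not check(fields.get(name))]

# A single transposed digit or a missing field is flagged immediately at submission.
print(validate_extracted_fields(
    {"applicant_name": "Ada Lovelace", "case_id": "AB-12345", "issue_date": "2025-03-40"}))

Catching these problems at submission is what keeps one bad character from becoming a downstream mismatch.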

Secure storage matters as much as capture

Intake does not end when a document is uploaded or processed.

Documents and extracted data must be stored securely, with clear access controls and auditability. Many systems store full documents even when only a small subset of data is needed, increasing exposure and compliance risk without improving outcomes.

Secure-by-design intake systems separate concerns. Original documents are preserved for legal and audit purposes. Structured data is stored and shared according to purpose and policy. Access is logged and limited based on role to support compliance requirements.

This approach reduces risk while improving usability across digital services.
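
As a hedged sketch of that separation of concerns (illustrative only; the roles and field names are invented for the example), the snippet below releases only the extracted fields a given role is permitted to see and writes an audit entry for every access:

from datetime import datetime, timezone

# Illustrative policy: which roles may read which extracted fields.
ROLE_FIELD_ACCESS = {
    "eligibility_reviewer": {"applicant_name", "income_band"},
    "auditor": {"case_id"},
}

AUDIT_LOG = []  # in practice, an append-only, access-controlled store

def read_fields(record, requested, role):
    # Release only the fields allowed for this role, and log the access.
    allowed = ROLE_FIELD_ACCESS.get(role, set())
    released = {k: v for k, v in record.items() if k in requested & allowed}
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "requested": sorted(requested),
        "released": sorted(released),
    })
    return released

record = {"case_id": "C-001", "applicant_name": "Ada", "income_band": "B"}
print(read_fields(record, {"applicant_name", "case_id"}, "eligibility_reviewer"))
print(AUDIT_LOG)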

Poor intake increases fraud risk

Fraud thrives in ambiguity and weak validation.

When intake processes are inconsistent or loosely validated, it becomes easier to submit altered documents, reuse information across programs, or exploit gaps between systems. Manual review catches some issues, but it does not scale well and often focuses on surface-level checks instead of systemic patterns.

Reliable intake strengthens fraud prevention by establishing clearer signals at the source. Documents are validated for consistency. Required fields are enforced. Anomalies are flagged early. Systems can correlate information across submissions more confidently using structured data.

Fraud prevention improves not because systems become more aggressive, but because uncertainty is reduced through better document intake and data quality.

Downstream costs multiply quickly

Every weakness in intake creates downstream cost throughout digital service workflows.

Staff time is spent correcting errors. Processing timelines lengthen. Appeals increase. Data sharing becomes harder because systems cannot trust what they receive. Analytics lose credibility because underlying data is inconsistent and difficult to reconcile.

These costs rarely appear in a single budget line item. They are spread across operations, IT, compliance, and customer support. Intake improvements, by contrast, deliver benefits across the entire lifecycle of a service from submission through decisioning.

Intake as infrastructure, not a feature

One reason intake is often overlooked is that it is treated as a feature rather than infrastructure within digital transformation initiatives.

Forms and uploads are seen as simple front-end components, not as critical control points. As a result, they receive less architectural attention than identity, payments, or analytics during modernization efforts.

In reality, intake is where trust begins. It is where data quality, privacy, and security are first established. Standards-based approaches emphasize validating inputs, limiting exposure, and designing for auditability from the start for government systems.

When intake is treated as infrastructure, systems become more reliable everywhere else because downstream services can trust their inputs.

Strengthening the weakest link

Improving document intake does not require replacing every backend system. It requires investing in how documents are captured, interpreted, validated, and stored as part of form modernization and document processing modernization.

Modern intake systems combine secure upload, high-accuracy OCR, structured data extraction, and policy-driven validation. They reduce manual effort while increasing confidence. They make automation safer rather than riskier and enable scalable digital services.

Most importantly, they change the economics of digital services. Fewer errors. Faster processing. Lower fraud risk. Better data that supports integration and compliance.

The weakest link in digital services is also one of the easiest to strengthen. When intake works, everything that follows works better across workflows, systems, and programs.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.

Tuesday, 30. December 2025

Spherical Cow Consulting

ICYMI 2025: What You All Read the Most This Year

In this episode, Heather Flanagan looks back at the most read Digital Identity Digest posts of 2025, exploring what resonated across digital identity, governance, credentials, and AI. The recap reveals patterns behind shifting priorities, recurring debates, and the questions shaping standards work and system design. Discover how topics like agentic AI and authentication, delegation, decentraliza

“I enjoy looking back at the posts that caught people’s attention over the past year; I never really know what will catch people’s attention!”

My blog isn’t exactly mainstream clickbait; you and I are part of a niche crowd who get excited about things like key lifecycles, European regulatory patterns, and whether AI agents need their own delegation models (they need one, but I don’t know that it needs to be specifically for AI). But that’s the fun of it.

So, that said, let’s take a walk down blog-memory lane to see what people found most interesting based on simple numbers (which, since I am not a statistician nor can I play one on TV, means that more recent posts didn’t make the list unless they were REALLY compelling).

A Digital Identity Digest podcast episode: “ICYMI 2025: What You All Read the Most This Year” (13:58).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

#10 — Kill the Wallet? Rethinking the Metaphors Behind Digital Identity

Metaphors shape how people understand technology (and their world). Sometimes the metaphor shapes things in a not-quite-right fashion. This post explored whether the “wallet” metaphor has outlived its usefulness. Between platform-controlled wallets, issuer-specific lockers, open-standards containers, and everything in between, “wallet” now means too many incompatible things.

The metaphor hides thorny issues around consent, selective disclosure, and user agency. Standards bodies like Kantara and NIST are pushing for stronger user control, but most “wallets” don’t meet that bar yet. Maybe it’s time to retire the term and pick something that doesn’t collapse trust boundaries quite so dramatically.

#9 — Why Governance Decides If Decentralization Works

This post wrapped up my four-part decentralization series with a simple argument: the technology is ready, but governance is not.

When decentralization fails, it’s rarely because the protocol was flawed. It’s more because no one agreed on who gets to decide, update, or enforce anything. Governance defines authority, escalation paths, and shared accountability. Without it, decentralized systems drift into ambiguity and finger-pointing. This was a regular theme in 2025.

Identity teams sit right in the middle of this tension, and the post encouraged readers to treat governance as an intentional design choice rather than an afterthought.

#8 — Agentic AI in the Open Standards Community: Standards Work or Just Hype?

At IETF 123, AI showed up everywhere, in working groups as well as in drafts that had nothing to do with AI on the surface. Delegation chaining, workload identity, bot authentication, AI preference signaling, and agent collaboration protocols all intersect with how AI systems behave online.

This post mapped where these conversations are happening and offered a snapshot of the standards work that’s quietly shaping the infrastructure AI agents will have to live with. Spoiler: it’s not all branded as “AI.”

#7 — Acting on Behalf of Others: Why Delegation Is Still Broken

One user, one identity, one intent; that tidy model never really reflected reality. Caregivers, coworkers, parents, and now AI agents all need to act on behalf of someone else, but most digital systems still assume delegation is an edge case.

We’ve tried partial fixes: OAuth sharing, UMA, token exchange, persona chaining. But none handle conditional constraints, auditability, revocation, or lifecycle boundaries.

The rise of agentic AI raises the stakes even further. If delegation is broken for humans, it’s certainly not ready for autonomous systems. This post, the first in a series, argued that delegation needs to be a first-class identity feature, because the demand is only growing.

#6 — Digital Credentials That Can Be Verified: A Lesson in Terminology

The credential terminology mess—VCs vs mDLs, digital credentials vs verifiable digital credentials—is more than a naming annoyance. It reflects real disagreements about standards, data formats, media types, and interoperability.

This post unpacked how we reached this point and why clear definitions matter. With multiple standards bodies registering overlapping media types and governments adopting different terminology, confusion is a feature, not a bug. The message: define your terms, and don’t assume “digital credential” means the same thing to everyone.

#5 — Understanding NHIs: Key Differences Between Human and Non-Human Identities

Non-Human Identities (NHIs), which include everything from workloads and microservices to AI agents, don’t behave like human users, and they shouldn’t be forced into human IAM systems.

NHIs operate at machine speed, require cryptographic authentication, and have lifecycles measured in minutes. They need workload federation models, dynamic credentials, and automated lifecycle management.

This post laid out why treating NHIs like “just API keys” is a security liability and why standards like WIMSE and SPICE are increasingly essential.

#4 — Unlock the Secrets of OAuth 2.0 Tokens (and Have Fun Doing It!)

One of my early audio experiments from the end of 2024, this post revisited token security basics in a more conversational tone: short-lived and scoped tokens are safer; long-lived tokens carry real risk; sender-constrained designs help; and modern systems increasingly rely on real-time, context-aware authorization. As far as I can tell, the audience over at Hacker News really likes it.

A surprising number of people apparently enjoy a good token-lifecycle explainer. Honestly, same.
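
For anyone who wants the one-screen version of that post, here is a hedged Python sketch of requesting a short-lived, narrowly scoped OAuth 2.0 access token with the client credentials grant. The endpoint URL, client ID, and scope are placeholders, and a production system would layer sender-constraining (DPoP or mTLS) on top:

import requests  # third-party HTTP client

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # placeholder URL

def get_scoped_token(client_id, client_secret, scope):
    # Ask for only the scope this job needs; nothing more.
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),  # HTTP Basic client authentication
        timeout=10,
    )
    resp.raise_for_status()
    token = resp.json()
    # Treat expires_in as a hard budget: request a fresh token rather than hoarding one.
    return token["access_token"], token.get("expires_in")

# Usage with placeholder credentials:
# access_token, ttl = get_scoped_token("my-client", "s3cret", "reports:read")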

#3 — The End of the Global Internet

This post examined the many vectors of Internet fragmentation: technical, regulatory, economic, political, and infrastructural. The “borderless Internet” ideal is eroding, replaced by a patchwork shaped by sovereignty mandates, supply-chain splits, geopolitical tensions, and market forces.

But fragmentation isn’t purely negative. It can drive innovation, resilience, and higher privacy standards. The key is recognizing fragmentation as the new baseline and designing for interoperability across a more constrained, uneven landscape.

I’m incredibly pleased with how this post has been received and that, despite how recently it was posted, it’s near the top of the list. I’ve had so many great conversations come out of this one!

#2 — Verifiable Credentials and mdocs: A Tale of Two Protocols

This 2024 post refuses to leave the leaderboard. Seriously, it gets at least one view Every. Single. Day. It breaks down the mDL/mdoc vs. W3C VC divide, explaining how each format emerged from different assumptions: one from government ID workflows and the other from web-centric extensibility.

The post outlined where the friction comes from (governance, media types, developer experience, privacy models) and why implementers need to plan for a moving target as both ecosystems continue to evolve.

#1 — Agentic AI and Authentication: Exploring the Unanswered Questions

No surprise here: the most-read post of 2025 tackled one of the biggest open problems in identity today: what happens when authentication systems designed for humans meet AI agents acting autonomously.

The post explored gaps in trust boundaries, delegation, accountability, context-aware authorization, and credential handling. OAuth solves some delegation problems, but not the ones AI introduces. Wallet-based models help in some places and raise new issues in others.

The takeaway: our identity systems weren’t built for autonomous actors. We need new patterns, and we need them quickly.

What These Top Posts Say About 2025

Looking across the topics that resonated most this year, a few themes stand out:

AI isn’t a feature anymore — it’s a structural change.

From authentication to delegation to governance, readers gravitated toward posts about how AI agents challenge our deepest assumptions about identity. The conversation has moved beyond “add AI to X” into “re-architect X because of AI.” I think the best thing I can say about that is, well, it’s a choice. Not sure it’s a good choice, mind you, but regardless of what I think about it, it’s where the tech world is headed. 

Governance beat technology as the real bottleneck.

Whether the topic was decentralization, internet fragmentation, or wallet ecosystems, the posts that resonated most emphasized governance, accountability, and shared rules of engagement. Technology isn’t the blocker; the lack of alignment is.

Definitions matter. A lot.

Posts on terminology, metaphors, and conceptual clarity all landed in the top ten. In a standards-heavy domain, words carry architectural implications. Readers seem eager for clearer language and were willing to entertain my rants about sloppy metaphors.

Identity is expanding to new kinds of actors.

Whether NHIs, workloads, or agentic AI, people are thinking beyond human users. Some people have been thinking that way for a while, but the whole NHI conversation is absolutely mainstream now. Identity practitioners are recognizing different lifecycles, behaviors, and risk profiles for the full range of users in their systems. I’m not going to say it’s about time, but…

Interoperability is the quiet throughline.

Wallets that don’t interoperate. Credentials that don’t align. Ecosystems that fracture politically. Delegation models that don’t fit across systems. Identity pros want bridges, not silos. Which seems pretty obvious, but I have to point out that we, collectively, need to go beyond “want” and dig into “build.” If we want those bridges, no one will build them for us. Regulators won’t. Standards architects will try, but they can’t do it without the people who do the implementation and development.

So, about 2026…

And with that, thanks for reading, sharing, arguing with me, and sending me down new rabbit holes this year. It’s genuinely energizing to see how many of you care about the deeper questions shaping digital identity; not just the headlines, but the underlying shifts in governance, standards, and how we build trust online. If you want to follow these conversations from the perspective of the senior identity practitioners from companies around the world, that’s exactly what we’re building at The Identity Salon. It’s where the messier, more candid discussions happen, and where many of these ideas get pressure-tested long before they show up on the blog. And I get to write the reports. Dream job ftw!

If these trends continue—and I have every expectation that they will—2026 is going to be an interesting one.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

A Different Kind of Episode: Looking Back at 2025

00:00:30
This episode is a little different. Instead of diving into a single topic, we’re looking back.

Each year, I review which posts resonated the most—not because popularity equals quality, but because it reveals what people in the field are wrestling with:

What caught your attention
What sparked conversation
What may have kept you up at night

Of course, some of this could simply reflect what social media algorithms decided to surface. But for today, let’s assume human interest won.

This is an In Case You Missed It recap of 2025: the top 10 most-read posts, starting at number ten and working our way to the top. Along the way, we’ll also explore what these patterns suggest about where digital identity is headed.

Number 10: Kill the Wallet

00:02:00
Coming in at number ten is a post born out of sheer frustration: Kill the Wallet.

In this piece, I ask whether the “wallet” metaphor is still useful in digital identity—or whether it’s time to retire it with honors.

At this point, wallet can mean almost anything:

An open-standards holder for verifiable credentials
A tightly controlled app limited to issuer-approved items
A credential locker disguised as a university app
Whatever Apple or Google says it is this week

As a result, the term has become a catch-all for wildly different design choices, governance models, and policy assumptions. When a metaphor collapses that many boundaries, it stops clarifying and starts obscuring.

The post ultimately argues for:

Clearer language
Stronger user agency
Less metaphorical sleight of hand

If your “wallet” can’t support transparency, selective disclosure, or revocable permissions, it may not deserve the name.

Number 9: Why Governance Decides Whether Decentralization Works

00:03:45
Number nine was the final entry in my four-part series on decentralization: Why Governance Decides If Decentralization Works.

This post makes a point I will likely keep making for a long time:
Technology is rarely the problem. Governance is.

We already know how to build decentralized systems:

The internet itself is decentralized
DNS, BGP, and email are all decentralized technologies

When decentralization fails today, it’s usually because of governance gaps:

No shared rules
No escalation paths
No agreement on decision-making authority

Without governance, decentralization becomes friction—not flexibility.

The takeaway is simple but important: governance is infrastructure. And digital identity practitioners are often closest to the consequences, which means we’re also best positioned to help shape it.

Number 8: AI Is Already in the Standards (Even When It’s Not in the Title)

00:05:20
Number eight takes us into AI—but not into the hype cycle.

This post came out of IETF 123 in Madrid, where AI showed up everywhere, sometimes explicitly and sometimes quietly, hiding behind familiar topics like:

Delegation chains
Workload identity
API-driven workflows
Bot authentication

These areas suddenly matter much more when AI agents start acting across domains.

The post mapped where AI-related conversations are happening:

IETF
W3C
OpenID Foundation
Emerging AI-focused community groups

The key insight? Much of the most consequential work doesn’t mention “AI” at all. If you only follow headlines, you’ll miss the standards decisions that shape authentication flows, logging, and agent discovery.

If you want to understand AI’s future, watch the standards—not the hype.

Number 7: Acting on Behalf of Others—Why Delegation Is Still Broken

00:06:50
Delegation is one of the oldest problems in identity.

We’ve always needed ways for one person—or system—to act for another:

Caregivers
Parents
Assistants
Coworkers
Now, AI agents

Yet digital systems still assume a one-user, one-identity, one-intent model. When real life doesn’t fit, we get:

Password sharing
Manual overrides
Role switching
Support tickets
Endless workarounds

This post explores why delegation remains broken, despite partial solutions like OAuth extensions, UMA, token exchange, and persona chaining.

What’s missing?

Conditional constraints
Transitivity
Lifecycle management
Clear attribution of actions

With AI agents entering the picture, the stakes are even higher. Delegation needs to become a first-class identity feature, not an afterthought.

Number 6: Why Credential Terminology Is Such a Mess

00:08:20
Number six focused on language—specifically, the chaos around digital credential terminology.

We now have:

Verifiable credentials
mDocs
Digital credentials
Verifiable digital credentials

Behind each term often sits:

A different standards body
A different architecture
Different privacy and governance assumptions

This post untangles how we got here and why clarity matters. When governments, vendors, and standards groups use the same words to mean different things, “interoperability” becomes wishful thinking.

The takeaway is straightforward:
Define your terms—and resist the urge to invent new ones.

Number 4: Token Lifecycles and Why They Matter

00:09:45
Number four was an early experiment in audio—and clearly, people wanted a clear explainer on token lifecycles.

This post covered:

Why short-lived, scoped OAuth 2.0 tokens are safer
Why refresh tokens require careful handling
Why sender-constrained tokens reduce replay risk

It also touched on emerging trends like:

DPoP
CAEP
Real-time authorization signals

The core message? Token security is not “set it and forget it.” It’s an ongoing risk decision that evolves with your architecture.

Number 3: The End of the Global Internet

00:11:00
Despite the title, this post wasn’t apocalyptic.

Instead, it explored the growing fragmentation of the internet driven by:

Regulatory divergence
Supply chain splits
Sovereignty rules
Content controls
Infrastructure gaps

The internet isn’t collapsing—it’s becoming a patchwork. And patchworks can work, but only with intentional design.

Some fragmentation improves privacy and resilience. Others increase friction and inequality. Either way, global interoperability is no longer a given, and systems must be designed with that reality in mind.

Number 2: Verifiable Credentials and a Tale of Two Protocols

00:12:10
Originally written in 2024, this post continues to attract daily readers.

It compared how:

mDocs grew out of government ID workflows
Verifiable credentials emerged from a web-driven extensibility model

Those origins still shape:

Privacy models
Developer experience
Governance assumptions

The post clarified why the two ecosystems sometimes compete, sometimes conflict, and sometimes solve the same problems from opposite directions.

The fact that this debate persists explains why the post still resonates.

Number 1: Agentic AI and Authentication

00:13:00
The most-read post of 2025 explored the collision between authentication systems built for humans and AI agents acting autonomously.

It asked difficult but necessary questions:

How do we bound trust?
How do we constrain delegation?
How do we separate user intent from agent behavior?
Where does accountability sit?
What does selective disclosure mean when the presenter isn’t human?

There were no final answers—because we’re not there yet. But surfacing the right questions is often where meaningful standards work begins.

What These Posts Reveal About the Field

00:14:10
Looking across the list, a few themes stand out:

AI is no longer an add-on—it’s reshaping identity architecture
Governance is the real bottleneck, not technology
Clear definitions matter more than ever
Identity now includes a wider cast of actors, human and non-human
Interoperability remains the quiet throughline beneath it all

People want bridges more than silos.

Closing Thoughts

00:15:20
Thank you for listening, reading, sharing, and challenging these ideas. It’s been incredible to see how many of you care about the deeper structural questions shaping digital identity—not just the headlines.

If you want to explore these topics in a more candid, Chatham House-rule space, that’s exactly what we’re building at the Identity Salon, where many of these ideas are pressure-tested before they show up on the blog.

These trends will absolutely continue into 2026—and that makes the year ahead a fascinating one.

Thanks for being here. I’ll see you next year.

The post ICYMI 2025: What You All Read the Most This Year appeared first on Spherical Cow Consulting.


Spruce Systems

From Uploads to Intelligence: Rethinking Document Workflows

Uploading PDFs isn’t digital transformation. This post explains how structured capture, validation, and automated processing turn documents into actionable data — without manual review.

Uploading a PDF is often mistaken for digital transformation. A form moves online, a document is attached, and a confirmation screen appears. Behind the scenes, however, the workflow looks much the same as it did before in many government digital services. Someone opens the file, reads it, extracts key details, and enters them into another system.

This approach digitizes the surface of a process without changing how it actually works. True transformation happens when documents stop being passive files and start becoming sources of actionable data that modern systems can trust and reuse.

Rethinking document workflows means shifting from uploads and manual review to modern document intake, structured capture, validation, and automated processing that systems can rely on.

Why uploads are a dead end

PDF uploads feel convenient because they are familiar. They require little change from residents and minimal integration work for agencies. But they create long term costs that are easy to underestimate as service volume and program complexity grow.

An uploaded document provides no structure. Systems do not know which fields matter, whether information is complete, or how values should be interpreted. Every downstream step depends on human judgment and manual handling.

As volume increases, this model breaks down. Backlogs grow. Errors slip through. Automation stalls because inputs are inconsistent. What looked like a digital workflow becomes a manual process with extra steps that slow service delivery.

Uploads solve a short term problem at the expense of long term efficiency and modernization goals.

Documents contain intelligence, but systems cannot see it

Most documents submitted to government already contain the information agencies need. Names, identifiers, dates, addresses, eligibility criteria, and approvals are all there inside existing forms and records. The problem is that systems cannot access that intelligence without human intervention.

When documents are treated as opaque files, valuable signals are lost. Systems cannot validate inputs in real time. They cannot reuse information across services. They cannot reliably trigger automated decisions or policy-driven workflows.

Turning documents into intelligence requires extracting meaning at the moment of submission, not weeks later during review or case processing.

Structured capture changes the workflow

Structured capture starts by identifying what information matters and extracting it directly from documents as part of modern document intake.

Using image recognition, OCR, and AI document processing, systems identify relevant fields and convert them into structured data. Layout analysis distinguishes between similar looking elements. Validation rules check formats, completeness, and consistency before the submission moves forward into downstream workflows.

For example, an application document can be validated to ensure required fields are present, dates are current, and identifiers follow expected patterns. Errors are flagged immediately, reducing rework and follow up for both staff and applicants.

The original document is preserved for audit and legal needs, while the extracted data becomes usable input for downstream systems and interoperable services.

This is where workflows begin to change fundamentally from document handling to data-driven processing.

Validation replaces manual review

Manual review is often treated as a safeguard. In practice, it is a bottleneck in modern digital services.

Human reviewers are inconsistent by necessity. They interpret documents differently, miss subtle errors, and apply rules unevenly under pressure. Review does not scale well and becomes increasingly expensive as volume grows across programs and agencies.

Automated validation applies rules consistently and instantly. Policy requirements are encoded once and enforced every time. Exceptions are surfaced clearly instead of being discovered late in the process after decisions have already been delayed.

This does not eliminate the need for human judgment. It focuses it where it matters most. Staff review true exceptions and complex cases instead of performing repetitive checks that can be handled automatically.
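
A minimal sketch of that division of labor, with invented field names and rules rather than any agency's actual policy: validation runs automatically on every submission, and only the failures reach a human exception queue.

from datetime import date

REQUIRED = {"name", "dob", "document_id"}

def route(submission):
    # Apply the encoded policy consistently, every time.
    missing = [f for f in REQUIRED if not submission.get(f)]
    expiry = submission.get("expiry")
    expired = bool(expiry) and date.fromisoformat(expiry) < date.today()
    if missing or expired:
        return "exception_queue"   # staff review true exceptions only
    return "auto_process"          # reliable inputs flow straight through

print(route({"name": "Ada", "dob": "1990-01-01", "document_id": "X1", "expiry": "2030-06-30"}))
print(route({"name": "", "dob": "1990-01-01", "document_id": "X1"}))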

Automated processing unlocks speed and accuracy

When documents are captured and validated as structured data, workflow automation and system integration become possible.

Data can flow directly into systems of record. Workflows can route cases based on clear criteria. Decisions can be made faster because inputs are reliable and consistently validated.

This reduces processing time without reducing oversight. In many cases, it improves accuracy because errors are caught early and consistently before they propagate across systems.

Automation works not because it is faster than humans, but because it removes unnecessary translation steps between documents and systems that introduce risk and delay.

Reducing cost and fraud through clarity

Unstructured workflows create ambiguity, and ambiguity creates cost and risk.

When data is unclear, agencies spend time resolving discrepancies. When rules are applied inconsistently, appeals increase. When validation is weak, fraud becomes harder to detect and easier to scale.

Structured capture and automated processing reduce uncertainty at the source. Information is standardized. Validation rules are explicit. Anomalies are easier to identify across programs and services.

Fraud prevention improves because systems have clearer signals, not because scrutiny increases across the board or burden is shifted to applicants.

Designing document workflows as infrastructure

One reason document workflows remain manual is that they are treated as peripheral features rather than core infrastructure within digital transformation initiatives.

Uploads are added to forms. Review steps are bolted onto case management systems. Each program solves the problem locally with its own tools and processes.

A more durable approach treats document workflows as shared infrastructure. Capture, validation, and processing are standardized across services. Systems consume structured data instead of files. Policies are enforced consistently through common workflows.

Guidance from organizations such as the National Institute of Standards and Technology emphasizes validating inputs, limiting exposure, and designing for auditability as foundational practices, not optional enhancements, for modern digital systems.

Moving from digital paperwork to intelligent services

Uploading PDFs is a starting point, not an endpoint for digital transformation.

Digital services become intelligent when documents are transformed into data that systems can understand and act on. Structured capture enables validation. Validation enables automation. Automation enables faster, more reliable services at scale.

This shift does not require replacing existing systems. It requires rethinking how information enters them through modernized intake and workflows.

When document workflows are designed for intelligence instead of storage, digital services stop imitating paper processes and start delivering on their promise of efficiency, accuracy, and trust.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.

Monday, 29. December 2025

Shyft Network

2025: Global Compliance Mandates Collide With Privacy Expectations

2025 was the year crypto regulation stopped being theoretical. The EU, the United States, and major Asia-Pacific markets all implemented comprehensive frameworks that transformed Travel Rule compliance from a regional concern into a global operational infrastructure. For VASPs, the question shifted from “if” to “how.” But implementation surfaced a problem most regulators haven’t solved: how

2025 was the year crypto regulation stopped being theoretical. The EU, the United States, and major Asia-Pacific markets all implemented comprehensive frameworks that transformed Travel Rule compliance from a regional concern into a global operational infrastructure. For VASPs, the question shifted from “if” to “how.”

But implementation surfaced a problem most regulators haven’t solved: how do you build systems that satisfy comprehensive compliance obligations while preserving the privacy expectations that define crypto markets? Traditional finance never faced this tension. Crypto can’t avoid it.

The Infrastructure Mandate: From Optional to Essential

The EU’s Transfer of Funds Regulation went live on December 30, 2024. Zero threshold. Every crypto asset transfer across all 27 member states now requires complete originator and beneficiary information exchange. No exceptions.

By year-end 2025, ESMA counted 102 licensed Crypto Asset Service Providers operating under MiCA, including 12 credit institutions. The market split cleanly. Firms with robust compliance infrastructure expanded across the EU through passporting rights. Firms without it faced mounting operational costs and delayed market entry.

This wasn’t regulatory overreach — it was the EU establishing that compliance belongs at the foundation, not bolted on later. The firms that thrived built systems capable of handling zero-threshold requirements at scale. The firms that struggled treated Travel Rule compliance as a cost center to minimize. Wrong approach.

The United States followed a different path to similar conclusions. The GENIUS Act, signed July 18, 2025, created the first federal framework for payment stablecoins. The requirements: 1:1 reserve backing in high-quality liquid assets, strict disclosure, AML compliance under the Bank Secrecy Act’s $3,000 threshold. Treasury guidance made the subtext explicit: if you’re building stablecoin infrastructure, you’re building compliance infrastructure. Full stop.

The timing reflected market reality. The stablecoin market cap exceeded $250 billion. Transfer volumes surpassed Visa and Mastercard combined in 2024. Payment stablecoins have become a critical financial infrastructure. The GENIUS Act imposed banking-grade compliance requirements — but without the regulatory flexibility traditional banks enjoy around privacy frameworks. Stablecoin issuers face a challenge legacy finance never confronted: comprehensive compliance that assumes you’re a bank, privacy expectations that assume you’re not.

Asia-Pacific markets demonstrated that regulatory maturity doesn’t require identical approaches, but it does converge on similar principles. Hong Kong’s Stablecoin Ordinance took effect in August 2025. HKMA received over 70 expressions of interest. They’ll issue a handful of licenses. Singapore’s DTSP regime went live in June 2025. Japan selectively eased restrictions while maintaining rigorous custody requirements.

The pattern is clear: regulatory frameworks reward VASPs that invested in compliance infrastructure early. They penalize firms that treat it as a compliance tax. But these frameworks also surface the tension that will define 2026: comprehensive compliance requirements versus user privacy expectations.

The Technical Challenge: Interoperability and Privacy

Travel Rule implementation through 2025 exposed two problems that VASPs can’t solve individually.

First: interoperability. Chile activated Travel Rule obligations in July. Nicaragua earlier in the year. South Africa went live in 2025. Peru, Argentina, and multiple Middle Eastern jurisdictions target 2026. Brazil’s framework remains under consultation. Every jurisdiction implements on different timelines with different technical specifications. All expect seamless cross-border data exchange. The industry has built solutions such as TRUST, TRISA, and OpenVASP, but adoption remains uneven.

Second: privacy. And this one’s harder. Travel Rule compliance requires collecting, storing, and transmitting PII about transaction originators and beneficiaries. This creates direct conflict with GDPR, with the privacy expectations that drove users to crypto, with blockchain’s pseudonymous architecture.

The EU’s zero-threshold requirement made this conflict explicit. Every transaction demands PII exchange. GDPR requires data minimization and purpose limitation. Travel Rule requirements push toward comprehensive data collection and extended retention. VASPs operating in Europe face regulatory obligations that pull in opposite directions.

Through 2025, VASPs made different bets. Some built centralized compliance databases aggregating transaction data across all users. Efficient for regulatory reporting. Also creates exactly the surveillance infrastructure that privacy frameworks discourage and users resist. Others implemented minimal compliance systems satisfying regulatory checkboxes while leaving themselves vulnerable to enforcement.

Neither approach solves the actual problem.

The VASPs navigating this successfully recognized something fundamental: privacy and compliance aren’t sequential concerns you address one after the other. They’re parallel design constraints. You either build systems that satisfy both simultaneously, or you build systems that fail at scale.

Peer-to-peer encrypted data exchange. Retention policies limited to regulatory compliance periods. Privacy-preserving compliance as technical architecture, not policy theater. These aren’t nice-to-have features. They’re the difference between sustainable operations and growing regulatory liability.
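
As one hedged illustration of what that architecture can look like (not Shyft Network's implementation), the sketch below encrypts a Travel Rule payload for a single counterparty and tags it with a deletion deadline so it can be purged when the retention period ends. It uses symmetric Fernet encryption from the Python cryptography library purely for brevity; a real exchange would use keys agreed with the receiving VASP and a retention window set by local regulation.

import json
from datetime import datetime, timedelta, timezone
from cryptography.fernet import Fernet  # pip install cryptography

RETENTION_DAYS = 5 * 365  # placeholder; use the period your regulator actually requires

def package_travel_rule_payload(originator, beneficiary, key):
    # Encrypt the PII for one counterparty and record when it must be deleted.
    pii = json.dumps({"originator": originator, "beneficiary": beneficiary}).encode()
    return {
        "ciphertext": Fernet(key).encrypt(pii),
        "delete_after": (datetime.now(timezone.utc) + timedelta(days=RETENTION_DAYS)).isoformat(),
    }

# The key is shared only with the receiving VASP, never parked in a central database.
key = Fernet.generate_key()
envelope = package_travel_rule_payload({"name": "Alice"}, {"name": "Bob"}, key)
print(envelope["delete_after"])
print(Fernet(key).decrypt(envelope["ciphertext"]))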

Looking Forward: Where Compliance and Privacy Must Converge

Jurisdictions that established frameworks in 2025 are moving to enforcement. The EU has issued over €540 million in fines since MiCA implementation. More than 50 firms lost licenses by February 2025. Licensing isn’t approval — it’s ongoing obligations with real consequences.

The jurisdictions implementing frameworks in 2026 — Peru, Argentina, and Middle Eastern and Southeast Asian markets — are watching how early movers handled implementation. They’re studying which approaches create sustainable business environments rather than operational gridlock. More importantly, they’re watching how VASPs balance regulatory obligations with user privacy expectations.

2025 has settled whether Travel Rule compliance is necessary. 2026 will test whether it can be implemented without creating surveillance infrastructure that undermines crypto’s value proposition.

Technical solutions exist. Peer-to-peer encrypted data exchange eliminates intermediary storage vulnerabilities. Data minimization limits retention to regulatory requirements, not comprehensive user profiling. VASPs that invested in privacy-preserving approaches during 2025 positioned themselves well. VASPs that built centralized compliance databases are learning that efficiency gains don’t offset privacy liability.

Stablecoins make this tension acute. MiCA and the GENIUS Act both established strict obligations because stablecoins function as a payment infrastructure. VASPs facilitating stablecoin transfers face the same compliance expectations as traditional payment processors.

Here’s the difference: traditional payment rails were built without privacy expectations. Users accepted surveillance as the price of convenience. Crypto users didn’t make that trade. They expected pseudonymous transactions, minimal data collection, and protection from state and corporate surveillance. Travel Rule requirements force a reckoning with these expectations. They don’t eliminate them.

The VASPs entering 2026 with a strategic advantage recognize that privacy and compliance aren’t opposing forces. They’re design constraints you satisfy simultaneously or fail to satisfy at all. The regulatory landscape will continue maturing. More jurisdictions will implement requirements. Enforcement will intensify.

But the defining question isn’t technical compliance. It’s whether the industry can build a compliance infrastructure that preserves the privacy principles that make digital assets valuable in the first place.

About Shyft Network

Shyft Network powers trust on the blockchain and economies of trust. It is a public protocol designed to drive data discoverability and compliance in blockchain while preserving privacy and sovereignty. SHFT is the network’s native token and fuel.

Shyft Network facilitates the transfer of verifiable data between centralized and decentralized ecosystems. It sets the highest crypto compliance standard and provides the only frictionless Crypto Travel Rule compliance solution while protecting user data.

Visit our website to read more, and follow us on X (formerly Twitter), GitHub, LinkedIn, Telegram, Medium, and YouTube. Sign up for our newsletter to keep up to date on all things privacy and compliance.

Book your consultation: https://calendly.com/tomas-shyft or email: bd@shyft.network

2025: Global Compliance Mandates Collide With Privacy Expectations was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Dock

Will Big Tech Win the EUDI Wallet Race?

Samsung and Google have both expressed interest in becoming European Digital Identity (EUDI) Wallet providers. That’s according to Mirko Mollik, Identity Architect at SPRIND, Germany’s Federal Agency for Breakthrough Innovation. That immediately raises an uncomfortable question: Are the wallet wars already decided?

Samsung and Google have both expressed interest in becoming European Digital Identity (EUDI) Wallet providers. That’s according to Mirko Mollik, Identity Architect at SPRIND, Germany’s Federal Agency for Breakthrough Innovation.

That immediately raises an uncomfortable question:

Are the wallet wars already decided?

Across Europe, we’re seeing a growing number of EUDI wallet initiatives led by governments, consortia, and private players. At the same time, the contrast with the US is hard to ignore. Several US states already support mobile driver’s licenses in Apple Wallet, and Google Wallet supports US digital passports.


ComplyCube

Boost ROI with the Best KYC Cost Calculator Approach

Smart budgeting for identity verification begins by leveraging a KYC cost calculator to strike a balance between efficiency and customer experience. Cost benchmarking strategies can help forecast verification spend with confidence. The post Boost ROI with the Best KYC Cost Calculator Approach first appeared on ComplyCube.

Smart budgeting for identity verification begins by leveraging a KYC cost calculator to strike a balance between efficiency and customer experience. Cost benchmarking strategies can help forecast verification spend with confidence.

The post Boost ROI with the Best KYC Cost Calculator Approach first appeared on ComplyCube.


Thales Group

Thales secures major contract to transform Royal Navy mine countermeasures with AI powered remote command centres


29 Dec 2025

Thales has been awarded a major contract by Defence Equipment and Support (DE&S) for the design, development and delivery of the next generation of portable autonomous command centres, marking a significant advancement in the transformation of the Royal Navy’s Mine Counter Measures (MCM) capability.

This contract, which will revolutionise maritime autonomous mine hunting, directly supports the UK’s Strategic Defence vision for a “Hybrid Navy”.

Integrating Artificial Intelligence (AI) from cortAIx, the powerful AI accelerator from Thales, the Mi-Map and M-Cube applications accelerate and secure operations at the heart of the command centre.

Thales is leveraging its 60-year heritage in mine counter measures and strong investment in new technologies to continue delivering cutting-edge systems.

Mine-Counter-Measures Command Centre ©Film06 - Thales

Awarded under the Autonomous Remote Command Centre (RCC) contract, this initial £10 million investment marks the first stage of a programme which has scope to grow to up to £100 million to deliver next-generation mine countermeasures capability for the Royal Navy.

The Group will lead the integration of multiple unmanned assets, both above and below the water, into a true system of systems for safer, more efficient and agile mine hunting missions. It will provide the hardware, software, training and technical advice, collaborating with a robust UK supply chain to enable iterative capability improvement and rapid technology adoption.

The Thales M-Cube Mission Management System will be at the heart of the command centres. This combat-proven software suite is already used by multiple navies worldwide for planning, execution and evaluation of both conventional and autonomous MCM missions. It provides unparalleled situational awareness from the task force to individual unit level.

Mi-Map planning and evaluation software lies at the heart of the Royal Navy’s new remote Command Centre. Featuring advanced AI-powered automatic target recognition, it empowers operators by intelligently filtering and refining raw data, streamlining and expediting the mine hunting process. Leveraging machine learning, Mi-Map continually enhances its database and processes vast quantities of information beyond human capability. Not only does it accelerate target identification, it also delivers superior accuracy and effectiveness compared to traditional systems.

This sophisticated AI is developed with the support of cortAIx, Thales’s AI accelerator, whose global workforce of 800 AI experts within the Group supports the performance of sovereign advanced systems and sensors in critical environments.

Working with programme partners, Thales will initially deliver twin-containerised solutions that will seamlessly integrate platforms, systems and sub-systems. This highly flexible capability will transform how MCM is conducted, allowing Royal Navy personnel to coordinate a fleet of uncrewed and autonomous assets, greatly increasing operational effectiveness while maximising personnel safety.

Its utility for autonomous command and control has application across the seabed warfare domain and aligns with the UK Government’s vision for a ‘Hybrid Navy’ and the Royal Navy’s Long Term Capability Plan for MCM mission systems integration.

“Thales is honoured to continue its central role in delivering mine countermeasures capability to the Royal Navy, building on our proven heritage. This next generation of autonomous command centres is part of a flexible suite of autonomous C2, from containerised solutions to vessel operations centres or large shore operations centres. By collaborating across the supply chain, we are committed to supporting the UK with world-class technology and fostering growth and high-value skilled jobs across our UK operations.” - Paul Armstrong, Managing Director for Underwater Systems activities, Thales in the UK.
"The threat to the UK is growing, driven by global instability, Russian aggression, and a greater willingness of states and hostile actors to target our critical infrastructure. By embracing autonomous maritime technology, the Royal Navy is pioneering innovation to help keep our sailors safe at sea. This is backed by a UK defence industry delivering world-class capabilities that exemplify how defence acts as an engine for growth.” - Minister for Defence Readiness and Industry, Luke Pollard MP

Thales’ significant investment in UK mine countermeasures has sustained over 200 highly skilled jobs—particularly at Thales’ Somerset and Plymouth sites—while reinforcing the broader ecosystem of suppliers and partners across the region.

For more information about Thales in the UK in the domains of Mine Counter Measures and AI:

Thales Delivers the World’s First Autonomous Mine Hunting System to the Royal Navy | Thales Group
Thales launches cortAIx in the UK with 200 experts in AI for critical systems | Thales Group

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Thales in the UK

Thales in the UK is a team of over 7,000 experts, including 4,500 highly skilled engineers, located across 16 sites.

Each year Thales invests over £575 million into its UK supply chain, working with over 2,000 companies. The company is dedicated to research and technology, working with partners to invest over £130 million in R&D in the UK annually. In February 2025, Thales launched cortAIx in the UK, its AI accelerator, with 200 experts in AI for critical systems.

With a heritage of over 130 years, Thales in the UK understands the importance of developing skills for the future, with over 400 apprentices and graduates across the UK. Thales is committed to continuously developing talent and highly skilled experts.

Recent images of Thales and its Defense, Aerospace and Cyber & Digital activities can be found on the Thales Media Library. For any specific requests, please contact the Media Relations team.


Saturday, 27. December 2025

Spruce Systems

What “Digital Transformation” Really Means for Government

Digital transformation isn’t about new portals — it’s about changing how information flows between people, systems, and agencies. This post explains what successful transformation looks like in practice.

Digital transformation is often described in terms of what residents can see. A new portal. A redesigned application. A mobile-friendly form. These changes are visible and important, but they are not transformation on their own and rarely deliver lasting modernization.

For government, real digital transformation happens behind the scenes. It is about changing how information flows between people, systems, and agencies across the full service lifecycle. When those flows improve, services become faster, more reliable, and easier to scale. When they do not, new interfaces simply mask old problems embedded in legacy workflows and systems.

Understanding this distinction is critical for leaders who want modernization efforts to deliver lasting results rather than short term usability gains.

Transformation is about flow, not surface

Most government services already collect the information they need. The challenge is how that information moves once it is submitted and how many times it must be handled, reviewed, or re-entered.

In many systems, data enters as documents, emails, or PDFs. It is reviewed manually, re-entered into multiple systems, and shared through ad hoc processes. Each handoff introduces delay and risk. Each system becomes a silo that limits interoperability and reuse.

Digital transformation changes this pattern. Information enters systems in structured form. It is validated once and reused appropriately. It moves securely across services through integrated workflows and system interfaces without repeated requests or manual translation.

When information flows smoothly, services feel modern even if core systems remain unchanged because the experience is driven by data, not paperwork.

Portals do not fix broken intake

A common transformation pitfall is focusing on portals first as the primary modernization deliverable.

A new front end can improve usability, but if it feeds the same unstructured intake and manual workflows, the gains are limited. Staff still chase documents. Backlogs persist. Errors propagate downstream into eligibility, compliance, and reporting processes.

Successful transformation starts earlier in the process. It begins at intake, where information is captured, validated, and structured so systems can act on it immediately without manual interpretation.

Once intake improves, portals, workflows, analytics, and automation all benefit because they operate on reliable data.

Systems should exchange data, not files

One of the clearest signs of incomplete transformation is continued reliance on file-based exchanges between systems and agencies.

When systems share PDFs or scanned documents, they are not really integrated. They are passing responsibility along with the file. Each recipient must interpret the contents again and reconcile them with their own systems.

Modern transformation replaces file passing with data exchange. Systems communicate through APIs. Fields are defined and validated. Access is controlled by policy rather than by manual handling.

This approach reduces duplication, improves accuracy, and makes interagency collaboration practical at scale instead of burdensome.
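
To make the contrast concrete, here is a minimal sketch, in TypeScript, of what exchanging data instead of files can look like. The record shape, validation rules, and the eligibility endpoint are invented for illustration; they are not any specific agency’s schema.

```typescript
// A minimal sketch of structured, attribute-level exchange between systems,
// in place of emailing PDFs. All field names and the endpoint are hypothetical.

interface BenefitApplication {
  applicantId: string;     // reference to a verified identity, not a scanned ID
  householdSize: number;
  annualIncomeUsd: number;
  submittedAt: string;     // ISO 8601 timestamp
}

// Validate once at intake so downstream systems can trust the record.
function validateApplication(input: unknown): BenefitApplication {
  const record = input as Partial<BenefitApplication>;
  if (typeof record.applicantId !== "string" || record.applicantId.length === 0) {
    throw new Error("applicantId is required");
  }
  if (!Number.isInteger(record.householdSize) || (record.householdSize as number) < 1) {
    throw new Error("householdSize must be a positive integer");
  }
  if (typeof record.annualIncomeUsd !== "number" || record.annualIncomeUsd < 0) {
    throw new Error("annualIncomeUsd must be a non-negative number");
  }
  return {
    applicantId: record.applicantId,
    householdSize: record.householdSize as number,
    annualIncomeUsd: record.annualIncomeUsd,
    submittedAt: record.submittedAt ?? new Date().toISOString(),
  };
}

// Hand the structured record to another system over an API instead of passing a file.
async function forwardToEligibilityService(app: BenefitApplication): Promise<void> {
  await fetch("https://agency.example/api/eligibility", {  // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(app),
  });
}
```

The specifics do not matter; what matters is that the receiving system never has to re-interpret a document, because the fields were defined and validated once at the point of entry.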

Identity and trust are foundational

Information cannot flow safely without trust between systems and organizations.

Digital transformation requires clear answers to fundamental questions: Who is making this request? What are they allowed to do? Can the data be trusted? Has it been altered? Is it being used for the right purpose under applicable policy and regulation?

Identity systems, access controls, and auditability provide these answers. When they are consistent across services, agencies can share data without expanding risk or increasing compliance burden.

Frameworks and guidance from organizations such as the National Institute of Standards and Technology emphasize identity driven access and continuous verification as cornerstones of modern digital systems and digital trust infrastructure.

Without this foundation, transformation efforts stall under security and compliance concerns even when user-facing tools improve.

Transformation works with existing systems

Another misconception is that digital transformation requires replacing legacy systems before progress can be made.

In reality, most successful efforts extend existing platforms rather than removing them. Core systems continue to act as systems of record. Modern layers handle intake, identity, workflows, and integration around those systems.

APIs expose specific functions. Workflow tools orchestrate processes across systems. Identity layers manage access consistently. This allows agencies to modernize incrementally while maintaining operational stability and continuity of service.

Transformation becomes achievable instead of disruptive because it aligns with how government systems actually operate.

Automation follows clarity

Automation is often a goal of transformation, but it is rarely the starting point for successful modernization.

Automation works when inputs are reliable and rules are explicit. If data is inconsistent or incomplete, automation creates errors faster rather than improving outcomes and increases downstream risk.

By improving how information is captured and validated, agencies create the conditions for safe automation. Decisions can be made faster. Exceptions are clearer. Staff focus on complex cases instead of routine checks that can be handled automatically.

Transformation succeeds when automation is a result of better information flow, not a substitute for it or a workaround for broken intake.
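
As a rough sketch of that sequencing, the example below (in TypeScript, with invented program rules and field names) shows automation that only acts on validated intake data, approves or denies the clear cases, and routes incomplete or borderline ones to staff.

```typescript
// A minimal sketch of "automation follows clarity": explicit rules run against
// validated intake data, and anything ambiguous is routed to a person.
// The program rules and field names are invented for illustration.

interface ValidatedIntake {
  householdSize: number;
  annualIncomeUsd: number;
  documentsComplete: boolean;
}

type Decision =
  | { kind: "approved" }
  | { kind: "denied"; reason: string }
  | { kind: "exception"; reason: string };   // sent to staff for review

function decide(record: ValidatedIntake): Decision {
  // Hypothetical income limit that scales with household size.
  const incomeLimit = 20_000 + 5_000 * record.householdSize;

  if (!record.documentsComplete) {
    return { kind: "exception", reason: "Missing documentation" };
  }
  if (record.annualIncomeUsd <= incomeLimit) {
    return { kind: "approved" };
  }
  // Borderline cases (within 10% of the limit) go to a human rather than auto-deny.
  if (record.annualIncomeUsd <= incomeLimit * 1.1) {
    return { kind: "exception", reason: "Income near program threshold" };
  }
  return { kind: "denied", reason: "Income exceeds program limit" };
}
```

The thresholds are placeholders; the design point is that the rules are explicit enough to audit and that exceptions land with a human.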

Measuring transformation by outcomes, not features

It is tempting to measure transformation by deliverables. A portal launched. A system deployed. A feature added to a roadmap.

More meaningful measures focus on outcomes. Processing times decrease. Error rates drop. Data is reused across programs. Staff spend less time on manual tasks. Residents submit information once instead of repeatedly across agencies.

These outcomes reflect changes in how information moves, not just how it looks on the surface.

What successful transformation looks like in practice

In practice, digital transformation means that information flows predictably and securely from the moment it is submitted through intake, verification, decisioning, and fulfillment. Systems trust their inputs. Agencies collaborate without creating new silos. Services adapt as policies and needs change without rework.

Residents experience simpler interactions because systems work together. Staff gain confidence because data is consistent and auditable. Leaders gain visibility because information is reliable across programs.

This is not achieved through a single project or platform. It is achieved by rethinking how information moves across the entire service lifecycle and designing systems to support that flow.

Reframing digital transformation

Digital transformation is not about new portals or modern design alone. It is about building infrastructure that allows information to move safely, efficiently, and purposefully between people and systems at scale.

When that infrastructure is in place, interfaces improve naturally. Services scale. Trust grows across government digital services.

For government, this is what transformation really means.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.

Friday, 26. December 2025

KILT

KILT’s evolution: Primer

One week ago, a team from the KILT community took over stewardship of the project and we’ve been heads-down working since then. We’re going fast, breaking new ground, and giving KILT holders what they’ve been waiting for: open-minded and agile development, ready to move with the times.

Primer

Agents: activated

KILT was built for decentralized identity, and it was a true pioneer in this field. But a rigid focus on personal identity alone has been limiting and we are no longer tied to that narrow track. It’s not 2021 any more… we’re looking towards the agentic economy as the present frontier for DeID and exciting related technologies.

Primer is KILT’s evolution, with a new freedom to move with the times. Same community, wider scope, bigger mission. In fact, we already have something to show you:

x402 SDK & Facilitator: HTTP-native payments for AI agents. Live now on mainnet.
8004 Gateway: ERC-8004 agent identity registration and management.
Primer Console: USB/portable encrypted agent wallet management.
And more. https://www.primer.systems

Phase 1: $KILT → $PR swap

Rather than keep you waiting months for another formal migration, we have moved quickly in offering this token swap as the first step. KILT still has activity that needs to be wound down, and the token remains actively traded; this approach gives flexibility without forcing anyone’s hand.

When you’re ready, swap. Until then, KILT works as it always has. KILT will continue to trade on Uniswap and MexC just as before.

For those who would like to get ahead of the curve, KILT may be ported to PR now via our website.

We’ve built a bidirectional, reversible swap service. Phase 1:

Swap KILT for PR (or back) at a fixed rate: 1 KILT = 2.065 PR
No deadline, no pressure.
An “OG snapshot” has been taken; this provides whitelisting for the swap during phase 1.
Swap opens in the coming days.

KILT remains tradeable on Uniswap and MexC. Port it to PR when you’re ready.

How else can I get PR tokens?

The only way to get PR tokens is by swapping from KILT. There has been no public or private sale, and the token will be 100% circulating at launch with zero inflation. The swap exactly preserves your holdings as a % of supply.
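
For anyone who wants to sanity-check the arithmetic, here is a quick sketch using the fixed rate and total supply quoted in this post; the 10,000 KILT balance is purely hypothetical.

```typescript
// Sanity-checking the swap math from the post. The rate and PR total supply
// are taken from the announcement; the 10,000 KILT balance is made up.

const SWAP_RATE = 2.065;               // PR received per KILT
const PR_TOTAL_SUPPLY = 600_000_000;   // total PR, 100% circulating at launch

const kiltBalance = 10_000;
const prReceived = kiltBalance * SWAP_RATE;            // 20,650 PR
const shareOfPrSupply = prReceived / PR_TOTAL_SUPPLY;  // ~0.00344% of supply

console.log(`${prReceived} PR, ${(shareOfPrSupply * 100).toFixed(5)}% of supply`);
```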

$PR tokenomics
Base: 0x2357110F5F0c5344EEf75966500c75116A4aA153
Total Supply: 600,000,000 (1 KILT = 2.065 PR)
Circulating: 100%
Mintable: No
Burnable: Yes
EIP-3009: x402 ready
EIP-2612: gasless approvals
Find out more

X | TG | Git

Thursday, 25. December 2025

TÜRKKEP A.Ş.

What Makes a Digital Document Valid?

We live in an age where everything is rapidly going digital. Contracts, applications, notifications… We now see and use most documents in digital form rather than on paper. But as all of these documents circulate digitally, how solid is their legal validity? More importantly: does the fact that a document is digital also mean that it is official, valid, and able to serve as evidence when needed?

Ocean Protocol

DF166–168 Complete and DF169 Launches

Predictoor DF166–168 rewards available. DF169 runs December 25th — January 1st, 2026.

1. Overview

Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor.

Data Farming Round 166 (DF166) through Round 168 (DF168) have completed and rewards are available to claim.

DF169 is live, December 25th. It concludes on January 1st. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF169 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF169

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean and DF Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF166–168 Complete and DF169 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 24. December 2025

Indicio

This holiday season, ho-ho-‘Know It’s Santa’ with Indicio Proven®

Seamless authentication with Verifiable Credentials from Indicio Proven for a secure holiday and a seamless 2026. Happy Holidays from Indicio!

By Trevor Butterworth

“You’d think being an elf at Santa’s Workshop would mean you’d never fall for a fake Santa Claus,” said Pepper Minstix, Head of Security at Santa’s Workshop, in a recent zoom call with Indicio. “But you take a highly-focused staff, intense toy production schedules, and the ease of procuring Santa costumes — realistic beards, belly prosthetics — and elves, like people, default to trusting without verifying.” 

“Verifiable Credentials take the stress out of implementing zero-trust security. No one wants to ask Santa for proof he’s real, or poke his belly to see if it wobbles and make, what is, essentially, a subjective call. With a Verifiable Credential no elf has to. We have cryptographic proof it’s Santa. Authenticated voice biometrics mean it’s an instant “ho-ho-yes” or “ho-ho-no.”

“Ho-ho-ho” — Santa speaks into the workshop intercom while simultaneously presenting an authenticated biometric of his voice for instant cryptographic comparison. No biometric storage needed!

Asked whether Santa Claus security is affected by the rise in AI, Minstix said it is an issue — mostly around the naughty or nice list.

“As you can imagine, no-one wants to be on Santa’s naughty list, so we have to be especially vigilant when it comes to accessing this highly sensitive document. And then there are the subversives — the Grinch and Krampus — who want to disrupt holiday cheer.”

“The threat vectors are systems access, where AI is being used to brute force passwords or support synthetic elf fraud — and, not least, deepfake Santas trying to access the system. There are fake Santas everywhere.”

“This is where Verifiable Credentials have transformed security. You wouldn’t believe how many elves had ‘I love Santa Claus’ as their password when we used centralized systems. Now, no more passwords, just instant seamless authentication. When you don’t have to store personal data for identity verification, it can’t be stolen, right?” 

Santa’s Workshop is now using Verifiable Credentials for seamless access to all systems, removing centralized data risks, and protecting against synthetic elf fraud and deepfakes.

“We started with a simple use case, ‘Know It’s Santa,’” said Minstix — “and then expanded to all our systems access and identity authentication. The next step is to use Verifiable Credentials to manage our Artificial Elf Intelligence. Like any organization, we’re always looking for ways to make the job easier and more efficient, and with Verifiable Credentials, we can start this digital transformation from a position of secure authentication.”

“Digitally-native elves,” Minstix added, “are not that much different from people — they expect seamless experiences. I guess you could say Verifiable Credentials are the gift that keeps on giving.”

Time to decentralize your identity and data in 2026 with the gift of Indicio Proven! Enjoy the holidays — and talk to us in the new year.

 

The post This holiday season, ho-ho-‘Know It’s Santa’ with Indicio Proven® appeared first on Indicio.


ComplyCube

How UK Crypto Trading Platforms Can Build Stronger KYC

Regulatory scrutiny on firms offering crypto services has grown sharply in the UK. Unlike legacy banks, meeting crypto KYC requirements demands tailored strategies, deeper checks, and ongoing adjustments to stay compliant.

The post How UK Crypto Trading Platforms Can Build Stronger KYC first appeared on ComplyCube.

Tuesday, 23. December 2025

Recognito Vision

How ID Verification Solves Identity Fraud in University Admissions

University admissions are increasingly moving online, but this shift has also created new risks. Identity fraud, fake applicants, and document manipulation are becoming serious challenges for higher education institutions. Without strong identity checks, universities risk admitting unverified students and damaging academic credibility.

Recognito helps universities address these risks through secure ID verification designed for modern admissions. Our technology supports reliable identity checks that protect institutions from fraud while keeping the application experience smooth for genuine students.

In this blog, you will learn how identity fraud impacts university admissions, why traditional checks fall short, how ID verification prevents fraud, and what universities should consider when implementing secure student verification systems.

The Growing Identity Fraud Problem in University Admissions

Universities are increasingly targeted by identity fraud because they manage sensitive personal data, academic records, and financial information for thousands of applicants each year. As online and remote admissions become more common, verifying applicant identities without face-to-face interaction has become a major challenge. This shift has made it easier for fraudsters to exploit gaps in traditional verification processes.

Many institutions still rely on basic document uploads and manual reviews. These methods are difficult to scale and easy to manipulate, especially when admissions teams must evaluate applications from different countries, languages, and document standards. As a result, universities commonly face the following types of admission fraud:

Impersonation during online admissions, where individuals apply using someone else’s identity
Fake or altered identity documents submitted to bypass manual verification checks
Proxy test takers and enrollment misuse, where third parties complete applications, exams, or onboarding steps

Identity fraud affects more than just admissions decisions. When unverified applicants are admitted, it can lead to invalid credentials, regulatory scrutiny, and long-term damage to institutional reputation. Over time, this erodes trust among students, faculty, employers, and accreditation bodies.

What Is ID Verification in Higher Education

ID verification in university admissions is the process of confirming that an applicant is who they claim to be. It typically involves validating government-issued identity documents and matching them to the applicant’s identity during the application process.

In digital admissions, ID verification supports secure digital identity creation for students. This allows universities to verify applicants remotely without requiring physical presence or manual document handling.

By using automated identity verification, institutions can improve trust, reduce identity fraud, and ensure that every admitted student has been properly verified before enrollment.

Why Traditional Admission Checks No Longer Work

Traditional admission checks rely heavily on manual review and outdated processes, which are no longer effective for large-scale and remote admissions.

Manual Verification Limits

Manual document checks are slow, inconsistent, and vulnerable to human error. Admissions teams often lack the tools needed to detect forged or altered identity documents accurately.

Challenges With Remote Admissions

Remote admissions make it difficult to confirm if the real applicant is present. Without biometric or live verification, impersonation and identity misuse become easier.

How ID Verification Prevents Admission Fraud

Modern ID verification combines automation, biometrics, and real time checks to stop fraud before it reaches enrollment.

Document Verification

Document verification validates the authenticity of identity documents by checking formats, security features, and data consistency. This helps detect fake or altered IDs early in the admissions process.

Liveness Detection

Liveness detection confirms that the applicant is physically present during verification. It prevents fraud attempts using photos, videos, or pre-recorded media.

Biometric Matching

Biometric matching compares a live facial scan with the photo on an identity document to confirm that the applicant is the same individual. To ensure accuracy and reliability, facial recognition systems are commonly measured against independent benchmarks such as the NIST Face Recognition Vendor Test (FRVT), which evaluates facial recognition performance under real-world conditions. Recognito aligns its biometric matching approach with these established testing principles, using the insights from 1:1 verification results to support accurate, consistent, and trustworthy identity verification across admissions workflows.
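
As a generic illustration of that 1:1 matching step (this is not Recognito’s implementation; the embedding source and the threshold value are assumptions), verification usually reduces to comparing two face embeddings against a tuned threshold:

```typescript
// A minimal sketch of 1:1 biometric verification, assuming a hypothetical
// face-embedding model; the threshold value is illustrative only.

type FaceEmbedding = number[];

// Cosine similarity between two embeddings of equal length.
function cosineSimilarity(a: FaceEmbedding, b: FaceEmbedding): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const MATCH_THRESHOLD = 0.8; // illustrative; real systems tune this against benchmark data

function isSamePerson(liveSelfie: FaceEmbedding, documentPhoto: FaceEmbedding): boolean {
  return cosineSimilarity(liveSelfie, documentPhoto) >= MATCH_THRESHOLD;
}
```

Real systems tune that threshold against benchmark data, which is exactly where evaluations like FRVT come in.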

Real World Use Cases in University Admissions

Universities worldwide are already using ID verification to strengthen admissions security and improve efficiency.

Online admissions portals verify student identity before application submission
International student onboarding uses remote identity verification to reduce fraud
Exam and assessment platforms prevent impersonation using biometric checks
Credential and degree verification processes link academic records to verified identities

These use cases help institutions protect academic integrity while supporting global student access.

Benefits of ID Verification for Universities and Admissions Teams

ID verification delivers long-term benefits that extend beyond fraud prevention.

Stronger Fraud Prevention

By verifying identity at the earliest stage, universities reduce impersonation, document fraud, and unauthorized enrollment. This protects institutional credibility and accreditation standards.

Faster and More Efficient Processing

Automated verification reduces manual workloads and shortens application review times. Admissions teams can focus on evaluating candidates rather than checking documents.

Trusted Student Verification

Verified identities increase confidence across enrollment, assessment, and degree issuance. This improves trust among students, faculty, and external partners.

Scalable Identity Verification for Growing Institutions

As application volumes grow, ID verification systems scale without adding operational complexity. Institutions can expand global reach while maintaining consistent security standards.

Key Considerations When Implementing ID Verification Systems

Universities should evaluate several factors before adopting an ID verification solution.

Compliance with data protection regulations such as the General Data Protection Regulation (GDPR)
Transparency around how student data is collected and used
Ease of use for students across devices and regions
Integration with existing admissions and student information systems

Institutions can also reference guidance from EDUCAUSE, a leading authority on higher education technology, for best practices in digital identity and security.

Getting Started With ID Verification

Universities should begin by identifying fraud risks within their admissions workflow and defining verification requirements. Recognito provides a flexible ID verification platform that supports secure admissions, remote student onboarding, and scalable identity checks. Institutions can explore technical resources and implementation examples through the official Recognito GitHub repository to better understand how secure student verification can be deployed efficiently.

Conclusion

Identity fraud is a growing threat in higher education. By adopting ID verification, universities can protect academic integrity, improve admissions efficiency, and build long term trust with students and stakeholders. As digital admissions continue to expand, secure identity verification is no longer optional. It is essential for the future of university admissions.

Frequently Asked Questions

What is ID verification in university admissions?

It is the process of confirming an applicant’s identity using documents and biometric checks.

How does ID verification prevent identity fraud?

It detects impersonation, fake documents, and unauthorized applicants before enrollment.

Is ID verification required for online admissions?

It is not always mandatory, but it is strongly recommended to reduce fraud risks.

What is liveness detection?

It ensures the applicant is physically present during identity verification.

Can international students be verified remotely?

Yes, ID verification supports secure remote verification for international applicants.


Spruce Systems

Reducing Fraud Without Slowing Down Services

Fraud prevention often creates friction for legitimate users. This post explores how modern verification, risk-based workflows, and selective disclosure can reduce fraud without adding delays.

Fraud prevention and user experience are often framed as opposing goals. Stronger controls are assumed to mean more steps, longer processing times, and frustrated residents. Faster services are assumed to invite abuse.

This tradeoff is not inevitable.

Modern digital services can reduce fraud while improving speed when identity verification and fraud prevention are designed as part of broader digital transformation efforts. Systems built on risk-based workflows, high-integrity data intake, and policy-driven decisioning can apply scrutiny proportionally. Advanced verification is focused where risk is highest, while low-risk interactions remain fast and accessible.

Why traditional fraud controls create friction

Many fraud prevention strategies rely on broad, uniform checks. Every applicant is asked for the same documents. Every transaction follows the same path. Manual review becomes the default safeguard.

This approach is common in agencies still reliant on paper forms, PDFs, and siloed legacy systems, but it is inefficient. Most users are legitimate. Applying the highest level of scrutiny to every interaction wastes time and staff capacity and undermines efforts to deliver accessible government forms and digital-first services.

Worse, blanket controls often fail to stop sophisticated fraud, which adapts to predictable rules. Uniform verification paths also make it difficult to differentiate between low-risk self-service actions and high-risk transactions that warrant deeper review.

Risk-based workflows change the equation

Risk-based workflows take a fundamentally different approach. Instead of treating all interactions the same, systems assess risk dynamically based on transaction context, identity assurance level, data quality, and program policy.

Low-risk interactions can proceed with minimal friction. Higher-risk cases trigger additional verification or review. Signals such as inconsistencies in submitted data, unusual patterns, or mismatches across systems inform these decisions.

This model aligns fraud prevention with workflow automation and service modernization. Intake, verification, and eligibility decisioning become distinct stages, governed by clear policies rather than manual judgment. Most submissions move straight through. Only a small percentage require enhanced checks or staff intervention.
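
One way to picture this is a routing function over a handful of risk signals. The signals, weights, and thresholds below are invented for illustration; real programs would calibrate them against their own policy and fraud data.

```typescript
// A sketch of risk-based routing, under the assumption that risk is scored
// from simple signals; the signal names and thresholds are illustrative.

interface Submission {
  identityAssuranceLevel: 1 | 2 | 3;   // higher is stronger
  dataInconsistencies: number;         // mismatches found at intake
  transactionValueUsd: number;
}

type Route = "auto-approve" | "additional-verification" | "manual-review";

function routeSubmission(s: Submission): Route {
  let risk = 0;
  if (s.identityAssuranceLevel === 1) risk += 2;
  if (s.dataInconsistencies > 0) risk += s.dataInconsistencies;
  if (s.transactionValueUsd > 10_000) risk += 2;

  if (risk === 0) return "auto-approve";            // most legitimate traffic
  if (risk <= 2) return "additional-verification";  // step-up checks only where needed
  return "manual-review";                           // a small share of high-risk cases
}
```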

Better data reduces fraud at the source

Fraud thrives on ambiguity. When intake data is inconsistent or loosely validated, it becomes easier to submit altered documents or exploit gaps between systems.

Modern form modernization and AI document processing reduce this uncertainty at the point of entry. Information is captured digitally, extracted into structured fields, and validated automatically. Requirements are enforced consistently. Anomalies are identified early, before they propagate downstream.

High-quality data strengthens fraud prevention while also enabling system interoperability, cross-program analysis, and longitudinal insights. Agencies gain better visibility without increasing burden on residents or staff.

Verification without over-collection

One of the most common sources of friction is overcollection of data. Applicants are asked to submit full documents when only a single attribute is required. Systems collect more information than they need in order to feel safe. This increases the burden on users and creates additional privacy risk.

Selective disclosure and attribute-level verification address this problem. Individuals can prove specific facts, such as eligibility, age, or residency, without sharing full records. Benefit eligibility can be verified without exposing unrelated personal information. Compliance can be demonstrated without retaining sensitive documents.

By limiting what is shared, systems reduce both friction and risk.

Attribute-level verification also reduces data retention and downstream exposure, simplifying compliance and audit requirements.
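
As a sketch of what an attribute-level request might look like (the claim names are illustrative and not tied to any particular credential format), the verifier asks for narrow, purpose-bound facts rather than a full document:

```typescript
// A sketch of attribute-level verification: request only the claims needed,
// and record why each was requested. Claim names here are illustrative.

interface AttributeRequest {
  claim: string;    // e.g. "age_over_18", "state_residency"
  reason: string;   // purpose shown to the user and kept for audit
}

const eligibilityCheck: AttributeRequest[] = [
  { claim: "age_over_18", reason: "Program requires adult applicants" },
  { claim: "state_residency", reason: "Benefit is limited to state residents" },
  // Deliberately absent: full date of birth, address history, or a document scan.
];

// Keep only claims that were actually requested, so nothing extra is
// collected or retained.
function filterToRequested(
  requested: AttributeRequest[],
  presented: Record<string, boolean>
): Record<string, boolean> {
  const allowed = new Set(requested.map((r) => r.claim));
  return Object.fromEntries(
    Object.entries(presented).filter(([claim]) => allowed.has(claim))
  );
}
```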

Identity as a fraud prevention signal

Strong digital identity and identity assurance signals are powerful tools for fraud prevention, but only when applied proportionally. Not every action requires the same level of verification.

Risk-based identity systems adjust verification strength based on transaction sensitivity and user context. Low-risk actions remain fast and accessible. High-risk actions receive additional scrutiny.

In this model, identity functions as a continuous signal within digital trust infrastructure, informing policy evaluation and workflow routing rather than acting as a one-time pass-or-fail checkpoint.

Automation that supports humans

Automation plays an important role in reducing fraud, but it must be designed carefully. Automated checks work best when they handle routine validation and pattern detection, freeing human reviewers to focus on complex or high-risk cases. When automation replaces judgment entirely, errors can scale quickly.

By combining structured data, clear rules, and human oversight, agencies can increase throughput without sacrificing accuracy.

Exception-based workflows ensure that human review is targeted, explainable, and proportional to risk.

When legitimate users experience delays, repeated requests, or opaque reviews, confidence in the system erodes. Faster, more predictable services signal competence and fairness.

Reducing fraud without slowing services helps agencies serve the public better while protecting resources. It also reduces incentives for workarounds and repeated submissions that can themselves create risk.

Designing fraud prevention for modern services

Effective fraud prevention is not about adding more steps. It is about making better decisions with better information.

Risk-based workflows, modern identity verification, selective disclosure, and high-integrity data intake allow agencies to reduce fraud while accelerating service delivery. Legitimate users move faster. Fraud becomes harder and more expensive.

When fraud controls are embedded directly into digital identity infrastructure, document intake, and workflow orchestration, security improves without becoming visible to the user, supporting both compliance and accessible digital services.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Kin AI

14 Ways Your Board of Advisors Can Help You Keep Your Sanity Over Christmas

A little extra help over the holidays

Hey all,

We’ve been busy over in Kin HQ, squashing bugs and working on feedback as quick as we can - apologies for the radio silence.

But, given that we’re deep into the festive season, we wanted to share some playful ways your personal board of advisors can support you over the holidays.

They’ll open directly in the Kin app, so check them out below.

1. Strategic Gift-Giving When You Forgot About Secret Santa🎁

In case you forgot (you’re welcome if we reminded you), your logical advisor can suggest a great gift that matches your secret santa in no time at all. For next year, remember you can always set a reminder in Kin 😉

Get emergency gift advice

2. “Your Family vs. My Family” Negotiations 👪

It’s never a fun topic, but it’s all too common. If you’re struggling to balance festive family traditions and expectations from both your family and your partner’s, consider talking to your personal relationship advisor.

Talk me through it

3. Why Am I Arguing About Politics With People I See Once a Year? 🤔

Your Values advisor can help you prepare some “safe” subjects, and some quick topic-changers to steer you away from politics, all tailored around the conversations your loved ones like to have (and which ones you don’t).

Help me keep the peace

4. Managing the Energy Cost of Explaining What I Actually Do 🙄

We all have someone who asks what we do every single year, and understands exactly none of the times we explain it. Your personal energy and pacing advisor can work with you on the most energy-efficient ways to do it, while helping you schedule in psychological recovery time.

Make this easier

5. How to Fake Loving a Present You Hate 😶

As weird or ridiculous as the gift is, your loved one’s heart is often in the right place - so don’t break it. Chat to your personal social advisor about how to hide your true feelings and avoid breaking the Christmas peace.

Teach me to perform

6. “Why Aren’t You Married Yet?” Defense Playbook 😬

Another staple of the holidays here - we’ve pre-prompted your personal relationships advisor through this link to help you think about who might ask you a variation of this question, and how you’ll respond without trapping yourself in an awkward lecture.

Get me through this

7. My Drunk Cousin Wants to Pitch Me His Startup Idea 📈

You know the cousin we mean. Your personal career and logic advisor is on hand not only to prepare responses, but to help you evaluate your cousin’s latest scheme, and get back to him with some useful pointers without requiring you to completely abandon your festive spirit.

Evaluate this... gently

8. Post-Christmas Dinner Food Coma: Rest or Weakness? 🍽️

The inevitable Christmas Day laze is a sacred part of the holiday for many of us - but that doesn’t mean it always comes without guilt or worry. Your personal energy and pacing advisor can help you explore your annual food coma, and whether it really is the rest you need (spoiler: it probably is).

Validate my life choices

9. My Cousin Just Bought a House and I’m Still Renting 🥴

How did they do it? No one knows. If you fear you feel an existential crisis coming on, reach out to your personal Values advisor- she’s got a knack for putting things back in perspective, and helping you reinforce your goals and self-worth.

Talk me down

10. Should I Reply to This Investor Email on Christmas Day? 💻

The boundaries between work and home keep blurring (which we have a whole article on). Your personal career and logic advisor can help you decide whether that message in your notification bar is worth letting work interrupt the festive cheer.

Boundary check me

11. Escape Plan: My Uncle’s 45-Minute Crypto Monologue 🏃‍♀️

Getting cornered by monologues is another holiday staple - and we’ve pre-prompted your personal social advisor through this link to help you identify who’s most likely to trap you in a ramble, and build an escape plan if your evasion efforts fail. (You can thank us later).

Plan my freedom

12. “My Mom’s Friend’s Son Also Does That” 🤓

Do they really? Do you know university IT support is in fact very different to an AI startup?

Again, we’ve prepped your personal social advisor to help you figure out how to navigate that situation when it happens without insults being hurled or sanity being lost.

Navigate this gracefully

13. Easy Festive Games People Will Actually Play 🃏

There’s nothing worse than a silent room of family and friends on Christmas, where you’re just waiting for someone to start an argument about a long-dead politician. Consider speaking to your personal social advisor about who’s in the room, and what simple games, like charades or hangman, might keep them entertained and civil.

Keep everyone civil

14. “I’m Not Hungover, I’m Recalibrating My Circadian Rhythm” 😴

And of course, no festive period is complete without using being drunk as an excuse to rest, or needing to explain how you just need to rest when you’re actually drunk.

Your personal energy and pacing advisor can help you polish up your script here, with much better lines than the title we used - and all tailored to the way you and your family speak to each other.

Help me sell this

Happy Holidays from the KIN team!

Above all though, we want to wish you a merry Christmas and a joyful festive period from myself and the entire Kin team.

You’ve supported us through Kin’s biggest year of life to date, and we couldn’t be more proud or appreciative of the community we’ve created. We especially appreciate those of you who took the time to read to the bottom 😉

The whole KIN team is doing our best to make Kin even better for you - though we’ll be taking a small break for the holidays.

If you need us, we’ll be keeping an eye on support@mykin.ai, our official Discord server, and any bug reports that come in. But we’ll also be getting some rest.

So, keep giving us feedback, keep using Kin, and keep looking after yourselves. Get some rest, too

We’ll see you in the new year!

With love,

The KIN Team


ComplyCube

The CryptoCubed Newsletter: Top 5 AML Crypto Fines in 2025

In 2025, we witnessed regulators increasing AML fines for crypto firms. In this December edition, we cover the largest crypto fine cases involving OKX, KuCoin, Cryptomus, BitMEX, and Paxos. Avoid AML penalties effectively today.

The post The CryptoCubed Newsletter: Top 5 AML Crypto Fines in 2025 first appeared on ComplyCube.


Indicio

Gartner highlights Indicio as a leader in decentralized identity for AI and machines

Gartner’s 2025 Market Guide identifies Indicio as the only vendor explicitly addressing decentralized identity for AI agents and machines. Indicio ProvenAI provides a way for AI agents to hold verifiable identities, authenticate customers, obtain consent to access personal data, and use delegate authority to share that data with other verifiable agents in an autonomous and compliant way.

By Helen Garneau

Gartner’s Market Guide for Decentralized Identity marks an important moment for the digital identity verification market. The 2025 report describes a clear shift away from experimental pilots and toward real-world deployment of decentralized identity infrastructure. Among a diverse set of vendors, Indicio is recognized not only as a representative provider of decentralized identity solutions, but as the only company addressing a critical digital identity challenge: Verifiable identities for AI agents and decentralized governance for autonomous systems.

This distinction matters. 

Much of the decentralized identity conversation to date has focused on human users: digital wallets, government-issued credentials, workforce identity, and consumer use cases. Gartner’s analysis looks further ahead. 

As organizations deploy AI agents and autonomous systems to perform tasks and interact with one another and across systems to do so, those agents must be able to prove who they are, what they are authorized to do, and under what conditions they can access systems and data.

Traditional federated identity and access management approaches were not designed for this agent-powered world. Certificates, shared secrets, and static credentials struggle to scale across dynamic environments where machines act independently, collaborate with other agents, and operate across organizational boundaries using personal or high-value data.

Gartner points to decentralized identity as a viable alternative, particularly for newer devices and AI-driven systems that require stronger assurance, delegated authority, and clearer trust relationships.

Within this context, Gartner explicitly points to Indicio Proven AI as a solution for managing AI agents in a decentralized way. No other vendor in the report is singled out in this way.

The reference appears alongside discussions of emerging machine-driven use cases, including IoT, operational technology, smart vehicles, digital twins, and product lifecycle tracking. All of these scenarios depend on the same core requirement: the ability to identify, authenticate, and authorize machines without relying on centralized identity stores.

Gartner’s inclusion of AI agents in the decentralized identity narrative signals a broader market shift. AI systems are no longer confined to experiments or narrowly-scoped tasks. They are being embedded into enterprise workflows, supply chains, financial systems, and digital services. 

As they take on more responsibility, the question of trust becomes unavoidable. Who issued this agent its authority? What data is it allowed to access? For how long? Can that authority be verified cryptographically and audited over time?

Indicio’s work in decentralized identity for AI agents directly addresses these questions, and provides a way to prevent fake AI agents from exploiting high-level access to data — something legacy identity solutions cannot do. 

Additionally, Indicio’s decentralized governance, which is based on the Decentralized Identity Foundation’s Credential Trust Establishment specification, allows companies to create trust networks for authenticatable agents, making it easy to establish and maintain human governance over autonomous systems.

The Market Guide also cautions that the decentralized identity market will experience consolidation as adoption accelerates. Gartner advises organizations to work with vendors that demonstrate real deployments and the ability to support evolving use cases over time. Indicio’s recognition, particularly in the emerging area of AI and machine identity, reflects a level of maturity and foresight that sets it apart.

As decentralized identity moves into its execution phase, the scope is expanding. Beyond proving a person’s identity, it is about establishing trust across entire digital ecosystems, including the AI agents and machines that will increasingly act on our behalf. 

Gartner’s report makes clear that this future is already taking shape, and Indicio is helping define what comes next.

Talk with one of our digital identity experts about how you can gain a competitive edge with Verifiable Credentials and build the internet of tomorrow, today.

 

The post Gartner highlights Indicio as a leader in decentralized identity for AI and machines appeared first on Indicio.


Spherical Cow Consulting

Web Payments and Digital Identity Standards Are Converging – #TIL

In this episode Heather Flanagan examines how web payments and digital identity are converging at the W3C, exploring digital wallets, browser-based APIs, and regulatory pressure shaping modern payment flows and trust on the web today as standards discussions reveal shifting assumptions across ecosystems. Discover how Secure Payment Confirmation, passkeys, browser-bound keys, and the Digital Cred

“Attending standards development meetings is not dissimilar in some ways to attending a more typical conference: there is always more to do than there is time to do it.”

In my case, I very much wanted to learn more about what’s going on in the web payments world. That world is increasingly overlapping with the tech world I’m more familiar with: digital identity. Identity verification has always been a thing, but web payments already had their own stack of processes and regulations for that. Now? It’s all verifiable digital credentials, digital wallets, and so much more.

Of course, just like with digital identity, I’ve yet to find decent 101-level material that helps someone know where to even start. Diving directly into meeting notes is a bit of going off the deep end and hoping for the best. Oh well!

Which brings me to this blog post. Since I need to spend some quality time with the meeting minutes from the Web Payments Working Group (WPWG) and the Web Payment Security Interest Group (WPSIG) sessions held at the W3C TPAC 2025, I’m going to share what I’m learning with you while noting that I’m not at all an expert in this space. This is how I learn about new communities and their work—by reading, listening, and asking, “why does it work that way?” until the picture comes together.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Two groups, two charters, one very tangled problem space

Before diving into the fun bits, it helps to understand who’s doing what.

The Web Payments Working Group (WPWG)

This is the group that produces actual web standards. If you want to define or evolve a browser API related to payments—Payment Request API, Payment Handler API, Secure Payment Confirmation (SPC)—this is where that work happens. The conversations here tend to be grounded in:

implementability in browsers,
web developer ergonomics,
interoperability across platforms, and
the slow, steady march toward something usable by both merchants and consumers.

The Web Payments Security Interest Group (WPSIG)

This group, on the other hand, doesn’t produce standards. Its job is to bring together a much broader ecosystem—FIDO Alliance, EMVCo, OpenWallet Foundation, payment networks, browser vendors, and others—to talk about how everything fits together and where the sharp edges are. It’s where wallet experts, identity experts, payments experts, and regulatory watchers all try to make sense of each other.

When you’re in the thick of it, it’s difficult to really parse the difference, because if you’re interested in one, you’re probably interested in the other. That’s exactly why the boundaries blur and the song, “It’s a Small World,” starts running through people’s heads. Both groups touch on payments and identity. Both care about trust, fraud, user experience, and emerging wallet models. But the groups ultimately hold different levers.

WPWG: builds the browser plumbing.

WPSIG: coordinates the rest of the ecosystem so the plumbing leads somewhere safe.

Where the conversations converged: digital wallets, identity, and trust

So, let’s dive into the minutes. They are public, and you can read them here:

Web Payments Working Group – TPAC Minutes from 9 (or 10, depending on where you stand in relation to the International Date Line) November 2025
Web Payments Working Group – TPAC Minutes from 10 (or 11) November 2025
Web Payments Security Interest Group – TPAC Minutes 12 November 2025

Reading through the minutes, you can see the overlapping concerns loud and clear. (As it should be; it would be weird if they talked about completely different things.) Payments may have their own decades-long history of risk controls, but the shift toward digital wallets and verifiable credentials is reshuffling assumptions across the board, and that definitely shows up in the conversations.

A few themes stood out.

1. Wallet interoperability is becoming unavoidable

This comes up in both groups. The wallet landscape is fragmented:

OpenWallet Foundation is pushing for open, interoperable models.
EMVCo is defining identity and payments task forces.
FIDO Alliance is focusing on strong authentication and wallet certification.
Browser vendors are building APIs like SPC, Payment Request API, and the Digital Credentials API.

Everyone is looking at everyone else. There’s a real sense that wallet experiences can’t stay siloed inside one organization’s worldview.

The WPSIG discussion even entertained the idea of a W3C Digital Wallets Interest Group, separate from the security IG, to coordinate wallet architecture and expectations across the many SDOs involved. I’m not sure if another group will improve things, but it’s an interesting idea.

2. Identity verification is no longer “outside scope” for payments

The old assumption that “identity is for identity people; payments already solved this” is breaking down. Japan shared updates showing a shift from selfie-matching eKYC to IC-chip-based national IDs. European regulators are tightening requirements through PSD3 and AML6.

And in many presentations—including Mastercard, Visa, Meta, Rakuten—participants kept coming back to variations of the same question:

How much identity should a wallet (or browser) present during a payment flow? And who decides?

That’s where the Digital Credentials API started appearing in conversation.

Where the DC API enters the story

If you’ve been following my writing on the W3C Digital Credentials API (DC API), you’ll know it’s designed to let websites request credentials—anything from age claims to verifiable IDs—through an interoperable browser API. I knew the web payments people were paying attention, but I didn’t really grasp just how closely they are watching this API.

Payments may become one of the earliest large-scale adopters of the DC API

A few highlights from the WPWG meeting:

Implementers see a natural fit between the Payment Request API (checkout orchestration) and the DC API (structured credential exchange).

One proposal suggested treating the DC API as a payment method within the Payment Request API. In this model, a merchant could initiate a payment flow where the DC API provides the necessary credentials, rather than relying exclusively on traditional authentication channels. Supporters of this idea saw it as a way to bring identity-based or credential-backed payments into a standardized browser framework.

People who were less supportive were concerned about expanding the DC API into general payment use cases, particularly where it might increase the spread of identity requirements or blur regulatory boundaries. One person emphasized that the DC API was intentionally designed as a one-shot credential presentation mechanism with explicit user interaction; embedding it inside automated or merchant-driven payment flows could undermine those guardrails.

There’s momentum around “credential bundles”—where a wallet might return both a payment credential and auxiliary credentials (e.g., phone number, email, proof of age) in a single flow.

The DC API’s “one-shot” design (a specific request → a specific credential) fits some payment use cases surprisingly well. (A rough sketch of what the “DC API as a payment method” idea could look like follows this list.)
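I find it easier to reason about proposals like this with something concrete in front of me, so here is a rough sketch of what “DC API as a payment method” might look like from a merchant page. To be clear: this is my own illustration, not anything the working group has specified. The method identifier, the shape of the data object, and the openid4vp protocol hint are all assumptions on my part; only the Payment Request API calls themselves (new PaymentRequest, show(), complete()) are real, shipping APIs.

```typescript
// Hypothetical sketch: requesting a credential presentation as part of checkout.
// The payment method identifier and data shape below are illustrative only.
async function checkoutWithCredential(): Promise<void> {
  const methodData: PaymentMethodData[] = [
    {
      // Hypothetical identifier for a DC API–backed payment method.
      supportedMethods: "https://example.test/digital-credential-payment",
      data: {
        // Hypothetical request: which credential(s) and claims the merchant wants.
        credentialRequest: {
          protocol: "openid4vp", // assumed; the actual protocol binding is undecided
          claims: ["payment_credential", "phone_number"],
        },
      },
    },
  ];

  const details: PaymentDetailsInit = {
    total: { label: "Order total", amount: { currency: "USD", value: "42.00" } },
  };

  const request = new PaymentRequest(methodData, details);

  // show() triggers the browser-mediated UI; the user must explicitly act, which
  // is the guardrail the DC API was designed around.
  const response = await request.show();

  // In this proposal, response.details would carry the presented credential(s)
  // for server-side verification before the payment is authorized.
  console.log("Presentation returned:", response.details);
  await response.complete("success");
}
```

The interesting part is that the browser-mediated show() step is exactly where the TAG’s concerns land: the user has to act, and the guardrail only holds if that stays true.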

But the big tension is privacy.

The TAG is very wary of the DC API being used as a stealth identity requirement in payments

TAG feedback, repeated in the meeting, was that the DC API must not become a way for merchants to refuse service unless a user hands over identity information. This is exactly the kind of privacy creep standards bodies want to avoid.

And the payment world is… complicated. Identity requirements often stem from regulation, not merchant preference. Age verification, for example, is generally required only when regulators turn that into law.

So payments may need the DC API for certain cases, but cannot rely on it for everything, and must stay within privacy-preserving guardrails.

This is the tension that will shape the DC API + payments integration over the next few years.

Secure Payment Confirmation (SPC) sits at the center of all of this

SPC appears in both groups’ discussions, and for good reason. It’s one of the few browser features explicitly designed for payment authentication, and everyone wants it to work well.
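If you haven’t seen SPC in the wild, the basic shape is worth a look. Below is a minimal sketch of how a merchant or payment service provider might invoke it through the Payment Request API. The field names reflect my reading of the SPC draft as implemented in Chromium; the credential IDs, rpId, origins, and card art are placeholders, so treat this as orientation rather than an integration guide.

```typescript
// Minimal sketch of Secure Payment Confirmation via the Payment Request API.
// Values are placeholders; verification of the resulting assertion happens
// server-side (for example, inside a 3DS flow) and is not shown here.
async function confirmWithSPC(credentialId: ArrayBuffer, challenge: ArrayBuffer) {
  const request = new PaymentRequest(
    [
      {
        supportedMethods: "secure-payment-confirmation",
        data: {
          credentialIds: [credentialId], // WebAuthn credential(s) enrolled for SPC
          challenge,                     // server-generated, single-use
          rpId: "bank.example",          // relying party that created the credential
          payeeOrigin: "https://merchant.example",
          instrument: {
            displayName: "Visa •••• 4242",
            icon: "https://bank.example/card-art.png",
          },
          timeout: 60000,
        },
      },
    ],
    { total: { label: "Total", amount: { currency: "USD", value: "25.00" } } }
  );

  // The browser shows the transaction dialog and collects a WebAuthn assertion.
  const response = await request.show();
  await response.complete("success");
  return response.details; // assertion to be verified by the issuer/ACS
}
```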

What’s working

Users like SPC when it works—Visa shared some user research showing strong reactions to the UI.
SPC is faster and smoother than traditional 3DS flows that use OTP.
Chrome is shipping improvements steadily on Android and desktop.
Bindings like browser-bound keys (BBKs) reduce fraud and reduce reliance on cross-device sync by providing a possession factor, something particularly important in regulated payment environments.

What’s not working

iOS has no SPC support. This came up repeatedly as a major barrier to adoption.
Enrollment friction remains high—passkey flows still confuse users.
Cross-device syncing is unreliable, especially when accounts differ across devices.
Merchants are hesitant about flows they can’t customize, especially around branding. It’s not a major blocker when it comes to purely the authentication aspects, but handing over control to the browsers in other aspects of the payment request UX makes them itchy.

The mood in the room: maybe it’s time to stop centering passkeys for SPC

I love passkeys. The meeting notes suggested, however, that passkeys still have a ways to go when it comes to user understanding. That led some people to suggest a BBK-only SPC, where the browser stores a key tied to the device and payment instrument without any of the passkey synchronization challenges.

It’s not settled yet, but that’s why we have these discussions.

Agentic AI makes everything more interesting (and more complicated)

Both groups discussed agentic AI—systems that can autonomously initiate actions like purchases, bookings, or recurring transactions.

What stood out is how quickly AI is forcing these conversations:

No one wants AI systems impersonating users through passkeys.
There is interest in agent-initiated payments authorized by users, similar to mandate management.
Stripe and AP2 demos showed early patterns for agent-initiated commerce.
WPSIG is watching this closely and wants coordination with the forthcoming AI and Web IG.

This is one of those areas where identity and payments are going to collide repeatedly. Agents need to know enough about a user to make decisions, but not so much that they become a surveillance nightmare or fraud catalyst.

Fraud is the ever-present pressure point

Across both groups, the concerns repeat:

Cross-device phishing
Abuse of synced passkeys
Lack of UI trust signals
The need for better cryptographic binding
How SPC, DPC, and DBSC change the risk landscape
Whether wallets can meaningfully contribute to fraud detection without over-collecting data

The Anti-Fraud CG joined WPSIG for part of the meeting, and trust signals—especially for cross-device CTAP flows—were a major topic.

This is an area where identity and payments will need shared guidance. Fraudsters don’t care about charters.

So what’s the difference between WPWG and WPSIG again?

After reading two days of meeting minutes from each group, here’s the distinction in practical terms:

WPWG

Defines browser APIs.
Works through implementation details, developer ergonomics, and test suites.
Debates things like error codes, mediation modes, and API integration.
Thinks in terms of “can we ship this?”

WPSIG

Pulls in the whole industry.
Surfaces pain points across the payments ecosystem—wallet certification, regulatory impacts, agentic AI, fraud.
Helps organizations see around corners.
Thinks in terms of “will this fit into the world we actually live in?”

Both are essential and influence each other’s work. And, as is probably clear by this point, both are working toward a future where digital wallets, identity credentials, and payment mechanisms can coexist without breaking user expectations or regulatory sanity.

But they do different jobs. And in a space as tightly interconnected as payments and identity, it’s completely understandable that observers (and newcomers like me) blur them together.

Why I’m watching this area so closely

It’s rare to see two worlds—payments and digital identity—colliding this directly in standards bodies. Payments folks are realizing that credentials matter. Identity folks are realizing that wallets aren’t just for government IDs. Browser vendors are trying to build something usable without getting caught in regulatory crossfire.

And everyone is trying to figure out how to handle AI before AI handles them.

For me, digging into these minutes is a way of understanding how these communities think, where they’re cautious, where they’re bold, and where they need help. Even if I’m not an expert here, I can already see how this work will spill into everything from consumer wallets to enterprise authentication to the regulatory debates coming down the road. And, full disclosure, I may have interpreted some of what’s in the minutes incorrectly. These are a place to start, not a place to end.

I’ll share more as I keep learning. Your mileage may vary, but I find these intersections fascinating—and often a little chaotic. Which is exactly why I keep going back.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Why Web Payments Suddenly Matter to Identity

00:00:30

When attending standards meetings, one reality always stands out: there is far more happening than any one person can reasonably track. That truth was especially apparent during W3C TPAC, the annual meeting of the World Wide Web Consortium.

Although payments are not my primary domain, I found myself drawn to the Web Payments discussions—not because I plan to reinvent myself as a payments expert, but because payments and digital identity are beginning to overlap in ways that are impossible to ignore.

Since I couldn’t attend those sessions directly, I did what many standards practitioners do: I read the meeting minutes carefully and looked for patterns beneath the surface.

Identity Has Always Been Part of Payments—But Something Has Changed

00:01:45

Identity verification in payments is not new. Risk models, regulatory frameworks, and authentication requirements have existed for decades.

What is new is how these requirements are increasingly expressed using the language of:

Digital credentials
Digital wallets
Browser-based APIs

This post—and the accompanying audio—reflects a thinking-out-loud process about what those changes signal, especially for people working in digital identity.

The Two W3C Groups Shaping Web Payments

00:02:30

At the W3C, two groups consistently surface in web payments discussions, each with a distinct role.

Web Payments Working Group

This group builds browser standards, including:

Payment Request API
Payment Handler API
Secure Payment Confirmation (SPC)

Their focus is concrete and implementation-driven. They ask:

Can this ship across browsers?
Will developers use it?
Does it improve user experience without breaking the web?

Web Payment Security Interest Group

The Security Interest Group does not produce specifications. Instead, it serves as a coordination forum across:

Browser vendors
Payment networks
Wallet initiatives
Other standards organizations

This group compares notes, surfaces shared problems, and examines how different standards efforts intersect—or collide.

Wallet Fragmentation Is No Longer Optional

00:03:40

Across both groups, several themes appear repeatedly:

Digital wallets
Fraud prevention
Regulation
User trust
Digital identity

One strong signal from the meetings was that wallet interoperability is no longer optional. The ecosystem is crowded, including:

Payment wallets
Identity wallets
Government-issued wallets
Browser-managed credentials
Platform-specific solutions

Everyone agrees fragmentation is harmful, but no one has a single, clean solution.

Regulatory Pressure from Japan and Europe

00:04:20

Meeting minutes highlighted presentations from Japan and Europe that revealed similar pressure points despite very different regulatory environments.

Japan

Older approaches to identity proofing are being phased out. Specifically:

Selfie-based eKYC is proving too easy to spoof
Phishing-resistant authentication is becoming mandatory
Passkeys are explicitly named in government guidance

There is also a shift toward:

IC chip–based identity cards
Device-bound authentication models

At the same time, Japan’s payments ecosystem remains highly diverse, relying on:

QR codes
Domestic payment networks
Transit-driven adoption

This diversity does not align neatly with platform-centric wallets like Apple Pay or Google Pay.

Europe

Europe faces similar challenges under tightening regulations such as:

PSD3
AML6

These frameworks increase requirements around:

Fraud prevention
Auditability
Accountability

As a result, institutions are being asked not just whether a user authenticated, but how—and whether that process can be demonstrated after the fact.

The Digital Credentials API Enters the Payments Conversation

00:06:20

This is where the Digital Credentials API (DCAPI) begins to surface in web payments discussions.

Originally, the DCAPI was designed as:

Privacy-conscious
User-mediated
Intentionally scoped

It was never meant to be a generic data pipe.

However, what stood out in the Web Payments Working Group minutes was a sustained discussion about whether the DCAPI could play a role in payment flows.

Ideas raised included:

Allowing payment flows to invoke the DCAPI directly
Treating the DCAPI as a payment method within the Payment Request API

This represents a meaningful shift in thinking—and one not currently centered in DCAPI-focused working groups.

Credentials as Part of Checkout

00:07:10

In practical terms, participants discussed scenarios where a wallet might return a digital credential during checkout, such as:

A credential containing a cryptographic proof
Selective disclosure of attributes (e.g., a phone number for risk analysis)

The intent was not to replace existing payment rails, but to allow credentials to become part of how payments are authorized and confirmed.

At the same time, there was clear caution. The DCAPI was deliberately designed as a one-shot interaction with explicit user involvement.

Expanding it into routine payment flows risks embedding identity requirements that are difficult—or impossible—to roll back.

Secure Payment Confirmation at a Turning Point

00:08:05

Secure Payment Confirmation (SPC) is one of the few browser features designed explicitly for payments.

When it works well, SPC:

Is faster than traditional 3DS flows
Reduces phishing risk
Clearly positions the browser as a trust mediator

However, discussions revealed growing tension around passkeys in payment contexts.

Challenges include:

Unreliable cross-device syncing
Confusing enrollment flows
Difficult-to-explain failure modes
Perceived redundancy in layered risk checks

The Rise of Browser-Bound Keys

00:09:00

An emerging concept discussed was “BBK-only SPC,” where BBK stands for browser-bound key.

In this model:

The browser generates a device-bound cryptographic key
The key is tied to a specific payment instrument
No cross-device syncing is required

This approach offers several advantages:

Reduced enrollment friction
Simpler mental model for users
Strong device binding for fraud prevention

Notably, participants openly questioned whether passkeys were ever the right abstraction for payment confirmation—a significant statement given their momentum elsewhere.

Agentic AI and Mandated Payments

00:10:30

Another fast-emerging area is agent-initiated payments driven by AI systems.

While everyone agreed agents should not impersonate users, there was alignment around a mandate-based model:

Users authorize actions upfront
Agents operate within defined boundaries

This immediately raises familiar identity questions:

How is consent represented? How is it verified? How is it revoked? How is it audited?

These are longstanding identity challenges that are now becoming unavoidable in payments.

Fraud as the Constant Pressure

00:11:40

Underlying all of these discussions is fraud:

Cross-device phishing
Credential misuse
Replay attacks

These threats do not respect working group boundaries. As a result, conversations about SPC, DCAPI, device-bound credentials, and wallet trust signals all circle the same question:

How do we reduce risk without creating systems that are brittle, invasive, or impossible to deploy globally?

Two Groups, One Set of Fault Lines

00:12:20

The Web Payments Working Group and the Web Payment Security Interest Group approach the same challenges from different angles.

The Working Group focuses on what browsers can ship today
The Interest Group examines regulatory pressure, industry alignment, and emerging risks

They were designed to be complementary—and the minutes suggest that collaboration is increasingly effective.

Final Reflections on Identity and the Future of Payments

00:12:50

Spending time with the meeting minutes was not about mastering payments. It was about noticing how quickly digital identity concepts are becoming foundational in domains that once believed identity was already solved.

Key takeaways include:

Wallets are no longer just containers
Credentials are not limited to government use cases
Browsers are active trust mediators
Payments are deeply intertwined with identity ecosystems

The overlap between payments and identity will only continue to grow. Watching that convergence closely is one of the best ways to understand where the web is headed next.

Closing and Call to Action

00:13:08

Thanks for spending time with this episode of the Digital Identity Digest. If it helped make things clearer—or simply more interesting—please share it with a colleague.

You can connect with me on LinkedIn and find the full written version at sphericalcowconsulting.com. Stay curious, stay engaged, and let’s keep these conversations going.

The post Web Payments and Digital Identity Standards Are Converging – #TIL appeared first on Spherical Cow Consulting.


Elliptic

Crypto regulatory affairs: OCC gives US banks go-ahead on riskless crypto transfers

In this second December edition of crypto regulatory affairs, we will cover:


Monday, 22. December 2025

Spruce Systems

Interoperability Without Lock-In: Why Standards Matter

Vendor lock-in slows innovation and increases long-term cost. This post explains how open standards enable interoperability across agencies, vendors, and platforms — while preserving flexibility.

Interoperability is a goal nearly every government agency shares. Systems should work together. Data should move securely between programs. New services should build on what already exists as part of broader government modernization and digital transformation efforts.

Yet many modernization initiatives struggle to achieve this vision because interoperability is pursued through tools instead of standards. Point solutions integrate well at first, but over time they create dependencies that are difficult and expensive to unwind. Innovation slows. Costs rise. Flexibility disappears as agencies accumulate tightly coupled systems and bespoke integrations.

Open standards offer a different path. They make interoperability durable, portable, and resilient across agencies, vendors, and platforms while supporting long-term system integration and modernization without forcing lock-in.

Lock in is an architectural problem, not a procurement mistake

Vendor lock in is often blamed on contracts or purchasing decisions. In reality, it is usually the result of architecture. When systems rely on proprietary data formats, custom APIs, or closed identity models, switching vendors becomes risky and modernization efforts stall. Integrations must be rebuilt. Data must be transformed. Staff retraining becomes unavoidable. Even small changes feel disruptive.

These costs compound over time, making agencies hesitant to adopt new capabilities or modernize legacy systems. The system may technically work, but it no longer adapts. Avoiding lock in requires designing for change from the beginning as a core modernization principle.

Architectures built on open protocols like OAuth 2.0, OpenID Connect (OIDC), SAML 2.0, and SCIM allow identity, access, and provisioning layers to evolve independently of any single vendor implementation while supporting interoperable digital services.
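As a simplified illustration of that decoupling, an OpenID Connect relying party can locate any compliant provider’s endpoints through the standardized discovery document rather than through hard-coded integration logic. The issuer URL below is a placeholder; the pattern, not the provider, is the point.

```typescript
// Sketch: OpenID Connect Discovery lets a relying party learn a provider's
// endpoints from a well-known URL, so providers can be swapped via configuration.
interface OidcDiscovery {
  issuer: string;
  authorization_endpoint: string;
  token_endpoint: string;
  jwks_uri: string;
}

async function discoverProvider(issuer: string): Promise<OidcDiscovery> {
  // Discovery document location is standardized by OpenID Connect Discovery 1.0.
  const res = await fetch(`${issuer}/.well-known/openid-configuration`);
  if (!res.ok) {
    throw new Error(`Discovery failed: ${res.status}`);
  }
  return (await res.json()) as OidcDiscovery;
}

// Swapping identity providers means changing configuration, not integration code.
discoverProvider("https://idp.agency.example").then((cfg) =>
  console.log(cfg.authorization_endpoint)
);
```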

Standards define how systems work together

Standards create a shared language between systems. They define how data is structured, how identity is represented, how access is verified, and how information is exchanged across modern digital services. When systems adhere to common standards, they can interoperate without knowing the internal details of one another.

This decoupling is powerful. Agencies can change vendors. Vendors can improve products. New services can be added without breaking existing integrations or disrupting mission-critical workflows.

Standards shift interoperability from a series of custom projects to a built in capability that scales across programs and agencies.

In practice, this often means APIs defined using OpenAPI specifications, data validated with JSON Schema, and event driven and workflow-based integrations that rely on documented contracts instead of informal assumptions.
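A brief sketch of that pattern: incoming data is validated against a published JSON Schema at the boundary, so the contract, rather than the receiving system, decides what is acceptable. The schema below is illustrative, not an actual agency specification, and uses the widely adopted Ajv library.

```typescript
// Sketch: validating an exchanged payload against a documented JSON Schema contract.
import Ajv from "ajv";

const ajv = new Ajv();

// Illustrative contract for a benefit-request payload (not a real specification).
const benefitRequestSchema = {
  type: "object",
  required: ["applicantId", "program", "submittedAt"],
  properties: {
    applicantId: { type: "string" },
    program: { type: "string", enum: ["SNAP", "MEDICAID", "UNEMPLOYMENT"] },
    submittedAt: { type: "string" },
  },
  additionalProperties: false,
};

const validate = ajv.compile(benefitRequestSchema);

const incoming = {
  applicantId: "A-12345",
  program: "SNAP",
  submittedAt: "2026-01-10T14:22:00Z",
};

if (!validate(incoming)) {
  // Reject at the boundary: the published contract defines validity.
  console.error(validate.errors);
} else {
  console.log("Payload conforms to the published contract");
}
```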

Interoperability across agencies and programs

Government services rarely exist in isolation. Residents interact with multiple agencies. Programs depend on information from other systems. Data sharing is essential to efficient service delivery and accessible digital government experiences.

Standards make this possible without forcing uniformity. Agencies can maintain their own systems of record while exchanging well defined data where appropriate through secure, policy-driven interfaces. Access controls and purpose limitations travel with the data.

This approach reduces duplication and makes collaboration practical rather than burdensome while supporting cross-program modernization.

For identity and eligibility use cases, standards such as W3C Verifiable Credentials and ISO/IEC 18013-5 (mobile driver’s license) enable agencies to verify information without direct database access or repeated document collection as part of modern digital identity infrastructure.
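For orientation, the example below shows the general shape of a W3C Verifiable Credential. The credential type, issuer identifier, and subject attributes are illustrative placeholders; real deployments layer on profile-specific fields and selective disclosure mechanisms.

```typescript
// Illustrative W3C Verifiable Credential (data model v1.1); values are placeholders.
const credential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "DriverLicenseCredential"],
  issuer: "did:example:state-dmv",
  issuanceDate: "2026-01-02T00:00:00Z",
  credentialSubject: {
    id: "did:example:resident-123",
    licenseClass: "C",
    ageOver21: true,
  },
  proof: {
    // Signature produced by the issuer's key; a verifier checks it without
    // calling back into the issuing agency's database.
    type: "DataIntegrityProof",
  },
};

console.log(JSON.stringify(credential, null, 2));
```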

Flexibility without fragmentation

One concern agencies sometimes raise is that standards limit flexibility. In practice, the opposite is true. Standards define interfaces, not implementations. Agencies and vendors are free to innovate behind those interfaces using different technologies and platforms. Different solutions can coexist as long as they speak the same language at the boundary.

This allows agencies to adopt best of breed solutions over time instead of committing to a single platform for every need and supports incremental modernization rather than large-scale replacement. It also encourages competition, which drives quality and cost efficiency.

Standards-based architectures make it possible to swap components such as identity providers, document processing tools, or fraud detection services without rearchitecting end-to-end workflows.

Identity and data standards as force multipliers

Some of the most impactful standards relate to identity and data exchange. Common identity standards allow users to authenticate consistently across services while enabling different assurance levels based on risk. Data standards ensure that information retains meaning as it moves between systems.

Organizations like the World Wide Web Consortium (W3C) and the International Organization for Standardization (ISO) define widely adopted specifications for data formats and verifiable information, while the National Institute of Standards and Technology (NIST) provides guidance on identity, security, and interoperability for government systems.

In the public sector, this guidance frequently includes NIST SP 800-63 for digital identity assurance, NIST SP 800-207 for Zero Trust architecture, and alignment with FedRAMP (and GovRAMP) security controls for cloud-based services.

Together, the standards and guidance from these bodies form a foundation that supports secure sharing without centralizing control.

Reducing long term cost through portability

Portability is one of the most tangible benefits of standards. When data and integrations are standards-based, agencies can migrate systems incrementally. They can pilot new tools without committing to full replacement. They can respond to policy changes without rewriting everything.

This reduces long-term cost not by cutting corners, but by preserving choice. Agencies retain leverage and avoid being trapped by earlier decisions.

Portability also enables parallel operation during transitions, allowing legacy and modern systems to coexist safely while services are phased over time.

Standards support security and privacy

Interoperability is sometimes seen as a security risk. Standards help mitigate this concern. Well-designed standards incorporate security and privacy principles such as authentication, authorization, encryption, and data minimization. They make expected behavior explicit and auditable.

Rather than relying on bespoke integrations that are hard to review, standards-based systems follow known patterns that can be assessed, monitored, and improved over time.

This consistency is essential for applying Zero Trust principles, where every request is verified based on identity, context, and policy rather than assumed trust between systems.

Interoperability as an ongoing capability

Interoperability is not a one-time achievement. It is an ongoing capability that must survive vendor changes, technology shifts, and evolving requirements.

Standards make this possible by anchoring systems to shared agreements instead of specific products. They allow government IT ecosystems to evolve without constant reinvention.

Building for the long term

Governments have a responsibility to build systems that last. This means planning for change, not just delivery.

Open standards are one of the most effective tools for doing so. They enable interoperability without lock-in, foster healthy vendor ecosystems, and protect public investment over time. When standards come first, innovation accelerates rather than stalls. Systems work together without being welded together. Agencies gain flexibility instead of losing it.

That is why standards matter.

Building digital services that scale takes the right foundation.

Talk to our team

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Anonym

What’s a “State Endorsed Digital Identity” and Why is Utah Creating One?

Trusting Digital Identity

The modern web provides users with instantaneous access to information and a wide range of digital capabilities never before seen in the world. People are able to send electronic communications, make remote purchases for rapid delivery, and access the world’s knowledge and information via their electronic devices. Online users can perform sensitive functions, such as communicating with their doctor, ordering prescription medication, conducting financial transactions, and shopping online.

It has been observed that the internet was created without an identity layer.  This unforeseen necessity has left online content and service providers to provide user accounts and security methods (e.g., username and password) they created themselves.  Users, not wanting to remember numerous passwords, often reuse their login credentials.  This leaves people vulnerable to cyberattacks and account compromises.  Federated account management (e.g. “Login With ___”) services have tried to help, but left users beholden to account providers that have incentives to monitor internet activity.

Since issuing identity cards has traditionally been the role of the state, governments have entered the digital identity arena in order to provide secure digital credentials for their citizens.

Government’s Traditional Role in Identity

Every individual is unique and their identity comes by virtue of their birth, family, and social environment.  When social environments are small, it is easy for everyone to know each other.  However, as social environments grow into much larger cities and there is a lot of interaction between geographically dispersed groups, knowing someone by sight becomes impossible.

Governments have been created in societies to help people interact in a mutually agreeable fashion.  As part of this process, it fell on governments to help people to be able to identify each other also in a mutually agreeable way.  This led to identity papers being issued by governments, so that individuals could present them when needing to establish a level of trust between parties who don’t know each other.

While that type of identity paradigm benefits society in many ways, it can also leave people beholden to the government.  If that sounds strange, anyone who has stood in long and tedious lines at the local Department of Motor Vehicles can sympathize.  However, the real drawback of the model where governments provide identity methods is that governments can also take away those credentials, which happens when someone moves between states and must surrender their existing identity card.

Digital Identity

With traditional physical identity cards, proving one’s identity is largely transactional (e.g., show a card to get into a building) without a means of re-verifying the card’s ongoing validity. Conversely, digital identity enables a wide range of previously impossible scenarios, such as immediate cryptographic verification. This process helps keep store clerks from being fooled by look-alike fake IDs.

The cryptographic verifiability properties of digital identity also help people prove their identity online when visiting remote websites and help web service operators be assured that visitors are who they claim to be. This facilitates everyday activities, such as online purchasing or checking email. However, it also enables more sensitive activities, such as patient-doctor communications, online banking, filing income taxes, purchasing securities, etc.

While government backed digital identity delivers a high degree of trust to all parties of identity transactions, it also leaves people with an increasing dependency on the government issuers.  Since digital identity creates the ability to conduct medical, financial, social, commercial, personal, etc. activities online, any kind of disruption to that identity now becomes catastrophic to consumers.  Such disruptions can happen after forgetting to renew a driver’s license, due to random administrative issues, identity service (internet) disruption, or from a cyberattack. 

If people use their government issued digital ID for their online activities, any disruption to that ID can leave them unable to function online.

Utah’s “State Endorsed Digital Identity”

The US state of Utah is taking a novel approach to digital identity – one that leaves people with more control.  Utah’s approach draws from the time-honored birth certificate process.

When a baby is born, the parents choose their child’s name and that becomes the means by which they are known throughout their life.  The child’s name wasn’t selected or assigned by the government, rather it came from the parents.  However, since the government provides a trusted identity service, the child will also need a credential in the form of a birth certificate.  Birth certificates provide a government-backed representation of a person’s name and other vital information … all of which was provided by the parents.

Passed into law in 2025, Section 1202 of the Utah Code reads:

Figure 1 – Utah’s State digital identity policy.

Using the birth certificate paradigm as its model, Utah has asserted that each individual has a unique identity, which is not established by the state, but is inherent to the individual.  In a world where governments typically issue a person’s identity credential (e.g., driver’s license or passport), Utah made the choice to affirm individuality and then to endorse it.  This is why Utah’s new digital identity system is known as the State Endorsed Digital Identity (SEDI).

How a SEDI Credential Works

While the exact technology stack for SEDI has yet to be announced, at its technological base, SEDI will generally draw from a Decentralized Identity (DI) paradigm. In DI, a user creates their own unique cryptographic identifier called a Decentralized Identifier (DID) using an app they control. DIDs are like web addresses (i.e., URLs) in that they reference both a unique identifier and a set of cryptographic public keys. DIDs provide the basis for establishing secure communications, encrypting files, and utilizing verifiable credentials.
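To make that concrete, the sketch below shows roughly what a DID resolves to under the W3C DID Core data model: an identifier plus the public key material its controller holds. The “did:example” method and key values are placeholders, since the actual technology choices for SEDI have not been announced.

```typescript
// Illustrative DID document (W3C DID Core); method and key values are placeholders.
const didDocument = {
  "@context": ["https://www.w3.org/ns/did/v1"],
  id: "did:example:123456789abcdef",
  verificationMethod: [
    {
      id: "did:example:123456789abcdef#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:example:123456789abcdef",
      publicKeyMultibase: "zPlaceholderPublicKeyValue",
    },
  ],
  // The holder proves control of the DID by signing challenges with this key.
  authentication: ["did:example:123456789abcdef#key-1"],
};

console.log(didDocument.id);
```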

Using the DI paradigm, a user will present their DID to the state along with other traditional identification, such as a birth certificate, driver’s license, or other identity documents the state deems trustworthy and chooses to accept.  The following figure illustrates the basic interactions between the user and the state when requesting a SEDI endorsement credential:

Figure 2 – SEDI Request Process

Using this process, a person will create a DID (that they alone maintain and control) using a SEDI-enabled wallet application.  It is important to note that the DID is theirs forever and while the state can issue a SEDI credential to the user tied to their DID, the state does not take possession of or control the DID, itself.

That is what is unique about SEDI’s use of the DI paradigm.  The user can continue to use that same DID for receiving other credentials from other sources and may perform many other DI functions.  If the SEDI endorsement is ever revoked, the user still keeps the original DID and may continue to use it however they choose.  This process of issuing SEDI credentials tied to a person’s unique DID identifier enables them to keep the anchor DID even if the SEDI endorsement credential were ever revoked for any reason.

Using a SEDI

Once a person receives their SEDI credential, they can use it to perform a wide range of digital activities, such as identifying themselves to and conducting business with the state. This enables them to securely prove their identity without the need for a username or password, to secure digital communication channels, cryptographically sign documents, make requests, etc. Once implemented, users will be able to securely pay taxes, renew driver’s licenses, obtain a hunting or fishing license, etc. either in person or online.

As SEDI becomes adopted by the internet industry, users will be able to do more than conduct business with the state government.  Users will be able to login to websites, remit payments, make verified online purchases, enroll in social media, prove their age as required, etc.

Types of SEDI Assertions

Utah Code Section 63A-16-12-1202 discusses several security and privacy requirements for SEDI usage both in and potentially outside of primary government environments.  SEDI-enabled technologies should implement specific functionality outlined in Utah State Code.

Selective Disclosure

Paragraph (1)(g)(i) of Utah Code states that a SEDI holder is entitled to choose “how” and “to whom” “the individual discloses the individual’s state-endorsed digital identity”, which is interpreted to include both in-government and outside of government usage scenarios.  This paragraph further entitles users to determine “which elements” they choose to disclose, which is referred to within the identity industry as Selective Disclosure.

The following figure illustrates three Selective Disclosure use case scenarios in which a person may elect to disclose all or some of their SEDI-related personal information.

Figure 3 – Selective Disclosures Using SEDI

When presenting their SEDI to the state’s Department of Motor Vehicles (DMV), it is presumed that the DMV already has their full identity information set, so it is not necessary to hide or mask any of that information through Selective Disclosure.  However, when presenting a SEDI to a bank, only the primary identity information is required, so the user is able to avoid divulging physical attributes, veteran status, donor status, etc. as depicted above.  Finally, if a person is interacting with a business to prove their name and address, SEDI’s Selective Disclosure functionality enables them to share only that information while not having to disclose any other information contained within the SEDI credential. 

These Selective Disclosure features are a significant privacy advancement above current identity proving methods (e.g., providing a photo of their driver’s license) which typically only enable users to prove their identity after they provide all information contained within the credential type.  While current methods result in verifiers (e.g., businesses or web services) collecting vast amounts of irrelevant personal data, SEDI dramatically enhances personal privacy by enabling users to provide only the specific personal data items that are necessary.
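As a simplified illustration of the verifier side of Selective Disclosure, the sketch below shows a request that names only the claims a business actually needs. The request shape is hypothetical; production systems would express it through a protocol such as OpenID4VP or DIF Presentation Exchange.

```typescript
// Hypothetical selective-disclosure request; field names are illustrative only.
interface DisclosureRequest {
  verifier: string;          // who is asking
  purpose: string;           // why, displayed to the holder for consent
  requestedClaims: string[]; // only the fields actually needed
}

const proveNameAndAddress: DisclosureRequest = {
  verifier: "did:example:business-789",
  purpose: "Confirm the account holder's name and mailing address",
  requestedClaims: ["given_name", "family_name", "address"],
};

// The wallet presents this to the holder; on approval it discloses only the
// requested claims and withholds everything else in the SEDI credential
// (veteran status, donor status, physical attributes, and so on).
console.log(proveNameAndAddress.requestedClaims.join(", "));
```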

Age Range Disclosure

From the Utah Code cited above, paragraph (2) further provides personal data protections, such as:

“The state may not endorse an individual’s digital identity unless:” (2)(e) “the state-endorsed digital identity enables an individual to” (2)(e)(i) “selectively disclose” and (2)(e)(ii) “verify that the individual’s age satisfies an age requirement without revealing the individual’s age or date of birth”. 

On the surface, one might assume that the age requirement could simply be embedded in a SEDI as “over age 18” or “over age 21”, as was done in the Mobile Driver’s License (i.e., mDL; ISO/IEC 18013) standards.  However, this is impractical given the numerous varying age requirements throughout both government and society.  As an example, US military recruits must be at least age 17 to join the US military.  In Utah, to run for legislative office, candidates must be at least age 25 or at least age 30 to run for Governor.  Utah State Code stipulates specific employer work requirements based on age ranges, such as 14-15, 16-17, 18, etc.  Given the wide variety of cases where age must be proven “without revealing the individual’s age or date of birth”, simply including a series of “over age XX” fields in the SEDI credential is impractical.

The age range disclosure functionality provided by SEDI helps users maintain their privacy by limiting disclosures to “meets requirements” instead of divulging specific identity data elements. There are several technology-based solutions by which to express a data item or range of data items without revealing the data items themselves.  A cryptographic digital signature type, known as a Zero Knowledge Proof (ZKP), can be used to enable this privacy-preserving data proving capability.
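The sketch below illustrates the difference between embedding fixed “over age XX” flags and asking for a predicate proof: the verifier states the rule, and the wallet answers with a zero-knowledge proof that the rule is satisfied, never the birthdate itself. The field names are hypothetical and do not correspond to a published SEDI specification.

```typescript
// Hypothetical predicate-style age request; the verifier learns only true/false.
interface AgePredicateRequest {
  attribute: "birth_date";
  minimumAgeYears: number;     // e.g., 17 for enlistment, 25 for the legislature
  proofType: "zero-knowledge"; // proof shows the rule is met, not the value itself
}

const legislativeCandidateCheck: AgePredicateRequest = {
  attribute: "birth_date",
  minimumAgeYears: 25,
  proofType: "zero-knowledge",
};

// Any age threshold can be expressed at request time, which is why a fixed set
// of "over 18 / over 21" fields baked into the credential is impractical.
console.log(`Prove age >= ${legislativeCandidateCheck.minimumAgeYears}`);
```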

SEDI Protects Against Unwanted Surveillance

From Utah Code Section 63A-16-12-1202, paragraph (2)(d) stipulates protections for SEDI holders as:

 “(d) the state ensures that when an individual uses a state-endorsed digital identity

 (ii) “the use is free from surveillance, visibility, tracking, or monitoring by any other governmental entity or person”.  

Modern methods for tracking individuals in digital settings have become significantly advanced with the advent of targeted advertising.  Most tracking methods perform a combination of data collection analysis and individual tracking identifier correlation.  Tracking identifiers can take the form of anything that uniquely identifies a person, such as cell phone number, email address, social security account number, driver’s license, etc. 

In digital environments, even static or semi-permanent Decentralized Identifiers (DIDs) can also contribute to surveilling users’ activities.  None of these tracking methods should be used without specific consent from the user.  In order to accept a SEDI credential, verifiers will need to support surveillance protections required by Utah state law.  Keeping users free from unwanted surveillance is an important step forward.

SEDI:  A Model for Privacy-Enhancing Credentials

According to Utah’s official State-Endorsed Digital Identity whitepaper (Oct 2025), the basic principles governing SEDI are:

Comprehensive Legal Framework – clear governance, specific requirements, enforceable standards, open standards, explicit transparency, and separation of duties
Individual Control – individuals create and maintain perpetual control of the core identifier upon which the SEDI endorsement is based
Privacy – implemented using a decentralized, peer-to-peer approach where individuals control their SEDI elements and are free from surveillance tracking
Parental Rights and Delegation – SEDI outlines a series of parental rights to enable parents to manage their children’s digital identity and to provide tools to parents that keep their children safe from exploitation or identity misuse
Critical Public Infrastructure and Security – given the significant value of personal data in the modern era, SEDI recognizes the digital identity ecosystem as critical infrastructure to be protected in the same way other critical infrastructures are protected
Backwards Compatibility – SEDI maintains backward compatibility with existing digital identity ecosystems in order to ensure that SEDI holders may continue to interact with such ecosystems while participants transition to the more secure and private SEDI architecture

Utah has taken a very progressive and forward-thinking approach to government-endorsed digital identity that is intended to position the information and privacy rights of individuals ahead of the needs (current or future) of the state or commercial enterprise.  Given the amount of personal data surveillance, collection, and aggregation happening on today’s internet, Utah’s move is a much needed and welcome approach.  It is foreseen that Utah’s SEDI model will become a pattern by which other governments may create and issue secure and private digital identity for their citizens.

The post What’s a “State Endorsed Digital Identity” and Why is Utah Creating One? appeared first on Anonyome Labs.


liminal (was OWI)

Promotion and Loyalty Abuse for E-Commerce

The post Promotion and Loyalty Abuse for E-Commerce appeared first on Liminal.co.

Elliptic

The true cost of “cheap” sanctions screening

Key takeaway: Cheaper sanctions screening solutions often cut costs by narrowing what they detect. The result is gaps in intelligence, tracing or coverage that can expose your customers to sanctioned funds and your compliance program to regulatory scrutiny.



liminal (was OWI)

2026 Predictions: What Comes Next in Fraud, Cybersecurity, and Identity

This past year revealed more than isolated trends. It showed how quickly decision-making itself is changing. The post 2026 Predictions: What Comes Next in Fraud, Cybersecurity, and Identity appeared first on Liminal.co.

With editorial support from Jennie Berry, Filip Verley, Andrew Bowden, Will Charnley, Darin Bunker, Yura Nunes, and Grant Gillem.

This past year revealed more than isolated trends. It showed how quickly decision-making itself is changing. AI moved deeper into workflows. Fraud and impersonation became more adaptive. Trust and integrity shifted from being a policy concern to an operational constraint. Buyers demanded sharper differentiation as markets became noisier and more competitive.

Taken together, these signals point to one conclusion: 2026 will be shaped by acceleration. Decision cycles will continue to compress, the cost of misinterpretation will rise, and intelligence will become essential, not as a reference point, but as infrastructure.

In this environment, data alone is insufficient. Organizations need context scaffolding: the ability to connect signals across identity, behavior, risk, and market dynamics, and translate them into real-time, confident action. Static reports and fragmented tools cannot keep pace with adaptive threats, automated systems, and continuous change.

The predictions below reflect what we are seeing across Link and what our leadership team believes will define the coming year. Individually, they describe specific shifts. Together, they outline how intelligence becomes embedded, how identity becomes continuous, and how teams move faster without losing control.

Adaptive Threats Require Continuous Context

In 2025, fraud crossed a threshold. The most dangerous attacks were no longer the largest in scale. They were the fastest to adapt. That shift is now visible in the underlying traffic itself. Imperva reports that automated traffic surpassed human activity in 2024, accounting for 51% of all web traffic, with bad bots representing 37% of all internet traffic.

In 2026, defending against fraud will depend less on individual controls and more on whether organizations can maintain continuous context across identity, behavior, and time. Without that context, speed becomes a liability rather than an advantage. When attackers can run high-frequency experiments against your systems, point-in-time verification and siloed signals quickly break down.

AI-driven impersonation will emerge as the most disruptive fraud vector in 2026

In 2025, impersonation and account recovery fraud accelerated across industries as attackers employed synthetic voice, synthetic video, and session replay techniques to convincingly impersonate legitimate users. These attacks exposed a widening gap between how fraud now operates and how many authentication systems were originally designed to operate.

In 2026, AI-driven impersonation becomes the most disruptive fraud vector because it adapts in real time. Static authentication checkpoints and single-signal verification break down as attackers learn how to respond to friction, vary inputs, and exploit inconsistencies across channels.

Defending against impersonation will require continuous context, not stronger gates. Organizations will rely on multimodal identity signals, behavioral consistency over time, and real-time risk evaluation across sessions and journeys. The objective shifts from proving identity once to understanding intent continuously.

As decision cycles compress, organizations that cannot interpret identity context in the moment will be forced into a reactive posture. Organizations that can will act before impersonation becomes a damaging issue.

Synthetic identities will evolve into adaptive, model-generated personas

Throughout 2025, synthetic identity fraud continued expanding in both impact and sophistication. Losses tied to synthetic identity fraud surpassed the $35 billion mark in 2023, and the Federal Reserve Bank of Boston has identified generative AI as a significant accelerant that makes synthetics harder to detect and easier to iterate. In particular, GenAI can automate identity creation, increase realism, and learn from failed attempts to produce more of what works.

In 2026, synthetic identities evolve into adaptive, model-generated personas. These personas are not designed to pass a single check. They are designed to persist, learn, and adjust behavior in response to controls.

This evolution breaks detection models built around attributes and one-time verification. Defending against adaptive synthetics requires context scaffolding across relationships, linking identities, devices, behaviors, and outcomes to reveal coordinated activity that no single signal can expose.

In an environment defined by acceleration, the challenge is no longer identifying fake identities. It is recognizing systems of behavior masquerading as legitimate users.

Fraud actors will deploy reinforcement learning agents to probe identity systems

Automated probing activity surged in 2025, particularly across fallback paths, retry flows, and account recovery mechanisms, and the macro signals are clear. Imperva reports that automation is now the baseline on the public internet, with bots driving 51% of web traffic and bad bots comprising 37%. Imperva also reports 40% growth in account takeover attacks in 2024, underscoring how quickly adversaries can scale once automation is in place.

TransUnion’s global fraud findings reinforce this trajectory in identity-specific terms. It reports a 21% increase in the volume of digital account takeover from H1 2024 to H1 2025 and identifies account creation as the riskiest stage in the lifecycle, with 8.3% of all digital account creation attempts in H1 2025 suspected of being fraudulent.

In 2026, fraud actors deploy reinforcement learning agents that continuously probe identity and fraud systems, observe outcomes, and refine tactics autonomously. These agents behave less like bots and more like adversarial systems, adapting faster than rule-based defenses can respond.

Defending against this class of threat requires intelligence that operates at the session and system level, identifying learning behavior rather than isolated anomalies. Security teams will need to think in terms of adversarial dynamics, continuous feedback loops, and model-level resilience.

As attackers accelerate, organizations that treat fraud as a static detection problem will fall behind. Those who treat it as a learning problem will keep pace.

Identity, Risk, and Trust Converge Into Decision Systems

In 2025, fragmentation became a liability. Identity signals, risk scores, and access decisions were stored in separate systems, evaluated by different teams, and acted upon at varying speeds. As fraud and abuse grew more adaptive, those seams became increasingly exploitable.

Industry research reflects this pressure. McKinsey has highlighted that siloed fraud, financial crime, and cybersecurity functions create duplication, blind spots, and slower response times, arguing that organizations must move toward unified operating models to keep pace with modern threats.

In 2026, identity, risk, and trust converge operationally into continuous decision systems. The objective is not simplification for its own sake, but the ability to maintain shared context and act consistently as conditions change.

Risk, identity, and cybersecurity teams will converge under a unified architecture

Throughout 2025, attackers increasingly exploited gaps between onboarding checks, authentication flows, and access controls. Fragmented stacks, each optimized for a narrow moment in the journey, created blind spots that adaptive threats learned to navigate with ease.

This fragmentation is now widely recognized as unsustainable. Gartner has forecast that by 2028, 20% of large enterprises will adopt cyber-fraud fusion teams, up from less than 5% today, reflecting a push toward unified architectures that continuously evaluate identity and risk, rather than in silos.

In 2026, organizations respond by converging risk, identity, and cybersecurity under a unified, identity-led architecture. Identity signals, behavioral patterns, and access decisions inform one another dynamically rather than being evaluated in isolation.

As decision cycles compress, organizations that cannot interpret risk holistically will be forced to slow down. Convergence becomes a prerequisite for speed, not a structural preference.

Trust will become a measurable performance indicator for product and revenue teams

In 2025, organizations increasingly recognized that trust shifted from an abstract brand value to a driver of measurable outcomes. Platforms that invested in identity integrity, user quality, and protection against abuse consistently outperformed those relying on reactive enforcement.

External data reinforces this shift. Edelman’s Trust Barometer has consistently demonstrated that trust has a direct impact on purchasing behavior, loyalty, and advocacy, while cybersecurity and data protection failures significantly erode customer confidence. As more interactions are mediated by AI systems, the link between trust and performance becomes even more direct.

In 2026, trust becomes a measurable performance indicator because organizations finally have the contextual signals required to quantify it. Identity confidence, behavioral consistency, and integrity signals can be observed and scored over time, rather than inferred from isolated events.

Trust becomes actionable when embedded into decision-making systems, informing access, pricing, moderation, and engagement in real-time. Teams that operationalize trust will outperform those that continue to manage it as policy, messaging, or brand positioning.

Continuous identity will replace static verification as the industry default

By 2025, it was clear that point-in-time identity verification could not keep pace with adaptive fraud, impersonation, and AI-driven abuse. Identity risk no longer begins and ends at onboarding. It evolves throughout every session and interaction.

This reality is reflected in market data. TransUnion reports that digital account takeover volume increased by 21% from H1 2024 to H1 2025, and identifies account creation as the riskiest stage in the lifecycle, with 8.3% of digital account creation attempts suspected of being fraudulent. These patterns demonstrate why static verification models fail once attackers adapt post-onboarding.

In 2026, continuous identity becomes the default across regulated and non-regulated industries. Rather than proving identity once, organizations maintain ongoing identity confidence using behavioral signals, device intelligence, and contextual risk factors.

Continuous identity provides the execution layer for unified decision systems. It enables organizations to adjust access, friction, and enforcement dynamically as behavior changes, replacing binary outcomes with real-time, probabilistic decisions.

As AI compresses decision cycles, continuous identity allows teams to move faster without sacrificing trust or control.

Intelligence Moves From Insight to Infrastructure

In 2025, organizations did not lack insight. They lacked the ability to act on it in a timely manner. Intelligence was increasingly delivered through dashboards, reports, and slide decks, while decisions were made elsewhere under pressure. As markets and threats accelerated, that separation became untenable.

In 2026, that gap closes. Intelligence moves from something teams consult to something systems use. Context is no longer assembled manually after the fact. It is scaffolded directly into workflows where decisions are made.

This shift reflects a broader realization: when decision cycles compress, intelligence that is not embedded becomes irrelevant.

Organizations will replace point solutions with outcome-oriented use case platforms

Throughout 2025, buyer behavior underwent significant shifts. Teams stopped defining needs around individual tools and started organizing around outcomes, such as preventing fraud across an entire journey, maintaining identity integrity over time, or enforcing trust consistently at scale.

This shift is well documented. McKinsey has noted that organizations operating fragmented risk, fraud, and cybersecurity stacks suffer from duplicated controls, inconsistent decisions, and slower response times, prompting enterprises to adopt integrated platforms that can support end-to-end use cases.

In 2026, organizations will replace point solutions with outcome-oriented use case platforms because isolated tools cannot maintain context across the decisions that matter most. When intelligence is fragmented, decisions become inconsistent.

Platforms succeed because they scaffold context across workflows, connecting identity, behavior, risk, and performance into a shared decision layer. Rather than optimizing isolated tasks, they enable organizations to act coherently as conditions change.

As acceleration continues, outcome-oriented platforms become the only viable way to move fast without introducing new blind spots.

Intelligence will become a native layer inside the systems teams use every day

In 2025, market, risk, and threat signals shifted faster than most teams could respond. Intelligence trapped in reports or external tools created awareness but rarely drove timely action.

This lag is increasingly visible in operational data. IBM reports that organizations struggling to operationalize AI and analytics cite a lack of integration into workflows as a primary barrier to value realization, particularly in security and risk functions.

In 2026, intelligence becomes a native layer embedded directly into operational systems, including CRMs, identity flows, fraud engines, product workflows, and go-to-market platforms. Intelligence no longer sits adjacent to decisions. It actively informs them as they are made.

Embedded intelligence provides context at the moment of action. It explains what matters, why it matters now, and how teams should respond, without manual interpretation or delay. This transformation turns intelligence into infrastructure.

As organizations automate more decisions, systems without embedded intelligence become bottlenecks. In 2026, intelligence must operate at the same speed as execution.

Budget pressure will accelerate platform consolidation across fraud, identity, and cybersecurity

Organizations entered 2025 burdened by overlapping tools, redundant data sources, and fragmented ownership across fraud, identity, and cybersecurity. As budgets tightened, the operational cost of fragmentation became impossible to ignore.

This pressure is reflected in both spending and threat data. IBM’s Cost of a Data Breach Report shows that organizations with highly integrated security platforms experience significantly lower breach costs and faster containment times than those operating fragmented stacks.

In 2026, budget pressure accelerates platform consolidation not just to reduce spending, but to eliminate context loss between systems that evaluate the same users, behaviors, and events. Fragmentation slows decisions and increases exposure.

Gartner has forecast that by 2028, 20% of large enterprises will adopt cyber-fraud fusion teams, up from less than 5% today, reinforcing the direction toward unified intelligence and shared ownership.

Consolidation succeeds when platforms unify context and decisioning, not just functionality. In an environment defined by acceleration, fewer systems with shared intelligence outperform many tools with disconnected insight.

Governance, Capital, and Control Catch Up to Automation

By 2025, automation had outpaced oversight. Organizations could move faster than ever, but often without sufficient visibility into how decisions were made, who or what was involved, and why certain outcomes occurred. As intelligence was embedded into workflows and AI systems became more autonomous, the lack of context became a material risk.

In 2026, governance, capital allocation, and access controls evolve not to slow automation, but to make acceleration sustainable. Speed without context becomes exposure. Speed with context becomes an advantage.

AI agents will require identity-grade access controls across the enterprise

In 2025, AI agents rapidly expanded across enterprise workflows, supporting fraud investigations, alert triage, routing decisions, and internal automation. As these systems accessed sensitive data and production environments, a new class of risk emerged: machines were acting with authority, but without identity.

This risk is already measurable. IBM reports that 13% of organizations experienced security incidents involving AI models or AI applications, and among those affected, 97% lacked AI-specific access controls. These incidents frequently resulted in compromised data (60%) and operational disruption (31%), highlighting how quickly unmanaged automation can escalate risk.

Additional research reinforces the urgency. SailPoint found that 96% of technology professionals view AI agents as a growing security threat, 80% report unintended actions by AI agents, and yet only 44% say their organizations have policies governing agent behavior.

In 2026, AI agents require identity-grade access controls equivalent to those applied to human users. Permissions, monitoring, and accountability provide the context needed to scale automation safely. Without that scaffolding, organizations are forced to constrain AI usage. With it, they can accelerate confidently.
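As an illustration of what identity-grade controls for an agent could look like in practice, the sketch below models an agent with an accountable owner, least-privilege scopes, short-lived credentials, and an audit trail. All names and structures are hypothetical and not tied to any specific product.

```typescript
// Hypothetical sketch of identity-grade controls for an AI agent; not tied to any specific product.
type AgentIdentity = {
  agentId: string;
  owner: string;       // the accountable human or team
  scopes: Set<string>; // least-privilege permissions, e.g. "alerts:read"
  expiresAt: Date;     // short-lived credentials, rotated like user sessions
};

type AuditEvent = { agentId: string; action: string; allowed: boolean; at: Date };
const auditLog: AuditEvent[] = [];

// Every agent action is checked against its scopes and recorded for accountability.
function authorize(agent: AgentIdentity, action: string): boolean {
  const allowed = agent.scopes.has(action) && agent.expiresAt.getTime() > Date.now();
  auditLog.push({ agentId: agent.agentId, action, allowed, at: new Date() });
  return allowed;
}

const triageAgent: AgentIdentity = {
  agentId: "agent-fraud-triage-01",
  owner: "fraud-ops@example.com",
  scopes: new Set(["alerts:read", "cases:update"]),
  expiresAt: new Date(Date.now() + 15 * 60 * 1000), // 15-minute credential
};

authorize(triageAgent, "alerts:read");   // permitted and logged
authorize(triageAgent, "payments:send"); // denied and logged
```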

Capital will prioritize platforms with proprietary data and compounding intelligence loops

Investor behavior over the past year has made one signal clear: capital is increasingly flowing toward platforms that automate decisions and improve with use, not those that simply add features.

PitchBook reports that AI-enabled fintech startups carried a median valuation of $134 million in 2025, representing a 242% valuation premium over non-AI peers. These companies also captured 54% of fintech venture capital deal value year-to-date, despite representing roughly one-third of startups.

The implication for fraud, identity, and cybersecurity platforms is clear. Capital is rewarding systems that generate compounding intelligence loops, where every interaction improves future decisions. Proprietary data matters not as an asset in isolation, but as fuel for learning systems that deliver durable decision advantage.

In an environment defined by acceleration, investors back platforms that turn complexity into repeatable, defensible execution.

AI governance will become a mandatory requirement in every enterprise procurement process

As AI adoption accelerated in 2025, governance lagged behind implementation. Many organizations deployed AI systems before establishing clear policies around transparency, auditability, and access control.

This gap is now visible in procurement data. CAPS Research found that 62% of companies are already utilizing generative AI in procurement, yet only 15% have formal governance policies in place for generative AI.

At the same time, Icertis reports that 90% of procurement leaders have already considered or are actively using AI agents in sourcing, contracting, and supplier management workflows.

In 2026, this imbalance forces change. AI governance becomes a baseline requirement in enterprise procurement, not because organizations are slowing down adoption, but because they are scaling it up. Procurement teams will institutionalize requirements for transparency, auditability, and access control as table stakes for vendor selection.

In an AI-first environment, governance enables acceleration. Organizations that invest in contextual transparency will move faster with confidence. Those that do not will be forced to slow down under scrutiny.

Conclusion

What comes next will not be determined solely by tools. It will be determined by how effectively organizations maintain context as decision cycles continue to compress. Across fraud, identity, cybersecurity, and governance, the dominant failure mode is no longer a lack of data; it is a lack of understanding. It is the inability to connect signals fast enough to act with confidence.

In 2026, intelligence becomes the operating system for how teams compete. Context scaffolding, the ability to connect identity, behavior, risk, and outcomes in real time, is what allows organizations to move faster without losing control. As automation expands and threats become more adaptive, advantage shifts to those who can interpret what matters in the moment and translate insight directly into action.

The organizations that succeed in the year ahead will not simply react to change; they will actively adapt to it. They will build systems that anticipate it, absorb it, and continuously adjust as conditions evolve. Acceleration is inevitable. The ability to navigate it with clarity is what will separate leaders from those forced to slow down.

The post 2026 Predictions: What Comes Next in Fraud, Cybersecurity, and Identity appeared first on Liminal.co.


Ockto

From vulnerable to trustworthy: the power of source data against mortgage fraud


In February 2026, the 35th edition of the Hypotheken Event will take place in Utrecht. It is an edition in which one topic stands out: mortgage fraud is developing rapidly and is affecting more and more links in the chain. During a break-out session, Gert Vasse, Non-Executive Director at Ockto, and Gert-Jan van Dijke, Account Director at Ockto, will discuss the latest forms of fraud and the role of source data in turning this development around.


Dock

EU Business Wallet: What You Need to Know About the EU’s Digital ID for Companies [Video and Takeaways]


As the European Commission moves forward with its proposal for the EU Business Wallet, many organizations are asking what this new regulation actually means in practice and how it will change the way businesses identify themselves, exchange data, and interact with public authorities across Europe.

To unpack this, we recently hosted a deep-dive session with two people directly involved in shaping and implementing the initiative: Viky Manaila, Trust Services Director at Intesi Group, and Rob Brand, Senior Policy Officer at the Netherlands Ministry of Economic Affairs and Co-Coordinator of the WE BUILD Consortium.

The session explored what the EU Business Wallet is designed to do, why it matters for businesses of all sizes, and how it builds on eIDAS 2 and the European Digital Identity framework. We covered the regulatory proposal itself, the role of trust service providers, mandatory public-sector acceptance, and the real-world use cases being tested today through large-scale pilots.

Below are the key takeaways from that conversation, capturing what stood out most for businesses, wallet providers, and ecosystem builders preparing for what comes next.


Indicio

Indicio’s Helen Garneau receives Community Recognition Award from LF Decentralized Trust

Indicio’s Chief Marketing Officer Helen Garneau is recognized by the Linux Foundation’s umbrella organization for her leadership in decentralized technologies and building open source communities. By Trevor Butterworth

As part of the LF Decentralized Trust’s celebration of a decade successfully promoting decentralized technologies, the organization recently announced 10 Community Recognition Awards, one of which was awarded to Indicio’s Chief Marketing Officer, Helen Garneau. 

LF Decentralized Trust thanked Helen for almost a decade of leadership and contributions in the open source community, and for serving as a “role model” in the way she has championed LF Decentralized Trust and its work.

“Contributing to the LF Decentralized Trust community has been a cornerstone of my work at Indicio,” said Garneau on receipt of the award. “This community proves how powerful open source can be when people rally around shared standards and real collaboration. Helping tell the story of identity projects, giving them clarity, visibility, and a voice, has shown me how much good marketing builds our collective success. When an open source project’s work is understood, adopted, and championed, the whole ecosystem moves forward. It’s been a privilege to support these projects, the contributors behind them, and the open-source values that make decentralized identity possible.”

“LF Decentralized Trust has played an instrumental role in advancing decentralized identity and Verifiable Credential technology,” said Heather Dahl, CEO of Indicio, “and it has always been critical for Indicio to actively participate in and support its work. By recognizing and celebrating Helen’s contribution to its success, the  LF Decentralized Trust also recognizes the central role that Helen plays at Indicio and in the world of decentralized technologies, as well as Indicio’s commitment to having its team members participate and lead in the open source community. The result is a success story for collaboration and innovation — a model for sustainable technology.”

For more on the LF Decentralized Trust’s activities, visit LFDecentralizedTrust.org

The post Indicio’s Helen Garneau receives Community Recognition Award from LF Decentralized Trust appeared first on Indicio.


Thales Group

SAMP/T NG: Thales contributes to the success of the European air defence system’s firing campaign

22 Dec 2025

On Monday, December 15, 2025, the first test firing of the French version of the SAMP/T NG system was successfully performed from the DGA Essais de Missiles test range in Biscarosse (Nouvelle-Aquitaine region). Each system features a Thales Ground Fire radar, the leading radar in the field of surveillance and air defence, with unmatched performance: a range of up to 400 km and panoramic coverage at 360° in azimuth and 90° in elevation. The Ground Fire radar, which has been in series production since the beginning of 2025, demonstrated its exceptional performance during this firing of the French version of the SAMP/T NG system.

Firing of the SAMP/T NG system © DGA Essais de Missiles

This firing campaign has once again demonstrated the high level of performance of this long-range air defence system developed by eurosam, a joint venture between Thales and MBDA. After a successful first firing in Italy on December 3, 2025, this new firing in France demonstrated the innovations and performance of the SAMP/T NG fire control system with its new modernised Engagement Module when coupled with the Thales Ground Fire radar.

Based on fully digital active electronically scanned antennas (AESA) technology, the Ground Fire radar provides a very high level of performance for the detection, tracking and classification of all types of targets in the most difficult environments (sea, mountains, intense traffic density, jamming...). This multifunction radar features a refresh rate of only 1 second and a surveillance capacity of up to 400 km, with panoramic coverage at 360° and 90° elevation. It is capable of simultaneously detecting drones, fighter jets and ballistic missiles, while benefiting from the mobility of a tactical radar.

This success is a further step towards the operational deployment of the SAMP/T NG system in France and Italy, with the first deliveries scheduled for 2026.

The SAMP/T NG system has thus established itself as the only European alternative for medium- and long-range protection against all types of threat, including ballistic, manoeuvring and saturating threats.

The SAMP/T NG programme is supervised by the Organisation Conjoint de Coopération en matière d'Armement (OCCAR). Eurosam, a joint venture between Thales and MBDA, is prime contractor for the entire system.

"The success of this firing campaign of the French variant of the SAMP/T NG at Biscarosse confirms the remarkable capabilities of this system, which contributes to the industrial and defence sovereignty of European nations. It is a further step towards the commissioning of the system in the forces in 2026." said Raphael DESI, Vice President, Integrated Airspace Protection Systems, Thales.


About Thales

Thales (Euronext Paris: HO) is a world leader in advanced technologies for the Defence, Aerospace and Cybersecurity & Digital sectors. Its portfolio of innovative products and services helps to meet several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion a year in Research & Development in key areas, particularly for mission-critical environments, such as Artificial Intelligence, Cybersecurity, Quantum and Cloud technologies.

Thales employs more than 83,000 people in 68 countries. In 2024, the Group generated sales of €20.6 billion.

Recent images of Thales and its Defence, Aerospace and Cyber & Digital activities can be found on the Thales Media Library. For any specific requests, please contact the Media Relations team.


Sunday, 21. December 2025

Spruce Systems

Designing Digital Services People Actually Complete

Completion rates matter more than features. This post explains how clear flows, fewer steps, and intelligent defaults dramatically improve user outcomes — especially for high-stakes government services.

Digital services succeed or fail on one simple outcome: whether or not people actually finish them.

Completion rates matter more than feature lists, visual polish, or technical sophistication. A service that looks modern but is abandoned halfway through does not deliver value to residents or agencies. This is especially true for high stakes government services, where unfinished applications can delay benefits, licenses, or critical support.

Designing services people actually complete requires a shift in focus. The goal is not to showcase functionality, but to remove friction, reduce uncertainty, and guide users confidently from start to finish.

Why completion is the real metric

Government services often measure success by launch milestones. A portal goes live. A form is published. A new feature is added.

For users, success looks different. Success means submitting an application without confusion. It means knowing what to expect next. It means not getting stuck or giving up.

Low completion rates are a signal that something is broken in the experience. The issue may be unclear instructions, too many steps, unnecessary data collection, or a lack of feedback. Whatever the cause, incomplete services create real world consequences for people who depend on them.

Completion is therefore a systems outcome, not just a design outcome. It reflects how well intake, validation, identity, and workflow orchestration work together behind the scenes.

Clear flows reduce cognitive load

Every additional decision a user must make increases the chance they will abandon the process. Clear flows guide users through a service in a logical sequence. Each step has a purpose. Each question builds on the last. Users understand where they are, what is required, and how much remains.

This clarity is especially important for complex or unfamiliar processes. When users are asked to interpret policy language or guess what information is needed, completion drops quickly.

Well designed flows replace guesswork with guidance. This often means structuring services around eligibility and readiness checks, so users are only asked questions that apply to their situation at that moment.

Fewer steps matter more than more features

It is tempting to add features to solve edge cases or accommodate every scenario. Over time, this leads to bloated workflows that overwhelm most users. High completion services are ruthless about simplicity. They ask only for what is required at that moment. Optional steps are deferred or removed. Advanced features are hidden unless they are needed.

Reducing steps does not mean reducing rigor. It means sequencing work intelligently so users are not asked to do everything at once.

Breaking long processes into smaller, stateful steps allows services to save progress, recover gracefully from interruptions, and reduce abandonment caused by time pressure or uncertainty.
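As a rough sketch of what "smaller, stateful steps" can mean in code, the example below persists progress after every step and computes where to resume. The step names and the in-memory store are hypothetical stand-ins for a real schema and database.

```typescript
// Minimal sketch of a resumable, stateful application flow.
// Step names and the in-memory store are hypothetical stand-ins for a real schema and database.
type StepId = "eligibility" | "applicant_info" | "documents" | "review";

interface ApplicationState {
  applicationId: string;
  completedSteps: StepId[];
  answers: Record<string, unknown>;
}

const store = new Map<string, ApplicationState>();

// Persist progress after every step so an interruption never loses work.
function saveStep(
  state: ApplicationState,
  step: StepId,
  answers: Record<string, unknown>
): ApplicationState {
  const next: ApplicationState = {
    ...state,
    completedSteps: state.completedSteps.includes(step)
      ? state.completedSteps
      : [...state.completedSteps, step],
    answers: { ...state.answers, ...answers },
  };
  store.set(next.applicationId, next);
  return next;
}

// Resume exactly where the user left off.
function nextStep(state: ApplicationState): StepId | "done" {
  const order: StepId[] = ["eligibility", "applicant_info", "documents", "review"];
  return order.find((s) => !state.completedSteps.includes(s)) ?? "done";
}
```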

Intelligent defaults keep users moving

Defaults are one of the most powerful tools in service design. When systems pre-fill known information, select common options, or suggest likely answers, users move faster and make fewer errors. Intelligent defaults reduce typing, minimize decision fatigue, and signal that the system understands the user’s context.

For government services, this can mean pre-populating information already on file, reusing previously verified data, or setting sensible initial values that users can change if needed. The key is transparency. Users should understand what was filled in and why, and they should always remain in control.

Defaults work best when they are paired with clear validation and the ability to correct or override assumptions without penalty.
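A minimal sketch of an intelligent default with provenance might look like the following. The field shape, sources, and copy are hypothetical; the idea is that the user sees what was filled in, where it came from, and can always change it.

```typescript
// Hedged sketch: pre-filling a field from data already on file, with provenance and override.
// The field shape, sources, and copy are hypothetical.
type PrefilledField<T> = {
  value: T;
  source: "on_file" | "previously_verified" | "user_entered";
  editable: boolean; // the user always stays in control
};

function prefillAddress(addressOnFile?: string): PrefilledField<string> {
  if (addressOnFile) {
    // Show the value, explain where it came from, and let the user change it.
    return { value: addressOnFile, source: "on_file", editable: true };
  }
  return { value: "", source: "user_entered", editable: true };
}

const addressField = prefillAddress("123 Main St, Springfield");
// UI hint rendered alongside the field:
// "We filled this in from the information on file. Edit it if anything has changed."
```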

Error handling should prevent abandonment

Errors are inevitable. How they are handled determines whether users continue or quit. Many services surface errors only after submission, forcing users to hunt for mistakes or re-enter large amounts of information. This is a major driver of abandonment.

Effective services validate inputs in real time. Errors are explained clearly and immediately. Users are shown how to fix the problem without losing progress.

Good error handling treats mistakes as part of the process, not as failures. Inline validation and immediate feedback reduce rework and help users build confidence that they are on the right track.
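For example, inline validation of a single field could look like the sketch below (the field and messages are hypothetical). The key is that feedback arrives while the user is still on the field, with a message that explains how to fix the problem.

```typescript
// Illustrative inline validation: check each input as it is entered, with actionable messages.
type ValidationResult = { valid: boolean; message?: string };

function validateZip(zip: string): ValidationResult {
  if (zip.trim() === "") return { valid: false, message: "Enter your 5-digit ZIP code." };
  if (!/^\d{5}$/.test(zip.trim())) {
    return { valid: false, message: "ZIP codes are 5 digits, like 04101." };
  }
  return { valid: true };
}

// Called on blur or keystroke so the user can fix the field before moving on,
// instead of discovering the problem after submission.
validateZip("0410");  // { valid: false, message: "ZIP codes are 5 digits, like 04101." }
validateZip("04101"); // { valid: true }
```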

Trust and confidence drive completion

Users are more likely to complete a service when they trust it. Trust is built through clarity, predictability, and respect for time and data. Users should understand why information is being requested and how it will be used. They should see progress indicators and receive confirmation that actions were successful.

For high stakes services, reassurance matters. Clear language, consistent behavior, and visible security cues all contribute to confidence. When users feel uncertain, they hesitate. When they hesitate, they abandon.

Predictable system behavior is as important as visual design. Unexpected requests, unexplained delays, or repeated data entry quickly erode trust.

Accessibility improves outcomes for everyone

Accessibility is often framed as a compliance requirement. In practice, it is a completion strategy. Services that are readable, navigable, and usable by people with diverse abilities are easier for everyone to complete. Clear language helps non-native speakers. Keyboard navigation helps power users. Mobile-friendly design helps people completing services on the go.

Designing for accessibility reduces friction across the board. Accessibility also improves resilience by supporting real world usage across devices, environments, and time constraints.

Completion focused design benefits agencies too

Improving completion rates is not just a user benefit. It reduces operational cost. When services are completed correctly the first time, agencies spend less time on follow up, correction, and support. Backlogs shrink. Staff focus on processing instead of troubleshooting.

Higher completion also improves data quality. Information arrives in structured, validated form, making downstream decisions faster and more reliable. Completion is therefore directly linked to throughput, accuracy, and overall system efficiency.

Designing for completion from the start

Completion does not happen by accident. It is the result of deliberate design choices. Successful services start by mapping the user journey end to end. They identify where people drop off and why. They simplify flows, remove unnecessary steps, and test assumptions with real users.

Features are added only when they support completion, not when they complicate it. Designing for completion means treating the entire service lifecycle, from intake through decisioning, as a single experience rather than a series of disconnected screens.

Services that work in the real world

Digital services are only valuable when people can and do use them successfully. By focusing on clear flows, fewer steps, and intelligent defaults, government agencies can design services that people actually complete. This is especially critical for high stakes interactions where failure has real consequences.

Completion is the outcome that matters most. Designing for it is how digital services deliver on their promise.

About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.


Dock

Unified Identity: What It Is, Why It Matters, and How It Improves Security


Unified identity is about making identity reusable instead of repeatedly re-created.

Today, identity is fragmented across systems, channels, departments, and partners. Each interaction forces users to re-enter the same information, repeat the same identity checks, and create new credentials, even when the organization already has trusted identity data on file.

From the outside, this feels like friction. Behind the scenes, it creates risk. Fragmented identity expands attack surfaces, multiplies credentials, and forces teams to compensate with weaker controls like passwords, OTPs, and knowledge-based authentication.

This isn’t because organizations lack identity tools. It’s because most identity infrastructure was designed to work inside individual systems, not across an entire ecosystem.

Unified identity changes that model. Instead of adding yet another identity layer, it focuses on unifying the trusted identity data organizations already hold so it can flow securely across systems, channels, and partners, creating one unified identity experience rather than disconnected ones.

In the sections below, we’ll explain what unified identity really means, why fragmented identity increases both risk and friction, how unified identity improves security, and what a modern unified identity platform looks like in practice.


Business Wallets and Organizational Identity: What the EWC Pilots Revealed


During one of our webinars, Esther Makaay, VP of Digital Identity at Signicat, presented an overview of the organizational identity findings from the European Identity Wallet Consortium’s Large-Scale Pilots, based on two years of implementation and testing. 

Unlike personal wallets, business wallets introduce a different set of requirements, structures, and challenges. The EWC pilots provided early insight into how these could function in real operational environments.

Here’s what we learned:

Friday, 19. December 2025

Shyft Network

Shyft Network 2025: Expanding in the Fastest-Growing Crypto Market


2025 was our strategic breakthrough year. Veriscope became the preferred Travel Rule solution across all major VASP categories in India — the world’s second-largest crypto market by volume and #1 in grassroots adoption.

Eight integrations — seven in India, spanning million-user platforms to the nation’s first outlet exchange, plus international expansions — demonstrated that our cryptographic proof-based technology scales seamlessly across business models, user bases, and jurisdictions.

India’s momentum validated our strategic focus: #1 in Chainalysis’ 2025 Global Crypto Adoption Index, over 75% of activity from non-metro regions (CoinSwitch India’s Crypto Portfolio 2025), and strong projected growth in the exchange platform market toward USD 7.5 billion by 2030 (Grand View Research).

India — Why It Became a Shyft Strategic Priority

The fundamentals aligned perfectly. India topped the 2025 Chainalysis Adoption Index, driven by widespread grassroots engagement. Non-metro regions powered over 75% of trading activity, with Tier 2 cities at 32.2% and Tier 3/4 cities at 43.4% of user bases (CoinSwitch 2025 report). Uttar Pradesh emerged as the top state for crypto investment, signaling adoption spreading deep into Bharat.

India’s Virtual Asset Service Provider ecosystem demonstrated regulatory readiness, making it ideal for Travel Rule infrastructure deployment.

Regulatory maturity added tailwinds: India’s FATF regular follow-up status reflects advanced risk mitigation, and VASPs have proactively embraced Travel Rule measures ahead of full mandates.

We targeted this high-growth environment during regulatory evolution — where early, privacy-preserving compliance infrastructure creates lasting advantages.

Key Results: Depth Across the Ecosystem

We partnered deliberately across India’s full VASP spectrum:

Unitic (February) — Set the standard for non-custodial, cryptographic proof compliance in dynamic markets.
SunCrypto (April) — 1M+ users; an established platform choosing Veriscope for proactive growth.
Inocyx (April) — FIU-registered innovator proving compliance fuels technical advancement.
GetBit (May) — Mobile-first design ideal for non-metro expansion as activity shifts to smaller cities.
IN1 (September) — VASP-licensed across 35+ countries; unified fintech bridging crypto and traditional finance.
Nowory (September) — Launched August 2025 with Veriscope built-in — compliance as core architecture from day one.
Fincrypto (November) — India’s first hybrid outlet exchange; uniform Travel Rule coverage across digital and physical locations to build trust in Tier 3/4 regions.

Internationally:

Endl (November) — Stablecoin neobank with cross-border payment rails; same tech powering institutional flows.

These eight integrations covered every key category — proving User Signing delivers scalable, privacy-first compliance everywhere.

The Core Technology: User Signing on Veriscope

Veriscope’s breakthrough: VASPs request cryptographic proofs directly from users’ non-custodial wallets. No centralized sensitive data. No privacy trade-offs. Pure blockchain-native FATF compliance.

This turns regulation into an advantage — reducing friction while embedding trust.
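As a generic illustration of the underlying idea, not Veriscope's actual API, the sketch below shows how a VASP could verify that a user controls a non-custodial wallet address by checking a signature over a challenge. It uses ethers' verifyMessage for an EVM-style address; the challenge format and flow are hypothetical.

```typescript
// Generic illustration of the underlying idea, not Veriscope's actual API:
// the VASP issues a challenge, the user signs it with their non-custodial wallet,
// and the VASP verifies the signature to prove control of the claimed address
// without collecting or storing any additional sensitive data.
import { verifyMessage } from "ethers"; // ethers v6; works for EVM-style addresses

function proveAddressOwnership(
  claimedAddress: string,
  challenge: string, // e.g. a one-time nonce issued by the VASP for this transfer
  signature: string  // produced by the user's wallet when signing the challenge
): boolean {
  const recoveredAddress = verifyMessage(challenge, signature);
  return recoveredAddress.toLowerCase() === claimedAddress.toLowerCase();
}

// Usage (values are placeholders): release the transfer only once ownership is proven.
// const ok = proveAddressOwnership(beneficiaryAddress, "challenge-83f2", signatureFromWallet);
```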

Strategic Win: Competitive Ecosystem Coverage

Network effects accelerated: Every new VASP makes the ecosystem stronger. Fincrypto transactions with Nowory, or GetBit data shares with SunCrypto, happen instantly and privately on shared infrastructure.

Diverse coverage validated universality: From traditional exchanges to mobile apps, fintech bridges, hybrid models, and greenfield launches — Veriscope works because it solves core challenges.

International scalability followed naturally: India’s complex environment prepared us for global payment rails like Endl without rework.

Compliance-first launches like Nowory show the market’s evolution — platforms building readiness upfront as adoption moves to states like Uttar Pradesh.

Early positioning paid off: As India leads global adoption and non-metros drive growth, our partners gained a structural edge in crypto compliance over those who retrofit later.

The Path Forward: Growth Meets Regulatory Maturity

The infrastructure we built in India becomes the template for markets worldwide, entering their regulatory maturation phase. As more jurisdictions adopt FATF guidelines and VASPs face the build-versus-retrofit decision, network effects will accelerate faster than linear growth suggests.

The market is splitting: platforms building compliance into their foundation versus those bolting it on under pressure. That gap widens in 2026.

We’re positioned where the next wave of growth meets regulatory clarity — ready to serve markets that recognize privacy-preserving compliance as competitive infrastructure, not regulatory burden.

About Shyft Network

Shyft Network powers trust on the blockchain and economies of trust. It is a public protocol designed to drive data discoverability and compliance in blockchain while preserving privacy and sovereignty. SHFT is the network’s native token and fuel.

Shyft Network facilitates the transfer of verifiable data between centralized and decentralized ecosystems. It sets the highest crypto compliance standard and provides the only frictionless Crypto Travel Rule compliance solution while protecting user data.

Visit our website to read more, and follow us on X (formerly Twitter), GitHub, LinkedIn, Telegram, Medium, and YouTube. Sign up for our newsletter to keep up-to-date on all things privacy and compliance.

Book your consultation: https://calendly.com/tomas-shyft or email: bd@shyft.network

Shyft Network 2025: Expanding in the Fastest-Growing Crypto Market was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thales Group

S3NS announces SecNumCloud qualification for PREMI3NS, its trusted cloud offering


● PREMI3NS, S3NS’ (pronounced “sense”) trusted cloud offering, has now received ANSSI’s SecNumCloud qualification, meeting the most stringent protection requirements against extraterritorial laws in France and Europe

● The fruition of the partnership between Thales and Google Cloud enables organizations from the private and public sectors to innovate and transform with one of the broadest ranges of managed services in a trusted cloud environment

● Early adopters of S3NS include companies from the insurance, manufacturing, healthcare and finance industries

ANSSI delivered the SecNumCloud 3.2 qualification for S3NS' PREMI3NS offering, which met all of its requirements and passed all three milestones of the qualification process.

S3NS, a subsidiary of Thales in partnership with Google Cloud, today announced that PREMI3NS, its "Trusted Cloud" (Cloud de confiance) offering, has received the SecNumCloud 3.2 qualification delivered by the French National Agency for the Security of Information Systems (ANSSI). By meeting SecNumCloud 3.2's protection and resilience requirements, regarded as the most demanding in France and Europe, the offering provides immunity from non-European extraterritorial laws.

With PREMI3NS, S3NS now offers businesses and public sector organizations the most extensive cloud service among the offerings that have received the SecNumCloud 3.2 qualification. PREMI3NS integrates the most advanced IaaS and PaaS technology from Google Cloud.

" The SecNumCloud 3.2 qualification is the result of the unparalleled collaboration between two cloud and cybersecurity leaders. It opens new opportunities for the French and European markets. Never has a SecNumCloud 3.2 - certified cloud offering included such a wide range of managed services. PREMI3NS will enable its customers to innovate, optimize, and transform with utmost confidence and security with their most sensitive applications. As a matter of fact, Thales group has chosen S3NS for its own IT and its sensitive engineering." said Christophe Salomon, Deputy CEO, Secure Information and Communication Systems, Thales.

The SecNumCloud 3.2 qualification results from the original strategic partnership between Thales and Google Cloud and the creation of S3NS in 2022. It affirms their combined ambition to offer an unparalleled solution on the market, now available to all public and private organizations. With this qualification, S3NS, a company operating under French law and fully controlled by Thales, fulfills its commitment to deploying the most feature-rich cloud offering meeting the SecNumCloud 3.2 requirements on the market within three years of its creation.

PREMI3NS is operated and managed exclusively by S3NS employees in data centers located in France. All cloud technologies and their updates are quarantined, analyzed and then validated by S3NS before the company manages them in its dedicated infrastructure.

The SecNumCloud 3.2 framework is the most demanding standard for cloud security in Europe. France is currently the only country to require its public sector organizations to comply with its requirements when managing sensitive data, as the government commits to guaranteeing French citizens the optimal protection of their data.

Organizations are already choosing S3NS

PREMI3NS, now SecNumCloud-qualified, has been accessible for several months as part of S3NS’ “early adopters” program and tested by about thirty pioneering customers. S3NS is currently supporting insurance companies (MGEN, Matmut, AGPM), companies from the manufacturing industry (Thales, Birdz, a subsidiary of Veolia), the financial sector (Qonto, BConnect) and services (Club Med) as they progressively migrate to the "Trusted Cloud" and leverage the combined expertise of Thales and Google Cloud. EDF selected S3NS for the storage, processing, and valorization of the Group's strategic data, and Thales is already using PREMI3NS for its internal information system and its engineering.

The broadest range of cloud services with the SecNumCloud qualification on the market

PREMI3NS offers a large portfolio of IaaS, PaaS, and CaaS services, allowing organizations to operate their most sensitive applications in a high-performance and trusted environment. The offering revolves around fundamental and proven Google Cloud technological components, such as Compute Engine for virtual machine management, Cloud Storage for data storage, and Cloud SQL for relational databases. This robust foundation provides access to all the capacity, innovation, and robustness of the cloud through advanced managed services, including Google Kubernetes Engine for containerization, BigQuery for the market-leading, serverless, and highly scalable data warehouse preparing for an easy transition to AI, as well as cutting-edge solutions for network and interconnection management.

This extensive service portfolio will continue to grow in the coming months with S3NS notably preparing the integration of generative artificial intelligence solutions, and reaffirming its commitment to providing its customers with constant access to the most innovative technologies, within a trusted framework.

About S3NS

An alliance between Thales, a global leader in data protection and cybersecurity, and Google Cloud, a global leader in cloud technologies, S3NS offers public institutions and private companies, concerned about further protecting their most sensitive data, highly secure public cloud offerings to operate their transition to the trusted cloud, meeting the criteria of the ANSSI SecNumCloud framework. S3NS is a company under French law entirely controlled by Thales.

TO KNOW MORE

https://www.s3ns.io/en

S3NS Cloud de Confiance enters General Availability

s3ns.io/actualite/s3ns-annonce-qualification-sec-num-cloud


Thales to modernise AAC Air Traffic Control Centre in Panama


18 Dec 2025

Thales to implement a full suite of advanced air traffic management solutions to enhance the Autoridad Aeronáutica Civil de Panamá (AAC)’s operational capabilities. This project, which leverages Thales’s 40+ years of experience in air traffic management (ATM) across Central America and the Caribbean, reinforces the safety and efficiency of regional aviation. The upgraded systems will enable AAC to effectively manage the anticipated growth of more than 20 million passengers in the coming years.

Thales to modernise the Autoridad Aeronáutica Civil de Panamá (AAC)’s Air Traffic Control Centre in Panama © Thales

Thales will modernize the AAC’s Air Traffic Control Centre. This project includes a full suite of advanced solutions, such as TopSky - ATC, AMHS, AIS, eAIP/AIXM, as well as a Voice Communication System (VCS), a billing system and voice recording systems (VRS). Thales will work closely with its local partner, SOFRATESA, as part of a consortium to implement this project.

The AAC seeks to establish a safe, reliable, efficient, and sustainable civil aviation system that will strengthen Panama’s global prestige in aviation. The country has seen remarkable growth, with a record 14.8% increase in aircraft movements in 2024, totalling 152,813 operations compared to 133,084 in 2022. With this system modernisation, Panama consolidates and strengthens its regional leadership as the Air Connection Centre of the Americas, serving as a bridge between America, Europe, and other destinations. Flight movements are projected to exceed 200,000 annually and passenger traffic to exceed 20 million in the coming years.

To support this growth, Thales will ensure efficient air traffic management throughout the Panama Flight Information Region. This partnership builds on Thales’s deep roots in the region, where it celebrates 40 years in Central America and the Caribbean, and over a decade supporting Panama with air traffic control centres and navigation systems.

Thales has also pioneered modernisation projects throughout Latin America and the Caribbean. The Group’s Air Traffic Management Integration & Service Centre in Mexico City provides support and skills development in the region and will be instrumental in this contract.

“This project, resulting from the alliance with Thales and Sofratesa, strengthens the institution’s firm commitment to operational safety, improves efficiency in air traffic management, and promotes the implementation of cutting-edge technologies, in line with the growth of the aviation sector and the highest international standards.” Capt. Rafael Bárcenas, Director General of the Civil Aviation Authority of Panama.
“This collaboration marks a significant step in the continued modernisation and sustainable growth of Panama’s civil aviation sector, underpinned by Thales’s proven technology and enduring partnership commitment.” Youzec Kurp, Vice President of Airspace Mobility Solutions, Thales.

With a strong presence in Latin America, Thales plays a leading role in shaping the region’s air traffic management infrastructure with the deployment of more than 220 radars, 500 navaids, and 30 Air Traffic Control Centres, supporting safe and efficient skies across the continent. The Group also pioneered the world’s first 100% solar powered Air Traffic Control radar station located in Calama, in the Chilean Desert. This extensive presence underscores Thales’s commitment to innovation, sustainability, and long-term partnerships with aviation authorities across the region.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Thales in Latin America

With six decades of presence in Latin America, Thales is a global technology leader in the Defense, Aerospace, Cybersecurity, and Digital sectors. The group invests in digital and "deep tech" innovations – Big Data, artificial intelligence, connectivity, cybersecurity, and quantum technology – to build a future we can all trust.

​The company has 2,500 employees in the region, distributed across seven countries – Argentina, Bolivia, Brazil, Chile, Colombia, Mexico, and Panama – and has ten offices, five factories, and service and engineering centres in all the sectors in which it operates.

​Through strategic partnerships and innovative projects, Thales in Latin America drives sustainable growth and strengthens its ties with governments and public and private institutions, as well as airports, airlines, banks, and telecommunications and technology companies.


auth0

Why MCP’s Move Away from Server-Sent Events Simplifies Security

Discover why the Model Context Protocol (MCP) deprecated Server-Sent Events (SSE) for Streamable HTTP and how this shift enables stronger authentication, standard CORS policies, and secure session management.

Build an AI Assistant with LangGraph, Next.js, and Auth0 Connected Accounts

Learn how to build a tool-calling AI agent using LangGraph, Next.js, and Auth0. Integrate GitHub and Slack tools using Connected Accounts for Token Vault.

Thursday, 18. December 2025

auth0

Bulletproof Hosting Defense: Mitigating the Proxy Threat with Threat Intelligence

Learn how to mitigate the risks of Bulletproof Hosting providers using Auth0 Actions and threat intelligence signals to secure your identity infrastructure.

Elliptic

Cartels are using cryptoassets to move drug proceeds. Blockchain intelligence can stop them.

Key takeaway: When cartels launder money with cryptoassets, every transaction leaves a trace. With the right blockchain data and intelligence, government agencies can trace these flows, identify individuals and disrupt cartel operations.



ComplyCube

Why Multi-Bureau Identity Verification is the Ultimate Fraud Defense


Multi-bureau identity verification helps onboarding by validating customer data across trusted sources. Using 1+1 and 2+2 checks, it ensures regulated businesses reduce fraud, meet KYC requirements, and onboard customers with confidence.

The post Why Multi-Bureau Identity Verification is the Ultimate Fraud Defense first appeared on ComplyCube.


Thales Group

Thales strengthens its support to the Italian Navy with the new Navy Service Centre in Taranto and two multi-year support contracts


17 Dec 2025

The collaboration between Thales and the Italian Navy is intensifying: Italian FREMM (Multipurpose Frigate) class frigates are now supported by a new maintenance centre located at the Taranto Naval Base. The relationship is further consolidated by the signing, through OCCAR and Orizzonte Sistemi Navali, of a 4-year logistic support contract covering the maintenance of the sonar suites1 and electronic warfare solutions2 installed on the FREMM class units, under the broader Through Life Sustainment Management (TLSM2) support contract. An additional contract was formalized by Thales, as the lead company of a temporary grouping of companies (RTI) with Leonardo, to provide 18-month operational support for the sonar systems on board the GAETA-class minehunters, enhancing response effectiveness and timeliness.

Italian Navy frigate Marceglia © marina.difesa.it

Thales and the Italian Navy are further consolidating their cooperation in Italy with the opening of a modern service centre at the Taranto Arsenal, dedicated to the maintenance of sonars and equipment installed on the Italian Navy vessels.

This new facility enables the Italian Navy to cut intervention times by half on Thales systems, thanks to the deployment of highly qualified local experts and technicians. In addition, on-the-job training will be provided at this facility to Navy personnel on the maintenance of all supported systems, thus facilitating the transfer of specialized skills directly in the field.

Thales has invested in locating skilled personnel and technical resources within Italy, positioning them where they are most needed - right on the waterfronts of the major naval arsenals. This is a process of incremental and customized localization and proximity, transferring locally a strategic capital of technical expertise and technological instrumentation, at the service of Italy’s Defence.

The surveillance performance and reliability of Thales sonar systems in detecting and classifying submarines and underwater objects is now matched by a highly effective, responsive and local Logistic Support capability.

Thales reaffirms its commitment in supporting the Italian Navy, which in turn confirms its trust in the Group by assigning a dedicated infrastructure within the Arsenal of Taranto for the FREMM Through Life Sustainment Management programme.

This cooperation is further strengthened with the signing of two multi-year maintenance and support contracts: the first for FREMM Class frigates dedicated to the maintenance of the Integrated Sonar Suite and electronic warfare solutions2, and the second for Gaeta Class minehunters, dedicated to the maintenance of the Variable Depth Sonar on board.

“We are extremely proud to work alongside the Italian Navy, helping to deliver faster and more efficient support services. We thank the Italian Navy for granting access to the seafront facility at Taranto Arsenal, where our Italian technical team operates. This hub enables us to carry out all levels of maintenance, from straightforward component replacements to the most complex tasks. With this initiative, we underscore our strong commitment and the value of Italian know-how in service of Italian Defence.” said Donato Amoroso, Thales Italia CEO & Country Director.

1 ISS - Integrated Sonar Suite

2 EWS/CESM - Electronic Warfare System/Communication Electronic Support Measures

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.


auth0

Auth0 My Account API: Let Users Manage Their Own Account

Learn how the Auth0 My Account API securely enables client-side, self-service user management, eliminating the need for server-side proxies for features like passkey enrollment and account linking.

FastID

From AI Crawlers to Headless Bots: How Automated Traffic is Changing the Web

Bots now drive nearly a third of web traffic. Learn how AI crawlers and headless bots are reshaping security, performance, and business decisions.

Wednesday, 17. December 2025

Indicio

Juniper Research names Indicio as a “Disruptor and Challenger” in 2025 Digital Identity Leaderboard

The post Juniper Research names Indicio as a “Disruptor and Challenger” in 2025 Digital Identity Leaderboard appeared first on Indicio.
As digital identity threats evolve, independent market analysis shows Verifiable Credentials is the most powerful way to defend against AI deepfakes, synthetic IDs, and document fraud. Indicio’s inclusion in the report points to the strength of its market-leading decentralized identity solution, Indicio Proven®, and the pace of customer deployment.

By Helen Garneau

The 2025 Juniper Research Digital Identity Market 2025–2030 Competitor Leaderboard evaluates how identity solutions are built, deployed, and used in practice, and it has ranked Indicio high on the Product and Positioning axis, thanks to Indicio’s groundbreaking combination of authenticated biometrics and document validation, multi-credential interoperability, and global partnerships.

The report also explores the market’s move toward deployable, standards-based identity infrastructure, emphasizing the importance of interoperability, credential issuance, customer adoption, scalability, and long-term viability.

This placement reflects Juniper’s recognition of Indicio as a key provider of verifiable identity solutions and infrastructure. With multiple customer deployments this year, the report signals that Indicio’s approach to verifiable identity resonates with how the market is implementing fraud-resistant decentralized identity systems to combat the rise of AI-driven digital fraud.

What Juniper’s report reveals about the digital identity market

Juniper’s research shows that digital identity has crossed an important line. The space is no longer defined by pilots, proofs of concept, or isolated experimentation: governments, banks, travel providers, and digital platforms are actively deploying decentralized identity systems and relying on them in live environments. That shift signals real commitment and marks the move from exploration to execution.

The driving force behind this acceleration is fraud. The report is clear that legacy approaches built on static identifiers, shared secrets, and repeated data collection are struggling to keep up with AI-enabled threats like deepfakes and synthetic identities. As fraud becomes harder to detect and more costly to manage, organizations are being pushed to adopt identity models that are verifiable, reusable, and harder to compromise at scale.

Juniper also highlights two patterns shaping how adoption is unfolding. The first is pragmatism. Digital identity systems, including Verifiable Credentials, are being layered into existing environments rather than replacing them outright. This allows organizations to increase assurance and reduce risk without disrupting workflows that already function. 

The second is interoperability. Identity only works when it can move across vendors, sectors, and borders. Fragmented systems limit value and create gaps that attackers exploit, while shared standards and cross-platform compatibility are what make digital identity viable at scale. 

Indicio’s role in this market shift

Indicio Proven® enables organizations to issue cryptographically secure digital credentials derived from trusted sources that can be bound to the holder of the credential with authenticated biometrics.

Unlike other vendors, the credentials issued with Indicio Proven can be independently verified, shared selectively, and used across systems without relying on centralized databases. This means that a person can carry an authenticated copy of their biometrics and present it for verification without a verifier having to cross-check it against a stored version. This radically simplifies the infrastructure around biometric verification, removes the security risks and compliance challenges of storing biometric data, and provides a way for people to easily prove they aren’t a deepfake. 
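
To make “independently verified” concrete, here is a generic sketch (not Indicio’s API) of a verifier checking an issuer’s signature locally with the Web Crypto API. The verification key is assumed to be obtained out-of-band, for example from a trust registry, so no central database is consulted at verification time.

```typescript
// Generic sketch of "independently verifiable" (not Indicio's API): the verifier
// checks the issuer's signature locally against a public key obtained out-of-band,
// for example from a trust registry, with no call to a central database.
async function verifyCredentialSignature(
  credentialBytes: Uint8Array,   // canonicalized credential payload
  signature: Uint8Array,         // issuer's signature over that payload
  issuerPublicKey: CryptoKey     // assumed ECDSA P-256 verification key
): Promise<boolean> {
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    issuerPublicKey,
    signature,
    credentialBytes
  );
}
```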

Verifiable Credentials with authenticated biometrics represent the highest possible digital identity assurance. They can also be created in minutes, transforming KYC, remote enrollment, and account access.  

These capabilities are driving Verifiable Credential adoption in highly regulated industries like finance and banking, border crossing, and government services, where trust, privacy, and auditability matter and fraud pressure is highest.

Indicio Proven is already in use across multiple large-scale deployments in these sectors, and it also supports small and mid-sized businesses through Verifiable Credential–based authentication. With Indicio Proven Auth, small and medium enterprises can manage system access, verify vendors, support billing and payments, and improve supply chain traceability. The platform is easy to deploy, making advanced decentralized identity capabilities accessible even to the smallest organizations.

The future of digital identity — ProvenAI

Indicio’s position on the Juniper leaderboard reflects not only its leadership in the open standards and open-source technologies required to put Verifiable Credentials into production, but also the application of that technology to the emerging world of AI and autonomous systems.

Indicio ProvenAI applies decentralized identity and Verifiable Credentials to AI agents and systems. As AI systems interact with sensitive systems and personal data, being able to verify both the human user and the AI agent becomes a key fraud prevention measure. ProvenAI uses Verifiable Credentials to establish trust between users and AI agents, and between AI agents themselves. This ensures that automated interactions are authenticated and authorized, and secured against spoofing or misuse.

ProvenAI also enables permissioned data access, ensuring AI agents only use personal data when a person has given clear consent. It also supports delegated authority, allowing AI agents to securely share that data with other authenticated AI agents. This makes it possible to move trusted data across departments, internal systems, and approved external partners without losing control or accountability.

With Indicio’s decentralized governance solution, organizations now have a powerful way to create secure, autonomous systems, where each agent can be authenticated and proven to belong to the system.

Why this recognition matters

The digital identity landscape will be shaped by regulatory frameworks (EUDI, eIDAS 2.0), cross-industry adoption, and increasingly sophisticated threat vectors. Research like Juniper’s and inclusion on the leaderboard direct organizations to companies with the technology that can meet these demands.

Juniper’s analysis makes it clear: Indicio’s architecture model built on Verifiable Credentials and decentralized identity is exactly what the market needs now. 

Don’t wait. Talk with one of our digital identity experts about how you can gain a competitive edge with Verifiable Credentials and build the internet of tomorrow, today.

 

The post Juniper Research names Indicio as a “Disruptor and Challenger” in 2025 Digital Identity Leaderboard appeared first on Indicio.


This week in identity

E66 - Review of Gartner IAM Texas, Blackhat Europe, Saviynt $700M funding and 2026 predictions

Keywords identity, cybersecurity, AI, Gartner, conferences, funding, innovation, resilience, OWASP, predictions Summary In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the latest trends in identity and cybersecurity, reflecting on recent conferences, particularly the Gartner Identity Conference. They explore the evolving role of identity in organizations…

Keywords

identity, cybersecurity, AI, Gartner, conferences, funding, innovation, resilience, OWASP, predictions


Summary

In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the latest trends in identity and cybersecurity, reflecting on recent conferences, particularly the Gartner Identity Conference. They explore the evolving role of identity in organizations, the impact of AI, and the importance of resilience in supply chains. The conversation also touches on funding dynamics in the identity space, the challenges of innovation versus reliability, and the newly released OWASP Top 10 for Agentic AI. As they look ahead to 2026, they share predictions about the future of identity and the cybersecurity landscape.


Takeaways

Identity is becoming increasingly central to cybersecurity strategies.

The Gartner Identity Conference highlighted key trends in identity management.

AI is reshaping the identity landscape but hasn't introduced new business models yet.

Organizations need to focus on resilience in their identity systems.

Funding in the identity space is shifting, with significant investments being made.

Innovation in identity solutions must balance reliability and customer needs.

The complexity of supply chains poses challenges for identity management.

OWASP's Top 10 for Agentic AI emphasizes the importance of identity in AI security.

2025 has been a pivotal year for identity, with increased attention and funding.

Organizations should start small and automate basic identity management tasks.


Sound bites


"Identity is at the core."

"Identity is foundational."

"Phishing isn't fixed."


Chapters

00:00 Introduction and Festive Greetings

02:04 Conferences and Events: Insights from Black Hat Europe

04:38 Gartner Identity and Access Management Conference Overview

09:32 Emerging Trends in Identity Management

12:47 Identity as Core Infrastructure

17:48 The Future of Identity in Cybersecurity

23:39 Funding and Market Trends in Identity Solutions

28:08 Navigating Customer Satisfaction and Market Dynamics

29:32 The Shift from Hypergrowth to Profitability

31:53 Investment Strategies and Market Positioning

33:37 Innovation vs. Reliability in Established Companies

36:57 The Role of Innovation Across Business Functions

38:44 The Importance of Identity in Cybersecurity

40:03 Supply Chain Vulnerabilities and Interdependencies

44:26 Resilience and Recovery in Cybersecurity

48:07 The Need for Investment in Incident Response

49:31 Privacy Challenges in the Digital Age

51:30 OWASP Top 10 for Agentic AI

55:37 Predictions for 2026: AI and Cybersecurity Trends



Elliptic

What does blockchain coverage really mean?

Key takeaway: When vendors claim blockchain coverage, what do they actually mean? Some count blockchains with partial intelligence. Elliptic only counts blockchains that meet four strict standards for full coverage.



auth0

Continuous Authorization Testing: FGA, GitHub Actions, and CI/CD

Learn how to achieve continuous confidence in your authorization logic. This article will guide you on defining OpenFGA test files and integrating them into a robust CI/CD pipeline using GitHub Actions.

FastID

IDC Study Reveals 3X Gains from Modern AppSec Programs

An IDC study reveals that modern AppSec programs achieve 3X better business outcomes and are almost 2X less likely to experience a data breach.

Inside Fastly’s 2025 Internship Program: Projects, Impact & Culture

Discover the projects, impact, and culture of Fastly's 2025 Internship Program, recently ranked #1 by Vault. Meet the next generation of engineers.

Tuesday, 16. December 2025

liminal (was OWI)

2025 Reflections: Lessons from a Year of Acceleration

2025 was a year that kept every operator on their toes. Fraud accelerated. Identity challenges became more sophisticated. Cybersecurity pressures intensified alongside AI adoption. Compliance teams faced rising scrutiny. Yet, despite the rapid pace of change, this year did not feel chaotic. It felt like the market was finally snapping into focus. For us at […]

2025 was a year that kept every operator on their toes. Fraud accelerated. Identity challenges became more sophisticated. Cybersecurity pressures intensified alongside AI adoption. Compliance teams faced rising scrutiny. Yet, despite the rapid pace of change, this year did not feel chaotic. It felt like the market was finally snapping into focus.

For us at Liminal, that focus crystallized in a meaningful milestone: the close of our $8.5M Series A led by Noro-Moseley Partners. This round was not about chasing the next idea. It was about doubling down and accelerating the product and go-to-market strategy we have been building toward for years.

When we founded Liminal in 2021, we believed the future of intelligence would hinge on context scaffolding, the ability to structure, interpret, and activate information in real time as AI reshaped how decisions are made. In an AI-first world, static reports and disconnected tools fall apart. What teams need is intelligence that understands the why, not just the what.

In 2025, that thesis proved prescient. Teams are increasingly turning to Liminal to move faster, interpret market trends with confidence, and act decisively in moments that matter. The year ahead may bring more complexity, but the path forward is clearer than ever.

As we reflected on the year, one thing stood out clearly: none of the shifts below came as a surprise. Each was a prediction we outlined in our 2024 outlook, not as guarantees, but as trajectories we believed would define the next phase of the market. In 2025, those predictions became a lived reality.

What follows is not a recap of headlines. It is a scorecard, grounded in data, conversations, and execution, of how those predictions played out.

1. Fraud became more personal, adaptive, and believable

2024 prediction: AI would push fraud from scale to sophistication

In our 2024 outlook, we warned that fraud would move beyond volume-driven attacks and become deeply personalized, adaptive, and far more convincing. In 2025, that prediction materialized more quickly than most teams expected.

Fraud shifted from broad campaigns to individualized interactions powered by generative AI. Voice clones, realistic deepfakes, and multi-step impersonation schemes became common enough that fraud teams described them as routine rather than exceptional.

One moment that defined the year happened in October, when a fake YouTube livestream impersonating NVIDIA CEO Jensen Huang drew almost 100,000 viewers and promoted a QR-code crypto scheme. It captured global attention not only because of its scale, but because it showed how believable these attacks have become.

Across the market, the core signal was not simply that fraud got “smarter.” It was that it got faster. In our 2025 seminal research on the convergence of authentication and fraud prevention, 71% of buyers said they are concerned their current tools cannot stop GenAI-driven scams, and 72% said single-signal authentication methods like passwords fail against modern fraud. The same research found that 78% of buyers report recurring account takeover incidents, and 83% already use probabilistic signals like behavioral, device, or location signals, with most others planning adoption within a year.

The teams that kept pace were the ones that stopped relying on static checkpoints. They leaned into continuous risk scoring, behavioral analytics, and richer identity signals that paint a clearer picture of how a legitimate customer behaves over time. Fraud did not get easier in 2025, but the teams that invested early in multi-signal intelligence created real distance from the rest of the market.

What separated leading teams in 2025 was not access to better tools. It was access to better context, the ability to understand behavior over time, connect signals across touchpoints, and act before fraud felt obvious.

(Seminal Report: The Convergence of Authentication and Fraud Prevention, Page 39)

(Seminal Report: The Convergence of Authentication and Fraud Prevention, Page 5)

2. First-party fraud moved into the center of the P&L conversation

2024 prediction: first-party fraud would become a material revenue risk

Last year, we predicted that first-party fraud would stop being treated as operational noise and start showing up as a measurable drag on revenue. In 2025, that shift became impossible to ignore.

Retailers reported that margin pressure from returns abuse is no longer a niche operational issue. In our research, more than 80% of retailers reported that returns abuse is putting pressure on their profit margins, and nearly half have embedded fraud signals into their returns decisioning. The takeaway was consistent across leaders: controlling abuse without breaking customer experience requires identity-aware context, not blunt enforcement.

Industry-wide data reinforced the direction of travel. In the Merchant Risk Council’s 2025 Global eCommerce Payments and Fraud Report, refund and policy abuse showed broad prevalence and meaningful year-over-year increase, with only a small minority reporting no impact and many reporting material growth. The operational implication is straightforward: when post-purchase abuse becomes both common and fast-moving, teams cannot manage it as an exception process. They need continuous context across identity, behavior, and policy.

Disputes and chargebacks told the same story from a second angle. Mastercard’s chargeback outlook projects chargeback volume rising from 261 million in 2025 to 324 million by 2028, a 24% increase in three years. More disputes mean higher costs, increased ambiguity between customer friction and misuse, and greater pressure to distinguish legitimate claims from opportunistic behavior in real-time.

The meaningful progress we saw this year came from organizations that treated first-party fraud as a core revenue decision problem, not a narrow detection problem. They aligned fraud and CX teams, tightened policy frameworks, improved segmentation, and introduced identity-aware controls that could differentiate misuse from legitimate behavior. This was the year first-party fraud gained its place on the leadership agenda.

3. AI governance and data access control became operational necessities

2024 prediction: AI governance would move from policy to production

In 2024, we predicted that AI governance would quickly move from policy decks to day-to-day operations. In 2025, that transition became unavoidable.

As AI agents transitioned from experimentation to production workflows, organizations faced new risks associated with data exposure, model behavior, and access control. In our Link Index for AI Data Governance, 94% of practitioners cited safeguarding sensitive data as their top AI concern, and more than three-quarters stated that traditional security controls cannot keep pace with AI’s rapid adoption. The signal was clear: governance is becoming inseparable from execution, because AI is now embedded directly into workflows that touch sensitive systems and decisions.

Public incident data reinforced the urgency behind that shift. Harmonic’s analysis of enterprise prompts to widely used LLMs found that 8.5% of employee prompts contained sensitive data, with customer data comprising the largest share of leaked content. In practical terms, the risk is no longer theoretical; it is now a reality. The “prompt layer” has become a significant data-loss pathway, and its impact scales with adoption.

Inside the enterprise, the response has begun to formalize. Internal audit and risk organizations are increasingly turning to structured guidance for evaluating AI governance, controls, and oversight. The Institute of Internal Auditors’ AI Auditing Framework is one example of how rapidly the assurance layer is catching up to production reality. In parallel, government and oversight bodies have also elevated AI risk management for critical systems, including guidance tied to critical infrastructure and expectations for risk assessment.

In practice, governance in 2025 was less about restriction and more about confidence: confidence that decisions made by humans and machines alike are grounded in accurate, traceable, and well-scaffolded context.

4. KYC, AML, and financial crime teams made progress while facing higher stakes

2024 prediction: Continuous monitoring would replace point-in-time compliance

Last year, we anticipated that point-in-time compliance would no longer be sufficient to counter AI-driven fraud and synthetic identity threats. In 2025, that prediction came to fruition across KYC and AML programs.

Deepfakes and synthetic identities made onboarding harder than at any point in the past decade. Nearly 87% of organizations said they are not fully prepared for deepfake-enabled onboarding threats. At the same time, continuous monitoring gained credibility, with 72% expecting to adopt perpetual KYC within two years. The operating model is shifting from one-time clearance to sustained confidence.

Synthetic identity fraud accelerated beyond expectations. The Federal Reserve Bank of Boston has highlighted that losses tied to synthetic identity fraud surpassed $35 billion in 2023 and that generative AI is a meaningful accelerant, making synthetic identities easier to create and harder to detect. That dynamic is a direct driver of why compliance teams are moving toward continuous monitoring and cross-institution context, especially as attackers test synthetic personas across multiple institutions.

On the execution side, the push toward automation is not simply about efficiency. It is about survivability in terms of volume and complexity. BCG has noted that KYC and compliance processes can represent a material share of operating costs in financial institutions, and that AI-enabled approaches can reduce KYC compliance costs substantially when implemented correctly. The strategic implication is that automation without governance creates exposure, but governance without automation creates bottlenecks.

The most effective programs treated intelligence as a living system: continuously refreshed, context-rich, and capable of evolving in response to emerging threats.

5. Capital returned with clearer priorities and a higher bar

2024 prediction: capital would return, but discipline would define winners

In our 2024 outlook, we predicted that capital would re-enter fraud, identity, and cybersecurity, but with far greater selectivity. That prediction held.

Public market and venture data points to a simple pattern: capital remained available in 2025, but it concentrated around higher-conviction bets and clearer narratives. KPMG’s Venture Pulse shows that global venture investment decreased from Q1 to Q2 2025, reflecting a more cautious environment despite continued activity. Crunchbase similarly highlighted that Q2 2025 funding remained substantial, but with large portions concentrated in fewer, larger rounds and in AI-related categories. The message to founders and operators was consistent: differentiation and execution mattered more than momentum.

Strategic activity reinforced the same theme. Buyers and acquirers were not consolidating categories for optics. They were filling capability gaps, especially where intelligence can be operationalized and embedded. Mastercard’s launch of its Threat Intelligence solution illustrates the direction of travel: in announcing the product, Mastercard cited that 60% of global fraud leaders are not notified of a breach until after losses begin, a framing that elevates real-time, actionable intelligence as an operational requirement, not a “nice to have.”

The market was not quiet in 2025. It was discerning. Companies that could translate intelligence into execution stood out. Those without a clear story struggled.

(Q2 2025 Market & Investment Trends Report, Page 16 and 18)

(Q2 2025 Financial Crime & Compliance Market & Investment Trends Report, Page 18)

6. Age assurance moved from debate to implementation

2024 prediction: age assurance would shift from discussion to enforcement

Few predictions materialized as cleanly as age assurance. In 2024, we expected global momentum. In 2025, enforcement began.

In Australia, a social media minimum age framework moved into implementation. From December 10, 2025, age-restricted platforms must take reasonable steps to prevent Australians under 16 from creating or keeping accounts, alongside an age assurance trial and supporting guidance. In the UK, the Online Safety Act created legal duties to protect users and children online, with age assurance explicitly positioned as a core mechanism for preventing children from accessing pornography and other harmful content.

In the U.S., the Supreme Court’s decision in Free Speech Coalition, Inc. v. Paxton upheld a Texas law requiring certain websites with sexually explicit content to verify that visitors are 18 or older, signaling continued momentum for state-level age restrictions and verification requirements.

Age assurance is no longer a future requirement. It is now an implementation cycle. Platforms must operationalize age checks, navigate regional compliance requirements, and strike a balance between user experience, privacy expectations, and regulatory risk.

As enforcement ramps up in 2026, the organizations that will succeed will be those that contextualize regulation, risk, and user experience in real-time, rather than treating age assurance as a static compliance checkbox.

Looking Ahead

Each of the shifts we reflected on this year followed the same arc. They began as early signals, evolved into visible patterns, and ultimately reshaped how teams made decisions in real time. That progression reinforces a belief we have held since Liminal’s earliest days: intelligence only creates value when it is contextual, timely, and designed for action.

As AI compresses decision cycles and increases the cost of error, organizations are discovering that data alone is not enough. What they need is context scaffolding, the connective tissue that explains what matters, why it matters now, and what to do next. Static reports, point-in-time assessments, and disconnected tools consistently failed teams in 2025 when speed, confidence, and coordination were required.

This insight directly shapes how we think about what comes next. The predictions we have outlined for 2026 build on what this year confirmed: intelligence is no longer a reference point. It is becoming a native layer inside the systems teams use every day, embedded across fraud workflows, identity journeys, product decisions, and go-to-market execution. The organizations that win will not be those with the most tools, but those with intelligence that compounds, improving with every signal, interaction, and decision.

Looking ahead, the pace of change will only accelerate. Fraud actors will become more adaptive. Identity will move from checkpoints to continuous evaluation. Governance will become inseparable from automation. Capital will continue to reward platforms that unify workflows and translate intelligence into execution.

The teams that succeed in 2026 will not simply react faster. They will act with clarity and purpose. And clarity comes from intelligence that is dynamic, contextual, and embedded, where decisions actually happen.

Travis and Jennie

Co-Founders

The post 2025 Reflections: Lessons from a Year of Acceleration appeared first on Liminal.co.


Spherical Cow Consulting

Two APIs Walk Into a Browser: FedCM vs. the DC API

In this episode of The Digital Identity Digest, Heather Flanagan explores how two emerging browser APIs—FedCM and the Digital Credentials API—are reshaping the identity layer of the web. Learn why browsers are shifting from passive intermediaries to active participants as privacy reforms and regulatory pressure accelerate. Discover how these APIs differ in governance, user experience, and architecture…

“The web is in the middle of a significant redesign, one that touches on architecture, governance, politics, and more.”

For years, browsers were passive conduits, rendering pages, storing cookies, and quietly stitching together the signals that made identity flows work. That passivity is gone.

Privacy reforms have pushed browsers into new roles. Safari and Firefox got there early with cross-site tracking restrictions; Chrome’s initial stance on the third-party cookie phase-out forced everyone else to pay attention. But while “cookiepocalypse” is what first brought the Federated Credential Management API (FedCM) into the spotlight, the story is more complicated. Google stepped back from full cookie removal after UK regulators raised concerns that deprecating third-party cookies would give Google an unfair advantage in online advertising. Even with that pause, the Chrome team still had a clear problem to solve: federated sign-in was vulnerable, insufficiently privacy-preserving, and overly dependent on tracking-era mechanics.

So instead of treating FedCM as a bandage for a disappearing feature, the team reframed it as an opportunity to create a better user experience—one that would not require hidden redirects, ambient signals, or cross-site cookies to function safely. Privacy and usability, rather than cookie deprecation alone, became the primary justification for the API.

In parallel, entirely new ecosystems around verifiable digital credentials are emerging, and the Digital Credentials API (DC API) is meant to be the web-layer bridge for them. That puts us in an unusual moment: two identity APIs, developed for two very different purposes, landing at roughly the same time, each reshaping the browser’s role in digital identity.

And so the question becomes: how do these APIs relate to each other, and what happens if they collide?


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

FedCM: a user-experience upgrade disguised as a structural repair

FedCM started life because the underlying assumptions of the web were changing. Third-party cookies—one of the invisible building blocks used by various federated identity flows—were going away. But as people dug into the problem, they realized that federated sign-in wasn’t just at risk; it was uncomfortable for users, leaky for privacy, and built on mechanisms never intended for this purpose.

FedCM’s goal, especially after some time examining the cookie problems, moved beyond “keep federated login working.” The idea now is to give users an experience that is safer, clearer, and less ad-tech-shaped.

To do that, FedCM inserts the browser as an active intermediary between:

the Relying Party, or RP (the site requesting the sign-in), the Identity Provider, or IdP (the service supplying the identity), and the user (the person deciding whether to allow that relationship).

The browser becomes responsible for:

showing a consistent, browser-owned account selection interface, protecting users from silent tracking, enforcing permission steps, and ensuring both parties only learn about each other with explicit user consent.

This makes FedCM a smart API, not in the “AI” sense, but in the architectural one:

Smart = the browser makes decisions

A smart API:

enforces rules, manages UX, maintains state, shapes flows, and carries explicit opinions about what good privacy and usability look like.

Right now, Chrome is the only browser implementing FedCM, which is promising and precarious in equal measure. The ecosystem needs more than one implementer if this is going to become foundational. But at least we’re getting real-world experience on how this could work. And that said, I don’t want to ignore the fact that quite a few people do NOT want browsers to be active mediators in the web experience, especially when it comes to digital identity-related actions, but I think that ship has probably sailed.
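
To ground that in code, here is a minimal sketch of what an RP-side FedCM call looks like today. The configURL, clientId, and helper name are placeholders, and the cast is needed because the identity option is not yet part of the standard TypeScript DOM typings.

```typescript
// Minimal RP-side sketch of a FedCM sign-in request (Chrome-only today).
// The configURL and clientId values are placeholders; the real values come from
// the IdP's FedCM configuration and the RP's registration with that IdP.
async function signInWithFedCM(): Promise<string | undefined> {
  const credential = await navigator.credentials.get({
    identity: {
      providers: [
        {
          configURL: "https://idp.example/fedcm.json", // hypothetical IdP config file
          clientId: "my-rp-client-id",                 // issued to the RP by the IdP
          nonce: crypto.randomUUID(),                  // bound into the token for replay protection
        },
      ],
    },
  } as any); // `identity` is not yet in the standard DOM typings

  // The browser shows its own account chooser and only returns a token after
  // explicit user consent; the RP validates the token server-side.
  return (credential as any)?.token;
}
```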

The Digital Credentials API: a deliberately minimal transport layer

Where FedCM is opinionated, the DC API is intentionally unopinionated. I wrote about that a few weeks ago: “Digital Identity Wallet Standards, the DC API, and Politics“.

It enables a website to:

request the presentation of a digital credential, and request the issuance of a digital credential.

And then it steps aside.

The DC API does not tell anyone:

which credential format to use, which presentation or issuance protocol should be followed, what a wallet should look like, how selective disclosure should work, or what verification actually means.

Instead, the DC API offers a security boundary and a transport mechanism. Basically, it is a very small window through which appropriately structured requests and responses can pass.

This is why people call it a “dumb pipe.”

Dumb pipe = the browser transmits with consent, but does not judge

A dumb pipe:

does not choose the wallet, does not choose the credential, does not evaluate trustworthiness, does not interpret the content, and has no opinion about the structure or semantics of the data.

Its design goals—transparent requests, encrypted responses, user activation, wallet selection UX delegated to the platform—reflect this philosophy.

It is meant to be a flexible, protocol-agnostic infrastructure for a world where credentials live in wallets, and wallets live on devices that may or may not be integrated into the browser.
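
For comparison with the FedCM sketch above, here is a rough sketch of a verifier page invoking the DC API. Be warned that the member names (digital, requests, protocol, data) and the protocol identifier reflect current drafts and may well change; treat this as the shape of the idea rather than a stable API.

```typescript
// Sketch of a verifier page requesting a credential presentation via the DC API.
// Member names and the protocol identifier follow current drafts and may change;
// the cast is needed because the API is not yet in the standard DOM typings.
async function requestPresentation(openid4vpRequest: Record<string, unknown>) {
  const credential = await navigator.credentials.get({
    digital: {
      requests: [
        {
          protocol: "openid4vp",    // assumed protocol identifier
          data: openid4vpRequest,   // protocol-specific payload, passed through unmodified
        },
      ],
    },
  } as any);

  // The platform handles wallet selection and consent; the response is opaque to
  // the browser and is validated by the verifier's backend.
  return (credential as any)?.data;
}
```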

Two APIs, two philosophies

Put side-by-side, the differences sharpen:

| Area | FedCM | DC API |
| --- | --- | --- |
| Primary domain | Federated sign-in | Verifiable digital credentials & wallets |
| Browser role | Active mediator | Minimal transport boundary |
| UX responsibility | Browser-owned account chooser | Platform-owned wallet/credential chooser |
| Architectural posture | Opinionated ("smart") | Unopinionated ("dumb pipe") |
| Implementation reality | Chrome only, today | Under development; multi-vendor interest |

These are not competing APIs. They solve different problems for different ecosystems.

The overlap we’re not talking about enough

Even though FedCM and the DC API were created to solve different problems, it’s hard not to notice how easily their scopes could start touching. If a website can already ask the browser to help pick an identity provider for a federated sign-in flow, it’s not a huge stretch to imagine that same browser helping the user pick a wallet or select a credential in a verifiable credential flow. The underlying mechanics are different, but the user’s mental model isn’t: choose an account, choose a wallet, choose a credential. They’re all variations of the same action.

That overlap leads to a more delicate question: if the browser is already trusted to mediate identity selection in one domain, should it also mediate credential selection in another? Some people see this as a natural evolution—a unified identity experience across the web. Others see it as a risk, especially when FedCM itself is still stabilizing and the verifiable credential ecosystem is trying to balance innovation with regulatory caution. Treating FedCM as a convenient foundation for wallet behavior could introduce more tight coupling than the ecosystem can handle.

The result is a quiet tension, at least for me: the possibility of convergence is there, and it may even be appealing, but the consequences of moving too quickly—or of not noticing that convergence is happening implicitly—could shape the identity layer of the web in ways we don’t fully understand yet.

The political sensitivity

Identity on the web has never been just a technical matter, but the stakes are now higher than they’ve been in years. Regulators have entered the room with strong expectations about predictability, interoperability, and user protection. If they don’t get it from the technical standards, they’ll introduce regulation to require it. Platforms, meanwhile, are trying to ensure that new identity layers don’t diminish their control or introduce obligations they can’t realistically meet. Wallet vendors worry about being sidelined if browsers centralize too much of the user experience. Standards bodies, all with their own histories and governance norms, disagree about how much power should sit in the browser stack at all.

The DC API was deliberately designed to stay neutral and avoid becoming a governance surface. FedCM, however, is explicitly opinionated and user-agent driven. If those two worlds start blending—whether by design or by accident—the browser suddenly becomes the arbiter of identity behavior across both federated authentication and verifiable credentials. That shift wouldn’t just raise architectural questions; it would raise political ones. Who gets to define the rules? Who enforces them? And what happens if regulators expect consistency that the underlying specifications were never meant to guarantee?

This is why the conversation is sensitive. Changes to the identity layer ripple outward: to wallet ecosystems, to platform policies, to regulatory compliance pathways, and even to what users believe is “normal” on the web.

So where does this leave us?

We are at an interesting point where two identity systems—federated sign-in and verifiable digital credentials—are evolving in parallel, and both now rely on the browser in ways they never did before. FedCM gives the browser real authority over account selection and permissioning. The DC API gives the browser a clean, minimal boundary for credential issuance and presentation. These aren’t incompatible visions, but they do reflect different philosophies about what the browser is supposed to do.

The challenge is that the identity ecosystem is trying to absorb both shifts simultaneously. Meanwhile, only one browser has implemented FedCM. Wallet standards are still being debated across multiple forums. Regulators are writing requirements faster than implementers can test them. And users—who ultimately need to understand and trust all of this—are not clamoring for complexity. They just want to get to their newspaper subscription or social media feed. It’s a precarious landscape, and the choices made now will influence the next decade of identity design on the web.

Convergence between FedCM and the DC API might eventually happen, especially if the ecosystem wants a unified identity experience. But convergence too early, or without a clear understanding of the tradeoffs, could entangle two very different problem spaces and make it harder to deploy either one successfully.

What we should be watching

Over the next couple of years, several developments will tell us which direction things are heading. The most immediate signal will come from browser support: if additional browsers adopt FedCM, it becomes far easier to treat the API as a stable foundation rather than a Chrome-specific solution. At the same time, the level of enthusiasm—or reluctance—from wallet vendors will shape how quickly the DC API moves from “promising concept” to “reliable infrastructure.”

Regulators will also exert force here, especially those drafting rules that assume uniform behavior across wallets, platforms, and device ecosystems. If regulatory expectations outpace technical reality, we may see pressure to treat one of these APIs as the harmonizing layer by default. And finally, the standards community itself will need to decide how much responsibility the browser should carry. There’s a meaningful difference between an API that coordinates identity transactions and one that governs them, and the line between those two roles is thinner than it appears.

This is a moment when paying attention matters. The work happening now—stabilizing FedCM, maturing the DC API, clarifying responsibilities across layers—will determine whether identity on the web becomes more coherent or more fragmented. The cement is still wet, but not for long. I’m watching this from the front row as co-chair of the Federated Identity Working Group, where both APIs are going through the standardization process. I don’t want to say “wish us luck”. Instead, I want to say, “please follow along and chime in!”

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Today’s episode begins with a very bad joke — but it opens the door to a much more serious conversation about how two emerging browser APIs are reshaping identity online.

Two APIs Walk Into a Browser

[00:00:29] Two APIs walk into a bar.
The bartender asks, “You two together?”
FedCM replies, “Absolutely not. I need explicit user consent first.”
The DC API shrugs: “I’m just here to pass messages.”

Bad jokes aside, these two APIs — FedCM and the Digital Credentials API (DCAPI) — were created for very different reasons. Yet today, they’re quietly occupying neighboring territory. And whenever standards live close together, people inevitably ask:

Are they supposed to converge? Should one replace the other? How will they interact?

To answer that, we need to start at the beginning.

The End of Browser Passivity

[00:01:48] For most of the web’s history, browsers played a surprisingly passive role. They simply rendered pages, stored cookies, and moved signals between sites so identity systems could function. Browsers weren’t designed to be intermediaries — they were just convenient containers for developer hacks like third-party cookies.

But that era is ending.

Privacy reforms fundamentally changed the game:

Safari and Firefox blocked cross-site tracking early. Chrome’s (now paused) plan to remove third-party cookies forced the entire ecosystem to reconsider its identity assumptions. Regulators pushed browsers to take responsibility for what happens on their platforms.

As a result, old federated login flows — with their fragile redirects and opaque cookies — became too risky and too leaky to survive unchanged.

Enter FedCM: A Smart, Opinionated API

[00:03:01] FedCM began as a stopgap solution for the disappearance of third-party cookies. But when Google revised its approach, the API evolved into something more ambitious: a way to redesign federated login with stronger user protection.

FedCM places the browser directly in the middle of the flow:

The browser shows the account chooser. The browser prevents silent tracking. The browser enforces permissions. The browser ensures the identity provider and relying party only learn about each other when the user agrees.

[00:04:11] In short, FedCM is a smart API. Not AI-smart — architecturally smart. It makes decisions. It carries strong opinions about usability and privacy. And with Chrome as the only browser currently implementing it, FedCM is powerful but still precarious.

Some dislike the idea of browsers becoming active mediators, but that ship has sailed. Browser involvement is now part of the reality of the identity stack.

Enter the Digital Credentials API: A Minimalist, “Dumb Pipe” Approach

[00:05:00] The Digital Credentials API takes a very different approach. Where FedCM is opinionated, the DCAPI is intentionally neutral.

It allows a site to request:

Presentation of a credential, or
Issuance of a credential

…and then it steps back.

Unlike FedCM, the DCAPI does not decide:

Which credential format is used
Which protocol applies
How a wallet should behave
What trust model is appropriate

[00:06:23] Instead, it creates a narrow transport boundary — a simple, structured pipe between a website and a credential wallet. Many call it a “dumb pipe,” and in this context that’s a compliment. It stays out of the way and focuses on preventing silent or unexpected credential exchanges.

This neutrality makes it attractive for a wide range of use cases:

Government IDs
Travel documents
Employment or education credentials
Enterprise or sector-specific proofs

In other words, the DCAPI solves a different problem than FedCM.

Where the Lines Start to Blur

Even though these APIs have different roles, they sit close enough together that interesting questions emerge.

For example, the browser already helps users choose an identity provider through FedCM. It’s easy to imagine the browser helping them choose a wallet, or even selecting a credential. From a user perspective, these actions can feel similar:

Choose an account
Choose a wallet
Choose a credential

This similarity raises the question: Should these two APIs converge?

Some in the ecosystem think yes.

Others — including me — see risks.

The Governance Question

FedCM, by design, is a governance interface. It enforces policies and offers a consistent user experience.

The DCAPI, by contrast, deliberately avoids that role.

If the industry tries to use FedCM as a governance layer for the DCAPI, we could see:

A unified identity decision point inside the browser
A single interface covering federated login and verifiable credentials
A user experience that feels cohesive — but also more tightly controlled

And yet that tight coupling could create more complexity, not less.

Political Pressure and Regulatory Realities

[00:09:14] Identity on the web has never been just technical. Today, political pressure is especially high.

Key players have competing priorities:

Regulators want predictability and interoperability. Platforms want to avoid compliance burdens. Wallet vendors fear being boxed out if browsers centralize identity choices. Standards bodies disagree on how much control browsers should have.

If these two APIs merge accidentally — or prematurely — we may end up with browsers arbitrating identity across both federated and verifiable credential workflows. That brings questions about authority, enforcement, and unintended regulatory assumptions.

Where Things Stand Now

[00:10:23] Two identity systems — federated login and verifiable credentials — are evolving in parallel. Both depend on the browser more than ever before.

FedCM gives the browser active authority. The DCAPI gives the browser a minimal but powerful boundary.

These are not incompatible visions, but they reflect different philosophies about what browsers should be.

Meanwhile:

Only Chrome currently supports FedCM.
Standards across W3C, IETF, and other bodies are still in motion.
Regulators are drafting rules faster than developers can test them.
Users just want access to their newspaper or social media — not more complexity.

Will the Two APIs Converge?

[00:10:45] It is entirely possible that FedCM becomes the governance layer people want for managing credential flows. That could drive convergence.

But if convergence happens too soon, we risk entangling two problem spaces that need room to mature independently. Some experimentation and stabilization still need to happen.

So what should we watch?

Browser adoption — If more browsers implement FedCM, it becomes foundational.
Wallet-vendor responses — Their enthusiasm (or hesitation) will shape adoption.
Regulatory pressure — Especially where governments demand uniform behavior.
Standards decisions — Particularly around how much authority resides in the browser stack.

The line between an API that coordinates identity and one that governs it is very thin — thinner than many admit.

Final Thoughts

[00:12:14] The identity layer of the web is in a formative moment. Decisions about FedCM, the DCAPI, and browser roles are happening right now. The cement is still wet — but not for much longer.

[00:12:38] That’s it for today’s episode. As always, stay curious about where the web is heading… because it’s definitely going somewhere.

[00:12:53] If this episode helped clarify things, share it with a colleague and connect with me on LinkedIn at @hlflanagan. And for the full written post, visit sphericalcowconsulting.com.

Stay curious. Stay engaged.

The post Two APIs Walk Into a Browser: FedCM vs. the DC API appeared first on Spherical Cow Consulting.


Metadium

[Notice] Metadium Explorer DB Maintenance

To ensure better service stability, we will be performing database maintenance as scheduled below. 📅 Schedule December 17, 2025 (Wed) 10:30–12:30 (KST) The schedule is subject to change depending on the progress. 🔧 Details DB Maintenance ⚠️ Note Metadium Explorer will be temporarily unavailable during the maintenance period. Thank you. The Metadium Team.

To ensure better service stability, we will be performing database maintenance as scheduled below.

📅 Schedule

December 17, 2025 (Wed) 10:30–12:30 (KST)
The schedule is subject to change depending on progress.

🔧 Details

DB Maintenance

⚠️ Note

Metadium Explorer will be temporarily unavailable during the maintenance period.

Thank you. The Metadium Team.

[Notice] Metadium Explorer DB Maintenance

Hello, this is the Metadium team. To provide a more stable service, database (DB) maintenance will be carried out as scheduled below.

📅 Schedule

December 17, 2025 (Wed) 10:30–12:30 (KST) (approx. 2 hours)
The schedule may change depending on how the work progresses.

🔧 Details

DB maintenance

⚠️ Note

The Explorer service will be temporarily unavailable during the maintenance period.

Please keep this in mind when using the service.

Thank you.

[Notice] Metadium Explorer DB Maintenance was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

FAPI for Developers: Here Is Your Guide

A new ebook is available to explain why FAPI is essential for regulated industries, and how to move beyond OAuth bearer tokens for top identity security.

BlueSky

Find Your Friends on Bluesky

We're introducing Find Friends — a contact import feature that makes it easy to find people you know on Bluesky without compromising your privacy.

Today, we're introducing Find Friends — a contact import feature that makes it easy to find people you know on Bluesky.

Social media started as a way to connect with people you actually know. Over time, that got lost in the noise of algorithms and engagement incentives. We're carrying those original values forward, but in a new way that protects your privacy and keeps you in control.

Contact import has always been the most effective way to find people you know on a social app, but it's also been poorly implemented or abused by platforms. Even with encryption, phone numbers have been leaked or brute-forced, sold to spammers, or used by platforms for dubious purposes. We weren't willing to accept that risk, so we developed a fundamentally more secure approach that protects your data.

How it works

If you choose to use Find Friends, you'll verify your phone number and upload your contacts. When someone in your contact book goes through the same process and Bluesky finds a match, we'll let both of you know. This can happen immediately, or later via notification if the match happens down the road.

Find Friends will initially be limited to mobile app users in the following countries: Australia, Brazil, Canada, France, Germany, Italy, Japan, the Netherlands, South Korea, Spain, Sweden, the United Kingdom, and the United States.

A note for early adopters

Matches might take time to appear if you're one of the first to use this feature. As more people opt in, you'll start seeing more connections.

Privacy-first by design

Here's what makes our approach different:

It only works if both people participate. You'll only be matched with someone if you both have each other in your contacts and you've both opted into Find Friends. If you never use this feature, you'll never be findable through it. Your coworker can't use it to look you up unless you've uploaded their number from your contacts.

You verify your number first. Before any matching happens, you prove that you own your phone number. This prevents bad actors from uploading random numbers to fish for information about who's on Bluesky.

Your contact data is protected even if something goes wrong. We store phone numbers as hashed pairs — your number combined with each contact's number — which makes the data exponentially harder to reverse-engineer. That encryption is also tied to a hardware security key stored separately from our database.

You can remove your data anytime. Changed your mind? You can delete your uploaded contacts and opt out entirely.
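
As a rough illustration of the hashed-pair storage described above (this is not Bluesky's actual implementation; their published RFC has the authoritative design), matching can be reduced to HMAC digests over directional pairs, so a mutual match only exists when both people have uploaded each other:

```typescript
import { createHmac } from "node:crypto";

// Rough illustration of the hashed-pair idea, not Bluesky's implementation:
// store an HMAC over the (uploader, contact) pair rather than a bare phone number,
// so a leaked table cannot be brute-forced over the phone-number space alone.
// Key handling and number normalization are assumptions for the sketch.
function hashContactPair(uploader: string, contact: string, key: Buffer): string {
  return createHmac("sha256", key).update(`${uploader}|${contact}`).digest("hex");
}

// A mutual match only exists when both directions are present, which mirrors the
// "only works if both people participate" property described above.
function isMutualMatch(digests: Set<string>, a: string, b: string, key: Buffer): boolean {
  return digests.has(hashContactPair(a, b, key)) && digests.has(hashContactPair(b, a, key));
}
```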

What about inviting friends who aren't on Bluesky yet?

When you invite a friend through Find Friends:

That invite won't come from Bluesky. It comes directly from you when you choose to send it in a text message.

What if you're already on Bluesky but got an invite anyway? That's because we don't store or track individual phone numbers, so we have no way to tell your friend you're already here. Think of it as a friend reaching out directly — they don't know you've already joined the party.

There's no "opt out" for receiving invites because they're sent directly via text message outside the Bluesky app. These are personal text messages between friends, not automated messages from Bluesky, so we don't have a way to block them and we have no way to send follow up messages.

We published a detailed technical breakdown of this system as an RFC before building it — you can read the full design here. We wanted to get it right, so we put it out for the security community to be able to verify our approach. For details about the data we collect and process, see the Privacy Policy we created for this feature. Users who opt in to this feature agree to the terms of this policy.

Social media is better with friends. We hope this makes it easier to find yours on Bluesky.


FastID

Smarter Data Migration: Move Less, Save More with Fastly

Move only the active data you need with Fastly's On-Demand Migration for Object Storage. Cut expensive egress fees & simplify management with the new UI.

Monday, 15. December 2025

HYPR

5 Questions HR and Security Must Answer Before Implementing Workforce Identity Verification in 2026

Identity verification is quickly becoming a cornerstone of workforce security. What started as a targeted solution for stopping fake applicants or verifying new hires has expanded into something much larger: organizations now recognize that everyone in the workforce, from interviewees to long-tenured employees to contractors and offshore administrators, presents identity risk…

Identity verification is quickly becoming a cornerstone of workforce security. What started as a targeted solution for stopping fake applicants or verifying new hires has expanded into something much larger: organizations now recognize that everyone in the workforce, from interviewees to long-tenured employees to contractors and offshore administrators, presents identity risk.

Most organizations have identified one area where they need to implement stronger identity verification controls. And here’s the part most teams don’t anticipate: once you introduce identity verification in one workflow, you uncover the need to implement it across the entire workforce.

Verification isn’t an isolated step. It’s connected to account provisioning, access controls, device activation, HR systems, policy enforcement, legal compliance, and continuous trust.

This is where many organizations experience scope creep. A “simple” project to solve for interviewing or onboarding suddenly becomes a full-scale, cross-functional initiative involving HR, IT, Security, Legal, Operations, and Compliance, often with far more complexity than expected.

As companies prepare for a high-volume 2026 hiring season and heightened scrutiny around identity assurance, now is the time to align on the foundational questions that determine whether workforce identity verification succeeds or spirals.
Below are the five questions HR, Security, and Operations teams must answer before implementing identity verification anywhere in the workforce, and why each decision impacts the rest of the organization.

1. What policies must be updated before workforce identity verification can go live?

Identity verification isn’t plug-and-play. Once you deploy it for one part of the employee lifecycle, every related policy should be reviewed and potentially updated to ensure consistency and legal defensibility. This applies not just to hiring but also to account recovery, access elevation, contractor onboarding, hardware issuance, and more.

Teams must determine:

How identity verification is defined across the organization
When it is required (interview, onboarding, Day-0 access, role changes, periodic re-verification)
What employment, vendor, or contracting agreements must reflect the new requirement
What contingencies must be added to offer letters and access-granting workflows
How policies differ for employees vs. contractors vs. offshore workers
How policy aligns with compliance frameworks and regulatory expectations

This is where many organizations realize the true scope: workforce identity verification isn’t a “task” - it’s a policy transformation.

2. How will the organization handle refusal across interviews, onboarding, and the current workforce?

Teams often focus on refusal during interviews or Day-1, but forget that refusal can occur at any point in the employment lifecycle.

You need a refusal strategy for:

Applicants

New hires

Contractors

Existing employees

Remote workers

Privileged access roles (admins, IT, finance)

Without a defined protocol, HR, IT, and Security will make inconsistent decisions that introduce legal risk and operational friction.

Key decisions include:

Is refusal treated as voluntary withdrawal, policy violation, or something else?

How do timelines differ (e.g., refusal to complete verification before access provisioning)?

Who approves exceptions?

How do union, ADA, or local labor considerations impact refusal handling?

What happens when existing employees must re-verify during access escalations or annual audits?

This is often where “small” IDV rollouts become bigger projects: what was designed for onboarding suddenly must be mirrored for existing staff.

3. How will we ensure fairness, accessibility, and compliance across the entire workforce?

When identity verification touches the whole workforce, fairness and accessibility become non-negotiable.

This means designing a workflow that minimizes:

Document-scan bias (lighting, older IDs, skin tones)
False negatives that disproportionately affect certain demographics
Accessibility barriers for people with disabilities
Technical limitations for workers without certain devices (BYOD, contractor laptops, etc.)

It also requires alternative verification paths that maintain security without excluding legitimate workers.

This is one of the biggest sources of scope creep: ensuring ADA compliance, DEI alignment, alternative workflows, and legal defensibility across all job types (from frontline employees to senior engineers to offshore admins) is a major cross-functional effort.

4. What consent, disclosure, and data-handling language must exist - not just in paperwork, but in every product workflow?

Workforce identity verification isn’t just a legal issue - it’s a communication issue.

Employees and contractors want clarity about:

What data is being captured
Where it is stored
How long it is retained
Whether biometric templates are stored
Who has access to verification data
How verification results affect access decisions

Consent must appear not only in employee paperwork, but in the workflow itself, across every scenario:

Interviews
New-hire onboarding
Device activation
Account recovery
Privileged access elevation
Periodic workforce re-verification

This is another moment teams realize the project is bigger than expected, because consent must be consistent everywhere identity verification appears.

5. Where does identity verification sit in the workforce lifecycle, and how does it tie into access provisioning and ongoing trust?

Identity verification is only meaningful if its results impact access.

This means you must define:

At which lifecycle stages verification is required
How verification connects to device provisioning
How verified identity binds to credentials (passkeys, biometrics, tokens)
Whether failed verification blocks provisioning or requires escalation
How often verification must be repeated
How IT logs, risk engines, and IAM systems consume verification results

This is the heart of the scope creep problem:
Identity verification is not a moment. It’s a thread that runs through the entire workforce lifecycle and must integrate with every downstream system.
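
To make the integration point concrete, here is a minimal sketch of a provisioning gate that consumes a verification event. The field names, validity windows, and decision labels are illustrative assumptions, not HYPR’s product behavior.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VerificationResult:
    worker_id: str
    lifecycle_stage: str    # e.g. "onboarding", "access_elevation"
    passed: bool
    verified_at: datetime

# Hypothetical policy: how long a verification result stays valid per lifecycle stage.
REVERIFY_AFTER = {
    "onboarding": timedelta(days=365),
    "access_elevation": timedelta(days=30),
}

def provisioning_decision(result: VerificationResult) -> str:
    """Decide whether IAM provisioning may proceed, based on a verification event."""
    if not result.passed:
        return "block_and_escalate"        # route to the HR/Security exception process
    max_age = REVERIFY_AFTER.get(result.lifecycle_stage, timedelta(days=90))
    if datetime.utcnow() - result.verified_at > max_age:
        return "require_reverification"    # evidence is stale: repeat the check
    return "provision"                     # bind credentials (e.g. a passkey) and grant access

print(provisioning_decision(
    VerificationResult("w-123", "onboarding", True, datetime.utcnow())))
```

Even in a sketch this small, the verification result touches policy (validity windows), exceptions (escalation), and IAM (provisioning), which is exactly why the scope grows.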

Misalignment here is the biggest barrier to success.

The 2026 Workforce Reality: Identity Verification Must Scale Across the Organization

As hiring ramps up in early 2026 and regulators sharpen expectations around workforce identity assurance, organizations must accept that identity verification is not a single workflow or vendor feature. It is a company-wide operational model.

The businesses that succeed will be the ones that:

Treat identity verification as an enterprise initiative
Align HR, IT, Security, Legal, and Operations early
Build flexible verification paths for all worker types
Integrate verification directly into access provisioning
Use consistent consent and policies across the entire workforce
Plan for ongoing, repeatable verification - not just Day-1 checks

Identity verification is now core to workforce security, and the organizations that implement it holistically will enter 2026 with a measurable advantage in trust, compliance, and operational resilience.

For example: including the expectation of this process as a condition of starting, stated in the offer letter, may minimize risk and provide candidates with maximum disclosure of onboarding procedures.

Tip: one way to minimize the risk of bias is to ensure that automated identity verification processes do not give reviewers immediate, visible access to the driver’s license itself, but instead surface only the relevant information and the verification result.

Ready to modernize your identity verification process and safeguard your organization against AI-driven threats?

Subscribe to our updates to receive expert insights and learn how HYPR's multi-factor verification and digital identity solutions can protect your business and customers.

 


Dock

EUDI in Practice: Inside Germany’s Digital Identity Strategy [Video and Takeaways]

As the European Digital Identity Wallet (EUDI) moves from policy to implementation, many organizations are trying to understand what it will actually look like in practice, especially at the national level. In a recent webinar, we spoke with Mirko Mollik, Identity Architect at SPRIND (Germany’s Federal Agency for

As the European Digital Identity Wallet (EUDI) moves from policy to implementation, many organizations are trying to understand what it will actually look like in practice, especially at the national level.

In a recent webinar, we spoke with Mirko Mollik, Identity Architect at SPRIND (Germany’s Federal Agency for Breakthrough Innovation), to unpack how Germany is approaching the rollout of the EUDI ecosystem. The conversation went beyond high-level regulation and focused on the realities of implementation: how wallets will be certified, how issuers and verifiers will onboard, how privacy is enforced in practice, and which standards are truly mandatory.


auth0

An Accessible Guide to WCAG 3.3.9: Going for Gold

WCAG 3.3.8 sets minimum standards for accessible authentication. 3.3.9 enhances these by removing exceptions and automating cognitive tests for users.

Herond Browser

Herond Browser Rebrands: Unveiling Striking New Logo Design

Curious about the inspiration, design process, and what this new logo means to you? Let's break it down. The post Herond Browser Rebrands: Unveiling Striking New Logo Design appeared first on Herond Blog.

The Herond Browser has always stood for speed, security, and smarter browsing. Now, as part of our bold Herond rebranding initiative, we’re thrilled to introduce the new Herond logo, a Browser logo redesign that captures our fearless evolution and future-ready vision.

This isn’t just a visual refresh; it’s a symbol of our commitment to innovation. Sleek lines, dynamic energy, and intuitive symbolism reflect the seamless power of the Herond Browser rebrand, making it instantly recognizable in a crowded digital landscape.

Curious about the inspiration, design process, and what this new Herond logo means to you? Let’s break it down.

The Meaning Behind “Herond”: Heron + 3D

Herond emerges from “Heron” – the majestic migratory bird symbolizing freedom, wisdom, and unparalleled navigation – fused with three powerful Ds: Defend, Discover, Decentralize.

Defend: Safeguarding users, data, and privacy in an increasingly vulnerable digital world.
Discover: Unlocking boundless exploration of a vast, new internet frontier.
Decentralize: Empowering users in the Web3 era, putting you in control.

The name Herond embodies the heron’s free-spirited flight, while spotlighting our core Herond Browser mission as part of the Herond rebrand: protect, guide discovery, and pioneer a decentralized network where you hold the reins.

Decoding the Herond Browser Logo: A Visual Manifesto of Our Rebrand

The Herond Browser logo is a bold visual declaration of our brand philosophy – where every curve and color tells our story of freedom, protection, and Web3 innovation.

The Heron Silhouette

Styled as a dynamic heron in flight, it embodies grace, freedom, and the ability to soar beyond limits – the core spirit of a future-forward Web3 browser.

Protective Circle

Sweeping curves form a protective ring, symbolizing safety, integrity, and seamless connectivity. This is our promise: the Herond Browser always shields your data and privacy.

Fly Freely – Flight Path Inspiration

The fluid, twisting structure mimics unbound flight, representing limitless exploration and the open essence of Web3.

Dawn Colors – The Dawn

A gradient shifting from deep blue to purple and pink evokes sunrise – the dawn of a new era. Herond heralds a free, transparent, decentralized browsing experience.

Universe Map – Cosmic Colors

Layered color transitions paint a cosmic universe: vast, mysterious, infinite. This mirrors the Herond Browser as your gateway to the expansive Web3 world.

Hot Core – Core Energy

The vibrant pink-purple center represents a pulsing energy core – symbolizing raw power, innovation drive, and cutting-edge tech.

The Core Message

Built on three pillars: Defend, Discover, Decentralize – the logo fuses Herond Browser’s brand meaning with the heron’s symbolism: protect, guide, and empower.

From the graceful heron silhouette to the cosmic dawn gradients, every element of the new Herond Browser logo embodies our unbreakable commitment: Defend your privacy, Discover infinite possibilities, and Decentralize power back to you.

This isn’t just a Herond rebrand. It’s your invitation to a freer, safer, smarter web. The future of browsing has landed.

Ready to experience it? Download the Herond Browser today and join the flight.

The post Herond Browser Rebrands: Unveiling Striking New Logo Design appeared first on Herond Blog.


Thales Group

Thales revolutionises the underwater battlespace with new Sonar 76Nano


15 Dec 2025
Sonar 76Nano is a compact, advanced acoustic detection system aimed at enhancing maritime security for the UK, NATO, and allies, closely aligned with key UK defence priorities.
Developed from concept to prototype in just ten months, it features modular deployment, Artificial Intelligence (AI)-driven detection, and the potential for seamless integration across naval platforms, including naval drones.
Sonar 76Nano will not only detect submarines, it will be able to map the seabed, conduct threat acoustic and operational data collection, and even send messages underwater with a low risk of being detected.
Sonar 76Nano strengthens UK industrial capability, supports potential exports and jobs, and underscores Thales’ ongoing commitment to innovation and national security.

Uncrewed underwater vessel with 76Nano Sonar ©Thales

Thales introduces the prototype of Sonar 76Nano, a revolutionary miniaturised acoustic detection system intended to redefine maritime security for the UK, NATO, and their allies. Building on the world-class legacy of the renowned Sonar 2076, the Sonar 76Nano directly supports the UK Government’s Strategic Defence Review and Defence Industrial Strategy ambitions to strengthen national security and UK industrial capability.

Innovation that keeps our warfighters ahead of emerging threats

From concept to real-world prototype in just 10 months, Sonar 76Nano sets a bold new standard in agility, innovation, and operational excellence. Thanks to its modular and flexible design, this sonar system can be deployed onboard a wider range of uncrewed underwater vehicles (UUV) and seabed monitoring systems rather than being limited to large high value platforms. It empowers greater flexibility and responsiveness across naval operations through a hybrid fleet - an approach aligned to the Royal Navy’s Atlantic Bastion vision for integrated defence across the North Atlantic and beyond.

Key features and breakthroughs

Miniaturised, proven technology: leveraging trusted Sonar 2076 capabilities in a compact, versatile form factor.
Modular deployment: seamlessly integrated across the full spectrum of platforms — uncrewed systems to crewed systems.
AI-enhanced acoustic detection: artificial intelligence accelerates target identification and decision-making with unprecedented precision.
Digital native integration: fully compatible with existing defence infrastructure, enhancing interoperability across UK and NATO forces.
Rapid development cycle.

This fast-paced innovation cycle underscores Thales’s commitment to fostering agility, cutting-edge research, and secure supply chains - directly supporting UK high-value jobs and driving technological exports.

Public debut with the Royal Navy

On 17th December, Sonar 76Nano will make its official public debut with the Royal Navy at a technology demonstrator, marking a pivotal moment for maritime defence. This will allow naval personnel and experts to witness the prototype’s capabilities first-hand and engage with Thales’ leading engineers and scientists.

Sonar 76Nano shows how strategic government defence priorities can drive tangible innovation that strengthens UK maritime dominance and industrial capability.

“Sonar 76Nano is a landmark innovation and a vivid demonstration of what focused ingenuity and collaboration can achieve with a tight deadline. In just ten months, our world-class teams have progressed from visionary concepts to an advanced prototype that sets the stage for the next generation of underwater security and agility. As we present this innovation to the Royal Navy, we reaffirm our commitment to keeping the UK at the forefront of global maritime security.” Paul Armstrong, Managing Director Under Water Systems, Thales in the UK.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Thales in the UK

Thales in the UK is a team of over 7,000 experts, including 4,500 highly skilled engineers, located across 16 sites.

Each year Thales invests over £575 million into its UK supply chain, working with over 2,000 companies. They are dedicated to research and technology, working with partners to invest over £130 million in R&D in the UK annually. With a heritage of over 130 years, Thales in the UK understands the importance of developing skills for the future, which is why they have over 400 apprentices and graduates across the UK. Thales is committed to supporting its people and continuously developing talent and highly skilled experts.

Media Relations Contacts

Thales, Media relations - pressroom@thalesgroup.com

Recent images of Thales and its Defence, Aerospace and Cyber & Digital activities can be found on the Thales Media Library. For any specific requests, please contact the Media Relations team.


Thales awards SFO Technologies RBE2 radar wired structures contract for Rafale under Make in India


15 Dec 2025
First major order for high-value, technologically advanced complex wired structures — designed to withstand harsh environmental constraints — to be produced in India for the Dassault Aviation Rafale programme.
This strengthens Thales’ long-term partnership with SFO Technologies and enhances India’s indigenous defence manufacturing capabilities.
It supports India’s strategic localisation goals, expanding expertise from precision machining and wiring to complex systems integration.

Thales’ SEVP Operations & Performance Philippe Knoche with Thales’ VP Global Procurement for Engineering and for India & AMEWA Deepak Talwar with SFO Technologies’ Chairman & Managing Director N Jehangir and Sr. Corporate VP, Strategic Business Development UM Shafi at the signing ceremony in Bengaluru ©Thales

15 December 2025, Bengaluru, India: Thales, in partnership with SFO Technologies, has taken a significant step forward in supporting India’s strategic vision for self-reliance in defence manufacturing. The latest contract, awarded for the production of high-value, technically advanced complex wired structures of the RBE2 AESA Radar of the Indian Rafale, reinforces SFO Technologies’ long-standing expertise and enduring partnership with Thales across multiple major programmes.

This first order marks an important milestone in Thales’ Make in India strategy for the localisation of advanced radar systems, which is expected to boost local manufacturing capabilities for critical Rafale sub-systems supplied to the Indian Armed Forces. ​ Following the order of 26 Rafale aircraft for the Indian Navy, Thales, as a proud Dassault Aviation Rafale team member, continues to execute its ambitious localisation roadmap, partnering with the aeronautics and defence ecosystem in India. The scope of expertise delivered through this partnership ranges from precision machining and assembly/wiring to electronics, microelectronics, and complex system integration.

“This partnership with SFO Technologies reflects our steadfast commitment to the Make in India initiative. Through decades of strong local collaborations, we have consistently invested in building indigenous capabilities and fostering world-class expertise within the Indian ecosystem. SFO Technologies has demonstrated exceptional innovation and reliability in every project we undertake together. We are delighted to continue reinforcing our partnership, setting new benchmarks for quality and operational excellence in support of India’s self-reliance ambitions.” Philippe Knoche, SEVP Operations and Performance, Thales.
“We are honoured by Thales’ continued trust in SFO Technologies, and proud to contribute towards deploying new expertise in the Indian ecosystem, while actively taking part in the equipment production for the Rafale India. Quality and punctuality will be our priorities to satisfy our customers, as usual.” N. Jehangir, Chairman & Managing Director, SFO Technologies.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Thales in India

Present in India since 1953, Thales is headquartered in Noida and has other operational offices and sites spread across Delhi, Gurugram, Bengaluru and Mumbai, among others. Over 2300 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India’s growth story by sharing its technologies and expertise in Defence, Aerospace and Cyber & Digital sectors. Thales has two engineering competence centres (ECCs) in India - one in Noida focused on Cyber & Digital business, while the one in Bengaluru focuses on hardware, software and systems engineering capabilities for both the civil and defence sectors, serving global needs. Thales has also established an MRO (Maintenance, Repair & Overhaul) facility in Gurugram to provide comprehensive avionics maintenance and repair services to Indian airlines.

About SFO Technologies

SFO, a NeST Group company, is a 35-year-old high-tech, end-to-end solution provider, headquartered in Kochi, India. With 22 factories across the globe and over 8,000 employees, SFO offers hardware design, software development, and vertically integrated manufacturing of mission-critical and life-critical equipment for the defense, aerospace, space, healthcare, industrial, and transportation sectors.

PRESS CONTACTS

Thales, Media relations

pressroom@thalesgroup.com

Thales, Communications in India

Pawandeep Kaur, Communications Director, Thales in India

pawandeep.kaur@thalesgroup.com

SFO Technologies Media relations

SFO Technologies Pvt Ltd

Thomas Abrahm

thomas.abraham@nestgroup.net


Herond Browser

Herond Browser Rebrands: A Bold New Look for the Web3 Era

Herond Browser is excited to introduce our bold new rebrand, designed to better represent who we've become and where we're heading. The post Herond Browser Rebrands: A Bold New Look for the Web3 Era appeared first on Herond Blog.

Change is constant in the digital world, and Herond Browser is evolving with the rebrand. Today, we’re excited to introduce our bold new brand identity, designed to better represent who we’ve become and where we’re heading. While our mission to protect your privacy and empower your Web3 journey remains unchanged, our visual identity now fully captures that vision.

What’s New Bold New Logo

Our redesigned logo represents speed, security, and forward momentum. Modern and dynamic, it captures Herond’s commitment to innovation and leadership in the Web3 space. The refined design is instantly recognizable and built to stand the test of time.

Refreshed Color Palette

Vibrant yet sophisticated, our new colors inspire confidence and innovation. The updated palette balances bold energy with professional elegance, creating visual impact while maintaining accessibility and readability across all platforms and use cases.

Modern Typography

Clean, powerful, and highly readable, our new typeface family is designed for clarity at every touchpoint. From desktop to mobile, typography ensures effortless reading while projecting the strength and reliability Herond represents.

Evolved Design Language

Simplified, purposeful, and beautiful, our refined design system enhances every interaction. By reducing visual clutter and focusing on what matters, we’ve created a better experience for all users.

What Stays the Same Fast, Secure Browsing Experience

The performance you depend on remains unchanged. Herond continues to deliver lightning-fast page loads and efficient resource management, ensuring your browsing experience is always smooth and responsive.

Industry-Leading Privacy Protection with Herond Shield

Your privacy is still our top priority. Herond Shield continues to block ads, prevent tracking, stop fingerprinting, and protect against phishing, all automatically and completely. We don’t collect your data, period.

Built-in Web3 Wallet

Seamless Web3 access stays right where you need it. Our integrated, non-custodial wallet continues to support multiple chains and instant dApp connections without any extensions required. Your keys, your control.

Zero Tracking, Zero Compromises

Our core values remain unchanged. No behavioral tracking, no data collection, no compromises. We build for your freedom, not for profit from your data, and that will never change.

What This Means for You

Your browsing experience remains exactly the same – same speed, same security, same privacy protection. All your bookmarks, passwords, and settings stay intact. No action required on your part.

The only difference in this rebrand is a refreshed, modern interface that’s more polished and intuitive. You’ll enjoy cleaner visuals and a more cohesive design while experiencing the same trusted performance and protection you rely on every day.

Simply continue browsing as usual. Your Herond browser will update automatically, preserving everything you’ve personalized while giving you a premium new look.

Experience the New Herond

Visit herond.org to explore our rebrand and see why millions choose Herond for private, Web3-ready browsing. Discover the complete transformation, from our new logo to our redesigned website, and experience what sets Herond apart.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic: https://community.herond.org

The post Herond Browser Rebrands: A Bold New Look for the Web3 Era appeared first on Herond Blog.


FastID

An Indispensable Pillar of Resilience - The Human

The human operator is the ultimate pillar of resilience. Learn how Fastly empowers engineers to handle novel failures, drive systemic learning, and achieve antifragility.

Sunday, 14. December 2025

Dock

Outlook for 2026: Where Identity Is Headed Next

In our Year in Review, we closed with a section that sparked the most conversations: our outlook for 2026.  The signals emerging across governments, enterprises, telecoms and AI ecosystems all point in the same direction. 2026 is shaping up to be a defining year for digital identity. In

In our Year in Review, we closed with a section that sparked the most conversations: our outlook for 2026. 

The signals emerging across governments, enterprises, telecoms and AI ecosystems all point in the same direction. 2026 is shaping up to be a defining year for digital identity.

In this article, we will go through the trends we are seeing across the industry. These are the shifts that are starting quietly now but will shape how identity, trust and digital interactions evolve over the next 12 months.

Friday, 12. December 2025

TÜRKKEP A.Ş.

Datassist’s Payroll Platform Meets TÜRKKEP’s KEP Infrastructure


MyDEX

Why the words we use matter

Winston Churchill once said that the US and UK were two nations divided by a common language. In our work we’re finding something similar, with people using the same words to mean entirely different things, thereby talking at crossed purposes. It’s really hard to have a meaningful conversation when this happens. One example is the word ‘monetisation’. Local authorities used the word ‘monetis

Winston Churchill once said that the US and UK were two nations divided by a common language. In our work we’re finding something similar, with people using the same words to mean entirely different things, thereby talking at crossed purposes. It’s really hard to have a meaningful conversation when this happens.

One example is the word ‘monetisation’. Local authorities used the word ‘monetisation’ to mean cost savings they can achieve through changes to how they operate. But in the private sector ‘monetisation’ is understood in terms of generating income — making a profit by selling something. Same word, very different meanings and implications.

In our work, we face similar issues with words like ‘data’, ‘personal data’, ‘economic’ and ‘infrastructure’.

Take ‘data’ for example. When they see the word ‘data’ many people associate it instantly with the ‘big data’ that is used for ‘analytics’. This completely misses the countless different ways in which data is used practically and operationally to get day-to-day stuff done. Which means they are instantly talking about only a small part of the picture.

Likewise with ‘personal data’, which is often seen in terms of the details you might see listed in an organisation’s ‘My Account’ facility: things like full name, date of birth, contact details and so on. Whereas, from our perspective, personal data relates to any piece of information that relates to an identified (or identifiable) individual. When seen in the round, this personal data forms a complete data picture of your life. Examples include all your financial transactions, everything relating to your health, your education and career, your home, your possessions, your travel, your interests, and so on — from birth to death.

Critically, this includes a person’s interaction with the world around them, including service providers across all sectors and our own personal networks. This sort of personal data therefore covers countless different use cases, across every aspect of people’s lives, including their relationships with others, for example people they care for or people who care for them.

Ditto with the word ‘economic’, which many people equate with spending money or making money. This means that if the costs of actually doing something in terms of time, effort, energy or materials are reduced but no money changes hands as a result, they don’t see this as being ‘economically’ important. (The way the Government measures ‘productivity’ is entirely in terms of money — measuring the money costs of an activity’s ‘inputs’ and the money price of resulting ‘outputs’. This creates enormous problems if this data is not available … which it isn’t, most of the time.)

The same is true of ‘infrastructure’. When Mydex talks about ‘infrastructure’ we mean something that helps people physically access key resources. For example, the electricity national grid, which brings electricity safely from where it is generated to the point where you flick a switch, is ‘infrastructure’, as are our water and sewage systems, road and rail networks, the World Wide Web, the Internet and so on. But others use the term much more vaguely.

For example, one recent Government publication talked of ‘data sharing infrastructure’ in terms of governance, legal considerations, regulations, data standards and security — a description which ignores the issue of people actually being able to access the data concerned. As if that’s somehow not relevant.

Sometimes, the word ‘infrastructure’ is used to mean ‘all the other things that need to happen, apart from the main thing for the main thing to be successful’. In the case of a major IT project for example, educating and training people about this IT might be regarded as ‘infrastructure’, even PR to help the public understand it. In this way, actually enabling physical access to key resources disappears out of the window and ‘infrastructure’ comes to mean the scaffolding that goes around a project rather than the project itself.

Why do we care about these definitional issues? Because when we try to explain what Mydex is doing, it often goes straight over people’s heads.

For example, we might say something like “The Government has an opportunity to introduce personal data logistics infrastructure that could unleash a productivity breakthrough that would transform the economy.”

What we mean by this is the practical, physical, operational ability for people and organisations to obtain, hold, share and use any of the data they need (about money, health, education, skills etc), when they need it, to dramatically improve how services are produced and delivered.

But what people hear is that “The Government has an opportunity to move a few details about people such as dates of birth and contact details from one organisation to another by tweaking a few rules about governance and data security.” And then they think “What on earth is the point of doing this? How is anybody going to make any money out of that? These people must be crazy!”

Yes, we are talking about the opportunity to transform the economy — to radically improve how every service that uses personal data goes about doing what it does, to produce much better outcomes at much lower cost. But for people using the words ‘personal data’, ‘economic’ and ‘infrastructure’ as described above, it’s very difficult for them to see this.

Everyone uses thousands of words every day without ever stopping to think how they work as assumption icebergs with the visible word at the top representing a massive iceberg underneath it — that iceberg being the mental model that the word represents.

Nine times out of ten we don’t need to worry about this because everyone is using the same underlying mental model. We instantly know what each other means. But when the underlying mental model itself needs to change, the particular meanings we ascribe to these words become very important.

With ‘personal data’, ‘the economy’ and ‘infrastructure’ we are in such a situation right now.

Why the words we use matter was originally published in Mydex on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What Is Human-in-the-Loop AI and Why It Matters for Identity

Learn how Human-in-the-Loop AI works and why it's essential for secure, trustworthy identity systems that balance automation with human judgment.

As AI and machine learning accelerate across the enterprise, automation promises to make decisions faster, workflows smarter, and systems more autonomous. But full autonomy comes with a cost. When machines operate without context, oversight, or human input, they risk producing outcomes that fall outside of policy, introducing bias, or triggering errors that are hard to detect or correct.

 

Human-in-the-Loop (HITL) offers a crucial alternative. Rather than removing humans from the equation, HITL systems keep people embedded in the decision loop — at key points of training, validation, or execution. This hybrid model doesn’t just improve performance. It enhances accountability, reduces bias, and reinforces trust — especially in identity systems where security, fairness, and transparency are paramount.

 

For identity and access management (IAM), HITL helps answer a growing question: How do we use AI to make smarter decisions without giving up control? The answer lies in balancing automation with human judgment, and identity is the anchor that makes that possible.
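
As a rough sketch of how that balance can look in practice for a single access decision, consider the gate below. The thresholds, field names, and review queue are illustrative assumptions, not Ping’s implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    resource: str
    risk_score: float        # 0.0 (benign) to 1.0 (high risk), from an AI/ML risk engine
    model_confidence: float  # how sure the model is about its own assessment

review_queue: list[AccessRequest] = []   # hypothetical queue worked by human reviewers

def decide(request: AccessRequest) -> str:
    """Automate the clear cases; keep a human in the loop for risky or uncertain ones."""
    if request.risk_score < 0.2 and request.model_confidence > 0.9:
        return "allow"        # low risk, high confidence: fully automated
    if request.risk_score > 0.8:
        return "deny"         # clearly outside policy: automated block, logged for audit
    review_queue.append(request)          # ambiguous zone: escalate to a verified reviewer
    return "pending_human_review"

print(decide(AccessRequest("alice", "payroll-db", risk_score=0.55, model_confidence=0.6)))
```

The design choice is that humans are reserved for the ambiguous middle, where their judgment adds the most value, while every automated outcome remains explainable and auditable.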

 

Key Takeaways

 

HITL keeps humans in control of AI decisions by embedding oversight and feedback directly into the model lifecycle — from training data to policy enforcement.

In identity systems, HITL ensures that AI decisions remain explainable, auditable, and correctable, especially in high-risk areas like authentication, access control, and fraud detection.

HITL complements Zero Trust by supporting continuous verification and policy enforcement, even when AI is making split-second decisions at scale.

Tying HITL to identity — through verifiable roles, credentials, and policies — ensures that human input is secure, scoped, and governable.


Recognito Vision

Why Facial Scans Are Replacing Boarding Passes at Airports

Imagine arriving at the airport, juggling boarding passes, passports, and your phone, rushing through lines just to finally reach the gate. Now imagine a world where a simple facial scan replaces all that stress. You walk through security, check in, and board without ever touching a paper ticket or scanning a digital pass. This is...

Imagine arriving at the airport, juggling boarding passes, passports, and your phone, rushing through lines just to finally reach the gate. Now imagine a world where a simple facial scan replaces all that stress. You walk through security, check in, and board without ever touching a paper ticket or scanning a digital pass. This is not science fiction; it is happening today.

Airports around the world are increasingly adopting facial recognition boarding and digital boarding to replace traditional boarding passes. It is making travel faster, safer, and more convenient. Recognito provides a powerful Face Recognition SDK that helps airlines and airports implement biometric boarding solutions.

It enables biometric identity verification to enhance airport check-in, airport security, and the overall passenger experience. With our technology, travelers can enjoy truly contactless travel while airlines and airports streamline their operations.

The Old Boarding Experience Versus the New Face Scan Experience

Air travel has traditionally required multiple steps to verify a traveler’s identity. Showing paper or digital boarding passes at check-in, security, and boarding gates created long queues and stressed passengers. Today, facial scan technology is transforming these processes, allowing passengers to move seamlessly from check-in to boarding with one face scan.

Comparison Table: Old Way vs. New Face Scan Experience

Aspect | Traditional Boarding | Facial Scan Experience
Check-in | Present paper or digital boarding pass | Scan your face for instant check-in
Security | Manual ID and boarding pass check | Facial recognition verifies identity quickly
Boarding | Show boarding pass multiple times | One scan completes verification
Time | Longer waiting and lines | Reduced wait times, faster boarding
Contact | Physical handling of tickets and documents | Fully contactless travel

With facial recognition, a traveler’s face becomes their boarding pass, verified through biometric identity verification systems. Recognito makes this integration seamless for airlines and airports, delivering faster and more convenient airport check-in and security experiences.

How Facial Recognition Technology Works

Travelers utilize advanced biometric facial recognition systems for seamless check-in, identity verification, and security clearance at a modern international airport.

Facial recognition captures a live image of a person’s face and matches it against stored data to verify identity. In airports, this means passengers no longer need to present paper or digital boarding passes multiple times.

The process is fast and secure. Cameras at check-in, security, and boarding gates scan a traveler’s face and match it with stored biometric data. Advanced systems like Recognito’s Face Recognition SDK use sophisticated algorithms and FRVT pass technology to prevent fraud and ensure passenger safety. Many systems also include liveness detection to prevent spoofing attempts with photos or videos.

The detailed performance of facial recognition algorithms, including 1:1 verification tests, demonstrates how solutions like Recognito meet high standards, as published by NIST FRVT 1:1 Verification testing results.
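
In simplified terms, the 1:1 verification step works roughly like the sketch below. The embedding function, liveness stub, and threshold value are stand-ins for whatever the deployed SDK actually provides, not Recognito’s API.

```python
import numpy as np

MATCH_THRESHOLD = 0.6   # illustrative similarity threshold; real deployments tune this

def liveness_check(live_image) -> bool:
    # Stand-in for presentation-attack detection (rejecting photo or video spoofs).
    return True

def extract_embedding(image) -> np.ndarray:
    # Stand-in for a face-embedding model; returns a unit-length 128-dimensional vector.
    vec = np.asarray(image, dtype=float).ravel()[:128]
    vec = np.pad(vec, (0, 128 - vec.size))
    return vec / (np.linalg.norm(vec) or 1.0)

def verify_passenger(live_image, enrolled_embedding: np.ndarray) -> bool:
    """1:1 verification: compare the live capture with the traveler's enrolled template."""
    if not liveness_check(live_image):
        return False
    similarity = float(np.dot(extract_embedding(live_image), enrolled_embedding))
    return similarity >= MATCH_THRESHOLD

# Example: a traveler whose live capture matches their enrolled template passes the gate.
enrolled = extract_embedding([[0.2, 0.8, 0.5]])
print(verify_passenger([[0.2, 0.8, 0.5]], enrolled))   # True
```

The same check runs at check-in, security, and boarding, which is why a single enrollment can replace repeated boarding-pass presentations.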

Real-World Implementations Around the Globe

Several airports and airlines are already using facial recognition boarding to enhance the traveler experience.

1. Digi Yatra in India

Passengers can use a single facial scan for check-in, airport security, and boarding, reducing waiting times and eliminating the need for a paper boarding pass.

2. United States (CBP & TVS)

At major U.S. airports, the Traveler Verification Service allows passengers to move from security to the gate without showing a traditional ticket using facial recognition boarding.

3. Star Alliance Biometric Platform in Europe

Passengers traveling between multiple European airports can use a shared airport biometrics system. One face scan is all that is needed for check-in, security, and boarding, enabling truly contactless travel.

4. Dubai and Abu Dhabi Smart Airports

Dubai International and Abu Dhabi Airports have implemented airport security facial recognition systems for millions of travelers, integrating digital boarding for a smooth, paperless experience.

A traveler at Singapore’s Changi Airport shared how facial recognition boarding allowed them to walk straight from check-in to the lounge, skipping long queues and saving time before departure. These innovations align with global initiatives such as the IATA One ID initiative for seamless travel, which aims to make airport journeys smoother worldwide.

Benefits of Facial Recognition Boarding for Travelers

Facial recognition technology is transforming the airport experience, making travel faster, safer, and more convenient. Here are some key benefits for travelers:

Faster Journeys

Facial scans reduce waiting times at security and boarding gates. Automated biometric identity verification helps airlines board flights faster.

Stronger Security

Airport security facial recognition ensures accurate identification and reduces identity fraud risks. Unlike traditional boarding passes, biometrics cannot be easily forged.

Stress-Free Travel

Passengers no longer need to worry about losing a paper boarding pass or digital ticket. A facial scan is all it takes.

Contactless Experience

In a post-pandemic world, contactless travel is safer and more hygienic. Facial recognition allows passengers to pass through airports safely without physical contact.

Challenges and Considerations for Travelers

While the benefits of biometric boarding are clear, travelers should also be aware of some challenges.

Privacy Concerns

Some passengers worry about how their facial scan data is stored and whether it could be shared. Airports and providers like Recognito ensure data privacy and compliance with global standards. Travelers can read more about data protection regulations on the official GDPR website.

Optional Versus Mandatory

Many airports allow passengers to opt into facial recognition boarding, while others still provide traditional boarding passes as a fallback.

Technical Limitations

Lighting, face accessories, or system glitches can occasionally slow airport security facial recognition.

A traveler hesitant to use facial recognition opted in at a European airport. They found the experience faster and more convenient, building trust in the technology after reviewing the airport’s privacy policies.

How Recognito Helps Airports and Airlines

Airport staff assisting a passenger with the Recognito self-service check-in system.

We offer a powerful Face Recognition SDK to help airlines and airports implement facial recognition boarding, digital boarding, and other airport biometrics solutions seamlessly. Our services include:

End-to-end biometric identity verification integration
Compliance with airport security standards
Staff training and passenger onboarding for contactless travel

Contact Recognito today to see how your airport or airline can leverage facial scan technology to deliver a faster, safer, and more enjoyable travel experience. You can also explore technical demos on our official GitHub repository.

The Future of Air Travel With Facial Recognition Boarding

Facial recognition boarding is transforming air travel by replacing traditional boarding passes with secure, fast, and seamless biometric identity verification. Travelers enjoy quicker airport check-in, enhanced airport security, and stress-free contactless travel. While privacy and opt-in choices remain important, the benefits are undeniable.

With solutions like Recognito’s Face Recognition SDK, airports and airlines can provide travelers with a modern, user-friendly experience where the only ticket needed is the passenger’s face.

Frequently Asked Questions

1. Can facial scans really replace boarding passes at airports?

Yes, these systems enable facial recognition boarding, eliminating the need for traditional boarding passes.

2. Is it safe to use facial recognition for airport travel?

Yes, facial scan technology is secure, offering reliable airport security facial recognition and biometric identity verification.

3. Do all airports use facial scans instead of boarding passes?

No, adoption varies. Many airports now provide optional digital boarding and biometric boarding services.

4. What happens if facial recognition fails at the airport?

Travelers can show a traditional boarding pass or re-enroll in the facial recognition system.

5. Will I have to opt in for facial recognition travel?

Most airports require travelers to opt in for facial recognition boarding and airport biometrics programs.


FastID

4 Recommendations to Maximize Savings and Performance with Fastly’s CDN

Boost website speed and save on egress costs with Fastly's CDN. Learn to maximize cache efficiency, reduce origin traffic, and unlock 189% ROI.

Essential Checklist: Get Ready for Peak Traffic Season with Fastly

Stop holiday traffic outages! Learn best practices to optimize your website for peak performance, security, and sales this Black Friday season.

Thursday, 11. December 2025

LISNR

“Trust Me, Bro” is Not Proof of Presence

Retail Media’s Numbers Don’t Add Up Retail Media Networks are the fastest-growing segment in advertising, poised to rival TV spend  and reshape how brands influence shoppers at the digital and physical shelf. Alas, the confidence behind that spend hasn’t kept pace with its explosive growth, and the numbers underpinning that rise are beginning to collapse […] The post “Trust Me, Bro” is Not
Retail Media’s Numbers Don’t Add Up

Retail Media Networks are the fastest-growing segment in advertising, poised to rival TV spend  and reshape how brands influence shoppers at the digital and physical shelf. Alas, the confidence behind that spend hasn’t kept pace with its explosive growth, and the numbers underpinning that rise are beginning to collapse under scrutiny. Merchants and brand marketers are validating platform-reported ROAS against independent Marketing Mix Modeling (MMM), and the results are widening into a deeply uncomfortable gap.

According to NielsenIQ’s landmark report The Retail Media Mirage, some RMNs report ROAS as high as 14:1, while MMM-validated ROI falls closer to 0.4:1. It’s a 35× discrepancy that blows straight past “rounding error” into “structural credibility crisis.” This “variance” demands exploration for brands, merchants, and Retail Media Networks (RMN) alike.

Brands see numbers that feel too good to be true; RMNs insist the model is accurate. Neither can truly prove their case because both are built on probabilistic estimations from disjointed data pools that collapse the moment a shopper steps into a physical environment and measurement must map to real, physical behavior. 

This disconnect, between platform-reported metrics (what RMNs claim happened) and independently validated outcomes (what MMM shows actually happened), is what the industry has begun calling the Proof Gap.

The Proof Gap: When Attribution Collapses to Inference

We can trace this Proof Gap to one root cause: the ecosystem is still modeling physical-world behavior using probabilistic noise. GPS drifts, Wi-Fi misattributes, beacons overlap, and matchback reporting routinely mistakes coincidence for causality.

Anything not grounded in deterministic proof-of-presence is, by definition, a model; it’s an estimate, an interpolation, a derivative of weaker signals. Directionally useful, but not admissible as evidence.

This is the Proof Gap, and it isn’t just analyst skepticism. In a recent report, A Viable Framework for Maturing In-Store Media Measurement, the IAB identifies credibility gaps and inconsistent presence measurement as key barriers to RMN comparability and growth, emphasizing the need for verifiable in-store engagement.

Anatomy of a Failure: Why In-Store Attribution Breaks Down

The moment a shopper walks into the store, the measurement stack fractures. Every step in the physical journey becomes an analytic guess because, let’s be honest, what the industry calls “impressions” are really just exposures. In-store environments introduce failure modes that probabilistic models simply can’t overcome:

The In-Store Black Hole: Proof of causality disappears into a black hole of derivative data the moment a shopper enters the physical store. Conversion events may happen, but verification is impossible because the signal chain is unreliable.
The “Spray-and-Pray” Radio Frequency Problem: RF-based technologies (BLE, Wi-Fi) leak, overlap, spoof, or bounce, making “right here” indistinguishable from “nearby,” resulting in inflated visit counts, mislabeled exposures, and inaccurate matchbacks.
The Timing Fallacy: Most attribution systems infer causality simply because exposure and purchase occurred in the same general timeframe. But correlation does not prove purchase influence.
The Digital Signage Blind Spot: A screen playing an ad is not proof that a human saw it, engaged with it, or was even nearby, yet most RMNs still count every loop rotation as an “exposure.” Without a verified connection between the shopper and the screen, signage attribution is impossible.

All four failure modes reflect the same breakdown the IAB highlights in its measurement framework: the absence of reliable proof of presence and the inability to properly pair exposure with verified shopper proximity. When presence can only be modeled but not verified, the consequences ripple across the entire retail media ecosystem. But in-store doesn’t have to be a black hole where billions in influence go unproven.

Closing the Gap: Deterministic Proof of Presence

The only path out of the attribution crisis is moving from probabilistic guesswork to verifiable reality. Every other solution the industry has experimented with tries to triangulate presence indirectly. But as the stakes have risen and scrutiny has intensified, the limits of those derivative datapoints have become impossible to ignore. 

Deterministic proof of presence replaces inference with certainty: a real-time, observable moment that connects

 a person → a place → a device → an exposure → an action

This is the chain of custody that modern attribution has been missing, and without it, no amount of data science can compensate. The shift is from inferred or matched, modelled signals, where the effort goes into reducing noise, to a verified signal: a deterministic, verifiable presence event.
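
One way to picture that chain of custody is as a single verified record linking every hop. The sketch below uses invented field names and a stand-in token check; it is not the actual Radius API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PresenceEvent:
    """One deterministic link: person -> place -> device -> exposure -> action."""
    person_id: str            # pseudonymous shopper or loyalty identity
    place_id: str             # e.g. "store-42/aisle-4"
    device_id: str            # the device that received the ultrasonic token
    exposure_id: str          # the ad or signage loop that was playing
    action: Optional[str]     # e.g. "purchase:SKU-123", filled in when it happens
    verified_at: datetime

def verify_ultrasonic_token(token: str, place_id: str) -> bool:
    # Stand-in for validating a short-lived, location-bound tone payload.
    return token.startswith(place_id)

def record_presence(token: str, person_id: str, device_id: str,
                    place_id: str, exposure_id: str) -> Optional[PresenceEvent]:
    """Emit an attribution record only when presence is verified, never when it is inferred."""
    if not verify_ultrasonic_token(token, place_id):
        return None   # no deterministic proof: the exposure is not counted
    return PresenceEvent(person_id, place_id, device_id, exposure_id,
                         action=None, verified_at=datetime.utcnow())

event = record_presence("store-42/aisle-4:token-9f3", "shopper-777",
                        "device-abc", "store-42/aisle-4", "exposure-001")
print(event is not None)   # True: exposure, presence, and any later purchase can be linked
```

The point of the structure is that nothing enters the attribution record unless presence was verified at the moment of exposure.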

This is where LISNR Radius becomes foundational. Radius provides the one thing the ecosystem has never had: a deterministic proof of presence signal that works inside the physical environments where attribution historically collapses. It verifies, in real time, that a specific device and therefore a specific user is physically present at a specific location.

The implications of this provable, auditable, and repeatable chain shift are far-reaching:

For brands, it transforms in-store attribution from “modeled influence” to measurable lift, finally closing the loop between exposure and outcome with defensible accuracy.
For merchants, it connects digital identity to physical behavior, enabling loyalty, personalization, and media monetization to operate with confidence instead of approximation.
For consumers (and increasingly, their AI agents), it creates a seamless world where experiences trigger automatically because the system knows, unambiguously, that they are there.

Because you’ve closed the Proof Gap with Radius-validated proof of presence, you can finally stop debating dashboards and start defending outcomes.

Anatomy of Perfect Attribution: When Presence Is Proven, Value Is Proven

Real-time deterministic proof of presence unlocks what every stakeholder has wanted for years: attribution with no drift, no overlap, no inference, and no stitched-together reporting.

A customer enters Aisle 4.
Radius verifies presence through an ultrasonic signal that cannot be spoofed and does not drift.
The RMN or digital signage triggers a relevant promotion.
The shopper makes a purchase.
The system connects exposure → presence → purchase with 100% confidence.

When attribution is grounded in deterministic proof of presence, everything becomes measurable, defensible, and monetizable without collapsing under cross-examination. Brands can tie verified exposure to real incremental lift, merchants can link identity to action across the full physical journey, and RMNs can confidently justify CPMs, renewals, and premium pricing. 

Deterministic proof of presence replaces the vague notion of “exposure” with verifiable fact, allowing brands to tie verified exposure to real incremental lift and calculate precise ROAS for physical campaigns and to move beyond probabilistic models for justification and optimization. For merchants and retailers it enables the linking of identity to action across the full physical journey from ad exposure to in-store movement and purchase, unlocking new personalization opportunities, improving store layout, and driving higher loyalty and basket sizes. Finally,  undeniable proof of presence allows RMNs to confidently justify CPMs, renewals, and premium pricing. It validates their inventory, transforming them into a critical, high-ROI media investment attracting sophisticated advertisers.

Instead of debating credibility, stakeholders align around validated outcomes, enabling growth, transparency, and long-term confidence in the retail media model.

The Era of “Trust Me” Metrics is Ending

Deterministic proof of presence is the new standard the entire ecosystem must adopt in order to restore trust, justify media investment, and unlock the next generation of omnichannel commerce. This is the difference between yet another measurement method claiming performance and proven performance, and only the latter will survive the scrutiny now reshaping the industry.

The IAB’s 2025 measurement framework makes clear that verified presence will become a non-negotiable standard for in-store media. The question for RMNs, brands, and merchants is simple: who will adapt fast enough to lead it?

This article is the second entry in a five-part exploration of how presence verification is becoming the foundational signal of modern commerce. Read the introductory article Proof of Presence: The Most Valuable Signal in Commerce discussing presence verification and its role in maximizing the value of retail media, loyalty, and transactional systems.

The post “Trust Me, Bro” is Not Proof of Presence appeared first on LISNR.


Indicio

Game over, traditional identity authentication — game on, Verifiable Credentials

The post Game over, traditional identity authentication — game on, Verifiable Credentials appeared first on Indicio.
2025 saw the rise of AI-generated digital fraud, from deepfakes to documents. But this is only half the story: Behind ever-more realistic biometric and document fakes, AI has turned fraud into a continuous, dynamic process, says a new report, one where malicious AI agents are learning to pass as human.

By Trevor Butterworth

Last call, writing on the wall, game over. That’s 2025’s parting message to legacy identity verification: AI has made fraud a continuous, dynamic, adaptive, and automated process.

“Fraud has entered a biological rhythm—attempt, analyze, adjust, repeat—where deception behaves less like code and more like an adaptive organism,” says a new report by AU10TIX, a Dutch company with over 30 years of experience in identity intelligence.

This transformation has been driven by access to AI tools that can easily create realistic biometrics — from faces to voices — and documents. These, says AU10TIX, can be plugged into “agentic AI, or self-directed fraud engines, capable of autonomously creating and adapting synthetic identities.”

The result is the growth of “Fraud-as-a-Service (FaaS), a global market where identity forgery, credential spoofing, and document manipulation are packaged and sold on demand.”

AI agents are learning to pass as humans

You may be old enough to remember the 1993 New Yorker cartoon by Peter Steiner of a dog sitting in front of a desktop computer explaining to another dog that “On the Internet, nobody knows you’re a dog.”

The 2026 version (with apologies to Peter Steiner) is this:

Good AI agents are going to need a way to identify fake people. Real people are going to need a way to identify fake AI agents. Everyone is going to need a way to verify real documents. 

With the global, annual bill for digital fraud estimated at $534 billion (Infosecurity Magazine), the prospect of it increasing further and eroding trust in our entire digital (and physical) economy calls for more than cybersecurity band-aids.

How to stop the world being engulfed by fake people, fake documents

Keep doing identity authentication the same way, but use more sophisticated signals intelligence to identify the precursor anomalies to dynamic fraud. This is the approach taken by AU10TIX and many other companies. Essentially: let’s use good AI to defeat bad AI.

Stop doing identity authentication the same way and shift to decentralized identity and Verifiable Credentials for identity and document authentication. This is what Indicio does, and here are five ways it addresses the problem.

1. How Verifiable Credentials mitigate injection spoofing

Injection spoofing involves fooling identity authentication workflows by “injecting” altered or fake biometrics, documents, or deepfake video.

One of the key features of a Verifiable Credential is that you can cryptographically prove the origin of the credential in a way that can’t be spoofed by AI.

If you add an authenticated biometric to the credential (as is the case with Digital Travel Credentials containing authenticated passport biometrics either derived from a passport chip or issued directly by a government), you now have a way to authoritatively cross check any liveness or biometric check.

A real person presents their authenticated biometric when they present for a liveness check. The software automatically compares the tamper-proof biometric in the credential against the live image. 

The software can prove who issued this biometric credential in a way that can’t be spoofed by AI, stolen, or shared by the credential owner. 

And you don’t need to centrally store biometric or any personal data to perform the verification. Verifiable Credential technology radically simplifies authentication infrastructure while making compliance with data privacy simple.
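As a rough sketch of that verifier-side logic, the example below first checks the credential's proof and issuer and only then compares the embedded biometric to the liveness capture. The type, the helper functions, and the 0.9 match threshold are placeholder assumptions, not Indicio's actual SDK.

```typescript
// Hypothetical verifier-side flow; types, function bodies, and the 0.9
// threshold are placeholder assumptions, not Indicio's SDK.
interface BiometricCredential {
  issuerDid: string;
  faceTemplate: number[];   // authenticated biometric embedded in the credential
  signatureValid: boolean;  // outcome of verifying the credential's cryptographic proof
}

// Placeholder: a real implementation verifies the credential's signature
// against the issuer's published public key (for example, via a DID document).
function verifyCredential(raw: string): BiometricCredential {
  return JSON.parse(raw) as BiometricCredential;
}

// Placeholder similarity score; production systems use a dedicated face matcher.
function faceMatch(a: number[], b: number[]): number {
  const distance = Math.sqrt(a.reduce((sum, v, i) => sum + (v - (b[i] ?? 0)) ** 2, 0));
  return 1 / (1 + distance);
}

function authenticate(rawCredential: string, liveTemplate: number[], trustedIssuers: Set<string>): boolean {
  const cred = verifyCredential(rawCredential);
  if (!cred.signatureValid || !trustedIssuers.has(cred.issuerDid)) return false; // spoofed or untrusted issuer
  // Cross-check the tamper-evident template from the credential against the
  // liveness capture; no central biometric database is consulted.
  return faceMatch(cred.faceTemplate, liveTemplate) >= 0.9;
}
```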

This is how Indicio technology is being used for border management and travel.

2. How Verifiable Credentials can give AI agents verifiable identities

Simple: an organization issues its AI agents and its customers or users with Verifiable Credentials.

They connect by establishing a secure communications channel. Each party can cryptographically prove they control their end of the channel. 

Then, they verify each other’s identity by cryptographically proving that the issuer is a trusted source for each identity credential and that the information in the credential hasn’t been altered.

This means a person can be sure they’re interacting with a legitimate AI agent before they share their data.

Just as important, AI agents can verify other AI agents before sharing a person’s data with that person’s permission. Machine-readable governance files determine which AI agents are to be trusted within an ecosystem and which information needs to be shared and to which agent.
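A minimal sketch of what such a governance check might look like follows. The file shape, DIDs, and attribute names are invented for illustration; real machine-readable governance in a given ecosystem defines its own schema.

```typescript
// Illustrative governance file: the DIDs, attribute names, and shape are
// invented for this sketch, not an actual machine-readable governance schema.
const governance: { trustedIssuers: string[]; agentPermissions: Record<string, string[]> } = {
  trustedIssuers: ["did:example:airline", "did:example:govt-agency"],
  agentPermissions: {
    "did:example:booking-agent": ["itinerary", "contact-email"],
    "did:example:payment-agent": ["payment-token"],
  },
};

// Share an attribute with another AI agent only if the governance file lists
// that agent and explicitly permits the attribute.
function mayShareWith(agentDid: string, attribute: string): boolean {
  const allowed = governance.agentPermissions[agentDid];
  return !!allowed && allowed.includes(attribute);
}

console.log(mayShareWith("did:example:booking-agent", "itinerary"));     // true
console.log(mayShareWith("did:example:payment-agent", "contact-email")); // false
```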

Verifiable Credentials make automated AI-systems and ecosystems secure and compliant.

3. Verifiable Credentials are a way to mitigate fake documents

There are multiple ways to address this problem. Indicio has partnered with Regula to provide document authentication as part of the credential issuance process. Regula is the leading provider of document identification in the world.

Documents can also be structured as Verifiable Credentials. There is a size limit, but if you want the ultimate in line-by-line verifiability, this is an option. Also, an image of a document can be included in a Verifiable Credential. Documents can also be attached to a Verifiable Credential by using a hash link.
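The hash-link idea fits in a few lines: the credential carries a digest of the document, and any presented copy is re-hashed and compared against it. The attribute names below are illustrative, not a specific credential schema.

```typescript
import { createHash } from "node:crypto";

// Sketch of the hash-link idea: the credential carries only a digest of the
// attached document; any presented copy can be re-hashed and compared.
// The attribute names are illustrative, not a specific credential schema.
function sha256Hex(documentBytes: Buffer): string {
  return createHash("sha256").update(documentBytes).digest("hex");
}

const original = Buffer.from("...document contents...");
const credentialAttributes = {
  documentName: "lease-agreement.pdf", // hypothetical example document
  documentSha256: sha256Hex(original),
};

// Verifier side: re-hash the presented copy and compare it to the digest
// that was signed into the credential.
function documentMatches(presented: Buffer): boolean {
  return sha256Hex(presented) === credentialAttributes.documentSha256;
}

console.log(documentMatches(original)); // true: untampered copy
```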

4. Verifiable Credentials prevent identity drift

Identity drift is where the metadata associated with an identity — access, device, behavior — subtly changes over time, signalling that it may have been compromised.

Verifiable Credentials prevent identity drift by virtue of the credential data being digitally signed in a way that’s resistant to alteration — even by AI. The credential is bound to its holder through multiple layers of cryptography and biometric access. 
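To see why signed data resists this kind of alteration, here is a minimal, generic sketch using an Ed25519 keypair. It illustrates tamper evidence only; it is not Indicio's credential format, which layers selective disclosure, revocation, and holder binding on top.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Minimal, generic illustration of tamper evidence with an Ed25519 keypair.
// Real Verifiable Credentials add selective disclosure, revocation, and
// holder binding on top of a signature like this.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const credentialData = Buffer.from(JSON.stringify({ name: "Alex Doe", clearance: "staff" }));
const signature = sign(null, credentialData, privateKey); // algorithm is null for Ed25519

console.log(verify(null, credentialData, publicKey, signature)); // true: data unchanged

const drifted = Buffer.from(JSON.stringify({ name: "Alex Doe", clearance: "admin" }));
console.log(verify(null, drifted, publicKey, signature)); // false: altered data fails verification
```

Any change to the signed bytes, however small, causes verification to fail, which is what makes drift detectable at presentation time.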

If a person loses their phone, the credential is revoked and reissued following the same original identity assurance process. The old credential is gone. It will be recorded as dead on a revocation list written to a global ledger.

Critically, Verifiable Credentials usage is non-correlatable. The peer-to-peer communications protocol used in presenting and verifying credential data is uniquely encrypted for each verifier.

5. Verifiable Credentials prevent credential replay

A Verifiable Credential is cryptographically bound to a person, an organization, or a device. It’s not something that can be shared or stolen and reused by another party. And because the data is under the owner’s control, it can only be shared with their consent.

2026 — the year of decentralized identity

While we can’t have too many defenses in our anti-fraud armory, prevention is better than remediation.

And what makes decentralized identity so powerful isn’t just that it’s a superior preventive technology: it’s also an implementation technology for better digital interaction, delivering significant practical benefits to people and organizations alike. It allows organizations to implement simpler, seamless processes and deliver faster and better user experiences.

Similarly, Verifiable Credentials provide an implementation solution for developing and expanding AI agents and automated systems while meeting compliance and security requirements.

Both of these business value propositions align with where the market is going: the end of 2026 is the deadline for delivering European Digital Identity (EUDI) and the EU digital wallet to all EU citizens, residents, and businesses. This means hundreds of millions of people will start using Verifiable Credentials.

This is the real signal that it’s game on for Verifiable Credentials.

Talk with one of our digital identity experts about how you can gain a competitive edge with Verifiable Credentials and build the internet of tomorrow, today.

 

The post Game over, traditional identity authentication — game on, Verifiable Credentials appeared first on Indicio.


FastID

React2Shell Continued: What to know and do about the 2 latest CVEs

In the wake of the critical severity React2Shell CVEs, two new CVEs exploiting similar Next.js and React components were announced on December 11. Learn more about these new CVEs.

The Sleep Test: How Embracing Chaos Unlocks API Resilience

Learn how Fastly's API Discovery and new Inventory feature unlock API resilience for platform engineers by embracing chaos and strategic, thoughtful planning.

Wednesday, 10. December 2025

FastID

Upgrading Visibility for Compute with Domain Inspector

Fastly's Domain Inspector now supports Compute. Get robust domain-level observability, real-time traffic visibility, and faster troubleshooting at the edge.

DDoS in November

DDoS attackers were largely absent on Black Friday 2025. Fastly’s latest report reveals why, and what the shifting attack patterns mean for your apps and APIs.

Tuesday, 09. December 2025

TÜRKKEP A.Ş.

Doğuş University - TÜRKKEP Protocol: The Collaboration of Knowledge and Power in Digital Transformation

In an increasingly digital business world, the real strength of organizations no longer comes only from the technologies they use, but also from the knowledge of the people who build and manage that technology. The question shared by universities, public institutions, and the private sector alike is therefore: how do we better prepare employees and students for a rapidly changing world?

Dock

Dock Labs Year in Review 2025: Digital ID’s Breakout Year

For years, digital identity sat on the edge of mainstream adoption. It was discussed in working groups, tested in pilots, and championed by specialists, yet rarely treated as core infrastructure. In 2025, that changed. This was the year digital identity stopped being an add-on to existing systems and started becoming

For years, digital identity sat on the edge of mainstream adoption. It was discussed in working groups, tested in pilots, and championed by specialists, yet rarely treated as core infrastructure. In 2025, that changed.

This was the year digital identity stopped being an add-on to existing systems and started becoming the foundation they are built on. Governments moved from experimentation to execution, enterprises began designing around reusable identity instead of one-off verification, and new forms of commerce driven by AI agents exposed just how essential trust, delegation, and verified identity have become.


Spherical Cow Consulting

What I Wish I Knew When I Started in Identity

In this episode, discover how today’s rapidly shifting digital identity landscape is bringing new practitioners into the field and challenging long-held assumptions about IAM, trust frameworks, and governance. Learn why even foundational concepts can feel unexpectedly complex as identity becomes integral to products, security, and global compliance. In this episode, discover how community expert

“The day this blog post comes out, I’ll be in Grapevine, Texas, at the Gartner IAM conference, moderating a panel called ‘What I Wish I Knew When I Started in Identity.’”

I’ll be on stage with Elizabeth Garber and Andrew Cameron—two people who understand identity from very different angles and bring the kind of honesty you want on a panel like this.

Preparing for the session meant sitting down with the questions I planned to ask and forcing myself to give the same level of reflection I expect from the panelists. And as I started writing, it felt like something worth sharing more broadly. Because what people wish they knew at the beginning often says a lot about how the field has changed and where it’s going next.

But before diving into the questions, it’s worth pausing on why this conversation matters now.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Why this matters now

Maybe it’s because I’m in the middle of so many overlapping projects right now, but I’m seeing an unmistakable shift in who cares about digital identity. For years, it felt like a specialization tucked deep inside infrastructure teams, academic federations, or standards groups that met in windowless hotel conference rooms. Identity mattered, but it wasn’t popular.

Today, it’s showing up everywhere.

People are trying to figure out how they’re going to identify, authenticate, and authorize AI bots that act with varying levels of autonomy. Teams are scrambling to understand the reality of age-verification requirements and what it actually means to handle age signals responsibly. Product managers are waking up to the fact that wallets, passkeys, and browser-mediated login flows are going to shape their user experience, whether they planned for it or not.

Even people who don’t think they work in identity suddenly find themselves pulled into it. If you work on customer experience, fraud, security, payments, or content moderation, identity is now sitting right in the center of the decisions you need to make.

When more people enter the field—or simply become aware that this field is now part of their job—they usually arrive with a healthy mix of curiosity and apprehension. They can sense that identity is important, but they also sense it’s complicated. And they’re right.

That’s why this panel feels timely. We have a growing number of people stepping into identity, and they deserve not just technical documentation or product briefs but actual guidance. They deserve someone to say, “Here’s what I wish I knew when I started.” So that’s what I set out to do as I prepared my own notes.

Patterns I see in people new to identity today

There’s some received wisdom in the identity world that it takes about two to three years before someone feels steady navigating IAM. And that’s IAM in the traditional sense—provisioning, access control, directory systems—which is only one piece of the bigger digital-identity picture. Add things like federation, wallets, regulatory requirements, and browser-level shifts, and the surface area expands quickly.

Most newcomers have no idea what they’re stepping into.

They think they’re going to learn a couple of protocols, pick up some security basics, and maybe memorize a few acronyms. Instead, they find themselves in a space that touches policy, governance, trust frameworks, user experience, cross-border regulation, platform politics, and long-running historical debates that occasionally flare up like old family arguments.

The most common thing I hear from people in their first six months is some variant of: “I didn’t realize how big this was.”

And because it’s big, it’s also intimidating. The learning curve feels like a hill that grows steeper as you climb it. Newcomers oscillate between excitement—because identity really is fascinating—and frustration, because every answer seems to introduce a new question.

The encouraging part is this: people who stick with identity tend to be generous with their experience. The community has its quirks, but it also has a strong culture of mentorship. If you show enthusiasm and a willingness to learn, someone will take the time to help you understand the puzzle you’re staring at.

These are the patterns I had in mind as I drafted answers to the panel questions—because I want newcomers to know that the confusion they feel is normal. And survivable.

1. The misconception I had at the start

For a long time, I believed—quite sincerely—that everyone understood the critical nature of digital identity.

My very first job in tech was as a Galacticom Bulletin Board System operator. A BBS SysOp. This was long before I knew anything about formal IAM or the standards world. But even in that system, primitive as it was, my job boiled down to two essential responsibilities: managing accounts and deciding what those accounts could do.

It turns out that theme followed me everywhere. Every job I’ve ever had—whether it looked like identity or not—was ultimately about understanding users and determining what they should be able to access.

Because of that, I assumed other people saw technology the same way I did: through the lens of identity. But most people don’t. Most people think in terms of the service they’re using, not the infrastructure that makes it possible. They focus on sending and receiving email, buying and selling goods, publishing or reading online material. The identity plumbing that makes all of that possible feels invisible to them.

It took me longer than I’d like to admit to realize that what feels like “the obvious starting point” to me is not where most people’s mental models begin. Understanding that gap changed how I communicate and how I frame problems. It made me more patient. It helped me bridge conversations between IAM specialists and colleagues who were looking at the world through completely different lenses.

2. The advice I’d give my younger self

If I could go back, I’d tell my younger self not to be so intimidated by the people who had been working in identity forever.

When I started attending standards meetings and community gatherings, I found myself surrounded by people who seemed impossibly knowledgeable. They spoke in acronyms I didn’t know, referenced failures I had never heard of, and debated edge cases that felt light-years beyond my understanding. I spent the first few meetings convinced I should sit quietly and hope I didn’t expose how little I knew. (Does this sound familiar?)

But here’s the secret I wish I’d learned earlier: the people who have been in identity the longest are often the most generous with their time. They’ve made mistakes. They’ve learned hard lessons. They’ve held their heads in their hands after a deployment disaster. They’ve had opinions shaped by actual lived experience, not theoretical purity. And most of them stay in this field because they care about helping people understand it.

Once I started asking questions—real questions, not perform-to-impress questions—I discovered that these conversations were the best part of entering the identity world. They helped me understand not just the “what” but the “why.” They gave me a sense of the history that sits behind every architectural argument. And they helped me see identity as a community, not a solo pursuit.

I wish I’d started those conversations earlier. They’re the ones that helped me grow.

3. The tradeoff I didn’t fully appreciate

Identity is built on tradeoffs. That part I understood early on.

What I didn’t appreciate was how inconsistent people can be in identifying and managing risk.

Some people see risk everywhere. They’re so good at spotting potential problems that they become paralyzed. They worry themselves into inaction, convinced that every decision is too dangerous to make. When I worked in research & education, these people were the bane of my professional existence.

Others take the opposite approach: they assume the happy path will prevail and dismiss the possibility that anything could go wrong. They don’t see risk so much as they see inconvenience, and they ignore honest questions that might complicate the roadmap. (I’m not going to say I see this frequently in standards development, but, well…)

For a long time, I expected people to meet somewhere in the middle—to balance caution with pragmatism. It took years of experience to realize that “balanced” isn’t always a natural instinct. It’s something we have to cultivate. And in identity work, you see these extremes up close because identity touches everything. Every risk conversation becomes a mirror for the organization’s worldview.

Learning that was freeing. It helped me reframe risk discussions into something more human. It helped me understand that sometimes people aren’t being difficult—they’re reacting to uncertainty in the only way they know how. And it made me far more effective at guiding teams toward decisions that acknowledge real risk without turning everything into a crisis.

4. The moment I realized identity is more than authentication and authorization

Identity isn’t just authN and authZ. Most people who’ve been here for a while know that, but I learned it early thanks to the Research and Education world.

My first influences came from the federated-identity ecosystem built by Internet2 and the MACE community at the start of this century (links to that group are long gone, but you can see a brief description from this page published back in 2006). People like Lucy Lynch, Scott Cantor, Tom Barton, Ken Klingenstein, Michael Gettes, and many others were building not just technology, but relationships, trust frameworks, and community norms that allowed institutions to work together.

Working in that space made it obvious that identity is more than protocols. It’s how communities decide who belongs, who can access what, and what the boundaries of collaboration are. You can’t separate the technical choices from the governance choices, or the policy questions from the deployment questions.

To this day, that early influence shapes how I think about identity. I see it as infrastructure, yes—but also as a social function. And I’m grateful to the people who put me on that path.

5. What I hope newcomers learn faster than we did

If there’s one thing I hope people entering the field understand sooner, it’s this:

Focus on your immediate needs, but never lose sight of the bigger picture.

You may be solving a local problem—an onboarding flow, a consent experience, a directory restructuring, a federation integration—but the forces shaping your future are global. Regulations you’ve never heard of will affect you. Standards being debated in other regions will shape your architecture. Browser shifts driven by privacy expectations will break the assumptions your systems rely on. Identity doesn’t stand still. And neither do the requirements around it.

The sooner people recognize that identity sits at the intersection of local implementation and global influence, the quicker they can design systems—and careers—that are resilient. That understanding turns frustration into curiosity. It turns confusion into engagement. And it turns newcomers into practitioners who can help move the field forward.

Welcome to digital identity

Moderating this panel also reminded me how much I’ve benefited from other people’s generosity over the years. Every insight I have now came from a conversation, a mistake, a hallway debate, or a moment when someone took the time to help me understand something I was struggling with. None of us figures this stuff out alone.

If you’re new to identity, I hope some part of this helps. And if you’re at Gartner IAM this week, come say hello. There’s always room for more voices in this conversation; you never know what you’ll wish you knew today that you’ll be glad you learned tomorrow.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]

Transcript

[00:00:04] Welcome to another episode of Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. In this episode, recorded while I’m in Grapevine, Texas for the Gartner IAM Conference, we explore a panel theme that turned out to be a great topic for both the blog and audio: What I Wish I Knew When I Started in Identity.

As more newcomers enter the industry, understanding its nuances matters more than ever. And today’s conversation offers guidance for anyone beginning their identity journey.

Why This Conversation Matters

[00:00:30] As the industry shifts, digital identity is drawing in people who never expected to work in this space—product managers, security teams, payments specialists, fraud analysts, and more. Identity, once a quiet corner of tech, has become the thread running through everything.

From AI bot identification to age-verification laws and wallet-driven user experiences, digital identity now influences decisions across organizations.

Short version:

The field is evolving quickly. Newcomers are arriving fast. The questions are getting harder.

And many people are looking around thinking, Where do I even begin?

The Learning Curve in Digital Identity

[00:02:53] There’s a common bit of unofficial wisdom: it takes two to three years to feel proficient in IAM. And that’s just the IAM slice—provisioning, access control, directories. The broader digital identity landscape includes:

Policy and trust frameworks
Cross-border regulation
Governance models
A long history of assumptions and decisions

Newcomers quickly discover that the answer to nearly everything is: “It depends.”

Identity is:

Messy
Political
Global
Occasionally maddening

But the good news?
The identity community is full of people who remember exactly what it feels like to be overwhelmed—and are quick to help others find their footing.

Misconception #1: Everyone Understands How Important Identity Is

[00:04:30] One of the first misconceptions I had was assuming everyone else saw digital identity as foundational. My early tech jobs all revolved around managing accounts and access, so it felt obvious to me. But most people don’t think this way.

[00:05:30] To them, identity is invisible—until it breaks. They think about what a service does, not who is behind the scenes managing identity plumbing.

Once I stopped assuming shared context and started building that context instead, conversations became much easier.

Advice to My Younger Self: Don’t Be Intimidated by Standards Communities

[00:05:59] In my early standards-meeting days, everyone seemed impossibly knowledgeable, speaking in shorthand and referencing protocols long dead. I spent more time trying not to look ignorant than I spent learning.

Key things I wish I had known earlier:

The most experienced people are often the most approachable. They want new voices in the room. Everyone—even the experts—has moments where they don’t know what’s going on.

Once I started asking honest questions, everything shifted. I gained not just technical understanding, but also the why behind decisions and standards.

Digital identity is not a field you grow into alone.

Understanding Trade-offs: The Many Faces of Risk

[00:07:38] I knew risk mattered, but I didn’t understand how differently people experience it. Some see risk everywhere and freeze. Others see almost none and assume everything will work out fine.

People experience risk based on:

Emotion
Personal history
Past failures
Past successes

Recognizing this helped me shift conversations from fear or denial into something more balanced. Most people aren’t being difficult—they’re just responding to uncertainty with the tools they have.

And in identity, every decision ripples outward, so you learn this lesson quickly.

Identity Is Bigger Than Authentication: The Human Side of Trust

[00:09:44] My earliest influences came from the research and education community—people like Scott Cantor, Tom Barton, Ken Klingenstein, Michael Gettes, and Lucy Lynch. They understood identity as not just technical, but social and collaborative.

Identity shapes:

How communities operate
How institutions decide who belongs
How governance and technology intersect

That perspective changed everything for me. Authentication and authorization matter, but they’re only part of the larger ecosystem of trust.

What I Hope Newcomers Learn Sooner: Think Locally and Globally

Newcomers often focus only on immediate challenges—onboarding flows, directory hygiene, access policies. But identity is shaped by much bigger forces.

Global influences that will affect your system:

Regulations in other regions
Standards drafted half a world away
Browser or platform changes you didn’t vote on

Identity moves fast, and assumptions can shift overnight. The sooner you understand the global ecosystem, the better prepared you are to build systems that survive change.

Closing Reflections

Preparing for this Gartner panel meant looking back at the debates, uncertainties, and conversations that shaped my understanding of digital identity. Everything I know came from people willing to share their stories—and sometimes their cautionary tales.

If you’re new here, I hope these reflections make the path a little clearer.
And if you’re at Gartner IAM this week, come find me—I always love meeting people just starting to understand how big and fascinating this field really is.

Outro

[00:13:02] Thanks for listening to this episode of Digital Identity Digest. If you found it helpful, please share it with a colleague and connect with me on LinkedIn @hlflanagan. And of course, subscribe and leave a review wherever you listen to podcasts. You can always find the written version at sphericalcowconsulting.com.

The post What I Wish I Knew When I Started in Identity appeared first on Spherical Cow Consulting.


liminal (was OWI)

Chargeback Prevention in eCommerce

The post Chargeback Prevention in eCommerce appeared first on Liminal.co.



uquodo

The State of Mobile Identity Security in MEA Telecom

The post The State of Mobile Identity Security in MEA Telecom appeared first on uqudo.

FastID

Black Friday is Dead, Long Live Black Friday: Cyber 5 Traffic Insights

Black Friday isn’t the traffic spike it used to be. Fastly’s data shows holiday demand now stretches across all of November. Here’s what Cyber 5 2025 really looked like.

How React2Shell is evolving: Industries and regions targeted

Fastly is seeing sustained React2Shell attacks across all industries and regions. Learn what’s happening and the critical steps enterprises should take to patch vulnerable apps.

Monday, 08. December 2025

1Kosmos BlockID

1Kosmos highly commended in FinTech Futures’ 2025 PayTech Awards

The post 1Kosmos highly commended in FinTech Futures’ 2025 PayTech Awards appeared first on 1Kosmos.

Dock

The Biggest Misconception About Passkeys

One of the most common misconceptions we hear in the identity space is that Passkeys and Digital Identity are competing approaches. They’re not. The truth is much simpler: Passkeys and digital identity solve different problems.

One of the most common misconceptions we hear in the identity space is that Passkeys and Digital Identity are competing approaches. They’re not.

The truth is much simpler:

Passkeys and digital identity solve different problems.


Herond Browser

Herond Browser: November 2025 Report

November marked a major leap in our journey! We were thrilled to unveil the new Herond Browser concept and officially introduce the integrated Herond Wallet. The post Herond Browser: November 2025 Report appeared first on Herond Blog.

November marked a major leap in our journey! We were thrilled to unveil the new Herond Browser concept and officially introduce the integrated Herond Wallet. The biggest feature? We’ve already powered up the Wallet with Uniswap aggregation, bringing optimized, secure swaps directly to your browser. Read on for the full details of your new all-in-one Web3 gateway.

Product Updates: The future of browsing starts here

Get ready for a smoother ride: We’ve rolled out major updates designed to elevate your Herond experience.

Herond Browser New Concept Updates

Inspiring New Tab: The redesigned New Tab Page features a clean layout and a beautiful, bold background image. It places your most-visited sites front and center, complemented by a focused search bar for instant navigation.
Vertical Tabs: We’re introducing Vertical Tabs by default on desktop. This design lets you see more titles and scan your open tabs faster, enhancing organization, especially for power users who juggle many tasks simultaneously.
Instant Bookmarking: Saving resources is now done in a flash. Our one-tap save feature works with an instant, clear folder picker, eliminating guesswork and speeding up your content organization.
Persistent Tabs: Say goodbye to losing your workflow. Pinned and grouped tabs now feature persistent state, meaning they always stay exactly how you left them, even after a browser restart.
Trustworthy Sync: We’ve delivered smoother, more reliable cross-device synchronization. Your active sessions persist, and all data, from history to passwords, stays fresh and reliably up-to-date across all your devices.
Refreshed Onboarding: The new onboarding process is fast, beautiful, and redesigned for both desktop and mobile. Go from your first tap to your first successful action in seconds, perfectly aligned with our new user-centric vision.

Herond Wallet New Updates

Keyless Wallet: The integrated Herond Wallet is a true game-changer: One Account, All Chains. It’s fully secure and eliminates the risk and hassle associated with traditional keys and seed phrases.
Cross-Chain Swaps: You can now swap assets between any integrated chain in seconds. Our built-in aggregator ensures you get the best rates with minimal hassle and maximum efficiency.
Fiat On-Ramp: We’ve made entry easier than ever. Buy crypto directly in-wallet using fiat, instantly connecting your traditional finances to the decentralized world.
Asset Tracking: Keep your portfolio under control with crystal-clear charts and asset tracking. See your full financial picture at a glance, making informed decisions faster.

Partnership

Uniswap’s Trading API is officially coming to Herond Wallet

With the Uniswap integration, users can now trade cryptocurrencies directly and securely, eliminating the need for third-party tools or leaving the browser.

Integrating the Uniswap Trading API fundamentally transforms Herond into a complete, in-browser DeFi gateway. This means users can now trade cryptocurrencies directly on Uniswap without ever leaving the browser interface or requiring external wallet extensions.

By lowering these technical and security barriers, Herond empowers more users to participate confidently in the blockchain economy. Herond is now a truly comprehensive Web3 platform, unifying browsing, trading, and asset management in one secure solution.

Community and Events

The Herond Trading Competition is LIVE

Trade, compete, and win! We’re launching the official Herond x Uniswap Trading Competition. Test your strategy on the premier DEX (right inside Herond) and secure your share of the monumental prize pool.

Key Dates: Campaign Period: 7:00 Nov 18th (UTC) – 7:00 Dec 17th, 2025 (UTC)

The Prize Pool Breakdown (The “Big Prizes”): Total Prize Pool: Top traders share $2,000 USDT based on final rankings.

How to join

Download Herond Browser and log in to Herond Account.
Set up your Herond Keyless Wallet and activate it for trading.
Trade at least $300 volume on Herond Wallet’s Swap section with the Uniswap Aggregator to be counted on the leaderboard.

How to create Herond Keyless Wallet?

Step 1. Tap the Wallet icon

Open Herond Browser and hit the Wallet icon on your toolbar.

Step 2. Choose Create Wallet

Select Create Wallet to start your secure Keyless setup.

Step 3. Set a strong password

Protect your wallet with a unique, powerful password that only you can access.

Step 4. Ready to go!

You’re all set. Start trading, swapping, and join the campaign to earn rewards.

Upcoming event

Herond Quest: The Browser Battle

Herond Quest will officially launch in Vietnam on December 12th. Following the domestic release, Singapore, Thailand, and Australia have been confirmed as the first three international markets where Herond Quest will be available via local partners (further international dates are yet to be determined).

Herond Rebranding

Herond is on the brink of a major evolution. We are preparing to unveil something significantly bolder and more impactful, promising to redefine your digital experience. Stay tuned for exciting developments.

About Herond Browser

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser
DM our official X @HerondBrowser
Technical support topic on https://community.herond.org

The post Herond Browser: November 2025 Report appeared first on Herond Blog.


FastID

Fastly’s Proactive Protection for React2Shell, Critical React RCE CVE-2025-55182 and CVE-2025-66478

Protect your apps from the critical React RCE bugs (CVE-2025-55182/66478). Fastly's NGWAF Virtual Patch provides proactive defense.

Unparalleled Performance: Bring Your C++ Logic to the Edge

Bring your C++ logic to the edge with the Beta Fastly Compute SDK. Achieve unparalleled, near-native performance, low-latency, and enhanced security via WebAssembly (Wasm).

Friday, 05. December 2025

myLaminin

Data Anonymization. What is it? How and when is it required?

As research grows more data-driven, protecting personal information has never been more critical. Data anonymization helps researchers share and analyze sensitive datasets without exposing identities—but doing it well is complex. This article breaks down what anonymization is, why it matters, common methods, legal requirements, and how tools like myLaminin make secure, compliant collaboration possi
As research grows more data-driven, protecting personal information has never been more critical. Data anonymization helps researchers share and analyze sensitive datasets without exposing identities—but doing it well is complex. This article breaks down what anonymization is, why it matters, common methods, legal requirements, and how tools like myLaminin make secure, compliant collaboration possible.

IDnow

Complying with the EU Accessibility Act made our solutions better – for everyone.

Digital identity verification should be inclusive, intuitive, and accessible to all, regardless of disability. That’s why, in 2025, we set out to enhance our solutions to comply with the latest European accessibility standards. Every year, we process over 100 million documents, which is approximately the same number of
Digital identity verification should be inclusive, intuitive, and accessible to all, regardless of disability. That’s why, in 2025, we set out to enhance our solutions to comply with the latest European accessibility standards.

Every year, we process over 100 million documents, which is approximately the same number of people that are registered as disabled in Europe. In fact, one in four European adults are registered as physically or cognitively disabled, with many unfortunately living without proper access to basic, essential services.

According to the European Council of the European Union, 29% of those with disabilities are at risk of poverty or social exclusion and are four times more likely to have unmet healthcare needs compared to the able-bodied. 

Technology can unlock a world of opportunity for those with disabilities, including enhancing independence, improving communication, and granting greater access to information.

For those with sight, hearing, mobility or cognition issues, being able to access online services provides an invaluable lifeline; an alternate entry point to essential services like banking and healthcare that are necessary to live a full and rich life that everyone is entitled to. 

Accessing online services also provides a much-needed window to the outside world, allowing people to feel part of a community and benefit from education and remote work opportunities.

Christophe Chaput, Senior Product Owner at IDnow

Unfortunately, however, not every website or app has been designed in a user-friendly way, especially for those with disabilities. The European Accessibility Act (EAA), which came into force in June 2025, set out to change all that. 

Discover how IDnow has helped address skin tone bias in global facial verification systems in our blog, ‘Breaking down biases in AI-powered facial verification.’ 

What is the European Accessibility Act?

The EAA is an EU directive aimed at improving technological inclusivity for a wide range of products and services, including information and communication technologies, financial services, transportation, emergency services, and e-commerce services. 

It mandates that all products and services are designed to be accessible to people with disabilities, particularly those related to sight, hearing, mobility, or cognition. Here are the EAA’s three key requirements:  

Accessibility of digital services and websites. 

Businesses must ensure their websites, mobile apps, and digital content are accessible. This includes screen reader compatibility, keyboard navigation, and text alternatives for non-text content, such as images and videos. 

Accessible design of products. 

Electronic devices such as smartphones, computers, ATMs, ticket machines, and e-readers must be designed to be usable by people with disabilities. This means including features like tactile buttons, audio output and adjustable font sizes or contrast settings. 

Accessibility of communication and customer support. 

Companies must provide accessible customer service channels (e.g., chat, phone, email) for people with disabilities. Information about products and services must also be available in accessible formats like large print, braille, or easy-to-read versions.

What this means for our customers.

In 2025, to ensure our products and services are inclusive, we invested in multiple initiatives — including our design system, ‘Sunflower’, which embeds accessibility at the very core of our solutions. Sunflower ensures: 

Adaptability: Just as a sunflower adapts its position to face the sun, our design system seamlessly adjusts to the needs of the user, providing an interface that is intuitive, responsive, and customized.
Structured Core: Like the well-defined core of a sunflower, IDnow’s Sunflower design system has a solid foundation, ensuring consistency and cohesion in every aspect, from color schemes to typography.
Versatility: The design elements within Sunflower are versatile, capable of being arranged in various configurations to cater to a multitude of user interfaces and experiences.

A question we often get asked is: how can we be so sure that our solutions can be used by everyone? Indeed, there are many different types of disability and different types of hardware and online services.

After all, a person with a specific disability may need specific tools, devices or software to use as standard interfaces may not accommodate their unique needs or challenges. They may need specific hardware like a screen reader, voice command, braille reader, or other types of hardware. To ensure compatibility with all manner of tools, we follow the Web Content Accessibility Guidelines (WCAG) 2.1, which are a set of recommendations for making web content more accessible.

Is there an accessibility app for that?

So, what does accessibility look like in practice? Well, it’s important to break accessibility down into two key levels: the visible and the invisible. 

The visible part: The user interface. 

Of course, the user interface (UI) and graphical layout of the application are all that users tend to see and interact with. So, when we talk about accessibility, we’re referring to elements like: 

Contrast ratios between text and background colors
Font size and scalability, especially for users who increase text size on their devices
Clear icons and labels
Consistent layout and visual cues that help users navigate

These design choices ensure that users with visual impairment, color blindness, or cognitive challenges can still perceive and interact with the interface effectively. It’s about making sure everyone can see and understand what’s happening on the screen. Our graphical interface has been completely overhauled to integrate accessibility requirements from the ground up. This includes improvements such as: 

Optimized color contrasts for better visibility
Scalable typography for enhanced readability
Structured navigation and meaningful labels for screen readers
Improved keyboard navigation and focus management

These changes not only help users with disabilities navigate and interact with the application more easily, but they also create a smoother, clearer experience for everyone.

The invisible part: The user experience behind the scenes. 

The invisible part is less obvious — but just as important. This is where screen readers and assistive technologies come into play. Here, accessibility is embedded directly into the code to support users who rely on auditory or tactile feedback to navigate. For example: 

Reading order: Here, code defines the logical flow in which elements are read aloud by screen readers. This order must match the order of the text and visuals to avoid confusion.
Accessible labels: Buttons and interactive elements need clear, meaningful descriptions. For example, instead of labeling a button simply as ‘Button,’ it might be described as ‘Stop onboarding’ to indicate its actual function.
Alt text and ARIA attributes: These provide additional context for non-visual users, such as describing an image or identifying the role of a UI element.

Here’s how accessible identity verification works.

Let’s imagine a visually impaired user wanting to register for a service. She would activate her phone’s native screen reader, and the onboarding steps would be read out. For most users, it’s obvious that the ‘X’ icon in the corner closes the screen. But a screen reader can’t interpret icons visually — it depends on textual information coded into the element. That’s why it’s critical to assign a proper accessible label like ‘Stop onboarding,’ so the user knows what will happen when that element is activated. 
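As a generic illustration of the technique described above (not IDnow SDK code), the snippet below gives an icon-only close control a programmatic name so assistive technology announces its purpose instead of just "button".

```typescript
// Generic illustration of the technique described above; not IDnow SDK code.
// An icon-only control gets a programmatic name so screen readers announce
// its purpose ("Stop onboarding") rather than just "button".
const closeButton = document.createElement("button");
closeButton.textContent = "✕";                              // visual-only icon
closeButton.setAttribute("aria-label", "Stop onboarding");  // what assistive tech reads out
closeButton.addEventListener("click", () => {
  // close the onboarding flow here
});
document.body.appendChild(closeButton);
```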

Accessibility is about making sure that everyone — regardless of their abilities — can interact with your online service. That includes what they see and what they don’t see. Good accessibility bridges the gap between design and code, creating an inclusive experience from the top to the bottom.

Why accessibility benefits everyone.

Accessibility isn’t just about compliance — it’s about creating intuitive, efficient journeys for every user. By removing friction points and clarifying interface elements, accessibility boosts usability across the board. The result? Better customer experiences and a direct impact on conversion rates. 

We are also taking the opportunity to significantly enhance the customizability of the IDnow interface, meaning customers will have more flexibility than ever to tailor the SDK’s visual appearance to their own brand guidelines. 

This ensures a seamless integration into customers’ products and provides a consistent and polished experience for users from the start to the finish. 

With these updates, our customers can soon deliver a more inclusive and high-converting experience — while keeping their brand identity front and center. 

Read more about how video verification can help give your company a competitive edge, while providing a more inclusive experience, in our blog ‘Do verification services have an identity crisis?’

Interested in more insights from our subject matter experts? Click below! 

Senior Architect at IDnow, Sebastian Elfors explains how technical standards are moving from technical guidelines to legal foundations — and what that means for banks, fintechs, wallet providers, and every European citizen.
Former INTERPOL Coordinator, and current Forensic Document Examiner at IDnow, Daniela Djidrovska explains why IDnow offers document fraud training to every customer, regardless of sector.
Research Scientist in the Biometrics Team at IDnow, Elmokhtar Mohamed Moussa explores the dangers of face verification bias and what steps must be taken to eradicate it.
Research Scientist at IDnow, Nathan Ramoly explores the dangers of deepfakes and explains how identity verification can help businesses stay one step ahead of fraudsters and build real trust in a digital world.

By

Christophe Chaput
Senior Product Owner at IDnow
Connect with Christophe on LinkedIn

 

FastID

Deploy for Performance: Fastly’s Principles of Infrastructure Diversity and Soft Control

Discover Fastly's core resilience principles: Infrastructure Diversity prevents outages, and Soft Influence optimizes traffic for peak performance.

Thursday, 04. December 2025

Indicio

How to get a quick win from Verifiable Credentials with Indicio Proven®

The post How to get a quick win from Verifiable Credentials with Indicio Proven® appeared first on Indicio.
Verifiable Credentials provide next-gen identity verification, biometric authentication and document validation for customer pre-authorization and seamless experiences.

By Helen Garneau

Switching digital identity systems raises the specter of long and complex technical roadmaps and uncertain timelines. 

But Verifiable Credentials for decentralized digital identity are different. They are a layer you can add to your existing systems, flick a switch, and immediately start adding verifiable digital identity to any use case you choose. This means you can deploy an identity solution in days rather than months and start seeing value right away.

Take customer-facing processes that depend on verifying documents and personal data, such as checking into a hotel.

Anyone who has stood in line to get their room keys after a long flight knows how quickly things can fall apart when a problem with a document arises. This could be a missing passport, a broken photocopy machine, or an unexpected requirement. The desk agent has to stop, problem solve, and figure out a workaround to complete the process. Time ticks. Stress rises. The line grows. And even when the problem isn’t necessarily the hotel’s fault, the frustration is usually directed at the agent and the brand.

No one wants this, especially not the growing pool of digitally-native consumers who expect frictionless experiences.

Authenticated biometrics = preauthorization = seamless authentication 

This is where Verifiable Credentials deliver the seamless experience that people expect and the efficiency that businesses want.

For example, a hotel can ask a customer to submit their identity information at the time they book and pay for their room. The customer scans their passport or other required documents with their mobile phone, using digital wallet software. The software also asks them to perform a liveness check.

They can then send the information directly to the hotel, which authenticates the document and compares the image of the person on the document with the liveness check (with a passport, it compares the biometric image held on the passport chip). 

Once validated, this information is sent back to the customer in the form of a tamper-proof Verifiable Credential, which they store in their digital wallet. This verifiable identity is linked to their booking.

So when the customer shows up to check in, all they need to do is present the necessary Verifiable Credential(s) simply by scanning a QR code. The hotel uses verifier software that uses cryptography to confirm the origin of the credential and that the information has not been altered. Verifiable Credentials can’t be shared or stolen, so that’s it. No unexpected document requirements, no fumbling with documents, no time-consuming manual review. Just instant authentication. All that’s left is collecting your key.
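A minimal sketch of the check-in step under these assumptions might look like the following; the function and field names are placeholders rather than Indicio Proven's actual API.

```typescript
// Hypothetical check-in logic for the flow described above; the function and
// field names are placeholders, not Indicio Proven's actual API.
interface Presentation {
  reservationId: string;
  issuerDid: string;
  proofValid: boolean; // result of verifying the credential's cryptographic proof
}

const HOTEL_ISSUER_DID = "did:example:hotel-chain"; // credential issued at booking time

// Placeholder: a production verifier checks signatures, issuer keys, and revocation.
function verifyPresentation(qrPayload: string): Presentation {
  return JSON.parse(qrPayload) as Presentation;
}

function checkIn(qrPayload: string, reservations: Map<string, string>): string {
  const presentation = verifyPresentation(qrPayload);
  if (!presentation.proofValid || presentation.issuerDid !== HOTEL_ISSUER_DID) {
    return "Verification failed";
  }
  const room = reservations.get(presentation.reservationId);
  return room ? `Checked in: room ${room}` : "No matching reservation";
}
```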

And because authenticated biometrics are held within the credential, matching the credential holder to the credential, the entire process can be automated, so the hotel’s AI-powered services can easily confirm that guests are who they say they are and that they aren’t deepfakes.

Start now to scale your competitive advantage with Indicio Proven

The fastest, easiest, and most powerful way to implement Verifiable Credentials is with Indicio Proven.

You can validate over 250 identity document types from around the world, add liveness detection and face-mapping, and issue interoperable, tamper-proof credentials that align with global standards and can be verified anywhere. 

On day one. 

A white-label digital wallet, mobile SDK, and global distributed ledger are also included if needed. 

Indicio Proven is already in use by airlines, border agencies, and service providers around the world that need the highest possible digital identity assurance and global interoperability.

Talk with one of our digital identity experts about how you can gain a competitive edge with Verifiable Credentials and build the internet of tomorrow, today.


The post How to get a quick win from Verifiable Credentials with Indicio Proven® appeared first on Indicio.


ComplyCube

How to Lower Identity Verification Cost Without Sacrificing Compliance

Understanding the makeup of identity check pricing is crucial for realising cost efficiencies. It enables businesses to maintain a lower compliance budget without compromising security, accuracy, and scalability of KYC and AML. The post How to Lower Identity Verification Cost Without Sacrificing Compliance first appeared on ComplyCube.

Understanding the makeup of identity check pricing is crucial for realising cost efficiencies. It enables businesses to maintain a lower compliance budget without compromising security, accuracy, and scalability of KYC and AML.

The post How to Lower Identity Verification Cost Without Sacrificing Compliance first appeared on ComplyCube.


Ockto

European Digital Identity (EDI)

Everything you need to know about the European Digital Identity. You may well have heard a fair amount about it already: the European Digital Identity is under development. Across Europe, work is underway on a digital identity that all EU citizens and businesses in the EU will be able to use.

Everything you need to know about the European Digital Identity

You may already have heard a fair amount about it: the European Digital Identity is under development. Across Europe, work is underway on a digital identity that all EU citizens and businesses in the EU will be able to use. Slowly but surely, more and more is becoming clear.

But what exactly is a European Digital Identity? What does an ID wallet have to do with it? What consequences does this development have for your organisation? Below, we answer all your questions on these topics.


Herond Browser

Herond Browser x Vietnam Fintech Summit 2025


We are proud to announce our role as an Official Supporting Partner for the upcoming Vietnam Fintech Summit, taking place on December 13, 2025, in Hanoi under the umbrella of TECHFEST Vietnam and the Ministry of Science & Technology. This pivotal event is where Herond will connect our cutting-edge Web3 vision directly with regional finance leaders, solidifying our commitment to the future of digital finance.

Driving the Future of Digital Finance Together

Vietnam Fintech Summit 2025 will convene key stakeholders, including regulators, financial institutions, digital banks, fintech innovators, blockchain leaders, and international experts, to discuss Vietnam's next chapter in digital finance.

This year's theme highlights the nation's strong momentum in shaping a secure and innovation-driven digital asset ecosystem. As a pioneer in Keyless Wallet technology and comprehensive Web3 integration, Herond Browser is thrilled to contribute its voice and solutions to this critical national discussion.

We are deeply honored to support this vital national platform and are fully committed to contributing to the growth of Vietnam's fintech and digital economy. Our partnership at VFS 2025 solidifies Herond Browser's commitment to ensuring that user security and Web3 inclusivity form the very foundation of digital finance development moving forward.

Register to attend: https://luma.com/2s37rvkr

Learn more: vietnamfintechsummit.com

About Vietnam Fintech Summit 2025

Techfest Vietnam 2025 is the nation's largest annual startup and innovation event, directed by the Ministry of Science and Technology. It celebrates entrepreneurship and its role in shaping the country's future. The 2025 edition will take place in Hanoi, the national technology hub.

Within this framework, VFS serves as the official fintech event, gathering over 100 regulators, investors, and innovators. The VFS 2025 theme centers on the "Regulatory Sandbox & Digital Asset Market," focusing on how new pilot mechanisms, policy, and technology will accelerate fintech growth and build trust in Vietnam's digital finance ecosystem.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Herond Browser x Vietnam Fintech Summit 2025 appeared first on Herond Blog.


FastID

CAPTCHAs Are Costing Retailers Customers — Heavy AI Users Are the First to Walk Away

Our original survey of 881 online shoppers shows CAPTCHAs are quietly killing conversions, especially among heavy AI users.

Wednesday, 03. December 2025

Holochain

Dev Pulse 153: Holochain 0.6 Released with Immune System

Dev Pulse 153

Holochain 0.6 is now released and ready to develop on! We've written about it a couple of times already, so I won't cover it in depth, but the biggest new piece is a functioning immune system that blocks bad agents if their data fails validation. The validation and networking code have been thoroughly overhauled and tested to make sure they function correctly.

The immune system isn’t complete yet — currently, agents with an invalid membrane proof can initially join an app’s network and start communicating before their membrane proof has been validated — but validation is now correctly hooked into the network transport to ensure that invalid actions are responded to with network-level blocking. Additionally, warrants are delivered to anyone who queries a bad agent’s public key, so they can find out they need to block the agent even if they’re not a validator.

There’s still work to be done on the immune system in releases down the road. First, warrants are currently only delivered to the agent public key authorities, so you have to check for warrants using get_agent_activity. Second, membrane proof checking is currently only enforced via normal validation, not during handshaking, so unauthorised agents are able to join a network and access it for a short time before being warranted and blocked.

The dev team is also continuing to work on assessing Holochain’s performance with Wind Tunnel, a distributed test runner. Some of the insights from this have worked their way into the Holochain 0.6 codebase.

We invite you to upgrade your hApp to 0.6 right away (or scaffold your first hApp) and report any issues you encounter. Happy hApping!


Recognito Vision

The Contribution of Generative AI to Next Generation Facial Analysis


Imagine giving a camera a pair of glasses that help it see better, even when the world gets messy. That is what generative AI does for systems built around facial analysis. It fills gaps in blurry images, recreates missing features, and strengthens overall decision-making. This makes identification more reliable, whether someone is standing in harsh sunlight, wearing glasses, or captured at an awkward angle.

Many improvements seen today in facial biometrics come from research shared through programs like the NIST FRVT. If you want to explore how global benchmarks work, you can visit the official pages at the FRVT program site and the result listings for 1:1 verification. These tests keep the entire industry honest by tracking accuracy and fairness across algorithms.

What makes generative AI so helpful is its power to imagine variations. It can generate more training samples, enrich datasets, and strengthen models without adding stress to developers. When you think about how diverse human faces are, this feels less like magic and more like the missing ingredient biometric teams have always needed.


Smarter Face Detection and Why Generative AI Makes It Work

Face detection has always played the role of a gatekeeper. If a system fails to locate a face, the entire verification pipeline collapses. Generative AI helps by reconstructing lower-quality regions and recovering features that cameras sometimes miss. Imagine a blurry security frame where only half a face appears because of glare. A generative model can rebuild enough structure for a detector to lock on and proceed confidently.

This matters for public spaces, airports, crowded venues, and remote onboarding, where lighting and angles change constantly. Instead of depending on perfect conditions, detection models trained with generative enhancements adapt to environments that older systems struggled with. Developers often explore these techniques through community resources like the Recognito GitHub.

The result is a far more dependable first step in the verification journey, allowing the rest of the system to operate more smoothly.
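As a rough illustration of the pipeline described here, the sketch below enhances a degraded frame before running detection. Classical denoising and contrast equalization stand in for a generative restoration model, and OpenCV's bundled Haar cascade stands in for a production detector; none of this reflects any particular vendor's internals.

```python
# Sketch of "enhance first, then detect": clean up a degraded frame before the
# face detector runs. Classical filters stand in here for a learned generative
# restoration model.
import cv2


def detect_faces_with_enhancement(image_path: str):
    image = cv2.imread(image_path)
    # Reduce sensor noise, then improve contrast on the grayscale image.
    denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    enhanced = cv2.equalizeHist(gray)
    # OpenCV ships a pretrained frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    return cascade.detectMultiScale(enhanced, scaleFactor=1.1, minNeighbors=5)
```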


Improving Face Matching and Face Verification With Synthetic Refinement

Face matching compares two images to see if they belong to the same individual, while face verification checks identity directly. Both require consistent representations. Generative AI removes visual noise and strengthens important facial patterns, which leads to more stable embeddings across different sessions.

In many deployments, the models powered by synthetic enhancement show a major boost in reliability. Dim lighting, motion blur, or awkward angles stop being deal breakers because the system now understands how to fill the gaps. This creates smoother onboarding experiences in finance, travel, and digital authentication.

Below is a quick comparison table showing the differences commonly observed after introducing generative preprocessing:

Feature | Before Enhancement | After Enhancement
Accuracy | Moderate | Strong and consistent
Stability over time | Varies | More reliable
Lighting sensitivity | High | Reduced impact
Error rates | Noticeable | Significant improvement

Businesses that want hands-on verification capabilities often integrate tools like the face recognition SDK.
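To make "more stable embeddings" concrete, the snippet below compares two embedding vectors with cosine similarity against a threshold. The embedding model and the 0.6 threshold are placeholder assumptions, not values taken from any particular SDK.

```python
# Face matching in miniature: two images are mapped to embedding vectors and
# compared with cosine similarity. The threshold is a placeholder; vendors tune
# it against their own accuracy targets.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_same_person(embedding_a: np.ndarray, embedding_b: np.ndarray,
                   threshold: float = 0.6) -> bool:
    # More consistent embeddings (e.g., after generative preprocessing) keep
    # genuine pairs comfortably above the threshold across sessions.
    return cosine_similarity(embedding_a, embedding_b) >= threshold
```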


Liveness Detection That Adapts to Real People Instead of Fake Attempts

Liveness detection checks if the subject is a real human rather than a printed photo, screen replay, or impersonation attempt. Generative AI improves this area by creating challenging synthetic samples that prepare the model for fraud patterns it may never encounter naturally.

Instead of depending solely on traditional datasets, modern systems learn from realistic AI-generated spoofs. This training helps the system identify natural micro-movements like subtle eye reactions, genuine blinking, and tiny muscle shifts that fake media struggles to mimic.

Liveness technology becomes especially important in sectors like fintech and remote hiring, where identity fraud attempts continue to rise. Developers exploring these features can test them with the face liveness detection SDK.
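The sketch below shows one way the training-data idea could look in code: real capture sessions are mixed with AI-generated spoof samples so the liveness classifier sees attack patterns it would rarely encounter in collected data. `spoof_generator` is a placeholder for whatever generative model produces print, replay, or deepfake imitations; it is not a real API.

```python
# Hypothetical data-mixing step for liveness training: label real sessions as
# live (1) and AI-generated spoofs as attacks (0). `spoof_generator` is a
# placeholder for a generative model, not a real library call.
import random


def build_liveness_training_set(real_samples, spoof_generator, synthetic_ratio=0.5):
    """Return (sample, label) pairs; label 1 = live, 0 = spoof."""
    dataset = [(sample, 1) for sample in real_samples]
    for _ in range(int(len(real_samples) * synthetic_ratio)):
        seed = random.choice(real_samples)           # condition on a real face
        dataset.append((spoof_generator(seed), 0))   # synthetic attack sample
    random.shuffle(dataset)
    return dataset
```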


How Anti-Spoofing Evolves With Generative Training Data

Anti-spoofing used to rely on manually created samples like printed photos or simple masks. Generative AI changes the game by producing large volumes of realistic attack variations. This is like training a security guard with every possible trick instead of a small handful of examples.

The stronger coverage helps the model detect even rare and advanced spoofing techniques. This includes synthetic faces, digital overlays, complex masks, and high-resolution replay attacks. Because the system learns from such variety, its defenses stay sharp even when fraudsters become inventive.

These improvements reduce false positives, increase trust in identity flows, and provide smoother onboarding experiences without making users jump through extra hoops.
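Claims like "fewer false positives" are usually quantified with the presentation-attack metrics from ISO/IEC 30107-3: APCER (attacks accepted as genuine) and BPCER (genuine users rejected). A tiny worked example:

```python
# APCER: share of attack presentations wrongly classified as live.
# BPCER: share of bona fide presentations wrongly classified as attacks.
def apcer(attack_predictions):
    """attack_predictions: 1 = classified live, 0 = classified attack."""
    return sum(attack_predictions) / len(attack_predictions)


def bpcer(bona_fide_predictions):
    """bona_fide_predictions: 1 = classified live, 0 = classified attack."""
    return 1 - sum(bona_fide_predictions) / len(bona_fide_predictions)


# 2 of 100 attacks slip through; 3 of 100 genuine users are rejected.
print(apcer([1] * 2 + [0] * 98))    # 0.02
print(bpcer([1] * 97 + [0] * 3))    # ~0.03
```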


Why Standards and Regulations Matter More Than Ever

As biometric systems grow more capable, responsible development becomes essential. Ethical systems avoid storing full facial images. Instead, they rely on encrypted templates that cannot be reversed into the original face.

Compliance frameworks such as GDPR help guide developers toward transparent and user-friendly practices. Anyone interested in understanding these protections can read the full regulation text at GDPR Info. Programs like FRVT also hold algorithms accountable by measuring accuracy, fairness, and robustness across diverse populations.

These standards make sure improvements powered by generative AI still respect privacy and maintain public trust.


Exploring AI-Driven Face Technology Through Interactive Tools

Biometric concepts become easier to understand when users can test them directly. Interactive platforms like the face biometric playground let developers, students, and businesses experiment with detection, matching, verification, and spoof testing in real-time.

Trying out these tools often reveals how lighting affects detection, how confidence scores change during matching, and how liveness reacts to tiny human movements. This practical approach helps teams visualize system performance before full deployment, reducing surprises later.

It also helps decision makers evaluate multiple vendors, compare models, and understand how generative enhancements influence behavior across edge cases.


A Real Case Study Highlighting Generative Benefits

A digital onboarding company faced a surge in failed verifications from users with older smartphones. Many selfies arrived grainy or dim, causing the previous system to miss faces entirely. This led to long review queues and frustrated applicants.

After adding generative reconstruction during preprocessing, the system started identifying faces more accurately in challenging conditions. Liveness detection also became more resilient, catching attempts that previously slipped through.

Within weeks, the company reported major improvements.

False rejection rates dropped sharply

Onboarding became faster with fewer manual checks

Face matching produced more stable results

Customer satisfaction grew as verification became smoother

This small shift in technology created a noticeable ripple effect across the entire workflow.


Looking Ahead at Next Generation Biometric Systems

Generative AI is shaping the future by helping systems adapt to the endless variety found in human faces. Upcoming trends are moving toward more natural interactions, using depth, emotion analysis, and richer synthetic modeling.

Instead of rigid rule-based systems, next-generation biometric tools will focus on flexibility. They will learn how people move, blink, talk, and react in different environments. This supports secure identity flows in remote banking, travel checkpoints, online exams, and beyond.

One thing is clear. As generative technology continues to evolve, biometric systems will become more helpful, more inclusive, and more dependable for everyday users.


Final Thoughts on the Growing Role of Generative AI

Generative AI improves everything from detection to anti-spoofing, helping identity systems perform well under real-world conditions. By building richer datasets, reducing noise, and strengthening verification patterns, it brings security and convenience together without adding friction for users.

With global benchmarks like FRVT and practical tools offered through platforms such as Recognito, organizations can build systems that are both safe and user-friendly. In a world where digital identities matter more each day, innovations like these guide us toward a secure and trusted future powered by Recognito.


Frequently Asked Questions


1. What is generative AI in facial analysis?

Generative AI enhances facial analysis by improving image quality, reconstructing missing features, and creating synthetic data that strengthens accuracy and security in biometric systems.


2. How does generative AI improve face detection?

It rebuilds low-quality or partially visible facial regions, helping detection models locate faces more reliably in poor lighting, odd angles, or blurry conditions.


3. Why is generative AI important for liveness detection?

Generative models create realistic spoof samples that teach systems how to detect fake attempts, making liveness checks stronger against photos, videos, and deepfakes.


4. How does generative AI help reduce spoofing attacks?

It trains anti-spoofing models with a wide range of AI-generated attack variations, improving their ability to identify masks, digital overlays, and synthetic faces.


5. Is generative AI safe to use in biometric systems?

Yes. Modern systems use encrypted templates, comply with standards like GDPR, and avoid storing raw facial images, keeping biometric data private and secure.


Mythics

Mythics Accelerates Oracle Multi-Cloud Momentum, Driving Transformation Across Leading Hyperscalers

The post Mythics Accelerates Oracle Multi-Cloud Momentum, Driving Transformation Across Leading Hyperscalers appeared first on Mythics.

ComplyCube

Where Identity Document Verification Fits in the End-to-End KYC Process


Digital document verification is a crucial component of KYC compliance. However, many businesses face the dilemma of meeting stringent KYC requirements while ensuring their document verification processes remain seamless.

The post Where Identity Document Verification Fits in the End-to-End KYC Process first appeared on ComplyCube.


Dock

EUDI Wallet & Payments: What the Large-Scale Pilots Revealed


During one of our recent webinars, Esther Makaay, VP of Digital Identity at Signicat, shared key learnings from two years of real-world testing carried out through the EU Digital Identity Wallet Consortium (EWC).

This session focused on what the Large-Scale Pilots have already demonstrated in one of the most critical use cases: payments.

Below are the key insights Esther highlighted:

Tuesday, 02. December 2025

Indicio

Verifiable Credentials strengthen AML compliance, KYC for digital assets and finance

As use of digital assets accelerates, the U.S. Treasury is seeking better identity solutions to close the compliance gaps created by paper-era verification. Here’s a summary of Indicio’s response to its RFI on detecting illicit activity in digital assets.

By Helen Garneau

With rapid expansion in the digital asset market, driven by  institutional adoption, regulatory clarity, advancements in decentralized finance (DeFi), and other related factors, one massive obstacle to the digitalization of finance remains: legacy KYC processes from a paper document-based world. 

KYC is the anchor keeping the possibility of instant cross-border payments and streamlined user experiences stuck in the age of waiting, oftentimes for days. But the result isn't just inefficiency, cost, and missed opportunities; it's also increased risk in the age of AI, where documents and biometrics can be easily faked.

Indeed, the mix of digital finance and analogue KYC and AML is highly combustible, leading the U.S. Treasury to issue  a Request for Information on practical ways financial institutions can spot illicit activity in this new environment.

Indicio’s submission argues that digitalized finance needs digitalized identity and the simplest and most secure way of doing this is through decentralized identity using verifiable credentials. We explain how verifiable digital identity, rooted in authenticated documents, biometrics, and cryptographic proofs, removes the security vulnerabilities inherent to analogue KYC and delivers the assurance needed by regulators, banks and financial institutions. 

Verifiable Credentials limit the amount of sensitive data institutions need to hold, improve auditability, and allow identity to be instantly confirmed at the moment it is needed. They also align with FATF, NIST, and ICAO standards, which gives Treasury a clear starting point for future guidance.

Many banks and financial institutions still run KYC through document scans and photo uploads, which keeps the process rooted in a paper-based model. These steps are slow, repetitive, and frustrating for customers; they also create an error-prone system that exposes institutions to deepfakes, synthetic identity fraud, and account takeover attempts — all while adding friction for users and increased risk for all parties involved.

Verifiable Credentials offer a cleaner, simpler, seamless path. A credential can be created from authenticated documents, tied to a user’s biometrics, and held in a digital wallet that they biometrically control. The data in the credential is digitally signed in a way that is resistant to AI, can only be shared with the user’s explicit consent (simplifying GDPR), while being instantly verifiable anywhere, and at any time.

It's also possible for users to share data selectively, meaning that a user can share just the specific data relevant to a transaction, simplifying data privacy compliance. Users stay in control of their information and institutions work with cryptographic proofs instead of raw data. This makes onboarding easier while improving the quality of identity assurance.
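As a concrete illustration of selective disclosure, here is a minimal sketch of the salted-hash pattern used by formats such as SD-JWT: the issuer signs only hashes of claims, the holder reveals just the claims a transaction needs, and the verifier checks those disclosures against the signed hashes. It omits the actual signing and encoding, so treat it as a sketch of the idea rather than a compliant implementation.

```python
# Salted-hash selective disclosure in miniature (the pattern behind SD-JWT).
# Signing of the hash list and the real wire encoding are omitted.
import hashlib
import json
import secrets


def issue(claims: dict):
    """Return (hashes that go into the signed credential, holder's disclosures)."""
    disclosures = {name: (secrets.token_hex(16), value) for name, value in claims.items()}
    hashes = sorted(
        hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()
        for name, (salt, value) in disclosures.items()
    )
    return hashes, disclosures


def verify_disclosure(signed_hashes, salt, name, value) -> bool:
    digest = hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()
    return digest in signed_hashes


# The holder shares only what the transaction needs, e.g. country of residence:
signed_hashes, disclosures = issue({"name": "A. Customer", "country": "US", "dob": "1990-01-01"})
salt, value = disclosures["country"]
print(verify_disclosure(signed_hashes, salt, "country", value))  # True
```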

Verifiable Credentials for KYC leverage the exact same decentralized identity technology used to create “government-grade” digital identity for international travel and border crossing, following global specifications published by the International Civil Aviation Organization (ICAO DTC). This makes Verifiable Credentials with authenticated biometrics the strongest possible digital identity assurance available — and one that is being used for country-scale deployments.

The biggest challenge we’re helping the Treasury tackle is regulatory clarity. Many rules still assume institutions need to store full identity documents, and there is uncertainty about whether selective disclosure or cryptographic proofs meet exam expectations across different jurisdictions.

In our recommendations, we encouraged the Treasury to define how Verifiable Credentials satisfy AML and sanctions rules, ask for clear technical standards to be set that follow NIST and FATF guidance, request support for supervised pilots, and to provide consistent examiner playbooks. Steps like these would give institutions the confidence to adopt verifiable digital identity at scale.

Indicio’s overall message is simple: portable, verifiable digital identity gives financial institutions a fast, safe and privacy-preserving way to understand who they are dealing with while reducing fraud, improving security, and simplifying compliance. 

With clear direction from regulators, this approach can give the digital asset ecosystem the digital KYC it needs to move at digital speed.

Want to learn more? Speak with one of our experts about Verifiable Credentials and KYC or read Indicio's full response.



The post Verifiable Credentials strengthen AML compliance, KYC for digital assets and finance appeared first on Indicio.


TÜRKKEP A.Ş.

Salesforce / Inspark & TÜRKKEP e-Invoice integration completed!

TÜRKKEP and INSPARK, the authorised Turkish reseller of Salesforce, the CRM solution used widely around the world, have completed their e-Invoice (e-Fatura) integration. Thanks to this integration, Salesforce users issue their e-Invoices directly through TÜRKKEP's services, so the process from sale to invoice becomes a single flow.

Dark Matter Labs

From a Question to a Living Library: Reflections on the Many-to-Many Launch


Welcome back to our ongoing reflections on the Many-to-Many project. In our previous posts, we’ve shared the journey of prototyping on ourselves, navigating complexity, and turning years of learning into a tangible system. Since our last update, we’ve crossed a major threshold: Version 1 of the entire Many-to-Many System website and its ecosystem of tools is now live in the world.

This has been a nine-month journey of turning abstract ideas into a living system. Now that it’s out there, we wanted to pause and reflect on the final push. In this post, we — Gurden, Michelle, and Arianna — share our key learnings on the process of getting to a full launch.

Many-to-Many website available at manytomany.systems

Michelle: So, it’s time for our final blog of this phase. We’ve launched the first full version of the Many-to-Many System website, with all its documents, tools, examples, and case studies. What did we learn? What was most important in getting us over the finish line?

Gurden: I’ll start with the usability testing sessions. In my experience, testing often comes last, like a quick check just before release. But I loved that we gave it real importance in our schedule, giving us time to actually act on the feedback. If you don’t do that, the insights just get pushed to a hypothetical “version 2.0.” We were brave enough to show something incomplete, which came with a few cringe-worthy moments seeing small issues with the website live, but it was so valuable.

We could see what was working and what wasn't. We heard people say, "The 'Navigate Challenges' page is so important, why isn't it live yet?" or, "The 'Who is it for?' section is at the bottom — I might miss it!" Just as important as what they said was observing how they used it. We asked people to share their screens and give first impressions, which puts them on the spot, but it gave us the honest validation and critique we needed for the final push.

Arianna: For me, one of the most important parts of this phase was the structure around our work. We kept two sets of tasks: our internal ones, which helped us manage priorities throughout the build, and the separate tasks created during the usability sessions. Keeping them apart gave us a clearer view of what we had planned and what people genuinely needed.

After testing, we sorted the feedback into three groups: essential fixes, ideas for a later version, and topics that required more reflection before deciding. This helped us stay grounded. Instead of reacting instantly, we moved carefully through what could realistically be done before release.

Throughout the journey, we created well over nine prototypes, one after the other. Each explored a different way of entering or navigating the system, similar to what we described in the first blog. They showed the range of possibilities before committing to a final shape. The task board supported this by keeping everything efficient. It gave us a shared sense of priorities, so when we reached the final stretch we already knew which changes mattered and which ones could wait.

Some bits of our Notion tasks, feedback system with tags, tables, sections and priorities

Gurden: That’s a great point, and it connects to our scope management. In the tech and design field, it’s very easy to get distracted by a shiny new idea, like building a complex interactive tool before the foundation is solid. We were good at saying no, not just for the sake of it, but because we knew we had to validate the core of version 1 first. We stayed true to our scope and our release date, which is much easier said than done.

Michelle: I don’t have much to add to that, you’ve both said it perfectly. We had to work hard to balance between what was possible, and there was a lot that was possible, and what was ultimately desirable for potential users and for us. We were able to make really practical decisions about what to say yes or no to. I just kept saying, “first we need version 1 in the world.” We already have ideas for version 2, but we were disciplined enough to not start building it.

Gurden: Can you talk a bit about the ‘Learning from the Field’ page? It wasn’t in our original plan, and to be honest, I was a bit skeptical when you first suggested it. But now that it’s live, I’ve seen how many people go there first. A lawyer friend of mine went straight to the case studies to understand the project, and then worked backwards from there. It gives the whole system legitimacy.

Michelle: That’s such an interesting example and I remember your skepticism! At that time, we had three significant changes on the table, and we debated all of them. We knocked out two but agreed to get that one in. It shows a lot about our team dynamic, we could all use our experience to put our honest opinions on the table and end up with a combination that worked for the website, but also for us as a team. We didn’t put ourselves under unnecessary pressure by continually expanding the scope, which gave us more time to refine the parts we did include.

Gurden: The whole launch ended up being surprisingly smooth, without the usual last-minute panic. I think a big part of that backend success was the Content Management System (CMS). A huge shoutout to you, Michelle, and to Annette for getting in there and working so easily with it. It freed me up from managing content and all the dynamic changes.

Arianna: I also want to acknowledge the clarity of the CMS. Its structure followed the same logic we had been using all along, so translating decisions from Notion into the CMS felt natural. The tagging system made relations visible without creating complexity. This is what will allow the system to grow beyond us.

Gurden: I have to give credit back to both of you for that, because the CMS structure is just a copy of the Field Guide’s structure. The layers, the challenges, the tools — it’s all a reflection of that logic. It’s a testament to the fact that the Many-to-Many System is working on the inside, too.

Preview of our CMS, showing interconnections and data entry UI

Michelle: We did have that intuition after writing the Field Guide, didn’t we, Arianna? We knew we needed a database. If we didn’t do that, the manual lift would have killed the website and meant it would have been out of date five minutes after it went live. And huge kudos to Annette, who just got in there and smashed through building the content.

Gurden: So, final question for you both. How does it feel now that it’s launched? For context, we’ve had over 2,400 unique visitors, 5,400 page views and 200+ detailed tool views within the first three weeks of being live. The main launch post had 50 reposts, which to me is a huge indicator that people find it valuable enough to share on their own. It’s safe to say it’s been a good response. How are you both sitting with it?

Arianna: Now that the website is live, it feels like the beginning of a new phase. I am curious to see how people move through it, where they start, which tools they stay with, and which paths they create that we did not imagine.

Thinking back to all the prototypes we developed (we talked about this in blog 1), I can see how many directions this work could have taken. The current version is only one of them. Now the interesting part is observing how others use it and letting that shape what comes next.

An overview of nine draft options for the Many-to-Many website. They are only a small selection from the many prototypes we created across different phases. Some were more developed, others stayed at the UX-draft level. Together they show the range of narrative and structural possibilities we tested, discussed, tried, and eventually set aside before designing the final version.

Michelle: How do I feel? The whole process has felt like wrestling a monster underwater — trying to build something when you have no idea what its final form will be. So, just trusting the process, especially in the early days with you, Arianna, turning concepts into visuals and information layers, was a huge leap of faith. Seeing it come to life with you, Gurden, outside of Figma sketches was the next step. I feel quite humbled by what we’ve been able to share.

For me, there was the “serious business” of being committed enough to create it, despite thinking many times that we should just quit. Now that it’s in the world, it feels like “playful business.” I don’t mind if people break it or reimagine it. It would be fascinating if they said, “This is good, but I would add X, Y, and Z.” I’m excited to see what people do with it, even if that’s throwing it in the bin — that’s also interesting data!

This launch marks the end of a significant chapter, but as Arianna said, it’s also a new beginning. Our goal has always been to learn out loud, and now, with the system living in the world, a new phase of listening and learning begins. Thank you for following along with us.

You can explore the full system at manytomany.systems or join our Sharing Session tomorrow at 12.30pm CET.

And a big thanks, as always, to the other members of our team — especially Annette — who are key stewards of this work.

From a Question to a Living Library: Reflections on the Many-to-Many Launch was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


liminal (was OWI)

Deepfakes and Synthetic Identity in Payments Demo Day

The post Deepfakes and Synthetic Identity in Payments Demo Day appeared first on Liminal.co.

Spherical Cow Consulting

Robots, Humans, and the Edges of the Open Web

This episode explores what the “open web” truly means amid shifting standards, AI automation, and evolving economic pressures. Drawing on discussions from IETF 124 and W3C TPAC, it highlights how browser architects, policy experts, and researchers are reexamining long-held assumptions about access, interoperability, and the role of automated agents. Learn why openness isn’t a binary state but a

“At the IETF 124 meeting in Montréal, I enjoyed quality time in a very small, very crowded side room filled with an unusually diverse mix of people: browser architects, policy specialists, working group chairs, privacy researchers, content creators, and assorted observers who simply care about the future of the web.”

The session, titled Preserving the Open Web, was convened by Mark Nottingham and David Schinazi—people you should follow if you want to understand how technical and policy communities make sense of the Internet’s future.

A week later, at the W3C TPAC meetings in Kobe, Japan, I ended up in an almost identical conversation, hosted again by Mark and David, fortunately in a somewhat larger room. That discussion brought in new faces, new community norms, and a different governance culture, but in both discussions, we almost immediately fell to the question:

What exactly are we trying to preserve when we talk about “the open web”?

For that matter, what is the open web? The phrase appears everywhere—policy documents, standards charters, conference talks, hallway discussions—yet when communities sit down to define it, they get stuck. In Montréal and Kobe, the lack of a shared definition proved to be a practical obstacle. New specifications are being written, new automation patterns are emerging, new economic pressures are forming, and without clarity about what “open” means, even identifying the problem becomes difficult.

This confusion isn’t new. The web has been wrestling with the meaning of “open” for decades.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

How the web began, and why openness mattered

The earliest version of the web was profoundly human in scale and intent. Individuals wrote HTML pages, uploaded them to servers, and connected them with links. Publishing was permissionless. Reading was unrestricted. No identity was required. No subscription was expected. You didn’t need anyone’s approval to build a new tool, host a new site, or modify your browser.

The Web Foundation’s history of the web describes this period as a deliberate act of democratization. Tim Berners-Lee’s original design was intentionally simple, intentionally interoperable, and intentionally open-ended. Anyone should be able to create. Anyone should be able to link. Anyone should be able to access information using tools of their own choosing.

That was the first meaning of “open web”: a world where humans could publish, and humans could read, without needing to ask permission.

Then the robots arrived.

Robots.txt and the first negotiation with machines

In 1994, Martijn Koster created a lightweight mechanism for telling automated crawlers where they should and shouldn’t go: robots.txt. It was never a law; it was a social protocol. A well-behaved robot would read the file and obey it. A rogue one would ignore it and reveal itself by doing so.

That tiny file represented the web’s first attempt to articulate boundaries to non-human agents. It formalized a basic idea: openness for humans does not automatically imply openness for robots.
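For anyone who has never looked inside one, a robots.txt file is just a short list of per-crawler rules, and checking it is nearly a one-liner with Python's standard library. The domain and bot names below are placeholders.

```python
# A site's robots.txt might read:
#
#   User-agent: *
#   Disallow: /private/
#
#   User-agent: ExampleBot
#   Disallow: /
#
# A well-behaved crawler consults those rules before fetching anything.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example.com/robots.txt")
parser.read()  # fetch and parse the live file

print(parser.can_fetch("ExampleBot", "https://example.com/private/page.html"))
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/"))
```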

Even back then, openness carried nuance, and it was definitely not the last time the web community tried to define it.

The question keeps returning

One of the earliest modern attempts that I found to define the open web came in 2010, when Tantek Çelik wrote a piece simply titled “What is the Open Web?”. His framing emphasized characteristics rather than purity tests: anyone can publish; anyone can read; anyone can build tools; anyone can link; and interoperability creates more value than enclosure. It is striking how relevant those ideas remain, fifteen years later. These debates aren’t symptoms of crisis; they’re part of the web’s DNA.

The web has always needed periodic recalibration. It has always relied on communities negotiating what matters as technology, economics, and usage patterns change around them. As innovation happens, needs, wants, and desires for the web change.

And now, automation has forced us into another round of recalibration.

Automation came faster than consensus

The modern successors to robots.txt are now emerging in the IETF. One current effort, the AI Preferences Working Group (aipref), aims to provide a structured way for websites to express their preferences around AI training and automated data reuse. It's an updated version of the old robots.txt promise: here is what we intend; please respect it.

But the scale is different. Search crawlers indexed pages so humans could find them. AI crawlers ingest pages so models can incorporate them. The stakes—legal, economic, creative, and infrastructural—are much higher.

Another effort, the newly chartered WebBotAuth Working Group (webbotauth), is tackling the question of whether and how bots should authenticate themselves. The first meeting at IETF 124 made clear how tangled this space has become. Participants disagreed on what kinds of bots should be differentiated, what behavior should be encouraged or discouraged, and whether authentication is even a meaningful tool for managing the diversity of actors involved. The conversation grew complex (and heated) enough that the chairs questioned whether the group had been chartered before there was sufficient consensus to proceed.

None of this represents failure. It represents something more fundamental:

We do not share a common mental model of what “open” should mean in a web increasingly mediated by automated agents.

And this lack of clarity surfaced again—vividly—one week later in Kobe.

What the TPAC meeting added to the picture

The TPAC session began with a sentiment familiar to anyone who has been online for a while: one of the great gifts of the web is that it democratized information. Anyone could learn. Anyone could publish. Anyone could discover.

But then came the question that cut to the heart of the matter: Are we still living that reality today?

Participants pointed to shifts that complicate the old assumptions—paywalls, subscription bundles, identity gates, regional restrictions, content mediation, and, increasingly, AI agents that read but do not credit or compensate. Some sites once built for humans now pivot toward serving data brokers and automated extractors as their primary “audience.” Others, in response, block AI crawlers entirely. New economic pressures lead to new incentives, sometimes at odds with the early vision of openness.

From that starting point, several deeper themes emerged.

Openness is not, and has never been, binary

One of the most constructive insights from the TPAC discussion was the idea that “open web” should not be treated as a binary distinction. It’s a spectrum with many dimensions: price, friction, format, identity requirements, device accessibility, geographic availability, and more. Moving an article behind a paywall reduces openness in one dimension but doesn’t necessarily negate it in others. Requiring an email address adds friction but might preserve other characteristics of openness.

Trying to force the entire concept into a single yes/no definition obscures more than it reveals. It also leads to unproductive arguments, as different communities emphasize different attributes.

Recognizing openness as a spectrum helps explain why reaching consensus is so hard and why it may be unnecessary.

Motivations for publishing matter more than we think

Another thread that ran through the TPAC meeting was the simple observation that people publish content for very different reasons. Some publish for reputation, some to support a community, some for revenue, and some because knowledge-sharing feels inherently worthwhile. Those motivations shape how creators think about openness.

AI complicates this because it changes the relationship between creator intention and audience experience. If the audience receives information exclusively through AI summaries, the creator’s intended context or narrative can vanish. An article written to persuade, illuminate, or provoke thought may be reduced to a neutral paragraph. A tutorial crafted to help a community may be absorbed into a model with no attribution or path back to the original.

This isn’t just a business problem. It’s a meaning problem. And it affects how people think about openness.

The web is a commons, and commons require boundaries

At TPAC, someone invoked Elinor Ostrom's research on commons governance (here's a video if you're not familiar with that work): a healthy commons always has boundaries. Not barriers that prevent participation, but boundaries that help define acceptable use and promote sustainability.

That framing resonated well with many in the room. It helped reconcile something that often feels contradictory: promoting openness while still respecting limits. The original web was open because it was simple, not because it was boundary-free. Sharing norms emerged, often informally, that enabled sustainable growth.

AI-Pref and WebBotAuth are modern attempts to articulate boundaries appropriate for an era of large-scale automation. They are not restrictions on openness; they are acknowledgments that openness without norms is not sustainable. Now we just need to figure out what the new norms are in this brave new world.

We’re debating in the absence of shared data

Despite strong opinions across both meetings, participants kept returning to a sobering point: we don’t actually know how open the web is today. We lack consistent, shared metrics. We cannot easily measure the reach of automated agents, the compliance rates for directives, or the accessibility of content across regions and devices.

Chrome’s CRUX dataset, Cloudflare Radar, Common Crawl, and other sources offer partial insights, but there is no coherent measurement framework. This makes it difficult to evaluate whether openness is expanding, contracting, or simply changing form.

Without data, standards communities are arguing from instinct. And instinct is not enough for the scale of decisions now at stake.

Tradeoffs shape the web’s future

Another candid recognition from TPAC was that the web’s standards bodies cannot mandate behavior. They cannot force AI crawlers to comply. They cannot dictate which business models will succeed. They cannot enforce universal client behavior or constrain every browser’s design.

In other words: governance of the open web has always been voluntary, distributed, and rooted in negotiation.

The most meaningful contribution these communities can make is not to define one perfect answer, but to design spaces where tradeoffs are legible and navigable: spaces where different actors—creators, users, agents, platforms, governments—can negotiate outcomes without collapsing the web’s interoperability.

Toward a set of OpenWeb Values

Given the diversity of use cases, business models, motivations, and technical architectures involved, the chances of arriving at a single definition of “open web” are slim. But what Montréal and Kobe made clear is that communities might agree on values, even when they cannot agree on definitions.


The values that surfaced repeatedly included:

Access, understood as meaningful availability rather than unrestricted availability.
Attribution, not only as a legal requirement but as a way of preserving the creator–audience relationship.
Consent, recognizing that creators need ways to express boundaries in an ecosystem increasingly mediated by automation.
Continuity, ensuring that the web remains socially, economically, and technically sustainable for both creators and readers.

These values echo what Tantek articulated in 2010 and what the Web Foundation cites in its historic framing of the web. They are principles rather than prescriptions, and they reflect the idea that openness is something we cultivate, not something we merely declare.

And in parallel, these values mirror the OpenStand Principles, which articulated how open standards themselves should be developed. I wrote about this a few months ago in "Rethinking Digital Identity: What ARE Open Standards?" The fact that "open standard" means different things to different standards communities underscores that multiplicity of definitions does not invalidate a concept; it simply reflects the complexity of global ecosystems.

The same can be true for the open web. It doesn't need a singular definition, but maybe it does need a set of clear principles so we know what we are trying to protect.

Stewardship rather than preservation

This is why the phrase preserving the open web makes me a little uncomfortable. Preservation implies keeping something unchanged. But the web has never been static. It has always evolved through tension: between innovation and stability, between access and control, between human users and automated agents, between altruistic publication and economic incentive.

The Web Foundation’s history makes this clear, as does my own experience over the last thirty years (good gravy, has it really been that long?) The web survived because communities continued to adapt it. It grew because people kept showing up to argue, refine, and redesign. The conversations in Montréal and Kobe sit squarely in that tradition.

So perhaps the goal isn’t preservation at all. Perhaps it’s stewardship.

Stewardship acknowledges that the web has many purposes. It acknowledges that no single actor can dictate its direction. It acknowledges that openness requires boundaries, and boundaries require negotiation. And it acknowledges that tradeoffs are inevitable—but that shared values can guide how we navigate them.

Mark and David’s side meetings exist because a community still cares enough to have these conversations. The contentious first meeting of WebBotAuth was not a setback; it was a reminder of how difficult and necessary this work is. The TPAC discussions reinforced that, even in moments of disagreement, people are committed to understanding what should matter next.

If that isn’t the definition of an open web, it may be the best evidence we have that the open web still exists.

To Be Continued

The question “What is the open web?” is older than it appears. It surfaced in 1994 with robots.txt. It resurfaced in 2010 in Tantek’s writing. It has re-emerged now in the era of AI and large-scale automation. And it will likely surface again.

The real work is identifying what we value—access, attribution, consent, continuity—and ensuring that the next generation of tools, standards, and norms keeps those values alive.

If the conversations in Montréal and Kobe are any indication, people still care enough to argue, refine, and rebuild. And perhaps that, more than anything, is what will keep the web open.

If you’d like to read the notes that Mark and I took during the events, they are available here.

If you'd rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

00:00:30
Welcome back, everybody. This story begins in a very small, very crowded side room at IETF Meeting 124 in Montreal. It happened fairly recently, and it set the tone for a surprisingly deep conversation.

00:00:43
Picture this: browser architects, policy specialists, working group chairs, privacy researchers, content creators, and a handful of curious observers — all packed together, all invested in understanding where the web is headed.

00:00:57
The session was titled Preserving the Open Web. It was convened by Mark Nottingham and David Schinazi — two people worth following if you want to understand how technical and policy perspectives meet to shape the future of the Internet.

00:01:10
A week later, at the W3C TPAC meeting in Kobe, Japan, I found myself in almost exactly the same conversation.

00:01:22
Once again, Mark and David convened the group to compare perspectives across different standards organizations.

00:01:28
They asked the same questions. The only real difference was the slightly larger room — and with it, new faces, new cultural norms, and a different governing style for the standards bodies.

00:01:41
But in both meetings, we landed almost immediately on the same question:

00:01:43
What exactly are we trying to preserve when we talk about the “open web”?

00:01:53
The phrase is everywhere. It appears in policy documents, standards charters, keynotes, and hallway conversations. Yet when you ask a room to define it, things get fuzzy very quickly. And that fuzziness isn’t academic — it matters.

00:02:36
Without clarity about what “open” means, identifying the actual problem becomes far more difficult as automation patterns shift and economic pressures evolve.

A Look Back

00:02:46
This isn’t a new dilemma. The web has been wrestling with the meaning of “open” for decades.

00:03:09
In the earliest days, everything about the web was profoundly human-scaled. People wrote HTML by hand. They published content to servers they controlled. They linked to one another freely. Publishing required no permission.

00:03:18
If you had an idea, a keyboard, a computer, and an Internet connection, you could build something.

00:03:26
The Web Foundation describes these early design choices as a deliberate act of democratization.

00:03:26–00:03:40
Tim Berners-Lee wanted anyone to create, anyone to link, and anyone to read — all using tools of their choosing. This spirit defined the earliest sense of an “open web”:

Humans publish
Humans read
Everything else stays out of the way

00:03:40
Dun dun dun.

00:03:42
Then the robots arrived.

Enter Automation

00:03:44
In 1994, Martijn Koster proposed robots.txt, a simple file that told automated crawlers where they were and were not welcome.

00:04:14
It wasn’t a law. It was a social protocol. Well-behaved crawlers respected it. Bad actors ignored it and revealed themselves by doing so.

00:04:25
That tiny file introduced a big shift: openness for humans didn’t automatically mean openness for machines.

00:04:30
Even then, openness carried nuance.

Returning to Old Questions

00:04:35
When researching this post, one of the earliest attempts to define the open web I found was from Tantek Çelik in 2010.

00:05:07
His framing focused on characteristics — not purity tests:

Anyone can publish
Anyone can read
Anyone can build tools
Anyone can link
Interoperability creates more value than enclosure

00:05:16
Fifteen years later, it’s still uncannily relevant. And, amusingly, Tantek was in the room again at TPAC while we revisited the same conversation.

00:05:21
I can only imagine the déjà vu.

The Web Evolves

00:05:21–00:05:44
The web has always needed recalibration. As technology evolves, expectations shift — and so does the need to renegotiate what “open” should mean.

00:05:44
Automation and AI have pushed us into the next round of that negotiation.

00:05:51
Today’s successors to robots.txt are emerging in the IETF.

New Efforts in Standards Bodies

00:05:57
One is the AI Preferences Working Group, commonly known as AI-Pref.

00:06:03
They’re trying to create a structured way for websites to express preferences about AI training and automated data reuse.

00:06:11
Think of it as trying to define the language that might appear in a future robots-style file — same spirit, far higher stakes.

00:06:23
Why does this matter?

00:06:23–00:06:49
Traditional search crawlers index the web to help humans find things.
AI crawlers ingest the web so their models can absorb things.

This changes the stakes dramatically — legally, economically, creatively, and infrastructurally.

00:06:49
Another initiative in the IETF is the WebBotAuth Working Group, which explores whether bots should authenticate themselves — and how.

00:06:59
Their early meetings focused on defining the scope of the problem. IETF124 was their first major gathering, and it highlighted how tangled this space really is.

00:07:29
With several hundred people in the room, discussions grew heated enough that the chairs questioned whether the group had been chartered prematurely.

00:07:42
Is that a failure? Maybe. Or maybe it reflects something deeper: we don’t share a mental model for what “open” should mean in a web mediated by automated agents.

Persistent Tensions

00:07:57
That same lack of clarity surfaced again at TPAC in Kobe.

00:08:08
The discussion began with a familiar sentiment: the web democratized information. It gave anyone with a computer and Internet access the ability to learn, publish, and discover.

00:08:35
But is that still true today?

Modern web realities include:

Paywalls
Subscription bundles
Identity gates
Regional restrictions
Content mediation
AI agents that read without attribution or compensation

00:08:35–00:09:07
Some sites now serve data brokers. Others try to block AI crawlers entirely. Economic pressures and incentives have shifted — not always in ways aligned with early ideals of openness.

Openness as a Spectrum

00:09:07
At TPAC, one of the most useful insights was this: the open web isn’t a switch. It’s not a binary. It’s a spectrum.

00:09:33
Different dimensions define openness:

Price
Friction
Format
Accessibility
Identity requirements
Device compatibility
Geography

00:09:39
A paywall changes openness on one axis but not all. An email gate adds friction but doesn’t automatically “close” content.

00:09:52
The binary mindset has never reflected reality — which explains why consensus is so elusive.

00:10:05
Seeing openness as a spectrum creates room for nuance and suggests that agreement may not even be necessary.

Motivations for Publishing

00:10:17
Another important thread: people publish for many reasons.

00:10:29–00:10:43
Some publish for reputation.
Some publish for community.
Some for income.
Some for the joy of sharing knowledge.

00:10:47
These motivations shape how people feel about openness.

00:10:58
AI complicates things. When readers experience your work only through an AI-generated summary, the context and tone you cared about may be lost.

00:10:58–00:11:20
A persuasive piece may be flattened into neutrality.
A community-oriented tutorial may be absorbed into a model with no link back to you.

This isn’t only an economic problem — it’s also a meaning problem.

Boundaries and the Commons

00:11:20
Elinor Ostrom’s work on governing shared resources came up. One of her core principles: shared resources need boundaries.

00:11:51
Not restrictive walls — but clear expectations for use. This framing resonated deeply and helped reconcile the tension between openness and limits.

00:12:01
The web has never been boundary-less. It worked because shared norms — formal and informal — made sustainable use possible.

00:12:06
AI-Pref and Web Bot Auth aren’t restrictions on openness. They’re attempts to articulate healthy boundaries for a new era.

00:12:14
But agreeing on those boundaries is the hard part.

The Measurement Gap

00:12:18
To understand the scale of the problem, we need measurement. But we can’t measure what we haven’t defined.

00:12:32
We lack shared metrics for openness. We don’t know:

How open the web currently is
How often automated agents obey directives
How frequently data is reused against publishers’ intentions
How accessibility shifts across devices and regions

00:12:47
Datasets like Chrome UX Report, Cloudflare Radar, and Common Crawl offer fragments — but no coherent measurement framework.

00:13:06
Without data, we argue from instinct rather than insight.

What Standards Bodies Can — and Cannot — Do

00:13:17
Another reality: standards bodies cannot mandate behavior.

00:13:34
They can’t force AI crawlers to respect preferences, dictate business models, control browser design, or enforce predictable client behavior.

00:13:45
Robots.txt has always been voluntary. It’s always been negotiation-based, not coercion-based.

00:13:56
The best contribution standards bodies can make is designing systems where trade-offs are visible and actors can negotiate without breaking interoperability.

00:14:02
It’s not glamorous work — but it’s necessary.

Shared Values

00:14:10
A single definition of the open web is unlikely.

00:14:17
But both Montreal and Kobe revealed alignment around a few core values:

00:14:21–00:14:46

Access — not unlimited, but meaningful
Attribution — to preserve the creator-audience relationship
Consent — to express boundaries in an automated ecosystem
Continuity — to ensure sustainability socially, economically, and technically

00:14:46–00:14:58
These values echo Tantek’s 2010 framing, the Web Foundation’s historical narrative, and the OpenStand principles.

00:14:58
They’re not definitions — they’re guideposts.

Stewardship, Not Preservation

00:15:09
This is why preserving the open web may not be the right phrasing. Preservation implies stasis. But the web has never been static.

00:15:37
It has always evolved through tension — innovation vs. stability, openness vs. control, humans vs. automation, altruism vs. economic pressure.

00:15:40
Thirty years. Good gravy.

00:15:40–00:16:10
The web endured because communities adapted, argued, refined, and rebuilt.
The Montreal and Kobe conversations fit squarely into that tradition.
So perhaps the goal isn’t preservation — it’s stewardship.

Stewardship acknowledges:

The web serves many purposes
No single actor controls it
Openness requires boundaries
Boundaries require negotiation

00:16:10
Trade-offs aren’t failures. They’re part of the ecosystem.

Looking Ahead

00:16:30
Mark and David’s side meetings exist because people care enough to do this work.

00:16:30–00:16:41
The contentious Web Bot Auth meeting wasn’t a setback — it was a reminder that the toughest conversations are the ones that matter.

00:16:41
TPAC showed that even without agreement, people are still trying to understand what comes next.

00:16:35–00:16:41
If that isn’t evidence that the open web still exists, I’m not sure what is.

00:16:41–00:17:18
This conversation is far from over.
It began with robots.txt in 1994.
It showed up in Tantek’s writing in 2010.
It’s appearing today in heated debates at standards meetings.
And it will almost certainly surface again — because the real work isn’t in defining “open.”

It’s in articulating what we value:

Access
Attribution
Consent
Continuity

00:17:18
And ensuring our tools and standards reflect those values.

Final Thoughts

00:17:18–00:17:26
If the discussions in Montreal and Kobe are any indication, people still care. They still show up. They still argue, revise, and rebuild.

00:17:26
And maybe that, more than anything else, is what will keep the web open.

00:17:30
Thanks for your time. Drop into the written post if you’re looking for the links mentioned today, and see you next week.

00:17:44
And that’s it for this week’s episode of The Digital Identity Digest. If this helped clarify things — or at least made them more interesting — share it with a friend or colleague and connect with me on LinkedIn @hlflanagan.

If you enjoyed the show, please subscribe and leave a rating or review on Apple Podcasts or wherever you listen. You can also find the written post at sphericalcowconsulting.com.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Robots, Humans, and the Edges of the Open Web appeared first on Spherical Cow Consulting.

Monday, 01. December 2025

Ocean Protocol

Annotators Hub: CivicLens — Turning Speeches into Signals

Introduction

The second annotation challenge, Civic Lens: Turning Speeches into Signals, which we organized in collaboration with Lunor AI, has officially concluded, and the results offer valuable insights into political discourse analysis and high-quality data curation.

In this challenge, contributors analyzed excerpts from European Parliament speeches and answered a detailed set of questions about content, tone, and political positioning. Annotators assessed voting intentions, cooperation versus conflict, EU vs. national framing, verifiable claims, argument style, and ideological leaning.

The dataset produced from this challenge is bound to unlock broad applications, including building AI models that detect political stance and tone, analyzing trends in legislative debate, and supporting tools that promote transparency in European policymaking.

What have we achieved?

The challenge saw strong participation and high-volume annotation activity:

Total number of annotations: 216,260
Total number of unique annotations: 10,726
Total number of annotations by two annotators: 10,726
Total number of annotators: 206

Leaderboard Scoring & Recommendations

Ensuring leaderboard fairness required a thorough analysis of annotation behavior. Each contributor was evaluated through our system across multiple dimensions to identify possible low-quality patterns, such as automated responses, random choices, or overly repetitive answer patterns. Let’s go through each of them:

1. Focus on Meaningful Contributors

We placed special emphasis on annotators with 80+ annotations, as their contributions significantly shape dataset quality. No one below this threshold had more than five sub-25-second annotations, making 80 a clean boundary.

2. Time Analysis

Several time-based indicators helped identify suspicious behavior:

Time patterns showed two distinct clusters. Very fast annotations (<25s) were treated cautiously. Annotators with >75% of annotations in this fast cluster were flagged. Unlike the previous challenge, accuracy wasn’t available, so this signal was combined with others rather than used alone (Figure 1).

Figure 1. Number of annotations by time spent for completion.
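As a rough illustration of the time-based flag described above (not Ocean's actual pipeline), the check might look like the sketch below. The column names, sample data, and variable names are assumptions; the 25-second and 75% thresholds come from the text.

```python
import pandas as pd

# Hypothetical table: one row per annotation (column names are assumptions).
df = pd.DataFrame({
    "annotator_id": ["a1", "a1", "a1", "a2", "a2", "a2"],
    "seconds_spent": [12.0, 18.5, 21.0, 40.2, 55.0, 19.0],
})

FAST_THRESHOLD_S = 25      # "very fast" annotations, per the write-up
FAST_SHARE_LIMIT = 0.75    # flag annotators with >75% of work in the fast cluster

df["is_fast"] = df["seconds_spent"] < FAST_THRESHOLD_S
fast_share = df.groupby("annotator_id")["is_fast"].mean()

flagged = fast_share[fast_share > FAST_SHARE_LIMIT].index.tolist()
print(fast_share.to_dict())  # e.g. {'a1': 1.0, 'a2': 0.33}
print(flagged)               # ['a1'], used as one signal among several
```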

3. Correlation With Text Length

We checked whether longer texts led to longer annotation times:

Expected: positive correlation
Observed: a wide range, including negative correlations (see Figure 2 below)

Figure 2. Distribution of correlation coefficients between time and text length.

Annotators with a correlation coefficient below 0.05 were flagged as suspicious, again treated as one signal among many.
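A minimal sketch of that correlation check, assuming a simple per-annotation table, is shown below; the data and column names are hypothetical, and only the 0.05 threshold is taken from the write-up.

```python
import pandas as pd

# Hypothetical per-annotation table; column names are assumptions.
df = pd.DataFrame({
    "annotator_id": ["a1"] * 4 + ["a2"] * 4,
    "text_length":  [120, 300, 450, 800, 150, 400, 600, 900],
    "seconds_spent": [30, 45, 60, 95, 50, 48, 47, 46],
})

MIN_CORRELATION = 0.05  # threshold from the write-up; one signal among many

# Pearson correlation between text length and time spent, per annotator.
corr = {
    annotator: group["text_length"].corr(group["seconds_spent"])
    for annotator, group in df.groupby("annotator_id")
}

suspicious = [a for a, c in corr.items() if c < MIN_CORRELATION]
print({a: round(c, 2) for a, c in corr.items()})  # a1 positive, a2 negative
print(suspicious)                                 # ['a2']
```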

4. Similarity & Consistency Analysis

To detect randomness, automation, or overly deterministic behavior, we evaluated each annotator along several criteria.

a) Perplexity Scores

Perplexity measures how varied an annotator’s answers are:

1 = deterministic
Max value = random choice among all options

Example: For a binary question (Yes/No), a perfectly balanced 50/50 distribution yields a perplexity of 2, which could imply either genuine mixed responses or randomness (see Figure 3 below). To distinguish between these cases, we also computed the Kullback–Leibler (KL) divergence for each question, which is explained in the following section.

Figure 3. Distribution of perplexity scores for each question.
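For readers who want to see the arithmetic, here is a small sketch of one common way to compute perplexity from an annotator's answers to a single question; this is illustrative only and not the project's code.

```python
import math
from collections import Counter

def answer_perplexity(answers):
    """Perplexity of one annotator's answers to a single question.
    1.0 means always the same option; k means uniform over k options."""
    counts = Counter(answers)
    total = len(answers)
    probs = [c / total for c in counts.values()]
    entropy_bits = -sum(p * math.log2(p) for p in probs)
    return 2 ** entropy_bits

print(answer_perplexity(["Yes"] * 10))              # 1.0 (deterministic)
print(answer_perplexity(["Yes"] * 5 + ["No"] * 5))  # 2.0 (balanced binary)
```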

b) KL Divergence

KL divergence measures how far an annotator’s answer distribution deviates from the global answer distribution.

0 = identical to the global answer distribution
Higher values = larger deviation

Figure 4 below shows KL scores across annotators.

Figure 4. Distribution of KL scores for each question.

Using these metrics, we flagged annotators as suspect when their answers showed both unusually extreme perplexity values and clear deviations from the overall answer distribution. In practical terms, an annotator was marked as suspicious when their KL divergence exceeded 0.2 and either:

their perplexity was below 1.05 (indicating overly deterministic responses), or
their perplexity was a value plausible under purely random answering, i.e., one that could not be ruled out at the 5% level under a uniform-random choice of answers.

Any annotations that met these combined conditions were treated as strong signals of low-quality or automated behavior. This was applied per question for granular detection.
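A minimal sketch of how the KL score and the combined flag could be computed for one question follows. The 0.2 and 1.05 thresholds come from the text; the sample data and epsilon smoothing are assumptions, and the second branch (perplexity consistent with random answering) is omitted for brevity.

```python
import math
from collections import Counter

def distribution(answers, options, eps=1e-9):
    """Answer distribution over the question's options (eps avoids log(0))."""
    counts = Counter(answers)
    total = len(answers)
    return [max(counts.get(o, 0) / total, eps) for o in options]

def kl_divergence(p, q):
    """KL(p || q): how far an annotator's distribution p deviates from the global q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def perplexity(p):
    """Perplexity of a distribution (the log base cancels out, so natural log is fine)."""
    return math.exp(-sum(pi * math.log(pi) for pi in p))

# Hypothetical data for one Yes/No question.
options   = ["Yes", "No"]
annotator = ["Yes"] * 50                   # answers "Yes" every time
everyone  = ["Yes"] * 60 + ["No"] * 40     # global answer pool

p = distribution(annotator, options)
q = distribution(everyone, options)

kl, ppl = kl_divergence(p, q), perplexity(p)
# Flag rule from the text: KL > 0.2 and perplexity < 1.05 (overly deterministic).
flagged = kl > 0.2 and ppl < 1.05
print(round(kl, 3), round(ppl, 3), flagged)  # roughly 0.511, 1.0, True
```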

5. Consistency & Similarity Checks

Two further checks helped separate legitimate divergent opinions from suspicious behavior:

a) Consistency (agreement with other annotators):

For items annotated by multiple humans, we computed each annotator’s agreement with peers. Annotators whose agreement with others fell below 50% were flagged as suspect. Such deviations may reflect valid alternative perspectives but are suspicious when combined with other evaluation criteria.

b) Similarity (agreement with an AI model):

We compared annotator labels with labels produced by an AI model applied to the same items. Annotators whose similarity-to-consistency ratio exceeded 1.5 were marked. High agreement with the AI accompanied by low agreement with humans can indicate automation or pattern-following derived from model outputs (see Figure 5 below).

Figure 5. Consistency and similarity distribution.
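As a rough, assumed sketch (not the actual implementation), the two agreement signals and the 1.5 ratio rule could be computed like this for a single annotator; the label data here is invented.

```python
def agreement(labels_a, labels_b):
    """Share of items on which two label lists agree."""
    matches = sum(1 for a, b in zip(labels_a, labels_b) if a == b)
    return matches / len(labels_a)

# Hypothetical labels for one annotator on the same ten items.
annotator      = ["A", "B", "A", "A", "C", "B", "A", "C", "B", "A"]
peer_consensus = ["A", "B", "A", "A", "B", "A", "B", "A", "C", "B"]  # other humans
ai_model       = ["A", "B", "A", "A", "C", "B", "A", "C", "B", "A"]  # AI labels

consistency = agreement(annotator, peer_consensus)   # 0.4: agreement with humans
similarity  = agreement(annotator, ai_model)         # 1.0: agreement with the AI

low_consistency = consistency < 0.5                  # below the 50% threshold
ai_pattern      = similarity > 1.5 * consistency     # ratio exceeds 1.5
print(consistency, similarity, low_consistency, ai_pattern)  # 0.4 1.0 True True
```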

Why It Matters

Reliable human-curated data is essential for training AI systems that can correctly interpret political tone, stance, and argumentation. The CivicLens challenge charted a path to a clean, high-quality dataset that can power better classification models, improve analysis of parliamentary debates, and strengthen tools built for transparency and public understanding. High-utility datasets like this directly improve how AI handles real-world governance and policy content.

Annotators Hub: CivicLens — Turning Speeches into Signals was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Annotators Hub: LiteracyForge Challenge Results


The LiteracyForge challenge marked the first-ever task in the Ocean Annotators Hub, a new initiative designed to reward contributors for curating the data that trains AI. Organized in collaboration with Lunor, the challenge asked participants to review short English texts and assess the quality and difficulty of comprehension questions.

The goal: to build a high-quality dataset that helps train models to recognize question complexity and adapt to learner ability. These models could support adaptive tutors like Duolingo, improve literacy tools, and drive more inclusive learning systems.

What were the requirements?

Participants read short passages and:

Answered multiple-choice comprehension questions
Rated how difficult the text and questions were
Evaluated overall quality, from clarity to relevance

What we achieved

In just three weeks:

88 contributors submitted qualifying work and received rewards
147 total contributors signed up
49,832 annotations were submitted
19,973 unique annotations were collected
17,581 entries were reviewed by at least two annotators

The challenge recognized consistent effort across the board. In total, 79 contributors received rewards, ranging from 0.27 to 1405.63 USDC. The leaderboard is available here, with top 10 contributors receiving between 240.21 USDC and 1405.63 USDC.

The resulting dataset offers broad applications, including training AI models to generate comprehension questions, enhancing question-answering systems, and developing tools to assess reading difficulty for educational and accessibility purposes.

How we ensured quality

To make sure rewards reflected real effort and high-quality output, a two-stage filtering process was implemented:

Time-Based Filtering

We analyzed the time spent per annotation. Entries completed in under 25 seconds were statistically less accurate.

As a result:

All annotations under 25s were excluded from the final leaderboard, given the clear drop in answer accuracy observed for annotations under 25 seconds.
Annotators with more than 75% of their submissions under 25s were flagged.

This helped preserve genuine, fast-but-accurate work while filtering out low-effort entries.

Agreement Analysis

Each annotation was evaluated in two ways:

Similarity: how closely it matched labels generated by an AI model
Consistency: how well it agreed with annotations made by other contributors

High similarity with AI alone wasn’t enough. The team specifically looked for instances where annotators aligned with the AI without matching human consensus, as a possible signal of automation.

No such patterns emerged during the LiteracyForge challenge, but the methodology sets a precedent for future quality control.

Why it matters

High-quality human annotation remains the single most effective way to improve AI models, especially in sensitive or complex areas like education, accessibility, or policy. Without precise human input, models can learn the wrong patterns, reproduce bias, or miss the nuance that real-world applications demand.

Ocean and Lunor are building the infrastructure to make this easier, more transparent, and more rewarding for contributors. Stay tuned for the next challenge!

Annotators Hub: LiteracyForge Challenge Results was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ockto

210,000 applications in one week – the story behind the Noodfonds Energie

In episode 15 of the Data Sharing Podcast, host Caressa Kuk talks with Idriss Abdelmoula (Deloitte) and Roel ter Brugge (Ockto) about one of the most urgent and ambitious relief schemes of recent years: the Noodfonds Energie.



PingTalk

Upper-Right in the Magic Quadrant™. Top Scores in the Critical Capabilities.

Ping Identity is recognized as a Leader in Gartner’s 2025 Magic Quadrant™ and tops three use cases in the Critical Capabilities for Access Management.

Identity and access management (IAM) is no longer just a behind-the-scenes function. It’s the foundation of digital trust for modern enterprises. As organizations look to balance stronger security with frictionless experiences, Ping continues to lead the way forward.


We’re proud to share that Ping Identity has been named a Leader in the 2025 Gartner® Magic Quadrant™ for Access Management for the ninth consecutive year, positioned highest in Ability to Execute and furthest in Completeness of Vision (for the second year in a row).  


Ping also scored highest across three use cases—Workforce Access Management, Partner Access Management, and Machine Access Management—in the 2025 Gartner® Critical Capabilities for Access Management.


Sunday, 30. November 2025

Ontology

Community Connect: Web3 Trends Shaping Identity


In this week’s Community Connect Spaces, the discussion focused on one major theme:
the biggest stories in crypto right now all point toward one thing — identity.

From regulation and social media to AI and enterprise, decentralized identity (DID), verifiable credentials, and reputation are quickly moving from “nice to have” to “core infrastructure.” Below is a recap of the key narratives we covered, and how they connect directly to what Ontology has been building for years.

👉 Download ONTO Wallet to create your first ONT ID, manage assets, and start building portable reputation across Web3.

As we head toward Ontology’s upcoming anniversary, this article is part of a wider series that highlights how today’s biggest crypto narratives are converging with the identity and trust vision we have been building for years.

The Global Regulatory Shift Toward Identity

Around the world, regulators are tightening their approach to crypto — but the most interesting trend isn’t enforcement, it’s how they’re thinking about identity.

Recent developments around MiCA implementation in Europe, growing scrutiny of exchanges in Asia, and continued enforcement in the U.S. all share a common theme:
regulators are increasingly talking about reusable, portable, privacy-preserving identity.

Instead of forcing users to complete KYC from scratch on every new platform, the emerging model looks like this:

Verify once with a trusted provider
Receive a credential that proves your status
Reuse that credential across multiple platforms and services

This model:

Reduces friction for users
Lowers compliance overhead for platforms
Creates a safer environment without over-collecting personal data

This is exactly the world Ontology has been designing for.

With ONT ID and the Verifiable Credentials framework, users can:

Prove who they are without repeatedly sharing sensitive documents
Maintain user-owned, privacy-preserving identity
Authenticate across platforms in a compliant way
Meet regulatory requirements without compromising control over their data

Ontology has been advocating for reusable, verifiable identity for years. Now, the regulatory conversation is catching up. As this compliance layer becomes more standardized, ONT ID is positioned to act as a core building block for privacy-first, regulation-ready identity in Web3.

Social Platforms and Wallets Are Turning to DID

Another major narrative this week was the growing adoption of DID in the social and wallet space.

Decentralized social projects like Farcaster and Lens are putting identity at the center of their ecosystems, while larger, more traditional platforms and wallet providers are increasingly exploring stronger identity frameworks in response to:

AI-generated content
Deepfakes
Fake or bot-driven accounts

These dynamics are pushing apps toward identity systems that can:

Verify that a user is a real human
Protect pseudonymity while still proving authenticity
Make reputation portable across apps and communities

Again, this is where Ontology’s DID stack fits naturally.

Using ONT ID and Ontology’s DID infrastructure, social apps and wallets could enable:

Cross-platform authentication using a single identity
Human verification without exposing private data
Portable, DID-based social reputation
Protection against bots, impersonators, and sybil attacks

In a world increasingly flooded with AI-generated profiles and synthetic content, DID is moving from optional add-on to core requirement. Ontology offers a sovereign, decentralized, and portable identity layer that social platforms and wallet providers can integrate to build more trusted, user-centric experiences.

AI + Web3: Building the Trust Layer

One of the most important conversations of the week was the intersection between AI and blockchain.

Recent reports from leading ecosystem players have focused on a key idea:
AI is powerful, but without a trust layer, it becomes risky.

As AI reaches the point where its outputs are almost indistinguishable from human-created content, we face a global trust challenge:

We need cryptographic proof of:

Who created a piece of content
When it was created
Whether it has been altered
Whether we are interacting with a human, an AI agent, or a hybrid

This is where decentralized identity and verifiable credentials become essential.

Ontology’s infrastructure is designed not just for human identities, but also for:

AI agents
Bots and automated systems
Machine-to-machine interactions

In an AI-powered world, Ontology envisions:

Humans verifying that they are interacting with a legitimate AI service
AI agents verifying each other before exchanging data or executing tasks
Content tied cryptographically to its original creator and source
Algorithms and models carrying credentials that prove their integrity and provenance

The narrative is shifting from generic “AI + blockchain hype” to identity-driven trust for AI. Ontology is already building the DID and credential layers that can anchor this new trust fabric.

Reputation in DeFi, GameFi, and Airdrops

Reputation is rapidly becoming one of the most valuable assets in Web3.

This week highlighted a surge of interest in reputation-based systems across:

DeFi protocols, especially lending
GameFi projects, battling bots and unfair play
Airdrops and community rewards, focusing on quality over quantity

The old model of “anyone with a wallet can claim” is fading. Projects increasingly want:

Genuine, long-term users
Reduced sybil activity and bot farming
Reward mechanisms that favor engaged communities rather than opportunists

DeFi is exploring reputation-based credit; GameFi is seeking identity-aware mechanisms to ensure fair participation; and airdrops are increasingly gated by activity, history, and contribution quality.

Ontology’s identity and reputation tools offer exactly what this evolution needs:

Sybil-resistant reward systems
Verified, identity-aware airdrops
Trust-based access tiers and community segments
Loyalty and engagement scoring based on real behavior
Identity-driven community structures and roles

With ONT ID and Ontology’s reputation framework, reputation becomes portable, verifiable, and secure — not trapped inside a single platform. This unlocks a more sustainable and fair approach to incentives across ecosystems.

Enterprise Interest in Decentralized Identity

Beyond crypto-native platforms, enterprises across multiple industries are accelerating their exploration of decentralized identity and verifiable credentials.

We are seeing growing activity around DID in:

Supply chain — product-level identity and provenance
Education — verifiable diplomas, credentials, and skill certificates
Workforce and HR platforms — tamper-proof worker profiles and histories
Healthcare — privacy-preserving patient identity and data access control

Enterprises are looking for ways to:

Reduce fraud
Improve data integrity
Avoid centralized honeypots of sensitive information
Comply with strict data protection regulations

Ontology is well positioned here, with years of experience designing and deploying identity solutions for real-world partners in finance, automotive, and more.

Our DID and credential tools are:

Modular — adaptable to different use cases and architectures
Cross-chain — not locked into a single network
Enterprise-ready — designed to meet real operational and compliance needs

As more industries converge on DID standards, Ontology’s infrastructure can serve as a reliable, interoperable trust layer for real-world data.

Where Ontology Is Focusing Next

In light of these converging trends — regulation, social identity, AI, reputation, and enterprise adoption — Ontology is doubling down on several strategic priorities:

Expanding interoperable DID across multiple blockchains
Building identity support for AI agents and automated systems
Enhancing reputation scoring models for users, entities, and machines
Deepening ecosystem partnerships across DeFi, GameFi, and infrastructure
Strengthening developer tooling around ONT ID and verifiable credentials
Continuing enterprise pilots and collaborations in key industries
Growing community reputation and reward mechanisms powered by DID

These focus areas place Ontology at the center of the emerging trust-layer narrative for both Web3 and AI.

Conclusion: Identity as the Foundation of the Future Internet

The stories shaping crypto and Web3 this week — from regulatory frameworks and social platforms to AI and enterprise systems — all point in the same direction.

Identity is becoming the foundation of the next internet.

Decentralized identity, verifiable credentials, and portable reputation are no longer niche concepts. They are quickly becoming essential components for:

Compliant yet user-centric regulation
Safer and more authentic social platforms
Trustworthy AI interactions
Fair and sustainable DeFi and GameFi ecosystems
Secure, interoperable enterprise data infrastructure

This is the world Ontology has been building toward from the start.

As the demand for a decentralized trust layer grows, ONT ID and Ontology’s broader identity stack are ready to power the next generation of applications — across Web3, AI, and the real-world economy.

Ontology will continue to push forward as the trust layer for Web3, AI, and beyond.

Recommended Reading

8 Years of Trust, Your Ontology Story Begins Here — A look back at Ontology’s journey as a trust-focused Layer 1, highlighting the milestones, partnerships, and identity innovations that shaped its first eight years — and where it’s heading next.
ONT ID: Decentralized Identity and Data — A deep dive into Ontology’s decentralized identity framework, including DID, verifiable credentials, and how ONT ID underpins privacy-preserving identity across multiple ecosystems.
Verifiable Credentials & Trust Mechanism in Ontology — Technical overview of how Ontology issues, manages, and verifies credentials using ONT ID, including credential structure, signatures, and on-chain attestations.
Identity Theft Explained — A clear, practical explainer on how identity theft works today and how decentralized identity, self-sovereign identity, and zero-knowledge proofs can finally flip the script in users’ favor.

Ready to keep exploring Ontology and DID?

👉 Stay connected with Ontology, join our community, and never miss an update:
https://linktr.ee/OntologyNetwork

Community Connect: Web3 Trends Shaping Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Saturday, 29. November 2025

Ontology

Letter from the Founder: Ontology’s MainNet Upgrade


Ontology is celebrating its 8th anniversary and introducing one of its biggest updates yet — the v3.0.0 MainNet upgrade. Li Jun, Ontology’s Founder, shared the full announcement on X.

Key Highlights From Ontology’s v3.0.0 MainNet Upgrade

Strengthened Token Economy and Incentive Model

Ontology’s v3.0.0 upgrade introduces major improvements to Ontology’s dual-token model (ONT and ONG), designed to support long-term sustainability and ecosystem growth.

The total ONG supply has been reduced from 1 billion to 800 million, with 100 million ONG permanently locked. This lowers inflation and strengthens long-term token value. Updated reward distribution now allocates 80% of newly issued ONG to ONT stakers and 20% to liquidity and ecosystem expansion, balancing network security with growth incentives.

These changes align Ontology’s token model with long-term utility and healthier economic design.

Network Upgrades, Identity Integration, and Governance

The v3.0.0 upgrade enhances the core performance, interoperability, and identity tooling of the Ontology Blockchain.

Upcoming support for EIP-7702 will introduce a more flexible account system and stronger compatibility with the Ethereum ecosystem, improving cross-chain liquidity and builder experience.
Core upgrades to consensus, stability, and gas management make the network faster and more reliable.
ONT ID will soon be creatable directly on Ontology EVM, unlocking seamless decentralized identity use cases across DeFi, gaming, and social platforms.
All tokenomics updates were approved through on-chain governance, reflecting a mature and aligned Ontology community.

These improvements position Ontology as a more interoperable, identity-driven, and community-governed Web3 infrastructure layer.

Product Enhancements, Developer Growth, and Real-World Utility

Ontology continues to expand its ecosystem with new tools, user experiences, and privacy-preserving features.

Expanded grants, developer tools, and onboarding resources make it easier to build with ONT, ONG, and ONT ID.
A new encrypted IM solution launching later this year will leverage decentralized identity and zero-knowledge technology to protect user sovereignty and secure communication.
The ONTO Wallet has been upgraded with a refined identity module, better UX, and new payfi functionality developed with partners, improving Web3 payments and digital identity management.
Orange Protocol is advancing its zkTLS framework to turn verified, privacy-preserving reputation signals into real economic utility — strengthening Ontology’s mission to make decentralized trust measurable and portable.

Recommended Reading

ONG Tokenomics Adjustment Proposal Passes Governance vote

The proposal secured over 117 million votes in approval, signaling strong consensus within the network to move forward with the next phase of ONG’s evolution.

Mainnet Upgrade Announcement

Initial update about the upcoming MainNet v3.0.0 upgrade and Consensus Nodes upgrade on December 1, 2025. This release will improve network performance and implement the approved ONG tokenomics update.

8 Years of Trust — Your Story Campaign

The first campaign to kick off Ontology’s 8th anniversary celebrations. It shares updates from the 2025 roadmap along with details on how to win rewards just for sharing your story with Ontology. We want to hear from you!

Your Guide to Joining The Node Campaign

Everything you need to know about how to get involved in Ontology’s node campaign, including key dates and requirements.

Letter from the Founder: Ontology’s MainNet Upgrade was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 28. November 2025

Northern Block

From 0 to 80%: How Bhutan Built a National Digital Identity in Two Years (with Pallavi Sharma)

Bhutan’s digital identity success story: 80% adoption in two years. Pallavi Sharma reveals strategies, use cases, and lessons learned. The post From 0 to 80%: How Bhutan Built a National Digital Identity in Two Years (with Pallavi Sharma) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

🎥 Watch on YouTube 🎥
🎧 Listen On Spotify 🎧
🎧 Listen On Apple Podcasts 🎧

What does it really take for a nation to jump from fragmented, paper-based services to 80% digital identity adoption in under two years?

In this episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Pallavi Sharma, Lead of Marketing & Communications for Bhutan’s National Digital Identity (NDI) program, to unpack one of the world’s most successful national-scale identity rollouts.

This conversation is both inspiring and deeply practical. Pallavi shares how Bhutan transitioned from piloting verifiable credentials in 2023 to achieving widespread adoption across banks, telecom companies, insurance providers, and other sectors. She outlines the political, cultural, and technical conditions that enabled rapid progress, conditions that other countries and digital-identity implementers can learn from, regardless of scale or region.

You’ll hear how Bhutan balanced decentralization principles with real-world user expectations, why their messaging strategy had to shift dramatically, and how features like self-attested photos, digital signatures, and even P2P chat unexpectedly drove massive user growth. Pallavi also outlines their future roadmap, from cross-border interoperability testing with India’s Digi Yatra, to biometrics-backed e-voting, to long-term ambitions for blockchain-based asset tokenization and CBDC integration.

Key Insights

Bhutan achieved 80% national digital identity adoption in two years by integrating public services, banks, telcos, and private providers into a unified ecosystem.
Strong backing from His Majesty the King and regulatory bodies enabled frictionless adoption and minimized political pushback.
Users cared far more about seamless access than decentralization, privacy, or SSI principles, leading to a shift in messaging from “consent” to “convenience.”
Offering remote onboarding, self-attested passport photos, digital signatures, and passwordless login enabled banks to scale eKYC rapidly.
Even small features like P2P chat spiked adoption, showing that familiar, high-value use cases matter more than SSI theory.
A centralized trust registry governs issuers/verifiers today, but the platform is expanding to include health, credit bureau, and employee credentials.
Bhutan is testing cross-border interoperability with India’s DigiYatra and expanding support for multi-blockchain issuance (Polygon + Ethereum).
The government sees value in Web3: exploration of CBDCs, NFTs, crypto payments, and blockchain-backed land tokenization.
Inclusion remains core: cloud wallets for non-smartphone users, guardian features for children, voice & dual language support, and bio-crypto QR codes.
Future vision: enabling high-stakes digital processes such as e-voting, land transactions, insurance claims, and remote verification using biometrics.

Strategies

Regulator-first alignment: Work closely with central banks, telecom authorities, and government agencies to ensure legal backing for digital credentials.
Start simple (passwordless login): Begin with a universally valuable feature, then expand into more sophisticated credential issuance.
Co-design with service providers: Analyze business workflows to identify credential gaps and add features (e.g., live-verified photos, e-signatures).
Use mandatory government services as onboarding channels: Services like marriage certificates or police clearances drive mass citizen adoption.
Promote use-case messaging rather than technical messaging: Highlight convenience (“open a bank account from home”) rather than decentralization.
Introduce features that mimic familiar behaviors: P2P chat drove major user uptake by offering an intuitive, everyday-use function.
Leverage biometrics for high-trust actions: Face-search, liveness checks, and crypto-QR codes enable secure remote workflows for future use cases.

Additional resources:

Episode Transcript
Bhutan National Digital Identity (NDI) Website
Previous SSI Orbit episode with Pallavi (2023 launch)
Bhutan NDI Act
DigiYatra (India’s Digital Travel Credential Framework)

About Guest

Pallavi Sharma is the Lead of Marketing & Communications for Bhutan’s National Digital Identity (NDI) program, where she drives nationwide awareness, adoption, and stakeholder engagement for the country’s digital identity ecosystem. She plays a key role in shaping how citizens, regulators, and service providers understand and interact with Bhutan’s digital public infrastructure.

Previously, Pallavi worked with Deloitte Consulting in India and holds a Master’s degree in International Relations. She is passionate about digital inclusion, user-centric design, and building trusted digital systems that empower citizens and simplify access to essential services. LinkedIn

The post From 0 to 80%: How Bhutan Built a National Digital Identity in Two Years (with Pallavi Sharma) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

ComplyCube

How SSN Validation Works: A Practical Guide


Discover how SSN validation checks identify invalid, deceased, or mismatched Social Security numbers early in the process, minimising fraud risks, reducing manual review, and enhancing compliance across U.S. customer onboarding journeys.

The post How SSN Validation Works: A Practical Guide first appeared on ComplyCube.


Herond Browser

Real vs. Artificial: Comparing the Best Types of Christmas Trees for Your Home


The festive season is fast approaching, bringing with it the age-old holiday question: Which types of Christmas trees should grace your living room this year, a fresh, fragrant pine, or a perfectly shaped, reusable artificial tree? This guide is designed to navigate the complexities of this decision, breaking down the critical pros, cons, popular types of Christmas trees, and specific considerations for both real and artificial options. Our goal is to help you make the most informed and sustainable choice that perfectly suits your home, budget, and holiday traditions.

The Case for Real Christmas Trees

Core Appeal: The primary draw is the authentic holiday experience rooted in tradition. Nothing compares to the fresh, unique scent of pine and the beautiful, imperfectly natural shape that instantly transforms a room into a festive haven.

Popular Types of Christmas Trees

Choosing the right type affects both looks and longevity.

Fraser Fir: Highly prized for its excellent needle retention (staying green longer) and exceptionally strong, upturned branches, which are ideal for supporting heavier ornaments.
Balsam Fir: The classic choice, renowned for producing the most potent and traditional Christmas fragrance. Its deep green color and symmetrical, pyramid shape are aesthetically perfect.
Douglas Fir: A very popular and highly affordable option. It offers a dense, full shape and good aroma, making it a great budget-friendly choice for families.
Scotch Pine: Known for its long-lasting freshness (it can stay fresh for over a month) and stiff branches that easily hold ornaments. Its needles are sharp and retained well, even when dry.

Pros and Cons Summary (Real)

Pros:
Natural Scent & Aesthetics: Provides an unmatched, authentic look and the irreplaceable pine fragrance.
Eco-Friendly: Supports local Christmas tree farms, is renewable, and is completely biodegradable (can be chipped or mulched).
Unique Look: Each tree is unique, offering character and individuality that artificial trees cannot replicate.
Supports Local Farms: Buying from a tree farm helps local agriculture and often includes a fun family outing.

Cons:
High Maintenance: Requires consistent watering to prevent it from drying out prematurely.
Needle Mess: Shedding needles can create a significant mess on the floor and carpets, requiring frequent sweeping.
Fire Hazard: If allowed to dry out, a real tree can become a dangerous fire hazard.
Limited Lifespan: Typically only lasts about 4 to 6 weeks indoors before beginning to visibly decline.

The Case for Artificial Christmas Trees

Core Appeal: The main advantages of artificial trees are their unparalleled convenience, long-term reusability, and the extensive variety of shapes, sizes, and colors available, allowing for perfect integration into any decorating scheme.

Popular Types of Artificial Trees

The quality and realism of an artificial tree depend heavily on the material used.

PVC (Polyvinyl Chloride): This is the most common and budget-friendly option. PVC needles are cut from flat sheets and twisted onto wires, creating a full and dense, albeit less realistic, appearance.
PE (Polyethylene) / True Needle Technology: These trees offer the highest level of realism. The needles are injection-molded directly from casts of real tree branches, resulting in a three-dimensional shape and texture that closely mimics natural evergreen foliage.
Fiber Optic/Pre-lit Trees: Valued for maximum convenience, these options come with lights professionally strung and integrated directly into the branches. Fiber optic trees use tiny light strands within the needles themselves for a unique glowing effect.

Pros and Cons Summary (Artificial)

Pros:
Zero Maintenance: Requires no watering, cleaning, or other upkeep once assembled, saving time during the busy holiday season.
Reusable for Years: High-quality trees can last 10 to 20 years, making them a lower cost per use over time.
Consistent Shape & Look: Provides a perfect, symmetrical look year after year, with options for specific color palettes or pre-fluffed branches.
Pre-lit Options: Eliminates the effort and frustration of stringing lights and simplifies take-down and storage.

Cons:
Requires Storage Space: Needs dedicated space to be carefully packed away and stored for 10 or 11 months of the year.
Lack of Natural Scent: Does not provide the characteristic pine aroma, requiring supplementary scents (like candles or diffusers) if desired.
Non-Recyclable Materials: Most artificial trees are made from plastics (PVC) and metals, which are difficult to recycle and end up in landfills.
High Initial Cost: The most realistic models (PE/True Needle) can have a significantly higher upfront purchase price than a real tree.

Direct Comparison: Key Decision Factors

| Factor | Real Tree | Artificial Tree | Winner (Why?) |
| --- | --- | --- | --- |
| Scent | Strong natural scent | Often unscented (can use sprays) | Real Tree (for authenticity) |
| Maintenance | Daily watering, vacuuming needles | One-time setup and takedown | Artificial Tree (for convenience) |
| Durability/Lifespan | 4-6 weeks | 5-15 years | Artificial Tree (for long-term investment) |
| Initial Cost | Lower ($50 – $150) | Higher ($100 – $1,000+) | Real Tree (for single season) |
| Environmental Impact | Carbon sink, biodegradable | Production emissions, waste (if discarded too soon) | Tie (depends on usage length) |

Making the Final Decision Based on Home Needs

The best choice ultimately depends on your household’s specific priorities, safety concerns, and commitment to maintenance.

Best Choice for Families with Pets or Children: Artificial. This option is significantly safer for younger children and pets, as there is no risk of drinking chemically treated tree water and PVC/PE materials are often treated to be fire-retardant. Furthermore, the lack of sharp needles reduces injury risk and eliminates the mess.
Best Choice for Traditionalists/Purists: Real. For those who prioritize full sensory experience, the real tree is the clear winner. The authentic pine fragrance, the experience of selecting the tree, and the beautifully asymmetrical look cannot be convincingly replicated by synthetic materials.
Best Choice for Small Spaces/Apartments: Artificial. These trees offer superior utility for cramped living areas. They are available in narrow “pencil” or “slim” profiles and often come pre-lit, requiring less setup time and providing easy, compact storage once the season ends.
The Environmental Tie-Breaker: The sustainability debate hinges on one metric: longevity. An artificial tree must be kept and reused for a minimum of 5 to 10 years to fully offset the carbon footprint and resource costs associated with its manufacturing and shipping, making a long-term commitment key to its environmental advantage.

Conclusion

The choice between types of Christmas trees ultimately hinges on two factors: tradition versus convenience. If you prioritize the unmatched festive scent, annual tradition, and natural beauty, the real tree is your classic choice, despite the extra effort of watering and disposal. If your focus is on convenience, consistent appearance, long-term value, and minimal mess, modern artificial trees (especially high-quality PE models) are the clear winner, offering incredible realism without maintenance. The best tree for your home is simply the one that fits your lifestyle and makes your holiday season shine the brightest.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser
DM our official X @HerondBrowser
Technical support topic on https://community.herond.org

The post Real vs. Artificial: Comparing the Best Types of Christmas Trees for Your Home appeared first on Herond Blog.


CapCut Login Guide: How to Sign In, Fix Issues & Reset Password


CapCut has cemented its place as the go-to video editing application for social media creators worldwide. Whether you’re on the mobile app, desktop software, or the online editor, signing in is essential to access your saved projects and maintain workflow sync. However, running into CapCut login issues – whether it’s a forgotten password, an unknown error, or simply needing a reliable sign-in method – can halt your creativity.

This comprehensive guide provides you with step-by-step instructions on how to sign in to CapCut, troubleshoot common problems, and quickly reset your password, ensuring you get back to editing your videos smoothly and without interruption.

CapCut Login Methods: Step-by-Step Guide

The process of logging into CapCut is fast and simple, regardless of whether you are using the mobile application or the desktop client. Here are the detailed instructions for the most common platforms and sign-in methods.

Logging In via Mobile App (iOS/Android)

Step 1: Open the CapCut App. Launch the CapCut application on your iOS or Android smartphone. Ensure you have the latest version installed to avoid compatibility or authentication issues.
Step 2: Navigate to the Profile Section. Look for the “Me” tab, or a dedicated profile icon (often a silhouette of a person or a bust), typically located in the bottom-right or upper-right corner of the main screen. Tapping this will take you to the user center.
Step 3: Select Your Login Method. CapCut offers several convenient sign-in options. Choose your preferred method from the available buttons: TikTok, Google, Facebook, or the traditional Email/Phone number combination. Choosing a social media option often speeds up the process significantly.
Step 4: Complete the Authentication Process. Depending on your chosen method, you will be redirected to an authentication screen. If using a social media account, you must grant CapCut permission to access your profile information. If using Email/Phone, enter your credentials and follow any on-screen prompts for verification codes or password entry. Once successful, you will be logged in and returned to the main editing interface.

Logging In on CapCut Desktop App (PC/Mac)

Step 1: Launch the CapCut Desktop Application. Open the installed CapCut software on your PC or Mac. Wait for the application to fully load, presenting you with the main welcome or project screen.
Step 2: Locate the Sign-In Prompt. Click the “Log In” or profile button, which is usually found prominently in the top right corner of the application window. This action will open the dedicated login interface.
Step 3: Utilize the Quick QR Code Scan (Recommended). CapCut strongly encourages a speedy mobile-to-desktop login. If you are already signed in on your CapCut mobile app, simply use the app’s internal scanner to scan the QR code displayed on your desktop screen. This provides instant, hassle-free authentication.
Step 4: Choose an Associated Account. If you cannot use the QR code, select one of the alternative sign-in options available below the code. These typically include Google, TikTok, or Facebook. Click your chosen provider and follow the external browser window prompts to verify your identity and authorize CapCut.

Logging In to CapCut Online Editor (Web Browser)

Step 1: Navigate to the CapCut Online Website. Open your preferred web browser and go to the official CapCut online editor URL. The web editor offers many of the same features as the desktop app without requiring a download.
Step 2: Initiate the Sign-In Process. Look for the “Sign In” or “Log In” button, usually located in the upper-right corner of the page. Clicking this will bring up the dedicated login dialog box, displaying all available authentication methods.
Step 3: Perform the Account Authentication Steps. Select your preferred login method (such as Google, TikTok, or Facebook). You will be prompted to enter your credentials or confirm your identity through the third-party service. Once verified, the page will refresh, and you will gain access to the web editor’s full suite of features and your cloud-synced projects.

How to Reset CapCut Password (When You Forget)

Step 1: Access the Recovery Link. When you reach the sign-in screen on the Mobile, Desktop, or Web app, look for the ‘Forgot Password?’ link, typically placed beneath the password input field, and click it to begin the recovery sequence.
Step 2: Provide Account Identification. Input the exact email address or mobile phone number associated with your CapCut account. This is the crucial identifier used to link your identity to the password recovery process.
Step 3: Verify with the Security Code. A security verification code will be instantaneously dispatched to the contact method you provided (inbox or SMS). Retrieve this code and accurately enter it into the dedicated field on the password recovery screen within a short time limit.
Step 4: Establish a New Password. Once the code is successfully verified, you will be prompted to create a new, strong password. Confirm the new password by entering it again and click “Reset” or “Confirm” to finalize the change and instantly regain secure access to your account.

Troubleshooting Common CapCut Login Issues

Error: “Account Not Found” or “Invalid Account”

Cause: Authentication Method Mismatch. This error almost always occurs when you initially created your CapCut account using a third-party service (like Sign in with Google or Continue with TikTok) but are now attempting to log in using the standard Email/Password field. Your account is linked to the social provider, not a traditional password, which is why the system cannot find a matching email entry.
Fix: Verify the Original Sign-Up Method. You must verify and select the exact platform button you used when you first registered for CapCut. For instance, if you signed up with Google, click the “Continue with Google” button; if you signed up with TikTok, click “Continue with TikTok.” These providers handle the authentication on CapCut’s behalf.

Failure to Log In via TikTok/Google/Facebook

Cause: Permission or Connection Error. This failure often stems from an interruption in the communication between CapCut’s servers and the third-party API (like TikTok, Google, or Facebook). This includes failure to explicitly grant CapCut the necessary access permissions during the sign-in redirect, or general network time-outs and API throttling issues.
Fix A: Clear Cached Data. Before attempting to sign in again, try clearing your browser’s cache and cookies (if using the Web Editor) or clearing the application data/cache (if using the Mobile or Desktop App). Stale or corrupted authentication tokens stored locally can often interfere with the new connection attempt.
Fix B: Check Third-Party App Permissions. Manually navigate to the security or app settings page within your associated account (Google, Facebook, or TikTok). Find the list of apps granted access, and ensure that CapCut is still listed and authorized to use your profile for login. If the connection was revoked or expired, re-authorize it before trying the CapCut login button again.
Fix C: Restart Your Device and Network. A temporary fix for many connection errors is a simple restart of the CapCut application and your device. If the issue persists, also consider briefly cycling your network connection (turning Wi-Fi or cellular data off and back on) to ensure a clean path for the external API requests.

Connection Error or App Glitches

Cause: Network Instability or Outdated Software. These errors typically happen when the CapCut application is unable to maintain a stable connection to its servers (due to a fluctuating internet connection or an overly restrictive VPN), or when the software version is outdated and encountering compatibility issues with the latest server-side updates.
Fix A: Check Your Internet/VPN Connection. Ensure your Wi-Fi or cellular data connection is strong and stable. If you are using a Virtual Private Network (VPN), temporarily disable it and attempt to log in again. Some VPNs may block the necessary ports or IP addresses used by CapCut for authentication.
Fix B: Update the CapCut Application. If your app is not running the latest version, go to your device’s app store (Google Play Store or Apple App Store) or the CapCut website/desktop updater. Install any pending updates. Developers frequently push fixes for login-related bugs in new releases.
Fix C: Try Logging In on a Different Platform. If the error is consistently occurring on one platform (e.g., the Desktop application), try logging into the CapCut Online Editor in a web browser or the Mobile app. This can often confirm whether the issue is localized to a specific version of the software or a wider account problem. If you can log in to another platform, you can at least proceed with your work while troubleshooting the initial platform.

Conclusion

Mastering the CapCut login process, whether you’re using the mobile app, desktop client, or web editor, is the first step toward creating stunning video content. While signing in via social accounts like TikTok and Google offers speed and convenience, it’s crucial to remember your original sign-up method to prevent “Account Not Found” errors. Should you encounter a connection error or app glitch, remember that simple fixes, like clearing your cache, checking VPN settings, or ensuring your app is updated, can quickly restore access. By following these straightforward steps, you can ensure your CapCut login remains quick, secure, and reliable, keeping you focused on your creative projects.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post CapCut Login Guide: How to Sign In, Fix Issues & Reset Password appeared first on Herond Blog.

Thursday, 27. November 2025

ComplyCube

The CryptoCubed Newsletter: November Edition

The November edition of CryptoCubed includes Coinbase Europe's €21.46 million fine for AML breaches, X hit with a €5 million penalty for unauthorized crypto ads, and South Korea's increased enforcement penalties on crypto exchanges.

The post The CryptoCubed Newsletter: November Edition first appeared on ComplyCube.


Ockto

Everything about source data: the smart foundation for better processes

Retrieving personal data digitally via data sources

Source data ("brondata") are data that are supplied digitally, directly from the original source.

Did you know that, on average, four contact moments are needed to receive all the required documents from a customer? And that 70% of files drop out the first time because of incorrect data?
Many professionals in sectors such as financial services, real estate, and insurance experience this in acceptance or onboarding processes, for example when applying for financing or finalizing a rental contract. Fortunately, there is a solution that simplifies these processes: source data. With source data you make your application process simpler, faster, and better.

Using source data makes it easier for customers to supply the correct information in one go. On this page we discuss the most frequently asked questions about source data. We cover the benefits, how source data can be useful for your organization, and how to keep it GDPR-proof.


Kin AI

Investing in your mind is the only edge that compounds forever

A founder reflects on depression, building a personal advisory board, and why compounding thinking matters more than compounding capital.
Yngvi Karlson, Co-founder of KIN

It is not that hard to be 45 and make good money.

What is hard is being 45, making good money, and also having a healthy family, a healthy body, a healthy mind, real friendships, a good romantic relationship, and a set of values you actually live by.

Most of us secretly know this. Still, we spend an absurd amount of time optimizing our businesses and almost no structured time investing in the one thing that sits underneath everything else. Our mind.

This is the story of how I learned to fix that. For myself first. And eventually, for others.

The first advisor that changed everything

In my early twenties I hit a wall.

I had always been the type who pushed through with effort. More hours, more intensity, more “I will just figure it out”. On paper I was doing well. Inside, I was not.

At some point depression crept in. I resisted talking to a psychologist for a long time. It felt like something “other people” did. Eventually I gave in, sat down, and started talking.

In one of those early sessions, I spent twenty minutes explaining why I couldn’t fire someone on my team. I had reasons, justifications, contingencies. He listened, then asked: “What would you tell a friend in this exact situation?” I answered in ten seconds. The fog lifted.

That moment was embarrassing and liberating at the same time.

Embarrassing, because I realised how much energy I had been wasting trying to solve everything alone. Liberating, because I got a taste of what it felt like to have someone in my corner whose only job was to help me think. I walked out of that session knowing exactly what to do on that messy project, which actions to take, and I could finally move on to the next problem instead of spinning on this one.

It was the first time I really understood this simple thing:

My mind is not an infinite free resource. If I treat it like one, I pay for it later.

That first advisor encounter planted a seed that would grow into something much bigger.

Building a real life personal advisory board

Over the next years I did something that, at the time, did not feel very strategic. I just kept adding people around me who could help me think better.

A psychologist for my inner life and relationships.

A health partner to run with, reflect with, and keep my body and energy in a good place.

A leadership coach to help me give feedback, grow teams, and grow up as a leader.

Later, a personal legal advisor to help protect the downside, and a personal financial advisor to help me think clearly about risk, security, and long-term bets.

Mentors. Fellow entrepreneurs. Close friends. People further ahead on the path. Over time, almost every important part of my life had at least one person I could lean on.

For context, I have been an entrepreneur for about twenty years, and my natural state is too many ideas, too much switching, and very little built-in handbrake.

I did not have the luxury of reinventing the wheel every time.

So this personal advisory board became my invisible infrastructure.

If I was stuck on a leadership problem, I would call the leadership coach and use their brain to sharpen my own. Sometimes I would even pull them into mentoring younger founders I had invested in, so they could grow their feedback muscles faster than I did.

If I was facing questions about family, difficult emotions, how to structure my life, or who I actually wanted to be, I would sit in my psychologist’s office and talk it through with him. He helped me sort things, see what really mattered, and understand the trade offs I was making.

Instead of being this heroic solo decision maker, I became the person who asks for help early. And that changed everything.

I stopped wasting cycles reinventing basic mental tools. I started reusing playbooks. I made better calls with less drama. Over time that translated into real outcomes:

A stable household. A strong relationship. Kids who are (mostly) okay. Healthy businesses. Good friendships. A mind that is still intense, but more anchored. A body that can keep up. A financial life that doesn’t feel like a constant emergency. And even some actual space in my life for rest and simple fun that isn’t just collapsing between crises.

The hidden lesson: your mind is the real compounding asset

Looking back, the pattern is simple.

Each advisor gave me mental models. Language. Ways of seeing. Those models did not disappear after one session, they stayed with me. They compounded.

We talk a lot about compounding capital as founders. We do not talk enough about compounding thinking.

When you invest in your own mind, everything else gets a little bit easier:

Decisions cost less energy

Conflicts become easier to navigate

You recover faster from setbacks

You can hold more complexity without burning out

Meanwhile, a lot of the founder culture is still obsessively focused on one narrow form of success. Revenue. Valuation. Headcount.

Making money by 45 is not actually the hard part.

The hard part is doing that and still liking who you are, still having a partner who trusts you, kids who feel seen, friends you did not sacrifice on the altar of “one more round”, and a body and mind that are not completely wrecked.

There is a version of this journey that I once described as a kind of ‘Faustian bargain’ - or ‘Deal with the Devil’ - for other entrepreneurs.

A lot of founders I’ve met – including myself – started out running on external fuel: money, status, approval, the need to prove something to someone. That “schoolyard strategy” works incredibly well in your early twenties. It gets you out of bad circumstances, helps you survive, sometimes even helps you win.

But eventually you have to pay the devil his due.

You’re no longer 22 with nothing to lose. You’re 35 or 45. You might have a partner, maybe kids, a team who depends on you. And the same strategy that helped you survive the schoolyard starts quietly wrecking your life.

The price is often a ruined relationship, burnout, depression, anxiety, or a constant feeling that whatever you build is never enough.

I’m one of the people who made that deal early on. It took me four to five hard years to shift from that external, proving mindset into something closer to inner motivation and self-compassion. To stop chasing only the scoreboard and start investing seriously in my own mind, values, and relationships. I’m probably still paying off some of the interest.

Building a life where you have money and also a healthy family, friendships, a body and a mind you can live with is not an accident. It is the result of investing in your inner infrastructure as seriously as you invest in your company’s infrastructure.

For me, that is the real root of Kin. It is not about productivity tricks. It is about building a system that helps your mind compound.

Who gets your secrets?

While I was building this advisory board in my physical life, a parallel question started nagging at me. One that would shape how we’re building Kin.

I have always been drawn to questions of autonomy and responsibility. In the physical world we take certain things for granted. You can own a home. You can own a piece of land. Countries have borders. Those borders create sovereignty. There are basic property rights that say “this is mine, you cannot just walk in and take it.”

I started to wonder why we do not think the same way about our digital lives.

Around the same time I was reading prominent libertarian thinkers, and watching the early crypto movement obsess over the same idea: people should be able to own their assets directly, instead of trusting large institutions.

It became clear to me that the same question would show up in our inner digital lives too.

Who owns your thoughts? Who owns your patterns? Who owns the very detailed psychological map that gets created when you pour your inner life into a tool?

We have spent a decade watching big platforms treat personal information like an extractive resource. It is very efficient. It also, in my view, feels fundamentally wrong on a human level.

So when large language models arrived and I started imagining what a truly personal AI could be, the trust question came up immediately.

If we’re going to build something that sits with you in your most vulnerable moments, that hears your doubts about your partner, your investors, your kids, your fears, your failures, then this cannot be “just another cloud product”.

It has to be built on the exact opposite logic.

Your data is yours. Stored locally by default. Encrypted. Portable. Editable. You are not the product. You are the owner.

That is not a marketing angle for me. It is the only way I can look myself in the mirror and still be proud of shipping this.

Why the existing tools were not enough

There are already great tools out there.

Therapists, coaches, group programs. I will always recommend them. They changed my life.

There are also great AI tools. AI Chatbots that can write copy, summarise documents, debug code, help you brainstorm. I use some of them every day.

But when I looked at what I actually wanted for myself as a founder and as a human, there was still a big gap.

I could not find anything that:

Felt like a long term, emotionally invested thinking partner, not just a transactional Q and A engine

Helped me connect dots across my life, not just across one project

Respected my desire for privacy and self sovereignty instead of hoovering up my inner life into someone else’s black box

I did not want a productivity machine. I wanted something that felt closer to that personal board of advisors I had built in the real world.

A place where different “voices” could help me see the same situation from different angles. Career. Relationships. Health. Values. And where all of that was grounded in knowledge about who I actually am and what I care about.

What Kin is today - honest 0.9 mode

So we started building Kin.

Not as some abstract AGI fantasy, but as a very practical, personal system that sits next to you in your real life.

Today, Kin is not a polished version 1. It is closer to 0.9. Rough edges included. Here is what it already does well.

If you have used tools like ChatGPT, the difference you feel with Kin is that it does not just answer a prompt and move on. It remembers your world and thinks with you over time. It feels closer to a small advisory board than a single AI chatbot.

When I journal or debrief a day, it doesn’t just disappear into a chat log. I can talk to an advisor inside Kin about the entry and reflect on it, and Kin turns those moments into threads I can come back to later, so board meetings, product decisions, health issues and family situations build on top of each other instead of starting from zero each time.

Last Tuesday I woke up at 5:25, did my usual yoga, and then sat down with too many things in my head: investor follow-ups, a product decision that had been stuck for a week, and guilt about working all evening the day before.

I opened Kin. Pulse pulled in the low recovery score I had shared from WHOOP, the back-to-back meetings directly from my Calendar, and the 2-hour padel session I had booked that evening, and basically said: “This is not a hero day. Move one thing, lower the strain, and protect your evening.” It sounds small, but for me that is the difference between stumbling through the day and actually having something left for my family at night.

That’s the difference. Not productivity tricks. Thinking partners who know my patterns and help me catch myself.

Underneath all of this is a privacy first architecture. My data is stored locally by default, encrypted and under my control. My inner life is something to be protected, not mined.

Is it perfect? No.

Does it already feel, to me, like a meaningful extension of that advisory board idea into software? Yes.

Money, valuations and exits are fine. I still care about those too. But I no longer believe that is the real scoreboard.

Kin is what I’m building to make that a little bit easier. A private board of advisors that compounds with you over time.

The real scoreboard is whether you can build something meaningful in the world without losing yourself or the people you love in the process.

- Yngvi

If this resonates, you can find Kin here, and follow me on LinkedIn

Wednesday, 26. November 2025

Indicio

Indicio expands Japanese market presence with membership of JETRO J-Bridge Program

Building on its partnerships with leading Japanese companies — NEC, Dai Nippon Printing, Digital Knowledge Co., and Nippon RAD — Indicio has been accepted into the Japan External Trade Organization (JETRO) J-Bridge Program, which is designed to foster business collaboration between overseas and Japanese companies.

By Helen Garneau

Tokyo, Japan – November 26, 2025 — Indicio, a global leader in decentralized identity and Verifiable Credentials, announced today that it has been accepted into the Japan External Trade Organization’s (JETRO) J-Bridge program, marking another milestone in the company’s expanding presence in Japan.

The J-Bridge Program, supported by Japan’s Ministry of Economy, Trade, and Industry (METI), is designed to help innovative international companies establish and scale their presence in Japan. It will provide Indicio with access to JETRO’s comprehensive business support infrastructure. 

Indicio is already helping Japan meet the growing market demand for secure, interoperable, and user-controlled identity verification through its commercial partnerships with NEC, Dai Nippon Printing (DNP), Digital Knowledge Co., and Nippon RAD to provide Indicio Proven® to a wide range of sectors. 

Indicio Proven is a decentralized identity platform for creating reusable, government-grade digital identity with globally interoperable Verifiable Credentials and digital wallets. 

Customers across travel and hospitality, banking and finance, and public services are using Indicio Proven to eliminate manual verification, improve efficiency, and protect users from surging identity fraud, including social engineering scams and deepfakes. 

What makes Indicio Proven uniquely powerful is that it lets customers validate more than 15,000 identity document types from 254 countries and territories, combine that validation with authenticated biometrics such as face mapping and liveness checks, and turn the verified information into portable Verifiable Credentials in any major credential format. These credentials can then be presented anywhere for instant, seamless authentication. 

Credentials created with Indicio Proven follow global standards, including the European Union’s Digital Identity framework (EUDI) and digital wallet guidelines. Proven also makes it easy to combine different credential types in a single, seamless workflow managed from a single digital wallet.

“We’ve had tremendous interest from leading Japanese companies in our technology and this has led to dynamic and exciting partnerships and collaboration,” said Heather Dahl, CEO of Indicio. “So we are both honored to be accepted into JETRO’s J-Bridge program and eager to expand our presence in Japan. We have much to learn from Japan’s rich tradition in technology innovation and vibrant, world-leading companies and industries — and we believe we have much to contribute to creating a new era of digital transformation based around decentralized identity, portable, cryptographic trust, and data privacy.” 

About Indicio

Indicio is the global leader in decentralized identity and Verifiable Credential technology for seamless, secure, and privacy-preserving authentication across any system or network. With Indicio Proven, customers are able to combine document validation and authenticated biometrics to create reusable, government-grade digital identity in minutes for instant verification anywhere. Proven gives customers the widest choice of credential formats and protocols with full interoperability, a digital wallet compatible with EUDI and global standards, and a mobile SDK for adding credentials to Android and iOS apps. With our technology deployed across the globe, from border control to financial KYC, and with expansion into identity for IoT and AI, Indicio solutions are eliminating fraud, reducing costs, and improving user experiences. Headquartered in Seattle, Indicio operates globally with customers and partners across North America, Europe, Asia, and the Pacific.

 

The post Indicio expands Japanese market presence with membership of JETRO J-Bridge Program appeared first on Indicio.


Ontology

Ontology Network Nigeria: Building Trust, Identity, and Community in Web3

There was a time when “Web3” sounded like something far away — a digital universe only a few could touch. But today, it’s right here with us in Nigeria. From creators to developers, everyone’s trying to understand how this new internet of trust and decentralization fits into our everyday lives.

For us at Ontology Network Nigeria, that’s where the magic happens.

We’re not just a blockchain community; we’re a movement of young Africans exploring how identity, data ownership, and decentralized trust can shape our digital future.

The Rebirth of a Community

When Ontology Nigeria first started, it felt like lighting a small candle in a dark room. Web3 was new, confusing, and filled with jargon. But slowly, our community began to grow — one conversation, one event, one campaign at a time.

From educating students about decentralized identity to running fun challenges (like our recent Halloween contest 🎃), we’ve built something more than just engagement. We’ve built belonging.

People started realizing that Ontology wasn’t just another blockchain — it was a platform that put people first, giving everyone control over their data and digital reputation.

Why Ontology Matters to Us

In a world drowning in data leaks and privacy concerns, Ontology Network stands for digital freedom. Its mission? To give every user a self-sovereign identity (ONT ID) — meaning you own your data, decide who sees it, and still stay connected to global systems.

Imagine a Nigeria where your education records, business credentials, or creative portfolio live on the blockchain — verifiable, secure, and fully under your control. That’s the kind of future Ontology inspires us to build.

And it’s not just theory — tools like the ONTO Wallet make it real. Every time someone in our community creates their ONT ID, it’s not just a technical act. It’s a declaration of digital independence.

The Pulse of Ontology Nigeria 💙

Our community has grown into a space where ideas meet action. We’ve hosted workshops, creative contests, and tech conversations that help demystify blockchain for the average Nigerian youth.

Some came to learn, some came to build — but all left with a new understanding: Web3 isn’t just for coders, it’s for dreamers too.

We’ve seen poets, designers, and content creators explore how Ontology tools can support creativity, ownership, and visibility in the decentralized web. That’s the kind of energy that keeps us going.

Looking Ahead

We know the journey isn’t easy — education gaps, internet challenges, and sometimes, skepticism. But we also know that innovation always starts with curiosity, and Ontology is helping us keep that flame alive.

The next phase for Ontology Network Nigeria is about impact and inclusion — bringing even more local voices into the Web3 space, showing that Africa’s creativity deserves not just recognition but ownership.

Because in the end, this isn’t just about technology.
It’s about trust, empowerment, and identity.

Final Thoughts

Every time I meet someone new in the community and they say, “I created my ONT ID today”, it reminds me why we started — to make sure Africans aren’t just consumers of the next internet, but active architects of it.

So here’s my call to everyone reading this:
Join the movement. Learn, build, and share. Let’s redefine what ownership means — together, through Ontology.

Welcome to the new era of digital trust.
Welcome to Ontology Network Nigeria 🇳🇬💙

Ontology Network Nigeria: Building Trust, Identity, and Community in Web3 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


8 Years of Trust: Your Ontology Story Begins Here

Eight years ago, Ontology set out with a simple but powerful vision: to build a decentralized identity and data ecosystem that people could trust. Today, as we celebrate our 8th anniversary, we’re honored to reflect on a journey shaped not just by innovation, but by a global community that has grown with us, supported us, challenged us, and helped define what Ontology has become.

Our anniversary theme, “Celebrating 8 Years of Trust. Unlocking ∞ Possibilities”, represents exactly that. Trust has always been our foundation. Possibility has always been our future. And between them lies our community, the bridge that made eight years possible.

This milestone is more than a celebration. It’s an invitation.

An invitation to look back… and to build forward.

Your Story, Our Story: Announcing “Ontology Life — Ontology & Me”

To mark our 8th anniversary, we’re launching a special community campaign:

🌀 Ontology Life — Ontology & Me Story Sharing Contest

Over the years, Ontology hasn’t just been a protocol; it’s been part of people’s journeys as builders, developers, creators, partners, and community members. We want to hear those stories.

📅 Timeline

November 21 — November 30

💰 Prize Pool: $1,000 in ONG

🥉 $200 ONG
🥈 $300 ONG
🥇 $500 ONG

Winners will be chosen based on:

Story quality Engagement and votes on social media Final selection by the Ontology team 📣 How to Join Look for the official anniversary post on X. Reply under the tweet with your Ontology story, your first interaction, biggest milestone, favorite moment, or how Ontology has shaped your Web3 journey. Share it, hype it, tag friends, and celebrate with us.

Whether you’ve been here since the beginning or you joined last week, your story matters. Your voice is part of our history.

A Look Back: 8 Years of Building Trust

Over the past eight years, Ontology has powered secure decentralized identity, enterprise adoption, cross-chain integrations, and real-world solutions. We led the charge on DID long before it became a Web3 trend. We consistently pushed for a safer, more private, more human digital world.

From ONT ID to ONTO Wallet, from enterprise partnerships to community-led initiatives, this journey has been built together, step by step, block by block.

And the future?

The future is even more exciting.

Unlocking ∞ Possibilities: What’s Next for Ontology

We’ve been making great progress in implementing our 2025 Roadmap, as our 8th year shapes up to be an important foundation in creating a future with infinite possibilities. This next chapter is designed around one mission: empowering people with tools that make Web3 more open, more intelligent, and more connected than ever before.

Below is a deeper look at the innovations coming to the Ontology ecosystem.

🔒 Ontology IM: Decentralized, Private Instant Messaging

Our biggest launch of the year is also one of our most ambitious.

Ontology IM is a decentralized, identity-verified messaging protocol designed for the next era of Web3 communication. Unlike traditional messaging apps that rely on centralized servers, user profiling, and opaque data practices, Ontology IM offers:

Full end-to-end encryption with cryptographic identity verification
True censorship resistance — no middleman controls your data
DID-anchored messaging, ensuring that every conversation is real, verifiable, and secure

This protocol is built not just as another messaging app, but as an infrastructure layer for decentralized communication, enabling:

dApps to embed secure messaging instantly
Communities to coordinate without fear of censorship
Users to enjoy seamless, private conversations across chains

After months of development and testing, Ontology IM is nearing its public debut, and the first hands-on experience will be in your grasp very soon.
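
To make the idea of identity-verified, end-to-end encrypted messaging concrete, here is a conceptual sketch using the PyNaCl library. It is not Ontology IM's actual protocol; the key pairs simply stand in for DID-anchored identity keys, and DID resolution is omitted.

```python
# Conceptual sketch only (not Ontology IM's protocol): end-to-end encrypted,
# signature-verified messaging with PyNaCl. Keys stand in for DID-anchored keys.
from nacl.public import PrivateKey, Box
from nacl.signing import SigningKey

# Each party holds a signing key (identity) and an encryption key pair.
alice_sign, alice_enc = SigningKey.generate(), PrivateKey.generate()
bob_sign, bob_enc = SigningKey.generate(), PrivateKey.generate()

def send(message: bytes) -> bytes:
    signed = alice_sign.sign(message)          # prove who wrote it
    box = Box(alice_enc, bob_enc.public_key)   # shared secret with Bob
    return box.encrypt(signed)                 # only Bob can read it

def receive(ciphertext: bytes) -> bytes:
    box = Box(bob_enc, alice_enc.public_key)
    signed = box.decrypt(ciphertext)
    # Verify against Alice's public signing key (in practice, resolved via her DID).
    return alice_sign.verify_key.verify(signed)

print(receive(send(b"hello, verifiable world")))  # b'hello, verifiable world'
```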

🤖 AI Marketplaces

AI and blockchain are converging faster than ever, and Ontology is positioned at the center of that evolution.

Our upcoming AI Marketplace will introduce a new layer of intelligence to the Web3 experience by enabling creators, developers, and users to:

Deploy AI agents tailored for wallet management, data insights, identity verification, transaction support, and more
Access AI-driven services that plug directly into the Ontology ecosystem
Build and monetize AI tools in an open marketplace powered by decentralized identity
Combine ONT ID + AI for secure, privacy-protected personalization

For users, this opens the door to Web3 that is more intuitive and more helpful:

Your wallet becomes your assistant
Your AI agents understand your on-chain activity while keeping your data private
Everyday tasks become smoother, more automated, and more efficient

For developers, it’s an opportunity to launch intelligent tools into a global marketplace built on real identity and trust.

💧 DEX Integration & Liquidity Expansion

Liquidity is the lifeblood of any blockchain ecosystem, and 2025 will mark a major strengthening of Ontology’s market foundations.

Following the latest community vote and the upcoming MainNet Upgrade, the stage is set for:

Improved liquidity for both ONT and ONG
Enhanced token utility across the Ontology EVM
A more accessible and connected trading environment for users and partners

Alongside these improvements, we are finalizing the requirements to bring a DEX to the Ontology EVM, enabling:

Native swaps
Yield and liquidity opportunities
More robust token flows across the ecosystem
Easier onboarding for developers building decentralized applications

This upgrade ensures Ontology remains competitive, flexible, and attractive to new builders in a rapidly evolving multi-chain world.

Thank You for 8 Years of Trust

Eight years in Web3 is more than a milestone, it’s a testament to resilience, innovation, and community. Through bull runs, bear markets, technological shifts, and global changes, Ontology has remained committed to building infrastructure that empowers people, not platforms.

To everyone who believed in Ontology, who built with us, who shared feedback, who held discussions, who contributed code, who created content, who participated in governance, who simply showed up —

Thank you.

Our story continues.

And now, we want to hear yours.

Share your story. Celebrate our journey. Shape what comes next.

Happy 8th Anniversary, Ontology!

8 Years of Trust: Your Ontology Story Begins Here was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


UbiSecure

IDS 2025.2, Security Headers and Refresh Tokens

In the second release of 2025 we have improved HTTP Security Headers for both SSO and CustomerID. While many customers will already have these set at the proxy level, the ability to control security headers within each application's deployment may also be useful. In this release, the HTTP Security Headers default to off, giving you time to become aware of them and test them in your environment. From our next release in spring 2026, they will default to on, though you will still be able to turn them off in your environment.  
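
For readers who want a concrete picture of what application-level security headers look like, here is a minimal, purely illustrative sketch in Python using Flask. The header names and values are common defaults, not Ubisecure's actual SSO or CustomerID configuration.

```python
# Illustrative only: common HTTP security headers added at the application
# level, deferring to any values already set by a reverse proxy.
from flask import Flask

app = Flask(__name__)

SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Content-Security-Policy": "default-src 'self'",
    "Referrer-Policy": "no-referrer",
}

@app.after_request
def add_security_headers(response):
    # Only set a header if the proxy has not already done so.
    for name, value in SECURITY_HEADERS.items():
        if name not in response.headers:
            response.headers[name] = value
    return response
```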

A core feature has been added to SSO: you can now manage Refresh Tokens. You can apply policies to act on Refresh Tokens; these can be set against existing Refresh Tokens, or you can create a policy that applies to new tokens only. Please take a look at our release notes, where you will find a link to the Refresh Token Expiration Policy page, which lays out how to use the new feature. If you have concerns or questions, feel free to open a service desk ticket; we are always happy to help clarify (and we will take your question as an opportunity to improve the documentation as well).  
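
As a rough illustration of what a refresh-token expiration policy does conceptually, the sketch below models an absolute lifetime and an idle timeout. It is a toy example, not Ubisecure's policy engine or API; the class and field names are invented for illustration.

```python
# Toy model of a refresh-token expiration policy (illustrative only).
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RefreshTokenPolicy:
    max_lifetime: timedelta   # absolute lifetime since issuance
    idle_timeout: timedelta   # maximum time allowed since last use

@dataclass
class RefreshToken:
    issued_at: datetime
    last_used_at: datetime

def is_refresh_token_valid(token: RefreshToken, policy: RefreshTokenPolicy) -> bool:
    now = datetime.now(timezone.utc)
    if now - token.issued_at > policy.max_lifetime:
        return False  # token exceeded its absolute lifetime
    if now - token.last_used_at > policy.idle_timeout:
        return False  # token has been idle too long
    return True

# Example: a policy that could apply to existing tokens as well as new ones.
policy = RefreshTokenPolicy(max_lifetime=timedelta(days=30), idle_timeout=timedelta(days=7))
token = RefreshToken(
    issued_at=datetime.now(timezone.utc) - timedelta(days=10),
    last_used_at=datetime.now(timezone.utc) - timedelta(days=1),
)
print(is_refresh_token_valid(token, policy))  # True under this policy
```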

Within IDS 2024.2, SSO 9.5, we corrected several CVEs. Unfortunately, in correcting a small number of these, an error was introduced within Tomcat. This leads to unneeded threads being created, which can affect environments with very heavy loads or very long uptimes. We observed a slight performance decrease in our release testing but were only able to identify the cause early this fall. We have created patch releases for SSO 9.5 and SSO 9.6. If you are unable to update to IDS 2025.2 with SSO 9.8, please consider deploying a patch to your environment, or ensure that it is rebooted regularly, which will release the unneeded threads.  

There are, as always, several CVEs and other corrections that have been made to the Identity Platform.  

There is one highlight for the upcoming release that we would like to mention: we are working to update a number of core technologies used by the Identity Platform. At this time, we are aiming to deliver IDS 2026.1 as SSO 10.0 and CustomerID 7.0. These are major version upgrades, as they contain backward-incompatible changes. We will update the full platform to Java 21, SSO will have Tomcat updated from 9.x to 10.x, and CustomerID will be migrated from WildFly to Spring Boot; note that the UI and APIs will remain unaltered.   

As with all software, Ubisecure encourages you to upgrade your Identity Platform in a timely manner. Please contact your Integration Partner or Ubisecure Account Representative with any questions. We encourage all customers to review this latest release and schedule a service upgrade. Our goal is to bring the system flexibility, security, and new features that ensure the best possible user experience for your business.  

For full details of the IDS 2025.2 release, please review the Release Notes and System Recommendations pages found on our Developer Portal.  

The post IDS 2025.2, Security Headers and Refresh Tokens appeared first on Ubisecure Digital Identity Management.


Dock

DPRIN Partners with Dock Labs to Verify UK Product Compliance in Real Time

The Digital Product Regulation Innovation Network (DPRIN) — a UK initiative bringing together regulators, academics, and technology experts — have worked in partnership with Dock Labs to verify product compliance in real time, helping prevent unsafe or non-compliant goods from being sold through online marketplaces.

The collaboration initially focuses on the example of eBike compliance, a growing concern as high-risk products are increasingly sold online with limited oversight, and will soon extend to other regulated product categories where real-time verification can enhance safety and consumer trust.

Using Dock Labs’ Truvera platform in close collaboration with their team, DPRIN has developed a digital compliance solution that makes regulatory data transparent, tamper-proof, and instantly verifiable across the supply chain.


Recognito Vision

How Digital Identity Verification Is Transforming Secure User Onboarding in India

India’s digital ecosystem is expanding faster than ever. With millions of people signing up for mobile apps, fintech platforms, gig services, and online marketplaces, businesses must verify identities quickly and accurately. What once relied on slow manual checks has now become a sophisticated process powered by artificial intelligence. Modern digital identity verification enables Indian companies to confirm user authenticity instantly while improving compliance and reducing the risk of identity fraud.

Fraudsters today are more advanced than ever. Edited Aadhaar cards, manipulated PAN cards, deepfake selfies, replay attacks, and synthetic identities frequently appear during user onboarding. For a mobile-first nation like India, where users join from diverse devices and environments, platforms need verification systems that can think, adapt, and detect suspicious activity in real time. AI-driven verification provides exactly that through a mix of biometric intelligence, behavioural analysis, and device pattern recognition.

 

The Shift Toward Smarter Verification in India’s Digital Landscape

Not long ago, onboarding relied heavily on manual document uploads and basic OTP-based checks. These methods were inconsistent and often unable to detect tampering. Today, AI identity verification has transformed this process into something far more reliable and scalable.

Advanced systems use facial recognition technology to examine facial landmarks, micro details, angles, and lighting variations, comparing them against ID photos with precision. Their performance is measured through global benchmarks such as the NIST Face Recognition Vendor Test and the NIST FRVT 1:1 Assessment, which many Indian businesses evaluate when assessing biometric reliability and face recognition accuracy.
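
As a rough sketch of what 1:1 face verification boils down to (and not Recognito's SDK API), the snippet below compares two face embeddings with cosine similarity against a calibrated threshold. The embeddings and the 0.6 threshold are placeholders.

```python
# Illustrative sketch of 1:1 face verification: embedding similarity vs. threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(selfie_embedding, id_photo_embedding, threshold=0.6) -> bool:
    # The threshold trades off false accepts against false rejects and is
    # normally calibrated on an evaluation set; 0.6 is a placeholder.
    return cosine_similarity(selfie_embedding, id_photo_embedding) >= threshold

# Hypothetical 128-dimensional embeddings produced by a face-recognition model.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=128)
emb_b = emb_a + rng.normal(scale=0.1, size=128)  # a slightly different capture
print(is_same_person(emb_a, emb_b))              # True for near-identical embeddings
```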

In a country where users join from various environments such as urban offices, rural homes, low-light rooms, and budget smartphones, AI-driven verification ensures that identity checks remain consistent and dependable regardless of conditions.

 

Why Indian Businesses Are Choosing AI-Powered Identity Security

Industries across India, including fintech, gig platforms, mobility, gaming, and healthcare, need scalable identity verification that is accurate and frictionless. AI-powered systems detect subtle inconsistencies, identify repeated login attempts, analyze unusual device behaviour, and recognize tampered images more effectively than manual reviews.

They also help companies align with global privacy expectations outlined in frameworks like the GDPR.

Here is why more Indian organizations are adopting intelligent identity verification:

Faster onboarding that reduces user drop-offs and keeps signups smooth

Higher accuracy in detecting tampered IDs or abnormal face patterns

Stronger identity fraud prevention supported by behavioural insights

Compliance-friendly frameworks suitable for regulated sectors

Better trust and transparency for users joining digital platforms

Many companies begin exploring these features with the Face Recognition SDK for matching, the Face Liveness Detection SDK for spoof prevention, and the Face Biometric Playground to test different verification flows.

How Liveness Detection Strengthens India’s Fight Against Identity Fraud

Fraud techniques are evolving quickly in India. Deepfake videos, printed photos, digital masks, and reused images frequently appear during remote onboarding. This is why face liveness detection has become essential for Indian platforms seeking strong security.

A robust liveness detection system evaluates depth, movement, reflections, and real-time user presence to confirm that a live individual is in front of the camera. It prevents many of the common spoofing attempts exploited in fintech onboarding, gig platform verifications, loan approvals, and ride-hailing identity checks. This additional layer ensures that identity verification remains resistant to impersonation even as fraud tools become more sophisticated.
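
The decision logic can be pictured, very loosely, as combining several independent signals into one verdict. The toy example below is purely illustrative; real liveness systems rely on trained models rather than hand-tuned weights like these.

```python
# Purely illustrative: combining the kinds of signals mentioned above
# (depth, motion, reflections, real-time presence) into a liveness decision.
def liveness_decision(depth_score, motion_score, reflection_score, presence_score,
                      threshold=0.7):
    weights = {"depth": 0.3, "motion": 0.3, "reflection": 0.2, "presence": 0.2}
    combined = (weights["depth"] * depth_score
                + weights["motion"] * motion_score
                + weights["reflection"] * reflection_score
                + weights["presence"] * presence_score)
    return "live" if combined >= threshold else "suspected spoof"

print(liveness_decision(0.9, 0.8, 0.7, 0.95))  # live
print(liveness_decision(0.2, 0.1, 0.4, 0.3))   # suspected spoof
```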

 

Creating Trust Through Better Biometric Verification

A dependable biometric verification system does more than match a face. It evaluates behavioural cues, device patterns, and contextual signals. This is particularly important in India, where onboarding must work seamlessly across varying camera qualities, lighting situations, and device types.

Companies also depend on biometric authentication tools to handle large verification volumes efficiently. These tools support real-time user verification, allowing platforms to approve genuine users instantly while flagging high-risk attempts for deeper review.

Developers and researchers often rely on transparent, community-driven innovation, supported by open contributions available in the Recognito GitHub repository.

 

Balancing Privacy and Security for Indian Users

Indian users are increasingly aware of how their data is collected and stored. Organizations implementing identity verification must balance accuracy with responsible data handling. Encrypting biometric templates, minimizing stored data, following clear retention policies, and communicating transparently all help build user trust.

Following guidelines inspired by GDPR enables businesses to maintain strong privacy standards and meet the expectations of India’s digital audience.

 

Real-World Impact Across India’s Fast-Growing Sectors

Robust verification now plays a vital role across India’s digital landscape. Financial services rely on automated KYC verification to reduce fraud and speed up onboarding. E-commerce and online marketplaces use digital onboarding security to block fake buyers and prevent misuse. Gig platforms and mobility services depend on identity clarity to safeguard both customers and workers.

Across these environments, intelligent verification helps platforms maintain fairness, reduce fraudulent activity, and provide safer user experiences.

 

The Future of Identity Verification in India

As fraud continues evolving, identity verification technology must advance alongside it. India’s systems will increasingly incorporate deeper behavioural analysis, enhanced spoof detection, smarter risk scoring, and adaptive AI-powered identity checks.

Emerging approaches, such as document-free identity verification, will also increase in adoption, enabling users to verify their identities without uploading traditional documents.

These advancements will create an environment where verification remains strong while becoming nearly frictionless for genuine users.

 

Building a Safer Digital Ecosystem Through Intelligent Verification

Trust forms the core of every digital platform. When businesses can reliably verify who their users are, they create safer interactions, reduce fraud, and maintain smoother onboarding experiences. With India’s digital sector expanding at a remarkable speed, AI-driven verification ensures that identity checks remain secure and future-ready.

Solutions available at Recognito continue helping organizations implement precise, privacy-focused verification systems built for long-term reliability and trust.

 

Frequently Asked Questions

 

1. What is digital identity verification?

It is a process that uses AI and biometrics to confirm whether a user is genuine, helping Indian platforms onboard real users securely.

 

2. Why is liveness detection important?

It ensures the person on camera is real by detecting natural facial movement, blocking spoof attempts using photos, videos, or deepfakes.

 

3. How does AI help reduce identity fraud?

AI detects tampered images, unusual device activity, and suspicious behaviour that manual checks often miss, making fraud harder to execute.

 

4. Does biometric verification protect user privacy?

Yes. When implemented properly, it uses encrypted data, minimal storage, and privacy-focused policies to keep user information secure.

 

5. Which sectors benefit most in India?

Fintech, gig platforms, mobility, e-commerce, and gaming rely heavily on digital identity checks to prevent fraud and onboard users safely.

Tuesday, 25. November 2025

liminal (was OWI)

From Pilot Programs to Global Mandate: Age Assurance’s New Reality

Back in May, we hosted a Demo Day focused on how a few early adopters were approaching age estimation and age verification. At the time, most of the conversation lived in pockets. Some regulators were experimenting with frameworks. A handful of platforms were testing new signals. Age assurance felt like a problem that mattered, but not yet a global priority.

Six months later, everything has changed. Age assurance has moved from a developing trend to a global mandate, and the broader world of age verification has shifted with it. More than 45 countries now require platforms to verify age through methods like device-based age verification, on-device estimation, reusable credentials, or structured checks tied to new age verification laws. Regulators expect measurable accuracy, and platforms are adopting modern age verification systems to prove compliance without storing sensitive data, all while protecting user experience at scale.

This Demo Day showed what that shift looks like in practice. Eight vendors demonstrated how they verify age across streaming, social platforms, gaming, in-person experiences, and new regulatory environments. Each had 15 minutes to present, followed by a live Q&A with trust and safety leaders, product teams, compliance professionals, and policy experts.

What stood out was not just the technology. It was the direction of the market. Privacy-first design is becoming a requirement, on-device processing is becoming the expectation, reusable identity is becoming real, and age assurance has officially gone global.

What age assurance is really solving now

Age assurance covers two essential needs. First, preventing minors from accessing adult content, gambling, and other restricted digital experiences. Second, preventing adults from entering minor-only spaces such as youth communities, education platforms, and social channels designed for children.

These needs sit alongside more traditional online age verification methods that protect minors from adult content and adults from entering youth environments. Youth access laws are expanding, enforcement is tightening, and deepfakes, AI-based impersonation, and synthetic content are creating new risks on both sides. Minors now have more tools to appear older, and adults have more ways to appear younger, which makes the protection problem harder for platforms and regulators that are trying to keep everyone in the right place.

This tension shaped the entire event and pushed every vendor to show how they minimize exposure, reduce friction, and still create defensible assurances.

Privacy-first design is branching beyond biometrics

Across the demos, privacy was not an add-on. It was the foundation. Vendors were focused on verifying age without collecting or storing sensitive data, and several demonstrated pathways that move far beyond traditional face or document checks.

Deepak Tiwari, CEO of Privately, set the tone early:

“Our technology is privacy by design. We download the machine learning model into the device, and the entire age estimation happens on-device.”

He emphasized the minimal output:

“All facial processing stays on the device. The only thing that leaves is the signal that the person is above a certain age.”
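
Architecturally, the approach can be sketched as follows: the estimator runs locally and only a boolean over/under signal leaves the device. This is a conceptual illustration, not Privately's implementation; the on-device model here is just a stub.

```python
# Conceptual sketch only: the estimator runs on-device and only a minimal
# over/under signal is transmitted. The "model" below is a stub.
def estimate_age_on_device(camera_frame) -> float:
    # Stand-in for an on-device ML model (e.g. a quantized network shipped with the app).
    return 24.3

def age_signal(camera_frame, threshold_years: int = 18) -> dict:
    estimated_age = estimate_age_on_device(camera_frame)
    # No image, face template, or exact age is transmitted --
    # only the signal the age gate actually needs.
    return {"over_threshold": estimated_age >= threshold_years}

print(age_signal(camera_frame=None))  # {'over_threshold': True}
```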

Other vendors pushed this principle even further. One of the clearest examples came from Jean-Michel Polit, Chief Business Officer of NeedDemand, whose company verifies age using only hand movements. There is no face capture, no document upload, and no voice sample. The model analyzes motion patterns that naturally differ between adolescence and adulthood.

Jean-Michel explained:

“A seventeen-year-old does not know how the hand of an eighteen-year-old moves. These differences are minute and impossible to fake.”

Because the system immediately stops scanning if a face enters the frame, it also reduces the risk of unintended biometric capture, which is one of the biggest compliance concerns in global youth safety laws.

Together, these approaches show that privacy-first design is no longer limited to minimizing what data is stored. It is expanding toward methods that avoid personal data entirely, giving platforms new ways to meet global regulatory requirements without friction or biometric exposure.

Reusable identity is becoming practical

Reusable identity has been a theoretical ideal for years, but most systems struggled with adoption or required centralized storage. FaceTec demonstrated a version that felt practical, portable, and privacy-preserving.

Alberto Lima, SVP Business Development at Facetec, introduced the UR code, a digitally signed QR format that stores an irreversible biometric vector.

“This is not just a QR code. It is a minified, irreversible face vector that still delivers extremely high accuracy.”

Verification can happen offline, identity does not need to be reissued, and there are no centralized databases. The user holds their proof, and platforms can verify it without re-running heavy identity flows.

Alberto summarized the shift:

“Everyone can be an identity issuer. The UR code becomes the interface for verification.”

This model serves alcohol delivery, ticketing, retail age checks, gaming, and other in-person flows where connectivity is inconsistent, and users expect a quick pass, not a multi-step verification process.
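
The general pattern behind a signed QR credential can be illustrated with a small, hypothetical example: a payload is signed by the issuer, encoded in the QR, and later verified offline with only the issuer's public key. This is not FaceTec's actual UR code format; the payload fields are invented.

```python
# Hypothetical sketch of offline verification of a digitally signed QR payload.
import json
from nacl.signing import SigningKey

# Issuance (done once by the issuer; the public key is distributed to verifiers).
issuer_key = SigningKey.generate()
payload = json.dumps({"over_21": True, "face_vector_digest": "placeholder"}).encode()
signed_qr_payload = issuer_key.sign(payload)  # what would be encoded in the QR

# Verification (works offline; only the issuer's public key is needed).
verifier_key = issuer_key.verify_key
verified = json.loads(verifier_key.verify(signed_qr_payload))
print(verified["over_21"])  # True, and the payload provably was not altered
```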

ECG-based age estimation shows how fast the science is moving

One of the most forward-looking sessions came from Azfar Adib, Graduate Researcher at TECH-NEST, where teams are exploring how smartwatch ECG signals can estimate age with high accuracy. It builds on the idea that physiological signals are difficult to fake and inherently tied to liveness.

Azfar shared a line that captured the concept clearly:

“ECG is a real-time sign of liveness. The moment I die, you cannot get an ECG signal anymore.”

The study showed:

“Age classification reaches up to 97 percent accuracy using only a smartwatch ECG.”

These findings are early but demonstrate how multimodal verification will evolve as platforms look for signals that are resistant to spoofing and do not require collecting images or documents.
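
For readers curious what such research pipelines look like in rough outline, the snippet below trains a classifier on synthetic "ECG-derived" features to predict an age band. It is purely illustrative and is not the TECH-NEST study's data, features, or method.

```python
# Illustrative only: the general shape of classifying an age band from
# ECG-derived features. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-ins for ECG-derived features (e.g. heart-rate variability statistics).
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = adult

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```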

Email age checks are becoming a high-volume default

Biometrics are not appropriate for every platform or flow. That is where Verifymy offered one of the most immediately deployable solutions.

Steve Ball, Chief Executive Officer of Verifymy, explained:

“Email is consistently the most preferred method users are willing to share, and we can verify age with just that.”

The method relies on digital footprint analysis tied to email addresses, not inbox content or personal messages, which keeps privacy intact while enabling large-scale adoption.

Regulators are acknowledging the method:

“Regulators around the world now explicitly reference email age checks as a highly effective method.”

For high-volume platforms, this offers a low-friction way to layer age checks without compromising onboarding speed.
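
The underlying intuition can be sketched with a hypothetical example: if an email address has verifiable account history stretching back far enough, the holder cannot plausibly be a minor. This is not Verifymy's method or data; the dates and the signup-floor assumption are invented for illustration.

```python
# Purely hypothetical sketch of footprint-based age inference from an email's
# associated account history.
from datetime import date

def minimum_plausible_age(account_history: list[date], today: date,
                          signup_floor_age: int = 13) -> int:
    # Assume the oldest associated account could not have been created before
    # the user reached a typical minimum signup age.
    oldest = min(account_history)
    years_of_footprint = (today - oldest).days // 365
    return signup_floor_age + years_of_footprint

history = [date(2014, 5, 1), date(2018, 9, 12)]  # hypothetical account-creation dates
print(minimum_plausible_age(history, date(2025, 11, 25)) >= 18)  # True
```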

Location has become a compliance requirement

In many markets, age gates depend entirely on where a user is physically located, so platforms must know whether the user is in a jurisdiction with specific restrictions. GeoComply framed this challenge with clarity.

Chris Purvis, Director of Business Development at GeoComply, explained:

“The law does not say you must check age unless they are using a VPN. It says you must prevent minors from accessing restricted content. Full stop.”

He added:

“Location complacency is not location compliance.”

Reliance on IP is no longer enough. VPN use spikes whenever new safety rules are introduced, so location assurance is now a required part of the age assurance stack.

State-level policies such as Florida’s age verification law and the wave of states requiring age verification are further accelerating vendor adoption, especially in markets focused on age verification for adult content and social media.
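
Conceptually, the compliance logic looks something like the sketch below: access to restricted content depends on both age verification and whether the user's location can actually be established. This is an illustration of the idea, not GeoComply's product; the jurisdiction list and confidence threshold are placeholders.

```python
# Illustrative sketch: treating location assurance as part of the age gate,
# rather than trusting IP geolocation alone.
RESTRICTED_JURISDICTIONS = {"FL", "TX", "UT"}  # placeholder list, not legal guidance

def may_access_restricted_content(jurisdiction: str,
                                  location_confidence: float,
                                  vpn_or_proxy_detected: bool,
                                  age_verified: bool) -> bool:
    if vpn_or_proxy_detected or location_confidence < 0.9:
        # Cannot establish where the user really is: fall back to requiring
        # age verification rather than assuming a permissive jurisdiction.
        return age_verified
    if jurisdiction in RESTRICTED_JURISDICTIONS:
        return age_verified
    return True

print(may_access_restricted_content("FL", 0.95, False, age_verified=False))  # False
print(may_access_restricted_content("NY", 0.97, False, age_verified=False))  # True
```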

Standards will reshape the market

The final presentation came from Tony Allen, Chief Executive of ACCS, who is leading the upcoming ISO 27566 standard for age assurance. His guidance was direct and will likely influence procurement decisions next year.

“Every age assurance system must prove five core characteristics: functionality, performance, privacy, security, and acceptability.”

He also noted:

“Expect procurement teams to start saying that if you do not have ISO certification, you will not even qualify.”

Certification will separate providers with measurable accuracy and defensible privacy controls from those relying on promises alone.

Where the market is moving next

This Demo Day made one thing clear. The next generation of age assurance will be built on the principles of privacy preservation, zero data retention, multimodal analysis, and portable proofs that function across various contexts. These capabilities will sit alongside scalable age verification systems that can adapt to global rules and emerging compliance requirements.

The old model relied on document uploads and selfie flows. The emerging model relies on lightweight signals that minimize exposure and scale more easily across jurisdictions. Platforms need methods that work across global regulations, including high-pressure markets such as age verification for adult content, while users want verification to feel invisible and regulators want outcomes that withstand scrutiny.

These demands are aligning rather than conflicting. The vendors on stage showed how fast this space is evolving and what the next decade of online trust will require.

Watch the Recording

Did you miss our Age Assurance Demo Day? You can still catch the full replay of vendor demos, product walk-throughs, and expert insights:
Watch the Age Assurance Demo Day recording here

The post From Pilot Programs to Global Mandate: Age Assurance’s New Reality appeared first on Liminal.co.


Indicio

Four reasons why travel companies using AI need Indicio Proven

The post Four reasons why travel companies using AI need Indicio Proven appeared first on Indicio.
How to prevent a spoofed AI agent in your travel solution? AI isn’t going to work if customers can’t trust your agents — and right now legacy authentication isn’t up to the task. At Indicio, we’re making AI systems a practical, implementable reality for travel and tourism by using decentralized identity and Verifiable Credentials for secure, privacy-preserving, GDPR-compliant authentication and data sharing.

By Trevor Butterworth

Do you want your customers to be phished by fake AI agents pretending to be from your company?

Of course you don’t. That’s the stuff of nightmares.

You’re dreaming up amazing ways to use AI to help your customers and simplify your operations. And to turn that dream into reality in travel, the focus is on delivering automated performance. How do you solve the customer’s problem, meet their goal — and beat every other competitor trying to do the same thing?

Authentication isn’t even an afterthought.

Newsflash: the nightmare of spoofed agents is coming. And it’s bringing a friend, the specter of regulatory compliance.

Do you think you can just grab and projectile-share tons of customer personal data, unencrypted and unprotected, as if GDPR doesn’t exist? 

Do you think legacy authentication tech, built on usernames and passwords, is up to the task of protecting these radically new customer interactions?

In one breach, you will become a global news story, lose your customers, and be lucky if you aren’t fined into oblivion.

This is why you need Verifiable Credentials for AI.

1. Making AI implementable means taking authentication seriously

The bad news is that conventional, legacy, centralized authentication technology can’t protect your AI agents and your customer interactions.

The good news is that decentralized identity and Verifiable Credentials can — faster and cheaper.

The trick is that an AI agent with a Verifiable Credential and a customer with a Verifiable Credential are able to authenticate each other before sharing data.

And they’re able to do this cryptographic authentication in a way that is resistant to AI being used for reverse engineering and impersonation.

This means that if you are an airline or a hotel chain, a customer can instantly prove that they are interacting with an AI agent from that airline or hotel chain.

At the same time the AI agent can prove the customer is also who they say they are — in other words, that they have been issued with a Verifiable Credential from their airline. 

This all happens instantly.
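To make that flow concrete, here is a minimal sketch of what mutual authentication could look like at the application layer. The agent objects, the verify() helper, and the example DIDs are hypothetical placeholders rather than Indicio APIs; a real deployment would rely on a wallet or agent SDK and resolve trusted issuers from a governance framework.

// Hypothetical sketch of mutual authentication between a customer's wallet
// and a company's AI agent. The "agent" objects and the verify() helper are
// placeholders for an actual wallet/agent SDK; they are not Indicio APIs.
async function mutuallyAuthenticate({ customer, companyAgent, verify }) {
  // 1. Customer challenges the AI agent to present a credential proving it
  //    acts for the airline or hotel chain it claims to represent.
  const agentProof = await companyAgent.present({ type: "AuthorizedAgentCredential" });

  // 2. The AI agent challenges the customer to present their own credential,
  //    e.g., one issued when they joined the loyalty program.
  const customerProof = await customer.present({ type: "LoyaltyProgramCredential" });

  // 3. Each side checks the other's proof against issuers it trusts
  //    (typically resolved from a trust registry or governance file).
  const agentIsGenuine = await verify(agentProof, { trustedIssuers: ["did:example:airline"] });
  const customerIsGenuine = await verify(customerProof, { trustedIssuers: ["did:example:airline"] });

  // Data is only shared if both checks pass.
  return agentIsGenuine && customerIsGenuine;
}

The important design point is symmetry: neither side shares personal data until both presentations check out against issuers each side already trusts.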

2. Making AI implementable means taking GDPR seriously

The European Union’s General Data Protection Regulation (GDPR) is the gold standard for data privacy and a model for other jurisdictional data privacy and protection law.

GDPR requires a data subject — your customer — to consent to share their data. It requires the data processor — your company — to minimize the amount of personal data it uses and limit the purposes for which it can be used.

Right now, no one appears to be thinking about any of this; it’s a personal data-palooza. But this isn’t the web of 20 years ago. You can’t claim people don’t care about data privacy when GDPR has been in force since 2018.

Again, Verifiable Credentials solve this problem. They are a privacy-by-design technology providing the customer with full control over their data. For an AI agent to access that data, the person must explicitly consent to sharing their data.

They can also share this data selectively, so you can meet the requirements of data and purpose minimization.
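As a rough illustration of data and purpose minimization, a verifier can ask for only the claims it needs, and the wallet can release exactly those claims once the holder consents. The request and claim shapes below are simplified stand-ins, not a literal OpenID4VP or Presentation Exchange payload.

// Simplified sketch: a verifier asks for specific claims only, and the wallet
// releases just those claims after the holder consents. Shapes are illustrative.
const verifierRequest = {
  purpose: "Rebook your connecting flight",
  requestedClaims: ["loyaltyTier", "membershipNumber"], // no name, address, etc.
};

function buildDisclosure(credentialClaims, request, holderConsented) {
  if (!holderConsented) return null; // no consent, no data
  // Release only the claims the verifier asked for (data minimization).
  return Object.fromEntries(
    Object.entries(credentialClaims).filter(([claim]) =>
      request.requestedClaims.includes(claim)
    )
  );
}

// Example usage with a hypothetical loyalty credential:
const claims = { name: "A. Traveler", loyaltyTier: "Gold", membershipNumber: "12345", homeAddress: "…" };
console.log(buildDisclosure(claims, verifierRequest, true));
// -> { loyaltyTier: "Gold", membershipNumber: "12345" }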

3. Expanding AI means taking delegated authority seriously

It’s going to be a multi-agent world. AI agents will need to talk to other AI agents to accomplish tasks. To make this work, a customer will have to give a special kind of permission to the first point of agentic contact: delegated authority. 

This means a customer must explicitly consent to an agent sharing data with another agent, whether that second or third agent is inside the same company or outside.

Again, Verifiable Credentials make that kind of consent easy for the customer. On the back end, decentralized governance makes it easy for a company to implement and manage these kinds of AI agent networks.

An AI agent can hold a trust list of other AI agents it can interact with. And because all these agents also have Verifiable Credentials, every agent can authenticate each other — as if they were customers.
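A governance file behind such a trust list is often just a signed, published document naming the agents (and roles) participants have agreed to trust. The structure and field names below are illustrative only, not a specific Indicio format.

// Illustrative only: a minimal trust-list check an agent might run before
// interacting with another agent. Real governance files are signed, versioned
// documents fetched from a published location.
const trustList = {
  version: "2025-11-01",
  trustedAgents: [
    { did: "did:example:airline-support-agent", roles: ["rebooking"] },
    { did: "did:example:hotel-concierge-agent", roles: ["booking"] },
  ],
};

function isTrusted(agentDid, role) {
  return trustList.trustedAgents.some(
    (entry) => entry.did === agentDid && entry.roles.includes(role)
  );
}

console.log(isTrusted("did:example:hotel-concierge-agent", "booking")); // true
console.log(isTrusted("did:example:unknown-agent", "booking"));         // false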

4. An additional, incredibly useful benefit of Verifiable Credentials: structured data

The great thing about this technology is not just that it solves the problems you haven’t really thought about, it also helps to solve the problem that you’re currently focused on: the need for structured data.

Verifiable Credentials are ways to structure trustable information. If you, as an airline, create, say, a loyalty program credential, the information in that credential can be trusted as authentic. It comes from you; it’s not manually entered, potentially incorrectly, by the customer. It’s also digitally signed so it cannot be altered by the user or anyone else.

So when an AI agent gets permission from a customer to access a loyalty program credential, it is able to automatically ingest that accurate, verifiable, information from the credential and immediately act on it.
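For a sense of what that structured, trustable information looks like, here is a simplified credential in the general shape of the W3C Verifiable Credentials data model. The loyalty fields and identifiers are invented for illustration, and a real credential would also carry the issuer’s cryptographic proof.

// Simplified example in the general shape of a W3C Verifiable Credential.
// Field values are invented; a real credential includes an issuer-signed proof.
const loyaltyCredential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "LoyaltyProgramCredential"],
  issuer: "did:example:airline",
  validFrom: "2025-01-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:customer-123",
    tier: "Gold",
    membershipNumber: "FF-0042",
    milesBalance: 52340,
  },
  // proof: { ... issuer's digital signature over the fields above ... }
};

// Because the data is issuer-signed rather than hand-typed, an authorized
// AI agent can act on it directly, e.g., check whether miles cover a rebooking fee.
const canPayWithMiles = loyaltyCredential.credentialSubject.milesBalance >= 25000;
console.log(canPayWithMiles); // true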

Think about how easy it is now for a chatbot to interact with a passenger on a flight and provide instant access to services, or rebook a connecting flight, or connect them to a hotel agent — and then use mileage points associated with a Loyalty Program Credential to pay. (We’ve also enabled regular payments using Verifiable Credentials).

No more manual, mistyped data entry slowing things down, creating frustration and customer drop-off. The customer has a user experience that works for them and delivers frictionless customer service from you. But only if you implement authentication and permissioned data access.

Indicio is leading identity authentication for AI 

We’ve been recognized by Gartner for our innovation, accepted into NVIDIA’s Inception Program, and partnered with NEC to create the trust layer for automated AI systems.

Contact us to make AI a secure and compliant reality.

 

The post Four reasons why travel companies using AI need Indicio Proven appeared first on Indicio.


Spherical Cow Consulting

Digital Identity Wallet Standards, the DC API, and Politics

Digital identity wallets are becoming a central focus in global identity conversations, driven by regulatory pressure, rapid technical evolution, and growing expectations around interoperability. This episode examines how layered architectures, protocol choices, and platform behaviors shape the user experience in ways that are often misunderstood. Listeners will learn why the Digital Credentials

“Digital identity wallet standards are having a moment. In certain circles, they’re the topic.”

The European Union is driving much of the global conversation through eIDAS2, which requires every member state to deploy digital wallets and verifiable credentials that work across borders. In the United States, things look very different: deployment is happening state by state as DMVs explore mobile driver’s licenses (mDLs) and experiment with how these new credential formats fit into existing identity systems.

For most people, this shows up as “more stuff on my phone,” though desktops matter, too. Regulators and lawmakers see more areas where they need to set rules regarding security, privacy, and usability. Developers and standards architects see something else: a complex web of digital identity wallet standards that include the layers of browsers, OS platforms, protocols, and wallets that must interoperate cleanly for anything to work at all.

We saw a version of this during the October AWS outage, where headlines immediately blamed “the DNS.” But the DNS protocol behaved exactly as it should. What failed was the implementation layer: AWS’s tooling that feeds DNS, not DNS itself.

A similar pattern is emerging around digital wallets and verifiable credentials. At last week’s W3C TPAC meeting, I lost count of how many times I heard a version of:

“It’s the Digital Credential API’s fault/responsibility.”

Just as “it’s the DNS” became shorthand for an entire problem space, “it’s the DC API” is now becoming shorthand for an entire stack of browser, OS, and wallet behavior. And that shorthand is obscuring real issues.

A Digital Identity Digest episode accompanies this post: “Digital Identity Wallet Standards, the DC API, and Politics” (00:18:34), available on Apple Podcasts, Spotify, YouTube, and other major podcast platforms.

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Why the web has layers in the first place

Before diving into the DC API, it’s worth remembering why the ecosystem is structured this way. The web is layered by design, and when it comes to digital wallets, that layer looks like this:

Browsers enforce security boundaries between websites and users. Operating systems manage device trust, hardware access, and application permissions. Wallets specialize in credential storage, presentation, and user experience. Protocols define how information moves across these components in interoperable ways.

No single layer can—or should—control the entire flow. The fragmentation is intentional. But when political or regulatory urgency collides with this architectural reality, confusion is almost guaranteed.

If you want a broader overview of the standards bodies and protocols that shape digital identity wallets, I wrote about that landscape in a post last year: “The Importance of Digital Identity Wallet Standards.” It provides helpful context for how these layers evolved and why they interact the way they do.

What the Digital Credential API actually is

The DC API is a protocol under development at the W3C. It isn’t a standard yet, it isn’t a wallet, and it isn’t an OS-level API. It’s one layer in a larger system.

What it actually does:

1. The browser receives a request to present a credential.
2. That request uses a specific presentation protocol, such as ISO/IEC 18013-7 Annex C (from the mDL ecosystem) or OpenID for Verifiable Presentations (OpenID4VP, developed in the OpenID Foundation).
3. The DC API passes that request to the device’s platform-specific APIs.
4. The OS hands the request to the wallet(s) installed on the device.
5. If the credential needs to move between devices, the DC API relies on the Client to Authenticator Protocol (CTAP, developed in the FIDO Alliance) for secure cross-device interactions.

That’s it. The DC API connects a request from the browser to a wallet. Everything else happens above or below it. Tim Cappalli is doing amazing work documenting this on the Digital Credentials Developers website.
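For orientation, the browser-facing surface of the DC API is deliberately small. The sketch below shows roughly what a verifier’s page does; because the specification is still a draft, the exact member names reflect one point-in-time shape and may change, and the protocol-specific payload is elided rather than invented.

// Illustrative only: a verifier page asking the browser for a credential presentation.
// The shape of the "digital" member follows one draft snapshot of the DC API and may
// change before the spec is final; the request payload itself belongs to the
// presentation protocol (e.g., OpenID4VP), not to the DC API.
async function requestCredentialPresentation() {
  const credential = await navigator.credentials.get({
    digital: {
      requests: [
        {
          protocol: "openid4vp", // or an ISO 18013-7 Annex C protocol identifier
          data: { /* protocol-specific request, elided */ },
        },
      ],
    },
  });
  // Wallet selection and user consent happen in the browser/OS/wallet layers.
  // The page only receives the protocol-specific response, which it should
  // hand to its backend for verification.
  return credential;
}

Everything interesting about wallet choice, protocol negotiation, and user consent happens in the layers that call passes through, which is exactly the point: the API is a thin bridge, not the whole system.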

A similar dynamic showed up years ago with Identity Provider Discovery, where some stakeholders wanted a trusted, independent way to verify that the user was being shown the correct identity providers. Some would like the DC API to offer similar guarantees for wallets and credentials. But that kind of oversight is not in scope for this layer. The DC API doesn’t govern UX, verify correctness, or audit the platform; it only bridges protocols to the OS.

Two important clarifications

As one of the co-chairs of the W3C group working on the DC API, I spend a lot of time keeping the specification tightly scoped to the layer it actually controls. Outside the working group, though, “the DC API” often becomes shorthand for every layer in the wallet stack, which results in unfortunate (at least from my perspective) confusion.

The DC API supports multiple presentation protocols, but browsers, OSs, and wallets don’t have to.

The DC API can transport both Annex C and OpenID4VP. But support across layers varies:

Google supports both protocols. Apple supports Annex C only. Third-party wallets choose based on product goals. Government-built wallets align protocol choices with policy, privacy, and interoperability requirements.

So while the DC API can support multiple protocols, the ecosystem around it is not uniform. That’s a separate but very relevant problem. At this time, only one vendor fully supports this critical standard, and yet the alternative is the less secure option of custom URL schemes.

The DC API allows multiple wallets on one device, but the ecosystem isn’t ready.

In theory, multiple wallets are fine. In practice, this raises unresolved questions:

How should a device present wallet choices? How does a wallet advertise which credentials it holds? What happens when wallets support different protocols?

These aren’t DC API issues, but misunderstandings about them often land at the DC API’s feet. So why not make it the DC API’s responsibility? There are reasons.

Why pressure lands on the DC API

Some requirements emerging from the European Commission would require changes in how the OS platform layer behaves; that’s the layer that controls platform APIs, secure storage, inter-app communication, and hardware-backed protections.

But the OS platform layer is proprietary. No external standards body governs it, and regulators cannot directly influence it.

The EC can influence some other layers. For example, they engage actively with the OpenID Foundation’s OpenID4VP work. But OpenID4VP has already been released as a final specification. The EC can request clarifications or plan for future revisions, but they cannot reshape the protocol on the timeline required for deployment.

That leaves the DC API.

Because the DC API is still in draft, it is one of the few open, standards-based venues where the EC can place requirements for transparency, security controls, and protocol interoperability. It is, quite literally, the only part of this stack where immediate governance pressure is possible.

When pressure lands on the wrong layer

The problem arises when that pressure is directed at a layer that does not control the behaviors in question. Some EC requirements cannot be met without OS-level changes, and those changes are outside the influence of a W3C specification. The deeper issue is that governments need predictable, enforceable behavior from digital wallets—behavior that works the same way across browsers, devices, and vendors. But if support for key standards like Annex C or OpenID4VP varies by platform, and wallet behavior differs across OS ecosystems, governments are left with only two real levers: regulate platforms directly, or mandate interoperability requirements that implementations must meet. Neither of those levers sits at the DC API layer. That layer can expose capabilities, but it cannot enforce consistency across implementations.

Regulators aren’t wrong to feel frustrated. They want outcomes—stronger security, clearer transparency, technical mechanisms supporting regulatory oversight, and better privacy protections—that would require deeper hooks into the platform. But today, the only open venue available to them is the DC API, not the proprietary OS layers below it, and not a recently finalized protocol like OpenID4VP. By pressuring the DC API, regulators hope to encourage OS vendors to change their ways.

The missing layer: the W3C TAG’s concerns about credential abuse

Another factor complicating the landscape is that each layer is thinking about the “big picture.” The W3C Technical Architecture Group (TAG) recently published a statement, Preventing Digital Credential Abuse, outlining the risks associated with the abuse of digital credentials, highlighting how easily these technologies can be misused for tracking, surveillance, exclusion, and other harms if guardrails aren’t in place.

Their guidance is deliberately broad. They are looking across browsers, wallets, protocols, and policy environments, and raising concerns that span multiple layers. That kind of review is their mandate: they examine the technical architecture (as implied by the name of the group) and provide guidance to ensure that protocols developed for the web are as safe and useful as possible. 

Unfortunately, the TAG is similar to the EC in that it can only influence so much when it comes to standards. Critical decisions about privacy and security often occur outside the remit of any single specification or standards organization. A browser can mitigate some risks. An OS can mitigate others. Wallets can add protections. Protocols can limit what they expose. But no single layer can fix everything. That said, the EC (unlike the TAG) can regulate the issue and force platforms to specific implementations, which is its own political challenge. 

The missing governance layer

This is part of why there is so much political, architectural, and cultural pressure around digital credentials. Everyone is trying to look after the whole system, even though no layer actually controls it. This is why some stakeholders argue that a new layer—one explicitly designed for governance—may be required. A layer where wallet- and platform-facing behaviors are standardized in a way regulators can rely on, rather than inferred from proprietary implementation details. If governments want consistent privacy controls, consistent credential-selection behavior, or consistent transparency requirements, they will eventually have to either regulate platform behavior directly or create a standards-based layer that makes such oversight possible.

When political pressure meets technical layering

Digital wallets and verifiable credentials are being deployed globally, but the gravitational center of policy influence remains Brussels. The “Brussels Effect” is real. Discussions in the W3C Federated Identity WG, the OpenID Foundation’s DCP WG, and other standards groups frequently reference the EUDI ARF and eIDAS2 timelines.

Political pressure isn’t inherently bad. Sometimes it’s the only reason standards work moves at all. But when pressure targets the wrong layer, we get:

misaligned expectations
rushed architectural decisions
poorly targeted requirements
the temptation to fall back to less secure approaches (such as custom URL schemes)

Some EC expectations cannot be met without changes to the layering of the current technical solutions. Achieving the desired outcomes for privacy and transparency would require OS vendors to publicly disclose platform behaviors or implement new layers that afford regulatory oversight, provide standardized controls, or support additional protocol features.

Closing: shorthand is helpful… until it isn’t

“The DC API” is increasingly used as shorthand for the entire credential-presentation ecosystem. It’s understandable and perhaps unavoidable. But it isn’t accurate.

When inconsistencies show up across implementations, it’s tempting to assume the DC API should be redesigned to become a more prescriptive or “smarter” layer. That is technically possible, but it has little support within the working group—and for good reason. A protocol-agnostic DC API preserves permissionless innovation and prevents any one protocol or architecture from dominating the ecosystem. It is meant to expose capabilities, not dictate which protocols must be used or how wallets should behave.

If governments need consistent, enforceable behavior across platforms—and many do—those guarantees must come from a different part of the stack or through regulation. The DC API simply does not have the authority to enforce the kinds of requirements that regulators are pointing toward, nor should it be transformed into that kind of enforcement mechanism.

If we want secure, interoperable digital wallets, we need to name the right problems and assign them to the right layers. Expecting the DC API to control behaviors outside its remit won’t get us the outcomes we want.

The real work starts with a better question: “Which layer is responsible for this?” Only then can we move on to the most important question: “How do we manage these layers to best support and protect the people using the technology?”

Everything else follows from that.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

Welcome back. Today, we’re talking about digital identity wallets because they are definitely having a moment right now—at least among the people who spend their days thinking about standards, protocols, governance, and all the delightful complexity of making digital identity work at scale.

In many circles, digital wallets aren’t just one topic among many. Instead, they have become the topic.

Furthermore, much of this momentum is coming from the European Union. Under eIDAS 2, every EU member state must deploy digital wallets and verifiable credentials that work across borders. That’s a massive mandate, and it’s shaping the conversation far beyond Europe.

Meanwhile, in the U.S., the topic looks a bit different. Deployment is happening primarily state by state, as DMVs explore mobile driver’s licenses (MDLs). The results include:

Fragmented implementation
A lack of harmonization
State-driven experimentation

Yet this patchwork now coexists with the EU’s broad alignment efforts, while other regions adapt approaches that suit their needs.

More Wallets, More Icons, More Complexity

For most everyday users, this simply shows up as more stuff on their phones—an airline ticket here, a government wallet there, and possibly more icons appearing in the future as deployments expand.

Even though phones dominate the conversation, desktops continue to matter. Different stakeholders view the shift through different lenses:

Regulators identify more areas where intervention may be required. Architects see a complex interplay between browsers, operating systems, protocols, and wallet design. Product teams wrestle with the UX and expectations users bring. Lawmakers try to understand the implications and risks.

And unsurprisingly, interoperability sometimes breaks.

A great example comes from the AWS outage back in October. Headlines blamed DNS—but DNS worked exactly as designed. The failure stemmed from the implementation layer that feeds DNS, not DNS itself.

This is important, because a similar pattern is emerging in the conversations around digital wallets and the Digital Credentials API (DC API).

The DC API Becomes a Scapegoat

At last week’s W3C TPAC meeting, “It’s the DC API’s fault” became a common refrain.

However, just as with DNS, “the DC API” has become shorthand for an entire stack—browser behavior, OS behavior, application logic, protocol integration, and wallet decisions. And that shorthand obscures real issues.

So let’s step back and revisit the basics.

Why We Have Layers

The web is layered, intentionally. Fragmentation is a feature, not a bug.

Each layer plays a different role:

Browsers enforce security boundaries and mediate how websites interact with users. Operating systems manage device trust, hardware access, and permissions for inter-app communication. Wallets store credentials and manage the user experience around presentation. Protocols govern how data moves between each layer in interoperable ways.

No single layer can—or should—control the entire ecosystem.

However, political urgency tends to collide with this distributed design, creating confusion.

If you want a broader primer, check out the earlier piece The Importance of Digital Identity Wallet Standards, which explains how these layers historically evolved.

But today, our focus is the DC API.

What the DC API Actually Does

The Digital Credentials API is a protocol under development at the W3C. It is:

Not a standard yet
Not a wallet
Not an OS-level API
Just one layer with a narrow scope

Here’s the actual flow:

1. The browser receives a request to present a credential.
2. That request uses a presentation protocol—typically ISO 18013-7 (Annex C) from the mDL ecosystem, or OpenID for Verifiable Presentations (OID4VP).
3. The DC API passes the request to the operating system.
4. The OS hands it to whichever wallet(s) are installed.
5. If the credential needs to be presented across devices (e.g., from phone to desktop), the DC API uses CTAP from the FIDO Alliance.

As you can already see, this brings in several standards bodies:

ISO
OpenID Foundation
W3C
FIDO Alliance

The DC API simply bridges browser protocol traffic to the OS. Everything else happens above or below it.

Tim Cappalli has a great visual diagram of this—linked from the written blog post.

Echoes of Old Identity Problems

This is not the first time these issues have come up. Years ago, during the identity provider discovery debates, some parties wanted browsers to verify the “correct” identity provider.

Now, we’re hearing similar suggestions: that the DC API should verify whether the “correct” wallet is being used.

However, this is out of scope. The DC API does not:

Govern UX Verify correctness Audit platform behavior

It only bridges protocols.

As one of the W3C co-chairs working on the DC API, I spend a great deal of time keeping the specification tightly scoped, while the world often uses “DC API” as a catch-all term for problems elsewhere.

Clarification #1: Protocol Support Varies

The DC API supports multiple presentation protocols, but vendors do not have to.

For example:

Google supports both Annex C and OpenID4VP. Apple supports only Annex C. Third-party wallets choose based on their own goals. Government wallets choose based on policy and privacy requirements.

So while the DC API is protocol-agnostic, the surrounding ecosystem is not uniform.

Without the DC API, implementers would rely on custom URI schemes, which are problematic for security and privacy.

Clarification #2: Multiple Wallets on One Device

The DC API technically allows multiple wallets on the same device, but the ecosystem is not yet ready.

Key unanswered questions include:

How should devices present wallet choices? Should wallets advertise which credentials they hold? What if different wallets prefer different protocols?

These are important questions—but not questions for the DC API.

So Why the Pressure on the DC API?

Much of it comes from regulatory pressure, especially from the European Commission.

Some EC requirements require OS-level changes, but OS behavior is proprietary and not governed by standards bodies—so regulators cannot influence it directly.

Protocols such as OID4VP and Annex C are already nearly finalized, leaving little wiggle room.

Therefore, regulators turn to the one venue still open: the DC API Working Group.

Unfortunately, some expectations require changes outside what the DC API controls.

Governments need:

Consistent behavior across devices
Transparent selection mechanisms
Predictable privacy protections

If platform support varies, governments must either:

Regulate platforms directly, or
Mandate compliance through policy

In practice, the choice increasingly becomes regulation.

The W3C TAG Weighs In

The W3C Technical Architecture Group (TAG) recently published Preventing Digital Credential Abuse, which outlines risks including:

Tracking and surveillance
User exclusion
Privacy harms

The TAG offers broad guidance across layers. However, the TAG cannot enforce behavior.

Browsers can mitigate some risks.
OS platforms can mitigate others.
Wallets can add protections.
Protocols can define boundaries.

But no single layer can solve them all.

The European Commission can regulate, though doing so creates additional complexities.

A Possible New Governance Layer?

Because of the competing pressures—political, architectural, and cultural—some stakeholders now wonder if the ecosystem may eventually require a new governance layer.

Such a layer could standardize:

Wallet-facing behaviors
Platform-facing behaviors
Privacy controls
Transparency requirements

This would give regulators a consistent target, rather than relying on inferred or proprietary platform behavior.

The Brussels Effect

Digital wallets and verifiable credentials are being deployed globally, but Brussels still exerts immense policy influence.

You can feel it in:

W3C Working Groups
OpenID Foundation discussions
Broader standards ecosystem conversations

Political pressure isn’t inherently bad—it often accelerates progress. But when the pressure is misaligned with where change must occur, the results are predictable:

Misaligned expectations
Rushed architecture decisions
Poorly targeted requirements
A retreat to less secure options (like custom URI schemes)

The DC API Is Not the Entire Ecosystem

The DC API often becomes shorthand for the entire credential presentation ecosystem—but that’s inaccurate.

Protocols live in different standards bodies. Implementations live in different layers. Governments and companies make different choices.

When inconsistencies appear, some suggest making the DC API more prescriptive. While technically possible, it has little support—for good reason.

A protocol-agnostic DC API:

Preserves permissionless innovation
Prevents one protocol from dominating
Exposes capabilities
Avoids dictating wallet behavior

If governments need enforceable consistency, it must come from either:

Regulation, or
A new standards-based governance layer

It won’t come from the DC API.

Closing Thoughts

If we want secure, interoperable digital wallets, we must name the right problems and assign them to the right layers.

The key question is:

Which layer is responsible for this component?

Once we answer that, we can address how to manage the layers to best support and protect users.

Everything else follows from that.

Outro

And that wraps up this week’s episode of the Digital Identity Digest.

If this helped clarify a complex topic, please share it with a colleague and connect with me on LinkedIn @hlflanagan.

Subscribe, leave a rating or review, and check out the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Digital Identity Wallet Standards, the DC API, and Politics appeared first on Spherical Cow Consulting.


FastID

How Fastly and Yottaa Transform Site Performance with Early Hints

Early Hints for Compute is now generally available, dramatically improving website performance and user experience by preloading essential resources.

Monday, 24. November 2025

Ontology

Web3 Horror Stories: Security Lessons Learned

Web3 horror stories lessons learned — this summary turns scary headlines into simple education: self custody, bridge safety, venue vetting, stablecoin plans, and an incident checklist. We posted the full session on X here. If you missed it, this summary gives you the practical habits to use Web3 with more confidence. Note: The information below is for education only. It describes options, questio

Web3 horror stories lessons learned — this summary turns scary headlines into simple education: self custody, bridge safety, venue vetting, stablecoin plans, and an incident checklist. We posted the full session on X here. If you missed it, this summary gives you the practical habits to use Web3 with more confidence.

Note: The information below is for education only. It describes options, questions, and factors to consider.

Web3 security foundations

Blockchain in one sentence: a public ledger where many computers agree on the same list of transactions.
Private key: the secret that lets you move your coins. Whoever controls it controls the funds.
Self custody vs custodial: self custody means you hold the keys. Custodial means a platform holds them for you.

Choosing venues: exchanges and custodians

What people usually try to learn about a venue

How customer assets are held and whether segregation is documented
Whether the venue publishes proof of reserves and whether liabilities are discussed
What governance or policy controls exist for large transfers
How compliance, KYC/AML, and audits are described
Incident history and the clarity of post-incident communications
Withdrawal behavior during periods of stress

Common storage language

Hot storage: internet-connected and convenient
Cold storage: offline and aimed at reducing online attack surface

Trading and custody involve process and oversight. Public signals such as disclosures, status pages, and audit summaries help readers form their own view of venue risk.

Bridge security: moving across chains safely

Think of bridges as corridors, not parking lots. A bridge locks or escrows assets on one chain and represents them on another. Because value crosses systems, bridges can be complex and high-value points in the flow.

Typical points to check or ask about

Official interface and domain
Current status or incident notes published by the team
Fee estimates and expected timing
Any approvals a wallet is about to grant and to which contract
Whether a small “test” transfer is supported and how it is verified
How the project communicates delays or stuck transfers
Whether there is a public pause or circuit-breaker policy

Terms that appear in bridge discussions

Validator and quorum or multisig: several independent signers must approve sensitive actions
Reentrancy: a contract is triggered again before it finishes updating state
Toolchain: compilers and languages a contract depends on; versions and advisories matter

Movement across chains touches multiple systems at once. Understanding interfaces, messages, and approvals can help readers evaluate their own tolerance for operational complexity.

Stablecoins: reserves, design, and plans

What a “dollar on-chain” can be backed by

Cash and short-term treasuries at named institutions
Crypto collateral with over-collateralization rules
Algorithmic or hybrid mechanisms

Questions readers often ask themselves

What assets back the stablecoin and where are they held
How concentration across banks, issuers, or designs is handled
What signals would trigger a partial swap or a wait-and-see approach
Which sources are monitored for updates during stress

Example elements of a personal depeg plan

Signals: price levels or time thresholds that prompt a review
Actions: small, incremental adjustments rather than all-or-nothing moves
Sources: issuer notices, status pages, and established news outlets

Designs behave differently under stress. Defining personal signals and information sources ahead of time can make decisions more methodical.
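For readers who like writing such plans down before they are needed, the elements above can even be captured as a tiny personal checklist script. Everything in the sketch is illustrative: the thresholds are arbitrary examples rather than recommendations, and the price input is whatever source a reader chooses to monitor.

// Educational sketch only: encoding personal "review" thresholds ahead of time.
// The numbers are arbitrary examples, not recommendations, and the price value
// comes from whatever source a reader decides to monitor.
const personalPlan = {
  reviewBelow: 0.995, // price level that prompts a closer look at official sources
  actBelow: 0.98,     // price level that prompts the pre-decided, incremental action
  sources: ["issuer notices", "status pages", "established news outlets"],
};

function checkStablecoin(price, plan = personalPlan) {
  if (price < plan.actBelow) return "Follow the pre-decided incremental steps";
  if (price < plan.reviewBelow) return "Review official sources before doing anything";
  return "No action; continue routine monitoring";
}

console.log(checkStablecoin(0.991)); // "Review official sources before doing anything"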

Human layer protection: phishing, privacy, browser hygiene

Patterns commonly seen in phishing or social engineering

Urgency or exclusivity, requests to “verify” a wallet, surprise airdrops
Lookalike domains, QR codes from unknown accounts, unsigned or opaque transactions
Requests for seed phrases or private keys (legitimate support does not request these)

Privacy points that often come up

Use of a work or pickup address for hardware deliveries
Awareness that marketing databases can leak personal details

Browser and device considerations people weigh

A separate browser profile for web3 use with minimal extensions
Regular device and wallet firmware updates
For shared funds, whether a multisig or policy-based account would add useful checks

Many losses begin with human interaction rather than code. Recognizing common patterns can help readers evaluate messages and prompts more calmly.

Web3 security glossary

Bridge: locks an asset on chain A and issues a representation on chain B
Wrapped token: an IOU on one chain representing an asset on another
Oracle: external data or price feed for smart contracts
Reentrancy: re-entering a contract before the state updates, which can enable over-withdrawal
Multisig or quorum: multiple keys must sign before funds move
Proof of reserves: an attestation that holdings cover obligations and is meaningful only if it includes liabilities
Self custody: you hold the private keys which brings more responsibility and less venue risk
Cold storage: offline key storage that is safer from online attack
KYC or AML: identity and anti-money-laundering controls
Seed phrase: the words that are your wallet. Anyone with them can empty it

Important definitions

Keys

Where are long-term funds held?
Is there a way to verify address and network before larger transfers?
Is a small confirmation transfer practical in the current situation?

Approvals

Which contracts currently have spending permission?
Are there tools to review or remove old allowances if desired?
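One way to make the allowance question above concrete is to query a token contract directly. The sketch below uses ethers.js (v6) purely as an example; the RPC endpoint and addresses are placeholders, and many people simply use a block explorer’s token-approval page instead.

// Illustrative sketch using ethers.js (v6) to check what spending allowance a
// contract currently has for your address. The RPC URL and addresses are placeholders.
import { Contract, JsonRpcProvider, formatUnits } from "ethers";

const provider = new JsonRpcProvider("https://example-rpc.invalid"); // placeholder endpoint
const erc20Abi = ["function allowance(address owner, address spender) view returns (uint256)"];

async function checkAllowance(tokenAddress, ownerAddress, spenderAddress) {
  const token = new Contract(tokenAddress, erc20Abi, provider);
  const allowance = await token.allowance(ownerAddress, spenderAddress);
  // A very large number here often indicates an "unlimited" approval granted in the past.
  console.log(`Current allowance: ${formatUnits(allowance, 18)} tokens (assuming 18 decimals)`);
  return allowance;
}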

Bridges

Is the interface official and the status normal?
Are there recent notices about delays or upgrades?
If something looks off, where are the official communications checked?

Monitoring

Which status pages are bookmarked for wallets, bridges, and venues?
Which channels are considered primary for updates during turbulence?

Venues

Is there public information on liabilities alongside assets?
How are customer assets segregated according to the venue?
What governance and audit information is available?

Comms hygiene

How are links verified before use?
What is the process when receiving unexpected DMs or QR codes?
What information will never be shared (for example, seed phrases)?

Playbooks

What are the personal thresholds for a stablecoin price review?
What are the steps if an exchange pauses withdrawals?
What is the process if a wallet compromise is suspected?

Note for readers

This article is an educational takeaway from our community call. The full call is on X here. It is not advice. It is meant to help readers develop their own questions, checklists, and comfort levels when using web3 tools.

Web3 Horror Stories: Security Lessons Learned was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


ONG Tokenomics Adjustment Proposal Passes Governance Vote

The Ontology community has voted, and the results are in: the ONG Tokenomics Adjustment Proposal has officially passed. After three days of voting, from October 28 to October 31, 2025 (UTC), Ontology Triones Nodes reached a unanimous decision in favor of the proposal. The proposal secured over 117 million votes in approval, signaling strong consensus within the network to move forward with the ne

The Ontology community has voted, and the results are in: the ONG Tokenomics Adjustment Proposal has officially passed.

After three days of voting, from October 28 to October 31, 2025 (UTC), Ontology Triones Nodes reached a unanimous decision in favor of the proposal. The proposal secured over 117 million votes in approval, signaling strong consensus within the network to move forward with the next phase of ONG’s evolution.

A Vote for Long-Term Sustainability

This proposal represents a significant step in refining ONG’s tokenomics to ensure long-term stability, strengthen staking incentives, and promote sustainable ecosystem growth.

Here’s a quick recap of what’s changing and why it matters.

Key Objectives

Cap total ONG supply at 800 million.
Lock ONT and ONG equivalent to 100 million ONG in value to strengthen liquidity and reduce circulating supply.
Rebalance incentives for ONT stakers while ensuring long-term token sustainability.
ONG Max and Total Supply will decrease from 1 billion to 800 million, with 200 million ONG burned immediately.
ONG Circulating Supply remains unchanged immediately after the event; however, circulating supply could drop to around 750 million (assuming that 1 $ONG = 1 $ONT) in the future due to the permanent lock mechanism.

Implementation Plan

Adjust ONG Release Curve

Cap total supply at 800 million ONG. Extend total release period from 18 to 19 years. Maintain a consistent 1 ONG per second release rate for the remaining years.

Released ONG Allocation

80% of released ONG will continue to flow to governance as ONT staking incentives. 20%, plus transaction fees, will be contributed to ecological liquidity.

Swap Mechanism

ONG will be used to acquire ONT within a set fluctuation range.
The acquired ONG and ONT will be paired to provide liquidity and generate LP tokens.
LP tokens will be burned, permanently locking the underlying assets to maintain supply discipline.

Community Q&A Highlights

Q1. How long will the ONT + ONG (worth 100 million ONG) be locked?

It’s a permanent lock.

Q2. Why extend the release period if total ONG supply decreases?

Under the previous model, the release rate increased sharply in the final years. By keeping the release rate steady at 1 ONG per second, the new plan slightly extends the schedule — from 18 to roughly 19 years — while maintaining predictable emissions.
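The steady rate is easy to sanity-check with simple arithmetic: at 1 ONG per second, emissions come to roughly 31.5 million ONG per year. The snippet below only performs that arithmetic; the actual schedule and allocations are defined by the proposal itself.

// Back-of-the-envelope arithmetic for a 1 ONG/second release rate.
const ongPerSecond = 1;
const secondsPerYear = 60 * 60 * 24 * 365;
console.log((ongPerSecond * secondsPerYear).toLocaleString()); // "31,536,000" ONG per year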

Q3. Will ONT staking APY be affected?

Rewards will shift slightly, with ONG emissions reduced by around 20%. However, as ONG becomes scarcer, its market value could rise, potentially offsetting or even improving overall APY.

Q4. What does this mean for the Ontology ecosystem?

With the total supply capped, 200 million ONG burned immediately, and $ONG and $ONT worth 100 million ONG permanently locked, effective circulating supply could drop to around 750 million (assuming that 1 $ONG = 1 $ONT). This scarcity, paired with ongoing ONG utility and swapping mechanisms, should strengthen market dynamics and improve long-term network health.

Q5. Who was eligible to vote?

All Triones nodes participated via OWallet, contributing to Ontology’s decentralized governance process.

The Vote at a Glance

Proposal: ONG Tokenomics Adjustment
Voting Period: Oct 28–31, 2025 (UTC)
Vote Status: ✅ Approved
Total Votes in Favor: 117,169,804
Votes Against: 0
Status: Finished

What Happens Next

With the proposal approved, the Ontology team will begin implementing the updated tokenomics plan according to the outlined schedule. The gradual rollout will ensure stability across the staking ecosystem and DEX liquidity pools as the new mechanisms are introduced.

This marks an important milestone in Ontology’s ongoing effort to evolve its token economy and strengthen decentralized governance.

As always, we thank our Triones nodes for participating and shaping the direction of the Ontology network.

Stay tuned for implementation updates and the next phase of Ontology’s roadmap.

ONG Tokenomics Adjustment Proposal Passes Governance Vote was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


liminal (was OWI)

Friendly ACH in Banking

The post Friendly ACH in Banking appeared first on Liminal.co.


Okta

Unlock the Secrets of a Custom Sign-In Page with Tailwind and JavaScript

We recommend redirecting users to authenticate via the Okta-hosted sign-in page powered by the Okta Identity Engine (OIE) for your custom-built applications. It’s the most secure method for authenticating. You don’t have to manage credentials in your code and can take advantage of the strongest authentication factors without requiring any code changes. The Okta Sign-In Widget (SIW) built into t

We recommend redirecting users to authenticate via the Okta-hosted sign-in page powered by the Okta Identity Engine (OIE) for your custom-built applications. It’s the most secure method for authenticating. You don’t have to manage credentials in your code and can take advantage of the strongest authentication factors without requiring any code changes.

The Okta Sign-In Widget (SIW) built into the sign-in page does the heavy lifting of supporting the authentication factors required by your organization. Did I mention policy changes won’t need any code changes?

But you may think the sign-in page and the SIW are a little bland. And maybe too Okta for your needs? What if you could have a page like this?

With a bright and colorful responsive design change befitting a modern lifestyle.

Let’s add some color, life, and customization to the sign-in page.

In this tutorial, we will customize the sign-in page for a fictional to-do app. We’ll make the following changes:

Use the Tailwind CSS framework to create a responsive sign-in page layout
Add a footer for custom brand links
Display a terms and conditions modal using Alpine.js that the user must accept before authenticating

Take a moment to read this post on customizing the Sign-In Widget if you aren’t familiar with the process, as we will be expanding from customizing the widget to enhancing the entire sign-in page experience.

Stretch Your Imagination and Build a Delightful Sign-In Experience

Customize your Gen3 Okta Sign-In Widget to match your brand. Learn to use design tokens, CSS, and JavaScript for a seamless user experience.

In the post, we covered how to style the Gen3 SIW using design tokens and customize the widget elements using the afterTransform() method. You’ll want to combine elements of both posts for the most customized experience.

Table of Contents

Customize your Okta-hosted sign-in page
Use Tailwind CSS to build a responsive layout
Use Tailwind for custom HTML elements on your Okta-hosted sign-in page
Add custom interactivity on the Okta-hosted sign-in page using an external library
Customize Okta-hosted sign-in page behavior using Web APIs
Add Tailwind, Web APIs, and JavaScript libraries to customize your Okta-hosted sign-in page

Prerequisites

To follow this tutorial, you need:

An Okta account with the Identity Engine, such as the Integrator Free account
Your own domain name
A basic understanding of HTML, CSS, and JavaScript
A brand design in mind. Feel free to tap into your creativity!
An understanding of customizing the sign-in page by following the previous blog post

Let’s get started!

Before we begin, you must configure your Okta org to use your custom domain. Custom domains enable code customizations, allowing us to style more than just the default logo, background, favicon, and two colors. Sign in as an admin and open the Okta Admin Console, navigate to Customizations > Brands and select Create Brand +.

Follow the Customize domain and email developer docs to set up your custom domain on the new brand.

Customize your Okta-hosted sign-in page

We’ll first apply the base configuration using the built-in configuration options in the UI. Add your favorite primary and secondary colors, then upload your favorite logo, favicon, and background image for the page. Select Save when done. Everyone has a favorite favicon, right?

I’ll use #ea3eda and #ffa738 as the primary and secondary colors, respectively.

On to the code. In the Theme tab:

Select Sign-in Page in the dropdown menu
Select the Customize button
On the Page Design tab, select the Code editor toggle to see an HTML page

Note

You can only enable the code editor if you configure a custom domain.

You’ll see the lightweight IDE already has code scaffolded. Press Edit and replace the existing code with the following.

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <meta name="robots" content="noindex,nofollow" />
  <!-- Styles generated from theme -->
  <link href="{{themedStylesUrl}}" rel="stylesheet" type="text/css">
  <!-- Favicon from theme -->
  <link rel="shortcut icon" href="{{faviconUrl}}" type="image/x-icon">
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Inter+Tight:ital,wght@0,100..900;1,100..900&family=Manrope:wght@200..800&display=swap" rel="stylesheet">
  <title>{{pageTitle}}</title>
  {{{SignInWidgetResources}}}
  <style nonce="{{nonceValue}}">
    :root {
      --font-header: 'Inter Tight', sans-serif;
      --font-body: 'Manrope', sans-serif;
      --color-gray: #4f4f4f;
      --color-fuchsia: #ea3eda;
      --color-orange: #ffa738;
      --color-azul: #016fb9;
      --color-cherry: #ea3e84;
      --color-purple: #b13fff;
      --color-black: #191919;
      --color-white: #fefefe;
      --color-bright-white: #fff;
      --border-radius: 4px;
      --color-gradient: linear-gradient(12deg, var(--color-fuchsia) 0%, var(--color-orange) 100%);
    }
    {{#useSiwGen3}}
    html {
      font-size: 87.5%;
    }
    {{/useSiwGen3}}
    #okta-auth-container {
      display: flex;
      background-image: {{bgImageUrl}};
    }
    #okta-login-container {
      display: flex;
      justify-content: center;
      align-items: center;
      height: 100vh;
      width: 50vw;
      background: var(--color-white);
    }
  </style>
</head>
<body>
  <div id="okta-auth-container">
    <div id="okta-login-container"></div>
  </div>
  <!-- "OktaUtil" defines a global OktaUtil object that contains methods used to complete the Okta login flow. -->
  {{{OktaUtil}}}
  <script type="text/javascript" nonce="{{nonceValue}}">
    // "config" object contains default widget configuration
    // with any custom overrides defined in your admin settings.
    const config = OktaUtil.getSignInWidgetConfig();
    config.theme = {
      tokens: {
        BorderColorDisplay: 'var(--color-bright-white)',
        PalettePrimaryMain: 'var(--color-fuchsia)',
        PalettePrimaryDark: 'var(--color-purple)',
        PalettePrimaryDarker: 'var(--color-purple)',
        BorderRadiusTight: 'var(--border-radius)',
        BorderRadiusMain: 'var(--border-radius)',
        PalettePrimaryDark: 'var(--color-orange)',
        FocusOutlineColorPrimary: 'var(--color-azul)',
        TypographyFamilyBody: 'var(--font-body)',
        TypographyFamilyHeading: 'var(--font-header)',
        TypographyFamilyButton: 'var(--font-header)',
        BorderColorDangerControl: 'var(--color-cherry)'
      }
    }
    config.i18n = {
      'en': {
        'primaryauth.title': 'Log in to create tasks',
      }
    }
    // Render the Okta Sign-In Widget
    const oktaSignIn = new OktaSignIn(config);
    oktaSignIn.renderEl({ el: '#okta-login-container' },
      OktaUtil.completeLogin,
      function (error) {
        // Logs errors that occur when configuring the widget.
        // Remove or replace this with your own custom error handler.
        console.log(error.message, error);
      }
    );
  </script>
</body>
</html>

This code adds style configuration to the SIW elements and configures the text for the title when signing in. Press Save to draft.

We must allow Okta to load font resources from an external source, Google, by adding the domains to the allowlist in the Content Security Policy (CSP).

Navigate to the Settings tab for your brand’s Sign-in page. Find the Content Security Policy and press Edit. Add the domains for external resources. In our example, we only load resources from Google Fonts, so we added the following two domains:

*.googleapis.com *.gstatic.com

Select Save to draft, then Publish to view your changes.

The sign-in page looks more stylized than before. But if you try resizing the browser window, you’ll see it doesn’t handle different form factors well. Let’s use Tailwind CSS to add a responsive layout.

Use Tailwind CSS to build a responsive layout

Tailwind makes delivering cool-looking websites much faster than writing our CSS manually. We’ll load Tailwind via CDN for our demonstration purposes.

Add the CDN to your CSP allowlist:

https://cdn.jsdelivr.net

Navigate to Page Design, then Edit the page. Add the script to load the Tailwind resources in the <head>. I added it after the <style></style> definitions before the </head>.

<script src="https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4" nonce="{{nonceValue}}"></script>

Loading external resources, like styles and scripts, requires a CSP nonce to mitigate cross-site scripting (XSS). You can read more about the CSP nonce on the CSP Quick Reference Guide.

Note

Don’t use Tailwind from NPM CDN for production use cases. The Tailwind documentation notes this is for experimentation and prototyping only, as the CDN has rate limits. If your brand uses Tailwind for other production sites, you’ve most likely defined custom mixins and themes in Tailwind. Therefore, reference your production Tailwind resources in place of the CDN we’re using in this post.

Remove the styles for #okta-auth-container and #okta-login-container from the <style></style> section. We can use Tailwind to handle it. The <style></style> section should only contain the CSS custom properties defined in :root and the directive to use SIW Gen3.

Add the styles for Tailwind. We’ll add the classes to show the login container without the hero image in smaller form factors, then display the hero image with different widths depending on the breakpoints.

The two div containers look like this:

<div id="okta-auth-container" class="h-screen flex bg-(--color-gray) bg-[{{bgImageUrl}}]"> <div id="okta-login-container" class="w-full min-w-sm lg:w-2/3 xl:w-1/2 bg-(image:--color-gradient) lg:bg-none bg-(--color-white) flex justify-center items-center"></div> </div>

Save the file and publish the changes. Feel free to test it out!

Use Tailwind for custom HTML elements on your Okta-hosted sign-in page

Tailwind excels at adding styled HTML elements to websites. We can also take advantage of this. Let’s say you want to maintain continuity of the webpage from your site through the sign-in page by adding a footer with links to your brand’s sites. Adding this new section involves changing the HTML node structure and styling the elements.

We want a footer pinned to the bottom of the view, so we’ll need a new parent container with vertical stacking and ensure the height of the footer stays consistent. Replace the HTML node structure to look like this:

<div class="flex flex-col min-h-screen"> <div id="okta-auth-container" class="flex grow bg-(--color-gray) bg-[{{bgImageUrl}}]"> <div class="w-full min-w-sm lg:w-2/3 xl:w-1/2 bg-(image:--color-gradient) lg:bg-none bg-(--color-white) flex justify-center items-center"> <div id="okta-login-container"></div> </div> </div> <footer class="font-(family-name:--font-body)"> <ul class="h-12 flex justify-evenly items-center text-(--color-azul)"> <li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Terms</a></li> <li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Docs</a></li> <li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com/blog">Blog</a></li> <li><a class="hover:text-(--color-orange) hover:underline" href="https://devforum.okta.com">Community</a></li> </ul> </footer> </div>

Everything redirects to the Okta Developer sites. 😊 I also maintained the style of font, text colors, and text decoration styles to match the SIW elements. CSS custom properties make consistency manageable.

Feel free to save and publish to check it out!

Add custom interactivity on the Okta-hosted sign-in page using an external library

Tailwind is great at styling HTML elements, but it’s not a JavaScript library. If we want interactive elements on the sign-in page, we must rely on Web APIs or libraries to assist us. Let’s say we want to ensure that users who sign in to the to-do app agree to the terms and conditions. We want a modal that blocks interaction with the SIW until the user agrees.

We’ll use Alpine for the heavy lifting because it’s a lightweight JavaScript library that suits this need. We add the library via the NPM CDN, as we have already allowed the domain in our CSP. Add the following to the <head></head> section of the HTML. I added mine directly after the Tailwind script.

<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js" nonce="{{nonceValue}}"></script>

Note

We’re including Alpine from the NPM CDN for demonstration and experimentation. For production applications, use a CDN that supports production scale. The NPM CDN applies rate limiting to prevent production-grade use.

Next, we add the HTML tags to support the modal. Replace the HTML node structure to look like this:

<div class="flex flex-col min-h-screen"> <div id="modal" x-data x-cloak x-show="$store.modal.open" x-transition:enter="transition ease-out duration-300" x-transition:enter-start="opacity-0" x-transition:enter-end="opacity-100" x-transition:leave="transition ease-in duration-200" x-transition:leave-start="opacity-100" x-transition:leave-end="opacity-0 hidden" class="fixed inset-0 z-50 flex items-center justify-center bg-(--color-black)/80 bg-opacity-50"> <div x-transition:enter="transition ease-out duration-300" x-transition:enter-start="opacity-0 scale-90" x-transition:enter-end="opacity-100 scale-100" x-transition:leave="transition ease-in duration-200" x-transition:leave-start="opacity-100 scale-100" x-transition:leave-end="opacity-0 scale-90" class="bg-(--color-white) rounded-(--border-radius) shadow-lg p-8 max-w-md w-full mx-4"> <h2 class="text-2xl font-(family-name:--font-header) text-(--color-black) mb-4 text-center">Welcome to to-do app</h2> <p class="text-(--color-black) mb-6">This app is in beta. Thank you for agreeing to our terms and conditions.</p> <button @click="$store.modal.hide()" class="w-full bg-(--color-fuchsia) hover:bg-(--color-orange) text-(--color-bright-white) font-medium py-2 px-4 rounded-(--border-radius) transition duration-200"> Agree </button> </div> </div> <div id="okta-auth-container" class="flex grow bg-(--color-gray) bg-[{{bgImageUrl}}]"> <div class="w-full min-w-sm lg:w-2/3 xl:w-1/2 bg-(image:--color-gradient) lg:bg-none bg-(--color-white) flex justify-center items-center"> <div id="okta-login-container"></div> </div> </div> <footer class="font-(family-name:--font-body)"> <ul class="h-12 flex justify-evenly items-center text-(--color-azul)"> <li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Terms</a></li> <li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Docs</a></li> <li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com/blog">Blog</a></li> <li><a class="hover:text-(--color-orange) hover:underline" href="https://devforum.okta.com">Community</a></li> </ul> </footer> </div>

It’s a lot to add, but I want the smooth transition animations. 😅 The built-in enter and leave states make adding the transition animation so much easier than doing it manually.

Notice we’re using a state value to determine whether to show the modal. We’re using global state management, and setting it up is the next step. We’ll initialize the state when Alpine initializes. Find the comment // Render the Okta Sign-In Widget within the <script></script> section, and add the following code, which runs once Alpine initializes:

document.addEventListener('alpine:init', () => {
  Alpine.store('modal', {
    open: true,
    show() {
      this.open = true;
    },
    hide() {
      this.open = false;
    }
  });
});

The event listener watches for the alpine:init event and runs a function that registers a store named modal with Alpine. The modal store contains a property that tracks whether the modal is open, plus helper methods for showing and hiding it.

When you save and publish, you’ll see the modal upon site reload!

The modal stays put even if the user presses Esc or clicks the scrim. Users must agree to the terms to continue.

Customize Okta-hosted sign-in page behavior using Web APIs

We display the modal as soon as the webpage loads. It works, but we can also display the modal after the Sign-In Widget renders. Doing so allows us to use the nice enter and leave CSS transitions Alpine supports. We want to watch for changes to the DOM within the <div id="okta-login-container"></div>. This is the parent container that renders the SIW. We can use the MutationObserver Web API and watch for DOM mutations within the div.

In the <script></script> section, after the event listener for alpine:init, add the following code:

const loginContainer = document.querySelector("#okta-login-container");

// Use MutationObserver to watch for auth container element
const mutationObserver = new MutationObserver(() => {
  const element = loginContainer.querySelector('[data-se*="auth-container"]');
  if (element) {
    document.getElementById('modal').classList.remove('hidden');

    // Open modal using Alpine store
    Alpine.store('modal').show();

    // Clean up the observer
    mutationObserver.disconnect();
  }
});

mutationObserver.observe(loginContainer, { childList: true, subtree: true });

Let’s walk through what the code does. First, we create a variable that references the parent container for the SIW; we’ll use it as the root element for observation. Mutation observers can negatively impact performance, so it’s essential to limit the scope of the observer as much as possible.

Create the observer

We create the observer and define its callback behavior. The callback looks for an element whose data-se attribute contains the value auth-container. Okta adds a node with that data attribute for internal operations. We’ll do the same for our internal operations. 😎

Define the behavior upon observation

Once we have an element matching the auth-container data attribute, we show the modal, which triggers the enter transition animation. Then we clean up the observer.

Identify what to observe

We begin by observing the DOM and pass in the element to use as the root, along with a configuration specifying what to watch for. We want to look for changes in child elements and the subtree from the root to find the SIW elements.

Lastly, let’s make the modal appear only when the observer triggers it. Up to now, I intentionally provided code snippets that force the modal to display before the SIW renders, so you could take sneak peeks at your work as we went along.

In the HTML node structure, find the <div id="modal">. It’s missing a class that hides the modal initially. Add the class hidden to the class list. The class list for the <div> should look like this:

<div id="modal" x-data x-cloak x-show="$store.modal.open" x-transition:enter="transition ease-out duration-300" x-transition:enter-start="opacity-0" x-transition:enter-end="opacity-100" x-transition:leave="transition ease-in duration-200" x-transition:leave-start="opacity-100" x-transition:leave-end="opacity-0 hidden" class="hidden fixed inset-0 z-50 flex items-center justify-center bg-(--color-black)/80 bg-opacity-50"> <!-- Remaining modal structure here. Compare your work to the class list above --> </div>

Then, in the alpine:init event listener, change the modal’s open property to default to false:

document.addEventListener('alpine:init', () => {
  Alpine.store('modal', {
    open: false,
    show() {
      this.open = true;
    },
    hide() {
      this.open = false;
    }
  });
});

Save and publish your changes. You’ll now notice a slight delay before the modal eases into view. So smooth!

It’s worth noting that our solution isn’t foolproof; a savvy user can hide the modal and continue interacting with the sign-in widget by manipulating elements in the browser’s debugger. You’ll need extra checks and more robust code for a truly tamper-resistant approach. Still, this example gives a general idea of what’s possible and how you might approach adding interactive components to the sign-in experience.
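
As one example of an extra (still bypassable) client-side check, you could watch the modal itself and re-open it whenever it gets hidden before the user agrees. The sketch below is a starting point under stated assumptions, not part of the original walkthrough: it presumes you add a hypothetical agreed flag to the Alpine modal store and set it from the Agree button (for example, @click="$store.modal.agreed = true; $store.modal.hide()").

// Partial hardening sketch (assumes the Alpine 'modal' store from earlier,
// plus a hypothetical 'agreed' flag set by the Agree button).
const termsModal = document.getElementById('modal');

const modalGuard = new MutationObserver(() => {
  const store = Alpine.store('modal');
  const dismissed = termsModal.classList.contains('hidden') || !store.open;
  if (!store.agreed && dismissed) {
    // Re-open the modal if it was hidden without agreement
    termsModal.classList.remove('hidden');
    store.show();
  }
});

// Watch only the modal's own attributes to keep the observer's scope small
modalGuard.observe(termsModal, { attributes: true, attributeFilter: ['class', 'style'] });

Even with a guard like this, a determined user can still defeat client-side checks, so treat it as friction rather than enforcement.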

Don’t forget to test any implementation changes to the sign-in page for accessibility. The default site and the sign-in widget are accessible. Any changes or customizations we make may alter the accessibility of the site.

You can connect your brand to one of our sample apps to see it work end-to-end. Follow the instructions in the README of our Okta React Sample to run the app locally. You’ll need to update your Okta OpenID Connect (OIDC) application to work with the domain. In the Okta Admin Console, navigate to Applications > Applications and find the Okta application for your custom app. Navigate to the Sign On tab. You’ll see a section for OpenID Connect ID Token. Select Edit and select Custom URL for your brand’s sign-in URL as the Issuer value.

You’ll use the issuer value, which matches your brand’s custom URL, and the Okta application’s client ID in your custom app’s OIDC configuration.
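
For reference, the OIDC settings you end up wiring into the sample usually look something like the sketch below. The property names and placeholder values here are illustrative only; check the sample's README for the exact file and shape it expects.

// Illustrative OIDC settings for the single-page app (placeholder values, not the sample's exact config file).
const oidcConfig = {
  issuer: 'https://id.example-brand.com',          // your brand's custom URL (hypothetical value)
  clientId: '0oaEXAMPLEclientid',                  // client ID from your Okta application (placeholder)
  redirectUri: window.location.origin + '/login/callback',
  scopes: ['openid', 'profile', 'email'],
  pkce: true,
};

export default oidcConfig;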

Add Tailwind, Web APIs, and JavaScript libraries to customize your Okta-hosted sign-in page

I hope you found this post interesting and unlocked the potential of how much you can customize the Okta-hosted Sign-In Widget experience.

You can find the final code for this project in the GitHub repo.

If you liked this post, check out these resources.

Stretch Your Imagination and Build a Delightful Sign-In Experience
The Okta Sign-In Widget

Remember to follow us on LinkedIn and subscribe to our YouTube for more exciting content. Let us know how you customized the Okta-hosted sign-in page. We’d love to see what you came up with.

We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!


FastID

Why the Future of Streaming Lives at the Edge

The future of streaming lives at the edge. Explore how Fastly reduces latency, boosts performance, and unlocks sustainable content delivery for media companies.

Friday, 21. November 2025

Shyft Network

Shyft Network Brings Travel Rule Compliance to India’s First Outlet Exchange, Fincrypto


India’s crypto market, with over 90 million users, continues to demand infrastructure that balances innovation with regulatory compliance. Shyft Network has integrated Veriscope with Fincrypto, marking another step in bringing Travel Rule compliance to India’s evolving digital asset ecosystem.

Fincrypto is India’s first outlet exchange, pioneering a hybrid model that combines online crypto trading with physical walk-in locations. The platform provides instant INR-crypto on-ramp and off-ramp services, plus spot trading for Bitcoin, Ethereum, and major stablecoins (USDT, USDC, DAI, USDD). The Veriscope integration brings automated compliance through cryptographic proof verification without disrupting user experience.

Bridging the Trust Gap in India’s Crypto Market

As India’s regulatory framework for digital assets continues to take shape, Virtual Asset Service Providers need solutions that address both compliance requirements and user trust. Fincrypto’s unique approach — offering physical outlet locations alongside online services — tackles a critical challenge in emerging crypto markets: building user confidence through tangible presence while maintaining the efficiency of digital platforms.

The outlet exchange model is particularly relevant in India, where users often prefer the security of face-to-face transactions for significant financial decisions. By integrating Veriscope, Fincrypto ensures that both its physical and digital operations maintain consistent compliance standards, creating a seamless experience whether customers walk into an outlet or trade online.

This integration enables Fincrypto to verify wallet ownership, conduct secure data exchanges with counterparty VASPs, and maintain audit trails — all while preserving user privacy through cryptographic verification rather than centralized data storage.

Compliance for India’s Physical-Digital Hybrid

The Shyft Network-Fincrypto integration brings key capabilities to India’s first outlet exchange:

Automated Travel Rule Compliance: Cryptographic proof exchanges handle regulatory requirements without disrupting transactions, whether initiated online or at physical locations
Privacy-First Verification: User Signing technology protects customer data while enabling secure identity verification across all service channels
Scalable Infrastructure: Built-in compliance architecture supports Fincrypto’s expansion plans, including their roadmap for 100+ outlets nationwide and entry into GIFT City
Cross-Platform Consistency: Uniform security and compliance standards across online platform and physical outlets, building trust with retail investors, traders, and businesses

Veriscope’s Growing Presence in India

With Fincrypto’s integration, Veriscope expands its presence in India’s rapidly growing crypto ecosystem, joining platforms like Nowory and Carret in building compliant infrastructure for the market. Each integration addresses different segments of India’s diverse digital asset landscape — from retail trading to institutional services to hybrid physical-digital models.

As India’s regulatory environment matures and global compliance standards become essential for market participants, Veriscope provides VASPs with the infrastructure needed to meet both domestic KYC/AML requirements and international FATF Travel Rule obligations. This positions Indian platforms for sustainable growth and international partnerships.

About Veriscope

Veriscope, built on Shyft Network, provides Travel Rule compliance infrastructure for Virtual Asset Service Providers worldwide. Using User Signing cryptographic technology, the platform enables secure wallet verification and data exchanges between VASPs while protecting user privacy. Veriscope simplifies regulatory compliance for crypto exchanges and payment platforms operating in regulated markets.

About Fincrypto

Fincrypto, operated by Digital Secure Service Private Limited, is India’s first outlet exchange combining online trading with physical walk-in locations. The platform offers instant INR-crypto on/off ramps and spot trading for Bitcoin, Ethereum, and stablecoins (USDT, USDC, DAI, USDD). All services are KYC-verified and compliant with Indian regulations, providing transparent pricing and instant settlement for retail and business customers.

Visit Shyft Network, subscribe to our newsletter, or follow us on X, LinkedIn, Telegram, and Medium.

Book a consultation at calendly.com/tomas-shyft or email bd@shyft.network

Shyft Network Brings Travel Rule Compliance to India’s First Outlet Exchange, Fincrypto was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 20. November 2025

Aergo

HPP Migration Portal Is Now Live Ahead of GOPAX and Coinone Listings


We’re excited to announce that the official House Party Protocol (HPP) migration portal is now open, marking a key milestone in the Aergo → HPP transition.

[Official Migration Portal and Bridge]

This portal will allow legacy Aergo/AQT holders to convert their tokens into HPP ahead of upcoming listings on GOPAX and Coinone, so users can be fully prepared when trading goes live.

As we enter this next phase, our priority is to ensure everyone can migrate and move liquidity smoothly across exchanges. The guide below explains everything you need to know.

❌Before You Trade: Deposit HPP (Mainnet) Only, Not AERGO or AQT❌

Both GOPAX and Coinone support only HPP (Mainnet) deposits and withdrawals. That means:

Do NOT deposit AERGO. Do NOT deposit AQT. Only HPP (Mainnet) tokens will be accepted for trading on both exchanges.

If you currently hold AERGO or AQT, you must first migrate them to HPP via the two-step process (Migration → Bridge) before depositing them to exchanges.

[Token Swap & Migration Guide]

Migration Guide

Your migration path depends on which network your AERGO tokens are on and where they are held. Below are the two most common cases.

Case 1: AERGO (ETH). If your AERGO tokens are on ERC-20 (held on other exchanges or private wallets):

AERGO (ETH) → Migration Portal → HPP (ETH) → HPP Bridge → HPP (Mainnet)

Case 2: AERGO (Mainnet). If your AERGO is on Mainnet, such as staking, Bithumb, or any exchange that supports Aergo Mainnet:

AERGO (Mainnet) → Migration Portal / Aergo Bridge → HPP (ETH) → HPP Bridge → HPP (Mainnet)

Only after you complete both steps (Migration and Bridge) can you deposit your HPP (Mainnet) to GOPAX or Coinone. This route ensures your legacy Mainnet tokens convert properly into HPP.

Final Step: Trade on GOPAX & Coinone

Once your tokens are on HPP (Mainnet), you can deposit them to either exchange and begin trading immediately. Both GOPAX and Coinone support the same HPP Mainnet token, ensuring consistent trading and liquidity across platforms.

❌ Do NOT send HPP (ETH) to the exchanges. Only HPP (Mainnet) deposits are supported.

HPP Migration Backed by Coinone and GOPAX, With Further Updates Ahead

What’s Next

More global exchanges will support HPP as the migration continues. We’ll provide additional guides, visuals, and tutorials to help users transition without confusion.

Welcome to the House Party Protocol, and thank you for being part of the journey!

📢 HPP Migration Portal Is Now Live Ahead of GOPAX and Coinone Listings was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


Herond Browser

Every Minute You Spend Online, 30 Ads Are Stalking You – Herond Browser Says “No”

This article reveals how Herond Browser helps you completely eliminate this digital "pollution." Discover now to experience faster, cleaner, and uninterrupted web browsing. The post Every Minute You Spend Online, 30 Ads Are Stalking You – Herond Browser Says “No” appeared first on Herond Blog.

Did you know that for every minute you spend online, 30 ads are silently stalking you and slowing down your device? From intrusive pop-ups to autoplay videos, ads are not just annoying – they drain your data and invade your privacy. It’s time to say “No” to it all. This article reveals how Herond Browser helps you completely eliminate this digital “pollution.” Discover now to experience faster, cleaner, and uninterrupted web browsing.

Why Current Adblockers Are Failing You

The Failure of Traditional Adblockers (Filter List Reliance)

Outdated Mechanism: Traditional adblockers rely heavily on fixed Filter Lists to identify and hide ads.
Easily Bypassed: This method is easily circumvented by major platforms when they update their code structures.
The Biggest Challenge: They cannot effectively counter Server-Side Ad Insertion (SSAI) technology – where ads are stitched directly into the content stream from the server – making blocking attempts futile.
Result: Ad-blocking performance is severely compromised, leading to a disrupted user experience.

The Threat of Malvertising and Tracking

Malware Risks (Malvertising): Ads are often gateways for Malvertising, allowing viruses or malware to infiltrate your device the moment an ad loads, even without a click.
Privacy Issues: Ads come with countless invisible Trackers that continuously collect data on your browsing habits, interests, and location.
The Goal: This tracking creates a detailed personal profile, severely violating your privacy to serve hyper-personalized ads.
Result: Browsing without protection means you are voluntarily trading your safety and personal information.

The Ultimate Solution: Herond Browser’s “No Compromise” Mechanism

Next-Gen Blocking Engine

Core Mechanism: Herond utilizes a highly efficient ad-blocking mechanism at the Network/DNS level.
How It Works: Ads are blocked before they can even load onto the browser, optimizing speed and saving system resources.
Defeating SSAI: Herond is capable of processing and bypassing complex ad streams like Server-Side Ad Insertion (SSAI).
Smart Content Handling: Similar to SponsorBlock, Herond intelligently identifies and skips sponsored segments or irrelevant content, delivering a seamless entertainment experience.

Security and Privacy-First Features

Anti-Tracking: Proactively blocks invisible trackers, pixels, and third-party cookies.
Privacy Protection: Ensures your browsing activity remains absolutely private, with no personal data collection.
Anti-Malvertising: Integrated protection against malicious websites with automatic Phishing alerts.
Absolute Safety: Creates a secure browsing environment, shielding you from potential malware threats.
Built-in VPN/Proxy: Enhances security by encrypting traffic and hiding your real IP address.

Unmatched Speed and Efficiency

Superior Page Load Speed: Achieves significantly faster load times by eliminating the need to download heavy scripts, images, and ad videos.
Data Savings: A clear benefit for mobile users, significantly reducing monthly 3G/4G/5G data consumption.
Battery Saver: Reduces the load on your CPU and memory by not processing ads, extending your device’s battery life.

How to Activate and Experience an Ad-Free Internet with Herond Browser

Ready to reclaim your internet? Follow these simple steps to install Herond Browser:

Step 1: Visit herond.org and click on “Download Herond”.
Step 2: Select the Herond version compatible with your current device configuration.
Step 3: Click “Download”.
Step 4: Launch Herond and start browsing safely!

Conclusion: The Future is in Your Hands, with Herond Browser

In summary, Herond Browser is more than just a browser; it is a comprehensive solution offering three core benefits: Superior Speed + Absolute Security + A Completely Ad-Free Experience.

Gone are the days of accepting 30 ads stalking you every minute and trading away your privacy. Herond puts control of the Internet back in your hands.

Don’t wait any longer. Download Herond Browser today to start experiencing web browsing as it was meant to be: fast, safe, and uninterrupted.

About Herond Browser

Herond is a browser dedicated to blocking ads and tracking cookies. With lightning-fast page load speeds, it allows you to surf the web comfortably without interruptions. Currently, Herond features two core products:

Herond Shield: Advanced software for ad-blocking and user privacy protection.
Herond Wallet: A multi-chain, non-custodial social wallet.

Herond Browser aims to bring Web 3.0 closer to global users. We hope that in the future, everyone will have full control over their own data. The browser app is now available on Google Play and the App Store, delivering a convenient experience for users everywhere.

Follow our upcoming posts for more useful information on safe and effective web usage. If you have any suggestions or questions, contact us immediately on the following platforms:

Telegram: https://t.me/herond_browser
X (Twitter): @HerondBrowser

The post Every Minute You Spend Online, 30 Ads Are Stalking You – Herond Browser Says “No” appeared first on Herond Blog.


90% of Internet Users Are Being Tracked Without Knowing It – Are You One of Them?

According to research from the Pew Research Center (2025), 90% of global internet users are having their online activities tracked. The post 90% of Internet Users Are Being Tracked Without Knowing It – Are You One of Them? appeared first on Herond Blog.

Every day you surf the web, your personal data is being collected without your consent. According to research from the Pew Research Center (2025), 90% of global internet users are having their online activities tracked.

They watch everything from your search history and shopping habits to your precise geographic location. With every click and every website visit, hundreds of hidden trackers are recording your behavior to serve targeted ads. Are you being “stalked” online without realizing it? Don’t lose your privacy before it’s too late!

The Reality of Online Tracking

You are being tracked – right now.

Whether you’re scrolling through TikTok or using payment apps, hundreds of cookies and trackers are silently recording your behavior. According to Statista (2025), 72% of global users are worried about their privacy. In Vietnam, this number is even higher.

Cisco (2025) revealed a shocking statistic: 85% of users do not know their personal data is being sold to advertising companies and even hackers.

If you don’t act today, your data will be exploited every single second. So, what can we do?

The Consequences of Being Tracked

Being tracked isn’t just “annoying” – it is a real threat.

Financial Risk: McKinsey (2025) reports that 71% of users would stop shopping with a brand if they knew their data was being abused.
Personal Harassment: Pew Research (2025) warns that 44% of users in Vietnam have faced online harassment due to leaked personal information. This ranges from phishing messages and “stalker” ads to the theft of bank accounts.

Millions have lost their data permanently. Do you want to be the next victim?

The Solution is Coming

Herond Browser – The Breakthrough Ad-Blocking Browser of 2025 is Launching Soon!

In just a few days, we will officially launch Herond Browser – the ultimate tool to block 100% of trackers, stop global surveillance, boost browsing speed by 3x, and eliminate intrusive ads on all devices: mobile, desktop, and tablet.

Absolute Security: No cookies, no traces left behind.
Ad-Free Experience: Browse the web clean and fast.
Herond Shield Integration: Experience smooth, A-Z safety with our built-in protection suite.

Conclusion

90% of Internet users are being tracked without knowing it – and you could be one of them.

Every click, every transaction, and every message is being silently recorded by hundreds of trackers, sold to advertisers, or falling into the hands of hackers. You don’t have much time left. Act now to protect yourself in the online space!

About Herond

Herond is a browser dedicated to blocking ads and tracking cookies. With lightning-fast page load speeds, it allows you to surf the web comfortably without interruptions. Currently, Herond features two core products:

Herond Shield: Advanced software for ad-blocking and user privacy protection.
Herond Wallet: A multi-chain, non-custodial social wallet.

Herond Browser aims to bring Web 3.0 closer to global users. We hope that in the future, everyone will have full control over their own data. The browser app is now available on Google Play and the App Store, delivering a convenient experience for users everywhere.

Follow our upcoming posts for more useful information on safe and effective web usage. If you have any suggestions or questions, contact us immediately on the following platforms:

Telegram: https://t.me/herond_browser
X (Twitter): @HerondBrowser

The post 90% of Internet Users Are Being Tracked Without Knowing It – Are You One of Them? appeared first on Herond Blog.


FastID

Mitigating DDoS attacks faster and with even more accuracy

Learn how Fastly's Adaptive Threat Engine update for DDoS Protection boosts mitigation accuracy and reduces Mean Time to Mitigation by 72% for the holidays.

Wednesday, 19. November 2025

liminal (was OWI)

Redefining Age Assurance

The post Redefining Age Assurance appeared first on Liminal.co.



Recognito Vision

How an Age Verification System Can Eliminate Fake IDs and Compliance Risks in India


In today’s fast-moving digital world, knowing the real age of your users is more than a safety measure. It is a responsibility. Businesses in gaming, e-commerce, adult platforms, retail, and social media, especially those growing rapidly in India’s digital economy, are under pressure to verify users quickly and accurately. Fake IDs such as edited Aadhaar cards, PAN cards, or altered Driving Licences are becoming dangerously good. Regulations are becoming stricter. And minors are finding new creative ways to slip into spaces they should not be in.

This is where a modern age verification system becomes a lifesaver. It gives companies the confidence that every interaction is genuine, compliant, and safe.

Artificial intelligence has pushed online age verification into a new era. Whether it is age-checking software, an AI age-detection tool, a face age checker, or a complete age verification solution, the goal remains the same. Keep users safe. Keep platforms compliant. And keep things running smoothly without slowing down real adults.

 

The Evolution of Age Identification Technology in India’s Growing Digital Landscape

Age verification has changed dramatically over the years. Not long ago, websites relied on a simple checkbox that asked users if they were eighteen. Teenagers checked that box faster than they could finish a bag of chips. That system lasted only because there were no better options.

Today the situation is very different. Powerful AI models examine IDs, faces, devices, and behavior patterns to determine whether a user is genuinely of legal age. Modern tools used for age identification can verify a person in milliseconds while following global data standards such as the rules detailed in the GDPR compliance guidelines.

This matters even more in India, where millions of new users join digital platforms every month.

Most systems combine document scanning with facial comparison. AI analyzes facial landmarks to match them with ID photos. It checks lighting, angles, micro textures, and even subtle facial changes. Performance studies published in the NIST Face Recognition Vendor Test show that advanced models now reach extremely high accuracy levels in identity and age validation.

These advances prove that digital age validation is no longer a basic checkbox. It is a critical layer of online trust.

 

Why Indian Businesses Are Switching to Smart Age Verification Solutions

Manual checks are slow, inconsistent, and easy to fool. A teenager with enough determination can outsmart a distracted employee or upload a slightly edited Indian ID. A sturdy digital age verification solution removes these weak spots.

Businesses across retail, entertainment, gaming, and online marketplaces use automated checks to avoid compliance issues and keep their platforms safe. This is especially relevant in India’s mobile-first ecosystem, where large onboarding volumes require fast, accurate verification.

Here is why companies are embracing this approach:

Faster verification: Users get approved in seconds.
Higher accuracy: AI detects manipulated IDs that humans would overlook.
Better compliance: Aligned with global privacy and safety requirements.
Strong fraud prevention: Impossible to bypass with a borrowed or stolen ID.
Smoother user experience: No long forms or awkward verification steps.

Many organizations explore these capabilities through the ID Document Recognition SDK, which lets them integrate age checks into onboarding flows effortlessly.

 

How Device-Based Age Verification Strengthens Security for Indian Platforms

Not every fraud attempt happens inside a document. Sometimes the red flag is hidden inside the device itself. That is why many companies use device-based age verification to detect suspicious behavior before any user reaches the final verification screen.

This system analyzes device history, repeated identity switching, login patterns, and other signals that reveal risky attempts. It stops situations where minors use the IDs of older siblings or attempt multiple logins on the same phone, a common pattern in many Indian households where devices are shared.

Combined with automated checks, device intelligence creates a multi-layered shield that keeps platforms safe from repeat offenders.

 

How AI Age Detection Helps Screen Users in Seconds

AI age detection has become a powerful screening tool. It gives platforms a quick estimate of a user’s age based on facial features. This does not require storing full images. Instead, the system evaluates the face momentarily and keeps only an age range.

Many companies rely on benchmark results shared in the NIST FRVT evaluations, which show how modern models estimate ages with impressive consistency. This is especially useful for platforms that offer age verification for adult content, including those operating in India where age-restricted content rules are strict.

AI age estimation is not a final verification step. It is an intelligent filter that helps determine whether a user is likely old enough before they undergo full verification.

 

Why an Adult Verification System Protects Businesses in India

A reliable adult verification system shields companies from violations, helps them avoid legal trouble, and ensures that minors never gain access to restricted environments. It strengthens safety while building trust with users who want a protected community.

A complete adult verification workflow includes the following:

Document authenticity scanning
Facial biometric matching
Liveness checks
Age estimation
Device intelligence
Fraud behavior monitoring

Liveness detection plays a major role because it ensures that a real person is present. Many minors attempt to trick systems using printed images or digital masks. This is why companies often rely on tools powered by the ID Document Liveness Detection SDK, which tests for motion, depth, and natural facial movement.

 

Ensuring Privacy and Meeting Global Standards Relevant to India

Privacy is one of the biggest concerns surrounding digital verification. Companies must follow strict guidelines to keep data protected. Responsible systems use encrypted templates rather than raw images. They only store what is necessary and follow regulated retention policies.

Organizations that follow the compliance rules outlined in the GDPR regulatory framework can confidently use age verification tools without compromising user rights. This approach also aligns well with the expectations of Indian internet users, who are becoming increasingly privacy-aware.

 

How a Face Age Checker Enhances Accuracy and Flow

A face age checker offers an easy way to screen users before requesting full identity documents. If a user appears to be too young, the system redirects them to a thorough verification step. If they look old enough, onboarding remains quick and smooth.

This approach keeps user experience intact while maintaining strict age control. Developers often explore this process through the ID Document Verification Playground, where different checks can be tested in real time.

 

Real-World Use Cases Where Age Verification Matters Most in India

 

1. Age-restricted content

Platforms offering adult material use layered checks to block minors and avoid strict penalties.

 

2. Retail and e-commerce

Stores selling alcohol, tobacco, vapes, or other restricted products use age scanners to prevent misuse, an important requirement as online deliveries rise in India.

 

3. Financial services

Banks use identity verification to confirm eligibility and reduce fraud.

 

4. Streaming platforms

Video services with adult categories rely on automated screening to stay within regulations.

Each of these industries benefits from AI-powered identity tools that improve both safety and trust.

 

Comparing Different Types of Digital Age Verification Tools

Every platform has different needs. Some need fast screening. Others need top-level fraud resistance. Some must meet strict regulations. A quick look at the most common verification methods helps businesses decide which approach works best for their users.

Below is a simple comparison table that keeps things clear and easy to digest.

Verification Method: Document Scanning
What It Checks: ID authenticity, security markers, text accuracy
Strengths: Accurate detection of fake or altered IDs
Best Use Case: Platforms that need strong identity proof before access

Verification Method: AI Age Estimation
What It Checks: Facial features that predict approximate age
Strengths: Fast screening, low friction, works well for high-volume signups
Best Use Case: Apps that want quick checks before full verification

Verification Method: Face Match and Biometric Analysis
What It Checks: Compares selfie with ID photo
Strengths: Very hard to bypass, stops stolen or borrowed IDs
Best Use Case: Services offering sensitive or restricted content

Verification Method: Device-Based Age Signals
What It Checks: Device history, login behavior, repeated identity switches
Strengths: Stops repeat attempts and risky patterns
Best Use Case: Platforms experiencing multiple fraud attempts from the same device

Verification Method: Liveness Detection
What It Checks: Confirms the user is a real person and not a photo or mask
Strengths: Blocks deepfakes, printed images, and replay tricks
Best Use Case: Businesses facing high levels of identity spoofing

 

Challenges and Ethical Responsibilities in India’s Diverse Digital Environment

Even advanced systems must handle challenges. AI models need constant training to avoid bias. Lighting conditions, camera quality, and access environments vary widely, especially in a country as diverse as India. Fraud attempts also evolve, pushing developers to create better defenses.

Ethical responsibility plays a major part too. Users must know when and why their data is being collected. Clear communication builds confidence. Initiatives like the open research shared on the Recognito GitHub page help encourage transparency and innovation across the industry.

 

The Future of Age Verification and Safe Digital Access in India

As technology improves, age verification tools will only grow more advanced. Future systems may combine 3D sensors, behavioral analytics, improved document checks, and deeper fraud detection to stay ahead of misuse.

The goal is not to replace traditional verification. It is to enhance it. These innovations will build a digital ecosystem where safety is automatic and effortless.

 

Building Trust in the Era of Intelligent Verification

Trust is the foundation of every online platform. A smart and well-designed age verification system gives businesses the confidence that every user is genuine. With AI-driven tools, global compliance, and layered protection, companies can keep minors away from restricted spaces while creating a smooth and secure experience for real adults.

Recognito continues helping organizations build this trust by offering AI-powered verification tools that combine accuracy, privacy, and confidence at every step.

 

Frequently Asked Questions

 

1. How does an age verification system detect fake IDs?

AI checks ID security features, patterns, and micro-details to spot tampering quickly and accurately.

 

2. Why is AI age estimation important?

It offers a fast age check from facial features, helping filter minors before full verification.

 

3. Can age verification help with compliance?

Yes, it follows global data rules like GDPR to keep verification secure and legally compliant.

 

4. What makes liveness detection essential?

It confirms the user is real by detecting natural movements, blocking photos, masks, and deepfakes.


BlueSky

Updates to Our Moderation Process

We're improving our in-app reporting and introducing new moderation systems and processes to better serve the Bluesky community.

Our community has doubled in size in the last year. It has grown from a collection of small communities to a global gathering place to talk about what’s happening. People are coming to Bluesky because they want a place where they can have conversations online again. Most platforms are now just media distribution engines. We are bringing social back to social media.

On Bluesky, people are meeting and falling in love, being discovered as artists, and having debates on niche topics in cozy corners. At the same time, some of us have developed a habit of saying things behind screens that we'd never say in person. To maintain a space for friendly conversation as well as fierce disagreement, we need clear standards and expectations for how people treat each other. In October, we announced our commitment to building healthier social media and updated our Community Guidelines. Part of that work includes improving how users can report issues, holding repeat violators accountable and providing greater transparency.

Today, we're introducing updates to how we track Community Guidelines violations and enforce our policies. We're not changing what's enforced - we've streamlined our internal tooling to automatically track violations over time, enabling more consistent, proportionate, and transparent enforcement.

What's Improving

As Bluesky grows, so must the complexity of our reporting system. In Bluesky’s early days, our reporting system was simple, because it served a smaller community. Now that we’ve grown to 40 million users across the world, regulatory requirements apply to us in multiple regions.

To meet those needs, we’re rolling out a significant update to Bluesky’s reporting system. We’ve expanded post-reporting options from 6 to 39. This update offers you more precise ways to flag issues, provides moderators better tools to act on reports quickly and accurately, and strengthens our ability to address our legal obligations around the world.

Our previous tools tracked violations individually across policies. We've improved our internal tooling so that when we make enforcement decisions, those violations are automatically tracked in one place and users receive clear information about where they stand.

This system is designed to strengthen Bluesky’s transparency, proportionality, and accountability. It applies our Community Guidelines, with better infrastructure for accurate tracking and to communicate outcomes clearly.

In-app Reporting Changes

Last month, we introduced updated Community Guidelines that provide more detail on the minimum requirements for acceptable behavior on Bluesky. These guidelines were shaped by community feedback along with regulatory requirements. The next step is aligning the reporting system with that same level of clarity.

Starting with the next app release, users will notice new reporting categories in the app. The old list of broad options has been replaced with more specific, structured choices.

For example:

You can now report Youth Harassment or Bullying, or content such as Eating Disorders directly, to reflect the increased need to protect youth from harm on social media.
You can flag Human Trafficking content, reflecting requirements under laws like the UK’s Online Safety Act.

This granularity helps our moderation systems and teams act faster and with greater precision. It also allows for more accurate tracking of trends and harms across the network.

Strike System Changes

When content violates our Community Guidelines, we assign a severity rating based on potential harm. Violations can result in a range of actions - from warnings and content removal for first-time, lower-risk violations to immediate permanent bans for severe violations or patterns demonstrating intent to abuse the platform.

Severity Levels

Critical Risk: Severe violations that threaten, incite, or encourage immediate real-world harm, or patterns of behavior demonstrating intent to abuse the platform → Immediate permanent ban
High Risk: Severe violations that threaten harm to individuals or groups → Higher penalty
Moderate Risk: Violations that degrade community safety → Medium penalty
Low Risk: Policy violations where education and behavior change are priorities → Lower penalty

Account-Level Actions

As violations accumulate, account-level actions escalate from temporary suspensions to permanent bans.

Not every violation leads to immediate account suspension - this approach prioritizes user education and gradual enforcement for lower-risk violations. But repeated violations escalate consequences, ensuring patterns of harmful behavior face appropriate accountability.

In the coming weeks, when we notify users about enforcement actions, we will provide more detailed information, including:

Which Community Guidelines policy was violated
The severity level of the violation
The total violation count
How close the user is to the next account-level action threshold
The duration and end date of any suspension

Every enforcement action can be appealed. For post takedowns, email moderation@blueskyweb.xyz. For account suspensions, appeal directly in the Bluesky app. Successful appeals undo the enforcement action – we restore your account standing and end any suspension immediately.

Looking Ahead

As Bluesky grows, we’ll continue improving the tools that keep the network safe and open. An upcoming project will be a moderation inbox to move notifications on moderation decisions from email into the app. This will allow us to send a higher volume of notifications, and be more transparent about the moderation decisions we are taking on content.

These updates are part of our broader work on community health. Our goal is to ensure consistent, fair enforcement that holds repeat violators accountable while serving our growing community as we continue to scale.

As always, thank you for being part of this community and helping us build better social media.


FastID

Outages, Attacks, and a Need for Resilience

Cloud outages are a stark reminder of our digital economy's fragility. Learn how Fastly mitigated a major traffic failover and concurrent DDoS attacks with zero disruption.

Tuesday, 18. November 2025

LISNR

Your Omnichannel Promise Has a Presence Problem


The past decade has delivered extraordinary innovation across retail, media, loyalty, payments, and consumer experience. As brands scale their Retail Media ambitions to the global tune of $180 billion; as consumers adopt loyalty apps, rewards programs, and AI-powered tools; and as merchants modernize in-store operations with self-checkout, digital signage, and mobile payments, every layer of this ecosystem becomes increasingly interconnected. 

Unfortunately, none of those connections fully work unless the system can confirm deterministically that a shopper is actually in front of a screen, in an aisle, at a drive-thru, or at the point of sale.  

Volumes of customer data can yield a loose, probabilistic approximation, but without that verified moment in time and space, the promise of modern commerce ultimately falls apart. Personalization falters, attribution weakens, AI agents can only guess, loyalty fails to engage, and omnichannel experiences remain fragmented.

This is why so many of the industry’s most persistent challenges, from Retail Media skepticism, to in-store attribution gaps, to broken loyalty journeys, to disjointed checkout flows, can all be traced back to the same root cause: the absence of a universal, real-world, proof-of-presence signal. Until that gap is closed, the entire stack above it can only approximate value instead of delivering it with confidence.

Three Players, Three Layers, One Missing Signal

Modern commerce is a triangle built around three core players:

Brands, who need to understand which touchpoints genuinely influence behavior
Merchants, who need to recognize customers, honor loyalty, and connect digital identity to physical action
Consumers, who expect seamless experiences: rewards that travel with them, merchants that recognize them, and journeys that don’t break between screens

and three layers of activity that bind them together:

The Transaction & Fulfillment Layer (AI agents, e-commerce, in-store payments, BOPIS)
The Marketing Activation Layer (RMN, DOOH, Digital Signage)
The Loyalty & Engagement Layer (Perks, Status, Rewards)

Each layer exists to provide value for the three players, but all three layers share a common constraint keeping them from maximizing that value: they lack a shared, deterministic proof of presence.

Why Existing Signals Still Fall Short: Probabilistic Presence

Today’s technologies can approximate presence, but they never truly verify it. GPS is too coarse, too drifty, and unreliable indoors; BLE beacons are prone to overlap, spoofing, and signal bleed; QR codes depend entirely on user action and interrupt the journey; and Wi-Fi-based signals are noisy, shared, and often tied to the wrong device, rendering them unsuitable for precise indoor presence verification.

Each can get “close.” But when 80% of purchasing decisions happen in-store, close is not enough. Without deterministic proof of presence:

The Transaction Layer can’t optimize what it can’t verify.
The Marketing Layer can’t attribute what it can’t observe.
The Loyalty Layer can’t reward what it can’t confirm.

This is where LISNR’s Radius technology enters the picture: an ultrasonic proximity protocol built to verify presence in environments where traditional signals fail: stores, transit, venues, drive-thrus, checkout lanes, and everyday physical spaces.

How LISNR Radius Unlocks the Full Potential of Modern Commerce: Deterministic Presence

Radius stabilizes the existing stack instead of replacing it, giving marketers, retailers, and product teams a deterministic signal they can trust across environments and devices. It turns proximity from a probabilistic guess into a verifiable event. This kind of low-level, deterministic context signal is the missing “substrate” that indoor positioning and context-aware computing experts have argued is necessary for higher-level services, like personalization and attribution, to perform consistently. Radius enables every layer–from loyalty, to media, to transactions–to operate with clarity instead of assumption.

For brands, this means every in-store interaction can finally be tied to authenticated exposure and incremental lift without depending on multiple modalities, stitched-together proxies, or platform-bounded measurement. For merchants and infrastructure providers, it means digital identity and loyalty recognition at the point of entry, not just the point of purchase, and the ability to measure dwell time throughout the physical space with confidence. And for consumers, it means experiences “just work”: loyalty triggers automatically, offers follow them, and AI agents can act on their behalf because the system knows where they are—accurately and securely.

Deterministic proof of presence isn’t simply a technical upgrade; it’s the essential signal that enables the next generation of omnichannel commerce. And Radius is the first technology built to provide it at scale.

Proof of Presence: The Most Valuable Signal in Commerce

We’re entering a new era in which the most valuable signal in commerce is not the click, the impression, or even the transaction.

The most valuable signal is the proof that a shopper is there: physically, verifiably, and in real time.

That signal unlocks confidence in spend, precision in personalization, and alignment across every stakeholder in the modern retail ecosystem. Marketing becomes measurable. Loyalty becomes composable. Transactions become anticipatory. Omnichannel finally behaves like a single channel and not a patchwork of disconnected systems.

This shift is underway now, and the organizations that ground their media, loyalty, and transactional systems in deterministic presence will define the future standards everyone else has to follow.

This article is the first entry in a five-part exploration of how presence verification, and its role in maximizing the value of retail media, loyalty, and transactional systems, is becoming the foundational signal of modern commerce. The next article examines why legacy signals fall short in real-world environments and what LISNR’s proof-of-presence enables for brands, merchants, and consumers alike.

Part 2: “Trust Me, Bro” is Not Proof of Presence

The post Your Omnichannel Promise Has a Presence Problem appeared first on LISNR.


Spherical Cow Consulting

The Regulator’s Dilemma

This episode explores the regulator’s dilemma at the heart of digital infrastructure, where accountability, compliance, and governance reshape the systems they aim to protect. Heather Flanagan examines how modern identity, critical infrastructure, and risk management challenges emerge as digital environments outgrow traditional oversight models. Listeners will learn why compliance-era controls n

“In my last post, I wrote that resilience demands choice and that some dependencies matter more than others, and pretending otherwise spreads resources too thin. I joked that I was glad I didn’t have to make those decisions, but of course, I do.”

We all do, in our own way. Every time we vote, renew a passport, or trust a digital service, we’re participating in a system of priorities. Someone, somewhere, has decided what counts as critical and what doesn’t.

That realization sent me down another research rabbit hole about who decides what counts as critical, and what happens once something is labeled that way. How does accountability become infrastructure, and how do those rules, once written for slower systems, now shape the limits of resilience?

Critical infrastructure sounds like a technical category, but it’s really a political one. It represents a social contract between governments, markets, and citizens; a way of saying, “these systems matter enough to regulate.” The problem is that the rules we inherited for defining and managing critical infrastructure were written for a different kind of world. They were designed for slower systems, clearer ownership, and risks that respected national borders.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Generational change

This brings me to something a friend and longtime identity thinker, Ian Glazer, has been discussing lately: Generational Change: Building Modern IAM Today (great talk at Authenticate 2025). His premise is simple: every major shift in how we manage identity begins with a crisis of accountability.

For our generation, at least in the U.S., that story starts twenty-five years ago with Enron. When the company collapsed under the weight of its own fraud, it triggered the Sarbanes-Oxley Act (SOX), a sweeping effort to rebuild public trust through enhanced oversight and auditing. SOX didn’t just transform corporate governance; it rewired the architecture of digital systems.

To prove they weren’t lying to their shareholders, companies had to prove who had access to financial systems, when they had it, and why. That single regulatory demand gave birth to the modern identity and access management (IAM) industry. User provisioning, quarterly access reviews, and separation-of-duties rules were not technical innovations for their own sake. They were compliance artifacts.

Accountability as a system requirement.

It worked, mostly. However, it also froze an entire generation of identity practices in a compliance mindset designed for static environments—the kind of world where servers sat in data centers, applications were monolithic, and auditors could literally count user accounts.

That world no longer exists. I’m not sure it ever really did, but it came close enough for compliance purposes.

Today’s infrastructure is a living network of APIs, ephemeral containers, and machine-to-machine connections. Permissions change constantly; roles are inferred rather than assigned. The SOX model of accountability—document-based, periodic, human-verified—cannot keep up with the speed and fluidity of digital operations.

Yet we still design our controls as if that old world were intact. Every new regulation borrows the same logic: prove compliance through evidence after the fact. In an API economy, that’s like measuring river depth with a snapshot.

This is the essence of what it now means to be a regulated industry: to operate in a world where compliance frameworks lag behind reality, and where the very act of proving control can undermine the flexibility that keeps systems running.

The challenge ahead is to re-imagine accountability for systems that no human can fully audit anymore.

Who decides what’s critical

The SOX era showed us what happens when accountability becomes infrastructure. Once something is declared essential to the public good, the expectations around it change. Auditors appear. Processes multiply. Documentation becomes proof of virtue. The thing itself—energy, banking, cloud, identity—may not inherently change, but the burden of accountability does, and how that accountability is handled influences how much innovation is allowed to happen.

That’s the quiet tension at the heart of every critical infrastructure discussion: as systems become more indispensable, the scrutiny around them intensifies. The stakes rise, and so do the checklists.

When oversight can’t keep up

At the top of that pyramid sits government. Regulation is, in theory, how society expresses its collective expectations—how we agree that safety and reliability matter more than speed or convenience. But in practice, the model of oversight we keep reusing comes from a slower era: a world where an inspector could show up with a clipboard, verify that the valves were turned the right way, and sign off.

In digital infrastructure, that model doesn’t scale. Governments can loom over the industry’s figurative shoulder, but they can’t keep up with its velocity. The old rituals of control—compliance reports, annual audits, quarterly attestations—look increasingly ceremonial when infrastructure changes by the second.

And yet, the instinct to regulate through prescription persists. When governments define something as critical, they tend to follow with a detailed checklist of how to do the job, codifying procedures in the name of safety. It’s a natural response to risk, but one that struggles to survive contact with continuous deployment pipelines and automated policy engines.

So maybe the harder question isn’t what counts as critical, but how much definition it requires. Can we acknowledge essentiality without turning it into bureaucracy? Can we create accountability without demanding omniscience? Governments add another familiar paradox: the need to be specific about expected outcomes without prescribing the tools used to achieve them.

If the first generation of regulation hard-coded accountability into organizations, the next will need to hard-code trust into systems—without pretending that trust can be reduced to a form.

Accountability without omniscience

Declaring something “critical” has always carried the weight of oversight. The assumption is that governments, or their proxies, can both understand and manage the risk. But as infrastructure becomes increasingly digital and interconnected, that assumption begins to fail.

The OECD’s Government at a Glance 2025 report states that most countries now recognize that infrastructure resilience demands a whole-of-government, system-wide approach—one that acknowledges interdependencies, information-sharing, and trust as policy instruments in their own right. Yet the governance structures built for power grids and pipelines aren’t well-suited to cloud platforms, APIs, or digital identity systems. The more critical these become, the less feasible it is for any single authority to monitor and manage every component.

That’s the paradox of modern accountability: the more connected systems get, the harder it is to define responsibility in a way that scales. The critical infrastructure lab and the Research Network on Internet Governance (REDE), funded by the Internet Society, made the case that sovereignty and resilience now depend less on control and more on coordination—on being able to share risk data and dependencies transparently across borders and sectors. In principle, it’s the right move. In practice, it’s a trust exercise that few institutions are prepared for.

The idea of distributed accountability sounds progressive, but it also has a familiar flaw. When everyone is accountable, no one is accountable. The result is a kind of modern Bystander Effect: every actor assumes someone else will notice, intervene, or fix the problem. The chain of command dissolves into a web of good intentions.

This is the point where governance runs into the limits of imagination. Most people—and most regulators—can picture centralized oversight. They can picture privatized responsibility. But a shared model of accountability, one that is collaborative without being amorphous, is much harder to design. And yet that’s exactly what digital infrastructure demands.

We don’t need omniscience. But we do need visibility, and a clear sense of who moves first when something goes wrong.

When visibility becomes control

The inability to imagine shared accountability has consequences. When coordination feels uncertain, governments reach for the tools they already understand: classification, jurisdiction, and control.

It’s an understandable impulse. No one wants to be caught watching a crisis unfold with no clear authority to act. So when infrastructure becomes essential, the default response is to anchor it to sovereignty—to say, “this part of the network belongs to us.” Visibility becomes control.

But this is also where the governance model for critical infrastructure collides with the architecture of the Internet. Digital systems don’t map neatly onto national boundaries, and yet the instinct to assert jurisdiction persists.

The European Union’s NIS2 and CER directives, the United States’ NSM-22, India’s CERT-In, and a growing list of regional cybersecurity laws all share the same structure: protect the systems that matter most within the territory you can regulate.

Each framework makes sense in isolation. Together, they create a patchwork of compliance zones that overlap but rarely align. The more governments move to secure their slice of the Internet, the more the global system fragments. Resilience becomes something you defend domestically rather than something you coordinate internationally.

There are various ways to interpret this, as scholars like Niels ten Oever and others exploring “infrastructural power” have noted. I’ll refer to it as the politics of dependencies—states now manage risk not only by building capacity, but by redefining what (and whom) they depend on. It’s a rational strategy in an interdependent world, but it comes with trade-offs. Limiting dependency also limits collaboration. A jurisdiction that can’t tolerate external risk soon finds itself isolated from shared innovation and shared recovery.

This is what makes the regulator’s dilemma so difficult. The very act of governing risk can create new vulnerabilities. The more states assert control over digital infrastructure, the more brittle global interdependence becomes. And yet, doing nothing isn’t an option either.

Doing something (without breaking everything)

If doing nothing isn’t an option, what does doing something look like?

The compliance model

The easiest path is the one we know: expand the existing machinery of audits, attestations, and oversight. This approach offers the comfort of familiarity, with defined responsibilities, measurable outcomes, and the illusion of control.

Unfortunately, as discussed, the compliance model is self-limiting. Checklists don’t scale well when the systems they’re meant to protect evolve faster than the paperwork. It works locally but drags globally, slowing innovation in the name of assurance.

The sovereignty model

The second path is already well underway. Nations reassert control by treating digital infrastructure as a means of asserting sovereignty. Clouds become domestic. Data stays home. Dependencies are pruned in the name of national security.

This approach can strengthen internal resilience, but only within its own borders. The cost of the sovereignty model is interoperability. The more countries pursue sovereign safety, the more brittle cross-border systems become, and the more the Internet looks like a federation of incompatible jurisdictions.

The coordination model (and its limits)

The third path—shared coordination—remains the ideal of any globalist like me, but it’s also the least likely.

True collaboration demands transparency, and transparency means exposing dependencies that are strategically or commercially sensitive. In a world leaning toward self-protection, that kind of openness is rare.

So coordination won’t vanish, but it will devolve by shifting from global alignment to regional or sector-based pacts, where trust is built within smaller, semi-compatible networks. That’s not the open Internet we once imagined, but it may be the one we have to learn to live with.

Each of these paths has trade-offs.

Compliance centralizes process. Sovereignty centralizes power. Coordination, when it happens, centralizes trust, and that is becoming a scarce resource. The challenge now is not to prevent fragmentation, but to make it survivable.

The next generation of accountability

Ian Glazer’s idea of Generational Change in Identity has changed how I see the evolution of regulation and infrastructure. Every generation inherits a crisis it didn’t design and a set of controls that no longer fit.

For ours, that crisis isn’t fraud or corporate malfeasance. It’s fragility; the uneasy recognition that we’ve built systems so interdependent that no one can fully explain how they work, let alone govern them coherently.

If the last generation of regulation was born from the failure of a few companies, the next will emerge from the failure of shared systems. We need to start thinking about what kind of accountability we design in response. Do we double down on compliance and centralization (something that would make me Very Sad), or accept that resilience must now be negotiated—sometimes awkwardly, sometimes locally—among the people and institutions who can still see each other across the network?

We may not get a global framework this time. We may get overlapping, regionally aligned regimes that reflect the trade-offs each society is willing to make between openness, autonomy, and control. That’s not necessarily a failure. It’s the kind of adaptation that complex systems make when they outgrow the rules that shaped them.

If the last generation of accountability was about proving control, the next must be about sharing it: building systems where visibility replaces omniscience, and cooperation replaces the illusion of total oversight.

That’s the generational change we’re living through: the slow shift from auditing the past to governing the very immediate present. And if we’re good, we’ll learn to design for accountability the way we once designed for uptime.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript Introduction

[00:00:30] Welcome back.

In my last post, The Paradox of Protection, I argued that resilience requires choice — that some dependencies matter far more than others. And pretending otherwise only spreads resources too thin.

I also joked — particularly in the written post — that I was very glad I personally did not have to make those decisions.

But of course, the truth is that we all do.
Every time we vote, renew a passport, or trust a digital service, we participate in a system of priorities. Someone, somewhere, has already decided what counts as critical and what doesn’t.

That realization sent me down a research rabbit hole:
Who decides what’s critical — and what happens once something earns that label?

Because the moment a system becomes critical, it doesn’t just get protection.
It gets rules, institutions, oversight, and redefined accountability.

The Regulator’s Dilemma

[00:01:24] Accountability becomes surprisingly rigid.

[00:01:28] Today, let’s talk about the regulator’s dilemma — and how the very act of governing risk creates new vulnerabilities.

A few weeks ago, my friend and longtime identity thinker Ian Glazer gave a talk at Authenticate 2025 called Generational Building: Modern IAM Today. One idea from his opening really stuck with me:

Every major shift in how we manage identity begins with a crisis of accountability.

[00:02:01] For our generation — the last 25 years — that story begins with Enron.

Its collapse triggered the Sarbanes–Oxley Act (SOX), a sweeping effort to rebuild trust through oversight and auditing. And SOX didn’t just transform corporate governance — it rewired digital architecture.

To prove honesty to shareholders, companies had to prove:

Who had access
When they had it
And why

That requirement gave birth to modern identity and access management (IAM).

Key IAM practices that emerged:

User provisioning
Quarterly access reviews
Separation of duties
Access certification

Not because they were fun technical innovations, but because compliance required them.

And to be fair, it worked. Mostly.

But it also froze an entire generation of identity practices into a compliance mindset built for:

Static servers
Data centers
Monolithic applications
Human auditors counting accounts

A world that no longer exists.

The Accountability Lag

[00:03:21] Today’s infrastructure is a living network — APIs, ephemeral containers, and machine-to-machine connections.

[00:03:28] Permissions shift constantly.
Roles are inferred.
Identity is dynamic.

[00:03:33] And yet we still audit like it’s 2005.

The SOX model — document-based, periodic, human-verified — cannot keep up with the speed of cloud and automation.

Ironically, the more we try to prove control… the more we slow down the systems we’re trying to protect.

Compliance starts to undermine resilience.

It’s like measuring a rushing river using a still photograph.
You can capture the moment — but not the motion.

Once something becomes critical, auditors appear, processes multiply, and documentation becomes proof of virtue. Not capability.

The Weight of Critical Infrastructure

[00:04:42] As systems become indispensable, scrutiny intensifies.

[00:04:47] Stakes rise.
[00:04:50] Checklists expand.

At the top of all this sits government regulation — society’s way of expressing collective expectations around safety and reliability.

In theory, it’s how we prioritize the public good.
In practice, oversight is modeled on a world where inspectors carried clipboards and verified valves.

[00:05:21] That model doesn’t scale to digital infrastructure.

Government can loom over an industry’s shoulder — but it cannot see fast enough or deep enough to match the pace of automation.

[00:05:31] Annual audits and quarterly attestations become ceremonial when infrastructure shifts every second.

And yet the instinct to regulate through prescriptive checklists persists.

But prescriptive rules do not survive contact with:

Continuous deployment
API-driven systems
Automated policy engines

So the real question becomes:

[00:06:01] What truly counts as critical — and how much definition is necessary?

Can we acknowledge essentiality without creating bureaucratic drag?

Can we create accountability without pretending omniscience?

When Accountability Becomes Trust

[00:06:25] If the last generation of regulation hard-coded accountability into organizations…
[00:06:30] The next will have to hard-code trust into systems.

But not the kind of trust that can be reduced to a form or a checklist.

[00:06:39] Declaring something critical always invites oversight — and assumes governments can understand and manage the risk.

Increasingly, they can’t.

The OECD’s Government at a Glance 2025 notes that resilience now demands a whole-of-government approach, treating information sharing and trust as policy instruments.

Yet our governance frameworks were built for:

Power grids
Pipelines
Physical infrastructure

Not cloud platforms, APIs, or digital identity.

[00:07:15] The more critical digital systems become, the less feasible it is for any single authority to monitor them.

It’s the paradox of modern accountability.

[00:07:23] More connectivity = harder definitions of responsibility.

Research from the critical infrastructure lab and the Internet Society-funded Research Network on Internet Governance (REDE) highlights this shift:
Resilience now depends less on control and more on coordination.

But coordination has a flaw:

When everyone is accountable, no one is accountable.

Governance as a Trust Exercise

[00:08:03] When every actor assumes someone else will intervene, the chain of command dissolves.

[00:08:10] Governance then runs into the limits of imagination:
Regulators can picture centralization.
They can picture privatization.
But shared accountability — structured collaboration — is harder to design.

Yet digital infrastructure demands exactly that.

We don’t need omniscience.
We need visibility and clarity about who moves first when something goes wrong.

When coordination feels uncertain, governments default to familiar tools:

Classification
Jurisdiction
Control

Because no one wants to watch a crisis unfold without clear authority to act.

Sovereignty and Fragmentation

[00:08:50] When infrastructure becomes essential, governments anchor to sovereignty:
“This part of the network is ours.”

But digital systems ignore borders.

Still, the instinct persists.

The result is a wave of regional cybersecurity laws:

EU NIS2
U.S. NSM-22
India’s CERT rules
Regional data residency mandates

Each makes sense alone.
Together, they form a patchwork of compliance zones that rarely align.

The more governments secure their slice of the Internet, the more the global system fragments.

Resilience becomes domestic instead of international.

The Three Paths of Modern Governance

[00:09:55] When doing nothing isn’t an option, what does doing something look like?

There are three paths:

Compliance

Comfortable, measurable, familiar — but self-limiting.
Checklists don’t scale.
Paperwork slows innovation.

Sovereignty

Domestic clouds. Localized data.
Strengthens internal resilience but fractures interoperability.

Coordination

Shared governance, mutual visibility, collective risk management.
Globally the best path — but increasingly rare because it requires uncomfortable transparency.

And transparency exposes dependencies that many institutions don’t want exposed.

So coordination shrinks:

From global to regional
From universal to sectoral
From open to semi-compatible

[00:11:29] Not the Internet we imagined.
But possibly the one we have to live with.

The Generational Shift Ahead

Each governance path centralizes something:

Compliance → process
Sovereignty → power
Coordination → trust

And trust is scarce.

This brings us back to Internet fragmentation:
We cannot prevent fragmentation, but we can make it survivable.

Identity governance is a generational story.
Each generation inherits a crisis it didn’t design and controls that don’t fit anymore.

The crisis today isn’t fraud.
It’s fragility — and our recognition that our systems are too interdependent to fully understand.

[00:12:32] The next regulatory wave will emerge from failures in shared systems.

We must choose what kind of accountability to design:

Double down on compliance and centralization?
Or negotiate resilience — sometimes awkwardly, sometimes locally — with the people who can still see each other across the network?

Likely we’ll see:

Overlapping
Regionally aligned
Sector-specific

Regimes that reflect societal trade-offs between openness, autonomy, and control.

This isn’t failure.
It’s adaptation.

From Proving Control to Sharing It

If the last generation of accountability was about proving control…

The next must be about sharing it.

Building systems where:

Visibility replaces omniscience
Cooperation replaces total oversight
Resilience is negotiated, not dictated

This is the shift from auditing the past to governing the present.

[00:13:41] And if we’re good — if we learn from trade-offs without repeating them — we might design accountability the way we once designed for uptime.

We’ll see how it goes.

Closing Thoughts

There’s more in the written blog, including research links that informed this episode.
If you’d like to reflect or push back, I’d love to continue the conversation.

[00:14:07] Have a great rest of your day.

Outro

If this episode helped make things clearer — or at least more interesting — share it with a friend or colleague.

Connect with me on LinkedIn: @hlflanagan

And if you enjoyed the show, please subscribe and leave a rating and review on your favorite podcast platform.

You’ll find the full written post at sphericalcowconsulting.com.
Stay curious. Stay engaged. Let’s keep these conversations going.

The post The Regulator’s Dilemma appeared first on Spherical Cow Consulting.


TÜRKKEP A.Ş.

Maltepe University–TÜRKKEP Protocol: KEP, E-Correspondence Security, R&D, and Internship Opportunities

Digital transformation is no longer only on the agenda of technology companies; it is a shared concern for universities, public institutions, and the private sector. For organizations today, the right partnerships to support this transformation are just as critical as secure communication, regulatory compliance, and faster business processes. This is exactly where the university-industry collaboration work carried out by the R&D unit of TÜRKKEP, the market leader in digital transformation, stands out. These collaborations aim to provide academic support for TÜRKKEP’s R&D work, to carry out joint projects, and to make use of the university’s facilities.

Okta

Secure Authentication with a Push Notification in Your iOS Device

Building secure and seamless sign-in experiences is a core challenge for today’s iOS developers. Users expect authentication that feels instant, yet protects them with strong safeguards like multi-factor authentication (MFA). With Okta’s DirectAuth and push notification support, you can achieve both – delivering native, phishing-resistant MFA flows without ever leaving your app. In this post, w

Building secure and seamless sign-in experiences is a core challenge for today’s iOS developers. Users expect authentication that feels instant, yet protects them with strong safeguards like multi-factor authentication (MFA). With Okta’s DirectAuth and push notification support, you can achieve both – delivering native, phishing-resistant MFA flows without ever leaving your app.

In this post, we’ll walk you through how to:

Set up your Okta developer account
Configure your Okta org for DirectAuth and push notification factor
Enable your iOS app to drive DirectAuth flows natively
Create an AuthService with the support of DirectAuth
Build a fully working SwiftUI demo leveraging the AuthService

Note: This guide assumes you’re comfortable developing in Xcode using Swift and have basic familiarity with Okta’s identity flows.

If you want to skip the tutorial and run the project, you can follow the instructions in the project’s README.

Table of Contents

Use Okta DirectAuth with push notification factor
Prefer phishing-resistant authentication factors
Set up your iOS project with Okta’s mobile SDKs
Authenticate your iOS app using Okta DirectAuth
Add the OIDC configuration to your iOS app
Add authentication in your iOS app without a browser redirect using Okta DirectAuth
Secure, native sign-in in iOS
Sign-out users when using DirectAuth
Refresh access tokens securely
Display the authenticated user’s information
Build the SwiftUI views to display authenticated state
Read ID token info
View the authenticated user’s profile info
Keeping tokens refreshed and maintaining user sessions
Build your own secure native sign-in iOS app

Use Okta DirectAuth with push notification factor

The first step in implementing Direct Authentication with push-based MFA is setting up your Okta org and enabling the Push Notification factor. DirectAuth allows your app to handle authentication entirely within its own native UI – no browser redirection required – while still leveraging Okta’s secure OAuth 2.0 and OpenID Connect (OIDC) standards under the hood.

This means your app can seamlessly verify credentials, obtain tokens, and trigger a push notification challenge without switching contexts or relying on the SafariViewController.
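To make that concrete before we get into setup, here is a condensed sketch of the flow this post builds out step by step. It is only a preview under assumptions (the email and password below are placeholders, and it assumes the Okta.plist configuration described later); it reuses the same SDK calls shown in detail throughout the tutorial.

import AuthFoundation
import OktaDirectAuth

// A condensed preview of the DirectAuth + push flow built in full later in this post.
func previewSignIn() async throws {
    let flow = try DirectAuthenticationFlow()

    // Step 1: verify the password factor natively, with no browser redirect.
    let status = try await flow.start("user@example.com", with: .password("example-password"))

    switch status {
    case .success(let token):
        // No MFA challenge was required; persist the tokens securely.
        Credential.default = try Credential.store(token)
    case .mfaRequired:
        // Step 2: wait for the Okta Verify push approval, then persist the tokens.
        let result = try await flow.resume(with: .oob(channel: .push))
        if case .success(let token) = result {
            Credential.default = try Credential.store(token)
        }
    default:
        break
    }
}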

Before you begin, you’ll need an Okta Integrator Free Plan account. To get one, sign up for an Integrator account. Once you have an account, sign in to your Integrator account. Next, in the Admin Console:

1. Go to Applications > Applications
2. Select Create App Integration
3. Select OIDC - OpenID Connect as the sign-in method
4. Select Native Application as the application type, then select Next
5. Enter an app integration name
6. Configure the redirect URIs:
   - Redirect URI: com.okta.{yourOktaDomain}:/callback
   - Post Logout Redirect URI: com.okta.{yourOktaDomain}:/
   (where {yourOktaDomain}.okta.com is your Okta domain name). Your domain name is reversed to provide a unique scheme to open your app on a device.
7. Select Advanced
8. Select the OOB and MFA OOB grant types
9. In the Controlled access section, select the appropriate access level
10. Select Save

NOTE: When using a custom authorization server, you need to set up authorization policies. Complete these additional steps:

1. In the Admin Console, go to Security > API > Authorization Servers
2. Select your custom authorization server (default)
3. On the Access Policies tab, ensure you have at least one policy:
   - If no policies exist, select Add New Access Policy
   - Give it a name like “Default Policy”
   - Set Assign to “All clients”
   - Click Create Policy
4. For your policy, ensure you have at least one rule:
   - Select Add Rule if no rules exist
   - Give it a name like “Default Rule”
   - Set Grant type is to “Authorization Code”
   - Select Advanced and enable “MFA OOB”
   - Set User is to “Any user assigned the app”
   - Set Scopes requested to “Any scopes”
   - Select Create Rule

For more details, see the Custom Authorization Server documentation.

Where are my new app's credentials?

Creating an OIDC Native App manually in the Admin Console configures your Okta Org with the application settings.

After creating the app, you can find the configuration details on the app’s General tab:

Client ID: Found in the Client Credentials section
Issuer: Found in the Issuer URI field for the authorization server that appears by selecting Security > API from the navigation pane.

For example:
Issuer: https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta CLI Client or Okta PowerShell Module to automate this process. See this guide for more information about setting up your app.

Prefer phishing-resistant authentication factors

When implementing DirectAuth with push notifications, security remains your top priority. Every new Okta Integrator Free Plan account requires admins to configure multi-factor authentication (MFA) using Okta Verify by default. We’ll keep these default settings for this tutorial, as they already support Okta Verify Push, the recommended factor for a native and secure authentication experience.

Push notifications through Okta Verify provide strong, phishing-resistant protection by requiring the user to approve sign-in attempts directly from a trusted device. Combined with biometric verification (Face ID or Touch ID) or device PIN enforcement, Okta Verify Push ensures that only the legitimate user can complete the authentication flow – even if credentials are compromised.

By default, push factor isn’t enabled in the Integrator Free org. Let’s enable it now.

1. Navigate to Security > Authenticators.
2. Find Okta Verify and select Actions > Edit.
3. In the Okta Verify modal, find Verification options and select Push notification (Android and iOS only).
4. Select Save.

Set up your iOS project with Okta’s mobile SDKs

Before integrating Okta DirectAuth and Push Notification MFA, make sure your development environment meets the following requirements:

Xcode 15.0 or later – This guide assumes you’re comfortable developing iOS apps in Swift using Xcode.
Swift 5+ – All examples use modern Swift language features.
Swift Package Manager (SPM) – Dependency manager handled through SPM, which is built into Xcode.

Once your environment is ready, create a new iOS project in Xcode and prepare it for integration with Okta’s mobile libraries.

Authenticate your iOS app using Okta DirectAuth

If you are starting from scratch, create a new iOS app:

1. Open Xcode
2. Go to File > New > Project
3. Select iOS App and select Next
4. Enter the name of the project, such as “okta-mfa-direct-auth”
5. Set the Interface to SwiftUI
6. Select Next and save your project locally

To integrate Okta’s Direct Authentication SDK into your iOS app, we’ll use Swift Package Manager (SPM) – the recommended and modern way to manage dependencies in Xcode.

Follow these steps:

1. Open your project in Xcode (or create a new one if needed)
2. Go to File > Add Package Dependencies
3. In the search field at the top-right, enter: https://github.com/okta/okta-mobile-swift and press Return. Xcode will automatically fetch the available packages.
4. Select the latest version (recommended) or specify a compatible version with your setup
5. When prompted to choose which products to add, ensure that you select your app target next to OktaDirectAuth and AuthFoundation
6. Select Add Package

These packages provide all the tools you need to implement native authentication flows using OAuth 2.0 and OpenID Connect (OIDC) with DirectAuth, including secure token handling and MFA challenge management – without relying on a browser session.
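If you consume these libraries from a Swift package of your own (for example, a shared module) rather than through the Xcode UI, the equivalent manifest looks roughly like the sketch below. The package name, platform, and version requirement are assumptions; pin whichever version you selected above.

// swift-tools-version:5.9
// Sketch of an equivalent Package.swift setup (names and version are placeholders).
import PackageDescription

let package = Package(
    name: "OktaDirectAuthDemo",
    platforms: [.iOS(.v16)],
    dependencies: [
        // Same repository added through the Xcode UI above.
        .package(url: "https://github.com/okta/okta-mobile-swift", from: "2.0.0")
    ],
    targets: [
        .target(
            name: "OktaDirectAuthDemo",
            dependencies: [
                .product(name: "OktaDirectAuth", package: "okta-mobile-swift"),
                .product(name: "AuthFoundation", package: "okta-mobile-swift")
            ]
        )
    ]
)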

Once the integration is complete, you’ll see OktaMobileSwift and its dependencies listed under your project’s Package Dependencies section in Xcode.

Add the OIDC configuration to your iOS app

The cleanest and most scalable way to manage configuration is to use a property list file for Okta stored in your app bundle.

Create the property list for your OIDC and app config by following these steps:

1. Right-click on the root folder of the project
2. Select New File from Template (New File in legacy Xcode versions)
3. Ensure you have iOS selected on the top picker
4. Select the Property List template and select Next
5. Name the template Okta and select Create to create an Okta.plist file

You can edit the file in XML format by right-clicking and selecting Open As > Source Code. Copy and paste the following code into the file.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>scopes</key>
    <string>openid profile offline_access</string>
    <key>redirectUri</key>
    <string>com.okta.{yourOktaDomain}:/callback</string>
    <key>clientId</key>
    <string>{yourClientID}</string>
    <key>issuer</key>
    <string>{yourOktaDomain}/oauth2/default</string>
    <key>logoutRedirectUri</key>
    <string>com.okta.{yourOktaDomain}:/</string>
</dict>
</plist>

Replace {yourOktaDomain} and {yourClientID} with the values from your Okta org.

With this Okta.plist file in place, the SDK can load your OIDC settings directly from the app bundle, so the DirectAuth flow is initialized and ready to handle authentication requests without hard-coding configuration values in your source. A small sketch of how that works follows below.
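The snippet below is a minimal sketch of that loading step, mirroring the AuthService initializer shown later in this post; the function name is just for illustration.

import AuthFoundation
import OktaDirectAuth

// Sketch: build a DirectAuthenticationFlow from the bundled Okta.plist.
func makeFlow() throws -> DirectAuthenticationFlow {
    // Reads clientId, issuer, scopes, and redirect URIs from Okta.plist in the app bundle.
    let configuration = try OAuth2Client.PropertyListConfiguration()
    return try DirectAuthenticationFlow(client: OAuth2Client(configuration))
}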

Add authentication in your iOS app without a browser redirect using Okta DirectAuth

Now that you’ve added the SDK and property list file, let’s implement the main authentication logic for your app.

We’ll build a dedicated service called AuthService, responsible for logging users in and out, refreshing tokens, and managing session state.

This service will rely on OktaDirectAuth for native authentication and AuthFoundation for secure token handling.

To set it up, create a new folder named Auth under your project’s folder structure, then add a new Swift file called AuthService.swift.

Here, you’ll define your authentication protocol and a concrete class that integrates directly with the Okta SDK – making it easy to use across your SwiftUI or UIKit views.

import AuthFoundation
import OktaDirectAuth
import Observation
import Foundation

protocol AuthServicing {
    // The accessToken of the logged in user
    var accessToken: String? { get }

    // State for driving SwiftUI
    var state: AuthService.State { get }

    // Sign in (Password + Okta Verify Push)
    func signIn(username: String, password: String) async throws

    // Sign out & revoke tokens
    func signOut() async

    // Refresh access token if possible (returns updated token if refreshed)
    func refreshTokenIfNeeded() async throws

    // Getting the userInfo out of the Credential
    func userInfo() async throws -> UserInfo?
}

With this added, you will get an error that AuthService can’t be found. That’s because we haven’t created the class yet. Below this code, add the following declarations of the AuthService class:

@Observable
final class AuthService: AuthServicing {

}

After doing so, we next need to conform the AuthService class to the AuthServicing protocol and create the State enum, which will hold all the states of our authentication process.

To do that, first let’s create the State enum inside the AuthService class like this:

@Observable
final class AuthService: AuthServicing {
    enum State: Equatable {
        case idle
        case authenticating
        case waitingForPush
        case authorized(Token)
        case failed(errorMessage: String)
    }
}

The new code resolved the two errors about the AuthService and the State enum. We only have one error left to fix: conforming the class to the protocol.

We will start implementing the functions top to bottom. Let’s first add the two variables from the protocol, accessToken and state. After the definition of the enum, we will add the properties:

@Observable
final class AuthService: AuthServicing {
    enum State: Equatable {
        case idle
        case authenticating
        case waitingForPush
        case authorized(Token)
        case failed(errorMessage: String)
    }

    private(set) var state: State = .idle

    var accessToken: String? {
        return nil
    }
}

For now, we will leave the accessToken getter with a return value of nil, as we are not using the token yet. We’ll add the implementation later.

Next, we’ll add a private property to hold a reference to the DirectAuthenticationFlow instance.

This object manages the entire DirectAuth process, including credential verification, MFA challenges, and token issuance. The object must persist across authentication steps.

Insert the following variable between the existing state and accessToken properties:

private(set) var state: State = .idle

@ObservationIgnored
private let flow: DirectAuthenticationFlow?

var accessToken: String? {
    return nil
}

To allocate the flow variable, we will need to implement an initializer for the AuthService class. Inside, we’ll allocate the flow using the PropertyListConfiguration that we introduced earlier. Just after the accessToken getter, add the following function:

// MARK: Init
init() {
    // Prefer PropertyListConfiguration if Okta.plist exists; otherwise fall back
    if let configuration = try? OAuth2Client.PropertyListConfiguration() {
        self.flow = try? DirectAuthenticationFlow(client: OAuth2Client(configuration))
    } else {
        self.flow = try? DirectAuthenticationFlow()
    }
}

This will try to fetch the Okta.plist file from the project’s folder, and if not found, will fall back to the default initializer of the DirectAuthenticationFlow. We have now successfully allocated the DirectAuthenticationFlow, and we can proceed with implementing the next functions of the protocol.

Moving down the protocol, the first function to implement is signIn(username: String, password: String).

The signIn method below performs the full authentication flow using Okta DirectAuth and AuthFoundation. It authenticates a user with their username and password, handles MFA challenges (in this case, Okta Verify Push), and securely stores the resulting token for future API calls. Add the following code just under the initializer we just added.

// MARK: AuthServicing
func signIn(username: String, password: String) {
    Task { @MainActor in
        // 1️⃣ Start the Sign-In Process
        // Update UI state and begin the DirectAuth flow with username/password.
        state = .authenticating
        do {
            let result = try await flow?.start(username, with: .password(password))
            switch result {
            // 2️⃣ Handle Successful Authentication
            // Okta validated credentials, return access/refresh/ID tokens.
            case .success(let token):
                let newCred = try Credential.store(token)
                Credential.default = newCred
                state = .authorized(token)
            // 3️⃣ Handle MFA with Push Notification
            // Okta requires MFA, wait for push approval via Okta Verify.
            case .mfaRequired:
                state = .waitingForPush
                let status = try await flow?.resume(with: .oob(channel: .push))
                if case let .success(token) = status {
                    Credential.default = try Credential.store(token)
                    state = .authorized(token)
                }
            default:
                break
            }
        } catch {
            // 4️⃣ Handle Errors Gracefully
            // Update state with a descriptive error message for the UI.
            state = .failed(errorMessage: error.localizedDescription)
        }
    }
}

Let’s break down what’s happening step by step:

1. Start the sign-in process

When the function is called, it launches a new asynchronous Task and sets the UI state to .authenticating. It then initiates the DirectAuth flow using the provided username and password:

let result = try await flow?.start(username, with: .password(password))

This sends the user’s credentials to Okta’s Direct Authentication API and waits for a response.

2. Handle successful authentication

If Okta validates the credentials and no additional verification is needed, the result will be .success(token).

The returned Token object contains access, refresh, and ID tokens.

We securely persist the credentials using AuthFoundation:

let newCred = try Credential.store(token)
Credential.default = newCred
state = .authorized(token)

This marks the user as authenticated and updates the app state, allowing your UI to transition to the signed-in experience.

3. Handle MFA with push notification

If Okta determines that an MFA challenge is required, the result will be .mfaRequired. The app updates its state to .waitingForPush, prompting the user to approve the login on their Okta Verify app:

state = .waitingForPush
let status = try await flow?.resume(with: .oob(channel: .push))

The .oob(channel: .push) parameter resumes the authentication flow by waiting for the push approval event from Okta Verify.

Once the user approves, Okta returns a new token:

if case let .success(token) = status {
    Credential.default = try Credential.store(token)
    state = .authorized(token)
}

4. Handle errors

If any step fails (e.g., invalid credentials, network issues, or push timeout), the catch block updates the UI to show an error message:

state = .failed(errorMessage: error.localizedDescription)

This error handling allows your app to display user-friendly error states while preserving the underlying error details for debugging.

Secure, native sign-in in iOS

This function demonstrates a complete native sign-in experience with Okta DirectAuth, no web views, no redirects.

It authenticates the user, manages token storage securely, and handles push-based MFA all within your app’s Swift layer – making the authentication flow fast, secure, and frictionless.

The following diagram illustrates how the authentication flow works under the hood when using Okta DirectAuth with push notification authentication factor:

Sign-out users when using DirectAuth

Next from the protocol functions is the sign-out method. This method provides a clean and secure way to log the user out of the app.

It revokes the user’s active tokens from Okta and resets the local authentication state, ensuring that no stale credentials remain on the device. Add the following code right below the signIn method:

func signOut() async {
    if let credential = Credential.default {
        try? await credential.revoke()
    }

    Credential.default = nil
    state = .idle
}

Let’s look at what each step does:

1. Check for an existing credential

if let credential = Credential.default {

The method first checks if a stored credential (token) exists in memory. Credential.default represents the current authenticated session created earlier during sign-in.

2. Revoke the tokens from Okta

try? await credential.revoke()

This line tells Okta to invalidate the access and refresh tokens associated with that credential. Calling revoke() ensures that the user’s session terminates locally and in the authorization server, preventing further API access with those tokens.

The try? operator is used to safely ignore any errors (e.g., network failure during logout), since token revocation is a best-effort operation.
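If you would rather surface those failures than discard them, a small variation (a sketch, not something the SDK requires) could log the error while still completing the local sign-out:

// Sketch: log revocation failures instead of discarding them with try?
if let credential = Credential.default {
    do {
        try await credential.revoke()
    } catch {
        // Revocation is best-effort; the local sign-out steps below still proceed.
        print("Token revocation failed: \(error.localizedDescription)")
    }
}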

3. Clear local credential data

Credential.default = nil

After revoking the tokens, the app clears the local credential object.

This removes any sensitive authentication data from memory, ensuring that no valid tokens remain on the device.

4. Reset the authentication state

state = .idle

Finally, the app updates its internal state back to .idle, which tells the UI that the user is now logged out and ready to start a new session.

You can use this state to trigger a transition back to the login screen or turn off authenticated features.

The protocol conformance is almost complete, and we only have two functions remaining to implement.

Refresh access tokens securely

Access tokens issued by Okta have a limited lifetime to reduce the risk of misuse if compromised. OAuth clients that can’t maintain secrets, like mobile apps, require short access token lifetimes for security.

To maintain a seamless user experience, your app should refresh tokens automatically before they expire. The refreshTokenIfNeeded() method handles this process securely using AuthFoundation’s built-in token management APIs.

Let’s walk through what it does. Add the following code right after the signOut method:

func refreshTokenIfNeeded() async throws {
    guard let credential = Credential.default else { return }
    try await credential.refresh()
}

1. Check for an existing credential

guard let credential = Credential.default else { return }

Before attempting a token refresh, the method checks whether a valid credential exists. If no credential is stored (e.g., the user hasn’t signed in yet or has logged out), the method exits early.

2. Refresh the token

try await credential.refresh()

This line tells Okta to exchange the refresh token for a new access token and ID token.

The refresh() method automatically updates the Credential object with the new tokens and securely persists them using AuthFoundation.

If the refresh token has expired or is invalid, this call throws an error – allowing your app to detect the issue and prompt the user to sign in again.
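How you react to that error is up to your app. One hedged example, using only the AuthServicing protocol defined earlier, is to fall back to a full sign-out so the user is returned to the login form; the helper name below is just for illustration.

// Sketch: if the refresh token is no longer usable, clear the session.
func refreshOrSignOut(using authService: AuthServicing) async {
    do {
        try await authService.refreshTokenIfNeeded()
    } catch {
        // The refresh token has expired or been revoked; return the user to the login form.
        await authService.signOut()
    }
}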

Display the authenticated user’s information

Lastly, let’s look at the userInfo() function. After authenticating, your app can access the user’s profile information – such as their name, email, or user ID – from Okta using a standard OIDC endpoint.

The userInfo() method retrieves this data from the ID token or by calling the authorization server’s /userinfo endpoint. The ID token doesn’t necessarily include all of the profile information though, as the ID token is intentionally lightweight.

Here’s how it works. Add the following code after the end of refreshTokenIfNeeded():

func userInfo() async throws -> UserInfo? {
    if let userInfo = Credential.default?.userInfo {
        return userInfo
    } else {
        do {
            guard let userInfo = try await Credential.default?.userInfo() else {
                return nil
            }
            return userInfo
        } catch {
            return nil
        }
    }
}

1. Return the cached user info

if let userInfo = Credential.default?.userInfo {
    return userInfo
}

If the user’s profile information has already been fetched and stored in memory, the method returns it immediately.

This avoids unnecessary network calls, providing a fast and responsive experience.

2. Fetch user info

guard let userInfo = try await Credential.default?.userInfo() else { return nil }

If the cached data isn’t available, the method fetches it directly from Okta using the UserInfo endpoint.

This endpoint returns standard OpenID Connect claims such as:

sub (the user’s unique ID)
name
email
preferred_username
etc.

The AuthFoundation SDK handles the request and parsing for you, returning a UserInfo object.

3. Handle errors gracefully

catch { return nil }

If the request fails (for example, due to a network issue or expired token), the function returns nil. This prevents your app from crashing and allows you to handle the error by displaying a default user state or prompting re-authentication.
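In practice, callers can treat a nil result as “no profile available” and fall back gracefully. Here is a short usage sketch built on the AuthServicing protocol defined earlier; the function name is only for illustration.

// Sketch: consuming userInfo() and handling a nil result gracefully.
func showProfile(using authService: AuthServicing) async throws {
    if let info = try await authService.userInfo() {
        print("Signed in as \(info.name ?? "unknown user")")
    } else {
        // No cached or freshly fetched profile; show a default state
        // or prompt the user to re-authenticate.
        print("Profile unavailable")
    }
}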

With this implemented, you’ve resolved all the errors and should be able to build the app. 🎉

Build the SwiftUI views to display authenticated state

Now that we’ve built the AuthService to handle sign-in, sign-out, token management, and user info retrieval, let’s see how to integrate it into your app’s UI.

To maintain consistency in your architecture, rename the default ContentView to AuthView and update all references accordingly.

This clarifies the purpose of the view – it will serve as the primary authentication interface. Then, create a Views folder under your project’s folder, drag and drop the AuthView into the newly created folder, and create a new file named AuthViewModel.swift in the same folder.

The AuthViewModel will encapsulate all authentication-related state and actions, acting as the communication layer between your view and the underlying AuthService.

Add the following code in AuthViewModel.swift:

import Foundation import Observation import AuthFoundation /// The `AuthViewModel` acts as the bridge between your app's UI and the authentication layer (`AuthService`). /// It coordinates user actions such as signing in, signing out, refreshing tokens, and fetching user profile data. /// This class uses Swift's `@Observable` macro so that your SwiftUI views can automatically react to state changes. @Observable final class AuthViewModel { // MARK: - Dependencies /// The authentication service responsible for handling DirectAuth sign-in, /// push-based MFA, token management, and user info retrieval. private let authService: AuthServicing // MARK: - UI State Properties /// Stores the user's token, which can be used for secure communication /// with backend services that validate the user's identity. var accessToken: String? /// Represents a loading statex. Set to `true` when background operations are running /// (such as sign-in, sign-out, or token refresh) to display a progress indicator. var isLoading: Bool = false /// Holds any human-readable error messages that should be displayed in the UI /// (for example, invalid credentials or network errors). var errorMessage: String? /// The username and password properties are bound to text fields in the UI. /// As the user types, these values update automatically thanks to SwiftUI's reactive data binding. /// The view model then uses them to perform DirectAuth sign-in when the user submits the form. var username: String = "" var password: String = "" /// Exposes the current authentication state (idle, authenticating, waitingForPush, authorized, failed) /// as defined by the `AuthService.State` enum. The view can use this to display the correct UI. var state: AuthService.State { authService.state } // MARK: - Initialization /// Initializes the view model with a default instance of `AuthService`. /// You can inject a mock `AuthServicing` implementation for testing. init(authService: AuthServicing = AuthService()) { self.authService = authService } // MARK: - Authentication Actions /// Attempts to authenticate the user with the provided credentials. /// This triggers the full DirectAuth flow -- including password verification, /// push notification MFA (if required), and secure token storage via AuthFoundation. @MainActor func signIn() async { setLoading(true) defer { setLoading(false) } do { try await authService.signIn(username: username, password: password) accessToken = authService.accessToken } catch { errorMessage = error.localizedDescription } } /// Signs the user out by revoking active tokens, clearing local credentials, /// and resetting the app's authentication state. @MainActor func signOut() async { setLoading(true) defer { setLoading(false) } await authService.signOut() } // MARK: - Token Handling /// Refreshes the user's access token using their refresh token. /// This allows the app to maintain a valid session without requiring /// the user to log in again after the access token expires. @MainActor func refreshToken() async { setLoading(true) defer { setLoading(false) } do { try await authService.refreshTokenIfNeeded() accessToken = authService.accessToken } catch { errorMessage = error.localizedDescription } } // MARK: - User Info Retrieval /// Fetches the authenticated user's profile information from Okta. /// Returns a `UserInfo` object containing standard OIDC claims (such as `name`, `email`, and `sub`). /// If fetching fails (e.g., due to expired tokens or network issues), it returns `nil`. 
@MainActor func fetchUserInfo() async -> UserInfo? { do { let userInfo = try await authService.userInfo() return userInfo } catch { errorMessage = error.localizedDescription return nil } } // MARK: - UI Helpers /// Updates the `isLoading` property. This is used to show or hide /// a loading spinner in your SwiftUI view while background work is in progress. private func setLoading(_ value: Bool) { isLoading = value } }

With the view model in place, the next step is to bind it to your SwiftUI view. The AuthView will observe the AuthViewModel, updating automatically as the authentication state changes.

It will show the user’s ID token when authenticated and provide controls for signing in, signing out, and refreshing the token.

Open AuthView.swift, remove the existing template code, and insert the following implementation:

import SwiftUI import AuthFoundation /// A simple wrapper for `UserInfo` used to present user profile data in a full-screen modal. /// Conforms to `Identifiable` so it can be used with `.fullScreenCover(item:)`. struct UserInfoModel: Identifiable { let id = UUID() let user: UserInfo } /// The main SwiftUI view for managing the authentication experience. /// This view observes the `AuthViewModel`, displays different UI states /// based on the current authentication flow, and provides controls for /// signing in, signing out, refreshing tokens, and viewing user or token information. struct AuthView: View { // MARK: - View Model /// The view model that manages all authentication logic and state transitions. /// It uses `@Observable` from Swift's Observation framework, so changes here /// automatically trigger UI updates. @State private var viewModel = AuthViewModel() // MARK: - State and Presentation /// Holds the currently fetched user information (if available). /// When this value is set, the `UserInfoView` is displayed as a full-screen sheet. @State private var userInfo: UserInfoModel? /// Controls whether the Token Info screen is presented as a full-screen modal. @State private var showTokenInfo = false // MARK: - View Body var body: some View { VStack { // Render the UI based on the current authentication state. // Each case corresponds to a different phase of the DirectAuth flow. switch viewModel.state { case .idle, .failed: loginForm case .authenticating: ProgressView("Signing in...") case .waitingForPush: // Waiting for Okta Verify push approval WaitingForPushView { Task { await viewModel.signOut() } } case .authorized: successView } } .padding() } } // MARK: - Login Form View private extension AuthView { /// The initial sign-in form displayed when the user is not authenticated. /// Captures username and password input and triggers the DirectAuth sign-in flow. private var loginForm: some View { VStack(spacing: 16) { Text("Okta DirectAuth (Password + Okta Verify Push)") .font(.headline) // Email input field (bound to view model's username property) TextField("Email", text: $viewModel.username) .keyboardType(.emailAddress) .textContentType(.username) .textInputAutocapitalization(.never) .autocorrectionDisabled() // Secure password input field SecureField("Password", text: $viewModel.password) .textContentType(.password) // Triggers authentication via DirectAuth and Push MFA Button("Sign In") { Task { await viewModel.signIn() } } .buttonStyle(.borderedProminent) .disabled(viewModel.username.isEmpty || viewModel.password.isEmpty) // Display error message if sign-in fails if case .failed(let message) = viewModel.state { Text(message) .foregroundColor(.red) .font(.footnote) } } } } // MARK: - Authorized State View private extension AuthView { /// Displayed once the user has successfully signed in and completed MFA. /// Shows the user's ID token and provides actions for token refresh, user info, /// token details, and sign-out. private var successView: some View { VStack(spacing: 16) { Text("Signed in 🎉") .font(.title2) .bold() // Scrollable ID token display (for demo purposes) ScrollView { Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)") .font(.footnote) .textSelection(.enabled) .padding() .background(.thinMaterial) .cornerRadius(8) } .frame(maxHeight: 220) // Authenticated user actions signoutButton } .padding() } } // MARK: - Action Buttons private extension AuthView { /// Signs the user out, revoking tokens and returning to the login form. 
var signoutButton: some View { Button("Sign Out") { Task { await viewModel.signOut() } } .font(.system(size: 14)) } }

With this added, you will receive an error stating that WaitingForPushView can’t be found in scope. To fix this, we need to add that view next. Add a new empty Swift file in the Views folder and name it WaitingForPushView. When complete, add the following implementation inside:

import SwiftUI

struct WaitingForPushView: View {
    let onCancel: () -> Void

    var body: some View {
        VStack(spacing: 16) {
            ProgressView()
            Text("Approve the Okta Verify push on your device.")
                .multilineTextAlignment(.center)
            Button("Cancel", action: onCancel)
        }
        .padding()
    }
}

Now you can run the application on a simulator. It should first present the username and password sign-in form. After you select Sign In, the app shows the “Waiting for push notification” screen and waits until you approve the request in the Okta Verify app. Once signed in, you’ll see the ID token and a sign-out button.

Read ID token info

Once your app authenticates a user with Okta DirectAuth, the resulting credentials are securely stored in the device’s keychain through AuthFoundation.

These credentials include access, ID, and (optionally) refresh tokens – all essential for securely calling APIs or verifying user identity.

In this section, we’ll create a skeleton TokenInfoView that reads the current tokens from Credential.default and displays them in a developer-friendly format.

This view helps you visualize the stored credential, inspect its scopes, and verify that the authentication flow works.

Create a new Swift file in the Views folder and name it TokenInfoView. Add the following code:

import SwiftUI import AuthFoundation /// Displays detailed information about the tokens stored in the current /// `Credential.default` instance. This view is helpful for debugging and /// validating your DirectAuth flow -- confirming that tokens are correctly /// issued, stored, and refreshed. /// /// ⚠️ **Important:** Avoid showing full token strings in production apps. /// Tokens should be treated as sensitive secrets. struct TokenInfoView: View { /// Retrieves the current credential object managed by `AuthFoundation`. /// If the user is signed in, this will contain their access, ID, and refresh tokens. private var credential: Credential? { Credential.default } /// Used to dismiss the current view when the close button is tapped. @Environment(\.dismiss) var dismiss var body: some View { ScrollView { VStack(alignment: .leading, spacing: 20) { // MARK: - Close Button // Dismisses the token info view when tapped. Button { dismiss() } label: { Image(systemName: "xmark.circle.fill") .resizable() .foregroundStyle(.black) .frame(width: 40, height: 40) .padding(.leading, 10) } // MARK: - Token Display // Displays the token information as formatted monospaced text. // If no credential is available, a "No token found" message is shown. Text(credential?.toString() ?? "No token found") .font(.system(.body, design: .monospaced)) .padding() .frame(maxWidth: .infinity, alignment: .leading) } } .background(Color(.systemGroupedBackground)) .navigationTitle("Token Info") .navigationBarTitleDisplayMode(.inline) } } // MARK: - Credential Display Helper extension Credential { /// Returns a formatted string representation of the stored token values. /// Includes access, ID, and refresh tokens as well as their associated scopes. /// /// - Returns: A multi-line string suitable for debugging and display in `TokenInfoView`. func toString() -> String { var result = "" result.append("Token type: \(token.tokenType)") result.append("\n\n") result.append("Access Token: \(token.accessToken)") result.append("\n\n") result.append("Scopes: \(token.scope?.joined(separator: ",") ?? "No scopes found")") result.append("\n\n") if let idToken = token.idToken { result.append("ID Token: \(idToken.rawValue)") result.append("\n\n") } if let refreshToken = token.refreshToken { result.append("Refresh Token: \(refreshToken)") result.append("\n\n") } return result } }

To view this on screen, we need to instruct SwiftUI to present it. We added the State variable in the AuthView for this purpose - it’s named showTokenInfo. Next, we need to add a button to present the TokenInfoView. Go to the AuthView.swift and scroll down to the last private extension where it says “Action Buttons” and add the following button:

/// Opens the full-screen view showing token info.
var tokenInfoButton: some View {
    Button("Token Info") {
        showTokenInfo = true
    }
    .disabled(viewModel.isLoading)
}

Now that this is in place, we need to tell SwiftUI that we want to present TokenInfoView whenever the showTokenInfo boolean is true. In the AuthView, find the body and add this code at the end below the .padding():

// Show Token Info full screen
.fullScreenCover(isPresented: $showTokenInfo) {
    TokenInfoView()
}

If you build and run the app now, you still won’t see the Token Info button when logged in. To make the button visible, we also need to reference the tokenInfoButton in the successView. In the AuthView file, scroll down to “Authorized State View” (successView) and reference the button just above the signoutButton like this:

private var successView: some View {
    VStack(spacing: 16) {
        Text("Signed in 🎉")
            .font(.title2)
            .bold()

        // Scrollable ID token display (for demo purposes)
        ScrollView {
            Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)")
                .font(.footnote)
                .textSelection(.enabled)
                .padding()
                .background(.thinMaterial)
                .cornerRadius(8)
        }
        .frame(maxHeight: 220)

        // Authenticated user actions
        tokenInfoButton // this is added
        signoutButton
    }
    .padding()
}

Try building and running the app. You should now see the Token Info button after logging in. Tapping the button should open the Token Info View.

View the authenticated user’s profile info

Once your app authenticates a user with Okta DirectAuth, it can use the stored credentials to request profile information from the UserInfo endpoint securely.

This endpoint returns standard OpenID Connect (OIDC) claims, including the user’s name, email address, and unique identifier (sub).

In this section, you’ll add a User Info button to your authenticated view and implement a corresponding UserInfoView that displays these profile details.

This is a quick and powerful way to confirm that your access token is valid and that your app can retrieve user data after sign-in.

Create a new empty Swift file in the Views folder and name it UserInfoView. Then add the following code:

import SwiftUI
import AuthFoundation

/// A view that displays the authenticated user's profile information
/// retrieved from Okta's **UserInfo** endpoint.
///
/// The `UserInfo` object is provided by **AuthFoundation** and contains
/// standard OpenID Connect (OIDC) claims such as `name`, `preferred_username`,
/// and `sub` (subject identifier). This view is shown after the user has
/// successfully authenticated, allowing you to confirm that your access token
/// can retrieve user data.
struct UserInfoView: View {
    /// The user information returned by the Okta UserInfo endpoint.
    let userInfo: UserInfo

    /// Used to dismiss the view when the close button is tapped.
    @Environment(\.dismiss) var dismiss

    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 20) {
                // MARK: - Close Button
                // Dismisses the full-screen user info view.
                Button {
                    dismiss()
                } label: {
                    Image(systemName: "xmark.circle.fill")
                        .resizable()
                        .foregroundStyle(.black)
                        .frame(width: 40, height: 40)
                        .padding(.leading, 10)
                }

                // MARK: - User Information Text
                // Displays formatted user claims (name, username, subject, etc.)
                Text(formattedData)
                    .font(.system(size: 14))
                    .frame(maxWidth: .infinity, alignment: .leading)
                    .padding()
            }
        }
        .background(Color(.systemBackground))
        .navigationTitle("User Info")
        .navigationBarTitleDisplayMode(.inline)
    }

    // MARK: - Data Formatting

    /// Builds a simple multi-line string of readable user information.
    /// Extracts common OIDC claims and formats them for display.
    private var formattedData: String {
        var result = ""

        // User's full name
        result.append("Name: " + (userInfo.name ?? "No name set"))
        result.append("\n\n")

        // Preferred username (email or login identifier)
        result.append("Username: " + (userInfo.preferredUsername ?? "No username set"))
        result.append("\n\n")

        // Subject identifier (unique Okta user ID)
        result.append("User ID: " + (userInfo.subject ?? "No ID found"))
        result.append("\n\n")

        // Last updated timestamp (if available)
        if let updatedAt = userInfo.updatedAt {
            let dateFormatter = DateFormatter()
            dateFormatter.dateStyle = .medium
            dateFormatter.timeStyle = .short
            let formattedDate = dateFormatter.string(for: updatedAt)
            result.append("Updated at: " + (formattedDate ?? ""))
        }

        return result
    }
}

Once again, to display this in our app, we need to add a new button to show the new view. To do that, open the AuthView.swift, scroll down to the last private extension where it says “Action Buttons”, and add the following button just below the tokenInfoButton:

/// Loads user info and presents it full screen.
@MainActor
var userInfoButton: some View {
    Button("User Info") {
        Task {
            if let user = await viewModel.fetchUserInfo() {
                userInfo = UserInfoModel(user: user)
            }
        }
    }
    .font(.system(size: 14))
    .disabled(viewModel.isLoading)
}
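
The userInfoButton relies on viewModel.fetchUserInfo(), which you most likely implemented alongside the AuthService in an earlier section. If you need a reference point, here is a minimal sketch of what such a method could look like; the userInfo() call on Credential is an assumption about AuthFoundation’s API surface, so verify it against the SDK version you’re using.

// A minimal sketch, not the tutorial's exact implementation.
// Assumes AuthFoundation's Credential exposes an async userInfo() helper.
func fetchUserInfo() async -> UserInfo? {
    // No stored credential means no signed-in user.
    guard let credential = Credential.default else { return nil }

    do {
        // Calls the OIDC UserInfo endpoint using the stored access token.
        return try await credential.userInfo()
    } catch {
        print("Failed to fetch user info: \(error)")
        return nil
    }
}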

Next, we need to add the button to the successView, like we did with the tokenInfoButton. We will also use the userInfo property in the AuthView, which we added at the start. Navigate to the AuthView.swift file, find the successView under the “Authorized State View” mark, and reference the userInfoButton after the tokenInfoButton like this:

private var successView: some View {
    VStack(spacing: 16) {
        Text("Signed in 🎉")
            .font(.title2)
            .bold()

        // Scrollable ID token display (for demo purposes)
        ScrollView {
            Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)")
                .font(.footnote)
                .textSelection(.enabled)
                .padding()
                .background(.thinMaterial)
                .cornerRadius(8)
        }
        .frame(maxHeight: 220)

        // Authenticated user actions
        tokenInfoButton
        userInfoButton // this is added
        signoutButton
    }
    .padding()
}

We need to tell SwiftUI to present a UserInfoView whenever the userInfo property is set. To do so, open the AuthView, find the body variable, and add the following code after the last closing bracket:

// Show User Info full screen
.fullScreenCover(item: $userInfo) { info in
    UserInfoView(userInfo: info.user)
}
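
Note that .fullScreenCover(item:) requires its item to conform to Identifiable, which is why the button wraps the fetched UserInfo in a UserInfoModel rather than storing it directly. If you haven’t already defined that wrapper in an earlier step, a minimal sketch could look like this; the UUID-based id is an illustrative choice, not necessarily what the original project uses.

// A minimal Identifiable wrapper so UserInfo can drive .fullScreenCover(item:).
// The UUID-based id is an illustrative assumption.
struct UserInfoModel: Identifiable {
    let id = UUID()
    let user: UserInfo
}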

The body of your AuthView should look like this now:

var body: some View {
    VStack {
        // Render the UI based on the current authentication state.
        // Each case corresponds to a different phase of the DirectAuth flow.
        switch viewModel.state {
        case .idle, .failed:
            loginForm
        case .authenticating:
            ProgressView("Signing in...")
        case .waitingForPush:
            // Waiting for Okta Verify push approval
            WaitingForPushView {
                Task { await viewModel.signOut() }
            }
        case .authorized:
            successView
        }

        if viewModel.isLoading {
            ProgressView()
        }
    }
    .padding()
    // Show Token Info full screen
    .fullScreenCover(isPresented: $showTokenInfo) {
        TokenInfoView()
    }
    // Show User Info full screen
    .fullScreenCover(item: $userInfo) { info in
        UserInfoView(userInfo: info.user)
    }
}

Keeping tokens refreshed and maintaining user sessions

Access tokens have a limited lifetime to ensure your app’s security. When a token expires, the user shouldn’t have to sign in again – instead, your app can request a new access token using the refresh token stored in the credential.

In this section, you’ll add support for token refresh, allowing users to stay authenticated without repeating the entire sign-in and MFA flow.

You’ll add an action in the UI that calls the refreshTokenIfNeeded() method from your AuthService, which silently exchanges the refresh token for a new set of valid tokens. We’re making this call manually here, but you can also watch for upcoming expiry and refresh the token preemptively, before it expires. While we don’t show it here, you should use Refresh Token Rotation so that refresh tokens are also short-lived as a security measure.
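
If you do want to refresh preemptively, one option is to schedule a task that fires shortly before the access token expires and reuses the same refresh call the button below makes. The sketch here is illustrative only: the expiresAt parameter and the 60-second lead time are assumptions, and you would derive the real expiry from the token stored in your credential.

// A rough sketch of preemptive refresh; not part of the tutorial's AuthService.
// expiresAt and the 60-second lead time are illustrative assumptions.
// Lives somewhere viewModel is available, e.g., inside AuthView.
func scheduleTokenRefresh(expiresAt: Date) {
    // Aim to refresh one minute before the access token expires.
    let delay = max(expiresAt.timeIntervalSinceNow - 60, 0)

    Task {
        // Wait until just before expiry, then refresh silently.
        try? await Task.sleep(nanoseconds: UInt64(delay * 1_000_000_000))
        await viewModel.refreshToken()
    }
}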

First, we need to add the refreshTokenButton to the AuthView. Open the AuthView, scroll down to the last private extension in the “Action Buttons” mark, and add the following button at the end of the extension:

/// Refresh Token if needed
var refreshTokenButton: some View {
    Button("Refresh Token") {
        Task { await viewModel.refreshToken() }
    }
    .font(.system(size: 14))
    .disabled(viewModel.isLoading)
}

Next, we need to reference the button somewhere in our view. We will do that inside the successView, like we did with the other buttons. Find the successView and add the button. Your successView should look like this:

private var successView: some View {
    VStack(spacing: 16) {
        Text("Signed in 🎉")
            .font(.title2)
            .bold()

        // Scrollable ID token display (for demo purposes)
        ScrollView {
            Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)")
                .font(.footnote)
                .textSelection(.enabled)
                .padding()
                .background(.thinMaterial)
                .cornerRadius(8)
        }
        .frame(maxHeight: 220)

        // Authenticated user actions
        tokenInfoButton
        userInfoButton
        refreshTokenButton // this is added
        signoutButton
    }
    .padding()
}

Now, if you run the app and tap the refreshTokenButton, you should see your token change in the token preview label.

One thing we left with a default implementation that returns nil is the accessToken property on the AuthService. Navigate to the AuthService, find the accessToken property, and replace the code so it looks like this:

var accessToken: String? {
    switch state {
    case .authorized(let token):
        return token.accessToken
    default:
        return nil
    }
}

Currently, if you restart the app, you’re prompted to log in each time. This is a poor user experience; the user should remain logged in. We can add this behavior in the AuthService initializer. Open your AuthService class and replace the init function with the following:

init() {
    // Prefer PropertyListConfiguration if Okta.plist exists; otherwise fall back
    if let configuration = try? OAuth2Client.PropertyListConfiguration() {
        self.flow = try? DirectAuthenticationFlow(client: OAuth2Client(configuration))
    } else {
        self.flow = try? DirectAuthenticationFlow()
    }

    // Added
    if let token = Credential.default?.token {
        state = .authorized(token)
    }
}

Build your own secure native sign-in iOS app

You’ve now built a fully native authentication flow on iOS using Okta DirectAuth with push notification MFA – no browser redirects required. You can check your work against the GitHub repo for this project.

Your app securely signs users in, handles multi-factor verification through Okta Verify, retrieves user profile details, displays token information, and refreshes tokens to maintain an active session. By combining AuthFoundation and OktaDirectAuth, you’ve implemented a modern, phishing-resistant authentication system that balances strong security with a seamless user experience – all directly within your SwiftUI app.

If you found this post interesting, you may want to check out these resources:

How to Build a Secure iOS App with MFA
Introducing the New Okta Mobile SDKs
A History of the Mobile SSO (Single Sign-On) Experience in iOS

Follow OktaDev on Twitter and subscribe to our YouTube channel to learn about secure authentication and other exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!


FastID

Meet Image Optimizer in Compute: Flexible Image Workflows at the Edge

Fastly’s Image Optimizer is now available in Compute, letting developers build flexible, programmable image optimization workflows at the edge with full control and scale.

Monday, 17. November 2025

1Kosmos BlockID

FedRAMP High Authorization: What It Is & What It Means for 1Kosmos

The post FedRAMP High Authorization: What It Is & What It Means for 1Kosmos appeared first on 1Kosmos.

Dock

EUDI: Key Takeaways From Europe's Largest Digital ID Pilot [Video and Takeaways]


The European Digital Identity Wallet is entering one of the most consequential phases of its rollout, and few people are closer to the work than Esther Makaay, VP of Digital Identity at Signicat. After spending the last three years at the center of the European Identity Wallet Consortium (EWC), Esther joined us for a deep-dive presentation on what the Large-Scale Pilots have actually delivered, and how ready Europe truly is for the 2026 deadline.

Across payments, travel, and organizational identity, Esther walked through the key results from the pilots: what worked, what didn’t, which technical and regulatory pieces are still missing, and the biggest barriers to adoption that Member States and the private sector now need to solve. She also examined interoperability tests, trust infrastructure gaps, signing flows, business models, governance frameworks, and the early findings from user adoption research.

Below are the core takeaways from her presentation, distilled into the critical insights anyone working in digital identity, payments, or IAM needs to understand as Europe moves into the final countdown toward the EUDI Wallet becoming a reality.


Ontology

Revolutionizing the Supply Chain with Ontology’s Modular Toolkit

Toward Transparent and Secure Traceability

The global supply chain, the backbone of international trade, continues to face persistent challenges in transparency, security, and traceability. Traditional systems, often fragmented and reliant on trust between intermediaries, remain vulnerable to fraud, inefficiencies, and high administrative costs. The emergence of blockchain technology offers a disruptive solution, and the Ontology platform is at the forefront of this transformation with its modular approach centered on decentralized identity.

Challenges of Traceability in a Connected World

In a traditional supply chain, tracking a product from its origin to its final destination is a complex process. Data is stored in silos, making it difficult to establish a single source of truth. This lack of transparency leads to several key issues:

Vulnerability to Counterfeiting: Without tamper-proof verification of product origin and movement, counterfeit goods can easily enter the market.
Lack of Consumer Trust: Consumers increasingly demand to know where their products come from, especially regarding ethical sourcing and sustainability.
Operational Inefficiencies: Product recalls and dispute resolutions become lengthy and costly due to the inability to quickly identify the point of failure.

Blockchain technology, with its distributed and immutable ledger, provides a natural solution. It enables every transaction and step in a product’s lifecycle to be recorded securely and transparently.

Ontology’s Modular Toolkit: A Targeted Approach

Instead of offering a monolithic solution, Ontology has developed a set of interconnected tools forming a true Modular Toolkit for Supply Chain Management. This toolkit leverages blockchain’s power to specifically address the crucial needs of identity and verification, which are essential for traceability.

ONT ID (Decentralized Identity)

Description: A self-sovereign identity (SSI) system allowing users, businesses, and even IoT devices to own and control their digital identity.
Role in the Supply Chain: Authentication to ensure every actor (supplier, carrier, product) is a verified and unique entity.

Verifiable Credentials (VCs)

Description: Cryptographically secured digital attestations that prove facts such as quality certification, harvest date, or regulatory compliance.
Role in the Supply Chain: Traceability & Proof to certify the product’s origin, condition, and key milestones with tamper-proof verification.

Ontology Blockchain

Description: A high-performance public blockchain featuring sub-second transactions and minimal costs, crucial for handling large volumes of logistics data.
Role in the Supply Chain: Security & Performance, provided by the immutable, distributed ledger for securely recording traceability data.

Integrating these components creates an ecosystem where trust is no longer presumed but cryptographically verified.

Toward Transparent and Secure Traceability

Applying this modular toolkit transforms traceability into a transparent and secure process:

Anti-Counterfeiting Security:
Each product can be assigned an ONT ID and a set of VCs verifying its authenticity. For instance, Japanese inventory management company ZAICO used Ontology’s blockchain to develop an anti-counterfeiting feature that secures its supply chain and significantly reduces storage and data processing costs.

Selective Data Sharing:
Through decentralized identity, companies can share specific information (VCs) with partners or regulators without disclosing sensitive business data. Inventory information, for example, can be encrypted on Ontology’s blockchain — ensuring complete confidentiality while still allowing external verification.

Interoperability and Flexibility:
Ontology supports multiple virtual machines (EVM, NeoVM, WASM), offering extensive flexibility. Businesses can integrate the solution with their existing systems (ERP, WMS) without being locked into a single technical ecosystem — reinforcing the modular nature of the toolkit.

Conclusion

Ontology’s Modular Toolkit represents a major step forward in the digital transformation of the supply chain. By placing decentralized identity and verifiable credentials at its core, Ontology does more than enhance traceability — it establishes a new layer of digital trust for global commerce. This approach, combining high performance, low cost, and cryptographic security, is the key to building truly transparent, resilient, and Web3-ready supply chains.

Revolutionizing the Supply Chain with Ontology’s Modular Toolkit was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

Optimizing your multi-CDN infrastructure to improve performance

Learn how a multi-CDN infrastructure can, and should be, optimized for performance. Get tips from Fastly on considerations to evaluate against your current content delivery strategy.

Sunday, 16. November 2025

Dock

The Biggest Misconception About Digital ID Wallets


When most people hear “digital ID wallet”, they picture an app that looks like an Apple Wallet, filled with digital cards for IDs, licenses, and credentials.

But that mental image is limiting.

Because an identity wallet isn’t about digital cards, or even about apps. 

It’s simply a secure way to store and share verified data, data that’s cryptographically signed and privacy-protected.

In reality, a wallet can take many forms depending on the use case.

Friday, 14. November 2025

Northern Block

You Can Cryptographically Sign a Lie: Why Digital Trust Needs Governance (with Scott Perry)

Learn why cryptography alone cannot stop misinformation and deepfakes, and how C2PA and governance frameworks help rebuild trust in digital content. The post You Can Cryptographically Sign a Lie: Why Digital Trust Needs Governance (with Scott Perry) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

🎥 Watch on YouTube 🎥
🎧 Listen On Spotify 🎧
🎧 Listen On Apple Podcasts 🎧

Can you cryptographically sign a lie? Yes, and that single fact exposes a major flaw in how digital trust works today.

In this episode of The SSI Orbit Podcast, host Mathieu Glaude speaks with Scott Perry, CEO of the Digital Governance Institute, about why cryptography alone cannot solve the growing crisis of misinformation, AI-generated content, and digital manipulation.

The conversation centers on C2PA, a global standard that embeds a “nutrition label” into digital content at the moment it is created. This provenance data reveals how a digital object was generated, whether it has been altered, and which tools were used, giving people the context they need to judge trustworthiness.

However, as Scott explains, technical tools are only half of the solution. True digital trust requires governance, including transparent conformance programs, certificate authorities, and accountability frameworks that ensure consistency, security, and fairness across all participating products and industries.

The episode also explores the next layers of the trust stack:
• Creator Assertions, which allow individuals to add identity-backed claims to their content
• JPEG Trust, which adds rights and ownership information for legal clarity and compensation

With fraud, deepfakes, and impersonation rising across journalism, insurance, entertainment, and politics, these combined layers of provenance, identity, rights, and governance represent the new trust infrastructure the internet urgently needs.

Key Insights

Cryptography is not enough to guarantee truth. Cryptographic signatures can prove integrity and origin, but they cannot determine whether the content itself is accurate or honest.

AI has amplified the urgency for content provenance. Traditional methods like CAPTCHA are no longer reliable because AI can pass them. This accelerates the need for cryptographic provenance systems.

C2PA acts as a global provenance standard for digital media. It embeds a signed manifest into images, videos, audio, and other digital objects at the moment of creation, functioning like a “nutrition label” for content.

Generator products must meet strict governance and conformance requirements. Phones, cameras, and software tools must obtain approved signing certificates through the C2PA conformance program.

Certificate authorities play a central role. Public CAs and enterprise-grade CAs issue the X.509 certificates used for content credential signing. They must meet the requirements outlined in the C2PA certificate policy.

Creator Assertions allow individuals and organizations to add identity-backed claims. This layer, governed by the Creator Assertions Working Group under DIF, enables people to add context and metadata to content.

Rights and ownership require an additional governance layer. JPEG Trust extends the system to define legal rights, IP claims, and ownership for use in court or licensing contexts.

Industry self-regulation is essential. Sectors like journalism, entertainment, insurance, and brand management are expected to police their own registries and authorized signers.

Fraud prevention is a major driver. AI-manipulated images are already causing real financial losses in industries like insurance.

Digital identity credentials will eventually enable end users to sign their own assertions. Verifiable credentials will allow creators to link identity claims to content in a trustworthy way.

Governance must be transparent and fair. Oversight, checks and balances, and multi-party decision making are essential to avoid exclusion or bias.

Strategies

Use cryptography combined with governance, not cryptography alone. Provenance, conformance programs, and accountability frameworks must work together.

Adopt C2PA provenance for any digital content creation flow. Integrate C2PA manifests at the point of generation for images, video, audio, and documents.

Obtain signing certificates only from trusted certificate authorities. Use public CAs or enterprise-grade CAs approved by the C2PA program.

Implement secure software practices and continuous attestation. Higher assurance levels require proof of updated patches, secure architecture, and verified implementation.

Document generator product architecture using the C2PA template. Applicants must clearly describe all components involved in creating and signing content.

Leverage creator assertions for identity and contextual claims. Individuals or organizations can add structured, signed metadata throughout a content asset’s lifecycle.

Use provenance and rights frameworks to combat fraud. Industries like insurance and media should implement provenance tools to detect manipulation and support claims assessment.

Rely on industry-specific trust registries. Fields such as journalism already use trusted lists to validate authorized contributors.

Build governance frameworks that emphasize transparency and fairness. Prevent exclusion by maintaining multi party oversight and clearly documented decision making.

Additional resources:
Episode Transcript
Digital Governance Institute
C2PA (Coalition for Content Provenance and Authenticity)
Creator Assertions Working Group (hosted by the Decentralized Identity Foundation)
JPEG Trust
NIST Post Quantum Cryptography Program
X.509 Certificate Standard
Trust Over IP Foundation and the Governance Meta Model

About Guest

Scott Perry is a longtime expert in digital trust and governance who has spent his career helping organizations make technology more reliable and accountable. He leads the Digital Governance Institute, where he advises on cyber assurance, conformance programs, and how to build trust into digital systems.

Scott plays a key role in the C2PA as the Conformance Program Administrator, making sure content-generating products and certificate authorities meet high standards for provenance and authenticity. He also co-leads the Creator Assertions Working Group and contributes to governance work at the Trust Over IP Foundation, focusing on how identity and metadata shape trust in digital content.

With a background in IT audit and deep experience with cryptography and certification authorities, Scott brings a practical, real-world approach to governance, assurance, and digital identity. LinkedIn

The post You Can Cryptographically Sign a Lie: Why Digital Trust Needs Governance (with Scott Perry) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

Ontology

Ontology MainNet Upgrade Announcement


Ontology will begin the MainNet v3.0.0 upgrade and Consensus Nodes upgrade on December 1, 2025. This release improves network performance and implements the approved ONG tokenomics update.

Ontology blockchain users and stakers will not be affected.

Upgrade Overview

This upgrade will be completed in two phases with two public releases:

November 27, 2025: v2.7.0
December 1, 2025: v3.0.0

All nodes and dApp nodes on Ontology’s MainNet will upgrade in step.

Timeline and Requirements

Phase 1: v2.7.0 (Nov 27, 2025)

Who must upgrade: Consensus nodes

Deadline: Before Dec 1, 2025

Included:

Consensus mechanism optimization
Gas limit optimization

Phase 2: v3.0.0 (Dec 1, 2025)

Who must upgrade: Triones nodes

Included:

ONG tokenomics update
Consensus mechanism optimization

Please complete the upgrade step by step to avoid any synchronization pause.

About the ONG Tokenomics Update

The tokenomics changes, approved by consensus nodes, are designed to enhance long-term sustainability and align incentives:

Cap total ONG supply at 800 million
Permanently lock ONT + ONG equal to 100 million ONG in value
Extend the release schedule from 18 to 19 years
Direct 80% of released ONG to ONT staking incentives

Read the full governance summary here.

Ontology MainNet Upgrade Announcement was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aergo

HPP Migration Backed by Coinone and GOPAX, With Further Updates Ahead


We are pleased to share that DAXA-member exchanges Coinone and GOPAX have confirmed their support for the House Party Protocol (HPP) migration. We’re deeply grateful for their support — and this is only the beginning.

For your safety, rely only on the official notices published by these exchanges and ignore any messages from third parties. Use only the official bridge, migration portal, and the exchanges we announce to keep your assets secure.

What You Should Do

If your assets are on the exchange:

Make sure your assets remain on Coinone or GOPAX before the deadlines noted in each exchange’s notice. No further action is required on your side.

If you self-custody:

Use the official HPP migration portal once it opens, and always double-check that you are on the correct URL. We will never ask you to transfer tokens to a specific address through DMs or private messages.

Safety first

1) Avoid deposits during suspension: Do not transfer assets to legacy(old) deposit addresses during suspension. Wait for the official “HPP deposit/withdraw open” notice.

2) Double-check the deposit addresses and the official contract details on each exchange’s asset page before depositing or trading. Sending AERGO tokens to HPP addresses will result in permanent loss.

What to Know Going Forward

Once the migration and listings are underway, we’ll share the next steps for the HPP ecosystem:

New Staking Portal
We will launch a new staking portal based on HPP’s AI-native model. If you are currently staking AERGO, we recommend transitioning to HPP staking, where new opportunities and updated staking models will be introduced. AIP-21 voting rewards will also be claimable through the new portal once it goes live.

Aergo’s Role Moving Forward
After the migration, the Aergo Mainnet will stay active and continue functioning as HPP’s enterprise layer, supporting existing clients with anchoring, compliance, and hybrid deployments.

More announcements are coming soon — stay tuned!


HPP Migration Backed by Coinone and GOPAX, With Further Updates Ahead was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

Wikipedia Tells AI Companies to "Stop Scraping"

Wikipedia cracks down on AI scraping, citing server strain and lost traffic. See why publishers are fighting back and turning to bot management.

Thursday, 13. November 2025

Radiant Logic

Shrinking the IAM Attack Surface: How Unify, Observe, Act Transforms Identity Security 

Discover why unified visibility, real-time observability, and automated action are the key ingredients to shrinking identity risk and strengthening your overall security posture — according to the latest Gartner research. The post Shrinking the IAM Attack Surface: How Unify, Observe, Act Transforms Identity Security  appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity

How many dormant accounts are quietly eroding your cyber defenses? What’s your true mean time to remediate (MTTR) for privilege creep?

Organizations juggle sprawling cloud apps and siloed directories. Risk-averse CISOs track outcome-driven indicators: cutting orphaned identities, slashing MFA exceptions, and speeding up risk fixes. Together, these indicators reveal the true size of your attack surface, where misconfigurations, dormant accounts, and inconsistent access policies quietly expand risk.

According to the Gartner® report Reduce Your IAM Attack Surface Using Visibility, Observability, and Remediation (Rebecca Archambault, 2025), IAM leaders can strengthen security across centralized and decentralized environments by focusing on three key pillars: visibility, observability, and remediation. Today’s IAM ecosystems are often fragmented across numerous directories, identity providers, and access systems. Business units may configure tools independently, resulting in inconsistent policies and poor oversight.

Common symptoms include: 

Disabled multifactor authentication (MFA)
Orphaned or dormant accounts
Exposed machine credentials
Over-privileged service accounts

These gaps are rarely visible in real time, leaving organizations vulnerable to misuse and lateral movement. As Gartner notes, the market for IAM posture, hygiene, and identity threat detection tools is crowded, yet many offerings address only part of the problem — making it difficult for security leaders to measure progress or understand the full scope of their attack surface.

The Solution: A Continuous Loop of Unify → Observe → Act

At Radiant Logic, we believe reducing IAM risk starts with a closed-loop process: Unify → Observe → Act. This model provides the visibility and feedback necessary to continuously measure and improve your identity security posture. 

1. Unify: Break Down Silos and Establish a Trusted Identity Fabric 

The first step is to unify human, non-human and agentic AI identity data across all sources — on-premises directories, cloud platforms, HR systems, and custom applications — into a single, consistent view. RadiantOne’s Identity Data Management layer ingests, correlates, and normalizes identity attributes to create a complete, authoritative profile for every user, device, and service. 

This unified data foundation eliminates blind spots and provides accurate, consistent information that downstream tools need to enforce policy and evaluate risk. Without unification, observability is fragmented — and remediation becomes guesswork. 

2. Observe: Gain Real-Time Insight into Identity Hygiene, Posture, and Risk 

Once data is unified, organizations can observe how identities interact across systems and where exposures lie. Dashboards and analytics help teams visualize dormant accounts, privilege creep, and inactive entitlements. Outcome-driven metrics (ODMs) replace simple control counts with measurable results — such as the percentage of risky permissions removed or the reduction in mean time to remediate. 

Radiant Logic’s observability capabilities make it possible to quantify security progress and track attack-surface reduction over time. These insights allow IAM and security teams to shift from reactive audits to proactive defense, aligning security metrics with business outcomes. 

3. Act: Remediate Identity Risks and Automate with Confidence 

Visibility is only valuable if it leads to action. The final step in the loop is to act — automating remediation workflows and runtime responses that address risks as soon as they are discovered. 

Using RadiantOne’s integration and orchestration capabilities, organizations can trigger alerts, open tickets, or execute corrective actions automatically. For example, if a risky entitlement is detected or a service account behaves abnormally, RadiantOne can inform the appropriate system to disable access or enforce MFA. Integration with runtime protocols such as the Continuous Access Evaluation Profile (CAEP) also enables dynamic policy enforcement — terminating or quarantining suspect sessions until investigation is complete. 

Measuring What Matters 

We believe Gartner emphasizes the importance of outcome-driven metrics to evaluate IAM effectiveness. Rather than focusing on the number of controls deployed, organizations should measure tangible improvements such as: 

Fewer orphaned or dormant accounts
Reduced over-privileged access
Shorter remediation times for risky identities
Lower rates of MFA exceptions
Documented decreases in IAM-related audit findings

By tracking these outcomes over time, IAM teams can quantify their progress in shrinking the attack surface and demonstrate real value to business leadership. Radiant Logic enables these measurements through centralized visibility and continuous feedback loops. 

From Visibility to Value 

As Gartner notes, Identity Visibility and Intelligence Platforms (IVIPs) represent a major innovation in the IAM market — providing rapid integration, analytics, and a single view of identity data, activity, and posture. We believe Radiant Logic’s inclusion in Hype Cycle for Digital Identity, 2025 underscores our position in this emerging category. 

By implementing the Unify → Observe → Act loop, organizations can: 

Eliminate identity data silos
Reveal hidden access risks across environments
Automate policy enforcement and remediation
Quantify security improvements with outcome-driven metrics

This continuous cycle transforms identity security from a static process into a dynamic system of improvement — one that strengthens Zero Trust architectures and aligns security outcomes with measurable business value. 

Start Closing IAM Security Gaps with Radiant Logic 

Reducing your IAM attack surface begins with unified visibility. Radiant Logic helps organizations integrate and understand their identity data, observe it in context, and act with precision. The result is not just stronger security — it’s a measurable path to risk reduction and operational resilience. 

Disclosure

Gartner, Reduce Your IAM Attack Surface Using Visibility, Observability, and Remediation, Rebecca Archambault, 8 October 2025 

Gartner, Hype Cycle for Digital Identity, 2025, Nayara Sangiorgio, Nathan Harris, 14 July 2025 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Hype Cycle is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

The post Shrinking the IAM Attack Surface: How Unify, Observe, Act Transforms Identity Security  appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


This week in identity

E65 - ConductorOne funding, Ping + Keyless, JumpCloud + Breez Security, Imprivata + Verosint, Twilio + Stytch


Summary

In this episode of the Analyst Brief podcast, Simon Moffatt and David Mahdi discuss the latest trends in identity security, including recent funding rounds and acquisitions. They explore the growing importance of identity governance, the intersection of security and identity management, and the role of trust in the age of AI. The conversation also touches on the significance of ITDR and the implications of recent acquisitions for the market. The hosts reflect on the future of identity security and the need for continuous innovation in this evolving landscape.

Chapters

00:00 Introduction to the Analyst Brief Podcast

03:03 Autumn Conference Season Insights

06:05 Funding and Acquisitions in Identity Governance

08:59 The Growing Complexity of Identity Governance

12:01 The Intersection of Security and Identity

14:49 The Future of Cybersecurity and Identity Integration

17:42 Understanding the Broader Ecosystem of Cybersecurity

21:04 The Importance of Protecting All Identities

23:53 First Principles in Cybersecurity Strategy

29:10 Navigating Resilience and Availability in Security

29:56 Funding Trends in Identity Security

31:45 The Impact of Acquisitions on Identity Security

32:13 Twilio's Acquisition of Stytch: A New Era in Identity

36:33 Building Trust in the Age of AI

39:21 Zero Trust: Establishing and Maintaining Trust

44:14 Ping Identity's Acquisition of Keyless: Innovations in Biometric Authentication

55:11 JumpCloud Acquires Breez Security: Enhancing ITDR Solutions

59:54 Imprivata's Strategic Moves in Identity Security


Keywords

identity security, funding, acquisitions, AI, trust, governance, ITDR, cybersecurity, authentication, market trends



Wednesday, 12. November 2025

Safle Wallet

Concordium Is Live on Safle: PLTs, CCD, and Staking Are Just a Tap Away


Safle has fully integrated the Concordium blockchain into its wallet, introducing native support for Protocol-Level Tokens (PLTs), CCD transfers, staking and delegation — all within a single, non-custodial interface.

This update not only expands Safle’s multi-chain capabilities but also reinforces a shared vision with Concordium: to make secure, compliant, and scalable blockchain access simple for everyone — from everyday users to enterprise developers.

Why PLTs Matter

The integration of Protocol-Level Tokens (PLTs) marks a major step forward in Safle’s mission to support next-generation blockchain standards. Unlike tokens issued through smart contracts, PLTs are built directly into Concordium’s protocol layer, giving them stronger foundations for performance, security, and compliance.

By incorporating PLTs, Safle enables:

Native efficiency: Token operations like minting, transferring, or burning are handled by the protocol itself, reducing friction and network costs.
Enhanced security: No external contract code means fewer vulnerabilities and lower attack surfaces.
Compliance by design: Each token can integrate Concordium’s identity verification framework, aligning with the growing need for compliant on-chain assets.

Through PLT integration, Safle becomes a key access point for Concordium’s PayFi ecosystem — enabling users to interact with programmable, regulation-ready assets without compromising on control or usability.

Key Highlights of the Integration

Beyond wallet creation and identity support from the earlier release, users can now manage PLTs, transfer CCD, and stake or delegate — all within the Safle Wallet.

1. Access to Protocol-Level Tokens (PLTs)

PLT management: Store, send, and receive PLTs seamlessly through Concordium’s protocol.
Lower risk and cost: Direct protocol execution eliminates complex contract dependencies.
Future-ready assets: Connect to Concordium’s network of regulated tokens and PayFi utilities.

2. Staking and Delegation

Earn directly within Safle: Stake or delegate CCD without leaving the app.
Flexible options: Choose between Passive Delegation or Validator Pools based on your goals.
Transparent control: Manage, update, or stop delegations anytime — with full asset ownership preserved.

3. Seamless CCD and PLT Transfers

Instant transactions: Transfer CCD and PLTs in seconds using wallet addresses or QR codes.
Clear visibility: Track real-time balances and transaction histories in one place.
Multi-account access: Effortlessly manage multiple Concordium wallets within Safle.

Looking Ahead

The integration of Concordium within Safle represents more than just added functionality — it brings the complete Concordium experience into a single, secure environment.

This milestone strengthens the foundation for ongoing collaboration between the two ecosystems, paving the way for deeper protocol support, enhanced developer tools, and wider adoption of Concordium’s trust-driven technology.

Explore Concordium on Safle today — your gateway to secure, compliant, and effortless blockchain access.

Download the Safle Wallet

Download for iOS | Download for Android

Join Our Community

Stay connected for updates and announcements:

Concordium: x.com/ConcordiumNet
Safle: x.com/GetSafle

Ockto

We need to provide necessary information in line with the life cycle


The options for sharing personal data digitally are gaining ever wider adoption, including in Asia. Late last year, for example, a government-backed MyData service was launched in South Korea. It is intended to give citizens the ability to manage personal financial data from different financial institutions on a single platform.

Ockto's Paul Janssen was invited to speak at the 11th Seoul Asian Financial Forum about what is possible in this area in the Netherlands and, of course, how we at Ockto do this.

You can watch his talk here:


Okta

Stretch Your Imagination and Build a Delightful Sign-In Experience


When you choose Okta as your IAM provider, one of the features you get access to is customizing your Okta-hosted Sign-In Widget (SIW), which is our recommended method for the highest levels of identity security. It’s a customizable JavaScript component that provides a ready-made login interface you can use immediately as part of your web application.

The Okta Identity Engine (OIE) utilizes authentication policies to drive authentication challenges, and the SIW supports various authentication factors, ranging from basic username and password login to more advanced scenarios, such as multi-factor authentication, biometrics, passkeys, social login, account registration, account recovery, and more. Under the hood, it interacts with Okta’s APIs, so you don’t have to build or manage complex auth logic yourself. It’s all handled for you!

One of the perks of using the Okta SIW, especially with the 3rd Generation Standard (Gen3), is that customization is a configuration thanks to design tokens, so you don’t have to write CSS to style the widget elements.

Style the Okta Sign-In Widget to match your brand

In this tutorial, we will customize the Sign In Widget for a fictional to-do app. We’ll make the following changes:

Replace font selections
Define border, error, and focus colors
Remove elements from the SIW, such as the horizontal rule, and add custom elements
Shift the control to the start of the site and add a background panel

Without any changes, when you try to sign in to your Okta account, you see something like this:

At the end of the tutorial, your login screen will look something like this 🎉

We’ll use the SIW gen3 along with new recommendations to customize form elements and style using design tokens.

Table of Contents

Style the Okta Sign-In Widget to match your brand
Customize your Okta-hosted sign-in page
Understanding the Okta-hosted Sign-In Widget default code
Customize the UI elements within the Okta Sign-In Widget
Organize your Sign-In Widget customizations with CSS Custom properties
Extending the SIW theme with a custom color palette
Add custom HTML elements to the Sign-In Widget
Overriding Okta Sign-In Widget element styles
Change the layout of the Okta-hosted Sign-In page
Customize your Gen3 Okta-hosted Sign-In Widget

Prerequisites

To follow this tutorial, you need:

An Okta account with the Identity Engine, such as the Integrator Free account. The SIW version in the org we’re using is 7.36.
Your own domain name
A basic understanding of HTML, CSS, and JavaScript
A brand design in mind. Feel free to tap into your creativity!

Let’s get started!

Customize your Okta-hosted sign-in page

Before we begin, you must configure your Okta org to use your custom domain. Custom domains enable code customizations, allowing us to style more than just the default logo, background, favicon, and two colors. Sign in as an admin and open the Okta Admin Console, navigate to Customizations > Brands and select Create Brand +.

Follow the Customize domain and email developer docs to set up your custom domain on the new brand.

You can also follow this post if you prefer.

A Secure and Themed Sign-in Page

Redirecting to the Okta-hosted sign-in page is the most secure way to authenticate users in your application. But the default configuration yields a very neutral sign-in page. This post walks you through customization options and setting up a custom domain so the personality of your site shines all through the user's experience.

Alisa Duncan

Once you have a working brand with a custom domain, select your brand to configure it. First, navigate to Settings and select Use third generation to enable the SIW Gen3. Save your selection.

⚠️ Note

The code in this post relies on using SIW Gen3. It will not work on SIW Gen2.

Navigate to Theme. You’ll see a default brand page that looks something like this:

Let’s start making it more aligned with the theme we have in mind. Change the primary and secondary colors, then replace the logo and favicon images with your preferred options.

To change either color, click on the text field and enter the hex code for each. We’re going for a bold and colorful approach, so we’ll use #ea3eda as the primary color and #ffa738 as the secondary color, and upload the logo and favicon images for the brand. Select Save.

Take a look at your sign-in page now by navigating to the sign-in URL for the brand. With your configuration, the sign-in widget looks more interesting than the default view, but we can make things even more exciting.

Let’s dive into the main task: customizing the sign-in page. On the Theme tab:

Select Sign-in Page in the dropdown menu
Select the Customize button
On the Page Design tab, select the Code editor toggle to see an HTML page

Note: You can only enable the code editor if you configure a custom domain.

Understanding the Okta-hosted Sign-In Widget default code

If you’re familiar with basic HTML, CSS, and JavaScript, the sign-in code appears standard, although it’s somewhat unusual in certain areas. There are two major blocks of code we should examine: the top of the body tag on the page and the sign-in configuration in the script tag.

The first one looks something like this:

<div id="okta-login-container"></div>

The second looks like this:

var config = OktaUtil.getSignInWidgetConfig();

// Render the Okta Sign-In Widget
var oktaSignIn = new OktaSignIn(config);
oktaSignIn.renderEl({ el: '#okta-login-container' },
  OktaUtil.completeLogin,
  function(error) {
    // Logs errors that occur when configuring the widget.
    // Remove or replace this with your own custom error handler.
    console.log(error.message, error);
  }
);

Let’s take a closer look at how this code works. In the HTML, there’s a designated parent element that the OktaSignIn instance uses to render the SIW as a child node. This means that when the page loads, you’ll see the <div id="okta-login-container"></div> in the DOM with the HTML elements for SIW functionality as its child within the div. The SIW handles all authentication and user registration processes as defined by policies, allowing us to focus entirely on customization.

To create the SIW, we need to pass in the configuration. The configuration includes properties like the theme elements and messages for labels. The method renderEl() identifies the HTML element to use for rendering the SIW. We’re passing in #okta-login-container as the identifier.

The #okta-login-container is a CSS selector. While any correct CSS selector works, we recommend you use the ID of the element. Element IDs must be unique within the HTML document, so this is the safest and easiest method.

Customize the UI elements within the Okta Sign-In Widget

Now that we have a basic understanding of how the Okta Sign-In Widget works, let’s start customizing the code. We’ll start by customizing the elements within the SIW. To manipulate the Okta SIW DOM elements in Gen3, we use the afterTransform method. The afterTransform method allows us to remove or update elements for individual or all forms.

Find the Edit button in the Code editor view; selecting it makes the code editor editable and behave like a lightweight IDE.

Below the oktaSignIn.renderEl() method within the <script> tag, add

oktaSignIn.afterTransform('identify', ({ formBag }) => {
  const title = formBag.uischema.elements.find(ele => ele.type === 'Title');
  if (title) {
    title.options.content = "Log in and create a task";
  }

  const help = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'help');
  const unlock = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'unlock');
  const divider = formBag.uischema.elements.find(ele => ele.type === 'Divider');

  formBag.uischema.elements = formBag.uischema.elements.filter(ele => ![help, unlock, divider].includes(ele));
});

This afterTransform hook only runs before the ‘identify’ form. We can find and target UI elements using the FormBag. The afterTransform hook is a more streamlined way to manipulate DOM elements within the SIW before rendering the widget. For example, we can search elements by type to filter them out of the view before rendering, which is more performant than manipulating DOM elements after SIW renders. We filtered out elements such as the unlock account element and dividers in this snippet.

Let’s take a look at what this looks like. Press Save to draft and Publish.

Navigate to your sign-in URL for your brand to view the changes you made. When we compare to the default state, we no longer see the horizontal rule below the logo or the “Help” link. The account unlock element is no longer available.

We explored how we can customize the widget elements. Now, let’s add some flair.

Organize your Sign-In Widget customizations with CSS Custom properties

At its core, we’re styling an HTML document. This means we operate on the SIW customization in the same way as we would any HTML page, and code organization principles still apply. We can define customization values as CSS Custom properties (also known as CSS variables).

Defining styles using CSS variables keeps our code DRY. Setting up style values for reuse even extends beyond the Okta-hosted sign-in page. If your organization hosts stylesheets with brand color defined as CSS custom properties publicly, you can use the colors defined there and link your stylesheet.

Before making code edits, identify the fonts you want to use for your customization. We found a header and body font to use.

Open the SIW code editor for your brand and select Edit to make changes.

Import the fonts into the HTML. You can <link> or @import the fonts based on your preference. We added the <link> instructions to the <head> of the HTML.

<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter+Tight:ital,wght@0,100..900;1,100..900&family=Poiret+One&display=swap" rel="stylesheet">

Find the <style nonce="{{nonceValue}}"> tag. Within the tag, define your properties using the :root selector:

:root {
  --color-gray: #4f4f4f;
  --color-fuchsia: #ea3eda;
  --color-orange: #ffa738;
  --color-azul: #016fb9;
  --color-cherry: #ea3e84;
  --color-purple: #b13fff;
  --color-black: #191919;
  --color-white: #fefefe;
  --color-bright-white: #fff;
  --border-radius: 4px;
  --font-header: 'Poiret One', sans-serif;
  --font-body: 'Inter Tight', sans-serif;
}

Feel free to add new properties or replace the property value for your brand. Now is a good opportunity to add your own brand colors and customizations!

Let’s configure the SIW with our variables using design tokens.

Find var config = OktaUtil.getSignInWidgetConfig();. After this line of code, set the values of the design tokens using your CSS Custom properties. You’ll use the var() function to access your variables:

config.theme = {
  tokens: {
    BorderColorDisplay: 'var(--color-bright-white)',
    PalettePrimaryMain: 'var(--color-fuchsia)',
    PalettePrimaryDark: 'var(--color-orange)',
    PalettePrimaryDarker: 'var(--color-purple)',
    BorderRadiusTight: 'var(--border-radius)',
    BorderRadiusMain: 'var(--border-radius)',
    FocusOutlineColorPrimary: 'var(--color-azul)',
    TypographyFamilyBody: 'var(--font-body)',
    TypographyFamilyHeading: 'var(--font-header)',
    TypographyFamilyButton: 'var(--font-body)',
    BorderColorDangerControl: 'var(--color-cherry)'
  }
}

Save your changes, publish the page, and view your brand’s sign-in URL site. Yay! You’ll see there’s no border outline, the border radius of the widget and HTML elements has changed, the focus color is different, and element outlines use a different color when there’s a form error. You can inspect the HTML elements and view the computed styles, or, if you prefer, update the CSS variables to something more visible.

When you inspect your brand’s sign-in URL site, you’ll notice that the fonts aren’t loading properly and that there are errors in your browser’s debugging console. This is because you need to configure Content Security Policies (CSP) to allow resources loaded from external sites. CSPs are a security measure to mitigate cross-site scripting (XSS) attacks. You can read An Overview of Best Practices for Security Headers to learn more about CSPs.

Navigate to the Settings tab for your brand’s Sign-in page. Find the Content Security Policy and press Edit. Add the domains for external resources. In our example, we only load resources from Google Fonts, so we added the following two domains:

*.googleapis.com *.gstatic.com
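For context, allowing those domains has roughly the effect of the following CSP directives. This is a simplified, hypothetical illustration of the policy shape, not the exact header your Okta org generates:

Content-Security-Policy:
  style-src 'self' *.googleapis.com;
  font-src 'self' *.gstatic.com;

The Google Fonts CSS is fetched from fonts.googleapis.com (a style source), and the font files themselves come from fonts.gstatic.com (a font source).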

Press Save to draft and press Publish to view your changes. The SIW now displays the fonts you selected!

Extending the SIW theme with a custom color palette

In our example, we selectively added colors. The SIW design system adheres to WCAG accessibility standards and relies on Material Design color palettes.

Okta generates colors based on your primary color that conform to accessibility standards and contrast requirements. Check out Understand Sign-In Widget color customization to learn more about color contrast and how Okta color generation works. You must supply accessible colors to the configuration.

Material Design supports themes by customizing color palettes. The list of all configurable design tokens displays all available options, including Hue* properties for precise color control. Consider exploring color palette customization options tailored to your brand’s specific needs. You can use Material palette generators such as this color picker from the Google team or an open source Material Design Palette Generator that allows you to enter a HEX color value.

Don’t forget to keep accessibility in mind. You can run an accessibility audit using Lighthouse in the Chrome browser and the WebAIM Contrast Checker. Our selected primary color doesn’t quite meet contrast requirements. 😅
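If you prefer checking contrast in code rather than in a browser tool, here’s a small generic sketch of the WCAG contrast-ratio calculation. It isn’t part of the widget code, and the colors are just the examples from this post:

// Minimal WCAG contrast-ratio check (generic sketch)
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5]
    .map(i => parseInt(hex.slice(i, i + 2), 16) / 255)
    .map(c => (c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)));
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(hexA, hexB) {
  const [lighter, darker] = [relativeLuminance(hexA), relativeLuminance(hexB)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Our fuchsia (--color-fuchsia) on white (--color-white); WCAG AA expects at least 4.5:1 for normal text
console.log(contrastRatio('#ea3eda', '#fefefe').toFixed(2));

Running this shows the fuchsia-on-white combination falls below the 4.5:1 AA threshold, which matches what the Lighthouse and WebAIM checks report.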

Add custom HTML elements to the Sign-In Widget

Previously, we filtered HTML elements out of the SIW. We can also add new custom HTML elements to the SIW. We’ll experiment by adding a link to the Okta Developer blog. Find the afterTransform() method and update it to look like this:

oktaSignIn.afterTransform('identify', ({ formBag }) => {
  const title = formBag.uischema.elements.find(ele => ele.type === 'Title');
  if (title) {
    title.options.content = "Log in and create a task";
  }
  const help = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'help');
  const unlock = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'unlock');
  const divider = formBag.uischema.elements.find(ele => ele.type === 'Divider');
  formBag.uischema.elements = formBag.uischema.elements.filter(ele => ![help, unlock, divider].includes(ele));
  const blogLink = {
    type: 'Link',
    contentType: 'footer',
    options: {
      href: 'https://developer.okta.com/blog',
      label: 'Read our blog',
      dataSe: 'blogCustomLink'
    }
  };
  formBag.uischema.elements.push(blogLink);
});

We created a new element named blogLink and set properties such as the type, where the content resides, and options related to the type. We also added a dataSe property that adds the value blogCustomLink to an HTML data attribute. Doing so makes it easier for us to select the element for customization or for testing purposes.
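As a quick sketch (not part of the tutorial code), that attribute makes the link easy to grab in the browser console or in a UI test:

// Select the custom link via its data-se attribute, for example in the browser console
const customLink = document.querySelector('a[data-se="blogCustomLink"]');
console.log(customLink && customLink.href); // expect the Okta Developer blog URL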

When you continue past the ‘identify’ form in the sign-in flow, you’ll no longer see the link to the blog.

Overriding Okta Sign-In Widget element styles

We should use design tokens for customizations wherever possible. In cases where a design token isn’t available for your styling needs, you can fall back to defining styles manually.

Let’s start with the element we added, the blog link. Let’s say we want to display the text in capital casing. For accessibility, it’s not good practice to define the label value itself in capital casing; we should use CSS to transform the text instead.

In the styles definition, find the #login-bg-image-id. After the styles for the background image, add the style to target the blogCustomLink data attribute and define the text transform like this:

a[data-se="blogCustomLink"] { text-transform: uppercase; }

Save and publish the page to check out your changes.

Now, let’s say you want to style an Okta-provided HTML element. Use design tokens wherever possible, and make style changes cautiously as doing so adds brittleness and security concerns.

Here’s a terrible example of styling an Okta-provided HTML element that you shouldn’t emulate, as it makes the text illegible. Let’s say you want to change the background of the Next button to be a gradient. 🌈

Inspect the SIW element you want to style. We want to style the Next button, which has the data-se attribute value save.

After the blogCustomLink style, add the following:

button[data-se="save"] { background: linear-gradient(12deg, var(--color-fuchsia) 0%, var(--color-orange) 100%); }

Save and publish the site. The button background is now a gradient.

However, style the Okta-provided SIW elements with caution. The dangers with this approach are three-fold:

The Okta Sign-in widget undergoes accessibility audits, and changing styles and behavior manually may reduce its accessibility
The Okta Sign-in widget is internationalized, and changing styles around text layout manually may break localization needs
Okta can’t guarantee that the data attributes or DOM elements remain unchanged, which can break your customizations

In the rare case where you style an Okta-provided SIW element, you may need to pin the SIW version so your customizations don’t break out from under you. Navigate to the Settings tab and find the Sign-In Widget version section. Select Edit and choose the most recent version of the widget, as it should be compatible with your code. We are using widget version 7.36 in this post.

⚠️ Note

When you pin the widget, you won’t get the latest and greatest updates from the SIW without manually updating the version. Pinning the version prevents any forward progress in the evolution and extensibility of the end-user experiences. For the most secure option, allow SIW to update automatically and avoid overly customizing the SIW with CSS. Use the design tokens wherever possible.

Change the layout of the Okta-hosted Sign-In page

So far, we’ve left the HTML nodes defined in the SIW customization unedited. Changing the layout of the default <div> containers can make a significant impact. One simple lever is the display CSS property, for example switching to Flexbox or CSS Grid. I’ll use Flexbox in this example.

Find the div for the background image container and the okta-login-container. Replace those div elements with this HTML snippet:

<div id="login-bg-image-id" class="login-bg-image tb--background">
  <div class="login-container-panel">
    <div id="okta-login-container"></div>
  </div>
</div>

We moved the okta-login-container div inside another parent container and made it a child of the background image container.

Find the #login-bg-image-id style. Add the display: flex; property. The styles should look like this:

#login-bg-image-id {
  background-image: {{bgImageUrl}};
  display: flex;
}

We want to style the okta-login-container’s parent <div> to set the background color and to center the SIW on the panel. Add new styles for the login-container-panel class:

.login-container-panel {
  background: var(--color-white);
  display: flex;
  justify-content: center;
  align-items: center;
  width: 40%;
  min-width: 400px;
}

Save your changes and view the sign-in page. What do you think of the new layout? 🎊

⚠️ Note

Flexbox and CSS Grid are responsive, but you may still need to add properties handling responsiveness or media queries to fit your needs.
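For example, a minimal sketch of a media query that lets the panel fill narrow screens might look like this; the breakpoint value is just an illustration:

/* Example breakpoint: let the panel fill small viewports */
@media (max-width: 600px) {
  .login-container-panel {
    width: 100%;
    min-width: 0;
  }
}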

Your final code might look something like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <meta name="robots" content="noindex,nofollow" />
  <!-- Styles generated from theme -->
  <link href="{{themedStylesUrl}}" rel="stylesheet" type="text/css">
  <!-- Favicon from theme -->
  <link rel="shortcut icon" href="{{faviconUrl}}" type="image/x-icon">
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Inter+Tight:ital,wght@0,100..900;1,100..900&family=Poiret+One&display=swap" rel="stylesheet">
  <title>{{pageTitle}}</title>
  {{{SignInWidgetResources}}}
  <style nonce="{{nonceValue}}">
    :root {
      --font-header: 'Poiret One', sans-serif;
      --font-body: 'Inter Tight', sans-serif;
      --color-gray: #4f4f4f;
      --color-fuchsia: #ea3eda;
      --color-orange: #ffa738;
      --color-azul: #016fb9;
      --color-cherry: #ea3e84;
      --color-purple: #b13fff;
      --color-black: #191919;
      --color-white: #fefefe;
      --color-bright-white: #fff;
      --border-radius: 4px;
    }
    {{ #useSiwGen3 }}
    html {
      font-size: 87.5%;
    }
    {{ /useSiwGen3 }}
    #login-bg-image-id {
      background-image: {{bgImageUrl}};
      display: flex;
    }
    .login-container-panel {
      background: var(--color-white);
      display: flex;
      justify-content: center;
      align-items: center;
      width: 40%;
      min-width: 400px;
    }
    a[data-se="blogCustomLink"] {
      text-transform: uppercase;
    }
  </style>
</head>
<body>
  <div id="login-bg-image-id" class="login-bg-image tb--background">
    <div class="login-container-panel">
      <div id="okta-login-container"></div>
    </div>
  </div>
  <!-- "OktaUtil" defines a global OktaUtil object that contains methods used to complete the Okta login flow. -->
  {{{OktaUtil}}}
  <script type="text/javascript" nonce="{{nonceValue}}">
    // "config" object contains default widget configuration
    // with any custom overrides defined in your admin settings.
    const config = OktaUtil.getSignInWidgetConfig();
    config.theme = {
      tokens: {
        BorderColorDisplay: 'var(--color-bright-white)',
        PalettePrimaryMain: 'var(--color-fuchsia)',
        PalettePrimaryDark: 'var(--color-orange)',
        PalettePrimaryDarker: 'var(--color-purple)',
        BorderRadiusTight: 'var(--border-radius)',
        BorderRadiusMain: 'var(--border-radius)',
        FocusOutlineColorPrimary: 'var(--color-azul)',
        TypographyFamilyBody: 'var(--font-body)',
        TypographyFamilyHeading: 'var(--font-header)',
        TypographyFamilyButton: 'var(--font-body)',
        BorderColorDangerControl: 'var(--color-cherry)'
      }
    }
    // Render the Okta Sign-In Widget
    const oktaSignIn = new OktaSignIn(config);
    oktaSignIn.renderEl({ el: '#okta-login-container' },
      OktaUtil.completeLogin,
      function (error) {
        // Logs errors that occur when configuring the widget.
        // Remove or replace this with your own custom error handler.
        console.log(error.message, error);
      }
    );
    oktaSignIn.afterTransform('identify', ({ formBag }) => {
      const title = formBag.uischema.elements.find(ele => ele.type === 'Title');
      if (title) {
        title.options.content = "Log in and create a task";
      }
      const help = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'help');
      const unlock = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'unlock');
      const divider = formBag.uischema.elements.find(ele => ele.type === 'Divider');
      formBag.uischema.elements = formBag.uischema.elements.filter(ele => ![help, unlock, divider].includes(ele));
      const blogLink = {
        type: 'Link',
        contentType: 'footer',
        options: {
          href: 'https://developer.okta.com/blog',
          label: 'Read our blog',
          dataSe: 'blogCustomLink'
        }
      };
      formBag.uischema.elements.push(blogLink);
    });
  </script>
</body>
</html>

You can also find the code in the GitHub repository for this blog post. With these code changes, you can connect this with an app to see how it works end-to-end. You’ll need to update your Okta OpenID Connect (OIDC) application to work with the domain. In the Okta Admin Console, navigate to Applications > Applications and find the Okta application for your custom app. Navigate to the Sign On tab. You’ll see a section for OpenID Connect ID Token. Select Edit and select Custom URL for your brand’s sign-in URL as the Issuer value.

You’ll use the issuer value, which matches your brand’s custom URL, and the Okta application’s client ID in your custom app’s OIDC configuration. If you want to try this and don’t have a pre-built app, you can use one of our samples, such as the Okta React sample.
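As a rough sketch of where those values end up, here is how they might be wired into an app that uses Okta’s JavaScript auth library. The issuer and client ID below are placeholders, and your sample app may organize its configuration differently:

// Hypothetical values: replace with your brand's custom URL and your app's client ID
import { OktaAuth } from '@okta/okta-auth-js';

const oktaAuth = new OktaAuth({
  issuer: 'https://login.example.com',               // your brand's custom sign-in URL
  clientId: '0oa1exampleclientid',                   // from your Okta OIDC application
  redirectUri: window.location.origin + '/login/callback',
  scopes: ['openid', 'profile', 'email']
});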

Customize your Gen3 Okta-hosted Sign-In Widget

I hope you enjoyed customizing the sign-in experience for your brand. Using the Okta-hosted Sign-In widget is the best, most secure way to add identity security to your sites. With all the configuration options available, you can have a highly customized sign-in experience with a custom domain without anyone knowing you’re using Okta.

If you like this post, there’s a good chance you’ll find these links helpful:

Create a React PWA with Social Login Authentication
Secure your first web app
How to Build a Secure iOS App with MFA

Remember to follow us on Twitter and subscribe to our YouTube channel for fun and educational content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below! Until next time!

Friday, 05. September 2025

Radiant Logic

California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic

What Is California’s AB 869 and Why Does It Matter? California has returned to the Zero-Trust front line. When Assemblymember Jacqui Irwin re-introduced the mandate this year as AB 869, she rewound the clock only far enough to give agencies a fighting chance: every executive-branch department must show a mature Zero-Trust architecture by June 1, […] The post California’s Countdown to Zero Trust—
What Is California’s AB 869 and Why Does It Matter?

California has returned to the Zero-Trust front line. When Assemblymember Jacqui Irwin re-introduced the mandate this year as AB 869, she rewound the clock only far enough to give agencies a fighting chance: every executive-branch department must show a mature Zero-Trust architecture by June 1, 2026.  

The bill sailed through the Assembly without a dissenting vote and now sits in the Senate Governmental Organization Committee with its first hearing queued for early July. Momentum looks strong: the measure already carries public endorsements from major players in the security space such as Okta, Palo Alto Networks, Microsoft, TechNet, and Zscaler, along with a unanimous fiscal-committee green light.

The text itself is straightforward. It lifts the same three pillars that the White House spelled out in Executive Order 14028—multi-factor authentication everywhere, enterprise-class endpoint detection and response, and forensic-grade logging—and stamps a date on each pillar. Agencies that fail will be out of statutory compliance, but, as the committee’s analysis warns, the real price tag is the downtime, ransom, and public-trust loss that follow a breach.

Why Unifying Identity Data Is the Real Challenge in Zero Trust

California has spent four years laying technical groundwork. The Cal-Secure roadmap already calls for continuous monitoring, identity lifecycle discipline and tight access controls. Yet progress has stalled because most departments still lack a single, authoritative view of who and what is touching their systems. Identity data lives in overlapping Active Directory forests, SaaS directories, HR databases and contractor spreadsheets. When job titles lag three weeks behind reality or an account persists after its owner leaves, even the best MFA prompt or EDR sensor can’t make an accurate determination.

Identity Data Fabric and the RadiantOne Platform: How Radiant Logic Creates a Single Source of Identity Truth

Radiant Logic removes this obstacle at its root. The platform connects to every identity store—on-prem, cloud, legacy or modern—then correlates, cleans and serves a real-time global profile for every person and device. That fabric becomes the single source of truth that each Zero-Trust control needs and consumes:

MFA tokens draw fresh role and device attributes, so “adaptive” policies really do adapt.
EDR and SIEM events carry one immutable user + device ID, letting analysts trace lateral movement in minutes instead of days.
Log files share the same identifier, turning post-incident forensics into a straight line instead of a spider web.

The system’s built-in hygiene analytics spotlight dormant accounts, stale entitlements and toxic combinations—precisely the gaps auditors test when they judge “least-privilege” maturity. 

A Concrete, 12-Month Playbook: What an Identity Data Fabric Does in Practice

Connect all identity sources. Map and connect every authoritative and shadow identity source to RadiantOne. No production system needs to stop; the platform operates as an overlay.
Redirect authentication flows—IdPs, VPNs, ZTNA gateways—so their policy engines read from the new identity fabric. Legacy applications gain modern, attribute-driven authorization without code changes.
Stream context into security tools. By streaming enriched context into existing EDR and SIEM pipelines, alerts can now include the “who, what and where” information that incident responders crave.
Run hygiene dashboards to purge inactive or over-privileged accounts. The same reports double as proof of progress for the annual OIS maturity survey.

Teams that follow the sequence typically see two wins long before the statutory deadline: faster mean-time-to-detect during adversarial red-teaming exercises, and a dramatic cut in audit questions that start with, “How do you know…?”

Beyond Compliance: Why Zero Trust is More than a Checkbox 

AB 869 may be the nudge, but the destination is bigger than a checkbox. When identity is the de facto new perimeter—and when that identity is always current, complete and trustworthy—California’s digital services stay open even on the worst cyber day. Radiant Logic provides the identity fabric that makes Zero-Trust controls smarter, cheaper and easier to prove.

The countdown ends June 1, 2026. The journey can start with a single connection to your first directory. 

REFERENCES 

https://cdt.ca.gov/wp-content/uploads/2021/10/Cybersecurity_Strategy_Plan_FINAL.pdf

https://calmatters.digitaldemocracy.org/bills/ca_202520260ab869

The post California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Gartner® Recognizes Radiant Logic in the 2025 Hype Cycle™ for Zero Trust

 What Does Gartner’s 2025 Hype Cycle Say About Zero Trust? In many places in the world, Zero Trust has shifted from being a security philosophy to a mandate by regulators, including the U.S., as discussed in California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic. Gartner’s 2025 Hype Cycle for Zero Trust Technology highlights […] The post Gartner® Recognizes Radiant Logi
 What Does Gartner’s 2025 Hype Cycle Say About Zero Trust?

In many places in the world, Zero Trust has shifted from being a security philosophy to a mandate by regulators, including the U.S., as discussed in California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic. Gartner’s 2025 Hype Cycle for Zero Trust Technology highlights identity as the foundation for Zero Trust success and names Radiant Logic as a Sample Vendor enabling that foundation in the AI for Access Administration category. 

Regulatory Mandates Are Accelerating Zero Trust Adoption

Across both public and private sectors, the push for implementing Zero Trust is accelerating. California’s Assembly Bill 869, for example, requires every executive-branch agency to demonstrate a mature Zero Trust architecture by June 2026. This is one example of how regulations are putting firm dates on adoption. Gartner’s recognition underscores why Radiant Logic matters in this context.

Zero Trust depends not only on reliable identity data but also on making that data accessible. The challenge for most organizations is not a lack of Zero Trust tools but the difficulty of getting the right identity data to them. Attributes, context, and relationships all need to be delivered to those tools in a format and manner they can actually use.

Without that foundation, Zero Trust efforts typically stall.  

Why Identity is Central to Zero Trust 

The National Institute of Standards and Technology (NIST) defines Zero Trust around a simple idea: never trust, always verify. Every request must be authenticated and authorized in its context. Yet in most enterprises, identity data is fragmented across directories, cloud services, HR systems, and contractor databases. This is the reality of what we call identity sprawl. When accounts linger after employees leave or when attributes are out of date, even the best MFA solutions or EDR policies falter.

Gartner cautions that organizations lacking visibility into their identity data face both elevated security risks and operational inefficiencies. Zero Trust controls cannot deliver on their promise if they operate on incomplete or inconsistent input. The result is only as good as the underlying identity data.

Radiant Logic’s Role 

RadiantOne unifies identity data from every source into a single, correlated view of every identity, whether human or non-human. That fabric becomes the authoritative context that Zero Trust controls need to be successful. This foundation lets MFA policies adapt dynamically to current identity and device signals while, at the same time, unifying log files under a single identifier and enabling Zero Trust access, network segmentation, and more. Why does this matter? Many regulatory initiatives are tightening reporting requirements should a breach occur; correlating identities into a single view streamlines forensic work and ultimately allows for swift signaling or reporting to a competent authority.

Identity data hygiene matters because it allows organizations to detect dormant accounts, stale entitlements, and toxic combinations before auditors or adversaries find them.

Maintaining this hygiene is critical to mitigating risk and ensuring that Zero Trust policies are enforced on accurate, trustworthy data. By ensuring Zero Trust policies run on clean, governed identity data, Radiant Logic enables organizations to enforce least privilege, reduce the attack surface, and meet compliance obligations in a timely fashion. 

The Business Impact 

For CISOs, this reduces risk by closing identity gaps before attackers exploit them. 

For CIOs, it modernizes access controls without disrupting legacy systems. 

For compliance leaders, it provides defensible evidence for regulatory audits and mandates and, in case of a breach, a swift response to regulators’ signaling and reporting requirements.

Zero Trust is no longer an academic, philosophical idea — it is operational reality for modern security. Gartner’s recognition of Radiant Logic validates our role in making it achievable, practical, and provable.

Learn More

The full report can be downloaded here. Discover how Radiant Logic strengthens Zero Trust initiatives with unified, real-time identity data and intelligence. To discuss with an identity and Zero Trust expert, contact us today.  

 

The post Gartner® Recognizes Radiant Logic in the 2025 Hype Cycle™ for Zero Trust appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Identity: The Lifeline of Modern Healthcare

Why Identity Access Management Is Healthcare’s Hidden Bottleneck In today’s healthcare ecosystem, seconds can mean the difference between life and death. Clinicians need instant access to systems, patient records, and tools that guide treatment decisions. But too often, identity and access management (IAM) becomes a silent bottleneck—slowing workflows, increasing frustration, and opening new avenu
Why Identity Access Management Is Healthcare’s Hidden Bottleneck

In today’s healthcare ecosystem, seconds can mean the difference between life and death. Clinicians need instant access to systems, patient records, and tools that guide treatment decisions. But too often, identity and access management (IAM) becomes a silent bottleneck—slowing workflows, increasing frustration, and opening new avenues for attackers. 

Identity is not just an IT function. It is the connective tissue between operational efficiency and strong security. When access works seamlessly, clinicians focus on patients. When it falters, care delivery stalls. The stakes are that high.

Key Takeaway: In modern healthcare, fast, secure identity access isn’t an IT issue—it’s a patient safety issue.

The Legacy Identity Problem in Healthcare

Common IAM Pain Points for Healthcare Providers

Healthcare organizations carry a legacy burden that includes identity infrastructures stitched together from mergers, acquisitions, and outdated systems. The results are familiar and painful: 

Slow onboarding: Clinicians wait days or weeks to access EHRs, e-prescribing platforms, or HR systems
Siloed systems: Contractors, vendors, and students are often tracked manually or inconsistently, creating blind spots
Fragmented logins: Multiple usernames and passwords drain productivity, encourage weak credential practices, and create security risks

Why Fragmented Systems Put Patients and Data at Risk

Each inefficiency cascades into operational and security problems. In a shared workstation environment where multiple staff members rotate across terminals, the friction of multiple logins is more than inconvenient—it is unsafe. 

How the “Persona Problem” Impacts Clinicians

Modern clinicians often wear many hats: surgeon, professor, and clinic practitioner. Each role demands different entitlements, application views, and permissions. Legacy IAM systems struggle to keep pace, forcing clinicians into frustrating workarounds that compromise both care and compliance. 

A modern identity data foundation solves this “persona problem” by enabling: 

Multi-persona profiles: A unified identity that captures every role under one credential
Contextual access: Role-specific entitlements delivered at the point of authorization
Streamlined governance: Fewer duplicates, cleaner oversight, and enforced least privilege

The result? Clinicians move seamlessly across their responsibilities without juggling multiple logins, and security teams gain a clearer, more manageable access model. 

Identity as the Frontline of Healthcare Cybersecurity

Disconnected directories, inconsistent access records, and orphaned accounts create fertile ground for attackers. The 2024 Change Healthcare ransomware incident, traced back to compromised remote access credentials, highlighted the catastrophic impact that a single identity failure can unleash. 

The Compliance Consequences of Poor Identity Hygiene

Poor IAM hygiene doesn’t just slow down care—it invites compliance nightmares. Regulations like HIPAA require clear evidence of least-privilege access and timely de-provisioning, but piecing that evidence together from fractured systems is a losing battle. 

Temporary fixes and one-off integrations won’t cure healthcare’s identity problem. What is needed is a modern identity data foundation that: 

unifies identity data from HR systems, AD domains, credentialing databases, cloud apps, and more
rationalizes and correlates records into a single, authoritative profile for each user
delivers tailored views to each consuming system—EHR, tele-health, billing, scheduling—through standard protocols like LDAP, SCIM, and REST
strengthens ISPM by ensuring security policies, risk analytics, and compliance reporting all act on the same high-quality identity data

RadiantOne provides that foundation. Acting as a universal adapter and central nervous system, it abstracts away complexity, enables day-one M&A integration, supports multi-tenant models for affiliated clinics, and reduces costly manual cleanup. 

Healthcare’s identity challenge is not theoretical. It is visible every day in delayed access, clinician frustration, regulatory fines, and high-profile breaches. But it doesn’t have to be this way. 

With a unified identity data foundation, healthcare organizations can: 

accelerate clinician onboarding
reduce operational bottlenecks
strengthen identity security posture
simplify compliance
empower caregivers with seamless, secure access

The question is no longer whether identity impacts care delivery and security: it is whether your identity infrastructure is helping or holding you back. 

Download the white paper, The Unified Identity Prescription: Securing Modern Healthcare & Empowering Caregivers, to explore how a unified identity data foundation can power better care and stronger security.

The post Identity: The Lifeline of Modern Healthcare appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


AI for Access Administration: From Promise to Practice

Why AI for Access Administration Is an Emerging Priority Gartner’s 2025 Hype Cycle for Digital Identity and Hype Cycle for Zero-Trust Technology, 2025 highlights AI for Access Administration as an emerging innovation with high potential, or as it is called by Gartner, an “Innovation trigger.” The promise to automate entitlement reviews, streamline least-privilege enforcement and […] The post AI
Why AI for Access Administration Is an Emerging Priority

Gartner’s 2025 Hype Cycle for Digital Identity and Hype Cycle for Zero-Trust Technology, 2025 highlight AI for Access Administration as an emerging innovation with high potential, or, in Gartner’s terms, an “Innovation trigger.” The promise to automate entitlement reviews, streamline least-privilege enforcement, and replace months of manual cleanup with intelligent, adaptive identity governance is compelling.

But as Gartner cautions, “AI is no better than human intelligence at dealing with data that doesn’t exist.” 

When it comes to AI, the limiting factor is not the algorithms: it’s the data. Fragmented directories, inconsistent entitlement models, and dormant accounts create blind spots that undermine any attempt at automation. Without a reliable identity foundation, AI has little to work with, and what it does work with is riddled with flaws.

Key Takeaway: The barrier to AI success in access governance isn’t algorithms—it’s bad identity data.

Identity-Driven Attacks Are Outpacing Traditional IAM Processes

Verizon’s 2025 DBIR confirms credential misuse as the leading breach vector, with attackers increasingly exploiting valid accounts rather than brute-forcing their way in. IBM X-Force highlights that the complexity of responding to identity-driven incidents nearly doubles compared to other attack types. Trend Micro adds that risky cloud app access and stale accounts remain among the most common exposure points. These are just three out of many prominent organizations voicing their concern.

What This Means: Static certifications and spreadsheet-based entitlement reviews cannot keep pace with adversaries who are already automating their side of the equation.

Making Identity Data AI-Ready

Radiant Logic is recognized in Gartner’s Hype Cycle for enabling AI for Access Administration as a Sample Vendor. Our role is foundational—we provide the trustworthy identity data layer that AI systems require to function effectively. 

The RadiantOne Platform unifies identity information from directories, HR systems, cloud services, and databases into one semantic identity layer. This layer ensures that access intelligence operates on clean, normalized, and correlated data. The result is an explainable and auditable basis for AI-driven recommendations and automation. 

From Episodic to Continuous Access Intelligence

With this semantic identity layer in place, AI can shift access administration from episodic to continuous monitoring, detecting entitlement drift, rationalizing excessive access, and adapting policies in near real time. 

Enabling Agentic AI in Access Governance 

Radiant Logic is investing deeply in advancing the field of Agentic AI and has already delivered tangible innovations for customers through AIDA and fastWorkflow.

What Is AIDA (AI Data Assistant)?

AIDA (AI Data Assistant) is a core capability of the platform. It is presented as a virtual assistant that simplifies user interactions, improves operational efficiency, and helps users make more informed decisions.

How AIDA Simplifies Access Reviews

For example, AIDA is used to address one of the most resource-heavy processes in IAM: user access reviews. Instead of overwhelming reviewers with raw data, AIDA highlights isolated access, surfaces over-privileged or dormant accounts, and proposes remediations in plain language. Each suggestion is linked to the underlying identity relationships, ensuring decisions remain auditable and defensible.  

What Is fastWorkflow and Why It Matters

The result is a faster review cycle with less fatigue for reviewers, while giving compliance teams confidence that AI assistance does not compromise accountability. At its core, AIDA leverages fastWorkflow, a reliable agentic Python framework.

fastWorkflow aims to address common challenges in AI agent development such as intent misunderstanding, incorrect tool calling, parameter extraction hallucinations, and difficulties in scaling. 

The outcome is much faster agent development, providing deterministic results even when leveraging smaller (and cheaper) AI models. 

Open-Sourcing fastWorkflow for the Community

Radiant Logic has released fastWorkflow to the open-source community under the permissive Apache 2.0 license, enabling developers to accelerate their AI initiatives with a flexible and proven framework. 

If you are interested in learning more about fastWorkflow, this article series is available. You can access the project and code for fastWorkflow on GitHub.

These capabilities are the first public expressions of our broader Agentic AI strategy, moving AI beyond theoretical promise into operational reality. These innovations are part of a larger roadmap exploring how intelligent agents can fundamentally transform the way enterprises secure and govern identity data. 

Our recognition in Gartner’s Hype Cycle for Digital Identity reflects why this matters: most AI initiatives in IAM fail not because of algorithms, but because of poor data quality and unreliable execution. By unifying identity data, enabling explainable guidance through AIDA, and ensuring safe, reliable execution with fastWorkflow, we are making Agentic AI practical for access governance today—while laying the foundation for what comes next.

The Business Impact 

For CISOs, this means reducing exposure by closing gaps before they are exploited. For CIOs, it delivers modernization without breaking legacy systems. For compliance leaders, it simplifies audits with data-backed, explainable decisions. 

AI for Access Administration will not replace governance programs, but it will change their tempo. What was once a quarterly campaign becomes a continuous process. What was once a compliance checkbox becomes a dynamic part of security posture. This is closely in line with regulatory initiatives where a continuous risk-based security posture is critical.  

Radiant Logic provides the missing foundation: unified, governed, and observable identity data.  

See how you can shift from a reactive identity security posture to a proactive, data-centric, AI-driven approach: contact us today. 

The post AI for Access Administration: From Promise to Practice appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Spherical Cow Consulting

The Paradox of Protection

When every digital system is labeled as critical infrastructure, do we actually make the Internet safer—or just more fragile? In this episode of The Digital Identity Digest, Heather Flanagan examines the growing tension between protection, control, and interdependence in our global digital ecosystem. Through examples from the U.S. and EU, Heather explores how expanding definitions of “critical”

“Last month’s AWS outage did more than interrupt chats and scramble payment systems. It reignited a political argument that has been simmering for years: whether cloud platforms have become too essential to be left in private hands.”

In the U.K., calls for digital sovereignty resurfaced almost immediately. Across Europe, people again questioned their dependence on U.S. providers. Even for companies that weren’t directly affected, the incident felt uncomfortably close.

In The Infrastructure We Forgot We Built, I pointed out that private infrastructure now performs public functions. The question isn’t whether these systems are critical—demonstrably, they are—it’s what happens when everything is critical. Governments continue to expand their definitions of “critical infrastructure,” extending the term to encompass finance, cloud, data, and communications. Each new addition feels justified, but the result is an ever-growing list that no one can fully protect.

Declaring something “critical” once meant ensuring its safety. Now it often means claiming jurisdiction. It creates an uncomfortable paradox: the more we classify, the more we appear to protect, and the less effective we become at coordinating a response when the next outage arrives.

Let’s poke at some interesting ramifications of classifying a service as critical.

A Digital Identity Digest: The Paradox of Protection (podcast episode, 14:09)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The American model: expanding scope, dispersing responsibility

Nowhere is this inflation more visible than in the United States, where “critical infrastructure” has evolved from a short list of sixteen sectors, including energy, water, transportation, and communications, to a sprawling catalog of national functions. The Cybersecurity and Infrastructure Security Agency (CISA) calls them National Critical Functions: over fifty interconnected capabilities that “enable the nation to function.” It’s an attempt to capture the web of dependencies that tie one system to another, but the list is so long that prioritization becomes impossible.

At the same time, National Security Memorandum 22 (NSM-22) shifted much of the responsibility for protecting those functions away from federal oversight. Under NSM-22, agencies and private operators were expected to manage their own resilience planning. In theory, decentralization builds flexibility; in practice, it creates a policy map with thousands of overlapping boundaries. The government defines criticality broadly, but control over what that means in practice is increasingly diffuse.

As of 2025, the current U.S. administration is reviewing NSM-22 and several other cybersecurity and infrastructure policies in an effort to clarify lines of responsibility and modernize federal strategy. According to Inside Government Contracts, this review could lead to significant revisions in how critical infrastructure is defined and governed, though the direction remains uncertain.

What’s unlikely to change is the underlying trend: expansion without coordination. The more functions labeled critical, the thinner the resources spread to defend them. If everyone is responsible, no one really is.

The European model: bureaucracy as resilience

Europe has taken almost the opposite approach. Where the U.S. delegates, the European Union codifies. The NIS2 Directive and the Critical Entities Resilience (CER) Directive bring a remarkable range of organizations, such as cloud providers, postal services, and wastewater plants, under the umbrella of “essential” or “important” entities. Each must demonstrate compliance with a thick stack of risk-management, incident-reporting, and supply-chain-security obligations.

It’s tempting to see this as overreach, but there’s a strange effectiveness to it. A friend recently observed that bureaucracy can be a form of resilience: it forces repeatable, auditable behavior, even when it slows everything down. Under NIS2, an outage may still occur, but the process for recovery is at least predictable. Europe’s system may be cumbersome, but it institutionalizes the habit of preparedness.

If the U.S. model risks diffusion, the European one risks inertia. Both confuse activity with assurance. To put it another way, expanding oversight doesn’t guarantee protection; it guarantees paperwork. Protection might just be a happy accident.

Interdependence cuts both ways

Underlying both approaches is the same dilemma: interdependence magnifies both stability and fragility. The OECD warns about “systemic risk” in its 2025 Governments at a Glance report. Similarly, the WEF describes this characteristic as “interconnected risk” in their Global Risks Report 2024. In both cases, they are talking about how a disturbance in one sector can ripple instantly into others, turning what should be a local failure into a global one.

But interdependence also enables the efficiencies that modern economies depend on. The same cloud architectures that expose organizations to shared risk also deliver shared recovery. If an AWS region goes down, another can often pick up the load within minutes. That doesn’t make the system invulnerable; it makes it tightly coupled, which is both a feature and a flaw.

That is the paradox of microservice design: locally resilient, globally fragile. The further we distribute responsibility, the more brittle the whole becomes. Managing that trade-off is less about eliminating interdependence than about deciding which dependencies are worth keeping.

Coordination in a fragmented world

The Carnegie Endowment’s recent report on global cybersecurity clearly frames the problem: the challenge is no longer whether to protect critical systems, but how to coordinate that protection across borders. The Internet made infrastructure transnational; regulation still stops at the border.

That tension was at the center of my earlier series, The End of the Global Internet. Fragmentation, through data-localization mandates, competing technical standards, and geopolitical distrust, is shrinking the space for cooperation. The systems that most need collective protection are emerging at the moment when collective action is least feasible.

That was made more than clear during the October 2025 AWS outage.

In the U.K., it reignited arguments about tech sovereignty, with commentators and MPs warning that reliance on U.S. providers left the country strategically exposed. In Brussels, the outage reinforced calls to accelerate the European Cloud Federation and “limit reliance on American hyperscalers.”

Tech.eu put it bluntly: “A global AWS outage exposes fragile digital foundations.” They are not wrong.

A technical event at this scale offers impressive political ammunition. The debate becomes about more than just uptime. It’s also about who controls the tools a society can’t seem to function without.

Labeling platforms as critical infrastructure amplifies that instinct. Once something is “critical,” every government wants jurisdiction. Every region seeks its own version. The intent is to strengthen sovereignty, but the result is a more fragmented Internet. Protection turns into partition.

Openness vs. control: lessons from digital public infrastructure

This tension between openness and control shows up again in global discussions around Digital Public Infrastructure (DPI). A recent G20 policy brief argues that while DPI and Critical Information Infrastructure (CII) both serve public purposes, they arise from opposite design instincts. DPI emphasizes inclusion, interoperability, and openness; CII emphasizes security, restriction, and control.

Some systems are designated critical only after they become indispensable. India’s Aadhaar identity platform is a great example. The Central Identities Data Repository (CIDR) was declared a Protected System under the country’s CII rules in 2015—five years after Aadhaar’s rollout—adding compliance obligations to what began as open, widely used public infrastructure. Those regulations were and are necessary, but it’s reasonable to ask whether a system managing such sensitive data should ever have operated without that protection in the first place.

The challenge isn’t simply timing. Too early can stifle innovation; too late can amplify harm. The real question is how societies decide when openness must yield to oversight, and whether that transition preserves the trust that made the system valuable in the first place.

The politics of protection

Critical infrastructure has always been political. As the Brookings Institution observed more than a decade ago, infrastructure investment—and, by extension, classification—has always reflected political will as much as technical necessity. The same logic applies online. Designating something “critical” can attract funding, exemptions, or strategic leverage. In a digital economy where perception drives policy, criticality itself becomes a form of currency.

The temptation to leverage the classification of “critical” is understandable: declaring something critical signals seriousness. But it also invites lobbying, nationalization, and regulatory capture. In the analog era, the line between public good and private gain was already blurry; the digital era simply made it blur faster and more broadly.

Criticality has become a negotiation, and as with all negotiations, outcomes depend less on evidence than on who has the microphone.

The discipline of selective resilience

If the first post in this series leaned toward recognizing new kinds of critical infrastructure, this one argues for restraint in doing so. Declaring everything critical doesn’t make societies safer; it makes prioritization impossible. Resilience requires hierarchy, specifically knowing what must endure, what can fail safely, and how systems recover in between.

That’s an uncomfortable truth for both policymakers and providers. (I would say I’m glad I don’t have that job, but I kind of do as a voting member of society.) Safety sounds equitable; prioritization sounds elitist. But in practice, resilience demands choice. It asks us to acknowledge that some dependencies matter more than others, and to build systems that tolerate loss rather than pretending loss is preventable.

The more we classify, the more we appear to protect, and the less effective we become at coordinating when the next outage arrives. The task ahead isn’t expanding the list. It’s learning to live with a smaller one.

If you’d rather have a notification when a new blog is published rather than hoping to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here

Transcript

[00:00:30]
Welcome back. Last month’s AWS outage did more than just interrupt chats and scramble payment systems — it ignited a long-simmering argument about whether cloud platforms have become too essential to be left entirely in private hands.

In the UK, calls for digital sovereignty resurfaced almost immediately. Across Europe, governments and enterprises once again questioned their dependence on U.S. providers. And even for organizations that weren’t directly affected, the outage felt uncomfortably close. The internet wobbled — and everybody noticed.

Defining What’s “Critical”

In my post last week, The Infrastructure We Forgot We Built, I argued that private infrastructure now performs public functions.
That’s the heart of the question here — not whether these systems are critical infrastructure (they are), but what happens when everything becomes critical?

When every failure becomes a matter of national concern, the language of protection starts collapsing under its own weight.

So, what do we actually mean when we say critical infrastructure? The phrase sounds straightforward, but it isn’t. Every jurisdiction defines it differently. Broadly speaking, critical infrastructure refers to assets, systems, and services essential for society and the economy — things whose disruption would cause harm to public safety, economic stability, or national security.

That definition works for power grids and water systems, but it gets complicated when we start talking about DNS, payments, or authentication services — the digital glue holding everything together.

Today, critical is no longer just about physical survival. It’s about functional continuity and keeping society running.

When Everything Is Critical, Nothing Is

Each country’s list of what’s critical keeps getting longer — and fuzzier. Declaring something critical once meant ensuring its safety. Now, it feels more like staking a claim to control.

That’s the paradox. The more we classify, the more we appear to protect — but the less effective we become when the next outage hits.

This tension is especially visible in the United States. Critical infrastructure once referred to 16 sectors — energy, water, transportation, communications — things you could point to in the real world.

Today, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recognizes more than 50 “national critical functions.” These include both government and private-sector operations so vital that their disruption could debilitate the nation.

It’s a noble definition — but a recipe for paralysis. Because if everything is critical, then nothing truly is.

Expansion Without Coherence

The National Security Memorandum 22 (NSM 22) was intended to modernize how those functions are managed. In theory, it decentralizes responsibility, allowing agencies and private operators to tailor protections to their own risk environments.

In practice, it’s become a policy map full of overlapping boundaries — blurry accountability, scattered resources, and fragmented oversight.

It’s a patchwork: agencies, regulators, and corporate partners each hold a piece of the responsibility, but no one has the full picture.

While the U.S. administration is reviewing these policies, the underlying trend remains: we keep expanding the definition of “critical” without improving coordination.

The result?

Expansion without coherence
Protection without prioritization
A system too diffused to defend

It’s the digital version of the bystander effect: if everyone is responsible, no one truly is.

Bureaucracy as Resilience

Let’s shift to the European model, which takes almost the opposite approach. Where the U.S. delegates, the EU codifies — through the NIS 2 Directive and the Critical Entities Resilience Directive.

These cover a wide range of organizations — from cloud providers to waste-water plants — all classified as “essential” or “important.” Each must prove compliance with risk management, incident reporting, and supply-chain security requirements.

It’s easy to dismiss that as bureaucratic overreach — and in part, it is.
But it’s also effective in its own way. Bureaucracy, for all its flaws, enforces repeatable, auditable behavior even as it slows things down.

Under NIS 2, an outage may still occur, but the recovery process is predictable. You may not like the paperwork, but you’ll have it — and sometimes, that’s half the battle.

Still, the EU’s model has limits. If the U.S. risks diffusion, the EU risks inertia. Both can be mistaken for resilience, but neither guarantees protection. What bureaucracy guarantees is documentation, not defense.

Interdependence and Fragility

Both systems face the same dilemma: interdependence.
It magnifies both stability and fragility. A local failure can ripple across sectors and become a global event — yet shared infrastructure also provides recovery pathways.

When an AWS region fails, another often takes over. That’s designed resilience, but it isn’t limitless. As we’ve seen, microservice architecture provides local stability but global fragility. The more distributed a system becomes, the harder it is to understand its failure points.

When everything depends on everything else, “critical infrastructure” starts to lose meaning.

The goal isn’t to eliminate dependencies — that’s impossible — but to decide which ones we can live with.

The Coordination Gap

Coordination, or the lack of it, is the real challenge.
A recent Carnegie Endowment report put it plainly: the issue isn’t whether to protect critical systems, but how to coordinate that protection across borders.

The internet made infrastructure transnational.
Regulation, however, still stops at the border. The wider that gap grows, the more fragile the entire system becomes.

We’re trying to protect a global network at a time when global cooperation is at a low point.

During the October AWS outage, responses were swift — and revealing:

In the UK, debates about tech sovereignty resurfaced.
In Brussels, attention turned to reducing dependence on U.S. hyperscalers.
Across tech journalism, the consensus was clear: a global AWS outage exposes fragile digital foundations.

And they’re right. But this technical failure has become political ammunition. The debate has shifted from uptime to control — who controls the tools we can’t function without?

From Protection to Fragmentation

Once something is labeled critical, every government wants jurisdiction.
Every region wants its own version. The intent is protection; the result is fragmentation.

This same tension shows up in debates about Digital Public Infrastructure (DPI) versus Critical Information Infrastructure (CII).

DPI emphasizes inclusion, interoperability, and openness. CII emphasizes security, restriction, and control.

Both serve public goals — they just stem from different design instincts.

For example, India’s Aadhaar identity system began as an open platform for inclusion. Five years later, it was reclassified as protected critical infrastructure. That shift was probably necessary, but it raises an uncomfortable question:

Should systems managing that level of personal data ever have operated without such protections?

Move too early, and you stifle innovation.
Move too late, and you amplify harm.

Timing, Trust, and Trade-Offs

The challenge is timing — and trust.
How do we decide when openness must yield to oversight, and how do we maintain public confidence when that shift happens?

Declaring something critical is never neutral. It’s a political act.
In the digital economy, criticality itself becomes a kind of currency — attracting investment, lobbying, and influence.

If a nation declares a platform critical, is it for resilience or for leverage?
Realistically, it’s both.

Selective Resilience

If The Infrastructure We Forgot We Built was about recognizing new kinds of critical systems, this reflection argues for restraint.

Declaring everything critical doesn’t make us safer — it makes prioritization impossible.
Resilience requires hierarchy: knowing what must endure, what can fail safely, and how recovery happens in between.

That’s uncomfortable for policymakers. Safety sounds equitable; prioritization sounds elitist.
But resilience demands choice. It asks us to build systems that tolerate failure rather than pretending it won’t happen.

The more we classify, the more we appear to protect — and the less control we have when it matters most.

Maybe the real task isn’t expanding the list of critical infrastructure, but learning to live with a smaller one.
Because protection is ultimately about trade-offs:

Between autonomy and interdependence.
Between openness and control.
Between trust and necessity.

The harder we try to protect everything, the more fragile we make the whole.

[00:13:33]
That’s it for this week’s episode of The Digital Identity Digest.

[00:13:38]
If this helped make things clearer — or at least more interesting — share it with a friend or colleague.
Connect with me on LinkedIn @hlflanagan, and if you enjoyed the show, please subscribe and leave a rating on your favorite podcast platform.

You can also read the full post at sphericalcowconsulting.com.
Stay curious, stay engaged, and let’s keep these conversations going.

The post The Paradox of Protection appeared first on Spherical Cow Consulting.


IDnow

The true face of fraud #2: The industrialization of fraud – How crime syndicates run $1 trillion scam empires.

The world’s most dangerous criminal organizations don’t look like what you’d expect – they resemble Fortune 500 companies. They are sophisticated, disciplined and scaled to the point of industrialization. In this part of our fraud series, we examine the inner workings of the world’s most pervasive crime: social engineering fraud. We go inside the compounds and their corporate-style departments to reveal the organized machinery that makes them so hard to dismantle. 

Romance scams. Spear phishing. Authorized Push Payment fraud (APP fraud). These social engineering attacks are no longer marginal threats. For banks and financial institutions, they represent one of the fastest-growing forms of fraud – costing billions each year and eroding customer trust and institutional reputation. 

In the first article of our fraud series, we revealed who is behind this global enterprise worth over $1 trillion and looked inside their vast complexes around the world, housing hundreds to thousands of trafficked workers. Now, we turn our focus to how scam compounds operate: how they replicate corporate structure, scale with technology, deploy Fraud-as-a-Service (FaaS) and drive threats that risk not just money, but reputation and trust. 

Fraud Inc.: Departments like real companies 

Step inside a scam compound and what you’ll find looks less like a criminal hideout and more like a corporate headquarters. Inside, these operations function as fully fledged business ecosystems.  

It all begins with procurement, the recruitment process that fuels the enterprise. Recruiters post fake job ads on social media and employment platforms, offering high salaries and promising conditions. Many who apply are students, retirees, or people in vulnerable economic situations. Few realize they’re being drawn into a human trafficking network. Once they arrive at what they believe is their new workplace, they find themselves trapped within guarded compounds and forced into labour – trained and deployed to defraud victims around the world. 

From there, new arrivals enter structured training academies that mirror legitimate corporate onboarding. They are given scripts, coached on tone and persuasion, and taught to impersonate trusted individuals or institutions. They learn how to overcome objections, create urgency, and craft convincing messages and emails – all the hallmarks of professional sales training, repurposed for deception. 

Once trained, recruits join the call centres, the heart of the operation. Floor after floor of desks is filled with “sales teams” executing scams around the clock. Performance is tracked obsessively: conversion rates, value per victim, number of successful interactions, and response times to leads. High performers are rewarded. Those who fall behind face severe punishment.

Underpinning it all are the operations and IT teams, ensuring the smooth running of the criminal enterprise. Infrastructure is maintained, systems monitored, and data managed. Meanwhile, payroll and accounting functions handle the proceeds, laundering the fraudulently obtained funds and reinvesting them to expand and sustain the operation further. 

But perhaps most sophisticated is the R&D unit: its sole purpose is to stay one step ahead of banks’ fraud prevention measures. These teams constantly evolve and fine-tune new attack methods to bypass the latest defenses. They test social engineering workflows, refine bypasses for two-factor authentication and explore how to exploit gaps in identity verification. Increasingly, they use AI tools to deepen deception with deepfake voice impersonations, synthetic IDs or AI-generated phishing platforms. 

On paper, you would not be able to distinguish the internal structure from that of a legitimate company. 

Scaling fraud with AI & FaaS 

No single compound has to reinvent the wheel – and increasingly, these large criminal enterprises are even franchising out their operations. Through Fraud-as-a-Service (FaaS) models, they sell or lease “pluggable fraud kits” on the dark web. These kits contain identity-spoofing services, exploit packages and script libraries, all available with a few clicks, making it easier for individuals to deploy sophisticated APP scams or impersonation attacks without any prior technical or scam background. It’s a franchise model for cybercrime. 

Using software and AI to streamline scams 

Scammers must hit the high call-volume KPIs demanded of them every day, and to do so they rely on Voice-over-IP (VoIP) services. VoIP allows them to make international calls cheaply via the internet while spoofing caller IDs with UK or EU country codes to appear more credible. These tools also provide a steady supply of fresh phone numbers when agents’ numbers get blacklisted as spam. 

Scammers also use software stacks that mirror legitimate corporate tools. CRM-style dashboards track leads and capture victim information like investment experience, call history and personal details. Stolen identity databases enable highly personalised attacks, and increasingly, AI chatbots automate message personalisation and generate deepfakes. Tools like ChatGPT are actively deployed inside compounds to craft convincing investor narratives and sustain prolonged, trust-building conversations with victims. 

Why banks must look beyond the transaction 

Fraud losses are exploding. In 2024, consumers in the U.S. lost over $12.5 billion to scams, with investment and imposter scams alone making up most of it. In Norway, losses from social engineering rose 21% between 2021 and 2022, reaching NOK 290.3 million ($25-30 million USD) as more users were manipulated into authorizing payments – a trend that has also been noted by European banks, which saw digital payment fraud rise by 43% in 2024 compared to 2023, with social engineering tactics increasing by 156% and phishing by 77%.  

These operations hurt banks in far more ways than immediate financial loss. Each successful scam erodes trust – from customers, regulators and the public. When customers believe their bank can’t protect them, they may flee to competitors or lose faith. Regulatory scrutiny and fines also increase, especially as social engineering becomes the fraud vector regulators are watching most closely. 

The human toll and what can be done 

It is apparent that fraud is shifting from solely technical compromises to manipulations of human trust, but not only of those deceived to send money. Many scammers are recruited under false pretenses, trafficked or working under duress – a grim reality upon which these industrial fraud machines are built.  

Tools to fight (social engineering) fraud 

Social engineering scams are among the most challenging threats banks face today. Unlike traditional fraud such as forged documents, these scams manipulate genuine customers into authorizing payments or sharing sensitive information – often without realizing they’re being deceived. This is especially true of APP fraud, where the victim is tricked into sending the money themselves. Because the transaction appears legitimate and is initiated by the account holder, detecting these scams demands a new level of vigilance and smarter technology. 

To combat this, banks need tools that go beyond standard identity checks. Solutions must be able to spot subtle signs of coercion and manipulation in real time. Video-based verification solutions are purpose-built for this and are the only verification method capable of detecting social engineering attempts through dynamic, human-led interactions and social-engineering-style questioning, which can reveal behavioral inconsistencies or signs of distress that may indicate a customer is being manipulated by a scammer. 

With social engineering, the focus shifts from verifying identity to understanding intent. That’s where a platform like the IDnow Trust Platform comes in. By integrating behavioral biometrics, such as erratic transaction histories, geographical inconsistencies and device switching, it flags suspicious patterns and enables real-time risk assessment throughout the entire customer lifecycle, not just at onboarding. 
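As an illustration only, and not a description of IDnow's actual scoring logic, a rule-based sketch of how such behavioral signals could feed a real-time risk decision might look like the following; all signal names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Hypothetical behavioral signals collected during a customer session."""
    new_device: bool            # device not seen before for this customer
    geo_mismatch_km: float      # distance between usual and current location
    txn_amount_zscore: float    # how unusual the amount is vs. the customer's history
    rapid_device_switches: int  # device changes within the session

def risk_score(s: SessionSignals) -> float:
    """Toy rule-based score in [0, 1]; real platforms use trained models."""
    score = 0.0
    if s.new_device:
        score += 0.25
    if s.geo_mismatch_km > 500:
        score += 0.25
    if s.txn_amount_zscore > 3.0:
        score += 0.30
    if s.rapid_device_switches >= 2:
        score += 0.20
    return min(score, 1.0)

# Example: an unusual amount, sent from a new device far from the usual location
signals = SessionSignals(new_device=True, geo_mismatch_km=1200.0,
                         txn_amount_zscore=3.4, rapid_device_switches=1)
if risk_score(signals) >= 0.6:
    print("Step-up verification or manual review recommended")
```

The point of the sketch is the shift it illustrates: the decision is made continuously from session behavior, not only from a one-time identity check at onboarding.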

In addition, end‑user education is a critical pillar: in the UK, where APP fraud losses have been especially high, banks are now required to reimburse victims up to £85,000. With prevention efforts now in place, case volumes have fallen by 15%.  

Together, these capabilities transform fraud prevention from reactive patching to proactive defense. 

Social engineering has always existed but in today’s digital, hyperconnected world, it has evolved into a global trade. What once were isolated scams have become industrialized operations running 24/7, powered by automation and scale. Fraud factories exploit the weakest link in the chain – human vulnerability – making them harder to detect and the biggest threat to banks today. For financial institutions, the challenge is no longer about patching single points of failure, it’s about dismantling entire production lines of deception. Understanding what happens inside these operations is now the first line of defense in a war that criminals are currently winning. 

Interested in more stories like this one? Check out:
The true face of fraud #1: The masterminds behind the $1 trillion crime industry, to explore who is behind today’s fastest-growing schemes, where the hubs of so-called scam compounds are located, and what financial organizations must understand about their opponent.
The rise of social media fraud: How one man almost lost it all, to learn about everything from romance fraud to investment scams and the many ways fraudsters use social media to target victims.

By

Nikita Rybová
Customer & Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


Dark Matter Labs

Open invitation: Participate in designing a grant call for system demonstrators linked to climate…

Open invitation: Participate in designing a grant call for system demonstrators linked to climate neutral and smart cities

Belém, Brazil, 11 November 2025. Viable Cities, in collaboration with UN-Habitat and Dark Matter Labs, is inviting urban and funding partners — including governments, development donors, philanthropies, foundations and investors — to co-design a new grant call for climate-neutral, resilient, and future-ready cities.

Following the successful 2021 Climate Smart Cities Challenge, developed in collaboration with UN Habitat, Viable Cities has implemented a programme of system demonstrators in Sweden as part of its mission to support cities in becoming climate neutral by 2030. Exploring multi-actor climate city contracts, integrated action and investment plans, as well as national-local collaboration, this initiative provided the building blocks for the current European 100 Climate-neutral and Smart Cities Mission. Combining breakthrough actions across key emission sectors, Swedish cities have started demonstrating how building portfolio approaches and public-private ecosystems can scale game changing interventions and create the conditions for a new climate neutral normal.

System demonstrators, accelerating and scaling the transition

System demonstrators are designed to test how whole-systems approaches across governance, finance, infrastructure, data, and citizen engagement can accelerate the transition to climate neutral and resilient cities that are futureproof. They also explore how to create an agile and derisked operating framework for public and private actors to design and implement viable businesses and value cases at scale. Using key societal priorities such as the energy transition, affordable housing, and aggregated purchasing power to help launch new innovations, technologies and markets as initial wedges, they start a transition journey building momentum, partnerships, and long term impact. Early examples include CoAction in Lund and STOLT in Stockholm, which focus on the nexus of energy, housing, and mobility and the realisation of emission-free inner cities, while also exploring new ways of organising collaboration and investment to achieve climate-neutrality at city scale.

Building on this work, Viable Cities, Dark Matter Labs, and UN-Habitat have since 2021 been collaborating with cities in Brazil, Uganda, Colombia, and the UK to apply and adapt the system demonstrator approach. The partnership works with cities including Curitiba, Makindye Ssabagabo, Bogotá, and Bristol to explore how systemic innovation can help them transition. Together, these efforts aim to generate practical learning about how cities can transform toward climate neutrality and resilience through coordinated, system-wide action.

Game changers, driven by local leadership

In Lund, Sweden, a game-changer approach was set up as part of the system demonstrator: with EnergyNet connecting deregulation to deployable infrastructure, the green transition becomes commercially possible.

EnergyNet is a new way to manage the distribution of electricity. This is important because today’s electricity grids struggle to manage local production, storage and sharing. The system is suitable for use in energy communities, but can also be used outside them. EnergyNet makes it possible to connect an unlimited number of local energy resources, which creates completely new conditions for low electricity prices for large quantities of green electricity.

How does it work? EnergyNet is developed according to the same principles as the Internet. It is therefore decentralized, which makes it significantly more resistant to disruptions. Through new types of power electronics, electricity distribution can now be completely controlled by software. Classic challenges for the electricity grid, such as frequency and balance, are no longer blocking. The new networks are not only decentralized but also distributed, which makes it easier to meet electricity needs as close to the consumer as possible.

The EnergyNet in Lund, driven by the CoAction initiative, represents a breakthrough in integrating multiple energy solutions into a unified, city-wide system. It was set up as a collaborative multi-stakeholder platform bringing together public authorities, businesses, and citizens to co-design a sustainable energy network. The approach integrates local renewable energy sources, smart grids, and demand-side management to optimize energy use across districts. This collaborative governance model fosters cross-sector partnerships and supports a data-driven approach to managing energy efficiency, helping Lund meet its climate targets while offering a model for scalable urban energy transitions globally.

Bristol’s Affordable Housing Initiative, driven by the Housing Festival, represents a game-changing approach to addressing the city’s housing crisis, combining climate-smart and social rent housing solutions. In response to a chronic social housing deficit, with 18,000 people on the waiting list and over 1,000 families in temporary accommodation, the initiative focused on aggregating small brownfield sites across the city to enhance housing viability using Modern Methods of Construction (MMC). The initiative’s main goal was to demonstrate how aggregation of these sites could help create net-zero social homes on small plots that are often seen as unviable for traditional housing projects.

The housing system demonstrator aims to test this aggregation model by building 25 zero-carbon social rent homes across six small sites in Bristol, which would not have been feasible individually. A digital tool was developed to help identify these sites and assess their viability, while a collaborative multi-stakeholder approach — involving Bristol City Council, Atkins Realis, Edaroth, and Lloyds Banking Group — was key to moving from concept to implementation. The project’s unique approach also includes a redefined notion of ‘viability’, integrating social infrastructure investments alongside traditional capital repayment models.

The initiative’s innovative approach has garnered support for scaling through the Small Sites Aggregator program, which aims to unlock thousands of small, underutilized brownfield sites across the UK. This strategy is seen as a path towards building 10,000 homes annually and addressing wider housing shortages, with ongoing testing in cities such as Bristol, Sheffield, and London’s Lewisham Borough. Through this work, the Housing Festival has created the Social Housing at Pace Playbook, which outlines a replicable ecosystem solution to deliver affordable, climate-smart housing at scale. This initiative has demonstrated the potential for collaboration across sectors, innovative financing, and climate-conscious design to provide a pathway for cities worldwide facing similar housing challenges.

Bristol’s Affordable Housing Initiative, driven by the Housing Festival, is an innovative model for tackling housing affordability through community-led co-design and collaborative financing. The initiative brought together local authorities, housing developers, social enterprises, and citizens to explore new ways of creating affordable, sustainable homes. The Housing Festival served as a platform for crowdsourcing ideas, testing alternative financial models, and showcasing eco-friendly building techniques. By blending public, private, and philanthropic investment, the initiative created a dynamic ecosystem that accelerates the delivery of affordable housing, prioritizing local engagement and long-term sustainability. This approach is revolutionizing how cities can rethink housing challenges by embedding innovation into the policy framework.

Looking forward: launching a global and distributed system demonstrator initiative

The first step in program alignment is the development of a System Demonstrator Grant Call, as part of the new Viability Fund for Cities. Building on the experiences of system demonstrators in Europe, Latin America and Africa, the ambition is to develop a new standard global grant call for system demonstrators. The first phase will prioritise Brazil, California, India, Sweden, Ukraine and selected global programmes. The goal is to create a shared practical framework that funders can adapt and apply in their own contexts to support system demonstrator initiatives, but which at the same time allows for joint learning, implementation, and demand-side aggregation.

Between January and March 2026, three co-design meetings will be held, with drafting and review work taking place in between. Through this process, participating organisations will jointly develop a general, open-source call text and an operating and fundraising structure that can be used to launch coordinated calls for proposals in multiple countries. In April 2026, Viable Cities, Dark Matter Labs, UN-Habitat and other partners will reflect and deliberate on the outcomes of the dialogues to decide on the launch of an international call for system demonstrators.

Organisations interested in taking part in this collaborative process are invited to submit an expression of interest. Participation is flexible, and actors can step in or out at any time before March 2026.

Submit your expression of interest to join the co-design process and help shape the future of system demonstrator funding.

https://form.typeform.com/to/JOGXlMai

For more information, contact systemdemo@viabilityfund.org

Monday, 10. November 2025

1Kosmos BlockID

The Best Identity Verification Software Providers in 2026

The post The Best Identity Verification Software Providers in 2026 appeared first on 1Kosmos.

Dock

Centralized ID, federated ID, decentralized ID: what’s the difference?


In our recent live workshop, Introduction to Decentralized Identity, Richard Esplin (Dock Labs' Head of Product) and Agne Caunt (Dock Labs' Product Owner) explained how digital identity has evolved over the years and why decentralized identity represents such a fundamental shift.

If you couldn’t attend, here’s a quick summary of the three main identity models they covered:


HYPR

HYPR and Yubico Deepen Partnership to Secure and Scale Passkey Deployment Through Automated Identity Verification


For years, HYPR and Yubico have stood shoulder to shoulder in the mission to eliminate passwords and improve identity security. Yubico’s early and sustained push for FIDO-certified hardware authenticators and HYPR’s leadership as part of the FIDO Alliance mission to reduce the world’s reliance on passwords have brought employees and customers alike into the era of modern authentication.

Today, that partnership continues to expand. As enterprise adoption of YubiKeys accelerates worldwide, HYPR and Yubico are proud to announce innovations that help enterprises validate, to the highest levels of identity assurance, that the employees receiving or using their YubiKeys are who they claim to be. 

HYPR Affirm, a leading identity verification orchestration product, now integrates directly with Yubico’s provisioning capabilities, enabling organizations to securely verify, provision, and deploy YubiKeys to their distributed workforce with full confidence that each key is used by the right, verified individual.

Secure YubiKey Provisioning for Hybrid Teams

Security leaders routinely purchase YubiKeys by the hundreds or thousands, only to confront a stubborn challenge: securely provisioning those keys to a remote or hybrid workforce quickly and verifiably.

Manual processes, from shipment tracking to recipient activation, are no longer adequate for modern security. The current setup, while seemingly robust, lacks the critical identity assurance needed to withstand today's threats. Even the most advanced hardware security key is compromised if it's issued or activated by an unverified individual. What’s needed is not just faster fulfillment, but a secure, automated bridge that links verified identity directly with hardware credentialing.

What YubiKey Provisioning with HYPR Affirm Delivers

Enterprises can now link a verified human identity to a hardware-backed, phishing-resistant credential before a device is shipped. Yubico provisions a pre-registered FIDO credential to the YubiKey, binds it to the organization’s identity provider (IdP), and ships the key directly to the end user - no IT or security team intermediation required. The user receives a key that’s ready to activate in minutes - no shared secrets over insecure channels, no guesswork, and no gaps in trust. This joint approach streamlines operations while preserving Yubico’s gold-standard hardware security and user experience.

How It Works: Pre-Register → Verify → Activate

The flow is seamless. To activate a YubiKey, HYPR Affirm first verifies that the intended user is, in fact, the right individual through high-assurance identity verification. Its orchestration capabilities include options such as government ID scanning, facial biometrics with liveness detection, location data, and even live video verification with peer-based attestation. Policy settings can easily be grouped by role and responsibility.
Once verified, the user is issued a PIN to activate the pre-registered, phishing-resistant credential on the YubiKey, which is linked to the organization’s identity provider. When the user receives their key, activation is simple, secure, and immediate.
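A minimal sketch of that ordering is shown below. The names and data structures are entirely hypothetical, not HYPR or Yubico APIs; the point is only that the activation PIN is released exclusively after identity verification succeeds for the key's registered owner.

```python
from dataclasses import dataclass
import secrets

# Hypothetical stand-ins that illustrate the verify-then-activate ordering
# described above; real deployments call the vendors' own services.

@dataclass
class VerificationResult:
    user_id: str
    passed: bool
    evidence: list[str]  # e.g. ["gov_id_scan", "face_liveness", "location"]

@dataclass
class PreRegisteredKey:
    serial: str
    user_id: str
    activated: bool = False

def verify_identity(user_id: str) -> VerificationResult:
    """Stand-in for a high-assurance identity verification step."""
    return VerificationResult(user_id=user_id, passed=True,
                              evidence=["gov_id_scan", "face_liveness"])

def issue_activation_pin(key: PreRegisteredKey, result: VerificationResult) -> str:
    """Release an activation PIN only if verification passed for the key's owner."""
    if not result.passed or result.user_id != key.user_id:
        raise PermissionError("identity verification failed or user mismatch")
    return f"{secrets.randbelow(10**6):06d}"  # one-time PIN delivered out of band

def activate(key: PreRegisteredKey, pin_entered: str, pin_issued: str) -> None:
    """Activate the pre-registered credential only with the PIN from the verified session."""
    if pin_entered != pin_issued:
        raise PermissionError("wrong PIN")
    key.activated = True

key = PreRegisteredKey(serial="YK-0001", user_id="alice")
pin = issue_activation_pin(key, verify_identity("alice"))
activate(key, pin_entered=pin, pin_issued=pin)
print(key.activated)  # True: the credential is usable only after verified activation
```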

The result is an end-to-end, verifiable trust chain that gives IT, security, and compliance teams the assurance that:

The YubiKey was issued to a verified user.
The credential was provisioned securely and cannot be intercepted.
An auditable record ties the verified identity to the hardware-backed credential.

Scalable Remote Distribution and Faster Rollouts

This is built for the real world: companies that buy 100, 1,000, or 10,000 keys and need to deploy them across regions, time zones, and employment types. By anchoring every key to a verified user before it ships, organizations reduce failed enrollments, eliminate back-and-forth helpdesk tickets, and accelerate time-to-protection for global teams. 

Beyond Day One: Resets, Re-issues, and Role Changes

Implementing automated identity verification checks into the YubiKey provisioning process streamlines initial deployment, but the same model applies after initial rollout. When a new employee is being onboarded, or a key is lost, damaged, or reassigned, HYPR Affirm can re-verify identity at the moment of risk, and Yubico can provision a replacement credential with the same tight linkage between proofing and issuance. This reduces social-engineering exposure during high-risk helpdesk moments and keeps lifecycle events as deterministic as day one.

Building a Future of Trusted, Effortless Authentication

Yubico set the global benchmark for hardware-backed, phishing-resistant authentication. HYPR is extending that foundation to unlock identity assurance at scale - ensuring every YubiKey is ready to protect access from day one.

Together, we’re transforming what has traditionally been a manual, trust-based process into a verifiable, automated, and user-friendly standard for enterprise security.

From my perspective, this partnership represents something bigger than integration. It’s a proof point that security and simplicity can coexist at scale - and that’s what excites me most. We’re helping organizations move faster toward a passwordless future where verified identity and hardware-backed trust work seamlessly, everywhere.

Learn more about how HYPR and Yubico are redefining workforce identity and authentication for the modern era: Explore the Integration.

HYPR and Yubico FAQ

Q: What changes with this new HYPR and Yubico partnership?

A: Identity verification and YubiKey provisioning are now tightly connected, so each key is pre-registered to a user before shipment and is activated through identity verification upon arrival.

Q: How does this improve remote rollouts?

A: Enterprises can ship keys globally with proof that intended recipients are the ones who activate the device, reducing logistics friction and failed enrollments.

Q:  What compliance benefits does this provide?

A: The verified identity event is linked to the cryptographic credential, producing a clear audit trail and aligning with NIST 800-63-3’s assurance model (IAL for proofing, AAL for authentication) while enabling AAL3 from first use.

Q: Does this help with loss, replacement, or re-enrollment?

A: Yes. HYPR Affirm can trigger re-verification for high-risk events (like replacement or role change) before provisioning, reducing social-engineering risk and maintaining assurance over time. Yubico Enterprise Delivery allows organizations to seamlessly replace lost authenticators in a secure and simple workflow.

Q: What is the end-user experience like?

A: Users receive a pre-registered YubiKey and activate it with a simple identity verification. They then log in with phishing-resistant passkeys - no passwords or complex setup.

 

Sunday, 09. November 2025

Ockto

Everything you need to know about digital identification


When someone applies for a financial product, there is no way around it: the customer must identify themselves and you must verify their identity. Ideally, you do this as efficiently as possible, without leaving any room for fraud, and of course in line with all applicable laws and regulations, such as the Wwft and the AVG (GDPR). The solution? Digital identification. It lets you onboard customers quickly, compliantly, and with the highest possible first-time-right ratio. At least, if you go about it the right way.


Ockam

Guerrilla Marketing for SaaS


How to Get Noticed When You Have No Budget

Continue reading on Medium »

Tuesday, 26. August 2025

Radiant Logic

Rethinking Enterprise IAM Deployments with Radiant Logic's Cloud-Native SaaS Innovation

What are the challenges enterprises face when deploying IAM systems in cloud-native environments?


In today’s cloud-first enterprise landscape, organizations face unprecedented challenges in managing identity and access across distributed, hybrid environments. Traditional on-premises IAM systems have become operational bottlenecks, with deployment cycles measured in weeks rather than hours, security vulnerabilities emerging from static configurations, and scaling limitations that can’t keep pace with business growth. As enterprises accelerate their digital transformation and embrace cloud-native architectures, these legacy constraints threaten competitive advantage and operational resilience. 

Key Takeaway: Traditional IAM systems can’t keep pace with cloud-native speed, scale, and security demands.

At Radiant Logic, we recognized these industry-wide pain points weren’t just technical challenges—they represented a fundamental shift in how IAM must be delivered and managed in the cloud era.  

Addressing the Cloud-Native IAM Gap 

The enterprise IAM landscape has been stuck in a legacy mindset while the infrastructure beneath it has transformed completely. Organizations are migrating critical workloads to Kubernetes clusters, embracing microservices architectures, and demanding the same agility from their IAM infrastructure that they have achieved in their application delivery pipelines. Yet most IAM solutions still operate with monolithic deployment models, manual configuration processes, and reactive monitoring approaches that belong to the pre-cloud era. Setting up new environments can take weeks, and keeping everything secure and compliant is a constant battle with the rollout of version patches and updates. 

The Three Critical Gaps in Traditional IAM Delivery

Through our extensive work with enterprise customers, we identified the following critical gaps in traditional IAM delivery: 

Deployment velocity: enterprises need IAM environments provisioned in hours, not weeks, to match the pace of modern DevOps practices
Operational resilience: IAM systems must be designed for failure, with automatic healing capabilities and zero-downtime updates
Real-time observability: security teams need continuous visibility into IAM performance, usage patterns, and potential threats as they emerge

Radiant Logic’s cloud-native IAM approach addresses these gaps by fundamentally reimagining how IAM infrastructure is delivered, managed, and operated in cloud-native environments. 

Re-Imagining Your IAM Operations with a Strategic Cloud-Native Architecture 

Our Environment Operations Center (EOC) is exclusively available as part of our SaaS offering, representing our commitment to cloud-native IAM delivery. This isn’t simply hosting traditional software in the cloud—it is a ground-up reimagining of IAM operations leveraging Kubernetes orchestration, microservices architecture, and cloud-native design principles. 

Why EOC Is Different from Traditional Cloud Hosting

Every EOC deployment provides customers with their own private, isolated cloud environment built on Kubernetes foundations. This cloud-native, container-based approach delivers four strategic advantages that traditional IAM deployments simply cannot match. 

Agility through microservices architecture: Each component of the IAM stack operates as an independent service that can be updated, scaled, or modified without affecting other system elements. This eliminates the risk of monolithic upgrades that have historically plagued enterprise IAM deployments and enables continuous delivery of new features and security patches.

Resilience through Kubernetes orchestration: The EOC leverages Kubernetes’ self-healing capabilities, automatically detecting and recovering from failures at the container, pod, and node levels. This means your IAM infrastructure maintains availability even when individual components experience issues, providing the operational resilience that modern enterprises demand.

Automation through cloud-native tooling: Manual configuration and deployment processes are replaced by automated workflows that provision, configure, and maintain IAM environments according to defined policies. This reduces human error, accelerates deployment cycles, and ensures consistent security posture across all environments.

Real-time observability through integrated monitoring: The EOC provides comprehensive visibility into system health, performance metrics, and security events through cloud-native observability tools that integrate seamlessly with existing enterprise monitoring infrastructure.

Key Takeaway: Cloud-native IAM replaces static deployments with flexible, self-healing, continuously observable environments.
Real-time Insights: AI-Powered Operations Management 

The EOC’s cloud-native architecture enables sophisticated AI-driven operations management that goes far beyond traditional monitoring approaches. Our platform continuously analyzes metrics including CPU utilization, memory consumption, network traffic patterns, and application response times across your Kubernetes-based IAM infrastructure. 

How AI Can Detect and Resolve Issues Automatically

When our AI detects anomalous patterns—such as unexpected spikes in authentication requests, unusual network traffic flows, or resource consumption trends that indicate potential security threats—it doesn’t just alert operators. The system automatically triggers remediation actions, such as scaling pod replicas to handle increased load, reallocating resources to maintain performance, or isolating potentially compromised components while maintaining overall system availability. 
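As a rough sketch of that general pattern, and not of the EOC's internal implementation, the snippet below shows how a detected spike in authentication traffic could trigger a scale-out through the official Kubernetes Python client. The deployment name, namespace, baseline, and threshold are placeholder assumptions.

```python
# General anomaly-to-remediation pattern: detect an unusual metric, then scale a
# Kubernetes deployment. Requires the `kubernetes` package and cluster access;
# names and thresholds below are illustrative placeholders only.
from kubernetes import client, config

AUTH_RPS_BASELINE = 200.0   # assumed normal authentication requests per second
SCALE_FACTOR = 3.0          # treat a 3x spike as anomalous

def fetch_auth_rps() -> float:
    """Stand-in for a metrics query (e.g. against Prometheus); returns requests/second."""
    return 750.0  # pretend we observed a spike

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the deployment's scale subresource to the desired replica count."""
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

def main() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    observed = fetch_auth_rps()
    if observed > AUTH_RPS_BASELINE * SCALE_FACTOR:
        # React to the spike by adding replicas of a (hypothetical) IAM service.
        scale_deployment("idp-gateway", "identity", replicas=6)
        print(f"Anomaly: {observed} req/s; scaled idp-gateway to 6 replicas")

if __name__ == "__main__":
    main()
```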

This proactive approach to operations management represents a fundamental shift from reactive problem-solving to predictive optimization. Instead of waiting for issues to impact users, the EOC identifies and addresses potential problems before they affect service delivery. 

Unified Management: Purpose-Built for Enterprise Operations 

The EOC consolidates all aspects of IAM operations management into a single, intuitive interface designed specifically for enterprise security and IT teams. Our dashboards provide real-time visibility into system health, performance trends, and security posture across your entire IAM infrastructure. 

Streamlining Everyday IAM Operations Through One Interface

Critical operations such as application version management, automated backup orchestration, and security policy enforcement are streamlined through purpose-built workflows that integrate naturally with existing enterprise tools. The platform’s responsive design ensures full functionality whether accessed from desktop workstations or mobile devices, enabling operations teams to maintain visibility and control regardless of location. 

Because the EOC is built specifically for our SaaS offering, it includes deep integration with Radiant Logic’s IAM capabilities while maintaining compatibility with your existing identity, monitoring, logging, and security infrastructure. This ensures seamless operations without requiring wholesale replacement of existing tooling. 

Future-Ready: Adaptive Security and Compliance 

The EOC’s cloud-native foundation enables adaptive security capabilities that automatically adjust protection levels based on real-time risk assessment. Our compliance management tools leverage automation to maintain regulatory adherence across dynamic, distributed environments, reducing the manual overhead traditionally associated with compliance reporting and audit preparation. 

As enterprises continue their cloud transformation journey, the EOC evolves alongside changing requirements, leveraging Kubernetes’ extensibility and our continuous delivery capabilities to introduce new features and capabilities without disrupting ongoing operations. 

Transform Your IAM Operations 

By delivering cloud-native IAM infrastructure through our SaaS platform, we are helping enterprises achieve the agility, resilience, and security required to compete in the cloud era. 

Ready to see how to transform your identity and access management operations? Contact Radiant Logic for a demo and discover how our cloud-native SaaS innovation can accelerate your organization’s digital transformation journey. 

The post Rethinking Enterprise IAM Deployments with Radiant Logic's Cloud-Native SaaS Innovation appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Ockto

From informing to guiding on pension choices – efficient and affordable


The new pension law (Wet toekomst pensioenen, or Wtp) requires pension providers not only to adapt their schemes, but also to guide participants more actively in making choices. It is no longer just about informing them. Participants must genuinely be helped to make sound decisions.


Recognito Vision

How to Protect Yourself from Identity Theft Using Trusted Biometric Solutions


In today’s connected world, your identity is more than your name or password. It’s your access key to everything from your bank to your email to your online shopping carts. But while technology has made life easier, it has also opened new doors for identity theft.

Fortunately, trusted biometric solutions are here to close those doors. These systems don’t just protect your data. They protect you using your unique traits to make identity theft nearly impossible.

 

The Growing Problem of Identity Theft

Identity theft isn’t a problem for tomorrow. It’s happening right now. According to cybersecurity analysts, global cases of online identity theft have jumped by more than 35% in just a year. Hackers now use deepfakes, AI-generated profiles, and synthetic data to impersonate real people.

Once your information is stolen, trying to recover it can be like chasing a ghost.

The most common forms of identity fraud include:

Financial theft: Stealing banking or credit details for unauthorized use

Medical identity theft: Using stolen identities for treatment or prescriptions

Synthetic identities: Creating fake people from pieces of real data

Social or digital impersonation: Cloning accounts to scam others

It’s not just about losing money. Victims spend months repairing their reputation, accounts, and credit. The best way to win this fight is by stopping it before it starts, and biometric identity theft protection does exactly that.

 

Why Biometrics Are the Future of Identity Theft Protection

Biometrics use your unique physical and behavioral features, like your face, fingerprint, or voice, to verify your identity. Unlike passwords or PINs, they can’t be stolen, guessed, or shared.

Modern systems powered by AI are incredibly accurate. The NIST Face Recognition Vendor Test reports that advanced facial recognition models reach over 99% accuracy. That means they can verify you faster and more securely than traditional login methods.

Biometric security isn’t just the future of identity theft protection services. It’s becoming the standard for how we protect everything we value online.

 

How Biometric Identity Monitoring Services Work

Traditional identity theft monitoring only tells you something went wrong after it happens. But biometric protection acts before any damage occurs. It’s active, precise, and nearly foolproof.

Here’s how it works step by step:

1. Capture

The system starts by securely capturing your biometric data, such as a face scan. It’s quick, natural, and effortless. This becomes your digital signature, a personal identity key that no one else can copy.

2. Encryption

Your biometric data is instantly encrypted. Instead of storing your actual face or fingerprint, it’s turned into coded data that even a hacker couldn’t understand. This is where real identity theft prevention begins.

3. Matching

Whenever you try to log in or verify your identity, the system compares your live scan with your stored encrypted data. If it matches, access is granted. If it doesn’t, the system blocks entry and triggers identity fraud detection to check for suspicious behavior.

4. Alert

If the system spots something unusual, it alerts you immediately or locks down access. This rapid response prevents identity fraud protection breaches before they happen.
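To make those four steps concrete, here is a toy sketch of the enroll, match, and alert loop. It is not Recognito's implementation: the "feature extractor" below is a stand-in, and real systems use trained models and dedicated template-protection schemes rather than a hash.

```python
import hashlib
import math

# Toy illustration of the capture -> protect -> match -> alert loop described
# above. The "embedding" here is a placeholder, not a real biometric template.

def capture_embedding(image_bytes: bytes, dims: int = 8) -> list[float]:
    """Pretend feature extractor: turns an image into a fixed-length vector."""
    digest = hashlib.sha256(image_bytes).digest()
    return [b / 255.0 for b in digest[:dims]]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify(stored: list[float], live: list[float], threshold: float = 0.95) -> bool:
    """Match step: grant access only if the live sample is close to the enrolled one."""
    return cosine_similarity(stored, live) >= threshold

# Enrollment (capture + protected storage) and a later login attempt
enrolled = capture_embedding(b"enrollment-selfie")
attempt = capture_embedding(b"someone-elses-photo")

if verify(enrolled, attempt):
    print("Access granted")
else:
    print("Access denied - alert raised for review")  # alert step
```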

You can see how this works by trying Recognito’s Face Biometric Playground. It’s a fun, interactive way to see how biometric verification distinguishes real people from imposters in real time.

 

The Best Identity Theft Protection Uses Biometrics

The best identity theft protection doesn’t wait for alerts. It stops fraud before it starts. That’s what biometrics do so well: they make your physical presence part of the security process.

Modern systems use:

Facial recognition to instantly confirm identity

Liveness detection to ensure it’s a real person, not a photo or deepfake

Behavioral biometrics to monitor how users type or move

Voice recognition for call-based verification

Businesses can integrate these tools using Recognito’s Face Recognition SDK and Face Liveness Detection SDK. Together, they form the core of intelligent identity monitoring services that protect users from digital fraud without adding friction.

 

Real-Life Examples of Biometric Identity Fraud Prevention

 

1. Banking and Fintech

A global bank implemented facial verification to confirm customer logins. Within months, they prevented hundreds of fraudulent account openings. Fraudsters tried using edited selfies, but the liveness detection caught every fake.

2. E-commerce

Online retailers now use face recognition at checkout to confirm identity. Even if a hacker has your card details, they can’t mimic your live face or expressions.

3. Healthcare

Hospitals are starting to use biometrics for patient verification. This prevents criminals from using stolen identities for prescriptions or insurance fraud.

These are real examples of identity fraud protection at work. It’s fast, accurate, and much harder for scammers to outsmart.

 

Compliance and Data Security Come First

The rise of biometrics comes with responsibility. Ethical systems never store your photo or raw data. Instead, they keep encrypted templates that can’t be reverse-engineered.

This approach complies with the GDPR and other global privacy standards. It also promotes transparency, with open programs like FRVT 1:1 and community-driven research such as Recognito Vision’s GitHub. These efforts ensure fairness, security, and accountability across the biometric industry.

 

How Biometrics Stop Online Identity Theft

Online identity theft has become one of the fastest-growing cybercrimes in the world. Phishing scams, deepfakes, and password breaches make it easy for hackers to impersonate you online.

Biometric technology makes that nearly impossible. Even if criminals get your password, they can’t fake your face, voice, or live presence. AI-powered identity theft prevention systems recognize you using micro-expressions, natural movement, and behavioral patterns.

It’s no wonder that industries like banking, insurance, and remote onboarding are rapidly adopting these systems. They offer the perfect blend of convenience and unbeatable security.

 

Traditional vs Biometric Identity Protection

 

Feature | Traditional Protection | Biometric Protection
Verification | Based on what you know (passwords, PINs) | Based on who you are
Speed | Slower, manual authentication | Instant, automated
Accuracy | Prone to errors or guessing | Over 99% accurate
Fraud Prevention | Reactive after breaches | Proactive before breaches
User Experience | Complex and time-consuming | Seamless and secure

If traditional methods are locks, biometrics are smart vaults that open only for their rightful owner.

 

The Future of Identity Theft Protection Services

The next generation of identity theft protection services will utilize a combination of AI, blockchain, and multi-biometric authentication for comprehensive digital security. Imagine verifying yourself anywhere in seconds, without sharing sensitive personal data.

Future systems will likely combine:

Face recognition for instant authentication

Voice and gesture biometrics for multi-layered security

Blockchain-backed identity to make personal data tamper-resistant

Regulators and innovators are already working together to ensure these systems stay ethical, inclusive, and bias-free. The goal is simple: a safer, more personal internet for everyone.

 

Staying Ahead with Recognito

Ultimately, identity theft protection is about trust. Biometrics provides that trust by using something only you have.

If you want to explore how biometric security can protect you or your business, learn how Recognito helps organizations secure users through advanced facial recognition and liveness technology, keeping identities safe while making the user experience simple.

Because in the digital world, there’s only one you, and Recognito makes sure it stays that way.

 

Frequently Asked Questions

 

1. How does biometric technology prevent identity theft?

Biometric technology uses your unique traits, like your face or voice to verify your identity. It stops criminals from using stolen passwords or fake profiles, providing stronger identity theft protection than traditional methods.

 

2. Are biometric identity monitoring services secure?

Yes. Identity monitoring services that use biometrics encrypt your data, so your face or fingerprint is never stored as an image. This makes them safe, private, and nearly impossible for hackers to exploit.

 

3. What is the best way to protect yourself from online identity theft?

The best identity theft protection combines biometric verification with secure passwords and regular monitoring. Using facial recognition and liveness detection makes it much harder for cybercriminals to impersonate you online.

 

4. Can biometrics detect identity fraud in real time?

Yes. Modern identity fraud detection systems can instantly recognize fake attempts using AI and liveness checks. They verify real human presence and block fraud before any damage occurs.


Radiant Logic

Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity


The 2025 Gartner Hype Cycle for Digital Identity talks about the growing need for standardization in identity management—especially as organizations navigate fragmented directories, cloud sprawl, and increasingly complex hybrid environments. Among the mentioned technologies, SCIM (System for Cross-domain Identity Management) stands out as a foundational protocol for modern, scalable identity lifecycle management. 

Radiant Logic is proud to be recognized in this report. Our platform’s robust SCIMv2 support positions RadiantOne as a key enabler of identity automation, built on open standards and enterprise-proven architecture. 

Why Standardized Identity Management Matters 

SCIM was introduced to replace earlier models like SPML, offering a RESTful, schema-driven protocol to streamline identity resource management across systems. It defines a consistent structure and a set of operations for creating, reading, updating, and deleting (CRUD) identity resources such as User and Group. 
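For example, creating a User resource over SCIM v2 is a single, schema-driven POST against the standard /Users endpoint (RFC 7643/7644). The sketch below uses the Python requests library with a placeholder SCIM base URL and bearer token:

```python
import requests

# Minimal SCIM v2 "create User" call using the core User schema. The base URL
# and bearer token are placeholders for whatever SCIM server you target.
SCIM_BASE = "https://scim.example.com/scim/v2"
TOKEN = "replace-me"

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["id"])  # server-assigned resource id used for later updates or deletes
```

Because every SCIM server exposes the same resource shapes and CRUD operations, the same request works across providers, which is exactly what removes the need for one-off connectors.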

Today, SCIM is broadly adopted by SaaS and IAM platforms alike. It reduces manual effort, eliminates brittle custom integrations, and strengthens governance and compliance through standardized lifecycle operations. 

Without SCIM—or a consistent identity abstraction layer behind it—organizations are forced to manage identities with ad hoc connectors, divergent schemas, and fragile provisioning scripts. Gartner rightly identifies SCIM as essential to achieving identity governance at scale, enabling consistent policy enforcement and lowering operational risk. 

Radiant Logic’s SCIM Implementation 

RadiantOne delivers full SCIMv2 support, allowing organizations to extend standardized provisioning across their entire environment—cloud, on-prem, and hybrid—without rearchitecting existing infrastructure. 

As both a SCIM client and server, RadiantOne can expose enriched identity views to downstream applications or ingest SCIM-based data from external sources for correlation and normalization. This bidirectional flexibility eliminates the need for custom connectors and hardcoded integrations. 

At the core is RadiantOne’s semantic identity layer, which unifies identity data across sources, ensures consistency, and drives intelligent automation. This data foundation supports not only SCIM-based lifecycle management, but also Zero Trust access control, governance workflows, and AI-driven analytics. 

Where RadiantOne and SCIM Deliver Real Value 

Here are six practical use cases where Radiant Logic’s SCIM support drives immediate impact: 

1. Accelerated Onboarding with Trusted Identity Data: RadiantOne consolidates authoritative sources—HR, AD, ERP, SaaS—into a single, richly structured identity record. That record is exposed over SCIM v2 (or any preferred connector) to the customer’s existing join-move-leave engines—IGA, ITSM, or custom workflows—so they grant birth-right access through the tools already built for approvals and fulfillment. Offering complete and accurate provisioning with minimal integration effort, RadiantOne stays focused on delivering clean, governed identity data rather than duplicating workflow logic.

2. From SSO to Lifecycle Management: SSO controls access, but SCIM controls who gets access. RadiantOne aggregates and enriches identity data from sources like Active Directory, LDAP, and HR systems, making it available to SCIM-enabled applications. Provisioning decisions are based on accurate, policy-aligned identity, ensuring access is granted appropriately from the start. This closes the gap between authentication and authorization, reducing overhead and aligning with Zero Trust principles.

3. Simplifying Application Migrations: RadiantOne delivers a clean, normalized identity record and, through its enriched SCIM v2 interface, maps every attribute name and value to the exact schema and format the target expects. This built-in translation removes custom scripts, connector rewrites, and brittle middleware, so admins can load thousands of users into new SaaS platforms quickly during M&A, re-platforming, or app consolidation.

4. Real-Time Updates as Identity Changes: RadiantOne keeps identity data current as roles change or users depart. Apps simply ask RadiantOne via SCIM v2 for the latest record—no custom sync jobs or code—so they can enforce least-privilege and de-provision on time while their own workflows remain untouched. This ensures timely de-provisioning and continuous enforcement of least-privilege access.

5. Precision Access for Governance and PAM: Provisioning isn’t just account creation—it’s about controlled access. RadiantOne adds business context to identity data, such as org structure, clearance, and location, so SCIM can support fine-grained entitlements. This aligns with PAM policies, improves audit readiness, and enhances IGA and analytics accuracy.

6. Keeping Workflows and Business Logic in Sync: SCIM also supports operational workflows. RadiantOne keeps identity attributes—like manager relationships, email, or job status—accurate across systems. This ensures approval chains, directories, and collaboration tools function correctly without manual updates.

Conclusion

Radiant Logic’s SCIM implementation is already powering identity automation in some of the world’s most complex IT environments, proving its value in delivering standards-based, high-integrity identity infrastructure. Book a demo to explore how Radiant Logic’s SCIM-enabled identity platform can transform your organization’s identity management practices, drive operational excellence, and secure your digital identity future. 

  

Disclaimers: 

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

GARTNER is a registered trademark and service mark of Gartner and Hype Cycle is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. 

 

 

The post Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.

Thursday, 06. November 2025

IDnow

Digital trust is the new differentiator: How congstar balances security and seamless onboarding.

In a competitive telecom market where customers expect instant onboarding, congstar must balance speed with security and compliance.

We sat down with Christopher Krause, Senior Manager Customer Finance at congstar, to discuss how the company’s partnership with IDnow enables seamless eSIM activations and postpaid sign-ups in seconds through AI-powered identity verification, why digital trust has become as important as price or network speed, and why telcos are uniquely positioned to become anchors of digital identity in the era of EU Digital Wallets and decentralized credentials.

congstar is known for fair, transparent tariffs and a simple, digital customer experience. What role does identity verification play in keeping the onboarding process both secure and smooth? 

We put convenience and security on an equal pedestal. When a customer signs up for a postpaid plan, we offer a range of identification options, especially automated identity verifications so that only real, eligible customers get through – this protects against fraud and satisfies legal requirements in Germany.  

When customers sign up for postpaid plans or activate eSIMs, how do you ensure the process feels instant and seamless, while still meeting all regulatory and fraud-prevention requirements? 

For postpaid or eSIM activation, speed is king, but we never cut corners on compliance. A perfect example is our sub-brand fraenk, our fully digital mobile phone app tariff. Here we rely on fully digital KYC tools, in our case IDnow’s automated identity verification solution, which scans a photo ID and runs a quick selfie/liveness check. These AI-powered checks turn around in seconds, replacing old-school video calls or lengthy paperwork. As a result, signing up feels almost instant to the customer – yet it still meets all legal requirements.  

Another good example is our congstar website, where we have incorporated the verification step into the checkout process. Because the identity check runs as a fast, in-browser process (no extra app needed) and each step is clearly explained, customers hardly feel any friction.  

In short, we ensure our customers are safe while keeping the process simple and transparent – a key part of our “fair and worry-free” brand promise. 

You’ve been working with IDnow for identity verification. How does this collaboration support congstar’s goals for digital efficiency, compliance and customer satisfaction? 

Our partnership with IDnow is a cornerstone of this approach. IDnow’s automated identity verification solution is AI-powered and designed exactly for telco onboarding. It lets us verify identities fully automatically using just a smartphone. The benefit is twofold: it accelerates the process for users, and it guarantees compliance with strict regulations. 

Thanks to that, we can scale up our digital sales without bottlenecks – maintaining our light, digital touch while staying on the right side of the law. In practice, this means high customer satisfaction because sign-ups are almost instant, and our operations save time – all contributing to our goal of a smooth, yet secure and digital experience. 

IDnow’s automated solution is AI-powered and designed exactly for telco onboarding. It lets us verify identities fully automatically using just a smartphone. The benefit is twofold: it accelerates the process for users, and it guarantees compliance with strict regulations.

Christopher Krause, Senior Manager Customer Finance at congstar
With growing eSIM adoption and automated onboarding, where do you see the biggest opportunities or challenges for congstar in the next few years? 

Looking ahead, the rise of eSIMs and automated onboarding is a big opportunity for us. Analysts expect eSIMs to boom soon. For us, this means we can offer even more flexible, instant activations. It also cuts costs – no more plastic SIM cards or waiting for mail delivery. The flip side: as onboarding goes 100% digital, we need to stay vigilant against evolving fraud, like SIM swap attacks or deepfakes. We’re preparing by continuously improving our automated checks and monitoring tools. Overall, the shift is positive – it lets us focus on the best customer experience and leaves us more bandwidth to innovate on products and services. The main challenge is simply staying one step ahead of bad actors as we grow digitally. 

Do you see digital trust as a new differentiator in the telecom market, similar to how speed or price once defined competition? 

Absolutely – we see digital trust becoming a real differentiator. In a mature market like ours, price and speed are table stakes. What sets a brand apart now is how much customers trust it with their heart, their data and security. Trust wins loyalty: research shows that trusted telcos gain more market share, foster long-term customers and are recommended more than others. Our brand is built on transparency and fairness, so emphasizing trust feels natural for us.  

When a customer goes through an identity check, what do you want them to feel – safety, simplicity, control?  

In the identity check itself, we want customers to feel a sense of calm and confidence. They should feel that the process is simple as we guide them clearly through each step and respectful as they decide what information to share. Altogether, we want people to walk away thinking: “That was easy, and I know my account is protected.” 

How does your verification journey contribute to that emotional experience? 

Our verification flow is designed to build those positive feelings. For example, we use the in-app browser for IDnow’s automated identity verification solution in our fraenk app, which keeps the process friendly and fast. The user sees clear instructions and immediate feedback, so they never feel lost. Every step is optimized for transparency: we show progress bars, explain why we need each check, and never ask for data twice. The result is a consistent, reassuring experience that strengthens the feeling of security and control. 

What sets a brand apart now is how much customers trust it with their heart, their data and security.

Christopher Krause, Senior Manager Customer Finance at congstar
How do you prepare for upcoming regulations like eIDAS 2.0 and the EUDI Wallet, and what opportunities do these create for telcos?  

We’re monitoring the developments around eIDAS 2.0 and the EU Digital Wallet and we are seeing them as enablers rather than headaches. As the regulations come into force, we review them with our identification partners and examine how we can further improve identification with the new options available. For telcos, the opportunity could be big: according to experts, eIDAS-compliant credentials mean we can verify any EU customer’s identity seamlessly and with reduced risk of fraud.  

In a world of digital wallets and decentralised identity, how do you see the telco’s role in verifying and protecting digital identity?  

Telcos have a vital role to play. We already have something others don’t have: a verified link between a real person and a SIM card. That makes us natural authorities for certain credentials – for instance, confirming that a person is a current mobile subscriber, or verifying age to enable services. Industry analysts note that telcos are well-positioned as “mobile-SIM-anchored” issuers of digital credentials.  

Telcos have something others don’t have: a verified link between a real person and a SIM card. That makes us natural authorities for verifying credentials.

Christopher Krause, Senior Manager Customer Finance at congstar
How important is orchestration, i.e. connecting verification, fraud and user experience, to achieving a scalable, future-proof onboarding process?  

Orchestration is absolutely critical for scaling securely. We can’t treat identity checks, fraud detection and user experience as separate silos. Instead, we tie them together. For example, if our system flags an order as high-risk, it immediately triggers additional steps. Conversely, if everything looks legitimate, the user sails through. This end-to-end coordination (identification, device risk profiling and behavior analytics) is what lets us grow quickly without ballooning costs.  

How do data-sharing initiatives or consortium-based approaches help strengthen fraud prevention across the telecom sector?  

Industry-wide collaboration is a force-multiplier against fraud because fraudsters don’t respect company boundaries. For instance, telecoms worldwide have started exchanging fraud information through platforms like the Deutsche Fraud Forum. In addition, regular and transparent communication with government authorities such as the BNetzA and LKA is essential to set uniform industry standards and combat potential fraud equally. 

How do you see AI helping to detect fraud in real time without adding friction for genuine users?  

Finally, AI is becoming essential to catch clever fraud without inconveniencing users. We use AI and machine learning models that watch behind the scenes. The smart part is that genuine customers hardly notice: the system learns normal behavior and only steps in (with an extra check or block) when something truly stands out. This adaptive learning means false alarms drop over time, reducing friction for legitimate users. We also benefit by deploying solutions like IDnow’s automated identity verification solution, which already uses AI trained on millions of data points to verify identities. In network operations, we complement that with risk scores on each transaction. The net effect is real-time fraud defense that locks out attackers but lets loyal customers pass through hassle-free.  

About congstar: 

Founded in 2007 as a subsidiary of Telekom Deutschland GmbH, congstar offers flexible, transparent, and fair mobile and DSL products tailored for digital-savvy customers. Known for its customer-first approach and innovative app-based brand fraenk, congstar continues to redefine simplicity and security in Germany’s telecom market. 

Interested in more from our customer interviews? Check out:

Docusign’s Managing Director DACH, Kai Stuebane, sat down with us to discuss how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape.

DGGS’s CEO, Florian Werner, talked to us about how strict regulatory requirements are shaping online gaming in Germany and what it’s like to be the first brand to receive a national slot licence.

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


BlueSky

The World Series Was Electric — So Was Bluesky

“How can you not be romantic about baseball?” — Moneyball 2011

As blue confetti settles in Los Angeles after an historic World Series win, we close the chapter on another electric sports event on Bluesky. It’s during these cultural flashpoints when Bluesky is at its best – when stadium crowds are going wild, you can feel that same excitement flooding into posts.

Numbers can help describe the scale of that feeling. There were approximately 600,000 baseball posts made during the World Series, based on certain key terms. (note: we’re pretty sure that this number is an undercount, as it’s hard for us to accurately attribute to baseball the flood of “oh shit” posts that came in during Game 7)

At least 3% of all posts made on November 1 (Game 7) were about baseball. The final game also resulted in a +30% bump in traffic to Bluesky, with engagement spikes up to +66% from previous days.

We loved seeing World Series weekend in action, but it wasn’t a total surprise. In the last three months, sports generated the third-highest number of unique posts of any topic. Sports posters are also seeing greater engagement rates from posting on Bluesky than on legacy social media apps - up to ten times better.

But in a world of analytics, it’s easy to lose the love of the game. In that regard, we’re fortunate to have a roster of posters who bring the intangibles. They genuinely care about sports. Less hate, more substance and celebration.

yep, this is the baseball place now

[image or embed]

— Keith Law (@keithlaw.bsky.social) November 1, 2025 at 10:28 PM

That was the greatest baseball game I’ve ever seen.

— Molly Knight (@mollyknight.bsky.social) November 1, 2025 at 9:19 PM

If this World Series proved anything, it’s that big moments are more enjoyable when they unfold in real time, with real people. Sports has the juice on Bluesky — and every high-stakes game is bringing more fans into the conversation.

Wednesday, 05. November 2025

Mythics

Mythics, LLC Appoints Sundar Padmanaban as Executive Vice President, Consulting Sales & Solution Engineering, to Drive Transformative Growth

The post Mythics, LLC Appoints Sundar Padmanaban as Executive Vice President, Consulting Sales & Solution Engineering, to Drive Transformative Growth appeared first on Mythics.

Tuesday, 04. November 2025

Shyft Network

Shyft Veriscope Expands VASP Compliance Network with Endl Integration


The global stablecoin payment market is experiencing explosive growth, with businesses demanding infrastructure that combines seamless cross-border transactions with robust FATF Travel Rule compliance. Shyft Network, a leading blockchain trust protocol and compliance solution provider, has partnered with Endl, a stablecoin neobank and payment rail provider, to integrate Veriscope for regulatory compliance. This collaboration showcases how VASPs can enable secure, compliant digital finance for modern payment infrastructure while prioritizing user privacy and meeting global regulatory standards.

Building the Future of Compliant Payment Infrastructure

As the digital payments landscape evolves, Virtual Asset Service Providers (VASPs) need blockchain compliance tools that ensure regulatory adherence without adding friction to user experience. Veriscope leverages cryptographic proof technology to facilitate secure, privacy-preserving data exchanges, aligning with FATF Travel Rule requirements and AML compliance standards. By integrating Veriscope, Endl demonstrates how next-generation stablecoin payment platforms can achieve regulatory readiness seamlessly while maintaining operational efficiency.

Endl is a regulatory-compliant stablecoin neobank providing fiat and stablecoin payment rails, multicurrency accounts, and crypto on/off ramps designed for businesses and individuals seeking secure cross-border payment solutions. By integrating Veriscope for Travel Rule compliance, Endl strengthens its commitment to security and regulatory compliance, enabling users to seamlessly convert, manage, and transfer both fiat and cryptocurrencies while meeting global AML and KYC compliance standards.

The Power of Veriscope for Global Payment Platforms

The Shyft Network-Endl partnership highlights Veriscope’s ability to transform crypto compliance and blockchain regulatory infrastructure for payment platforms:

Seamless FATF Travel Rule Compliance: Automated cryptographic proof exchanges ensure FATF Travel Rule compliance for VASPs without disrupting user workflows or transaction speed

Privacy-First AML Verification: User Signing technology enables secure KYC data verification and AML compliance while protecting customer privacy through blockchain encryption

Global Regulatory Readiness for VASPs: Position Endl for expansion into regulated crypto markets worldwide with built-in compliance infrastructure that meets international standards

Enhanced Trust in Digital Asset Transactions: Demonstrate commitment to security and regulatory standards, building confidence with both users and institutional partners in the stablecoin payment ecosystem

Zach Justein, co-founder of Veriscope, emphasized the integration’s impact on the crypto compliance landscape:

“The future of digital payments lies in seamless integration between fiat and digital assets with robust regulatory compliance. Veriscope’s integration with Endl reflects Shyft Network’s commitment to enabling compliant, privacy-preserving blockchain infrastructure for the next generation of payment platforms. As stablecoin adoption accelerates globally, FATF Travel Rule solutions like this will be essential for VASPs serving international markets and meeting evolving regulatory requirements.”

Endl joins a global network of Virtual Asset Service Providers adopting Veriscope to meet FATF Travel Rule and AML compliance demands seamlessly. This partnership underscores the critical need for secure, compliant crypto infrastructure as stablecoin payments become mainstream across cross-border transactions, international remittances, and B2B digital asset payments.

About Veriscope

Veriscope, built on Shyft Network, is the leading blockchain compliance infrastructure for Virtual Asset Service Providers (VASPs), offering a frictionless solution for FATF Travel Rule compliance and AML regulatory requirements. Powered by User Signing cryptographic proof technology, it enables VASPs to request verification from non-custodial wallets, simplifying secure KYC data verification while prioritizing privacy through blockchain encryption. Trusted globally by leading crypto exchanges and payment platforms, Veriscope reduces compliance complexity and empowers VASPs to operate confidently in regulated digital asset markets worldwide.

About Endl

Endl is a digital asset payment infrastructure provider established in 2024. The company operates stablecoin payment rails, multicurrency account services, and fiat-to-crypto conversion infrastructure for commercial and retail clients. Services include cross-border transaction processing, linked card spending functionality, and yield generation on deposited assets. The platform is designed to meet regulatory compliance standards in jurisdictions where it operates, including FATF Travel Rule and anti-money laundering requirements.

Visit Shyft Network, subscribe to our newsletter, or follow us on X, LinkedIn, Telegram, and Medium.

Book a consultation at calendly.com/tomas-shyft or email bd @ shyft.network

Shyft Veriscope Expands VASP Compliance Network with Endl Integration was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


uquodo

The Future of Identity Security: Demands of a New Era

The post The Future of Identity Security: Demands of a New Era appeared first on uqudo.

Spherical Cow Consulting

The Infrastructure We Forgot We Built

When AWS went down, payments failed and digital life froze — exposing how fragile our cloud-based world really is. In this episode of Digital Identity Digest, Heather Flanagan explores why AWS, Stripe, Twilio, and Okta have become the new critical infrastructure of global commerce. Discover how invisible digital dependencies shape resilience, why uptime isn’t true stability, and what “too big to

“A friend sent over an interesting article by Ross Haleluik that opened with ‘Why it’s not just power grid and water, but also tools like Stripe and Twilio that should be defined as critical infrastructure.'”

The point being made is that there are some services (as demonstrated by the recent AWS outage) that cause significant harm if they become unavailable. The definition of critical infrastructure needs to go beyond power, water, or even core ICT networking.

So let’s talk about that outage. On 19 October 2025, an AWS outage (of course it was the DNS) made the Internet wobble. Payments failed. Authentication broke. Delivery systems froze. For a few hours, the digital economy looked a lot less digital and a lot more fragile.

From my perspective, the strangest things failed. I was in the process of boarding a plane to the Internet Identity Workshop. Air traffic control was fine (yay for archaic systems!), but the gate agent couldn’t run the automated bag check tools. The flight purser couldn’t see what catering had been loaded. And my seatmate completely panicked, wondering if it was even safe to fly.

So many things broke. A lot of things didn’t. Everyday people had no idea how to differentiate what mattered. That moment reminded me how fragile modern “resilience” can be.

We used to worry about power grids, water, and transportation—the visible bones of civilization. Now it is APIs, SaaS platforms, and cloud services that keep everything else alive. The outage didn’t just break a few apps; it exposed how invisible dependencies have become the modern equivalents of roads and power lines.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The invisible backbone

Modern business runs on other people’s APIs (I’m also looking at you, too, MCP). Stripe handles payments. Twilio delivers authentication codes and customer messages. Okta provides identity. AWS, Google Cloud, and Azure host it all.

These are not niche conveniences. They form the infrastructure of global commerce. When you tap your phone to pay for coffee, when a government portal confirms your tax filing, or when an airline gate agent scans your boarding pass, one of these services is quietly mediating that interaction.

They don’t look like infrastructure. There are no visible grids, transformers, or pipes. They exist as lines of code, data centers, and service contracts in that they are modular, rentable, and ephemeral. Yet they behave like utilities in every meaningful way.

We have replaced public infrastructure with private platforms. The shift brought convenience and innovation, but also a new kind of risk. Infrastructure used to be something we built and maintained. Now it’s something we subscribe to and assume will stay online. We stopped building things to last and started building things to scale by leveraging someone else’s efficiencies. The assumption that the lights will always stay on has not caught up with reality.

The paradox of “resilient” design

Cloud architecture is often described as inherently resilient. Redundancy, failover, and microservices are meant to prevent collapse. But “resilient” in one dimension can mean “fragile” in another. I talked about this in an earlier post, The End of the Global Internet.

Designing for resilience makes sense in a world where the Internet is fragmenting. Companies build multi-region redundancies, on-prem backups, and hybrid clouds to protect themselves from geopolitical risk, supply chain issues, and simple human error. That same design logic—isolating risk, duplicating services, layering complexity—often increases fragility at the systemic level. Resilience is considered important, but efficiency is even better.

Microservices make each node stronger while the overall network becomes more brittle. Every layer of redundancy adds another point of failure and another dependency. A service might survive the loss of one data center but not the failure of a shared authentication API or DNS resolver. Local resilience frequently amplifies global fragility.

The AWS outage demonstrated this clearly. A system built for reliability failed because its dependencies were too successful. Interdependence works in both directions. When everyone relies on the same safety net, everyone falls together.

Utility or vendor?

This raises a larger question: should services like AWS, Stripe, or Twilio be treated as critical infrastructure? Haleluik says yes. I’m trying to decide where I stand on this, which is why I’m writing this series of blog posts.

In the United States, the FTC and FCC have debated for decades whether the Internet itself (aka, “broadband”) qualifies. If you aren’t familiar with that quagmire, you might be interested in “FCC vs FTC: A Primer on Broadband Privacy Oversight.”

The arguments for the designation are clear. Without broadband access, the modern economy falters. The arguments against it are equally clear. Labeling something as critical infrastructure introduces regulation, and regulation remains politically unpopular when applied to the Internet.

To put it another way, declaring something critical brings oversight, compliance requirements, and coordination mandates. Avoiding that label preserves flexibility and profit margins but leaves everyone downstream exposed. The result is an uneasy middle ground. These systems operate as essential infrastructure but remain governed by private interest. Their reach exceeds their obligations.

In traditional utilities, physical constraints limited monopoly power. Another way to look at it, though, is that traditional utilities are monopolized by government agencies (ideally) to the benefit of all. The economics of software, however, reward centralization. Success creates scale, and scale discourages competition. Very few can afford to get there (big enough to mask failures) from here (small enough to feel them).

I think we’re seeing quite a bit of magical thinking in the stories companies tell themselves about resilience: When your infrastructure depends on someone else’s business continuity plan, governance becomes an act of faith.

When “public” meets “critical”

While the debate over “critical infrastructure” in the United States often focuses on regulation versus innovation, the rest of the world is having a different but related conversation under the banner of digital public infrastructure (DPI).

Across the G20 and beyond, governments are grappling with whether digital public infrastructure—such as national payment systems, digital identity programs, and data exchange platforms—should be designated as critical information infrastructure (CII). A recent policy brief from a G20 engagement group argues that while both concepts overlap, they represent opposing design instincts: DPI is built for openness, interoperability, and inclusion, whereas CII emphasizes restriction, control, and national security.

That tension is already visible in India, where systems such as the Unified Payments Interface (UPI) have become de facto critical infrastructure. Although UPI has not been formally designated as CII, its scale and centrality to the nation’s payment system have raised similar questions about oversight and control. Its success has increased trust and security expectations, but also heightened concerns about market access for private and foreign participants, as well as the challenges of cross-border interoperability.

The G20 paper calls for ex-ante (early) designation of digital public systems as critical, rather than ex-post (after deployment), to avoid costly retrofits and policy confusion. But the underlying debate remains unresolved: Should public-facing digital infrastructure be treated like essential utilities, or like regulated assets of the state? The answer may depend less on technology and more on who society believes should bear responsibility for keeping the digital lights on. The answer to that won’t be the same everywhere.

Security versus availability

That tension over control doesn’t stop at the policy level. It runs straight through the design decisions companies make every day. When regulation is ambiguous, incentives fill the gap—and the strongest incentive of all is keeping systems online.

Availability has become the real currency of trust. It’s a strange thing, if you think about it logically, but human trust rarely is. (Cryptographic trust is another matter entirely.) Downtime brings backlash, lost revenue, and penalties, so companies do the rational thing: they optimize for uptime above all else. Security comes later. I don’t like it, but I understand why it happens.

Availability wins because it’s visible. Customers notice an outage immediately. They don’t notice an insecure configuration, a quiet policy failure, or a missing audit trail until something goes horribly wrong and the media gets a hold of the after-action report.

That visibility gap distorts priorities. When reliability is measured only by uptime, risk grows quietly in the background. You can’t meaningfully secure systems you don’t control, yet most organizations depend on the illusion that control and accountability can be outsourced while reliability remains intact.

And then there are the incentives, a word I probably use too often, but for good reason. The incentives in this landscape reward continuity, not transparency. Revenue flows as long as the service runs, even if it runs insecurely. Yes, fines exist, but they are exceptions, not deterrents.

What counts as “working” is still negotiated privately, even when the consequences are public. Until those definitions include societal resilience, we’ll continue to mistake uptime for stability.

Regulated by dependence

All of this sounds like arguments for the critical infrastructure label, doesn’t it? But remember, formal regulation is only one kind of control. Dependence is another, because dependence acts as a form of unofficial regulation.

Society already treats many tech platforms as critical infrastructure even without saying so. Governments host essential services on AWS. Health systems use commercial clouds for patient records. Banks rely on private payment APIs to move billions each day.

We trust these companies to act in the public interest, not because they must, but because we lack alternatives. Massive failures result in conversations like this post about whether these companies need to be more thoroughly monitored. This is the logic of “too big to fail,” translated into digital infrastructure. Authentication services, data hosting, and communication gateways now carry systemic risk once reserved for banks.

We have built a layer of critical infrastructure that is privately owned but publicly relied upon. It operates by trust, not by oversight, and that is a fragile foundation for a system this essential.

The illusion of choice

Dependence isn’t only a matter of trust. It’s also the result of market design. The systems we treat as infrastructure are built on platforms that appear competitive but converge around the same few providers.

Vendor neutrality looks fine on a procurement slide but falters in practice.

Ask a CIO whether their organization could migrate off a cloud provider; most will say yes. Ask whether they could do it today, and the answer shortens to silence.

APIs, SDKs, and proprietary integrations make switching possible but painful. That pain enforces dependence. It isn’t necessarily malicious, but it keeps theoretical competition safely theoretical.

Lock-in is the quiet tax on convenience.

The market appears to offer many choices, but those choices often lead back to the same infrastructure. A handful of global providers now underpin authentication, messaging, hosting, and payments.

When a platform failure can delay paychecks, ground flights, or disrupt hospital systems, we’re no longer talking about preference or pricing. We’re talking about public safety.

The same qualities that once made the Internet adaptable—modular APIs, composable services, seamless integration—have made it fragile in aggregate. We built a dependency chain and called it innovation.

That dependency chain doesn’t just reshape markets. It reshapes how societies determine what constitutes essential. When the same few providers sit beneath every major system, “critical infrastructure” stops being a policy category and starts being a description of reality.

The expanding definition of “critical”

What we’re looking at is the challenge that “critical” is just too big a concept. As societies become more technically complex, the definition of critical infrastructure also keeps growing.

Power, water, and transport once defined the baseline. Then came telecommunications. Then the Internet. The stack now includes authentication, payments, communication APIs, and identity services. Each layer improves capability while expanding exposure.

Whether or not you believe that these tools should exist, their failure now extends beyond the control of any single organization. As dependencies multiply, the distinction between convenience and infrastructure fades.

An AWS outage can make it really hard to check in for your flight. A Twilio misconfiguration can interrupt millions of authentication codes. A payment API failure can halt payroll for small businesses. These systems support not only individual companies but also the systems that support those companies.

If we decide that these systems function as critical infrastructure, the next question is what to do about them. Recognition doesn’t come free. It brings oversight, obligations, and trade-offs that neither governments nor providers are fully prepared to bear.

The cost of recognition

Calling an API a utility isn’t about nationalization. It’s about acknowledging that private infrastructure now performs public functions. With that acknowledgment comes responsibility.

Critical infrastructure is what society cannot function without. That definition once focused on physical essentials; now it includes the digital plumbing that supports everything else. Expanding that list has consequences. Every new addition stretches oversight thinner and diffuses accountability.

Resources are finite. Attention is finite. When every system is declared critical, prioritization becomes impossible. The next challenge isn’t deciding whether to protect these dependencies, but how much protection each truly deserves.

I can (and will!) say a lot more on this particular subject. Stay tuned for next week’s post.

Closing thoughts

Ross Haleluik’s observation was an interesting perspective on what utilities look like in modern life. Stripe, Twilio, AWS, and others do not just enable business; they are the business. They have become the unacknowledged utilities of a digital economy.

When I watched Delta’s systems falter during the AWS outage, it was not just an airline glitch. It was a glimpse into the depth of interdependence that defines a modern technical society. If efficiency is the goal, then labeling these systems as critical infrastructure may be the right path. But if resilience is the goal, then perhaps we have other choices to make.

The next outage will not be an exception. It will serve as another reminder that the foundations of the modern world are built on rented infrastructure, and the bill is coming due.

If you’d rather have a notification when a new blog is published rather than hoping to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

[00:00:29]
Hi and welcome back.

I’m recording this episode while dealing with the cold that everyone seems to have right now — so apologies for it being a little bit late. I had hoped the cold would pass before I picked up the microphone again.

But here we are, and today I want to talk about Critical Infrastructure.

Rethinking What Counts as Critical

A friend of mine recently sent over an article by Ross Haleluik that began with an interesting point:

“It’s not just power grids and water systems that count as critical infrastructure, but also tools like Stripe and Twilio.”

His argument was simple yet powerful — some services have become so essential that when they fail, the impact ripples far beyond their own operations. The AWS outage in October proved that vividly.

Before diving deeper, it’s worth defining what we mean by critical infrastructure.
These are systems and assets so vital that their disruption would harm national security, the economy, or public safety.

In the United States, the Cybersecurity and Infrastructure Security Agency (CISA) identifies 16 sectors, including energy, water, transportation, and communications. Other countries use similar frameworks, but all share the same idea: protect what society cannot function without — ideally with some level of government oversight.

Yet, as Haleluik and others note, this list keeps expanding.

When AWS Went Down

On October 19, 2025, AWS experienced a major outage in one region. A database error cascaded into failures across DNS, payments, and authentication systems.

For a few hours, the digital economy looked far less digital.

I remember it clearly: I was boarding a flight to the Internet Identity Workshop. Air traffic control was fine — archaic but stable. Yet the gate agent couldn’t check bags, and the purser couldn’t confirm catering. My seatmate was visibly anxious about whether it was even safe to fly.

So many systems failed, yet many didn’t. What struck me most was how few people could tell the difference between what mattered and what didn’t.

Invisible Dependencies and Fragile Resilience

This incident made something clear — modern resilience is fragile because we’ve built it atop invisible dependencies that we rarely acknowledge.

Modern businesses run on other people’s APIs:

Stripe handles payments. Twilio delivers authentication codes. Okta manages identity. AWS, Google Cloud, Azure host nearly everything.

These aren’t niche conveniences anymore — they’re the infrastructure of global commerce. When you tap your phone to pay for coffee or file taxes online, one of these services is working silently in the background.

They may not look like traditional infrastructure — no visible grids or pipes — but they behave like utilities.

In short, we’ve replaced public infrastructure with private platforms.

Innovation and Its Risks

This shift has brought incredible innovation but also new risks.
Infrastructure used to be something we built and maintained. Now it’s something we subscribe to and assume will always work.

We’ve optimized for scale, not longevity.
But our assumptions about resilience haven’t kept pace.

There’s a paradox here:

Cloud architectures are built for redundancy and fault tolerance. Yet every layer of resilience adds another dependency — and therefore, another potential point of failure.

When a shared DNS resolver or authentication API fails, the entire ecosystem can crumble, no matter how many backups you have.

Interdependence and Oversight

Interdependence cuts both ways. When everyone relies on the same few providers, a failure for one becomes a failure for all.

So the big question arises:
Should services like AWS or Stripe be treated as critical infrastructure?

Haleluik argues yes. I’m not entirely convinced — but I see both sides.

In the U.S., agencies like the FTC and FCC have debated for decades whether the Internet itself qualifies as critical infrastructure.
Supporters argue that broadband is essential to modern life; opponents worry that regulation could slow innovation.

Declaring something “critical” brings oversight and compliance. Avoiding the label keeps flexibility — but also leaves society exposed.

We now have systems that operate like infrastructure yet remain governed by private interests. Their influence extends far beyond their legal obligations.

Digital Public Infrastructure and Global Perspectives

Outside the U.S., this debate continues under the banner of Digital Public Infrastructure (DPI).
Governments across the G20 are exploring whether payment systems, digital identity networks, and data exchange platforms should be classified as Critical Information Infrastructure (CII).

A recent G20 policy brief captured the tension well:

DPI emphasizes openness and inclusion. CII emphasizes restriction and control.

For example, India’s Unified Payments Interface (UPI) functions as critical infrastructure in practice, even if not in name.

Its success raises key questions:

Who controls access? How should foreign participants interact? Can cross-border interoperability be trusted?

The G20’s advice: identify critical systems early, before they become too big to retrofit with proper governance. But again, recognition invites regulation, which can stifle the innovation that made those systems successful.

The Incentive Problem

When regulation lags, incentives take over — and the biggest incentive of all is uptime.

Companies prioritize continuity because:

Downtime is visible. Security failures often aren’t.

As a result, availability becomes the currency of trust.
Revenue flows as long as systems run — even if they run insecurely.

Until we include societal resilience in our definition of “working,” we’ll keep mistaking uptime for stability.

The Trust Dilemma

Dependency itself already acts as a form of regulation.
Governments host their services on AWS. Hospitals store patient records in the cloud.

We trust these platforms — not because they’re obligated to serve the public interest, but because we have no alternative.

It’s the logic of too big to fail rewritten for the digital era.
We’ve built a layer of infrastructure that’s privately owned yet publicly indispensable — and it’s running on trust, not oversight.

Lock-In and Market Gravity

Dependence isn’t just about trust — it’s about design.

If you ask most CIOs whether they could migrate off a major cloud provider, they’ll say yes.
Ask if they could do it today, and the answer is no.

Proprietary integrations make switching possible but painful. That pain enforces dependence — not maliciously, but through market gravity.

Lock-in is the tax on convenience.
And when a platform failure can delay paychecks, disrupt hospitals, or ground flights, this isn’t about preference — it’s about public safety.

Expanding the Definition of Critical

As technology grows more complex, the concept of critical infrastructure keeps expanding.

Power, water, and transportation were once the baseline. Then came telecommunications and the Internet. Now we’re talking about authentication, payments, messaging, and identity services.

Each layer increases capability — but also multiplies exposure.

The real question isn’t whether these systems are critical. They clearly are.
It’s how to manage the responsibilities that come with that recognition.

Responsibility and Resilience

[00:13:10]
Calling an API a utility doesn’t mean nationalizing it. It means acknowledging that private infrastructure now performs public functions, and that recognition carries responsibility.

Yet every new addition to the “critical” list spreads oversight thinner. If everything’s a priority, nothing truly is.

We have to decide which dependencies deserve protection — and which risks we can live with.

Stripe, AWS, and similar services don’t just enable business. They are business. They’ve become the unacknowledged utilities of our digital economy.

When I saw my airline systems falter during the AWS outage, it wasn’t just a glitch — it was a glimpse into how deeply interwoven our dependencies have become.

If your goal is efficiency, labeling these systems as critical may help create stability through regulation.

But if your goal is resilience, perhaps it’s time to design for flexibility — to accept failure as part of stability, and to plan for it.

The next outage will happen. It won’t be an exception. It will simply remind us that the foundations of the modern world run on rented infrastructure, and that rent always comes due.

[00:15:26]
And that’s it for this week’s episode of The Digital Identity Digest.

If it helped make things a little clearer — or at least more interesting — share it with a friend or colleague and connect with me on LinkedIn @HLFLanagan.

If you enjoyed the show, please subscribe and leave a rating or review on Apple Podcasts or wherever you listen.

You can also find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged — and let’s keep these conversations going.

The post The Infrastructure We Forgot We Built appeared first on Spherical Cow Consulting.


Ocean Protocol

Ocean Community 51% Tokens

By: Bruce Pon

I want to address the following false statements and allegations made by Sheikh, Goertzel and Burke:

Oct 9, 2025 — Sheikh on X Spaces (Dmitrov)
Oct 9, 2025 — Jamie Burke X Post
Oct 13, 2025 — Sheikh and Goertzel on X Spaces (Benali)
Oct 15, 2025 — Jamie Burke X Post
Oct 17, 2025 — Sheikh on All-in-Crypto Podcast

To put to rest any false claims or allegations of “theft” of property, let’s track the story from start to finish. This post has also been prepared with input from Ocean Expeditions.

Prior to March 2021, Ocean Protocol Foundation (‘OPF’) owned the minting rights to the $OCEAN token contract (0x967) with 5 signers assigned by OPF.

The Ocean community allocation, which would comprise 51% of eventual $OCEAN supply, (‘51% Tokens’) had not yet been minted but its allocation had been communicated to the Ocean community in the Ocean whitepaper.

To set up the oceanDAO properly, OPF engaged commercial lawyers, accountants and auditors to conceive a legal and auditable pathway to grant the oceanDAO the rights of minting the Ocean community allocation.

In June 2022 (but with documents dated March 2021), the rights to mint the 51% Tokens were irrevocably signed over to the oceanDAO. Along with this, seven Ocean community members and independent crypto OGs stepped in, in their individual capacities, to become trustees of the 51% Tokens.

March 31, 2021 — Legal and formal sign-over of assets to oceanDAO from Ocean Protocol Foundation

One year later, in May 2023, using the minting rights it had been granted in June 2022, the oceanDAO minted the 51% Token supply and irreversibly relinquished all control over the $OCEAN token contract 0x967 for eternity. The $OCEAN token lives wholly independently on the Ethereum blockchain and cannot be changed, modified or stopped.

TX ID: https://etherscan.io/tx/0x9286ab49306fd3fca4e79a1e3bdd88893042fcbd23ddb5e705e1029c6f53a068

The 51% Tokens were minted into existence on the Ethereum blockchain. The address holding the $OCEAN tokens sat in the ether, owned by no one. The address could release $OCEAN when at least 4 of 7 signers activated their key in the Gnosis Safe vault, which governs the address.

None of the signers had any claim of ownership over the address, Gnosis Safe vault or the contents (51% Tokens). They acted in the interest of the Ocean community and not anyone else, and certainly not OPF or the Ocean Founders.

During the ASI talks in Q2/2024, Fetch.ai applied significant pressure on OPF to convert the entire 51% Token supply immediately to $FET. OPF pushed back clearly stating that it had no power to do so as the 51% Tokens were not the property of, or under the control of OPF.

During those talks, OPF also repeatedly emphasized to Fetch.ai and SingularityNET that the property rights of EVERY token holder (including those of OPF and oceanDAO) must be respected, and that these rights were completely separate to the ASI Alliance. Fetch.ai and SingularityNET agreed with this fundamental principle.

March 24, 2024 — First discussion about Ocean joining ASI
April 3, 2024 — Pon to the Sheikh (Fetch.ai), Goertzel, Lake, Casiraghi (SingularityNET)
May 24, 2024 — Pon to D. Levy (Fetch.ai)
August 6, 2024 — ASI Board Call where Sheikh calls for ASI Alliance to refrain from exerting control over ASI members
August 17, 2025 — SingularityNET / Ocean Call

It must also be highlighted that oceanDAO was never a party to any agreement with Fetch.ai and SingularityNET. It had its own investment priorities and objectives as a regular token holder. oceanDAO’s existence, and the fact that it was the entity controlling the 51% Tokens with 7 signers, was acknowledged by SingularityNET at the very start of talks in March 2024.

In those discussions, OPF explained the intentions of the oceanDAO to release $OCEAN tokens to the community on an emission schedule.

In mid-2024, after the formation of the ASI Alliance, Fetch minted 611 million $FET tokens for the Ocean community. The sole purpose of minting this 611 million $FET was to accommodate a 100% swap-out of the Ocean community’s token supply of 1.41 billion $OCEAN. This swap-out would be via a $OCEAN-$FET token bridge and migration contract.

At that time, the oceanDAO did not initiate a conversion of the Gnosis Safe vault 51% Tokens from $OCEAN to $FET. The 51% of tokens sat dormant, as they had since being minted in May 2023.

However, with the continued and relentlessly deteriorating price of $FET due to the actions of Fetch.ai and SingularityNET, the Ocean community treasury had fallen in value from $1 billion in Q1/2024 to $300 million in Q2/2025.

oceanDAO therefore decided around April 2025 that it needed to take steps to avoid a further fall in the value of the 51% Tokens for the Ocean community by converting some of the $OCEAN into other cryptocurrencies or stablecoins, so that the oceanDAO would not be saddled with a large supply of a steadily depreciating token.

The immediate and obvious risk to the Ocean community would be that if and when suitable projects come about, the Ocean community rewards could very well be worthless due to the continued fall in $FET price. This was an important consideration for the oceanDAO when it eventually decided that active steps had to be taken to protect the interests of the Ocean community.

Upon the establishment of Ocean Expeditions, a Cayman trust, in late-June, 2025, oceanDAO transferred signing rights over the 51% Tokens to Ocean Expeditions, who then initiated a conversion of $OCEAN to $FET using the token bridge and migration contract.

TX ID: https://etherscan.io/tx/0xce91eef8788c15c445fa8bb6312e8d316088ce174454bb3c96e7caeb62da980d

Sheikh alluded to this act of conversion of $OCEAN to $FET, along with his incorrect understanding of their purpose in a podcast.

Oct 17, 2025 — Sheikh speaking on All-in Crypto

However, unlike what Sheikh falsely claimed, Ocean Expeditions’s conversion of the 51% Tokens from $OCEAN to $FET and the selling of some of these tokens, is in no way a “theft” of these tokens by Ocean Expeditions or by OPF. These are unequivocally, not ASI community tokens, not “ASI” community reward tokens and not under any consideration of the ASI community.

Ocean Expeditions converted its $OCEAN holdings into $FET by utilising the $FET that were specifically minted for the Ocean community and earmarked by Fetch.ai for this conversion. It is important to emphasize that Ocean Expeditions did not tap into any other portion of the $FET supply. Simply put, there was no “theft” because Ocean Expeditions had claimed what it was rightfully allocated and entitled to.

Any token movements of the 51% Tokens to 30 wallets, Binance, GSR or any other recipient AND any token liquidations or disposals, are the sole right of, and at the discretion of Ocean Expeditions, and no one else.

Ocean Expeditions sought to preserve the value of the community assets, for the good of the Ocean community. Any assets, whether in $FET, other tokens or fiat, remain held by Ocean Expeditions in trust for the Ocean community. The assets have not been transferred to, or in any other way taken by OPF or the Ocean Founders.

We demand that Fetch.ai, Sheikh and all other representatives of the ASI Alliance who have promulgated any lies, incitement and misrepresentations (e.g. “stolen” “scammers” “we will get you”) immediately retract their statements, delete their social media posts where these statements were made, issue a clarification to the broader news media and issue a formal apology to Ocean Expeditions, Ocean Protocol Foundation, and the Ocean Founders.

We repeat that the 51% Tokens are owned by Ocean Expeditions, for the sole purpose of the Ocean community and no one else.

Ocean Community 51% Tokens was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 03. November 2025

1Kosmos BlockID

What Is eKYC? A Quick Guide

The post What Is eKYC? A Quick Guide appeared first on 1Kosmos.



Dock

The UAE Becomes the First Country to Phase Out SMS and Email OTPs


The Central Bank of the United Arab Emirates has taken a groundbreaking step in financial security. 

It is now mandating the phase-out of SMS and email one-time passwords (OTPs).

Sunday, 02. November 2025

Ockam

The 7-Day Compounding Sprint


The difference between effort that multiplies and effort that disappears

Continue reading on Medium »

Saturday, 01. November 2025

Recognito Vision

How a Biometric Face Scanner Helps Businesses Verify Users with Confidence


In today’s digital age, knowing who’s on the other side of the screen isn’t just a security measure, it’s a necessity. From online banking to employee attendance, businesses across the globe are under pressure to verify users faster and with absolute accuracy. That’s where the biometric face scanner steps in, giving companies the confidence to know every interaction is authentic.

The rise of artificial intelligence has turned face scanning into a cornerstone of identity verification. Whether it’s an AI face scanner, a facial recognition scanner, or an advanced face scanning machine, the goal remains the same: keep things secure without slowing people down.

 

The Evolution of Facial Scanning Technology

Facial recognition has come a long way since the early 2000s. What once required large systems and clunky cameras now fits into a sleek face scanner device powered by deep learning. Modern facial scanning technology can detect and verify faces in milliseconds while maintaining compliance with global data standards such as GDPR.

AI-driven algorithms analyze facial landmarks, comparing them with stored biometric templates. This process ensures unmatched accuracy. A study conducted by NIST’s Face Recognition Vendor Test confirmed that advanced AI models now achieve over 99% accuracy in matching and verification, outperforming traditional biometric systems like fingerprints under certain conditions.

These results show that biometric verification isn’t just futuristic talk, it’s an essential layer of digital trust.

 

Why Businesses Are Switching to Face Scanner Biometric Systems

Passwords, ID cards, and manual checks are vulnerable to theft, fraud, and human error. A face scanner biometric solution eliminates these weaknesses. For many businesses, it’s not about replacing human judgment, it’s about enhancing it.

Companies are now using AI face scan systems to authenticate employees, onboard new clients, and manage visitor access seamlessly. Here’s why adoption is growing so fast:

Faster verification: A simple glance replaces lengthy manual identity checks.

Stronger security: Faces can’t be borrowed, stolen, or easily replicated.

Higher accuracy: The system adapts to lighting, angles, and even subtle changes like facial hair.

Better compliance: Aligned with data protection and global standards.

It’s the balance between convenience and control that makes facial recognition scanners invaluable in sectors such as finance, healthcare, retail, and corporate access management.


How a Face Scan Attendance Machine Improves Workforce Management

Time theft and attendance fraud cost businesses millions annually. Traditional punch cards or RFID systems can be manipulated, but a face scan attendance machine offers transparency and efficiency. Employees simply look into a face scan camera, and their attendance is logged instantly.

This system ensures that only real, verified individuals are recorded. No more buddy punching or proxy logins. Companies integrating such systems experience improved productivity and cleaner attendance data. It’s a small change that brings big operational discipline.

Solutions like the face recognition SDK make implementation simple by offering APIs that integrate directly into existing HR and access management software.


The Technology Behind AI Face Scanners

A biometric face scanner operates on the principles of artificial intelligence and computer vision. It starts by mapping key facial points such as eyes, nose, jawline, and contours to create a unique mathematical pattern.

Here’s how the process unfolds:

A face scan camera captures the user’s face in real-time.

The AI model extracts biometric data points.

The AI face scanner compares the captured data with stored templates.

The result is an instant verification decision.

Unlike passwords or tokens, facial biometrics are almost impossible to replicate. Many systems also include liveness detection to distinguish between a live person and a photo or mask. Businesses can test this feature through the face liveness detection SDK, ensuring their verification process isn’t fooled by fake attempts.
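
To make the four steps above concrete, here is a minimal, hypothetical sketch of such a verification flow in Python. The capture, liveness, and template-extraction functions are passed in as placeholders, and the 0.6 similarity threshold is an illustrative assumption rather than anything from Recognito's SDK.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # illustrative value; real systems tune this per use case


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two facial templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_user(capture_frame, check_liveness, extract_template,
                stored_template: np.ndarray) -> bool:
    """Hypothetical flow: capture -> liveness -> template -> match decision."""
    frame = capture_frame()                    # 1. the face scan camera captures the face
    if not check_liveness(frame):              # 2. reject photos, screen replays, or masks
        return False
    live_template = extract_template(frame)    # 3. the AI model extracts biometric data points
    score = cosine_similarity(live_template, stored_template)
    return score >= SIMILARITY_THRESHOLD       # 4. instant match / non-match decision
```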


Ensuring Privacy and Data Security

One major concern surrounding facial scanning technology is data privacy. Responsible companies know that collecting biometric data requires careful handling. The good news is that modern systems don’t store raw images. Instead, they use encrypted templates, mathematical representations that can’t be reverse-engineered into a real face.

Organizations adhering to GDPR and global privacy laws can confidently deploy face scanner devices without compromising user rights. Transparency, consent, and clear data retention policies are the pillars of ethical AI use.

To stay updated on compliance standards and performance benchmarks, many developers reference the NIST FRVT 1:1 reports, which highlight progress in algorithmic accuracy and fairness.


Real-World Applications of Face Scanning Machines

Facial recognition scanners have a wide range of real-world applications that continue to grow each year. Here are some key areas where they are being used:

1. Banking and Finance

Facial recognition technology helps prevent identity fraud during digital onboarding, ensuring secure access to banking services.

2. Corporate Offices

These systems provide secure and frictionless access control, allowing employees to enter restricted areas without the need for physical keys or ID cards.

3. Airports

Airports use facial recognition to streamline processes, offering faster and more secure boarding and immigration checks.

4. Education

In education, facial recognition is used for automated attendance tracking and exam proctoring, reducing administrative overhead and ensuring exam integrity.

For developers or businesses looking to explore how these systems work, the face biometric playground provides a hands-on environment to test AI-based facial recognition in action.


Challenges and Ethical Considerations

While the benefits are undeniable, biometric systems must still address several challenges. AI bias, varying lighting conditions, and evolving spoofing methods are ongoing hurdles. Continuous algorithm training using diverse datasets is key to ensuring fairness and reliability.

Ethical implementation also plays a major role. Users must always know when and why their data is being collected. Transparent policies build trust, the same trust that a biometric face scanner promises to uphold.

Open-source initiatives like Recognito Vision’s GitHub repository are helping drive responsible innovation by allowing researchers to refine and test AI-based recognition models openly and collaboratively.


The Future of Face Scanning and Business Verification

As AI becomes more sophisticated, so will biometric systems. Future scanners will combine 3D depth sensing, emotion analytics, and advanced liveness detection to improve security even further.

The evolution of AI face scan systems is not about replacing traditional verification but complementing it, building a security framework that feels effortless to users yet nearly impossible to breach.


Building Trust in the Age of Intelligent Verification

Trust isn’t built in a day, but it can be verified in a second. A well-designed biometric face scanner offers that confidence, enabling companies to know their users without a doubt. From corporate offices to fintech platforms, businesses that invest in intelligent verification today will lead tomorrow’s secure digital economy.

As one of the pioneers in ethical biometric verification, Recognito continues to empower organizations with AI-driven identity solutions that combine precision, privacy, and confidence.


Frequently Asked Questions

1. What is a biometric face scanner and how does it work?
It’s an AI-powered system that analyzes facial features to verify identity in seconds.

2. Is facial recognition technology safe for user privacy?
Yes. Modern systems use encrypted facial templates instead of storing real images.

3. What are the main benefits of using facial recognition in businesses?
It offers faster verification, stronger security, and reduced fraud risks.

4. How can companies integrate a biometric face scanner into their systems?
They can use APIs or SDKs to easily add facial verification to existing software.

Friday, 31. October 2025

Ocean Protocol

DF160, DF161 Complete and DF162 Launches

Predictoor DF160, DF161 rewards available. DF162 runs October 30th — November 6th, 2025 1. Overview Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor. Data Farming Rounds 160 (DF160) and 161 (DF161) have completed and rewards are now available after a temporary interruption in service from Oct 13 —
Predictoor DF160, DF161 rewards available. DF162 runs October 30th — November 6th, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor.

Data Farming Rounds 160 (DF160) and 161 (DF161) have completed and rewards are now available after a temporary interruption in service from Oct 13 — Oct 30.

DF162 is live, October 30th. It concludes on November 6th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF162 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs. To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF162

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean and DF Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF160, DF161 Complete and DF162 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium

How to Safely Entrust a ‘Digital Power of Attorney’ to My AI

Introducing a patent-pending framework that fuses AI, DID, and Blockchain to enable secure AI delegation Imagine this: What if you could give your personal AI assistant a Power of Attorney — so it could act on your behalf? That sounds convenient, but only if it’s done in a fully secure, verifiable, and trustworthy way. No more sharing passwords. No more blind trust in centralized services. Just

Introducing a patent-pending framework that fuses AI, DID, and Blockchain to enable secure AI delegation

Imagine this:

What if you could give your personal AI assistant a Power of Attorney — so it could act on your behalf? That sounds convenient, but only if it’s done in a fully secure, verifiable, and trustworthy way. No more sharing passwords. No more blind trust in centralized services. Just a cryptographically proven system of delegation.

A recent patent filed by CPLABS, titled “Method for Delegated AI Agent-Based Service Execution on Behalf of a User,” outlines exactly such a future. This patent proposes a new model that combines Decentralized Identity (DID), blockchain, and AI agents — so your AI can act for you, but with boundaries, verifiability, and accountability built in.

The Problem with Today’s Delegation Methods

We delegate all the time to people, apps, and APIs. But today’s delegation models are broken:

Paper-based authorizations are outdated: Physical documents are cumbersome and easily forged, and verifying who issued them is hard.
API keys & password sharing are risky: Tokens can be leaked, and once exposed, there’s no way to limit or track their use. Most systems lack built-in revocation or expiration controls.
There is no clear trace of responsibility: If your AI does something using your credentials, it is recorded as if you did it. There is no audit trail, proof of scope, or consent.

We need a more secure and user-centric model in the age of AI agents acting autonomously.

A New Solution: DID + Blockchain + AI Agent

The patent proposes an architecture built on three core technologies:

1. Decentralized Identity (DID)

Every user has a self-sovereign, blockchain-based digital ID. So does the AI agent — it operates as its own verifiable identity.

2. Blockchain Ledger

All actions and delegations are immutably recorded on-chain. Who delegated what, to whom, when, and how is traceable and tamper-proof.

3. Encrypted Delegation Credential (Digital PoA)

Instead of paper documents, users issue digitally signed credentials. These include:

The agent’s DID
The scope of authority
Expiration timestamp
Revocation endpoint

This creates a “digital power of attorney” that can be cryptographically verified by any service.

The entire process runs without centralized intermediaries and is powered by standardized DID and blockchain protocols.
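
As a rough illustration of what such a signed delegation credential might look like, here is a hedged Python sketch. The field names, the Ed25519 signing step, and the example DIDs are assumptions made for illustration; the patent does not prescribe a concrete format.

```python
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def issue_delegation_credential(user_key: Ed25519PrivateKey, user_did: str, agent_did: str,
                                scope: list, ttl_seconds: int, revocation_endpoint: str) -> dict:
    """Build and sign a hypothetical 'digital power of attorney' credential."""
    credential = {
        "issuer": user_did,                              # the delegating user's DID
        "subject": agent_did,                            # the AI agent's DID
        "scope": scope,                                  # e.g. ["book_doctor_appointment"]
        "expires_at": int(time.time()) + ttl_seconds,    # expiration timestamp
        "revocation_endpoint": revocation_endpoint,      # where revocation can be checked
    }
    payload = json.dumps(credential, sort_keys=True).encode()
    credential["signature"] = user_key.sign(payload).hex()  # user's digital signature
    return credential


# Example: delegate appointment booking to the agent for 30 days (all values hypothetical)
user_key = Ed25519PrivateKey.generate()
credential = issue_delegation_credential(
    user_key,
    user_did="did:example:alice",
    agent_did="did:example:alice-agent",
    scope=["book_doctor_appointment"],
    ttl_seconds=30 * 24 * 3600,
    revocation_endpoint="https://example.org/revocations/alice-agent",
)
```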

How It Actually Works

1. User delegates authority to AI: e.g., “My AI may book doctor appointments this month.”
2. AI submits delegation proof when acting: the AI presents both its own DID and the user-signed credential.
3. The service provider verifies: the service checks the signatures and revocation status via the DID registry and the blockchain.
4. Authorization scope is enforced: if the action goes beyond the delegated scope, it’s rejected.
5. Every action is logged on-chain: whether successful or failed, all attempts are transparently recorded.
6. The user can revoke at any time: revocation is immediate and recorded immutably on-chain.

The AI can only act using the digital “key” you’ve granted — and every move is auditable.
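
On the service side, a verifier would check the signature, the expiry, the scope, and the revocation status before honoring a request, mirroring the six-step flow above. In the sketch below the revocation check is reduced to a simple set lookup standing in for a blockchain or registry query, and all names are illustrative rather than taken from the patent.

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_delegated_request(credential: dict, issuer_public_key: Ed25519PublicKey,
                             requested_action: str, revoked_subjects: set) -> bool:
    """Hypothetical service-side check of a delegation credential."""
    body = {k: v for k, v in credential.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()

    try:  # 1. was the credential really signed by the delegating user's key?
        issuer_public_key.verify(bytes.fromhex(credential["signature"]), payload)
    except InvalidSignature:
        return False

    if time.time() > credential["expires_at"]:         # 2. expired credentials are rejected
        return False
    if requested_action not in credential["scope"]:    # 3. out-of-scope actions are rejected
        return False
    if credential["subject"] in revoked_subjects:      # 4. stand-in for a registry/on-chain check
        return False
    return True  # in the patent's design the attempt would also be logged on-chain
```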

Potential Use Cases

Healthcare: The AI assistant retrieves records from Hospital A, forwards them to Hospital B with full user consent, and logs them securely.
Finance: Delegate your AI to automate transfers up to $1,000 per day. Every transaction is verified and capped.
Government services: AI files address changes or document requests using digital credentials — recognized legally as your proxy.
Smart home access: Courier arrives? Your AI is granted temporary “open door” access, which is revoked automatically post-delivery.

Why This Matters

User-Controlled Delegation: You define the rules. Nothing happens without your explicit, cryptographically backed consent.
Verifiable Trust: Anyone can audit the record on-chain. Services don’t need to “trust” blindly.
Scalable Automation: Enables safe, rule-bound AI delegation across sectors.
Clear Responsibility: Transparent logs help determine who’s accountable if anything goes wrong.

In Summary

This system provides a secure infrastructure for AI-human collaboration, backed by blockchain. It’s like handing your AI a digitally signed key with built-in expiration and tracking — ensuring it never oversteps its bounds.

This patent envisions a simple but powerful future: Your AI can act for you, but only within the rules you define, and everything it does is traceable and accountable.

That’s not just clever tech — it’s the foundation of digital trust in an AI-driven world.

How to Safely Entrust a ‘Digital Power of Attorney’ to My AI

The Era of Giving My AI Authority

Think about it: what if you gave your personal AI assistant a Power of Attorney so it could handle things on your behalf? The key is to entrust it in a way that is completely safe and trustworthy. No more sharing passwords, and no more anxiously handing all of your data over to an app.

A recent patent filed by CPLABS (titled “Method for Delegated AI Agent-Based Service Execution on Behalf of a User”) points to exactly this future. The technology combines Decentralized Identity (DID), blockchain, and AI agents so that an AI can act on the user’s behalf while everything remains verifiable and traceable.

The Problems with Today’s Delegation Methods

We often entrust tasks to other people or to software, but current delegation methods have several problems.

The inconvenience of paper-based delegation: Paper powers of attorney are cumbersome to prepare and easy to forge. The relationship between delegator and delegate is hard to confirm, and trust is low.
The risk of sharing API keys or passwords: Connecting apps and services today usually means sharing API keys or tokens. If a key leaks, an attacker can exercise unlimited authority, and expiration or revocation features are often lacking.
No way to trace responsibility: Actions an AI performs with my account are recorded as if I did them myself, so responsibility is unclear when disputes arise.

Today’s delegation is either too inconvenient or too risky. In the AI era, these approaches are not enough.

A New Solution: DID + Blockchain + AI Agent

The patent’s proposed solution combines three core technologies.

1. DID (Decentralized Identifier): A digital ID issued without a central authority and managed directly by the user. Both the user and the AI agent have their own independent DIDs.
2. Blockchain: Delegation relationships and the AI’s actions are recorded immutably, so who was granted which authority, when, and what the AI did are all left transparently on record.
3. A digital credential-based power of attorney: A delegation token (credential) that the user signs and issues directly, containing the agent’s DID, the scope of authority, an expiration date, and a revocation URL. It is a digital power of attorney that anyone can verify.

All of this happens without a central system, on infrastructure built on DIDs and blockchain.

Here Is How It Works in Practice

1. The user issues a delegation credential to the AI (e.g., “My AI may make hospital reservations on my behalf this month”).
2. The AI submits the credential to the service, presenting it along with its own DID signature when performing a request.
3. The service verifies the request, checking the user’s and the AI’s DIDs against the blockchain DID registry and confirming the credential’s signature.
4. The scope of authority is checked, and requests outside the scope stated in the credential are rejected.
5. Action logs are recorded, with success or failure written to the blockchain so everything can be audited.
6. Revocation is possible at any time, and the revocation is also reflected on the blockchain.

In other words, the AI can act only with the digital key you issued, and every trace remains as auditable evidence.

Use Cases

Healthcare: A patient delegates access to hospital records to their AI. The AI retrieves records from Hospital A and delivers them to Hospital B (misuse is not possible).
Finance: Delegate your AI financial assistant authority for automatic transfers up to a daily limit of 1,000,000 KRW; anything beyond that is automatically rejected.
Government services: A citizen entrusts their AI with tasks such as address changes or issuing resident registration documents, and the AI is recognized as a legitimate digital proxy.
Smart home: Issue a “door open” permission valid only during the courier’s visit window, which is automatically revoked afterward.

Why Does It Matter?

User-centered control: Actions I have not authorized are impossible in the first place.
Securing digital trust: Verifiable, blockchain-based records.
Expanding AI automation: More tasks can be delegated on a foundation of trust.
Clear accountability: Even if something goes wrong, the responsible party is clear.

Ultimately, this system is a new trust infrastructure that lets AI and people collaborate safely. It is like giving the AI a digital key with a limited scope and keeping every trace of its use like a notarized record.

Closing Thoughts

This patented technology is not just an automation tool. It is an example of a digital trust framework that allows AI to act on our behalf. In the coming world of living together with AI, systems like this will let us collaborate safely, divide responsibility clearly, and enjoy more freedom.

Now is the time to put a truly trustworthy digital key in your AI’s hands.

Website | https://metadium.com Discord | https://discord.gg/ZnaCfYbXw2 Telegram(KOR) | http://t.me/metadiumofficialkor Twitter | https://twitter.com/MetadiumK Medium | https://medium.com/metadium

How to Safely Entrust a ‘Digital Power of Attorney’ to My AI was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aergo

BC 101 #7: DAO, Standardization, and Neutrality

A Decentralized Autonomous Organization (DAO) acts as a programmable coordination layer, recording proposals, votes, and outcomes through immutable or verifiable channels. This ensures that every decision can be audited and traced. For blockchain systems spanning a broad spectrum of applications — from enterprise solutions and government infrastructure to consumer-facing services — this stru

A Decentralized Autonomous Organization (DAO) acts as a programmable coordination layer, recording proposals, votes, and outcomes through immutable or verifiable channels. This ensures that every decision can be audited and traced.

For blockchain systems spanning a broad spectrum of applications — from enterprise solutions and government infrastructure to consumer-facing services — this structure provides the transparency and accountability required by regulated entities while enabling decentralized control.
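
To make “recording proposals, votes, and outcomes” concrete, here is a toy, in-memory sketch of such a governance ledger in Python. It is not Snapshot’s or Tally’s API, and a real DAO would keep this state on-chain; the quorum and majority rules are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Proposal:
    proposal_id: str
    description: str
    votes: Dict[str, bool] = field(default_factory=dict)  # voter address -> for/against


@dataclass
class GovernanceLedger:
    """Toy append-only record of proposals, votes, and outcomes."""
    proposals: List[Proposal] = field(default_factory=list)

    def submit(self, proposal: Proposal) -> None:
        self.proposals.append(proposal)

    def vote(self, proposal_id: str, voter: str, support: bool) -> None:
        proposal = next(p for p in self.proposals if p.proposal_id == proposal_id)
        proposal.votes[voter] = support  # every vote is attributable and auditable

    def outcome(self, proposal_id: str, quorum: int) -> str:
        proposal = next(p for p in self.proposals if p.proposal_id == proposal_id)
        if len(proposal.votes) < quorum:
            return "no quorum"
        in_favor = sum(proposal.votes.values())
        return "passed" if in_favor * 2 > len(proposal.votes) else "rejected"
```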

DAO governance delivers substantial value by providing a standardized, neutral framework for coordination that reduces operational and regulatory friction.

Third-Party and In-House DAO Infrastructures

In recent years, the infrastructure supporting DAOs has advanced significantly. A variety of third-party governance solutions now offer stable, enterprise-ready interfaces for managing proposals, conducting votes, and executing multi-signature transactions. Some noteworthy platforms include:

Snapshot: An off-chain, gasless voting platform widely used across leading protocols. It allows flexible voting strategies, quorum requirements, and verifiable results without introducing high transaction costs.
Tally: A fully on-chain governance dashboard built on Ethereum, designed for transparency and auditability of protocol votes, treasury management, and proposal lifecycle tracking.

These solutions form a growing middleware ecosystem that brings governance to the same level of technical maturity as enterprise resource planning systems.

At the same time, in-house DAO frameworks extend beyond generic governance tooling. They integrate DAO logic with the project’s native identity, treasury, and compliance layers, enabling seamless coordination between on-chain and organizational processes. This approach ensures that governance not only reflects community consensus but also aligns with operational and regulatory realities.

DAO Governance as a Mechanism for Neutrality

DAO governance reinforces network neutrality, a crucial characteristic for projects that operate across multiple jurisdictions or regulatory contexts. This structural neutrality diminishes the concentration of control that can lead to compliance issues and enables projects to remain resilient during regulatory or organizational changes.

For blockchain systems aimed at enterprises, DAO infrastructure provides three measurable benefits:

Regulatory Adaptability: Transparent proposal and voting systems create a verifiable governance record suitable for audits, disclosures, or compliance reviews.
Operational Continuity: Distributed governance logic allows decision-making to persist independently of any single corporate entity or leadership group.
Stakeholder Alignment: Token-weighted or role-based participation aligns validators, contributors, and investors under a unified, rule-based coordination framework.

Toward Structured and Resilient Governance

As blockchain networks evolve into critical data and financial infrastructure, governance must progress beyond mere symbolic decentralization. DAO systems offer a structured, compliant, and resilient approach to managing complex ecosystems.

DAOs are not merely voting or staking platforms. They serve as the operational core that defines how decentralized systems make, record, and enforce decisions. Only with a well-structured DAO model can projects establish the legal, operational, and procedural foundation required to function as sustainable organizations.

BC 101 #7: DAO, Standardization, and Neutrality was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Progress Update: Building Healthier Social Media

Over the next few months, we’ll be iterating on the systems that make Bluesky a better place for healthy conversations. Some experiments will stick, others will evolve, and we’ll share what we learn along the way.

At Bluesky, we’re building a place where people can have better conversations, not just louder ones. We’re not driven by engagement-at-all-costs metrics or ad incentives, so we’re free to do what’s good for people. One of the biggest parts of that is the replies section. We want fun, genuine, and respectful exchanges that build friendships, and we’re taking steps to make that happen.

So far, we’ve introduced several tools that give people more control over how they interact on Bluesky. The followers-only reply setting helps posters keep discussions focused on trusted connections, mod lists make it easier to share moderation preferences, and the option to detach quote posts gives people a way to limit unwanted attention or dogpiling. These features have laid the groundwork for what we’re focused on now: improving the quality of replies and making conversations feel more personal, constructive, and in your control.

In our recent post, we shared some of the new ideas we were starting to develop to encourage healthier interactions. Since then, we’ve started rolling out updates, testing new ranking models, and studying how small product decisions can change the tone of conversations across the network.

We’re testing a mix of ranking updates, design changes, and new feedback tools — all aimed at improving the quality of conversation and giving people more control over their experience.

Social proximity

We’re developing a system that maps the “social neighborhoods” that naturally form on Bluesky — the people you already interact with or would likely enjoy knowing. By prioritizing replies from people closer to your neighborhood, we can make conversations feel more relevant, familiar, and less prone to misunderstandings.

Dislikes beta

Soon, we’ll start testing a “dislike” option as a new feedback signal to improve personalization in Discover and other feeds. Dislikes help the system understand what kinds of posts you’d prefer to see less of. They may also lightly inform reply ranking, reducing the visibility of low-quality replies. Dislikes are private and the signal isn’t global — it mainly affects your own experience and, to an extent, others in your social neighborhood.

Improved toxicity detection

Our latest model aims to do a better job of detecting replies that are toxic, spammy, off-topic, or posted in bad faith. Posts that cross the line are down-ranked in reply threads, search results, and notifications, reducing their visibility while keeping conversations open for good-faith discussion.
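
Bluesky has not published an exact ranking formula, but the ideas described here (social proximity, private dislikes, toxicity down-ranking) can be pictured as a simple scoring function. The weights, thresholds, and signal names below are purely illustrative assumptions, not Bluesky’s implementation.

```python
from dataclasses import dataclass


@dataclass
class ReplySignals:
    social_proximity: float  # 0..1, closeness to the viewer's "social neighborhood"
    dislike_rate: float      # 0..1, dislike signal from the viewer and their neighborhood
    toxicity: float          # 0..1, score from a toxicity/spam classifier


def reply_score(s: ReplySignals) -> float:
    """Illustrative ranking: boost nearby voices, lightly damp disliked replies, down-rank toxic ones."""
    score = 1.0 + 0.5 * s.social_proximity   # replies from closer neighborhoods rank a bit higher
    score -= 0.3 * s.dislike_rate            # dislikes lightly inform reply ranking
    if s.toxicity > 0.8:                     # replies that cross the line are down-ranked
        score *= 0.2
    return score


replies = [ReplySignals(0.9, 0.1, 0.05), ReplySignals(0.1, 0.6, 0.9)]
ranked = sorted(replies, key=reply_score, reverse=True)  # the friendlier, nearby reply surfaces first
```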

Reply context

We’re testing a small change to how the “Reply” button works on top-level posts: instead of jumping straight into the composer, it now takes you to the full thread first. We think this will encourage people to read before replying — a simple way to reduce context collapse and redundant replies.

Reply settings refresh

Bluesky’s reply settings give posters fine-grained control over who can reply, but many people don’t realize they exist. We’re rolling out a clearer design and a one-time nudge in the post composer to make them easier to find and use. Better visibility means more people can shape their own conversations and prevent unwanted replies before they happen. Conversations you start should belong to you.

We won’t get everything right on the first try. Building healthier social media will take ongoing experimentation supported by your feedback. This work matters because it tackles a root flaw in how social platforms have been built in the past — systems that optimize for attention and outrage instead of genuine conversation. Improving replies cuts to the heart of that problem.

Over the next few months, we’ll keep refining these systems and measuring their impact on how people experience Bluesky. Some experiments will stick, others will evolve, and we’ll share what we learn along the way.

Thursday, 30. October 2025

IDnow

Breaking down biases in AI-powered facial verification.

How IDnow’s latest collaborative research project, MAMMOth, will make the connected world fairer for all – regardless of skin tone. While the ability of artificial intelligence (AI) to optimize certain processes is well documented, there are still genuine concerns regarding the link between unfa
How IDnow’s latest collaborative research project, MAMMOth, will make the connected world fairer for all – regardless of skin tone.

While the ability of artificial intelligence (AI) to optimize certain processes is well documented, there are still genuine concerns that unfair or unequal data processing can reinforce discriminatory practices and social inequality.

In November 2022, IDnow, alongside 12 European partners, including academic institutions, associations and private companies, began the MAMMOth project, which set out to explore ways of addressing bias in face verification systems. 

Funded by the European Research Executive Agency, the goal of the three-year project, which wrapped on October 30, 2025, was to study existing biases and create a toolkit for AI engineers, developers and data scientists so they may better identify and mitigate biases in datasets and algorithm outputs.

Three use cases were identified:    

Face verification in identity verification processes.
Evaluation of academic work. In the academic world, the reputation of a researcher is often tied to the visibility of their scientific papers and how frequently they are cited. Studies have shown that on certain search engines, women and authors from less prestigious countries/universities tend to be less represented.
Assessment of loan applications.

IDnow predominantly focused on the face verification use case, with the aim of implementing methods to mitigate biases found in algorithms.   

Data diversity and face verification bias.

Even the most state-of-the-art face verification models are typically trained on conventional public datasets, which feature an underrepresentation of minority demographics. A lack of diversity in data makes it difficult for models to perform well on underrepresented groups, leading to higher error rates for people with darker skin tones.

To address this issue, IDnow proposed using a ‘style transfer’ method to generate new identity card photos that mimic the natural variation and inconsistencies found in real-world data. Augmenting the training dataset with these synthetic images not only improves model robustness through exposure to a wider range of variations but also reduces bias against darker-skinned faces, significantly lowering error rates for darker-skinned users and providing a better user experience for all.
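
Conceptually, this kind of augmentation loop is simple: for each training image, generate one or more style-transferred variants that mimic ID-photo color shifts and add them to the training set. The sketch below assumes a hypothetical style_transfer function standing in for whatever generative model is used; it is not IDnow’s actual pipeline.

```python
import random
from typing import Callable, List, Tuple, TypeVar

Image = TypeVar("Image")  # whatever image type the training pipeline uses


def augment_with_style_transfer(
    dataset: List[Tuple[Image, str]],               # (face image, identity label) pairs
    style_transfer: Callable[[Image, str], Image],  # placeholder for the generative model
    styles: List[str],                              # e.g. ["id_card_print", "faded_scan"]
    variants_per_image: int = 1,
) -> List[Tuple[Image, str]]:
    """Add synthetic ID-photo-style variants so the model sees more real-world variation."""
    augmented = list(dataset)
    for image, label in dataset:
        for _ in range(variants_per_image):
            style = random.choice(styles)
            augmented.append((style_transfer(image, style), label))
    return augmented
```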

The MAMMOth project has equipped us with the tools to retrain our face verification systems to ensure fairness and accuracy – regardless of a user’s skin tone or gender. Here’s how IDnow Face Verification works.

When registering for a service or onboarding, IDnow runs the Capture and Liveness step, which detects the face and assesses image quality. We also run a liveness and anti-spoofing check to ensure that photos, screen replays, or paper masks are not being used.

The image is then cross-checked against a reference source, such as a passport or ID card. During this stage, faces from the capture step and the reference face are converted into compact facial templates, capturing distinctive features for matching. 

Finally, the two templates are compared to determine a “match” vs. “non‑match”, i.e. do the two faces belong to the same person or not? 
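
As a toy illustration of that final step, two compact templates can be compared with a distance measure and a calibrated threshold. The numbers and function below are made up for illustration and are not IDnow’s matcher.

```python
import numpy as np


def is_same_person(selfie_template: np.ndarray, id_photo_template: np.ndarray,
                   threshold: float = 1.1) -> bool:
    """Match vs. non-match: a small distance between templates means the same person."""
    distance = float(np.linalg.norm(selfie_template - id_photo_template))
    return distance < threshold  # the threshold is tuned to balance false accepts and false rejects
```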

Through hard work by IDnow and its partners, we developed the MAI-BIAS Toolkit to enable developers and researchers to detect, understand, and mitigate bias in datasets and AI models.

We are proud to have been a part of such an important collaborative research project. We have long recognized the need for trustworthy, unbiased facial verification algorithms. This is the challenge that IDnow and MAMMOth partners set out to overcome, and we are delighted to have succeeded.

Lara Younes, Engineering Team Lead and Biometrics Expert at IDnow.
What’s good for the user is good for the business.

While the MAI-BIAS Toolkit has demonstrated clear technical improvements in model fairness and performance, the ultimate validation, as is often the case, will lie in the ability to deliver tangible business benefits.  

IDnow has already begun to retrain its systems with learnings from the project to ensure our solutions are enhanced not only in terms of technical performance but also in terms of ethical and social responsibility.

Top 5 business benefits of IDnow’s unbiased face verification.

Fairer decisions: The MAI-BIAS Toolkit ensures all users, regardless of skin color or gender, are given equal opportunities to pass face verification checks, ensuring that no group is unfairly disadvantaged.
Reduced fraud risks: By addressing biases that may create security gaps for darker-skinned users, the MAI-BIAS Toolkit strengthens overall fraud prevention by offering a more harmonized fraud detection rate across all demographics.
Explainable AI: Knowledge is power, and the Toolkit provides actionable insights into the decision-making processes of AI-based identity verification systems. This enhances transparency and accountability by clarifying the reasons behind specific algorithmic determinations.
Bias monitoring: Continuous assessment and mitigation of biases are supported throughout all stages of AI development, ensuring that databases and models remain fair with each update to our solutions.
Reducing biases: By following the recommendations provided in the Toolkit, research methods developed within the MAMMOth project can be applied across industries and contribute to the delivery of more trustworthy AI solutions.

As the global adoption of biometric face verification systems continues to increase across industries, it’s crucial that any new technology remains accurate and fair for all individuals, regardless of skin tone, gender or age.

Montaser Awal, Director of AI & ML at IDnow.

“The legacy of the MAMMOth project will continue through its open-source tools, academic resources, and policy frameworks,” added Montaser. 

For a more technical deep dive into the project from one of our research scientists, read our blog ‘A synthetic solution? Facing up to identity verification bias.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Ontology

How Ontology Blockchain Can Strengthen Zambia’s Digital Ecosystem

Introduction Zambia, like many African nations, is on a path toward digital transformation. With growing mobile penetration, fintech adoption, and government interest in digital services, the country needs reliable, secure, and scalable technologies to support inclusive growth. One of the most promising tools is Ontology Blockchain — a high-performance, open-source blockchain specializing in digi
Introduction

Zambia, like many African nations, is on a path toward digital transformation. With growing mobile penetration, fintech adoption, and government interest in digital services, the country needs reliable, secure, and scalable technologies to support inclusive growth. One of the most promising tools is Ontology Blockchain — a high-performance, open-source blockchain specializing in digital identity, data security, and decentralized trust.

Unlike general-purpose blockchains, Ontology focuses on building trust infrastructure for individuals, businesses, and governments. By leveraging Ontology’s features, Zambia can unlock innovation in financial inclusion, supply chain transparency, e-governance, and education.

1. Digital Identity for All Zambians

A key challenge in Zambia is limited access to official identification. Without proper IDs, many citizens struggle to open bank accounts, access healthcare, or register land. Ontology’s ONT ID (a decentralized digital identity solution) could:

Provide every citizen with a secure, self-sovereign digital ID stored on the blockchain.
Link identity with services such as mobile money, health records, and education certificates.
Reduce fraud in financial services, voting systems, and government benefit programs.

This supports Zambia’s push for universal access to identification while protecting privacy.

2. Financial Inclusion & Digital Payments

With a large unbanked population, Zambia’s fintech growth depends on trust and interoperability. Ontology offers:

Decentralized finance (DeFi) solutions for micro-loans, savings, and remittances without reliance on traditional banks.
Cross-chain compatibility to connect Zambian fintech startups with global crypto networks.
Reduced transaction fees compared to traditional remittance channels, making it cheaper for Zambians abroad to send money home.

3. Supply Chain Transparency (Agriculture & Mining)

Agriculture and mining are Zambia’s economic backbones, but inefficiencies and lack of transparency hinder growth. Ontology can:

Enable farm-to-market tracking of crops, ensuring farmers get fair prices and buyers trust product origins.
Provide traceability in copper and gemstone mining, reducing smuggling and boosting global market confidence.
Help cooperatives and SMEs access financing by proving their transaction history and supply chain credibility via blockchain records.

4. E-Government & Service Delivery

The Zambian government aims to digitize public services. Ontology Blockchain could:

Power secure land registries, reducing disputes and fraud.
Create tamper-proof records for civil registration (births, deaths, marriages).
Support digital voting systems that are transparent, verifiable, and resistant to manipulation.
Improve public procurement processes by reducing corruption through transparent contract tracking.

5. Education & Skills Development

Certificates and qualifications are often hard to verify in Zambia. Ontology offers:

Blockchain-based education records: universities and colleges can issue tamper-proof digital diplomas.
A verifiable skills database that employers and training institutions can trust.
Empowerment of youth in blockchain and Web3 development, opening new economic opportunities.

6. Data Security & Trust in the Digital Economy

Zambia’s growing reliance on mobile money and e-commerce requires strong data protection. Ontology brings:

User-controlled data sharing: individuals decide who can access their personal information.
Decentralized identity verification for businesses, preventing fraud in digital transactions.
Strong compliance frameworks to align with Zambia’s Data Protection Act of 2021.

Challenges to Overcome

Digital literacy gaps: Zambian citizens need training to use blockchain-based services.

Regulatory clarity: Zambia must craft clear policies around blockchain and cryptocurrencies.

Infrastructure: reliable internet and mobile access are essential for blockchain adoption.

Conclusion

Ontology Blockchain provides Zambia with more than just a digital ledger — it offers a trust framework for identity, finance, governance, and innovation. By integrating Ontology into key sectors like agriculture, health, mining, and public administration, Zambia can accelerate its journey toward a secure, inclusive, and transparent digital economy.

This is not just about technology; it’s about empowering citizens, building investor confidence, and positioning Zambia as a leader in blockchain innovation in Africa.

How Ontology Blockchain Can Strengthen Zambia’s Digital Ecosystem was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

Putting responsible AI into practice: IDnow’s work on bias mitigation

As part of the EU-funded MAMMOth project, IDnow shows how bias in AI systems can be detected and reduced – an important step toward trustworthy digital identity verification. London, October 30, 2025 – After three years of intensive work, the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project has published key findings
As part of the EU-funded MAMMOth project, IDnow shows how bias in AI systems can be detected and reduced – an important step toward trustworthy digital identity verification.

London, October 30, 2025 – After three years of intensive work, the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project has published key findings on reducing bias in artificial intelligence (AI) systems. Funded by the EU’s Horizon Europe program, the project brought together a consortium of leading universities, research centers, and private companies from across Europe.

IDnow, a leading identity verification platform provider in Europe, was directly involved in the implementation of the project as an industry partner. Through targeted research and testing, an optimized AI model was developed to significantly reduce bias in facial recognition, which is now integrated into IDnow’s solutions.

Combating algorithmic bias in practice

Facial recognition systems that leverage AI are increasingly used for digital identity verification, for example, when opening a bank account or registering for car sharing. Users take a digital image of their face, and AI compares it with their submitted ID photo. However, such systems can exhibit bias, leading to poorer results for certain demographic groups. This is due to the underrepresentation of minorities in public data sets, which can result in higher error rates for people with darker skin tones. 

A study by MIT Media Lab showed just how significant these discrepancies can be: while facial recognition systems had an error rate of only 0.8% for light-skinned men, the error rate for dark-skinned women was 34.7%. These figures clearly illustrate how unbalanced many AI systems are – and how urgent it is to rely on more diverse data. 

As part of MAMMOth, IDnow worked specifically to identify and minimize such biases in facial recognition – with the aim of increasing both fairness and reliability.

Research projects like MAMMOth are crucial for closing the gap between scientific innovation and practical application. By collaborating with leading experts, we were able to further develop our technology in a targeted manner and make it more equitable.

Montaser Awal, Director of AI & ML at IDnow.
Technological progress with measurable impact

As part of the project, IDnow investigated possible biases in its facial recognition algorithm, developed its own approaches to reduce these biases, and additionally tested bias mitigation methods proposed by other project partners.

For example, as ID photos often undergo color adjustments by issuing authorities, skin tone can play a challenging role, especially if the calibration is not optimized for darker skin tones. Such miscalibration can lead to inconsistencies between a selfie image and the person’s appearance in an ID photo.  

To solve this problem, IDnow used a style transfer method to expand the training data, which allowed the model to become more resilient to different conditions and significantly reduced the bias toward darker skin tones.

Tests on public and company-owned data sets showed that the new training method achieved an 8% increase in verification accuracy – while using only 25% of the original training data volume. Even more significantly, the accuracy difference between people with lighter and darker skin tones was reduced by over 50% – an important step toward fairer identity verification without compromising security or user-friendliness. 

The resulting improved AI model was integrated into IDnow’s identity verification solutions in March 2025 and has been in use ever since.

Setting the standard for responsible AI

In addition to specific product improvements, IDnow plans to use the open-source toolkit MAI-BIAS developed in the project in internal development and evaluation processes. This will allow fairness to be comprehensively tested and documented before new AI models are released in the future – an important contribution to responsible AI development. 

“Addressing bias not only strengthens fairness and trust, but also makes our systems more robust and adoptable,” adds Montaser Awal. “This will raise trust in our models and show that they work equally reliably for different user groups across different markets.”

Wednesday, 29. October 2025

Ocean Protocol

Ocean Protocol: Q4 2025 Update

A look at what the Ocean core team has built, and what’s to come · 1. Introduction · 2. Ocean Nodes: from Foundation to Framework · 3. Annotators Hub: Community-driven data annotations · 4. Lunor: Crowdsourcing Intelligence for AI · 5. Predictoor and DeFi Trading · 6. bci/acc: accelerate brain-computer interfaces towards human superintelligence · 7. Conclusion 1. Introduction Back in June,
A look at what the Ocean core team has built, and what’s to come

· 1. Introduction
· 2. Ocean Nodes: from Foundation to Framework
· 3. Annotators Hub: Community-driven data annotations
· 4. Lunor: Crowdsourcing Intelligence for AI
· 5. Predictoor and DeFi Trading
· 6. bci/acc: accelerate brain-computer interfaces towards human superintelligence
· 7. Conclusion

1. Introduction

Back in June, we shared the Ocean Protocol Product Update half-year check-in for 2025 where we outlined the progress made across Ocean Nodes, Predictoor, and other Ocean ecosystem initiatives. This post is a follow-up, highlighting the major steps taken since then and what’s next as we close out 2025.

We’re heading into the final stretch of 2025, so it’s only fitting to have a look over what the core team has been working on and what is soon to be released. Ocean Protocol was built to level the playing field for AI and data. From day one, the vision has been to make data more accessible, AI more transparent, and infrastructure more open. The Ocean tech stack is built for that mission: to combine decentralized compute, smart contracts, and open data marketplaces to help developers, researchers, and companies tap into the true potential of AI.

This year has been about making that mission real. Here’s how:

2. Ocean Nodes: from Foundation to Framework

Since the launch of Ocean Nodes in August 2024, the Ocean community has shown what’s possible when decentralized infrastructure meets real-world ambition. With over 1.7 million nodes deployed across 70+ countries, the network has grown far beyond expectations.

Throughout 2025, the focus has been on reducing friction, boosting usability, and enabling practical workflows. A highlight: the release of the Ocean Nodes Visual Studio Code extension. It lets developers and data scientists run compute jobs directly from their editor — free (within defined parameters), fast, and frictionless. Whether they’re testing algorithms or prototyping dApps, it’s the quickest path to real utility. The extension is now available on the VS Code Marketplace, as well as in Cursor and other distributions, via the Open VSX registry.

We’ve also seen strong momentum from partners like NetMind and Aethir, who’ve helped push GPU-ready infrastructure into the Ocean stack. Their contribution has paved the way for Phase 2, a major upgrade that the core team is still actively working on and that’s set to move the product from PoC to real production-grade capabilities.

That means:

Compute jobs that actually pay, with a pay-as-you-go system in place
Benchmarking GPU nodes to shape a fair and scalable reward model
Real-world AI workflows: from model training to advanced evaluation

And while Phase 2 is still in active development, it’s now reached a stage where user feedback is needed. To get there, we’ve launched the Alpha GPU Testers program, for a small group of community members to help us validate performance, stability and reward mechanics across GPU nodes. Selected participants simply need to set their GPU node up and make it available for the core team to run benchmark tests. As a thank-you for their effort and uptime, each successfully tested node will receive a $100 reward.

Key information:

Node selection: Oct 24–31, 2025
Benchmark testing: Nov 3–17, 2025
Reward: $100 per successfully tested node
Total participants: up to 15, on a first-come, first-served basis. Only one node per owner is allowed.

With Phase 2 of Ocean Nodes, we will be laying the groundwork for something even bigger: the Ocean Network. Spoiler alert: it will be a peer-to-peer AI Compute-as-a-Service platform designed to make GPU infrastructure accessible, affordable, and censorship-resistant for anyone who needs it.

More details on the transition are coming soon. But if you’re running a node, building on Ocean, or following along, you’re already part of it.

What else have we launched?

3. Annotators Hub: Community-driven data annotations

Current challenge: CivicLens, ends on Oct 31, 2025

AI doesn’t work without quality data. And creating it still is a huge bottleneck. That’s why we’ve launched the Annotators Hub: a structured, community-driven initiative where contributors help evaluate and shape high-quality datasets through focused annotation challenges.

The goal is to improve AI performance by improving what it learns from: the data. High-quality annotations are the foundation for reliable, bias-aware, and domain-relevant models. And Ocean is building the tools and processes to make that easier, more consistent, and more inclusive.

Human annotations remain the single most effective way to improve AI performance, especially in complex domains like education and politics. By contributing to the Annotators Hub, Ocean community members directly help build better models that can power adaptive tutors, improve literacy tools, and even make political discourse more accessible.

For example, LiteracyForge, the first challenge, run in collaboration with Lunor.ai, focused on improving adaptive learning systems by collecting high-quality evaluations of reading comprehension material. The aim: to train AI that better understands question complexity and supports literacy tools. Here are a few highlights, as the challenge is currently being evaluated:

49,832 total annotations submitted
19,973 unique entries
147 annotators joined throughout the three weeks of the first challenge
17,581 double-reviewed annotations

The second challenge will finish in just two days, on Friday, October 31. This time we’re analyzing speeches from the European Parliament, to help researchers, civic organizations, and the general public better understand political debates, predict voting behavior, and make parliamentary discussions more transparent and accessible. There’s still time to jump in and become an annotator.

Yes, this initiative can be seen as a “launchpad” for a marketplace with ready-to-use, annotated data, designed to give everyone access to training-ready data that meets real-world quality standards. But more on that in an upcoming blog post.

As we get closer to the end of 2025, we’re doubling down on utility, usability, and adoption. The next phase is about scale and about creating tangible ways for Ocean’s community to contribute, earn, and build.

4. Lunor: Crowdsourcing Intelligence for AI

Lunor is building a crowdsourced intelligence ecosystem where anyone can co-create, co-own, and monetize Intelligent Systems. As one of the core projects within the Ocean Ecosystem, Lunor represents a new approach to AI, one where the community drives both innovation and ownership.

Lunor’s achievements so far, together with Ocean Protocol, include:

Over $350,000 in rewards distributed from the designated Ocean community wallet
More than 4,000 contributions submitted
38 structured data and AI quests completed

Assets from Lunor Quests are published on the Ocean stack, while future integration with Ocean nodes will bring private and compliant Compute-to-Data for secure model training.

Together with Ocean, Lunor has hosted quests like LiteracyForge, showcasing how open collaboration can unlock high-quality data and AI for education, sustainability, and beyond.

5. Predictoor and DeFi Trading

About Predictoor. In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. The “earn $” part is key, because it fosters usage.

Predictoor involves two groups of people:

Predictoors: data scientists who use AI models to predict what the price of ETH, BTC, etc. will be 5 (or 60) minutes into the future. The scientists run bots that submit these predictions onto the chain every 5 minutes. Predictoors earn $ based on sales of the feeds, including sales from Ocean’s Data Farming incentives program.
Traders: run bots that input predictoors’ aggregated predictions, to use as alpha in trading. It’s another edge for making $ while trading.
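
For readers who want to picture the moving parts, here is a heavily simplified, hypothetical sketch of the loop a prediction bot might run. It is not Ocean’s actual Predictoor SDK: the model, feed, and wallet objects are stand-ins for whatever the real tooling provides.

```python
import time

EPOCH_SECONDS = 300  # Predictoor feeds work on 5-minute epochs


def run_predictoor_bot(model, feed, wallet, stake_amount: float) -> None:
    """Hypothetical loop: every 5 minutes, predict up/down and stake OCEAN on the prediction."""
    while True:
        candles = feed.recent_candles()            # recent price history for ETH, BTC, ...
        prob_up = model.predict_prob_up(candles)   # model's probability that the price goes up
        feed.submit_prediction(
            direction="up" if prob_up >= 0.5 else "down",
            stake=stake_amount,                    # staked OCEAN can be slashed if the call is wrong
            wallet=wallet,
        )
        time.sleep(EPOCH_SECONDS)                  # wait for the next 5-minute epoch
```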

Predictoor is built using the Ocean stack. And, it runs on Oasis Sapphire; we’ve partnered with the Oasis team.

Predictoor traction. Since mainnet launch in October 2023, Predictoor has accumulated about $2B total volume. [Source: DappRadar]. Furthermore, in spring 2025, our collaborators at Oasis launched WT3, a decentralized, verifiable trading agent that uses Predictoor feeds for its alpha.

Predictoor past, present, future. After the Predictoor product and rewards program were launched in fall 2023, the next major goal was “traders to make serious $”. If that is met, then traders will spend $ to buy feeds, which leads to serious $ for predictoors. The Predictoor team worked towards this primary goal throughout 2024, testing trading strategies with real $. Bonus side effects of this were improved analytics and tooling.

Obviously “make $ trading” is not an easy task. It’s a grind taking skill and perseverance. The team has ratcheted, inching ever-closer to making money. Starting in early 2025, the live trading algorithms started to bear fruit. The team’s 2025 plan was — and is — keep grinding, towards the goal “make serious $ trading”. It’s going well enough that there is work towards a spinoff. We can expect trading work to be the main progress in Predictoor throughout 2025. Everything else in Predictoor and related will follow.

6. bci/acc: accelerate brain-computer interfaces towards human superintelligence

Another stream in Ocean has been taking form: bci/acc. Ocean co-founder Trent McConaghy first gave a talk on bci/acc at NASA in Oct 2023, and published a seminal blog post on it a couple months later. Since then, he’s given 10+ invited talks and podcasts, including Consensus 2025 and Web3 Summit 2025.

bci/acc thesis. AI will likely reach superintelligence in the next 2–10 years. Humanity needs a competitive substrate. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses: bci/acc. How do we make it happen? We’ll need BCI killer apps like silent messaging to create market demand, which in turn drive BCI device evolution. The net result is human superintelligence.

Ocean bci/acc team. In January 2025, Ocean assembled a small research team to pursue bci/acc, with the goal to create BCI killer apps that it can take to market. The team has been building towards this ever since: working with state-of-the-art BCI devices, constructing AI-data pipelines, and running data-gathering experiments. Ocean-style decentralized access control will play a role, as neural signals are perhaps the most private data of all: “not your keys, not your thoughts”. In line with Ocean culture and practice, we look forward to sharing more details once the project has progressed to tangible utility for target users.

7. Conclusion

2025 has been a year of turning vision into practice. From Predictoor’s trading traction and Ocean Nodes’ push into a GPU-powered Phase 2 to the launch of the Annotators Hub, with ecosystem projects like Lunor driving community-led AI forward, the pieces of the Ocean vision are falling into place.

The focus is clear for the Ocean core team in Q4: scale, usability, and adoption. Thanks for being part of it. The best is yet to come.

Ocean Protocol: Q4 2025 Update was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

A New Chapter for ONG: Governance Vote on Tokenomics Adjustment

Ontology’s token economy has always been designed to evolve alongside the network. This week, that evolution takes another step forward. A new governance proposal has been initiated by an Ontology Consensus Node, calling on all Triones nodes to vote on an update to ONG tokenomics. The update aims to strengthen the foundation for long-term sustainability and fairer incentives across the ecosystem.

Ontology’s token economy has always been designed to evolve alongside the network. This week, that evolution takes another step forward. A new governance proposal has been initiated by an Ontology Consensus Node, calling on all Triones nodes to vote on an update to ONG tokenomics. The update aims to strengthen the foundation for long-term sustainability and fairer incentives across the ecosystem.

Voting will take place on OWallet from October 28, 2025 (00:00 UTC) through October 31, 2025 (00:00 UTC).

Understanding the Current Model

Let’s start with where things stand today.

Total ONG Supply: 1 billion
Total Released: ≈ 450 million (≈ 430 million circulating)
Annual Release: ≈ 31.5 million ONG
Release Curve: All ONG unlocked over 18 years. The remaining 11 years follow a mixed release pace: 1 ONG per second for 6 years, then 2, 2, 2, 3, and 3 ONG per second in the final 5 years.

Currently, both unlocked ONG and transaction fees flow back to ONT stakers as incentives, generating an annual percentage rate of roughly 23 percent at current prices.

What the Proposal Changes

The new proposal suggests several key adjustments to rebalance distribution and align long-term incentives:

Cap the total ONG supply at 800 million.
Lock ONT and ONG equivalent to 100 million ONG in value, effectively removing them from circulation.
Strengthen staker rewards and ecosystem growth by making the release schedule steadier and liquidity more sustainable.

Implementation Plan

1. Adjust the ONG Release Curve

Total supply capped at 800 million.
Release period extended from 18 to 19 years.
Maintain a 1 ONG per second release rate for the remaining years.

2. Allocation of Released ONG

80 percent directed to ONT staking incentives.
20 percent, plus transaction fees, contributed to ecological liquidity.

3. Swap Mechanism

Use ONG to acquire ONT within a defined fluctuation range.
Pair the two tokens to create liquidity and receive LP tokens.
Burn the LP tokens to permanently lock both ONG and ONT, tightening circulating supply.

Community Q & A

Q1. How long will the ONT + ONG (worth 100 million ONG) be locked?

It’s a permanent lock.

Q2. Why does the total ONG supply decrease while the release period increases?

Under the current model, release speeds up in later years. This proposal keeps the rate fixed at 1 ONG per second, so fewer tokens are released overall but over a slightly longer span — about 19 years in total.
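
A rough back-of-the-envelope check, using the approximate figures quoted above and therefore only indicative:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600            # ≈ 31.5 million seconds
ANNUAL_RELEASE = 1 * SECONDS_PER_YEAR         # 1 ONG per second ≈ 31.5M ONG per year

NEW_CAP = 800_000_000
ALREADY_RELEASED = 450_000_000                # approximate figure from the proposal

remaining_ong = NEW_CAP - ALREADY_RELEASED    # ≈ 350M ONG still to be released
years_left = remaining_ong / ANNUAL_RELEASE   # ≈ 11.1 years at the fixed rate
print(round(years_left, 1))
```

At the fixed rate, the roughly 350 million ONG remaining under the new cap corresponds to a little over 11 more years of emission, which is broadly consistent with the stated total of about 19 years once the years already elapsed and the rounding in the released figure are taken into account.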

Q3. Will this affect ONT staking APY?

It may, but not necessarily negatively. While staking rewards in ONG drop 20 percent, APY depends on market prices of ONT and ONG. If ONG appreciates as expected, overall returns could remain steady or even rise.

Q4. How does this help the Ontology ecosystem?

Capping supply at 800 million and permanently locking 100 million ONG will make ONG scarcer. With part of the released ONG continuously swapped for ONT to support DEX liquidity, the effective circulating supply may fall to around 750 million. That scarcity, paired with new products consuming ONG, could strengthen price dynamics and promote sustainable network growth. More on-chain activity would also mean stronger rewards for stakers.

Q5. Who can vote, and how?

All Triones nodes have the right to vote through OWallet during the official voting window.

Why It Matters

This isn’t just a supply adjustment. It’s a structural change designed to balance reward distribution, liquidity, and governance in a way that benefits both the Ontology network and its long-term participants.

Every vote counts. By joining this governance round, Triones nodes have a direct hand in shaping how value flows through the Ontology ecosystem — not just for today’s staking cycle, but for the years of decentralized growth ahead.

A New Chapter for ONG: Governance Vote on Tokenomics Adjustment was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Ping YOUniverse 2025: Resilient Trust in Motion

Ping YOUniverse 2025 traveled to Sydney, Melbourne, Singapore, Jakarta, Austin, London, and Amsterdam. Read the highlights of our global conference, and see how identity, AI, and Resilient Trust took center stage.

Identity is moving fast: AI agents, new fraud patterns, and tightening regulations are reshaping the identity landscape under our feet. At Ping YOUniverse 2025, thousands of identity leaders, customers, and partners came together to confront this dramatic shift.

We compared notes on what matters now:

Stopping account takeover without killing conversion, so security doesn’t tax your revenue engine.

Orchestrating trust signals across apps and partners, so decisions get smarter everywhere.

Shrinking risk and cost with just-in-time access, so the right access appears, and disappears, on demand.

This recap distills the most useful takeaways for you: real-world use cases, technical demos within our very own Trust Lab, and deep-dive presentations from partners like Deloitte, AWS, ProofID, Midships, Versent, and more, plus guest keynotes from former Secretary General of Interpol Dr. Jürgen Stock and cybersecurity futurist Heather Vescent. It’s all unified by a single theme: Resilient Trust isn’t a moment. It’s a mindset.

Tuesday, 28. October 2025

Spherical Cow Consulting

Can Standards Survive Trade Wars and Sovereignty Battles?

For decades, standards development has been anchored in the idea that the Internet is (and should be) one global network. If we could just get everyone in the room—vendors, governments, engineers, and civil society—we could hash out common rules that worked for all. The post Can Standards Survive Trade Wars and Sovereignty Battles? appeared first on Spherical Cow Consulting.

“For decades, standards development has been anchored in the idea that the Internet is (and should be) one global network. If we could just get everyone in the room—vendors, governments, engineers, and civil society—we could hash out common rules that worked for all.”

That premise is a lovely ideal, but it no longer reflects reality. The Internet isn’t collapsing, but it is fragmenting: tariffs, digital sovereignty drives, export controls, and surveillance regimes all chip away at the illusion of universality. Standards bodies that still aim for global consensus risk paralysis. And yet, walking away from standards altogether isn’t an option.

The real question isn’t whether we still need standards. The question is how to rethink them for a world that is fractured by design.

This is the fourth of a four-part series on what the Internet will look like for the next generation of people on this planet.

First post: “The End of the Global Internet”
Second post: “Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet”
Third post: “The People Problem: How Demographics Decide the Future of the Internet”
Fourth post: [this one]


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Global internet, local rulebooks

If you look closely, today’s Internet is already less one global network and more a patchwork quilt of overlapping, sometimes incompatible regimes.

Europe pushes digital sovereignty and data protection rules, with eIDAS2 and the AI Act setting global precedents.
The U.S. leans on export controls and sanctions, using access to chips and cloud services as levers of influence.
China has doubled down on domestic control, firewalling traffic and setting its own technical specs.
Africa and Latin America are building data centers and digital ID schemes to reduce dependence on foreign providers, while still trying to keep doors open for trade and investment.

Standards development bodies now live in this reality. The old model, where universality was the goal and compromise was the method, is harder to sustain. If everyone insists on their own priorities, consensus stalls. But splintering into incompatible systems isn’t viable either. Global supply chains, cross-border research, and the resilience of communications all require at least a shared baseline.

The challenge is to define what “interoperable enough” looks like.

The cost side is getting heavier

The incentives for participation in global standards bodies used to be relatively clear: access to markets, influence over technical direction, and reputational benefits. Today, the costs of cross-border participation have gone up dramatically.

Trade wars have re-entered the picture. The U.S. has imposed sweeping tariffs on imports from China and other countries, hitting semiconductors and electronics with rates ranging from 10% to 41%. These costs ripple across supply chains. On top of tariffs, the U.S. has restricted exports of advanced chips and AI-related hardware to China. The uncertainty of licensing adds compliance overhead and forces firms to hedge.

Meanwhile, the “China + 1” strategy—where companies diversify sourcing away from over-reliance on China—comes with a hefty price tag. Logistics get more complex, shipping delays grow, and firms often hold more inventory to buffer against shocks. A 2025 study estimated these frictions alone cut industrial output by over 7% and added nearly half a percent to inflation.

And beyond tariffs or logistics, transparency and compliance laws add their own burden. The U.S. Corporate Transparency Act requires firms to disclose beneficial ownership. Germany’s Transparency Register and Norway’s Transparency Act impose similar obligations, with Norway’s rules extending to human-rights due diligence.

The result is that companies are paying more just to maintain cross-border operations. In that climate, the calculus for standards shifts. “Do we need this standard?” becomes “Is the payoff from this standard enough to justify the added cost of playing internationally?”

When standards tip the scales

The good news is that standards can offset some of these costs when they come with the right incentives.

One audit, many markets. Standards that are recognized across borders save money. If a product tested in one region is automatically accepted in another, firms avoid duplicative testing fees and time-to-market shrinks.

Case study: the European Digital Identity Wallet (EUDI). In 2024, the EU adopted a reform of the eIDAS regulation that requires all Member States to issue a European Digital Identity Wallet and mandates cross-border recognition of wallets issued by other states. The premise here is that if you can prove your identity using a wallet in France, that same credential should be accepted in Germany, Spain, or Italy without new audits or registrations.

The incentives are potentially powerful. Citizens gain convenience by using one credential for many services. Businesses reduce onboarding friction across borders, from banking to telecoms. Governments get harmonized assurance frameworks while retaining the ability to add national extensions. Yes, the implementation costs are steep—wallet rollouts, legal alignment, security reviews—but the payoff is smoother digital trade and service delivery across a whole bloc.

Regulatory fast lanes. Governments can offer “presumption of conformity” when products follow recognized standards. That reduces legal risk and accelerates procurement cycles.

Procurement carrots. Large buyers, both public and private, increasingly bake interoperability and security standards into tenders. Compliance isn’t optional; it’s the ticket to compete.

Risk transfer. Demonstrating that you followed a recognized standard can reduce penalties after a breach or compliance failure. In practice, standards act as a form of liability insurance.

Flexibility in a fractured market. A layered approach—global minimums with regional overlays—lets companies avoid maintaining entirely separate product lines. They can ship one base product, then configure for sovereignty requirements at the edges.

When incentives aren’t enough

Of course, there are limits to how far incentives can stretch. Sometimes the costs simply outweigh the benefits.

Consider a market that imposes steep tariffs on imports while also requiring its own unique technical standards, with no recognition of external certifications. In such a case, the incentive of “one audit, many markets” collapses. Firms face a choice between duplicating compliance efforts, forking product lines, or withdrawing from the market entirely.

Similarly, rules of origin can blunt the value of global standards. Even if a product complies technically, it may still fail to qualify for preferential access if its components are sourced from disfavored regions. Political volatility adds another layer of uncertainty. The back-and-forth implementation of the U.S. Corporate Transparency Act illustrates how compliance obligations can change rapidly, leaving firms unable to plan long-term around standards incentives.

These realities underscore an uncomfortable truth: incentives alone cannot overcome every cost. Standards must be paired with trade policies, recognition agreements, and regulatory stability if they are to deliver meaningful relief. Technology is not enough.

How standards bodies must adapt

It’s easy enough to say “standards still matter.” What’s harder is figuring out how the institutions that make those standards need to change. The pressures of a fractured Internet aren’t just technical. They’re geopolitical, economic, and regulatory. That means standards bodies can’t keep doing business as usual. They need to adapt on two fronts: process and scope.

Process: speed, modularity, and incentives

The traditional model of consensus-driven standards development assumes time and patience are plentiful. Groups grind away until they’ve achieved broad agreement. In today’s climate, that often translates to deadlock. Standards bodies need to recalibrate toward a “minimum viable consensus” that offers enough agreement to set a global baseline, even if some regions add overlays later.

Speed also matters. When tariffs or export controls can be announced on a Friday and reshape supply chains by Monday, five-year standards cycles are untenable. Bodies need mechanisms for lighter-weight deliverables: profiles, living documents, and updates that track closer to regulatory timelines.

And then there’s participation. Costs to attend international meetings are rising, both financially and politically. Without intervention, only the biggest vendors and wealthiest governments will show up. That’s why initiatives like the U.S. Enduring Security Framework explicitly recommend funding travel, streamlining visa access, and rotating meetings to more accessible locations. If the goal is to keep global baselines legitimate, the doors have to stay open to more than a handful of actors.

Scope: from universality to layering

Just as important as process is deciding what actually belongs in a global standard. The instinct to solve every problem universally is no longer realistic. Instead, standards bodies need to embrace layering. At the global level, focus on the minimums: secure routing, baseline cryptography, credential formats. At the regional level, let overlays handle sovereignty concerns like privacy, lawful access, or labor requirements.

This shift also means expanding scope beyond “pure technology.” Standards aren’t just about APIs and message formats anymore; they’re tied directly to procurement, liability, and compliance. If a standard can’t be mapped to how companies get through audits or how governments accept certifications, it won’t lower costs enough to be worth the trouble.

Finally, standards bodies must move closer to deployment. A glossy PDF isn’t sufficient if it doesn’t include reference implementations, test suites, and certification paths. Companies need ways to prove compliance that regulators and markets will accept. Otherwise, the promise of “interoperability” remains theoretical while costs keep mounting.

The balance

So is it process or scope? The answer is both. Process has to get faster, more modular, and more inclusive. Scope has to narrow to what can truly be global while expanding to reflect regulatory and economic realities. Miss one side of the equation, and the other can’t carry the weight. Get them both right, and standards bodies can still provide the bridges we desperately need in a fractured world.

A layered model for fractured times

So what might a sustainable approach look like? I expect the future will feature layered models rather than a universal one.

At the bottom of this new stack are the baseline standards for secure software development, routing, and digital credential formats. These don’t attempt to satisfy every national priority, but they keep the infrastructure interoperable enough to enable trade, communication, and research.

On top of that baseline are regional overlays. These extensions allow regions to encode sovereignty priorities, such as privacy protections in Europe, lawful access in the U.S., or data localization requirements in parts of Asia. The overlays are where politics and local control find their expression.

This design isn’t neat or elegant. But it’s pragmatic. The key is ensuring that overlays don’t erode the global baseline. The European Digital Identity Wallet is a good example: the baseline is cross-border recognition across EU states, while national governments can still add extensions that reflect their specific needs. The balance isn’t perfect, but it shows how interoperability and sovereignty can coexist if the model is layered thoughtfully.

What happens if standards fail

It’s tempting to imagine that if standards bodies stall, the market will simply route around them. But the reality of a fractured Internet is far messier. Without viable global baselines, companies retreat into regional silos, and the costs of compliance multiply. This section is the stick to go with the carrots of incentives.

If standards fail, cross-border trade slows as every shipment of software or hardware has to be retested for each jurisdiction. Innovation fragments as developers build for narrow markets instead of global ones, losing economies of scale. Security weakens as incompatible implementations open new cracks for attackers. And perhaps most damaging, trust erodes: governments stop believing that interoperable solutions can respect sovereignty, while enterprises stop believing that global participation is worth the cost.

The likely outcome is not resilience, but duplication and waste. Firms will maintain redundant product lines, governments will fund overlapping infrastructures, and users will pay the bill in the form of higher prices and poorer services. The Internet won’t collapse, but it will harden into a collection of barely connected islands.

That’s why standards bodies cannot afford to drift. The choice isn’t between universal consensus and nothing. The choice is between layered, adaptable standards that keep the floor intact or a slow grind into fragmentation that makes everyone poorer and less secure.

Closing thought

The incentives versus cost tradeoff is not a side issue in standards development. It is the issue. The technical community must accept that tariffs, sovereignty, and compliance aren’t temporary distractions but structural realities.

The key question to ask about any standard today is simple: Does this make it cheaper, faster, or less risky to operate across borders? If the answer is yes, the standard has a future. If not, it risks becoming another paper artifact, while fragmentation accelerates.

Now I have a question for you: in your market, do the incentives for adopting bridge standards outweigh the mounting costs of tariffs, export controls, and compliance regimes? Or are we headed for a world where regional overlays dominate and the global floor is paper-thin?

If you’d rather get a notification when a new blog is published than hope to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

[00:00:29] Welcome back to A Digital Identity Digest.

Today, I’m asking a big question that’s especially relevant to those of us working in technical standards development:

Can standards survive trade wars and sovereignty battles?

For decades, the story of Internet standards seemed fairly simple — though never easy:
get the right people in the room, hammer out details, and eventually end up with rules that worked for everyone.

[00:00:58] The Internet was one global network, and standards reflected that vision.

[00:01:09] That story, however, is starting to fall apart.
We’re not watching the Internet collapse, but we are watching it fragment — and that fragmentation carries real consequences for how standards are made, adopted, and enforced.

[00:01:21] In this episode, we’ll explore:

Why the cost of participating in global standards has gone up
How incentives can still make standards development worthwhile
What happens when those incentives fall short
And how standards bodies need to adapt to stay relevant

[00:01:36] So, let’s dive in.

The Fragmenting Internet

[00:01:39] When the Internet first spread globally, it seemed like one big network — or at least, one big concept.

[00:01:55] But that’s not quite true anymore.

Let’s take a few regional examples.

Europe has leaned heavily into digital sovereignty, with rules like GDPR, the AI Act, and the updated eIDAS regulation. Their focus is clear: privacy and sovereignty come first.
The United States takes a different tack, using export controls and sanctions as tools of influence, with access to semiconductors and cloud services as leverage in its geopolitical strategy.
China has gone further, building its own technical standards and asserting domestic control over traffic and infrastructure.
Africa and Latin America are investing in local data centers and digital identity schemes, aiming to reduce dependency while keeping doors open for global trade and investment.

[00:02:46] When every region brings its own rulebook, global consensus doesn’t come easily.
Bodies like ISO, ITU, IETF, or W3C risk stalling out.

Yet splintering into incompatible systems is also costly:

It disrupts supply chains
Slows research collaborations
And fractures global communications

[00:03:31] So let’s start by looking at what all of this really costs.

The Rising Cost of Participation

[00:03:35] Historically, incentives for joining standards efforts were clear:

Influence technology direction
Ensure interoperability
Build goodwill as a responsible actor

[00:03:52] But that equation is changing.

Take tariffs, for example.

U.S. tariffs on imports from China and others now range from 10% to 41% on semiconductors and electronics. Export controls restrict the flow of advanced chips, reshaping entire markets. Companies face new costs: redesigning products, applying for licenses, and managing uncertainty.

[00:04:33] Add in supply chain rerouting — the so-called “China Plus One” strategy — and you get:

More complex logistics
Longer delays
Higher inventory buffers

Recent studies show these frictions cut industrial output by over 7% and add 0.5% to inflation.

[00:04:58] It’s not just the U.S. — tariffs are now a global trend.

Then there are transparency laws, like:

The U.S. Corporate Transparency Act
Germany’s Transparency Register
Norway’s Transparency Act, which even mandates human rights due diligence

[00:05:33] The result?
The baseline cost of cross-border operations is rising — forcing companies to ask if global standards participation is still worth it.

Why Standards Still Matter

[00:05:50] So, why bother with standards at all?

Because well-designed standards can offset many of these costs.

[00:05:56] Consider the power of recognition.
If one region accepts a product tested in another, companies save on duplicate testing and reach markets faster.

[00:06:07] A clear example is the European Digital Identity Wallet (EUDI Wallet).

In 2024, the EU updated eIDAS to:

Require each member state to issue a European Digital Identity Wallet
Mandate mutual recognition between member states

This means:

A wallet verified in France also works in Germany or Spain
Citizens gain convenience
Businesses reduce onboarding friction
Governments maintain a harmonized baseline with room for local adaptation

[00:06:56] Though rollout costs are high — covering legal alignment, wallet development, and security testing — the payoff is smoother digital trade.

Beyond recognition, strong standards also offer:

Regulatory fast lanes: Reduced legal risk when products follow recognized standards
Procurement advantages: Interoperability requirements in public tenders
Risk transfer: Accepted standards can serve as a partial defense after incidents

[00:07:34] In effect, standards can act as liability insurance.

[00:07:41] But not all incentives outweigh the costs.
When countries insist on unique local standards without mutual recognition, “one audit, many markets” collapses.

[00:08:05] Companies duplicate compliance, fork product lines, or leave markets.
Rules of origin and political volatility add further uncertainty.

[00:08:44] So yes — standards can tip the scales, but they can’t overcome every barrier.

The Changing Role of Standards Bodies

[00:08:54] Saying “standards still matter” is one thing — ensuring their institutions adapt is another.

[00:09:02] The pressures shaping today’s Internet are not just technical but geopolitical, economic, and regulatory.

That means standards bodies must evolve in two key ways:

Process adaptation
Scope adaptation

[00:09:19] The old “everyone must agree” consensus model now risks deadlock.
Bodies need to move toward a minimum viable consensus — enough agreement to set a baseline, even if regional overlays come later.

[00:09:39] Increasingly, both state and corporate actors exploit the process to delay progress.
Meanwhile, when trade policies change in months, a five-year standards cycle is useless.

[00:10:16] Standards organizations must embrace:

Lighter deliverables
Living documents
Faster updates aligned with regulatory change

[00:10:32] Participation costs are another barrier.
If only the richest governments and companies can attend meetings, legitimacy suffers.

Efforts like the U.S. Enduring Security Framework, which supports broader participation, are essential.

[00:11:10] Remote participation helps — but it’s not enough.
In-person collaboration still matters because trust is built across tables, not screens.

Rethinking Scope and Relevance

[00:11:31] Scope matters too.

Standards bodies should embrace layering:

Global level: focus on secure routing, baseline cryptography, credential formats
Regional level: handle sovereignty overlays such as privacy, lawful access, and labor rules

[00:11:55] Moreover, the scope must expand beyond technology to include:

Procurement
Liability
Compliance

If standards don’t reduce costs in these areas, they won’t gain traction — no matter how elegant they look in PDF form.

[00:12:12] Standards also need to move closer to deployment:

Include reference implementations
Provide test suites
Define certification paths that regulators will accept

Without these, interoperability remains theoretical while costs keep rising.

[00:12:53] Ultimately, this is both a process problem and a scope problem.
Processes must be faster and more inclusive.
Scopes must be realistic and economically relevant.

The Risk of Fragmentation

[00:13:11] Some argue that if standards bodies stall, the market will route around them.
But a fractured Internet is messy:

Cross-border trade slows under multiple testing regimes
Innovation fragments into narrow regional silos
Security weakens as incompatible implementations open new vulnerabilities

[00:13:45] And perhaps worst of all, trust erodes.
Governments lose faith in interoperability; companies question the value of participation.

[00:13:55] The outcome isn’t resilience — it’s duplication, waste, and higher costs.

[00:14:07] The Internet won’t disappear, but it risks hardening into isolated digital islands.
That’s why standards bodies can’t afford drift.

[00:14:26] The real choice is between:

Layered, adaptable standards that maintain a shared baseline
Or a slow grind into fragmentation that makes everyone poorer and less secure

Wrapping Up

[00:14:38] The incentives-versus-cost trade-off is no longer a side note in standards work — it’s the core issue.

Tariffs, sovereignty, and compliance regimes aren’t temporary distractions.
They’re structural realities shaping the future of interoperability.

[00:14:52] The key question for any new standard is:

Does this make it cheaper, faster, or less risky to operate across borders?

If yes — that standard has a future.
If no — it risks becoming another PDF gathering dust while fragmentation accelerates.

[00:15:03] With that thought — thank you for listening.

I’d love to hear your perspective:

Do incentives for adopting bridge standards outweigh the rising costs of sovereignty battles? Or are we headed toward a world of purely regional overlays?

[00:15:37] Share your thoughts, and let’s keep this conversation going.

[00:15:48] That’s it for this week’s Digital Identity Digest.

If this episode helped clarify or inspire your thinking, please:

Share it with a friend or colleague
Connect with me on LinkedIn @hlflanagan
Subscribe and leave a rating on Apple Podcasts or wherever you listen

[00:16:00] You can also find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged — and let’s keep the dialogue alive.

The post Can Standards Survive Trade Wars and Sovereignty Battles? appeared first on Spherical Cow Consulting.


Ocean Protocol

Claim 1: The movement of $FET to 30 Different Wallets was allegedly “not right” — Disproven

Part of Series : Dismantling False Allegations, One Claim at a Time By: Bruce Pon Sheikh has claimed that the splitting of the Ocean community treasury across 30 wallets was somehow wrongful. He said this despite knowing that the act of splitting was entirely legitimate, as I explain below. Source: X Spaces — Oct 9, 2025 @BubbleMaps has made this very helpful diagram to
Part of Series: Dismantling False Allegations, One Claim at a Time
By: Bruce Pon

Sheikh has claimed that the splitting of the Ocean community treasury across 30 wallets was somehow wrongful. He said this despite knowing that the act of splitting was entirely legitimate, as I explain below.

Source: X Spaces — Oct 9, 2025

@BubbleMaps has made this very helpful diagram to identify the flows of $FET from the Ocean community wallet (give them a follow):

https://x.com/bubblemaps/status/1980601840388723064

So, what’s the truth behind the distribution of $FET out of a single wallet and into 30 wallets?

Was it, as Sheikh claims, an ill-intentioned action to obfuscate the token flows and “dump” on the ASI community? Absolutely not.

First, it was done out of prudence. With such a significant number of tokens held in a single wallet, the split reduced the risk of the community treasury tokens being hacked or otherwise exposed to bad actors. Clearly, spreading the tokens across 30 wallets greatly reduces the risk of their being hacked or forcefully taken compared to holding them all in a single wallet.

Second, spreading the community treasury tokens across many wallets was something that Fetch and SingularityNET had themselves requested we do, to avoid causing problems with ETF deals they had decided to enter into using $FET.

As presented in the previous “ASI Alliance from Ocean Perspective” blogpost, on Aug 13, 2025, Casiraghi, SingularityNET’s CFO, wrote an email to Ocean Directors, cc’ing Dr. Goertzel and Lake:

In it, he references 8 ETF deals underway with institutional investors and the concern that “the window — is open now” to close these deals.

Immediately after this email, Casiraghi reached out to a member of the Ocean community, explaining that such a large sum of $FET in the Ocean community wallet, which is not controlled by either Fetch or SingularityNET, would raise difficult questions from ETF issuers. Recall that Ocean did not participate in these side deals promoted by Fetch, and was often kept out of the loop, e.g. the TRNR deal.

Casiraghi asked (on behalf of Fetch and SingularityNET) whether, if the $FET in the Ocean community wallet could not be frozen, arrangements could be made to split the $FET tokens across multiple wallets.

Casiraghi explained that if this could be done with the $FET in the Ocean community wallet, Fetch and SingularityNET could plausibly deny the existence of a very large token holder which they had no control over. They could sweep it under the rug and avoid uncomfortable due diligence questions.

On Aug 16, 2025, David Levy of Fetch called me with the same arguments, reasoning, and plea: could Ocean obfuscate the tokens by splitting them across more wallets?

Incidentally, in this call Levy also shared with me, for the first time, the details of the TRNR deal, which alarmed me once I understood the implications (“TRNR” Section §12).

At this juncture, it should be recalled that the Ocean community wallet is under the control of Ocean Expeditions. The Ocean community member who spoke with Casiraghi and I both informed the Ocean Expeditions trustees of this request and reasoning. Thereafter, the Ocean Expeditions trustees decided, as an act of goodwill, to distribute the $FET across 30 wallets as requested by Fetch and SingularityNET.

Turning back to the bigger picture, as a pioneer in the blockchain space, I am obviously well aware that all token movements are absolutely transparent to the world. Any transfers are recorded immutably forever and can be traced easily by anyone with a modicum of basic knowledge. I build blockchains for a living. It is ridiculous to suggest that I or anyone in Ocean could have hoped to “conceal” tokens in this public manner.

A simple act of goodwill and cooperation that was requested by both Fetch and SingularityNET has instead been deliberately blown up by Sheikh, and painted as a malicious act to harm the ASI community.

Sheikh has now used the wallet distribution to launch an all-out assault on Ocean Expeditions and start a manhunt to identify the trustees of the Ocean Expeditions wallet.

Sheikh has wantonly spread lies, libel and misinformation to muddy the waters, construct a false narrative accusing Ocean and its founders of misappropriation, and to incite community sentiment against us.

Sheikh’s accusations and his twisting of the facts to mislead the community are so absurd that they would be laughable, if they were not so dangerous and harmful to the whole community.

Claim 1: The movement of $FET to 30 Different Wallets was allegedly “not right” — Disproven was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 27. October 2025

KILT

KILT Liquidity Incentive Program

We are launching a Liquidity Incentive Program (LIP) to reward Liquidity Providers (LPs) in the KILT:ETH Uniswap pool on Base. The portal can be accessed here: liq.kilt.io For the best experience, desktop/browser use is recommended. Key Features The LIP offers rewards in KILT for contributing to the pool. Rewards are calculated according to the size of your LP and the time for whic

We are launching a Liquidity Incentive Program (LIP) to reward Liquidity Providers (LPs) in the KILT:ETH Uniswap pool on Base.

The portal can be accessed here: liq.kilt.io

For the best experience, desktop/browser use is recommended.

Key Features

The LIP offers rewards in KILT for contributing to the pool.
Rewards are calculated according to the size of your LP and the time for which you have been part of the program.
Your liquidity is not locked in any way; you can add or remove liquidity at any time.
The portal does not take custody of your KILT or ETH; positions remain on Uniswap under your direct control.
Rewards can be claimed after 24hrs, and then at any time of your choosing.

You will need:

KILT (0x5D0DD05bB095fdD6Af4865A1AdF97c39C85ad2d8) on Base
ETH or wETH on Base
An EVM wallet (e.g. MetaMask etc.)

Joining the LIP

Overview

There are two steps to joining the LIP:

1. Add KILT and ETH/wETH to the Uniswap pool in a full-range position. The correct pool is v3 with 0.3% fees. Note that whilst part of the LIP you will continue to earn the usual Uniswap pool fees as well.
2. Register this position on the Liquidity Portal. Your rewards will start automatically.

1) Adding Liquidity

Positions may be created either on Uniswap in the usual way, or directly via the portal. If you choose to create positions on Uniswap then return to the portal afterwards to register them.

To create a position via the portal:

Go to liq.kilt.io and connect your wallet.
Under the Overview tab, you may use the Quick Add Liquidity function.
For more features, go to the Add Liquidity tab where you can choose how much KILT and ETH to contribute.

2) Registering Positions

Once you have created a position, either on Uniswap or via the portal, return to the Overview tab.

Your KILT:ETH positions will be displayed under Eligible Positions. Select your positions and Register them to enroll in the LIP.

Monitoring your Positions and Rewards

Once registered, you can find your positions in the Positions tab. The Analytics tab provides more information, for example your time bonuses and details about each position’s contribution towards your rewards.

Claiming Rewards

Your rewards start accumulating from the moment you register, but the portal may not reflect this immediately. Go to the Rewards tab to view and claim your rewards. Rewards are locked for the first 24hrs, after which you may claim at any time.

Removing Liquidity

Your LP remains 100% under your control; there are no locks or other restrictions and you may remove liquidity at any time. This can be done in the usual way directly on Uniswap. Removing LP will not in any way affect rewards accumulated up to that time, but if you later re-join the program then any time bonuses will have been reset.

How are my Rewards Calculated?

Rewards are based on:

The value of your KILT/ETH position(s).
The total combined value of the pool as a whole.
The number of days your position(s) have been registered.

Rewards are calculated from the moment you register a position, but the portal may not reflect them right away.
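The post does not publish the exact formula, but the three inputs above could combine along the lines of the sketch below: a pro-rata share of a reward budget, scaled by a time bonus. Everything in it (the daily budget, the bonus size, and the ramp period) is a hypothetical placeholder; the portal’s actual calculation may differ.

```python
# Hypothetical illustration only: the Liquidity Portal's real reward formula is not
# published in this post. This sketch shows one way the three stated inputs --
# position value, total pool value, and days registered -- could combine.

def estimated_daily_reward(position_value_usd: float,
                           total_pool_value_usd: float,
                           days_registered: int,
                           daily_reward_budget_kilt: float,
                           max_time_bonus: float = 0.5,
                           bonus_ramp_days: int = 30) -> float:
    """Pro-rata share of a daily KILT budget, scaled by a capped time bonus."""
    pool_share = position_value_usd / total_pool_value_usd
    time_bonus = 1.0 + max_time_bonus * min(days_registered / bonus_ramp_days, 1.0)
    return daily_reward_budget_kilt * pool_share * time_bonus

# Example: a $1,000 position in a $100,000 pool, registered for 10 days,
# against a hypothetical budget of 5,000 KILT per day.
print(round(estimated_daily_reward(1_000, 100_000, 10, 5_000), 2))  # ~58.33 KILT
```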

Need Help?

Support is available in our Telegram group: https://t.me/KILTProtocolChat

-The KILT Foundation

KILT Liquidity Incentive Program was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


1Kosmos BlockID

FedRAMP Moderate Authorization: Why It Matters for Government Security

The post FedRAMP Moderate Authorization: Why It Matters for Government Security appeared first on 1Kosmos.