Elliptic
Keeping investors safe from memecoin scams: How Elliptic automatically detects rug pulls
Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!
“At the IETF 124 meeting in Montréal, I enjoyed quality time in a very small, very crowded side room filled with an unusually diverse mix of people: browser architects, policy specialists, working group chairs, privacy researchers, content creators, and assorted observers who simply care about the future of the web.”
The session, titled Preserving the Open Web, was convened by Mark Nottingham and David Schinazi—people you should follow if you want to understand how technical and policy communities make sense of the Internet’s future.
A week later, at the W3C TPAC meetings in Kobe, Japan, I ended up in an almost identical conversation, hosted again by Mark and David, fortunately in a somewhat larger room. That discussion brought in new faces, new community norms, and a different governance culture, but in both discussions we almost immediately landed on the same question:
What exactly are we trying to preserve when we talk about “the open web”?
For that matter, what is the open web? The phrase appears everywhere—policy documents, standards charters, conference talks, hallway discussions—yet when communities sit down to define it, they get stuck. In Montréal and Kobe, the lack of a shared definition proved to be a practical obstacle. New specifications are being written, new automation patterns are emerging, new economic pressures are forming, and without clarity about what “open” means, even identifying the problem becomes difficult.
This confusion isn’t new. The web has been wrestling with the meaning of “open” for decades.
A Digital Identity Digest: Robots, Humans, and the Edges of the Open Web (00:18:20). You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
How the web began, and why openness mattered
The earliest version of the web was profoundly human in scale and intent. Individuals wrote HTML pages, uploaded them to servers, and connected them with links. Publishing was permissionless. Reading was unrestricted. No identity was required. No subscription was expected. You didn’t need anyone’s approval to build a new tool, host a new site, or modify your browser.
The Web Foundation’s history of the web describes this period as a deliberate act of democratization. Tim Berners-Lee’s original design was intentionally simple, intentionally interoperable, and intentionally open-ended. Anyone should be able to create. Anyone should be able to link. Anyone should be able to access information using tools of their own choosing.
That was the first meaning of “open web”: a world where humans could publish, and humans could read, without needing to ask permission.
Then the robots arrived.
Robots.txt and the first negotiation with machines
In 1994, Martijn Koster created a lightweight mechanism for telling automated crawlers where they should and shouldn’t go: robots.txt. It was never a law; it was a social protocol. A well-behaved robot would read the file and obey it. A rogue one would ignore it and reveal itself by doing so.
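For readers who have never looked inside one, a robots.txt file is almost disarmingly simple. The snippet below is an illustrative example for a hypothetical site, not a quotation from any real one:

```
# Illustrative robots.txt for a hypothetical site
User-agent: *            # applies to every crawler
Disallow: /drafts/       # please stay out of this path

User-agent: ExampleBot   # a specific, named crawler (hypothetical)
Disallow: /              # asked not to crawl anything at all
```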
That tiny file represented the web’s first attempt to articulate boundaries to non-human agents. It formalized a basic idea: openness for humans does not automatically imply openness for robots.
Even back then, openness carried nuance, and it was definitely not the last time the web community tried to define it.
The question keeps returning
One of the earliest modern attempts that I found to define the open web came in 2010, when Tantek Çelik wrote a piece simply titled “What is the Open Web?”. His framing emphasized characteristics rather than purity tests: anyone can publish; anyone can read; anyone can build tools; anyone can link; and interoperability creates more value than enclosure. It is striking how relevant those ideas remain, fifteen years later. These debates aren’t symptoms of crisis; they’re part of the web’s DNA.
The web has always needed periodic recalibration. It has always relied on communities negotiating what matters as technology, economics, and usage patterns change around them. As innovation happens, needs, wants, and desires for the web change.
And now, automation has forced us into another round of recalibration.
Automation came faster than consensus
The modern successors to robots.txt are now emerging in the IETF. One current effort, the AI Preferences Working Group (aipref) aims to provide a structured way for websites to express their preferences around AI training and automated data reuse. It’s an updated version of the old robots.txt promise: here is what we intend; please respect it.
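The working group’s syntax is still being worked out, so the sketch below is purely hypothetical: the directive names are invented for illustration and are not the aipref group’s actual vocabulary. It simply shows the kind of declaration such a mechanism might enable:

```
# Purely hypothetical sketch, not the aipref working group's actual syntax
User-agent: *
AI-Training: disallow    # invented directive: do not use this content to train models
Search-Indexing: allow   # invented directive: indexing for human search is fine
```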
But the scale is different. Search crawlers indexed pages so humans could find them. AI crawlers ingest pages so models can incorporate them. The stakes—legal, economic, creative, and infrastructural—are much higher.
Another effort, the newly chartered WebBotAuth Working Group (webbotauth), attempted to tackle the question of whether and how bots should authenticate themselves. The first meeting at IETF 124 made clear how tangled this space has become. Participants disagreed on what kinds of bots should be differentiated, what behavior should be encouraged or discouraged, and whether authentication is even a meaningful tool for managing the diversity of actors involved. The conversation grew complex (and heated) enough that the chairs questioned whether the group had been chartered before there was sufficient consensus to proceed.
None of this represents failure. It represents something more fundamental:
We do not share a common mental model of what “open” should mean in a web increasingly mediated by automated agents.
And this lack of clarity surfaced again—vividly—one week later in Kobe.
What the TPAC meeting added to the picture
The TPAC session began with a sentiment familiar to anyone who has been online for a while: one of the great gifts of the web is that it democratized information. Anyone could learn. Anyone could publish. Anyone could discover.
But then came the question that cut to the heart of the matter: Are we still living that reality today?
Participants pointed to shifts that complicate the old assumptions—paywalls, subscription bundles, identity gates, regional restrictions, content mediation, and, increasingly, AI agents that read but do not credit or compensate. Some sites once built for humans now pivot toward serving data brokers and automated extractors as their primary “audience.” Others, in response, block AI crawlers entirely. New economic pressures lead to new incentives, sometimes at odds with the early vision of openness.
From that starting point, several deeper themes emerged.
Openness is not, and has never been, binary
One of the most constructive insights from the TPAC discussion was the idea that “open web” should not be treated as a binary distinction. It’s a spectrum with many dimensions: price, friction, format, identity requirements, device accessibility, geographic availability, and more. Moving an article behind a paywall reduces openness in one dimension but doesn’t necessarily negate it in others. Requiring an email address adds friction but might preserve other characteristics of openness.
Trying to force the entire concept into a single yes/no definition obscures more than it reveals. It also leads to unproductive arguments, as different communities emphasize different attributes.
Recognizing openness as a spectrum helps explain why reaching consensus is so hard and why it may be unnecessary.
Motivations for publishing matter more than we think
Another thread that ran through the TPAC meeting was the simple observation that people publish content for very different reasons. Some publish for reputation, some to support a community, some for revenue, and some because knowledge-sharing feels inherently worthwhile. Those motivations shape how creators think about openness.
AI complicates this because it changes the relationship between creator intention and audience experience. If the audience receives information exclusively through AI summaries, the creator’s intended context or narrative can vanish. An article written to persuade, illuminate, or provoke thought may be reduced to a neutral paragraph. A tutorial crafted to help a community may be absorbed into a model with no attribution or path back to the original.
This isn’t just a business problem. It’s a meaning problem. And it affects how people think about openness.
The web is a commons, and commons require boundaries
At TPAC, someone invoked Elinor Ostrom’s research on commons governance (here’s a video if you’re not familiar with that work): a healthy commons always has boundaries. Not barriers that prevent participation, but boundaries that help define acceptable use and promote sustainability.
That framing resonated well with many in the room. It helped reconcile something that often feels contradictory: promoting openness while still respecting limits. The original web was open because it was simple, not because it was boundary-free. Sharing norms emerged, often informally, that enabled sustainable growth.
AI-Pref and WebBotAuth are modern attempts to articulate boundaries appropriate for an era of large-scale automation. They are not restrictions on openness; they are acknowledgments that openness without norms is not sustainable. Now we just need to figure out what the new norms are in this brave new world.
We’re debating in the absence of shared data
Despite strong opinions across both meetings, participants kept returning to a sobering point: we don’t actually know how open the web is today. We lack consistent, shared metrics. We cannot easily measure the reach of automated agents, the compliance rates for directives, or the accessibility of content across regions and devices.
Chrome’s CRUX dataset, Cloudflare Radar, Common Crawl, and other sources offer partial insights, but there is no coherent measurement framework. This makes it difficult to evaluate whether openness is expanding, contracting, or simply changing form.
Without data, standards communities are arguing from instinct. And instinct is not enough for the scale of decisions now at stake.
Tradeoffs shape the web’s future
Another candid recognition from TPAC was that the web’s standards bodies cannot mandate behavior. They cannot force AI crawlers to comply. They cannot dictate which business models will succeed. They cannot enforce universal client behavior or constrain every browser’s design.
In other words: governance of the open web has always been voluntary, distributed, and rooted in negotiation.
The most meaningful contribution these communities can make is not to define one perfect answer, but to design spaces where tradeoffs are legible and navigable: spaces where different actors—creators, users, agents, platforms, governments—can negotiate outcomes without collapsing the web’s interoperability.
Toward a set of OpenWeb Values
Given the diversity of use cases, business models, motivations, and technical architectures involved, the chances of arriving at a single definition of “open web” are slim. But what Montréal and Kobe made clear is that communities might agree on values, even when they cannot agree on definitions.
The values that surfaced repeatedly included:
- Access, understood as meaningful availability rather than unrestricted availability.
- Attribution, not only as a legal requirement but as a way of preserving the creator–audience relationship.
- Consent, recognizing that creators need ways to express boundaries in an ecosystem increasingly mediated by automation.
- Continuity, ensuring that the web remains socially, economically, and technically sustainable for both creators and readers.
These values echo what Tantek articulated in 2010 and what the Web Foundation cites in its historic framing of the web. They are principles rather than prescriptions, and they reflect the idea that openness is something we cultivate, not something we merely declare.
And in parallel, these values mirror the OpenStand Principles—which articulated how open standards themselves should be developed. I wrote about this a few months ago in “Rethinking Digital Identity: What ARE Open Standards?” The fact that “open standard” means different things to different standards communities underscores that multiplicity of definitions does not invalidate a concept; it simply reflects the complexity of global ecosystems.
The same can be true for the open web. It doesn’t need a singular definition, but maybe it does need a set of clear principles so we know what we are trying to protect.
Stewardship rather than preservation
This is why the phrase preserving the open web makes me a little uncomfortable. Preservation implies keeping something unchanged. But the web has never been static. It has always evolved through tension: between innovation and stability, between access and control, between human users and automated agents, between altruistic publication and economic incentive.
The Web Foundation’s history makes this clear, as does my own experience over the last thirty years (good gravy, has it really been that long?). The web survived because communities continued to adapt it. It grew because people kept showing up to argue, refine, and redesign. The conversations in Montréal and Kobe sit squarely in that tradition.
So perhaps the goal isn’t preservation at all. Perhaps it’s stewardship.
Stewardship acknowledges that the web has many purposes. It acknowledges that no single actor can dictate its direction. It acknowledges that openness requires boundaries, and boundaries require negotiation. And it acknowledges that tradeoffs are inevitable—but that shared values can guide how we navigate them.
Mark and David’s side meetings exist because a community still cares enough to have these conversations. The contentious first meeting of WebBotAuth was not a setback; it was a reminder of how difficult and necessary this work is. The TPAC discussions reinforced that, even in moments of disagreement, people are committed to understanding what should matter next.
If that isn’t the definition of an open web, it may be the best evidence we have that the open web still exists.
To Be Continued
The question “What is the open web?” is older than it appears. It surfaced in 1994 with robots.txt. It resurfaced in 2010 in Tantek’s writing. It has re-emerged now in the era of AI and large-scale automation. And it will likely surface again.
The real work is identifying what we value—access, attribution, consent, continuity—and ensuring that the next generation of tools, standards, and norms keeps those values alive.
If the conversations in Montréal and Kobe are any indication, people still care enough to argue, refine, and rebuild. And perhaps that, more than anything, is what will keep the web open.
If you’d like to read the notes that Mark and I took during the events, they are available here.
If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
00:00:30
Welcome back, everybody. This story begins in a very small, very crowded side room at IETF Meeting 124 in Montreal. It happened fairly recently, and it set the tone for a surprisingly deep conversation.
00:00:43
Picture this: browser architects, policy specialists, working group chairs, privacy researchers, content creators, and a handful of curious observers — all packed together, all invested in understanding where the web is headed.
00:00:57
The session was titled Preserving the Open Web. It was convened by Mark Nottingham and David Schinazi — two people worth following if you want to understand how technical and policy perspectives meet to shape the future of the Internet.
00:01:10
A week later, at the W3C TPAC meeting in Kobe, Japan, I found myself in almost exactly the same conversation.
00:01:22
Once again, Mark and David convened the group to compare perspectives across different standards organizations.
00:01:28
They asked the same questions. The only real difference was the slightly larger room — and with it, new faces, new cultural norms, and a different governing style for the standards bodies.
00:01:41
But in both meetings, we landed almost immediately on the same question:
00:01:43
What exactly are we trying to preserve when we talk about the “open web”?
00:01:53
The phrase is everywhere. It appears in policy documents, standards charters, keynotes, and hallway conversations. Yet when you ask a room to define it, things get fuzzy very quickly. And that fuzziness isn’t academic — it matters.
00:02:36
Without clarity about what “open” means, identifying the actual problem becomes far more difficult as automation patterns shift and economic pressures evolve.
00:02:46
This isn’t a new dilemma. The web has been wrestling with the meaning of “open” for decades.
00:03:09
In the earliest days, everything about the web was profoundly human-scaled. People wrote HTML by hand. They published content to servers they controlled. They linked to one another freely. Publishing required no permission.
00:03:18
If you had an idea, a keyboard, a computer, and an Internet connection, you could build something.
00:03:26
The Web Foundation describes these early design choices as a deliberate act of democratization.
00:03:26–00:03:40
Tim Berners-Lee wanted anyone to create, anyone to link, and anyone to read — all using tools of their choosing. This spirit defined the earliest sense of an “open web”:
00:03:40
Dun dun dun.
00:03:42
Then the robots arrived.
00:03:44
In 1994, Martijn Koster proposed robots.txt, a simple file that told automated crawlers where they were and were not welcome.
00:04:14
It wasn’t a law. It was a social protocol. Well-behaved crawlers respected it. Bad actors ignored it and revealed themselves by doing so.
00:04:25
That tiny file introduced a big shift: openness for humans didn’t automatically mean openness for machines.
00:04:30
Even then, openness carried nuance.
00:04:35
When researching this post, one of the earliest attempts to define the open web I found was from Tantek Çelik in 2010.
00:05:07
His framing focused on characteristics — not purity tests:
00:05:16
Fifteen years later, it’s still uncannily relevant. And, amusingly, Tantek was in the room again at TPAC while we revisited the same conversation.
00:05:21
I can only imagine the déjà vu.
00:05:21–00:05:44
The web has always needed recalibration. As technology evolves, expectations shift — and so does the need to renegotiate what “open” should mean.
00:05:44
Automation and AI have pushed us into the next round of that negotiation.
00:05:51
Today’s successors to robots.txt are emerging in the IETF.
00:05:57
One is the AI Preferences Working Group, commonly known as AI-Pref.
00:06:03
They’re trying to create a structured way for websites to express preferences about AI training and automated data reuse.
00:06:11
Think of it as trying to define the language that might appear in a future robots-style file — same spirit, far higher stakes.
00:06:23
Why does this matter?
00:06:23–00:06:49
Traditional search crawlers index the web to help humans find things.
AI crawlers ingest the web so their models can absorb things.
This changes the stakes dramatically — legally, economically, creatively, and infrastructurally.
00:06:49
Another initiative in the IETF is the WebBotAuth Working Group, which explores whether bots should authenticate themselves — and how.
00:06:59
Their early meetings focused on defining the scope of the problem. IETF124 was their first major gathering, and it highlighted how tangled this space really is.
00:07:29
With several hundred people in the room, discussions grew heated enough that the chairs questioned whether the group had been chartered prematurely.
00:07:42
Is that a failure? Maybe. Or maybe it reflects something deeper: we don’t share a mental model for what “open” should mean in a web mediated by automated agents.
00:07:57
That same lack of clarity surfaced again at TPAC in Kobe.
00:08:08
The discussion began with a familiar sentiment: the web democratized information. It gave anyone with a computer and Internet access the ability to learn, publish, and discover.
00:08:35
But is that still true today?
Modern web realities include:
- Paywalls
- Subscription bundles
- Identity gates
- Regional restrictions
- Content mediation
- AI agents that read without attribution or compensation
00:08:35–00:09:07
Some sites now serve data brokers. Others try to block AI crawlers entirely. Economic pressures and incentives have shifted — not always in ways aligned with early ideals of openness.
00:09:07
At TPAC, one of the most useful insights was this: the open web isn’t a switch. It’s not a binary. It’s a spectrum.
00:09:33
Different dimensions define openness: price, friction, format, identity requirements, device accessibility, geographic availability, and more.
00:09:39
A paywall changes openness on one axis but not all. An email gate adds friction but doesn’t automatically “close” content.
00:09:52
The binary mindset has never reflected reality — which explains why consensus is so elusive.
00:10:05
Seeing openness as a spectrum creates room for nuance and suggests that agreement may not even be necessary.
00:10:17
Another important thread: people publish for many reasons.
00:10:29–00:10:43
Some publish for reputation.
Some publish for community.
Some for income.
Some for the joy of sharing knowledge.
00:10:47
These motivations shape how people feel about openness.
00:10:58
AI complicates things. When readers experience your work only through an AI-generated summary, the context and tone you cared about may be lost.
00:10:58–00:11:20
A persuasive piece may be flattened into neutrality.
A community-oriented tutorial may be absorbed into a model with no link back to you.
This isn’t only an economic problem — it’s also a meaning problem.
Boundaries and the Commons
00:11:20
Elinor Ostrom’s work on governing shared resources came up. One of her core principles: shared resources need boundaries.
00:11:51
Not restrictive walls — but clear expectations for use. This framing resonated deeply and helped reconcile the tension between openness and limits.
00:12:01
The web has never been boundary-less. It worked because shared norms — formal and informal — made sustainable use possible.
00:12:06
AI-Pref and WebBotAuth aren’t restrictions on openness. They’re attempts to articulate healthy boundaries for a new era.
00:12:14
But agreeing on those boundaries is the hard part.
00:12:18
To understand the scale of the problem, we need measurement. But we can’t measure what we haven’t defined.
00:12:32
We lack shared metrics for openness. We don’t know the reach of automated agents, compliance rates for directives, or how accessible content really is across regions and devices.
00:12:47
Datasets like Chrome UX Report, Cloudflare Radar, and Common Crawl offer fragments — but no coherent measurement framework.
00:13:06
Without data, we argue from instinct rather than insight.
00:13:17
Another reality: standards bodies cannot mandate behavior.
00:13:34
They can’t force AI crawlers to respect preferences, dictate business models, control browser design, or enforce predictable client behavior.
00:13:45
Robots.txt has always been voluntary. It’s always been negotiation-based, not coercion-based.
00:13:56
The best contribution standards bodies can make is designing systems where trade-offs are visible and actors can negotiate without breaking interoperability.
00:14:02
It’s not glamorous work — but it’s necessary.
00:14:10
A single definition of the open web is unlikely.
00:14:17
But both Montreal and Kobe revealed alignment around a few core values:
00:14:21–00:14:46
- Access — not unlimited, but meaningful
- Attribution — to preserve the creator-audience relationship
- Consent — to express boundaries in an automated ecosystem
- Continuity — to ensure sustainability socially, economically, and technically
00:14:46–00:14:58
These values echo Tantek’s 2010 framing, the Web Foundation’s historical narrative, and the OpenStand principles.
00:14:58
They’re not definitions — they’re guideposts.
00:15:09
This is why preserving the open web may not be the right phrasing. Preservation implies stasis. But the web has never been static.
00:15:37
It has always evolved through tension — innovation vs. stability, openness vs. control, humans vs. automation, altruism vs. economic pressure.
00:15:40
Thirty years. Good gravy.
00:15:40–00:16:10
The web endured because communities adapted, argued, refined, and rebuilt.
The Montreal and Kobe conversations fit squarely into that tradition.
So perhaps the goal isn’t preservation — it’s stewardship.
Stewardship acknowledges:
- The web serves many purposes
- No single actor controls it
- Openness requires boundaries
- Boundaries require negotiation
00:16:10
Trade-offs aren’t failures. They’re part of the ecosystem.
00:16:30
Mark and David’s side meetings exist because people care enough to do this work.
00:16:30–00:16:41
The contentious WebBotAuth meeting wasn’t a setback — it was a reminder that the toughest conversations are the ones that matter.
00:16:41
TPAC showed that even without agreement, people are still trying to understand what comes next.
00:16:35–00:16:41
If that isn’t evidence that the open web still exists, I’m not sure what is.
00:16:41–00:17:18
This conversation is far from over.
It began with robots.txt in 1994.
It showed up in Tantek’s writing in 2010.
It’s appearing today in heated debates at standards meetings.
And it will almost certainly surface again — because the real work isn’t in defining “open.”
It’s in articulating what we value:
- Access
- Attribution
- Consent
- Continuity
00:17:18
And ensuring our tools and standards reflect those values.
00:17:18–00:17:26
If the discussions in Montreal and Kobe are any indication, people still care. They still show up. They still argue, revise, and rebuild.
00:17:26
And maybe that, more than anything else, is what will keep the web open.
00:17:30
Thanks for your time. Drop into the written post if you’re looking for the links mentioned today, and see you next week.
00:17:44
And that’s it for this week’s episode of The Digital Identity Digest. If this helped clarify things — or at least made them more interesting — share it with a friend or colleague and connect with me on LinkedIn @hlflanagan.
If you enjoyed the show, please subscribe and leave a rating or review on Apple Podcasts or wherever you listen. You can also find the written post at sphericalcowconsulting.com.
Stay curious, stay engaged, and let’s keep these conversations going.
The post Robots, Humans, and the Edges of the Open Web appeared first on Spherical Cow Consulting.
In episode 15 of the Data Sharing Podcast, host Caressa Kuk talks with Idriss Abdelmoula (Deloitte) and Roel ter Brugge (Ockto) about one of the most urgent and ambitious schemes of recent years: the Noodfonds Energie (the Dutch energy emergency fund).
01 Dec 2025
At the I/ITSEC exhibition in Orlando, Florida (USA), Thales unveils a new training capability designed to meet the evolving demands of modern battlefield environments, enabling easy integration of drones into live training simulation systems. This addition enhances training solutions by enabling more realistic and flexible exercises that reflect the complexity of current and future threats. Compatible with a wide range of drones, the new system covers multiple drone use cases, for both ‘friendly’ and ‘enemy’ drone scenarios.
Thales Land Live Training System ©Thales
Thales is a global leader in live training for collaborative engagements. Deployed in flagship Combat Training Centres such as CENZUB in France (for professional military use) and GAZ in Switzerland (which operates on a conscription model), Thales’ combat-proven training systems deliver robust and user-friendly capabilities for today’s armed forces. With the introduction of this new drone training capability, which can be seamlessly integrated into live training, Thales further demonstrates its ability to adapt to an ever-changing operational environment.
This new capability enables armed forces to train for both ‘friendly’ and malicious drone scenarios, thereby enhancing the effectiveness of operational readiness. The drone-agnostic kit solution is compatible with drones in categories C0–C6 and A1–A3 (ranging from a few hundred grams to several dozen kilograms) and offers unparalleled flexibility for armed forces. Key features include:
- Highly immersive simulation: Each drone can be equipped with a kit that includes sensors and indicators to simulate the effects of drone neutralisation and provide real-time feedback on the drone’s status during training exercises.
- Operational modularity: Drones can be fitted with transmitters to simulate loitering munitions (self-detonation) or armed drones (virtual release of explosive devices).
- Data-driven training: All exercise data is automatically recorded for post-training debriefing and detailed analysis.
This new drone-training module highlights Thales’ ongoing commitment to providing the most effective solutions for military preparedness. It enhances the realism of training scenarios, while enabling soldiers to refine their skills in countering aerial threats (“Red Force”) and optimising drone use on the battlefield (“Blue Force”).
With this latest innovation, Thales strengthens its position at the forefront of training and simulation technologies, ensuring that forces are prepared for the challenges of tomorrow.
“Drones are playing an increasingly decisive role on the battlefield, and it is crucial to have training solutions that can accurately simulate these new threats. Thales’ new drone training capability offers an adaptable and reliable training experience, allowing military personnel to effectively prepare for engagements and confrontations with these devices. We are proud to support the armed forces by providing training solutions that are as close as possible to their operational realities.” Benoit Broudy, Vice President of Training & Simulation activities, Thales.
To learn more about Thales’ Training & Simulation expertise: Land Force Readiness | Thales Group and its drones expertise: Drone warfare | Thales Group.
The second annotation challenge, Civic Lens: Turning speeches into signals that we organized in collaboration with Lunor AI, has officially concluded, and the results offer valuable insights into political discourse analysis and high-quality data curation.
In this challenge, contributors analyzed excerpts from European Parliament speeches and answered a detailed set of questions about content, tone, and political positioning. Annotators assessed voting intentions, cooperation versus conflict, EU vs. national framing, verifiable claims, argument style, and ideological leaning.
The dataset produced from this challenge is bound to unlock broad applications, including building AI models that detect political stance and tone, analyzing trends in legislative debate, and supporting tools that promote transparency in European policymaking.
What have we achieved?
The challenge saw strong participation and high-volume annotation activity:
- Total number of annotations: 216,260
- Total number of unique annotations: 10,726
- Total number of annotations by two annotators: 10,726
- Total number of annotators: 206
Leaderboard Scoring & Recommendations
Ensuring leaderboard fairness required a thorough analysis of annotation behavior. Each contributor was evaluated through our system across multiple dimensions to identify possible low-quality patterns, such as automated responses, random choices, or overly repetitive answer patterns. Let’s go through each of them:
1. Focus on Meaningful Contributors
We placed special emphasis on annotators with 80+ annotations, as their contributions significantly shape dataset quality. No one below this threshold had more than five sub-25-second annotations, making 80 a clean boundary.
2. Time Analysis
Several time-based indicators helped identify suspicious behavior:
Time patterns showed two distinct clusters. Very fast annotations (<25s) were treated cautiously. Annotators with >75% of annotations in this fast cluster were flagged. Unlike the previous challenge, accuracy wasn’t available, so this signal was combined with others rather than used alone (Figure 1).
Figure 1. Number of annotations by time spent for completion.
3. Correlation With Text Length
We checked whether longer texts led to longer annotation times:
- Expected: positive correlation
- Observed: a wide range, including negative correlations (see Figure 2 below)
Figure 2. Distribution of correlation coefficients between time and text length.
Annotators with a correlation coefficient below 0.05 were flagged as suspicious, again treated as one signal among many.
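As a minimal sketch of how such a check might be implemented, assume each annotation record carries an annotator id, a completion time, and the length of the annotated text; this is an illustration under those assumptions, not the team’s actual pipeline.

```python
from collections import defaultdict
import numpy as np

def flag_low_time_length_correlation(records, threshold=0.05):
    """records: iterable of (annotator_id, time_seconds, text_length) tuples.
    Returns the set of annotators whose Pearson correlation between
    annotation time and text length falls below `threshold`."""
    per_annotator = defaultdict(list)
    for annotator, seconds, length in records:
        per_annotator[annotator].append((seconds, length))

    flagged = set()
    for annotator, pairs in per_annotator.items():
        if len(pairs) < 3:              # too few points for a meaningful correlation
            continue
        times, lengths = zip(*pairs)
        if np.std(times) == 0 or np.std(lengths) == 0:
            flagged.add(annotator)      # constant completion times are themselves suspicious
            continue
        r = np.corrcoef(times, lengths)[0, 1]
        if r < threshold:
            flagged.add(annotator)
    return flagged
```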
4. Similarity & Consistency Analysis
To detect randomness, automation, or overly deterministic behavior, we evaluated each annotator along several criteria.
a) Perplexity Scores
Perplexity measures how varied an annotator’s answers are:
- 1 = deterministic
- Max value = random choice among all options
Example: For a binary question (Yes/No), a perfectly balanced 50/50 distribution yields a perplexity of 2, which could imply either genuine mixed responses or randomness (see Figure 3 below). To distinguish between these cases, we also computed the Kullback–Leibler (KL) divergence for each question, which is explained in the following section.
Figure 3. Distribution of perplexity scores for each question.
b) KL Divergence
KL divergence measures how far an annotator’s answer distribution deviates from the global answer distribution.
- 0 = identical to the global answer distribution
- Higher values = larger deviation
Figure 4 below shows KL scores across annotators.
Figure 4. Distribution of KL scores for each question.
Using these metrics, we flagged annotators as suspect when their answers showed both unusually extreme perplexity values and clear deviations from the overall answer distribution. In practical terms, an annotator was marked as suspicious when their KL divergence exceeded 0.2 and either:
- their perplexity was below 1.05 (indicating overly deterministic responses), or
- their perplexity was consistent with random answering, i.e., it was not a value that would occur with a probability of 5% or lower under a uniform random choice of answers.
Any annotations that met these combined conditions were treated as strong signals of low-quality or automated behavior. This check was applied per question for granular detection.
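To make the perplexity and KL criteria concrete, here is a minimal sketch, assuming each annotator’s answers to a single question are available as a list of option labels. The near-random test is simplified relative to the 5% significance test described above, and none of this is the team’s actual code.

```python
import math
from collections import Counter

def distribution(answers, options):
    """Empirical answer distribution over the question's options."""
    counts = Counter(answers)
    total = len(answers)
    return {opt: counts.get(opt, 0) / total for opt in options}

def perplexity(dist):
    """2**entropy (base 2): 1 = always the same answer, len(options) = uniform."""
    entropy = -sum(p * math.log2(p) for p in dist.values() if p > 0)
    return 2 ** entropy

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) in bits, where q is the global answer distribution."""
    return sum(p[o] * math.log2(p[o] / max(q.get(o, 0), eps))
               for o in p if p[o] > 0)

def is_suspicious(annotator_answers, global_answers, options,
                  kl_threshold=0.2, ppl_floor=1.05):
    """One-question check: flag when the annotator deviates strongly from the
    global distribution AND looks either deterministic or near-random.
    The near-random test here simply compares perplexity to the uniform maximum;
    the 5% significance test in the article needs the full sampling distribution."""
    p = distribution(annotator_answers, options)
    q = distribution(global_answers, options)
    ppl = perplexity(p)
    near_random = ppl > 0.95 * len(options)   # simplified stand-in for the 5% test
    return kl_divergence(p, q) > kl_threshold and (ppl < ppl_floor or near_random)
```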
5. Consistency & Similarity Checks
Two further checks helped separate legitimate divergent opinions from suspicious behavior:
a) Consistency (agreement with other annotators):
- For items annotated by multiple humans, we computed each annotator’s agreement with peers.
- Annotators whose agreement with others fell below 50% were flagged as suspect.
- Such deviations may reflect valid alternative perspectives but are suspicious when combined with other evaluation criteria.
b) Similarity (agreement with an AI model):
- We compared annotator labels with labels produced by an AI model applied to the same items.
- Annotators whose similarity-to-consistency ratio exceeded 1.5 were marked.
- High agreement with AI accompanied by low agreement with humans can indicate automation or pattern-following derived from model outputs (see Figure 5 below).
Figure 5. Consistency and similarity distribution.
Why It Matters
Reliable human-curated data is essential for training AI systems that can correctly interpret political tone, stance, and argumentation. The CivicLens challenge produced a path to create a clean, high-quality dataset that can power better classification models, improve analysis of parliamentary debates, and strengthen tools built for transparency and public understanding. High-utility datasets like this directly improve how AI handles real-world governance and policy content.
Annotators Hub: CivicLens — Turning Speeches into Signals was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
The LiteracyForge challenge marked the first-ever task in the Ocean Annotators Hub, a new initiative designed to reward contributors for curating the data that trains AI. Organized in collaboration with Lunor, the challenge asked participants to review short English texts and assess the quality and difficulty of comprehension questions.
The goal: to build a high-quality dataset that helps train models to recognize question complexity and adapt to learner ability. These models could support adaptive tutors like Duolingo, improve literacy tools, and drive more inclusive learning systems.
What were the requirements?
Participants read short passages and:
- Answered multiple-choice comprehension questions
- Rated how difficult the text and questions were
- Evaluated overall quality, from clarity to relevance
What we achieved
In just three weeks:
- 88 contributors submitted qualifying work and received rewards
- 147 total contributors signed up
- 49,832 annotations were submitted
- 19,973 unique annotations were collected
- 17,581 entries were reviewed by at least two annotators
The challenge recognized consistent effort across the board. In total, 79 contributors received rewards, ranging from 0.27 to 1405.63 USDC. The leaderboard is available here, with top 10 contributors receiving between 240.21 USDC and 1405.63 USDC.
The resulting dataset offers broad applications, including training AI models to generate comprehension questions, enhancing question-answering systems, and developing tools to assess reading difficulty for educational and accessibility purposes.
How we ensured quality
To make sure rewards reflected real effort and high-quality output, a two-stage filtering process was implemented:
Time-Based Filtering
We analyzed the time spent per annotation. Entries completed in under 25 seconds were statistically less accurate.
As a result:
- All annotations under 25s were excluded from the final leaderboard, given the graphic below, which shows a clear drop in answer accuracy for annotations under 25 seconds.
- Annotators with more than 75% of their submissions under 25s were flagged.
This helped preserve genuine, fast-but-accurate work while filtering out low-effort entries.
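As an illustration of this kind of filter, here is a minimal sketch, assuming each annotation record carries an annotator id and a completion time in seconds; it is a sketch under those assumptions, not the actual challenge pipeline.

```python
from collections import defaultdict

FAST_CUTOFF_SECONDS = 25   # entries faster than this were statistically less accurate
FLAG_RATIO = 0.75          # annotators above this share of fast entries get flagged

def filter_and_flag(annotations):
    """annotations: iterable of dicts with 'annotator' and 'seconds' keys.
    Returns (kept_annotations, flagged_annotators)."""
    kept = [a for a in annotations if a["seconds"] >= FAST_CUTOFF_SECONDS]

    totals, fast = defaultdict(int), defaultdict(int)
    for a in annotations:
        totals[a["annotator"]] += 1
        if a["seconds"] < FAST_CUTOFF_SECONDS:
            fast[a["annotator"]] += 1

    flagged = {who for who, n in totals.items()
               if n > 0 and fast[who] / n > FLAG_RATIO}
    return kept, flagged
```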
Agreement Analysis
Each annotation was evaluated in two ways:
- Similarity: how closely it matched labels generated by an AI model
- Consistency: how well it agreed with annotations made by other contributors
High similarity with AI alone wasn’t enough. The team specifically looked for instances where annotators aligned with the AI without matching human consensus, as a possible signal of automation.
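A minimal sketch of how that comparison might look, assuming per-item labels are available from the annotator, from peer annotators, and from an AI model; the ratio threshold mirrors the one described in the CivicLens write-up above and is illustrative only.

```python
def agreement(labels_a, labels_b):
    """Fraction of items on which two label sequences agree."""
    pairs = list(zip(labels_a, labels_b))
    return sum(x == y for x, y in pairs) / len(pairs) if pairs else 0.0

def automation_signal(annotator_labels, peer_labels, ai_labels, ratio_threshold=1.5):
    """Flag when agreement with the AI model clearly outstrips agreement with humans.
    High AI similarity combined with low human consistency is the pattern of interest."""
    similarity = agreement(annotator_labels, ai_labels)      # vs. the AI model
    consistency = agreement(annotator_labels, peer_labels)   # vs. other contributors
    if consistency == 0:
        return similarity > 0    # matches the AI while agreeing with no humans at all
    return similarity / consistency > ratio_threshold
```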
No such patterns emerged during the LiteracyForge challenge, but the methodology sets a precedent for future quality control.
Why it matters
High-quality human annotation remains the single most effective way to improve AI models, especially in sensitive or complex areas like education, accessibility, or policy. Without precise human input, models can learn the wrong patterns, reproduce bias, or miss the nuance that real-world applications demand.
Ocean and Lunor are building the infrastructure to make this easier, more transparent, and more rewarding for contributors. Stay tuned for the next challenge!
Annotators Hub: LiteracyForge Challenge Results was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
01 Dec 2025
For the first time, two French IT Security Evaluation Centers (CESTI) — those of Thales and CEA — are cooperating in the assessment of post-quantum cryptography (PQC) algorithms as part of the GIVERNY project. CEA’s CESTI was the first French laboratory accredited by ANSSI to conduct evaluations of products integrating PQC under the new EUCC European certification scheme. Thales’ CESTI, already engaged in this process with ANSSI, is expected to join soon. Presented at the European Cyber Week 2025 in Rennes (France), from 17 to 20 November, this milestone represents a major contribution to European digital sovereignty and allows solution providers to obtain PQC certification to protect their users against the challenges posed by quantum computing.
As the quantum revolution compels us to rethink the foundations of cybersecurity, Thales — a global high-tech leader in Defense, Aerospace, and Cyber & Digital — and CEA, a public research body serving the State to ensure France’s scientific and technological sovereignty, have collaborated to assess and strengthen the new generation of post-quantum cryptographic (PQC) algorithms.
In 2025, Thales and CEA launched a joint project called GIVERNY, aimed at exploring the resistance to attacks of two post-quantum algorithms: the HAWK signature scheme, which is closely related to the NIST FN-DSA (Digital Signature Standard), and the FAEST signature scheme, which is related to the AES (Advanced Encryption Standard).
Through joint development and cross-evaluation work, Thales and CEA cryptographers and cybersecurity experts analysed the robustness of FAEST and HAWK under real-world implementation scenarios comparable to those of future electronic products integrating post-quantum cryptography.
This collaboration between the two CESTIs recognized by ANSSI for their post-quantum cryptography expertise is unprecedented in France and marks a key step in preparing the industry for the transition to quantum-resistant systems.
The GIVERNY project results come at a time when ANSSI has recently announced that products placed on the market after 2030 must transition to post-quantum algorithms, with this requirement applying as early as 2027 for sensitive products seeking qualification. (Source: Cryptographie post-quantique - FAQ | ANSSI)
“Post-quantum cryptography represents a strategic turning point for European digital sovereignty. The outcome of this collaboration between Thales and CEA is the result of major research investments and demonstrates France’s ability to combine innovation and security at the highest level. Thales and CEA are positioning themselves as key players in tomorrow’s cybersecurity, serving Europe’s digital trust,” said Pierre Jeanne, Vice President, Sovereign Cybersecurity, Thales.
“As a trusted player in France and Europe, and deeply involved in building and advancing technology ecosystems, CEA is strengthening its partnerships with major Defence and security industrial actors. Collaboration with Thales in the field of cybersecurity is a real asset for CEA, which conducts technological research to support the emergence of a risk-free digital transition for citizens, businesses, and institutions,” added Philippe Mazabraud, Director of the Global Security Cross-Programme, CEA.
The GIVERNY project enabled the first security analysis of FAEST and HAWK implementations against side-channel attacks. Thales and CEA experts identified vulnerabilities not yet disclosed to the scientific community and assessed the effectiveness of initial countermeasures. This cooperation illustrates the growing maturity and world-class expertise of the French ecosystem in the field of post-quantum cryptography.
About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.
The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.
Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.
About CEA
The CEA is a public research organisation whose mission is to inform public-policy decisions and provide French and European companies and local authorities with the scientific and technological means to better address four major societal transformations: the energy transition, the digital transition, the future of healthcare, and Defence and global security.
Its purpose is to act to ensure that France and Europe maintain scientific, technological, and industrial leadership, and to contribute to a safer and better-controlled present and future for all.
More information: www.cea.fr
Guilhem Boyer
Press officer Physics, Space, Defense & Innovation
+33 6 73 41 42 45
Identity and access management (IAM) is no longer just a behind-the-scenes function. It’s the foundation of digital trust for modern enterprises. As organizations look to balance stronger security with frictionless experiences, Ping continues to lead the way forward.
We’re proud to share that Ping Identity has been named a Leader in the 2025 Gartner® Magic Quadrant™ for Access Management for the ninth consecutive year, positioned highest in Ability to Execute and furthest in Completeness of Vision (for the second year in a row).
Ping also scored highest across three use cases—Workforce Access Management, Partner Access Management, and Machine Access Management—in the 2025 Gartner® Critical Capabilities for Access Management.
In this week’s Community Connect Spaces, the discussion focused on one major theme:
the biggest stories in crypto right now all point toward one thing — identity.
From regulation and social media to AI and enterprise, decentralized identity (DID), verifiable credentials, and reputation are quickly moving from “nice to have” to “core infrastructure.” Below is a recap of the key narratives we covered, and how they connect directly to what Ontology has been building for years.
👉 Download ONTO Wallet to create your first ONT ID, manage assets, and start building portable reputation across Web3.
As we head toward Ontology’s upcoming anniversary, this article is part of a wider series that highlights how today’s biggest crypto narratives are converging with the identity and trust vision we have been building for years.
The Global Regulatory Shift Toward Identity
Around the world, regulators are tightening their approach to crypto — but the most interesting trend isn’t enforcement, it’s how they’re thinking about identity.
Recent developments around MiCA implementation in Europe, growing scrutiny of exchanges in Asia, and continued enforcement in the U.S. all share a common theme:
regulators are increasingly talking about reusable, portable, privacy-preserving identity.
Instead of forcing users to complete KYC from scratch on every new platform, the emerging model looks like this:
- Verify once with a trusted provider
- Receive a credential that proves your status
- Reuse that credential across multiple platforms and services
This model:
- Reduces friction for users
- Lowers compliance overhead for platforms
- Creates a safer environment without over-collecting personal data
This is exactly the world Ontology has been designing for.
With ONT ID and the Verifiable Credentials framework, users can:
- Prove who they are without repeatedly sharing sensitive documents
- Maintain user-owned, privacy-preserving identity
- Authenticate across platforms in a compliant way
- Meet regulatory requirements without compromising control over their data
Ontology has been advocating for reusable, verifiable identity for years. Now, the regulatory conversation is catching up. As this compliance layer becomes more standardized, ONT ID is positioned to act as a core building block for privacy-first, regulation-ready identity in Web3.
Social Platforms and Wallets Are Turning to DID
Another major narrative this week was the growing adoption of DID in the social and wallet space.
Decentralized social projects like Farcaster and Lens are putting identity at the center of their ecosystems, while larger, more traditional platforms and wallet providers are increasingly exploring stronger identity frameworks in response to:
- AI-generated content
- Deepfakes
- Fake or bot-driven accounts
These dynamics are pushing apps toward identity systems that can:
- Verify that a user is a real human
- Protect pseudonymity while still proving authenticity
- Make reputation portable across apps and communities
Again, this is where Ontology’s DID stack fits naturally.
Using ONT ID and Ontology’s DID infrastructure, social apps and wallets could enable:
- Cross-platform authentication using a single identity
- Human verification without exposing private data
- Portable, DID-based social reputation
- Protection against bots, impersonators, and sybil attacks
In a world increasingly flooded with AI-generated profiles and synthetic content, DID is moving from optional add-on to core requirement. Ontology offers a sovereign, decentralized, and portable identity layer that social platforms and wallet providers can integrate to build more trusted, user-centric experiences.
AI + Web3: Building the Trust Layer
One of the most important conversations of the week was the intersection between AI and blockchain.
Recent reports from leading ecosystem players have focused on a key idea:
AI is powerful, but without a trust layer, it becomes risky.
As AI reaches the point where its outputs are almost indistinguishable from human-created content, we face a global trust challenge:
We need cryptographic proof of:
- Who created a piece of content
- When it was created
- Whether it has been altered
- Whether we are interacting with a human, an AI agent, or a hybrid
This is where decentralized identity and verifiable credentials become essential.
Ontology’s infrastructure is designed not just for human identities, but also for:
- AI agents
- Bots and automated systems
- Machine-to-machine interactions
In an AI-powered world, Ontology envisions:
- Humans verifying that they are interacting with a legitimate AI service
- AI agents verifying each other before exchanging data or executing tasks
- Content tied cryptographically to its original creator and source
- Algorithms and models carrying credentials that prove their integrity and provenance
The narrative is shifting from generic “AI + blockchain hype” to identity-driven trust for AI. Ontology is already building the DID and credential layers that can anchor this new trust fabric.
Reputation in DeFi, GameFi, and Airdrops
Reputation is rapidly becoming one of the most valuable assets in Web3.
This week highlighted a surge of interest in reputation-based systems across:
- DeFi protocols, especially lending
- GameFi projects, battling bots and unfair play
- Airdrops and community rewards, focusing on quality over quantity
The old model of “anyone with a wallet can claim” is fading. Projects increasingly want:
- Genuine, long-term users
- Reduced sybil activity and bot farming
- Reward mechanisms that favor engaged communities rather than opportunists
DeFi is exploring reputation-based credit; GameFi is seeking identity-aware mechanisms to ensure fair participation; and airdrops are increasingly gated by activity, history, and contribution quality.
Ontology’s identity and reputation tools offer exactly what this evolution needs:
- Sybil-resistant reward systems
- Verified, identity-aware airdrops
- Trust-based access tiers and community segments
- Loyalty and engagement scoring based on real behavior
- Identity-driven community structures and roles
With ONT ID and Ontology’s reputation framework, reputation becomes portable, verifiable, and secure — not trapped inside a single platform. This unlocks a more sustainable and fair approach to incentives across ecosystems.
Enterprise Interest in Decentralized Identity
Beyond crypto-native platforms, enterprises across multiple industries are accelerating their exploration of decentralized identity and verifiable credentials.
We are seeing growing activity around DID in:
- Supply chain — product-level identity and provenance
- Education — verifiable diplomas, credentials, and skill certificates
- Workforce and HR platforms — tamper-proof worker profiles and histories
- Healthcare — privacy-preserving patient identity and data access control
Enterprises are looking for ways to:
- Reduce fraud
- Improve data integrity
- Avoid centralized honeypots of sensitive information
- Comply with strict data protection regulations
Ontology is well positioned here, with years of experience designing and deploying identity solutions for real-world partners in finance, automotive, and more.
Our DID and credential tools are:
- Modular — adaptable to different use cases and architectures
- Cross-chain — not locked into a single network
- Enterprise-ready — designed to meet real operational and compliance needs
As more industries converge on DID standards, Ontology’s infrastructure can serve as a reliable, interoperable trust layer for real-world data.
Where Ontology Is Focusing Next
In light of these converging trends — regulation, social identity, AI, reputation, and enterprise adoption — Ontology is doubling down on several strategic priorities:
- Expanding interoperable DID across multiple blockchains
- Building identity support for AI agents and automated systems
- Enhancing reputation scoring models for users, entities, and machines
- Deepening ecosystem partnerships across DeFi, GameFi, and infrastructure
- Strengthening developer tooling around ONT ID and verifiable credentials
- Continuing enterprise pilots and collaborations in key industries
- Growing community reputation and reward mechanisms powered by DID
These focus areas place Ontology at the center of the emerging trust-layer narrative for both Web3 and AI.
Conclusion: Identity as the Foundation of the Future Internet
The stories shaping crypto and Web3 this week — from regulatory frameworks and social platforms to AI and enterprise systems — all point in the same direction.
Identity is becoming the foundation of the next internet.
Decentralized identity, verifiable credentials, and portable reputation are no longer niche concepts. They are quickly becoming essential components for:
- Compliant yet user-centric regulation
- Safer and more authentic social platforms
- Trustworthy AI interactions
- Fair and sustainable DeFi and GameFi ecosystems
- Secure, interoperable enterprise data infrastructure
This is the world Ontology has been building toward from the start.
As the demand for a decentralized trust layer grows, ONT ID and Ontology’s broader identity stack are ready to power the next generation of applications — across Web3, AI, and the real-world economy.
Ontology will continue to push forward as the trust layer for Web3, AI, and beyond.
Recommended Reading
- 8 Years of Trust, Your Ontology Story Begins Here — A look back at Ontology’s journey as a trust-focused Layer 1, highlighting the milestones, partnerships, and identity innovations that shaped its first eight years — and where it’s heading next.
- ONT ID: Decentralized Identity and Data — A deep dive into Ontology’s decentralized identity framework, including DID, verifiable credentials, and how ONT ID underpins privacy-preserving identity across multiple ecosystems.
- Verifiable Credentials & Trust Mechanism in Ontology — Technical overview of how Ontology issues, manages, and verifies credentials using ONT ID, including credential structure, signatures, and on-chain attestations.
- Identity Theft Explained — A clear, practical explainer on how identity theft works today and how decentralized identity, self-sovereign identity, and zero-knowledge proofs can finally flip the script in users’ favor.
Ready to keep exploring Ontology and DID?
👉 Stay connected with Ontology, join our community, and never miss an update:
https://linktr.ee/OntologyNetwork
Community Connect: Web3 Trends Shaping Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
Ontology is celebrating its 8th anniversary and introducing one of its biggest updates yet — the v3.0.0 MainNet upgrade. Li Jun, Ontology’s Founder, shared the full announcement on X.
Key Highlights From Ontology's v3.0.0 MainNet Upgrade
Strengthened Token Economy and Incentive Model
The v3.0.0 upgrade introduces major improvements to Ontology's dual-token model (ONT and ONG), designed to support long-term sustainability and ecosystem growth.
The total ONG supply has been reduced from 1 billion to 800 million, with 100 million ONG permanently locked. This lowers inflation and strengthens long-term token value.
Updated reward distribution now allocates 80% of newly issued ONG to ONT stakers and 20% to liquidity and ecosystem expansion, balancing network security with growth incentives.
These changes align Ontology's token model with long-term utility and healthier economic design.
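For a concrete sense of the new split, here is a minimal sketch that applies the announced 80/20 allocation to a hypothetical batch of newly issued ONG. Only the ratios and supply figures come from the announcement; the per-period issuance amount is an assumption for illustration.

```python
# Illustrative only: applies the announced 80/20 ONG reward split to a
# hypothetical issuance amount. The 1,000 ONG figure is made up; only the
# ratios and supply numbers come from the v3.0.0 announcement.

TOTAL_ONG_SUPPLY = 800_000_000    # reduced from 1 billion
PERMANENTLY_LOCKED = 100_000_000  # ONG locked for good

STAKER_SHARE = 0.80      # newly issued ONG allocated to ONT stakers
ECOSYSTEM_SHARE = 0.20   # allocated to liquidity and ecosystem expansion

def split_rewards(newly_issued_ong: float) -> dict:
    """Split a batch of newly issued ONG per the v3.0.0 allocation."""
    return {
        "ont_stakers": newly_issued_ong * STAKER_SHARE,
        "liquidity_and_ecosystem": newly_issued_ong * ECOSYSTEM_SHARE,
    }

if __name__ == "__main__":
    example_issuance = 1_000  # hypothetical ONG issued in one period
    print(split_rewards(example_issuance))
    # {'ont_stakers': 800.0, 'liquidity_and_ecosystem': 200.0}
```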
Network Upgrades, Identity Integration, and Governance
The v3.0.0 upgrade enhances the core performance, interoperability, and identity tooling of the Ontology blockchain.
Upcoming support for EIP-7702 will introduce a more flexible account system and stronger compatibility with the Ethereum ecosystem, improving cross-chain liquidity and builder experience.
Core upgrades to consensus, stability, and gas management make the network faster and more reliable.
ONT ID will soon be creatable directly on Ontology EVM, unlocking seamless decentralized identity use cases across DeFi, gaming, and social platforms.
All tokenomics updates were approved through on-chain governance, reflecting a mature and aligned Ontology community.
These improvements position Ontology as a more interoperable, identity-driven, and community-governed Web3 infrastructure layer.
Product Enhancements, Developer Growth, and Real-World Utility
Ontology continues to expand its ecosystem with new tools, user experiences, and privacy-preserving features.
Expanded grants, developer tools, and onboarding resources make it easier to build with ONT, ONG, and ONT ID.
A new encrypted IM solution launching later this year will leverage decentralized identity and zero-knowledge technology to protect user sovereignty and secure communication.
The ONTO Wallet has been upgraded with a refined identity module, better UX, and new PayFi functionality developed with partners, improving Web3 payments and digital identity management.
Orange Protocol is advancing its zkTLS framework to turn verified, privacy-preserving reputation signals into real economic utility — strengthening Ontology's mission to make decentralized trust measurable and portable.
Recommended Reading
ONG Tokenomics Adjustment Proposal Passes Governance Vote
The proposal secured over 117 million votes in approval, signaling strong consensus within the network to move forward with the next phase of ONG’s evolution.
Initial update about the upcoming MainNet v3.0.0 upgrade and Consensus Nodes upgrade on December 1, 2025. This release will improve network performance and implement the approved ONG tokenomics update.
8 Years of Trust — Your Story Campaign
The first campaign to kick off Ontology’s 8th anniversary celebrations. It shares updates from the 2025 roadmap along with details on how to win rewards just for sharing your story with Ontology. We want to hear from you!
Your Guide to Joining The Node Campaign
Everything you need to know about how to get involved in Ontology’s node campaign, including key dates and requirements.
Letter from the Founder: Ontology’s MainNet Upgrade was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
🎥 Watch on YouTube 🎥
🎧 Listen On Spotify 🎧
🎧 Listen On Apple Podcasts 🎧
What does it really take for a nation to jump from fragmented, paper-based services to 80% digital identity adoption in under two years?
In this episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Pallavi Sharma, Lead of Marketing & Communications for Bhutan’s National Digital Identity (NDI) program, to unpack one of the world’s most successful national-scale identity rollouts.
This conversation is both inspiring and deeply practical. Pallavi shares how Bhutan transitioned from piloting verifiable credentials in 2023 to achieving widespread adoption across banks, telecom companies, insurance providers, and other sectors. She outlines the political, cultural, and technical conditions that enabled rapid progress, conditions that other countries and digital-identity implementers can learn from, regardless of scale or region.
You’ll hear how Bhutan balanced decentralization principles with real-world user expectations, why their messaging strategy had to shift dramatically, and how features like self-attested photos, digital signatures, and even P2P chat unexpectedly drove massive user growth. Pallavi also outlines their future roadmap, from cross-border interoperability testing with India’s Digi Yatra, to biometrics-backed e-voting, to long-term ambitions for blockchain-based asset tokenization and CBDC integration.
Key Insights
Bhutan achieved 80% national digital identity adoption in two years by integrating public services, banks, telcos, and private providers into a unified ecosystem.
Strong backing from His Majesty the King and regulatory bodies enabled frictionless adoption and minimized political pushback.
Users cared far more about seamless access than decentralization, privacy, or SSI principles, leading to a shift in messaging from "consent" to "convenience."
Offering remote onboarding, self-attested passport photos, digital signatures, and passwordless login enabled banks to scale eKYC rapidly.
Even small features like P2P chat spiked adoption, showing that familiar, high-value use cases matter more than SSI theory.
A centralized trust registry governs issuers/verifiers today, but the platform is expanding to include health, credit bureau, and employee credentials.
Bhutan is testing cross-border interoperability with India's DigiYatra and expanding support for multi-blockchain issuance (Polygon + Ethereum).
The government sees value in Web3: exploration of CBDCs, NFTs, crypto payments, and blockchain-backed land tokenization.
Inclusion remains core: cloud wallets for non-smartphone users, guardian features for children, voice & dual language support, and bio-crypto QR codes.
Future vision: enabling high-stakes digital processes such as e-voting, land transactions, insurance claims, and remote verification using biometrics.
Strategies
Regulator-first alignment: Work closely with central banks, telecom authorities, and government agencies to ensure legal backing for digital credentials.
Start simple (passwordless login): Begin with a universally valuable feature, then expand into more sophisticated credential issuance.
Co-design with service providers: Analyze business workflows to identify credential gaps and add features (e.g., live-verified photos, e-signatures).
Use mandatory government services as onboarding channels: Services like marriage certificates or police clearances drive mass citizen adoption.
Promote use-case messaging rather than technical messaging: Highlight convenience ("open a bank account from home") rather than decentralization.
Introduce features that mimic familiar behaviors: P2P chat drove major user uptake by offering an intuitive, everyday-use function.
Leverage biometrics for high-trust actions: Face-search, liveness checks, and crypto-QR codes enable secure remote workflows for future use cases.
Additional resources:
Episode Transcript
Bhutan National Digital Identity (NDI) Website
Previous SSI Orbit episode with Pallavi (2023 launch)
Bhutan NDI Act
DigiYatra (India's Digital Travel Credential Framework)
About Guest
Pallavi Sharma is the Lead of Marketing & Communications for Bhutan's National Digital Identity (NDI) program, where she drives nationwide awareness, adoption, and stakeholder engagement for the country's digital identity ecosystem. She plays a key role in shaping how citizens, regulators, and service providers understand and interact with Bhutan's digital public infrastructure.
Previously, Pallavi worked with Deloitte Consulting in India and holds a Master’s degree in International Relations. She is passionate about digital inclusion, user-centric design, and building trusted digital systems that empower citizens and simplify access to essential services. LinkedIn
The post From 0 to 80%: How Bhutan Built a National Digital Identity in Two Years (with Pallavi Sharma) appeared first on Northern Block | Self Sovereign Identity Solution Provider.
Learn how SSN validation checks filter impossible, deceased, or mismatched numbers early, reducing fraud, lowering review workloads, and improving compliance outcomes across U.S. onboarding flows.
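As a rough illustration of the "impossible number" portion of such checks, here is a minimal sketch based on the SSA's published format rules. It is not ComplyCube's implementation, and it does not cover deceased or name/DOB mismatch checks, which require authoritative external data sources.

```python
import re

def ssn_is_structurally_valid(ssn: str) -> bool:
    """Minimal structural check based on published SSA format rules.

    This only filters 'impossible' numbers. Deceased and name/DOB
    mismatch checks require authoritative external data and are not
    covered by this sketch.
    """
    match = re.fullmatch(r"(\d{3})-?(\d{2})-?(\d{4})", ssn)
    if not match:
        return False
    area, group, serial = match.groups()

    if area in ("000", "666") or area.startswith("9"):
        return False  # area numbers 000, 666, and 900-999 are never issued
    if group == "00":
        return False  # group number 00 is never issued
    if serial == "0000":
        return False  # serial number 0000 is never issued
    return True

# Example usage:
# ssn_is_structurally_valid("123-45-6789")  -> True (format-valid, not necessarily issued)
# ssn_is_structurally_valid("666-12-3456")  -> False
```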
The post How SSN Validation Works: A Practical Guide first appeared on ComplyCube.
The festive season is fast approaching, bringing with it the age-old holiday question: Which types of Christmas trees should grace your living room this year, a fresh, fragrant pine, or a perfectly shaped, reusable artificial tree? This guide is designed to navigate the complexities of this decision, breaking down the critical pros, cons, popular types of Christmas trees, and specific considerations for both real and artificial options. Our goal is to help you make the most informed and sustainable choice that perfectly suits your home, budget, and holiday traditions.
The Case for Real Christmas Trees
Core Appeal: The primary draw is the authentic holiday experience rooted in tradition. Nothing compares to the fresh, unique scent of pine and the beautiful, imperfectly natural shape that instantly transforms a room into a festive haven.
Popular Types of Christmas Trees
Choosing the right type affects both looks and longevity.
Fraser Fir: Highly prized for its excellent needle retention (staying green longer) and exceptionally strong, upturned branches, which are ideal for supporting heavier ornaments.
Balsam Fir: The classic choice, renowned for producing the most potent and traditional Christmas fragrance. Its deep green color and symmetrical, pyramid shape are aesthetically perfect.
Douglas Fir: A very popular and highly affordable option. It offers a dense, full shape and good aroma, making it a great budget-friendly choice for families.
Scotch Pine: Known for its long-lasting freshness (it can stay fresh for over a month) and stiff branches that easily hold ornaments. Its needles are sharp and retained well, even when dry.
Pros and Cons Summary (Real)
Pros:
Natural Scent & Aesthetics: Provides an unmatched, authentic look and the irreplaceable pine fragrance.
Eco-Friendly: Supports local Christmas tree farms, is renewable, and is completely biodegradable (can be chipped or mulched).
Unique Look: Each tree is unique, offering character and individuality that artificial trees cannot replicate.
Supports Local Farms: Buying from a tree farm helps local agriculture and often includes a fun family outing.
Cons:
High Maintenance: Requires consistent watering to prevent it from drying out prematurely.
Needle Mess: Shedding needles can create a significant mess on the floor and carpets, requiring frequent sweeping.
Fire Hazard: If allowed to dry out, a real tree can become a dangerous fire hazard.
Limited Lifespan: Typically only lasts about 4 to 6 weeks indoors before beginning to visibly decline.
The Case for Artificial Christmas Trees
Core Appeal: The main advantages of artificial trees are their unparalleled convenience, long-term reusability, and the extensive variety of shapes, sizes, and colors available, allowing for perfect integration into any decorating scheme.
Popular Types of Artificial Trees
The quality and realism of an artificial tree depend heavily on the material used.
PVC (Polyvinyl Chloride): This is the most common and budget-friendly option. PVC needles are cut from flat sheets and twisted onto wires, creating a full and dense, albeit less realistic, appearance.
PE (Polyethylene) / True Needle Technology: These trees offer the highest level of realism. The needles are injection-molded directly from casts of real tree branches, resulting in a three-dimensional shape and texture that closely mimics natural evergreen foliage.
Fiber Optic/Pre-lit Trees: Valued for maximum convenience, these options come with lights professionally strung and integrated directly into the branches. Fiber optic trees use tiny light strands within the needles themselves for a unique glowing effect.
Pros and Cons Summary (Artificial)
Pros:
Zero Maintenance: Requires no watering, cleaning, or other upkeep once assembled, saving time during the busy holiday season.
Reusable for Years: High-quality trees can last 10 to 20 years, making them a lower cost per use over time.
Consistent Shape & Look: Provides a perfect, symmetrical look year after year, with options for specific color palettes or pre-fluffed branches.
Pre-lit Options: Eliminates the effort and frustration of stringing lights and simplifies take-down and storage.
Cons:
Requires Storage Space: Needs dedicated space to be carefully packed away and stored for 10 or 11 months of the year.
Lack of Natural Scent: Does not provide the characteristic pine aroma, requiring supplementary scents (like candles or diffusers) if desired.
Non-Recyclable Materials: Most artificial trees are made from plastics (PVC) and metals, which are difficult to recycle and end up in landfills.
High Initial Cost: The most realistic models (PE/True Needle) can have a significantly higher upfront purchase price than a real tree.
Direct Comparison: Key Decision Factors
Scent: Real trees offer a strong natural scent; artificial trees are often unscented (sprays can help). Winner: Real Tree (for authenticity).
Maintenance: Real trees need daily watering and needle vacuuming; artificial trees are one-time setup and takedown. Winner: Artificial Tree (for convenience).
Durability/Lifespan: Real trees last 4-6 weeks; artificial trees last 5-15 years. Winner: Artificial Tree (for long-term investment).
Initial Cost: Real trees are lower ($50 – $150); artificial trees are higher ($100 – $1,000+). Winner: Real Tree (for single season).
Environmental Impact: Real trees act as a carbon sink and are biodegradable; artificial trees carry production emissions and waste (if discarded too soon). Winner: Tie (depends on usage length).
Making the Final Decision Based on Home Needs
The best choice ultimately depends on your household's specific priorities, safety concerns, and commitment to maintenance.
Best Choice for Families with Pets or Children: Artificial. This option is significantly safer for younger children and pets, as there is no risk of drinking chemically treated tree water and PVC/PE materials are often treated to be fire-retardant. Furthermore, the lack of sharp needles reduces injury risk and eliminates the mess.
Best Choice for Traditionalists/Purists: Real. For those who prioritize the full sensory experience, the real tree is the clear winner. The authentic pine fragrance, the experience of selecting the tree, and the beautifully asymmetrical look cannot be convincingly replicated by synthetic materials.
Best Choice for Small Spaces/Apartments: Artificial. These trees offer superior utility for cramped living areas. They are available in narrow "pencil" or "slim" profiles and often come pre-lit, requiring less setup time and providing easy, compact storage once the season ends.
The Environmental Tie-Breaker: The sustainability debate hinges on one metric: longevity. An artificial tree must be kept and reused for a minimum of 5 to 10 years to fully offset the carbon footprint and resource costs associated with its manufacturing and shipping, making a long-term commitment key to its environmental advantage.
Conclusion
The choice between types of Christmas trees ultimately hinges on two factors: tradition versus convenience. If you prioritize the unmatched festive scent, annual tradition, and natural beauty, the real tree is your classic choice, despite the extra effort of watering and disposal. If your focus is on convenience, consistent appearance, long-term value, and minimal mess, modern artificial trees (especially high-quality PE models) are the clear winner, offering incredible realism without maintenance. The best tree for your home is simply the one that fits your lifestyle and makes your holiday season shine the brightest.
About Herond
Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.
To enhance user control over their digital presence, Herond offers two essential tools:
Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.
As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.
Have any questions or suggestions? Contact us:
On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org
The post Real vs. Artificial: Comparing the Best Types of Christmas Trees for Your Home appeared first on Herond Blog.
CapCut has cemented its place as the go-to video editing application for social media creators worldwide. Whether you’re on the mobile app, desktop software, or the online editor, signing in is essential to access your saved projects and maintain workflow sync. However, running into CapCut login issues – whether it’s a forgotten password, an unknown error, or simply needing a reliable sign-in method – can halt your creativity.
This comprehensive guide provides you with step-by-step instructions on how to sign in to CapCut, troubleshoot common problems, and quickly reset your password, ensuring you get back to editing your videos smoothly and without interruption.
CapCut Login Methods: Step-by-Step Guide
The process of logging into CapCut is fast and simple, regardless of whether you are using the mobile application or the desktop client. Here are the detailed instructions for the most common platforms and sign-in methods.
Logging In via Mobile App (iOS/Android)
Step 1: Open the CapCut App. Launch the CapCut application on your iOS or Android smartphone. Ensure you have the latest version installed to avoid compatibility or authentication issues.
Step 2: Navigate to the Profile Section. Look for the "Me" tab, or a dedicated profile icon (often a silhouette of a person or a bust), typically located in the bottom-right or upper-right corner of the main screen. Tapping this will take you to the user center.
Step 3: Select Your Login Method. CapCut offers several convenient sign-in options. Choose your preferred method from the available buttons: TikTok, Google, Facebook, or the traditional Email/Phone number combination. Choosing a social media option often speeds up the process significantly.
Step 4: Complete the Authentication Process. Depending on your chosen method, you will be redirected to an authentication screen. If using a social media account, you must grant CapCut permission to access your profile information. If using Email/Phone, enter your credentials and follow any on-screen prompts for verification codes or password entry. Once successful, you will be logged in and returned to the main editing interface.
Logging In on CapCut Desktop App (PC/Mac)
Step 1: Launch the CapCut Desktop Application. Open the installed CapCut software on your PC or Mac. Wait for the application to fully load, presenting you with the main welcome or project screen.
Step 2: Locate the Sign-In Prompt. Click the "Log In" or profile button, which is usually found prominently in the top right corner of the application window. This action will open the dedicated login interface.
Step 3: Utilize the Quick QR Code Scan (Recommended). CapCut strongly encourages a speedy mobile-to-desktop login. If you are already signed in on your CapCut mobile app, simply use the app's internal scanner to scan the QR code displayed on your desktop screen. This provides instant, hassle-free authentication.
Step 4: Choose an Associated Account. If you cannot use the QR code, select one of the alternative sign-in options available below the code. These typically include Google, TikTok, or Facebook. Click your chosen provider and follow the external browser window prompts to verify your identity and authorize CapCut.
Logging In to CapCut Online Editor (Web Browser)
Step 1: Navigate to the CapCut Online Website. Open your preferred web browser and go to the official CapCut online editor URL. The web editor offers many of the same features as the desktop app without requiring a download.
Step 2: Initiate the Sign-In Process. Look for the "Sign In" or "Log In" button, usually located in the upper-right corner of the page. Clicking this will bring up the dedicated login dialog box, displaying all available authentication methods.
Step 3: Perform the Account Authentication Steps. Select your preferred login method (such as Google, TikTok, or Facebook). You will be prompted to enter your credentials or confirm your identity through the third-party service. Once verified, the page will refresh, and you will gain access to the web editor's full suite of features and your cloud-synced projects.
How to Reset CapCut Password (When You Forget)
Step 1: Access the Recovery Link. When you reach the sign-in screen on the Mobile, Desktop, or Web app, look for the 'Forgot Password?' link, typically placed beneath the password input field, and click it to begin the recovery sequence.
Step 2: Provide Account Identification. Input the exact email address or mobile phone number associated with your CapCut account. This is the crucial identifier used to link your identity to the password recovery process.
Step 3: Verify with the Security Code. A security verification code will be instantaneously dispatched to the contact method you provided (inbox or SMS). Retrieve this code and accurately enter it into the dedicated field on the password recovery screen within a short time limit.
Step 4: Establish a New Password. Once the code is successfully verified, you will be prompted to create a new, strong password. Confirm the new password by entering it again and click "Reset" or "Confirm" to finalize the change and instantly regain secure access to your account.
Troubleshooting Common CapCut Login Issues
Error: "Account Not Found" or "Invalid Account"
Cause: Authentication Method Mismatch. This error almost always occurs when you initially created your CapCut account using a third-party service (like Sign in with Google or Continue with TikTok) but are now attempting to log in using the standard Email/Password field. Your account is linked to the social provider, not a traditional password, which is why the system cannot find a matching email entry.
Fix: Verify the Original Sign-Up Method. You must verify and select the exact platform button you used when you first registered for CapCut. For instance, if you signed up with Google, click the "Continue with Google" button; if you signed up with TikTok, click "Continue with TikTok." These providers handle the authentication on CapCut's behalf.
Failure to Log In via TikTok/Google/Facebook
Cause: Permission or Connection Error. This failure often stems from an interruption in the communication between CapCut's servers and the third-party API (like TikTok, Google, or Facebook). This includes failure to explicitly grant CapCut the necessary access permissions during the sign-in redirect, or general network time-outs and API throttling issues.
Fix A: Clear Cached Data. Before attempting to sign in again, try clearing your browser's cache and cookies (if using the Web Editor) or clearing the application data/cache (if using the Mobile or Desktop App). Stale or corrupted authentication tokens stored locally can often interfere with the new connection attempt.
Fix B: Check Third-Party App Permissions. Manually navigate to the security or app settings page within your associated account (Google, Facebook, or TikTok). Find the list of apps granted access, and ensure that CapCut is still listed and authorized to use your profile for login. If the connection was revoked or expired, re-authorize it before trying the CapCut login button again.
Fix C: Restart Your Device and Network. A temporary fix for many connection errors is a simple restart of the CapCut application and your device. If the issue persists, also consider briefly cycling your network connection (turning Wi-Fi or cellular data off and back on) to ensure a clean path for the external API requests.
Connection Error or App Glitches
Cause: Network Instability or Outdated Software. These errors typically happen when the CapCut application is unable to maintain a stable connection to its servers (due to a fluctuating internet connection or an overly restrictive VPN), or when the software version is outdated and encountering compatibility issues with the latest server-side updates.
Fix A: Check Your Internet/VPN Connection. Ensure your Wi-Fi or cellular data connection is strong and stable. If you are using a Virtual Private Network (VPN), temporarily disable it and attempt to log in again. Some VPNs may block the necessary ports or IP addresses used by CapCut for authentication.
Fix B: Update the CapCut Application. If your app is not running the latest version, go to your device's app store (Google Play Store or Apple App Store) or the CapCut website/desktop updater. Install any pending updates. Developers frequently push fixes for login-related bugs in new releases.
Fix C: Try Logging In on a Different Platform. If the error is consistently occurring on one platform (e.g., the Desktop application), try logging into the CapCut Online Editor in a web browser or the Mobile app. This can often confirm whether the issue is localized to a specific version of the software or a wider account problem. If you can log in to another platform, you can at least proceed with your work while troubleshooting the initial platform.
Conclusion
Mastering the CapCut login process, whether you're using the mobile app, desktop client, or web editor, is the first step toward creating stunning video content. While signing in via social accounts like TikTok and Google offers speed and convenience, it's crucial to remember your original sign-up method to prevent "Account Not Found" errors. Should you encounter a connection error or app glitch, remember that simple fixes (like clearing your cache, checking VPN settings, or ensuring your app is updated) can quickly restore access. By following these straightforward steps, you can ensure your CapCut login remains quick, secure, and reliable, keeping you focused on your creative projects.
The post CapCut Login Guide: How to Sign In, Fix Issues & Reset Password appeared first on Herond Blog.
TikTok has cemented its place as the premier platform for short-form video. When downloading content - whether for offline viewing or preparing your own work for multi-platform sharing - you inevitably encounter the persistent, watermarked logo. This embedded signature is crucial for creator attribution and protecting intellectual property. For creators seeking to manage their video quality and adapt their content for distribution on other sites, mastering how to handle these files is essential. This guide walks you through the best and most ethical methods and tools for efficiently managing your downloaded TikTok videos with a watermark remover, compatible with PC, Android, and iPhone.
Why Choose an Online Watermark Remover?
Free & Accessible: No cost involved and typically requires no registration. These web-based utilities are generally free to use, offering immediate, no-cost access. Since account creation is rarely required, the process is streamlined and helps maintain your privacy for one-off tasks.
Cross-Platform Compatibility: Works via any web browser, regardless of the operating system. As a browser-based service, the tool functions flawlessly on all major platforms, including Windows, macOS, Android, and iOS. This universal access means you don't need dedicated software for each device.
No Software Installation: Saves storage space and avoids potential security risks from third-party applications. Processing is handled entirely in the cloud, eliminating the need to download large files onto your local machine. This preserves your storage space and minimizes the risk of installing potentially harmful or unwanted third-party executables.
Speed and Efficiency: Processes quickly, often by just pasting the video link. The entire workflow is highly efficient: you simply paste the video's URL, and the remote server handles the removal process. This delivery mechanism results in a very fast turnaround time compared to downloading and editing files manually.
Step-by-Step Guide: Using the Free TikTok Watermark Remover
These are general instructions applicable to all mobile and desktop devices.
Step 1: Copy the TikTok Video Link
Locate the Video: Open the TikTok app and find the specific video you wish to save without the watermark.
Copy the URL: Tap the Share button (typically an arrow icon) and look for the icon or option labeled Copy Link. Tapping this will instantly save the full video URL to your device's clipboard, ready for the next step.
Step 2: Access the Online Remover Tool
Navigate to the Website: Open your preferred web browser and go to the Online Watermark Removal Tool.
Paste the Link: Find the main input box, usually highlighted on the homepage. Tap and hold (or right-click) in the box and select Paste to insert the video link you copied in Step 1.
Step 3: Process and Download the Clean Video
Initiate Processing: Click the clear, prominent button labeled Download or Process next to the input field.
Select the Clean File: After the tool analyzes the link, it will present multiple options. Crucially, select the option explicitly labeled "No Watermark," "MP4 without watermark," or similar to ensure you get the clean file.
Save to Device: Your browser will prompt you to save the file. Confirm the download, and the processed video will be saved directly to your device's gallery or downloads folder.
Platform-Specific Usage & Tips for Using the TikTok Watermark Remover
These optimization tips address unique device types and guide you through common post-download challenges.
For PC and Mac Users
Efficiency Tip: To speed up the process, use keyboard shortcuts for pasting: Ctrl + V (Windows/PC) or Cmd + V (Mac) to quickly insert the copied link into the input box.
Locating Files: After processing, the video file typically lands in your device's default download folder, often simply named "Downloads." Check there first if the file doesn't immediately appear on your desktop.
For Android Users
Seamless Integration: Android often provides the most straightforward experience, as the downloaded video frequently saves directly to the designated Gallery or media storage location without extra steps.
Troubleshooting: If the download fails or seems to disappear, check your browser's storage access permissions. You may need to grant the browser explicit permission to save files to your local device storage.
For iPhone and iOS Users
The Key Challenge (Saving): On iOS (iPhone/iPad), files downloaded from Safari or Chrome do not automatically appear in your Photos app. They are first saved into the Files app. Look for the download progress icon in the browser and tap it to access the file location in the Files app.
Post-Download Tip (Sharing): Once the file is secure in your Files app, you may need an extra step if you want to edit or share it quickly. Open the video in the Files app, tap the Share button, and choose Save Video to move a copy into your Photos library.
Conclusion
By following these simple, platform-specific steps, you have mastered the process of utilizing the Online Watermark Removal Tool. The beauty of this solution lies in its versatility: whether you're working on a powerful Mac, a pocket-sized Android, or an iOS device, the workflow remains quick and efficient. You can now easily download and manage your video content, ensuring a clean, ready-to-share file is always saved exactly where you need it, in your PC Downloads, Android Gallery, or iPhone Photos library.
The post TikTok Watermark Remover Online: Free Tool for PC, Android, and iPhone appeared first on Herond Blog.
The November edition of CryptoCubed includes Coinbase Europe's €21.46 million fine for AML breaches, X hit with a €5 million penalty for unauthorized crypto ads, and South Korea's increased enforcement penalties on crypto exchanges.
The post The CryptoCubed Newsletter: November Edition first appeared on ComplyCube.
27 Nov 2025
Thales consolidates its role as a key defence partner by obtaining two new PERSEUS labels for its innovations in electronic warfare: CURCO (Radar Detector for Drones) and Golden AI (Artificial Intelligence-based analysis tool).
CURCO provides a lightweight, accurate and reliable radar detection capability, adaptable to multiple platforms.
Golden AI accelerates and automates the analysis of R-ESM data, thus reducing the stripping and analysis work experts have to do and improving the perception of the electronic order of battle.
These technologies are designed to meet the current challenges involved in mastering the electromagnetic spectrum, against a background of massive development of drones and the intensification of conflicts.
Ronan Bourbon, Ronan Guillamet, Jean Walter, Fabien Richard, Valentin Cavarec ©Thales
Thales has received two major PERSEUS distinctions, awarded by the French Navy, the Délégation Générale de l'Armement (the French procurement agency) and the Agence de l’Innovation de Défense (Defence Innovation Agency). After winning an award in 2023 for its Sentinel electronic warfare solution, Thales is once again certified for CURCO, a compact electronic warfare payload for drones, as well as for Golden AI, an AI-based intelligent analysis tool for electronic warfare data. These two solutions were officially presented at the 2025 Forum innovation défense.
CURCO is designed as a lightweight and compact external payload, adaptable to any drone and air, sea or land vehicle. It meets the SWAP requirements (size, weight and power). Capable of defining the electromagnetic environment over a wide frequency band, CURCO offers accurate, fast and reliable detection of enemy radar emissions. It transmits advanced tactical information to operators in real time for better anticipation. Its simple interface and compatibility with numerous mission software programmes have been established during experiments in real conditions, notably during two exercises carried out in 2024 and 2025 in partnership with the French Navy. As part of further developments, CURCO will incorporate, as an option, an on-board jammer aimed at disrupting detected enemy radars.
CURCO, a lightweight and compact external payload, adaptable to any drone and air, sea or land vehicle ©Thales
As for Golden AI, it offers a major evolution towards cognitive electronic warfare, thanks to two innovative functions that enable the data collected by radar interceptors (R-ESM) to be analysed up to 4 times faster. First, it enables the training of AI models from the databases of capitalised radar electromagnetic interceptions. It also enables the analysis of electromagnetic interceptions recorded during a mission, in order to identify the names of the corresponding radars and to enrich the database. By standardising and accelerating analysis, this tool significantly reduces operators’ workload, while improving the accuracy of interception reports. During tests conducted on the ground and then on board during the Clemenceau 25 exercise, Golden AI demonstrated its ability to handle the capitalisation base of the Navy. It also facilitated the debriefings of operators on board, accelerated analysis on the ground, and improved the perception of the tactical situation, while guaranteeing data sovereignty thanks to reliable and scalable AI.
Created in 2003, the PERSEUS prize aims to accelerate the integration of key technologies by bringing together sailors, DGA engineers and industrial players, in order to quickly equip the armed forces. Thus, as the mastery of the electromagnetic spectrum becomes critical, in an increasingly dense and complex operational environment, CURCO and Golden AI meet essential needs, both for the Navy and for an export market.
With more than 60 years of expertise in electronic warfare, Thales is a key partner for French and European defence, and confirms its ability to support forces in detecting and neutralising radar threats, regardless of the environment.
“We are proud to once again receive this PERSEUS label for our advances in electronic warfare. It is the recognition of our innovation strategy, and our willingness to quickly develop concrete solutions, whose value is demonstrated through operational exercises conducted with and for the armed forces," said Marie Gayrel, Intelligence, Surveillance and Reconnaissance Vice President at Thales.
About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.
The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.
Thales wins two PERSEUS awards for its innovations in electronic warfare and artificial intelligence: https://thales-group.prezly.com/thales-wins-two-perseus-awards-for-its-innovations-in-electronic-warfare-and-artificial-intelligence
It is not that hard to be 45 and make good money.
What is hard is being 45, making good money, and also having a healthy family, a healthy body, a healthy mind, real friendships, a good romantic relationship, and a set of values you actually live by.
Most of us secretly know this. Still, we spend an absurd amount of time optimizing our businesses and almost no structured time investing in the one thing that sits underneath everything else. Our mind.
This is the story of how I learned to fix that. For myself first. And eventually, for others.
The first advisor that changed everything
In my early twenties I hit a wall.
I had always been the type who pushed through with effort. More hours, more intensity, more “I will just figure it out”. On paper I was doing well. Inside, I was not.
At some point depression crept in. I resisted talking to a psychologist for a long time. It felt like something “other people” did. Eventually I gave in, sat down, and started talking.
In one of those early sessions, I spent twenty minutes explaining why I couldn’t fire someone on my team. I had reasons, justifications, contingencies. He listened, then asked: “What would you tell a friend in this exact situation?” I answered in ten seconds. The fog lifted.
That moment was embarrassing and liberating at the same time.
Embarrassing, because I realised how much energy I had been wasting trying to solve everything alone. Liberating, because I got a taste of what it felt like to have someone in my corner whose only job was to help me think. I walked out of that session knowing exactly what to do on that messy project, which actions to take, and I could finally move on to the next problem instead of spinning on this one.
It was the first time I really understood this simple thing:
My mind is not an infinite free resource. If I treat it like one, I pay for it later.
That first advisor encounter planted a seed that would grow into something much bigger.
Building a real life personal advisory board
Over the next years I did something that, at the time, did not feel very strategic. I just kept adding people around me who could help me think better.
A psychologist for my inner life and relationships.
A health partner to run with, reflect with, and keep my body and energy in a good place.
A leadership coach to help me give feedback, grow teams, and grow up as a leader.
Later, a personal legal advisor to help protect the downside, and a personal financial advisor to help me think clearly about risk, security, and long-term bets.
Mentors. Fellow entrepreneurs. Close friends. People further ahead on the path. Over time, almost every important part of my life had at least one person I could lean on.
For context, I have been an entrepreneur for about twenty years, and my natural state is too many ideas, too much switching, and very little built-in handbrake.
I did not have the luxury of reinventing the wheel every time.
So this personal advisory board became my invisible infrastructure.
If I was stuck on a leadership problem, I would call the leadership coach and use their brain to sharpen my own. Sometimes I would even pull them into mentoring younger founders I had invested in, so they could grow their feedback muscles faster than I did.
If I was facing questions about family, difficult emotions, how to structure my life, or who I actually wanted to be, I would sit in my psychologist’s office and talk it through with him. He helped me sort things, see what really mattered, and understand the trade offs I was making.
Instead of being this heroic solo decision maker, I became the person who asks for help early. And that changed everything.
I stopped wasting cycles reinventing basic mental tools. I started reusing playbooks. I made better calls with less drama. Over time that translated into real outcomes:
A stable household. A strong relationship. Kids who are (mostly) okay. Healthy businesses. Good friendships. A mind that is still intense, but more anchored. A body that can keep up. A financial life that doesn’t feel like a constant emergency. And even some actual space in my life for rest and simple fun that isn’t just collapsing between crises.
The hidden lesson: your mind is the real compounding asset
Looking back, the pattern is simple.
Each advisor gave me mental models. Language. Ways of seeing. Those models did not disappear after one session, they stayed with me. They compounded.
We talk a lot about compounding capital as founders. We do not talk enough about compounding thinking.
When you invest in your own mind, everything else gets a little bit easier:
Decisions cost less energy
Conflicts become easier to navigate
You recover faster from setbacks
You can hold more complexity without burning out
Meanwhile, a lot of the founder culture is still obsessively focused on one narrow form of success. Revenue. Valuation. Headcount.
Making money by 45 is not actually the hard part.
The hard part is doing that and still liking who you are, still having a partner who trusts you, kids who feel seen, friends you did not sacrifice on the altar of “one more round”, and a body and mind that are not completely wrecked.
There is a version of this journey that I once described as a kind of ‘Faustian bargain’ - or ‘Deal with the Devil’ - for other entrepreneurs.
A lot of founders I’ve met – including myself – started out running on external fuel: money, status, approval, the need to prove something to someone. That “schoolyard strategy” works incredibly well in your early twenties. It gets you out of bad circumstances, helps you survive, sometimes even helps you win.
But eventually you have to pay the devil his due.
You’re no longer 22 with nothing to lose. You’re 35 or 45. You might have a partner, maybe kids, a team who depends on you. And the same strategy that helped you survive the schoolyard starts quietly wrecking your life.
The price is often a ruined relationship, burnout, depression, anxiety, or a constant feeling that whatever you build is never enough.
I’m one of the people who made that deal early on. It took me four to five hard years to shift from that external, proving mindset into something closer to inner motivation and self-compassion. To stop chasing only the scoreboard and start investing seriously in my own mind, values, and relationships. I’m probably still paying off some of the interest.
Building a life where you have money and also a healthy family, friendships, a body and a mind you can live with is not an accident. It is the result of investing in your inner infrastructure as seriously as you invest in your company’s infrastructure.
For me, that is the real root of Kin. It is not about productivity tricks. It is about building a system that helps your mind compound.
Who gets your secrets?
While I was building this advisory board in my physical life, a parallel question started nagging at me. One that would shape how we're building Kin.
I have always been drawn to questions of autonomy and responsibility. In the physical world we take certain things for granted. You can own a home. You can own a piece of land. Countries have borders. Those borders create sovereignty. There are basic property rights that say “this is mine, you cannot just walk in and take it.”
I started to wonder why we do not think the same way about our digital lives.
Around the same time I was reading prominent libertarian thinkers, and watching the early crypto movement obsess over the same idea: people should be able to own their assets directly, instead of trusting large institutions.
It became clear to me that the same question would show up in our inner digital lives too.
Who owns your thoughts? Who owns your patterns? Who owns the very detailed psychological map that gets created when you pour your inner life into a tool?
We have spent a decade watching big platforms treat personal information like an extractive resource. It is very efficient. It also, in my view, feels fundamentally wrong on a human level.
So when large language models arrived and I started imagining what a truly personal AI could be, the trust question came up immediately.
If we’re going to build something that sits with you in your most vulnerable moments, that hears your doubts about your partner, your investors, your kids, your fears, your failures, then this cannot be “just another cloud product”.
It has to be built on the exact opposite logic.
Your data is yours. Stored locally by default. Encrypted. Portable. Editable. You are not the product. You are the owner.
That is not a marketing angle for me. It is the only way I can look myself in the mirror and still be proud of shipping this.
Why the existing tools were not enough
There are already great tools out there.
Therapists, coaches, group programs. I will always recommend them. They changed my life.
There are also great AI tools. AI Chatbots that can write copy, summarise documents, debug code, help you brainstorm. I use some of them every day.
But when I looked at what I actually wanted for myself as a founder and as a human, there was still a big gap.
I could not find anything that:
Felt like a long term, emotionally invested thinking partner, not just a transactional Q and A engine
Helped me connect dots across my life, not just across one project
Respected my desire for privacy and self sovereignty instead of hoovering up my inner life into someone else’s black box
I did not want a productivity machine. I wanted something that felt closer to that personal board of advisors I had built in the real world.
A place where different “voices” could help me see the same situation from different angles. Career. Relationships. Health. Values. And where all of that was grounded in knowledge about who I actually am and what I care about.
What Kin is today - honest 0.9 mode
So we started building Kin.
Not as some abstract AGI fantasy, but as a very practical, personal system that sits next to you in your real life.
Today, Kin is not a polished version 1. It is closer to 0.9. Rough edges included. Here is what it already does well.
If you have used tools like ChatGPT, the difference you feel with Kin is that it does not just answer a prompt and move on. It remembers your world and thinks with you over time. It feels closer to a small advisory board than a single AI chatbot.
When I journal or debrief a day, it doesn’t just disappear into a chat log. I can talk to an advisor inside Kin about the entry and reflect on it, and Kin turns those moments into threads I can come back to later, so board meetings, product decisions, health issues and family situations build on top of each other instead of starting from zero each time.
Last Tuesday I woke up at 5:25, did my usual yoga, and then sat down with too many things in my head: investor follow-ups, a product decision that had been stuck for a week, and guilt about working all evening the day before.
I opened Kin. Pulse pulled in my low recovery score I shared from WHOOP, the back to back meetings directly from my Calendar, and the 2-hour padel session I had booked that evening and basically said: “This is not a hero day. Move one thing, lower the strain, and protect your evening.” It sounds small, but for me that is the difference between stumbling through the day and actually having something left for my family at night.
That’s the difference. Not productivity tricks. Thinking partners who know my patterns and help me catch myself.
Underneath all of this is a privacy first architecture. My data is stored locally by default, encrypted and under my control. My inner life is something to be protected, not mined.
Is it perfect? No.
Does it already feel, to me, like a meaningful extension of that advisory board idea into software? Yes.
Money, valuations and exits are fine. I still care about those too. But I no longer believe that is the real scoreboard.
The real scoreboard is whether you can build something meaningful in the world without losing yourself or the people you love in the process.
Kin is what I'm building to make that a little bit easier. A private board of advisors that compounds with you over time.
- Yngvi
If this resonates, you can find Kin here, and follow me on LinkedIn
The cryptocurrency ecosystem continues to grow across hundreds of blockchains and thousands of digital assets in 2025, and cybercriminals are exploiting this complexity with sophisticated laundering techniques. Chief among them, chain-hopping has emerged as the defining money laundering method of modern crypto crime.
There is a growing problem lurking in your identity infrastructure—one that doesn’t trigger alerts, isn’t flagged by vulnerability scanners, and yet quietly compounds security vulnerabilities: IAM technical debt.
Why IAM Technical Debt is a Growing Risk
It is not just a side effect of legacy systems anymore. It is a direct result of the growing gap between rapid digital transformation and the brittle, aging identity plumbing beneath it. According to the 2025 Verizon Data Breach Investigations Report, stolen credentials were involved in 88% of web application attacks — reinforcing identity as the top threat vector. But now, Gartner adds another critical lens.
In their June 2025 research GTP report, Reduce IAM Technical Debt1, Gartner® analysts Nat Krishnan and Erik Wahlstrom warn that “technical debt weakens the agility of an IAM team and the effectiveness of organizational security controls.” In our opinion, their findings highlight the same five culprits we see in the field every day: siloed tools, outdated integrations, incomplete identity discovery, poor IAM hygiene and inconsistent application onboarding.
When identity becomes fragmented, so does control—and without control, it defeats the very purpose of why we do IAM in the first place.
What Is IAM Technical Debt, Really?
Think of IAM technical debt as the accumulated cost of shortcuts: ad-hoc integrations and workarounds, siloed tools, rushed deployments and postponed cleanup. It forms slowly, but the result is predictable. When left unchecked, it creates operational drag, governance blind spots, an increased threat surface and catastrophic risk exposure.
Common Causes of IAM Technical Debt: Here's What Drives It
Custom and siloed IAM tools that don't communicate
Legacy and nonstandard apps still critical to operations but incompatible with modern identity governance
Incomplete discovery of identities and entitlements
Weak hygiene around least-privilege, access reviews and MFA
Fragmented onboarding of apps and services into IAM systems
When identity becomes fragmented, so does control. And in today's cloud-first, hybrid-everything reality, that is both inefficient and dangerous.
From Sprawl to Strategy: Reclaiming Identity Control
Four Practical Steps to Rebuild IAM on a Stronger Foundation
Fixing IAM technical debt isn't about ripping and replacing — it's about rethinking identity as a data problem and solving it with the right architecture.
Based on both industry research and hands-on field experience, the path forward includes four critical steps:
Identify your silos: Map out identity sources — across AD forests, cloud apps, legacy tools, shadow IT — and expose where the cracks begin
Consolidate and virtualize: Aggregate fragmented data into a unified identity data lake. Use abstraction to simplify integration and reduce your connector footprint
Control identity sprawl: Build bridges, not walls — stitch together disparate identity records without replacing systems and bring order to the chaos
Orchestrate across the mess: Govern consistently across central and distributed environments, enabling context-rich enforcement no matter where access decisions happen
How Radiant Logic Eliminates IAM Technical Debt
Radiant Logic's platform RadiantOne was built to solve this problem and to unify, enrich and activate identity data.
RadiantOne virtualizes all identity sources into a single semantic layer—whether they come from AD, LDAP, Azure AD, Okta, SaaS applications or custom databases. It then brings real-time observability to the identity layer, enabling you to spot risky access patterns, automate entitlement cleanup and surface context-rich insights to stakeholders before an auditor or attacker finds the gap to exploit.
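To make the "consolidate and virtualize" idea concrete, here is a minimal, vendor-neutral sketch of correlating identity records from several silos into unified profiles. It is not RadiantOne's API; the source names, record fields, and the choice of email as the correlation key are illustrative assumptions.

```python
# Conceptual sketch of identity aggregation/correlation.
# Not RadiantOne's API: source names, record fields, and the use of
# email as the correlation key are illustrative assumptions only.
from collections import defaultdict

# Hypothetical extracts from three identity silos
ad_records = [
    {"source": "ad", "sAMAccountName": "jdoe", "mail": "jdoe@example.com", "groups": ["VPN-Users"]},
]
hr_records = [
    {"source": "hr", "employee_id": "E1001", "email": "jdoe@example.com", "status": "active"},
]
saas_records = [
    {"source": "okta", "login": "jdoe@example.com", "apps": ["CRM", "Wiki"]},
]

def correlation_key(record: dict):
    """Pick whichever attribute this source uses for the person's email."""
    return record.get("mail") or record.get("email") or record.get("login")

def build_unified_profiles(*sources):
    """Merge per-source records into one profile per correlated identity."""
    profiles = defaultdict(lambda: {"sources": [], "attributes": {}})
    for source in sources:
        for record in source:
            key = correlation_key(record)
            if key is None:
                continue  # uncorrelated records would need manual review
            profile = profiles[key.lower()]
            profile["sources"].append(record["source"])
            profile["attributes"].update(record)
    return dict(profiles)

if __name__ == "__main__":
    unified = build_unified_profiles(ad_records, hr_records, saas_records)
    for key, profile in unified.items():
        print(key, "->", sorted(set(profile["sources"])))
    # jdoe@example.com -> ['ad', 'hr', 'okta']
```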
With RadiantOne:
You turn fragmented identity data into a governable, observable asset
You gain line-of-sight across humans, machines, and APIs
You eliminate the root causes of IAM project failures and identity-related incidents
Final Thought: Identity Debt is Not Just IT's Problem
IAM technical debt isn't just a nuisance — it's a strategic liability. It stalls digital transformation and cloud projects, burdens compliance and weakens your security posture. But with the right foundation, it can be reversed.
Ready to act? Schedule a demo of RadiantOne and start reducing your identity debt today.
1: Gartner, Reduce IAM Technical Debt, ID G00798396, June 23, 2025, by Nat Krishnan and Erik Wahlstrom. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
The post How to Reverse IAM Technical Debt & Stop a Security Crisis appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
Open standards are the backbone of any trustworthy digital identity ecosystem. When states choose protocols that are transparent, globally supported, and freely implementable, they create systems that protect privacy by design, support long-term interoperability, and mitigate the risks associated with proprietary or restricted technologies. Open standards aren’t just a technical choice, they are a policy commitment to resident rights, market competition, and sustainable public infrastructure.
In this post, we explore how open standards strengthen digital identity systems by enforcing privacy, supporting competition, and ensuring long-term interoperability without relying on proprietary technology.
Open Standards as a Public Interest Imperative
We believe that states should require, or at least express a strong preference, that the technical standards used in a state digital identity program be open, freely available, and implementable by the public and private sectors without proprietary licensing restrictions. Open standards are critical to ensuring transparency, interoperability, and long-term sustainability.
They allow agencies, citizens, and vendors to independently verify that the technology operates as intended, building public trust in a system where privacy and unlinkability must be enforced by protocol rather than policy alone.
Freely available standards also support interoperability across jurisdictions and with federal programs, and they make it easier for smaller vendors to meet the requirements. If state digital identity credentials rely upon open protocols, holders are far more likely to be able to use their credentials beyond state borders, verifiers can more readily adopt the technology without prohibitive costs or licensing fees, and a competitive market of solution providers keeps states from being locked in to a single vendor or proprietary solution.
Aligning Standards With Statutory Principles
Standards should be evaluated not only for accessibility but also for alignment with the statutory principles in state code, including individual control, technological compliance, and data minimization. We believe a state’s role should be to publish a clear state digital identity profile that specifies which open standards are suitable and how they must be configured to enforce privacy-preserving features.
To maintain relevance and adaptability, states should also establish a governance process for updating the state digital identity profile. This process should include structured input from public agencies, private vendors, civil society, and technical experts, and should ensure that updates are guided by statutory principles, technology maturity, and real-world market adoption.
This approach ensures that the state benefits from the innovation and global adoption that open standards enable while maintaining control over the protections that are uniquely important to residents.
Open Protocols for Transparency and Trust
States should rely on open protocols to the greatest extent feasible in a state digital identity program. Adoption will be far faster if states align with protocols already in use because verifiers and private-sector partners (in the United States and globally) are already beginning to adopt those standards. Adoption is ultimately driven by how many verifiers are able and willing to accept a credential, and using open standards that already have traction will help de-risk the program by ensuring compatibility with the broadest set of verifiers.
Open protocols also ensure that the methods of issuance, holding, and verification can be independently reviewed, tested, and implemented by both public and private actors. This reduces the risk of vendor lock-in, enhances interoperability, and strengthens transparency by demonstrating that features such as unlinkability and minimal disclosure are built directly into the technology.
Balancing Open Source With Security and Maturity
Open source can increase accountability and public confidence by allowing security researchers, civil society organizations, and other stakeholders to inspect implementations for compliance with relevant standards and codes. It can also make it easier for small firms offering differentiated solutions for specific use cases to enter the market while still meeting state digital identity credential requirements.
However, not all open source projects are developed or maintained at the same level of maturity. If states were to release or ordain immature open source implementations as “official,” they could create new risks through rushed or under-resourced deployments containing security flaws or bugs.
To mitigate this risk, states should require that any open source software used in the state digital identity ecosystem demonstrate active community support, regular updates, and independent security audits. States should hold software they release officially to even higher standards, so that the state does not “pick technology winners” and create adverse incentives in competitive markets meant to deliver the best solutions at the lowest costs. Where projects do not meet these maturity thresholds for open source software, states should consider certification requirements or mandate third-party audits to ensure confidence. This balance allows states to demonstrate transparency while still protecting sensitive operational components. Even when open source software is audited, the end-to-end systems that incorporate it as a component still need their own audits, because security models are extremely sensitive to the context in which they are used.
Finally, states should focus on open standards and projects with well-established, actively supported ecosystems that are able to reach long-term sustainability. Technical standards with broad governance and multiple implementers encourage innovation, ensure interoperability across jurisdictions, and reduce the risk of states relying on under-supported technologies. By combining reliance on open protocols, balanced use of open source, and rigorous independent audits, states can build a program that maximizes adoption, transparency, and security in equal measure.
Interoperability Through Open, Multi-Format Standards
To ensure that state digital identity credentials can be accepted in other jurisdictions, states should ensure that the state digital identity framework is compatible with data formats and protocols that have received large investments and are broadly accepted, to the extent they are compatible with state digital identity principles. Credential formats such as W3C Verifiable Credentials, mdocs as specified in ISO/IEC 18013-5/-7 for mobile driver’s licenses, the forthcoming ISO/IEC 23220 series, which describes mdocs on a standalone basis, and SD-JWTs from the IETF have received significant investment and are seeing growing adoption in the United States and abroad.
By ensuring that state digital identity credentials are compatible with these formats without compromising on their principles, states maximize the likelihood that other states, federal agencies, and private-sector entities will be able to validate and trust state-issued credentials without requiring custom integrations. A best practice emerging in the field is multi-format issuance, where credentials are simultaneously provisioned in several formats, such as ISO mDL (ISO/IEC 18013-5/-7), IETF SD-JWT, and W3C VC. This ensures acceptance in federal and regulated contexts that require ISO compliance, such as TSA checkpoints, while still supporting broader digital use cases like online eligibility verification and cross-border interoperability.
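To make multi-format issuance concrete, here is a minimal sketch that builds one claim set and renders it as a W3C Verifiable Credential-shaped JSON payload; encoding the same claims as an IETF SD-JWT or ISO mDL would be separate serialization and signing steps, and the issuer DID, credential type, and claim names below are hypothetical.

```python
# Sketch: one claim set, multiple credential formats. Only the W3C VC-shaped
# JSON is shown; SD-JWT and ISO mDL encodings would be separate steps.
import json
from datetime import datetime, timezone

def build_w3c_vc(issuer_did: str, subject_did: str, claims: dict) -> dict:
    """Return a W3C Verifiable Credential-shaped payload (unsigned sketch)."""
    return {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "StateIDCredential"],  # hypothetical type
        "issuer": issuer_did,
        "issuanceDate": datetime.now(timezone.utc).isoformat(),
        "credentialSubject": {"id": subject_did, **claims},
        # A real credential would also carry a cryptographic proof section.
    }

claims = {"givenName": "Ada", "familyName": "Lovelace", "ageOver18": True}
vc = build_w3c_vc("did:example:state-dmv", "did:example:holder-123", claims)
print(json.dumps(vc, indent=2))
```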
It is important to note that adoption in other jurisdictions is not automatic simply because a credential is standards-compliant. Verifiers in other states and federal agencies will only integrate technologies that align with the ecosystems they are already adopting. If a state does not support major protocols, its program could face significant delays in recognition and limited utility outside the state. By taking a framework approach to state digital identity that creates protections in a way compatible with today’s dominant standards, states reduce their adoption risk and accelerate acceptance across both government and private-sector contexts, providing usable, efficient, and cost-effective solutions for their residents.
Building Trustworthy Identity Infrastructure for the Long Term
Open standards provide states with the strongest foundation for trustworthy digital identity. They safeguard privacy through protocol-level protections, ensure systems remain interoperable across agencies and jurisdictions, and create a healthy marketplace where multiple vendors can compete. Openly governed standards aren’t just a technical preference; they are the only way to build digital identity infrastructure that endures, evolves, and earns public trust.
If your organization is exploring how to build digital identity systems grounded in open, privacy-preserving standards, SpruceID can help translate policy requirements and open standards into secure, interoperable implementations.
Contact Us
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.
26 Nov 2025 – Reporting to Pascal Bouchiat, Senior Executive Vice President, Chief Financial Officer, Louis Igonet will be responsible for financial communication, including relations with investors and financial analysts. He will succeed Alexandra Boucheron and will join Thales on 5 January 2026.
A graduate of French business school EDHEC, Louis Igonet is currently Head of Investor Relations, Strategy and Development at Tikehau Capital.
Louis brings more than 20 years of experience in investor relations and corporate finance. He began his career in 2004 at Bouygues within the Group’s investor relations team, then at TF1, in the financial controlling team, before becoming Deputy Head of Investor Relations. He subsequently served as Head of Financial Communication at Edenred between 2015 and 2017, and then at Carrefour in 2017. He joined Tikehau Capital in 2018, shortly after the IPO, to establish its investor relations function, and later expanded his responsibilities to strategy and development.
Thales appoints Louis Igonet as Vice President, Head of Investor Relations
By Helen Garneau
Tokyo, Japan – November 26, 2025 — Indicio, a global leader in decentralized identity and Verifiable Credentials, announced today that it has been accepted into the Japan External Trade Organization’s (JETRO) J-Bridge program, marking another milestone in the company’s expanding presence in Japan.
The J-Bridge Program, supported by Japan’s Ministry of Economy, Trade, and Industry (METI), is designed to help innovative international companies establish and scale their presence in Japan. It will provide Indicio with access to JETRO’s comprehensive business support infrastructure.
Indicio is already helping Japan meet the growing market demand for secure, interoperable, and user-controlled identity verification through its commercial partnerships with NEC, Dai Nippon Printing (DNP), Digital Knowledge Co., and Nippon RAD to provide Indicio Proven® to a wide range of sectors.
Indicio Proven is a decentralized identity platform for creating reusable and interoperable government-grade digital identity with globally-interoperable Verifiable Credentials and digital wallets.
Customers across travel and hospitality, banking and finance, and public services are using Indicio Proven to eliminate manual verification, improve efficiency, and protect users from surging identity fraud, including social engineering scams and deepfakes.
What makes Indicio Proven uniquely powerful is that it lets customers create digital identity credentials that combine authenticated biometrics and document validation and present them anywhere for instant, seamless authentication: Proven can validate over 15,000 identity documents from 254 countries and territories, combine them with face mapping and liveness checks, and turn that verified information into portable Verifiable Credentials in any major credential format.
Credentials created with Indicio Proven follow global standards, including the European Union’s Digital Identity framework (EUDI) and digital wallet guidelines. Proven also makes it easy to combine different credential types in a single, seamless workflow managed from a single digital wallet.
“We’ve had tremendous interest from leading Japanese companies in our technology and this has led to dynamic and exciting partnerships and collaboration,” said Heather Dahl, CEO of Indicio. “So we are both honored to be accepted into JETRO’s J-Bridge program and eager to expand our presence in Japan. We have much to learn from Japan’s rich tradition in technology innovation and vibrant, world-leading companies and industries — and we believe we have much to contribute to creating a new era of digital transformation based around decentralized identity, portable, cryptographic trust, and data privacy.”
About Indicio
Indicio is the global leader in decentralized identity and Verifiable Credential technology for seamless, secure, and privacy-preserving authentication across any system or network. With Indicio Proven, customers are able to combine document validation and authenticated biometrics to create reusable, government-grade digital identity in minutes for instant verification anywhere. Proven gives customers the widest range of credential formats and protocols with full interoperability, a digital wallet compatible with EUDI and global standards, and a mobile SDK for adding credentials to Android and iOS apps. With our technology deployed across the globe, from border control to financial KYC, and with expansion into identity for IoT and AI, Indicio solutions are eliminating fraud, reducing costs, and improving user experiences. Headquartered in Seattle, Indicio operates globally with customers and partners across North America, Europe, Asia, and the Pacific.
The post Indicio expands Japanese market presence with membership of JETRO J-Bridge Program appeared first on Indicio.
There was a time when “Web3” sounded like something far away — a digital universe only a few could touch. But today, it’s right here with us in Nigeria. From creators to developers, everyone’s trying to understand how this new internet of trust and decentralization fits into our everyday lives.
For us at Ontology Network Nigeria, that’s where the magic happens.
We’re not just a blockchain community; we’re a movement of young Africans exploring how identity, data ownership, and decentralized trust can shape our digital future.
The Rebirth of a Community
When Ontology Nigeria first started, it felt like lighting a small candle in a dark room. Web3 was new, confusing, and filled with jargon. But slowly, our community began to grow — one conversation, one event, one campaign at a time.
From educating students about decentralized identity to running fun challenges (like our recent Halloween contest 🎃), we’ve built something more than just engagement. We’ve built belonging.
People started realizing that Ontology wasn’t just another blockchain — it was a platform that put people first, giving everyone control over their data and digital reputation.
Why Ontology Matters to Us
In a world drowning in data leaks and privacy concerns, Ontology Network stands for digital freedom. Its mission? To give every user a self-sovereign identity (ONT ID) — meaning you own your data, decide who sees it, and still stay connected to global systems.
Imagine a Nigeria where your education records, business credentials, or creative portfolio live on the blockchain — verifiable, secure, and fully under your control. That’s the kind of future Ontology inspires us to build.
And it’s not just theory — tools like the ONTO Wallet make it real. Every time someone in our community creates their ONT ID, it’s not just a technical act. It’s a declaration of digital independence.
The Pulse of Ontology Nigeria 💙
Our community has grown into a space where ideas meet action. We’ve hosted workshops, creative contests, and tech conversations that help demystify blockchain for the average Nigerian youth.
Some came to learn, some came to build — but all left with a new understanding: Web3 isn’t just for coders, it’s for dreamers too.
We’ve seen poets, designers, and content creators explore how Ontology tools can support creativity, ownership, and visibility in the decentralized web. That’s the kind of energy that keeps us going.
Looking Ahead
We know the journey isn’t easy — education gaps, internet challenges, and sometimes, skepticism. But we also know that innovation always starts with curiosity, and Ontology is helping us keep that flame alive.
The next phase for Ontology Network Nigeria is about impact and inclusion — bringing even more local voices into the Web3 space, showing that Africa’s creativity deserves not just recognition but ownership.
Because in the end, this isn’t just about technology.
It’s about trust, empowerment, and identity.
Final Thoughts
Every time I meet someone new in the community and they say, “I created my ONT ID today”, it reminds me why we started — to make sure Africans aren’t just consumers of the next internet, but active architects of it.
So here’s my call to everyone reading this:
Join the movement. Learn, build, and share. Let’s redefine what ownership means — together, through Ontology.
Welcome to the new era of digital trust.
Welcome to Ontology Network Nigeria 🇳🇬💙
Ontology Network Nigeria: Building Trust, Identity, and Community in Web3 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
Eight years ago, Ontology set out with a simple but powerful vision: to build a decentralized identity and data ecosystem that people could trust. Today, as we celebrate our 8th anniversary, we’re honored to reflect on a journey shaped not just by innovation, but by a global community that has grown with us, supported us, challenged us, and helped define what Ontology has become.
Our anniversary theme, “Celebrating 8 Years of Trust. Unlocking ∞ Possibilities”, represents exactly that. Trust has always been our foundation. Possibility has always been our future. And between them lies our community, the bridge that made eight years possible.
This milestone is more than a celebration. It’s an invitation.
An invitation to look back… and to build forward.
Your Story, Our Story: Announcing “Ontology Life — Ontology & Me”
To mark our 8th anniversary, we’re launching a special community campaign:
🌀 Ontology Life — Ontology & Me Story Sharing Contest
Over the years, Ontology hasn’t just been a protocol; it’s been part of people’s journeys as builders, developers, creators, partners, and community members. We want to hear those stories.
📅 Timeline
💰 Prize Pool: $1,000 in ONG
🥉 $200 ONG
🥈 $300 ONG
🥇 $500 ONG
Winners will be chosen based on:
- Story quality
- Engagement and votes on social media
- Final selection by the Ontology team
📣 How to Join
- Look for the official anniversary post on X.
- Reply under the tweet with your Ontology story, your first interaction, biggest milestone, favorite moment, or how Ontology has shaped your Web3 journey.
- Share it, hype it, tag friends, and celebrate with us.
Whether you’ve been here since the beginning or you joined last week, your story matters. Your voice is part of our history.
A Look Back: 8 Years of Building Trust
Over the past eight years, Ontology has powered secure decentralized identity, enterprise adoption, cross-chain integrations, and real-world solutions. We led the charge on DID long before it became a Web3 trend. We consistently pushed for a safer, more private, more human digital world.
From ONT ID to ONTO Wallet, from enterprise partnerships to community-led initiatives, this journey has been built together, step by step, block by block.
And the future?
The future is even more exciting.
Unlocking ∞ Possibilities: What’s Next for Ontology
We’ve been making great progress in implementing our 2025 Roadmap, as our 8th year shapes up to be an important foundation in creating a future with infinite possibilities. This next chapter is designed around one mission: empowering people with tools that make Web3 more open, more intelligent, and more connected than ever before.
Below is a deeper look at the innovations coming to the Ontology ecosystem.
🔒 Ontology IM: Decentralized, Private Instant Messaging
Our biggest launch of the year is also one of our most ambitious.
Ontology IM is a decentralized, identity-verified messaging protocol designed for the next era of Web3 communication. Unlike traditional messaging apps that rely on centralized servers, user profiling, and opaque data practices, Ontology IM offers:
- Full end-to-end encryption with cryptographic identity verification
- True censorship resistance — no middleman controls your data
- DID-anchored messaging, ensuring that every conversation is real, verifiable, and secure
This protocol is built not just as another messaging app, but as an infrastructure layer for decentralized communication, enabling:
- dApps to embed secure messaging instantly
- Communities to coordinate without fear of censorship
- Users to enjoy seamless, private conversations across chains
After months of development and testing, Ontology IM is nearing its public debut, and the first hands-on experience will be in your grasp very soon.
🤖 AI Marketplaces
AI and blockchain are converging faster than ever, and Ontology is positioned at the center of that evolution.
Our upcoming AI Marketplace will introduce a new layer of intelligence to the Web3 experience by enabling creators, developers, and users to:
- Deploy AI agents tailored for wallet management, data insights, identity verification, transaction support, and more
- Access AI-driven services that plug directly into the Ontology ecosystem
- Build and monetize AI tools in an open marketplace powered by decentralized identity
- Combine ONT ID + AI for secure, privacy-protected personalization
For users, this opens the door to a Web3 that is more intuitive and more helpful:
- Your wallet becomes your assistant
- Your AI agents understand your on-chain activity while keeping your data private
- Everyday tasks become smoother, more automated, and more efficient
For developers, it’s an opportunity to launch intelligent tools into a global marketplace built on real identity and trust.
💧 DEX Integration & Liquidity Expansion
Liquidity is the lifeblood of any blockchain ecosystem, and 2025 will mark a major strengthening of Ontology’s market foundations.
Following the latest community vote and the upcoming MainNet Upgrade, the stage is set for:
- Improved liquidity for both ONT and ONG
- Enhanced token utility across the Ontology EVM
- A more accessible and connected trading environment for users and partners
Alongside these improvements, we are finalizing the requirements to bring a DEX to the Ontology EVM, enabling:
- Native swaps
- Yield and liquidity opportunities
- More robust token flows across the ecosystem
- Easier onboarding for developers building decentralized applications
This upgrade ensures Ontology remains competitive, flexible, and attractive to new builders in a rapidly evolving multi-chain world.
Thank You for 8 Years of Trust
Eight years in Web3 is more than a milestone; it’s a testament to resilience, innovation, and community. Through bull runs, bear markets, technological shifts, and global changes, Ontology has remained committed to building infrastructure that empowers people, not platforms.
To everyone who believed in Ontology, who built with us, who shared feedback, who held discussions, who contributed code, who created content, who participated in governance, who simply showed up —
Thank you.
Our story continues.
And now, we want to hear yours.
Share your story. Celebrate our journey. Shape what comes next.
Happy 8th Anniversary, Ontology!
8 Years of Trust: Your Ontology Story Begins Here was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
In the second release of 2025 we have improved HTTP Security Headers for both SSO and CustomerID. While many customers will already have these set at the proxy level, the ability to control security headers within each application’s deployment may also benefit your environment. These HTTP Security Headers are off by default in this release, giving you time to become aware of them and test them in your environment. In our next release, in spring 2026, they will be on by default, though you will still have the ability to turn them off in your environment.
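As a generic illustration of what application-level security headers look like (not Ubisecure’s configuration or header set), a minimal WSGI middleware sketch with illustrative default values might look like this:

```python
# Generic sketch of adding common HTTP security headers via WSGI middleware.
# Header names and values are illustrative defaults, not Ubisecure's settings.
SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Content-Security-Policy": "default-src 'self'",
    "Referrer-Policy": "no-referrer",
}

class SecurityHeadersMiddleware:
    def __init__(self, app, enabled=False):
        # Mirrors a "default off" rollout: headers are only added when enabled.
        self.app = app
        self.enabled = enabled

    def __call__(self, environ, start_response):
        def patched_start_response(status, headers, exc_info=None):
            if self.enabled:
                headers = headers + list(SECURITY_HEADERS.items())
            return start_response(status, headers, exc_info)
        return self.app(environ, patched_start_response)
```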
A core feature has been added to SSO: you can now manage Refresh Tokens. A new ability lets you apply policies to Refresh Tokens; these can be set against existing Refresh Tokens, or you can create a policy that applies to new tokens only. Please take a look at our release notes, where you will find a link to the Refresh Token Expiration Policy page, which lays out how to use the new feature. If you have concerns or questions, feel free to open a service desk ticket and we are always happy to help clarify (we will take your question as an opportunity to improve the documentation as well).
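A refresh-token expiration policy can be pictured as a simple rule applied when a token is issued or used; the sketch below is a hypothetical illustration of such a check, with made-up policy values, not the actual SSO implementation:

```python
# Hypothetical sketch of applying a refresh-token expiration policy.
from datetime import datetime, timedelta, timezone

MAX_REFRESH_TOKEN_LIFETIME = timedelta(days=30)   # illustrative policy value
APPLY_TO_EXISTING_TOKENS = True                   # or False: new tokens only

def is_refresh_token_valid(issued_at: datetime, pre_existing: bool,
                           now: datetime | None = None) -> bool:
    """Return True if the token is still usable under the expiration policy."""
    now = now or datetime.now(timezone.utc)
    if pre_existing and not APPLY_TO_EXISTING_TOKENS:
        return True  # policy configured to apply to newly issued tokens only
    return now - issued_at <= MAX_REFRESH_TOKEN_LIFETIME

# Example: a token issued 45 days ago fails the 30-day policy.
old = datetime.now(timezone.utc) - timedelta(days=45)
print(is_refresh_token_valid(old, pre_existing=True))  # False
```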
Within IDS 2024.2, SSO 9.5, we corrected several CVEs. Unfortunately, in correcting a small number of these, an error was introduced within Tomcat. This leads to unneeded threads being created, which can impact very large environments or those with very long uptimes. We observed a slight performance decrease in our release testing but were only able to identify the cause early this fall. We have created patch releases for SSO 9.5 and SSO 9.6. If you are unable to update to IDS 2025.2 with SSO 9.8, please consider deploying a patch to your environment, or ensure that it is rebooted regularly, which will release the unneeded threads.
There are, as always, several CVEs and other corrections that have been made to the Identity Platform.
One highlight for the upcoming release that we would like to mention: we are working to update a number of core technologies used by the Identity Platform. At this time, we are aiming to deliver IDS 2026.1 as SSO 10.0 and CustomerID 7.0. These are major version upgrades as they contain backward-incompatible changes. We will update the full platform to Java 21. SSO will have Tomcat updated from 9.x to 10.x. And CustomerID will be migrated from Wildfly to Spring Boot; note that the UI and APIs will remain unaltered.
As with all software, Ubisecure encourages you to upgrade your Identity Platform in a timely manner. Please contact your Integration Partner or Ubisecure Account Representative with any questions. Ubisecure encourages all customers to review and schedule a service upgrade to this latest release. Our goal is to bring system flexibility, security, and new features that ensure the best possible user experience for your business.
For full details of the IDS 2025.2 release, please review the Release Notes and System Recommendations pages found on our Developer Portal.
The post IDS 2025.2, Security Headers and Refresh Tokens appeared first on Ubisecure Digital Identity Management.
The Digital Product Regulation Innovation Network (DPRIN) — a UK initiative bringing together regulators, academics, and technology experts — has worked in partnership with Dock Labs to verify product compliance in real time, helping prevent unsafe or non-compliant goods from being sold through online marketplaces.
The collaboration initially focuses on the example of eBike compliance, a growing concern as high-risk products are increasingly sold online with limited oversight, and will soon extend to other regulated product categories where real-time verification can enhance safety and consumer trust.
Using Dock Labs’ Truvera platform in close collaboration with their team, DPRIN has developed a digital compliance solution that makes regulatory data transparent, tamper-proof, and instantly verifiable across the supply chain.
India’s digital ecosystem is expanding faster than ever. With millions of people signing up for mobile apps, fintech platforms, gig services, and online marketplaces, businesses must verify identities quickly and accurately. What once relied on slow manual checks has now become a sophisticated process powered by artificial intelligence. Modern digital identity verification enables Indian companies to confirm user authenticity instantly while improving compliance and reducing the risk of identity fraud.
Fraudsters today are more advanced than ever. Edited Aadhaar cards, manipulated PAN cards, deepfake selfies, replay attacks, and synthetic identities frequently appear during user onboarding. For a mobile-first nation like India, where users join from diverse devices and environments, platforms need verification systems that can think, adapt, and detect suspicious activity in real time. AI-driven verification provides exactly that through a mix of biometric intelligence, behavioural analysis, and device pattern recognition.
The Shift Toward Smarter Verification in India’s Digital Landscape
Not long ago, onboarding relied heavily on manual document uploads and basic OTP-based checks. These methods were inconsistent and often unable to detect tampering. Today, AI identity verification has transformed this process into something far more reliable and scalable.
Advanced systems use facial recognition technology to examine facial landmarks, micro details, angles, and lighting variations, comparing them against ID photos with precision. Their performance is measured through global benchmarks such as the NIST Face Recognition Vendor Test and the NIST FRVT 1:1 Assessment, which many Indian businesses evaluate when assessing biometric reliability and face recognition accuracy.
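Under the hood, 1:1 face matching of this kind typically reduces to comparing embedding vectors from the selfie and the ID photo against a threshold. The sketch below is a generic illustration with toy vectors and an assumed threshold, not a specific vendor’s SDK:

```python
# Generic sketch of 1:1 face matching: cosine similarity between embeddings.
# Vectors and threshold are illustrative; real systems use model-specific values.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_same_person(selfie_embedding, id_photo_embedding, threshold=0.6):
    """1:1 verification decision: embeddings closer than the threshold match."""
    return cosine_similarity(selfie_embedding, id_photo_embedding) >= threshold

# Toy example with short vectors (real embeddings have hundreds of dimensions).
print(is_same_person([0.1, 0.9, 0.3], [0.12, 0.88, 0.31]))  # True
```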
In a country where users join from various environments such as urban offices, rural homes, low-light rooms, and budget smartphones, AI-driven verification ensures that identity checks remain consistent and dependable regardless of conditions.
Why Indian Businesses Are Choosing AI-Powered Identity Security
Industries across India, including fintech, gig platforms, mobility, gaming, and healthcare, need scalable identity verification that is accurate and frictionless. AI-powered systems detect subtle inconsistencies, identify repeated login attempts, analyze unusual device behaviour, and recognize tampered images more effectively than manual reviews.
They also help companies align with global privacy expectations outlined in frameworks like the GDPR.
Here is why more Indian organizations are adopting intelligent identity verification:
- Faster onboarding that reduces user drop-offs and keeps signups smooth
- Higher accuracy in detecting tampered IDs or abnormal face patterns
- Stronger identity fraud prevention supported by behavioural insights
- Compliance-friendly frameworks suitable for regulated sectors
- Better trust and transparency for users joining digital platforms
Many companies begin exploring these features with the Face Recognition SDK for matching, the Face Liveness Detection SDK for spoof prevention, and the Face Biometric Playground to test different verification flows.
How Liveness Detection Strengthens India’s Fight Against Identity Fraud
Fraud techniques are evolving quickly in India. Deepfake videos, printed photos, digital masks, and reused images frequently appear during remote onboarding. This is why face liveness detection has become essential for Indian platforms seeking strong security.
A robust liveness detection system evaluates depth, movement, reflections, and real-time user presence to confirm that a live individual is in front of the camera. It prevents many of the common spoofing attempts exploited in fintech onboarding, gig platform verifications, loan approvals, and ride-hailing identity checks. This additional layer ensures that identity verification remains resistant to impersonation even as fraud tools become more sophisticated.
Creating Trust Through Better Biometric Verification
A dependable biometric verification system does more than match a face. It evaluates behavioural cues, device patterns, and contextual signals. This is particularly important in India, where onboarding must work seamlessly across varying camera qualities, lighting situations, and device types.
Companies also depend on biometric authentication tools to handle large verification volumes efficiently. These tools support real-time user verification, allowing platforms to approve genuine users instantly while flagging high-risk attempts for deeper review.
Developers and researchers often rely on transparent, community-driven innovation, supported by open contributions available in the Recognito GitHub repository.
Balancing Privacy and Security for Indian Users
Indian users are increasingly aware of how their data is collected and stored. Organizations implementing identity verification must balance accuracy with responsible data handling. Encrypting biometric templates, minimizing stored data, following clear retention policies, and communicating transparently all help build user trust.
Following guidelines inspired by GDPR enables businesses to maintain strong privacy standards and meet the expectations of India’s digital audience.
Real-World Impact Across India’s Fast-Growing Sectors
Robust verification now plays a vital role across India’s digital landscape. Financial services rely on automated KYC verification to reduce fraud and speed up onboarding. E-commerce and online marketplaces use digital onboarding security to block fake buyers and prevent misuse. Gig platforms and mobility services depend on identity clarity to safeguard both customers and workers.
Across these environments, intelligent verification helps platforms maintain fairness, reduce fraudulent activity, and provide safer user experiences.
The Future of Identity Verification in India
As fraud continues evolving, identity verification technology must advance alongside it. India’s systems will increasingly incorporate deeper behavioural analysis, enhanced spoof detection, smarter risk scoring, and adaptive AI-powered identity checks.
Emerging approaches, such as document-free identity verification, will also increase in adoption, enabling users to verify their identities without uploading traditional documents.
These advancements will create an environment where verification remains strong while becoming nearly frictionless for genuine users.
Building a Safer Digital Ecosystem Through Intelligent Verification
Trust forms the core of every digital platform. When businesses can reliably verify who their users are, they create safer interactions, reduce fraud, and maintain smoother onboarding experiences. With India’s digital sector expanding at a remarkable speed, AI-driven verification ensures that identity checks remain secure and future-ready.
Solutions available at Recognito continue helping organizations implement precise, privacy-focused verification systems built for long-term reliability and trust.
Frequently Asked Questions
1. What is digital identity verification?
It is a process that uses AI and biometrics to confirm whether a user is genuine, helping Indian platforms onboard real users securely.
2. Why is liveness detection important?
It ensures the person on camera is real by detecting natural facial movement, blocking spoof attempts using photos, videos, or deepfakes.
3. How does AI help reduce identity fraud?
AI detects tampered images, unusual device activity, and suspicious behaviour that manual checks often miss, making fraud harder to execute.
4. Does biometric verification protect user privacy?
Yes. When implemented properly, it uses encrypted data, minimal storage, and privacy-focused policies to keep user information secure.
5. Which sectors benefit most in India?
Fintech, gig platforms, mobility, e-commerce, and gaming rely heavily on digital identity checks to prevent fraud and onboard users safely.
Back in May, we hosted a Demo Day focused on how a few early adopters were approaching age estimation and age verification. At the time, most of the conversation lived in pockets. Some regulators were experimenting with frameworks. A handful of platforms were testing new signals. Age assurance felt like a problem that mattered, but not yet a global priority.
Age Assurance’s New Reality – A Recap of Liminal’s Age Assurance Demo Day
Six months later, everything has changed. Age assurance has moved from a developing trend to a global mandate, and the broader world of age verification has shifted with it. More than 45 countries now require platforms to verify age through methods like device-based age verification, on-device estimation, reusable credentials, or structured checks tied to new age verification laws. Regulators expect measurable accuracy, and platforms are adopting modern age verification systems to prove compliance without storing sensitive data, all while protecting user experience at scale.
This Demo Day showed what that shift looks like in practice. Eight vendors demonstrated how they verify age across streaming, social platforms, gaming, in-person experiences, and new regulatory environments. Each had 15 minutes to present, followed by a live Q&A with trust and safety leaders, product teams, compliance professionals, and policy experts.
What stood out was not just the technology. It was the direction of the market. Privacy-first design is becoming a requirement, on-device processing is becoming the expectation, reusable identity is becoming real, and age assurance has officially gone global.
What age assurance is really solving now
Age assurance covers two essential needs. First, preventing minors from accessing adult content, gambling, and other restricted digital experiences. Second, preventing adults from entering minor-only spaces such as youth communities, education platforms, and social channels designed for children.
These needs sit alongside more traditional age verification online methods that protect minors from adult content and adults from entering youth environments. Youth access laws are expanding, enforcement is tightening, and deepfakes, AI-based impersonation, and synthetic content are creating new risks on both sides. Minors now have more tools to appear older, and adults have more ways to appear younger, which makes the protection problem harder for platforms and regulators that are trying to keep everyone in the right place.
This tension shaped the entire event and pushed every vendor to show how they minimize exposure, reduce friction, and still create defensible assurances.
Privacy-first design is branching beyond biometrics
Across the demos, privacy was not an add-on. It was the foundation. Vendors were focused on verifying age without collecting or storing sensitive data, and several demonstrated pathways that move far beyond traditional face or document checks.
Deepak Tiwari, CEO of Privately, set the tone early:
“Our technology is privacy by design. We download the machine learning model into the device, and the entire age estimation happens on-device.”
He emphasized the minimal output:
“All facial processing stays on the device. The only thing that leaves is the signal that the person is above a certain age.”
Other vendors pushed this principle even further. One of the clearest examples came from Jean-Michel Polit, Chief Business Officer of NeedDemand, whose company verifies age using only hand movements. There is no face capture, no document upload, and no voice sample. The model analyzes motion patterns that naturally differ between adolescence and adulthood.
Jean-Michel explained:
“A seventeen-year-old does not know how the hand of an eighteen-year-old moves. These differences are minute and impossible to fake.”
Because the system immediately stops scanning if a face enters the frame, it also reduces the risk of unintended biometric capture, which is one of the biggest compliance concerns in global youth safety laws.
Together, these approaches show that privacy-first design is no longer limited to minimizing what data is stored. It is expanding toward methods that avoid personal data entirely, giving platforms new ways to meet global regulatory requirements without friction or biometric exposure.
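The “only the signal leaves the device” model can be sketched as follows: the device runs estimation locally and transmits nothing but a signed over-threshold assertion. The estimation function, payload shape, and key handling below are stand-ins for illustration (using the pyca/cryptography library), not any vendor’s implementation:

```python
# Sketch of an on-device age check that emits only a signed boolean signal.
# The estimation step is a stand-in; no image or age value leaves the device.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

def estimate_age_on_device(frame) -> float:
    """Placeholder for an on-device ML model's age estimate."""
    return 24.3  # illustrative output

def build_age_signal(frame, threshold: int, device_key: ed25519.Ed25519PrivateKey):
    over = estimate_age_on_device(frame) >= threshold
    payload = json.dumps({"age_over": threshold, "result": over}).encode()
    return payload, device_key.sign(payload)  # only this pair is transmitted

device_key = ed25519.Ed25519PrivateKey.generate()
payload, signature = build_age_signal(frame=None, threshold=18, device_key=device_key)
device_key.public_key().verify(signature, payload)  # verifier-side check
print(payload.decode())
```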
Reusable identity is becoming practical
Reusable identity has been a theoretical ideal for years, but most systems struggled with adoption or required centralized storage. FaceTec demonstrated a version that felt practical, portable, and privacy-preserving.
Alberto Lima, SVP Business Development at FaceTec, introduced the UR code, a digitally signed QR format that stores an irreversible biometric vector.
“This is not just a QR code. It is a minified, irreversible face vector that still delivers extremely high accuracy.”
Verification can happen offline, identity does not need to be reissued, and there are no centralized databases. The user holds their proof, and platforms can verify it without re-running heavy identity flows.
Alberto summarized the shift:
“Everyone can be an identity issuer. The UR code becomes the interface for verification.”
This model serves alcohol delivery, ticketing, retail age checks, gaming, and other in-person flows where connectivity is inconsistent, and users expect a quick pass, not a multi-step verification process.
ECG-based age estimation shows how fast the science is moving
One of the most forward-looking sessions came from Azfar Adib, Graduate Researcher at TECH-NEST, where teams are exploring how smartwatch ECG signals can estimate age with high accuracy. It builds on the idea that physiological signals are difficult to fake and inherently tied to liveness.
Azfar shared a line that captured the concept clearly:
“ECG is a real-time sign of liveness. The moment I die, you cannot get an ECG signal anymore.”
The study showed:
“Age classification reaches up to 97 percent accuracy using only a smartwatch ECG.”
These findings are early but demonstrate how multimodal verification will evolve as platforms look for signals that are resistant to spoofing and do not require collecting images or documents.
Email age checks are becoming a high-volume default
Biometrics are not appropriate for every platform or flow. That is where Verifymy offered one of the most immediately deployable solutions.
Steve Ball, Chief Executive Officer of Verifymy, explained:
“Email is consistently the most preferred method users are willing to share, and we can verify age with just that.”
The method relies on digital footprint analysis tied to email addresses, not inbox content or personal messages, which keeps privacy intact while enabling large-scale adoption.
Regulators are acknowledging the method:
“Regulators around the world now explicitly reference email age checks as a highly effective method.”
For high-volume platforms, this offers a low-friction way to layer age checks without compromising onboarding speed.
Location has become a compliance requirement
In many markets, age gates depend entirely on where a user is physically located, so platforms must know whether the user is in a jurisdiction with specific restrictions. GeoComply framed this challenge with clarity.
Chris Purvis, Director of Business Development at GeoComply, explained:
“The law does not say you must check age unless they are using a VPN. It says you must prevent minors from accessing restricted content. Full stop.”
He added:
“Location complacency is not location compliance.”
Reliance on IP is no longer enough. VPN use spikes whenever new safety rules are introduced, so location assurance is now a required part of the age assurance stack.
State-level policies such as Florida’s age verification law and the wave of states requiring age verification are further accelerating vendor adoption, especially in markets focused on age verification for adult content and social media.
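In practice, this becomes a lookup from the user’s verified location to the rule in force there; the jurisdictions and rule values in the sketch below are purely illustrative, not legal guidance:

```python
# Hypothetical sketch: map a verified location to the age-assurance rule in force.
# Jurisdictions and rules below are illustrative, not legal guidance.
AGE_RULES = {
    "US-FL": {"adult_content_min_age": 18, "requires_verification": True},
    "US-TX": {"adult_content_min_age": 18, "requires_verification": True},
    "US-WA": {"adult_content_min_age": 18, "requires_verification": False},
}

def required_check(jurisdiction: str) -> dict:
    """Return the rule for a verified jurisdiction; default to the strictest."""
    strictest = {"adult_content_min_age": 18, "requires_verification": True}
    return AGE_RULES.get(jurisdiction, strictest)

print(required_check("US-FL"))
```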
Standards will reshape the market
The final presentation came from Tony Allen, Chief Executive of ACCS, who is leading the upcoming ISO 27566 standard for age assurance. His guidance was direct and will likely influence procurement decisions next year.
“Every age assurance system must prove five core characteristics: functionality, performance, privacy, security, and acceptability.”
He also noted:
“Expect procurement teams to start saying that if you do not have ISO certification, you will not even qualify.”
Certification will separate providers with measurable accuracy and defensible privacy controls from those relying on promises alone.
Where the market is moving next
This Demo Day made one thing clear. The next generation of age assurance will be built on the principles of privacy preservation, zero data retention, multimodal analysis, and portable proofs that function across various contexts. These capabilities will sit alongside scalable age verification systems that can adapt to global rules and emerging compliance requirements.
The old model relied on document uploads and selfie flows. The emerging model relies on lightweight signals that minimize exposure and scale more easily across jurisdictions. Platforms need methods that work across global regulations, including high-pressure markets such as age verification for adult content, while users want verification to feel invisible and regulators want outcomes that withstand scrutiny.
These demands are aligning rather than conflicting. The vendors on stage showed how fast this space is evolving and what the next decade of online trust will require.
Watch the Recording
Did you miss our Age Assurance Demo Day? You can still catch the full replay of vendor demos, product walk-throughs, and expert insights:
Watch the Age Assurance Demo Day recording here
The post From Pilot Programs to Global Mandate: Age Assurance’s New Reality appeared first on Liminal.co.
(Meudon – Saint Cloud, 25/11/2025) - Dassault Aviation and Thales, through cortAIx, its artificial intelligence (AI) accelerator, have entered into a strategic partnership for the development of controlled and supervised AI for defence aeronautics. This partnership was signed on 18 November by Eric Trappier, Chairman & CEO of Dassault Aviation, and Patrice Caine, Chairman & CEO of Thales. The announcement was made on Tuesday 25 November at the Grand Palais in Paris, during the International Adopt AI Summit organised under the patronage of the French President.
Dassault Aviation, an architect of collaborative air combat systems, and cortAIx, Thales’ trusted AI accelerator, are teaming up to develop sovereign AI solutions. These cover the functions for manned and unmanned aircraft, for observation, situation analysis, decision-making, planning and control during military operations.
"This partnership is reflected in research and innovation programmes dedicated to the collaborative air combat of the future, with a view to incorporating AI into aeronautical defence systems. It is the culmination of strategic discussions launched by Dassault Aviation and Thales’ AI accelerator, cortAIx, and illustrates our shared commitment to trusted, sovereign and controlled artificial intelligence for the armed forces," says Pascale Lohat, Chief Technical Officer at Dassault Aviation.
Dassault Aviation and cortAIx are developing a lasting cooperation with a high-level, global ecosystem. Their work is carried out in accordance with national and European ethical principles and regulations (AI Act).
"cortAIx will bring to this strategic partnership with Dassault Aviation the best of Thales’ technological heritage, enriched by decades of military experience, combined with the agility and dynamics of a powerful innovation accelerator. Present in France, the United Kingdom, Canada, Singapore and soon in the United Arab Emirates, cortAIx relies on recognised technological partners to transform AI advances into concrete levers of sovereignty and efficiency," says Mickael Brossard, Vice-President of cortAIx Factory, Thales.
This partnership was presented on 25 November to the guests of the Adopt AI event, via a large-scale illustration of the ambitions of the research and innovation programmes and initiatives currently supported by the European Defence Fund. Dassault Aviation and Thales, through cortAIx, presented their strategy for a controlled, supervised, sovereign, secure and trustworthy AI in the service of humanity, in front of an audience of representatives from the main French and European institutional, academic and economic bodies participating in the event.
For more insights:
Artificial intelligence | Thales Group
AI we can all trust | Thales Group
About Dassault Aviation
With over 10,000 military and civil aircraft delivered in more than 90 countries over the last century, Dassault Aviation has built up expertise recognized worldwide in the design, production, sale and support of all types of aircraft, ranging from the Rafale fighter, to the high-end Falcon family of business jets, military drones and space systems. In 2024, Dassault Aviation reported revenues of €6.2 billion. The company has 14,600 employees.
HD Photos: mediaprophoto.dassault-aviation.com
HD Videos: mediaprovideo.dassault-aviation.com
About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.
The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.
Recent images of Thales and its activities in the fields of Defence, Aerospace and Cyber & Digital are available in the Thales Media Library. For any specific request, please contact the Media Relations team.
Visit the Thales website: Thales Group
Dassault Aviation and cortAIx sign a strategic partnership for a sovereign AI serving the air combat of the future
By Trevor Butterworth
Do you want your customers to be phished by fake AI agents pretending to be from your company?
Of course you don’t. That’s the stuff of nightmares.
You’re dreaming up amazing ways to use AI to help your customers and simplify your operations. And to turn that dream into reality in travel, the focus is on delivering automated performance. How do you solve the customer’s problem, meet their goal — and beat every other competitor trying to do the same thing?
Authentication isn’t even an afterthought.
Newsflash: the nightmare of spoofed agents is coming. And it’s bringing a friend, the specter of regulatory compliance.
Do you think you can just grab and projectile-share tons of customer personal data, unencrypted and unprotected, as if GDPR doesn’t exist?
Do you think legacy authentication tech, built on usernames and passwords, is up to the task of protecting these radically new customer interactions?
In one breach, you will become a global news story, lose your customers, and be lucky if you aren’t fined into oblivion.
This is why you need Verifiable Credentials for AI.
1. Making AI implementable means taking authentication seriously
The bad news is that conventional, legacy, centralized authentication technology can’t protect your AI agents and your customer interactions.
The good news is that decentralized identity and Verifiable Credentials can — faster and cheaper.
The trick is that an AI agent with a Verifiable Credential and a customer with a Verifiable Credential are able to authenticate each other before sharing data.
And they’re able to do this cryptographic authentication in a way that is resistant to the bad kind of reverse-engineering AI.
This means that if you are an airline or a hotel chain, a customer can instantly prove that they are interacting with an AI agent from that airline or hotel chain.
At the same time the AI agent can prove the customer is also who they say they are — in other words, that they have been issued with a Verifiable Credential from their airline.
This all happens instantly.
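The mutual check described above can be pictured as a challenge-response in which each party proves control of a key bound to its credential. The sketch below, using Ed25519 signatures from the pyca/cryptography library and hypothetical party names, shows the shape of the exchange rather than Indicio’s actual protocol:

```python
# Sketch of mutual authentication: each side signs the other's challenge with
# the key bound to its credential. Party names and keys are hypothetical.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519

class Party:
    def __init__(self, name: str):
        self.name = name
        self.key = ed25519.Ed25519PrivateKey.generate()
        self.public_key = self.key.public_key()  # shared via its credential

    def respond(self, challenge: bytes) -> bytes:
        return self.key.sign(challenge)

def mutually_authenticate(agent: Party, customer: Party) -> bool:
    """Each side verifies the other's signature over a fresh random challenge."""
    challenge_for_agent = os.urandom(32)
    challenge_for_customer = os.urandom(32)
    try:
        agent.public_key.verify(agent.respond(challenge_for_agent), challenge_for_agent)
        customer.public_key.verify(customer.respond(challenge_for_customer),
                                   challenge_for_customer)
        return True
    except Exception:
        return False

print(mutually_authenticate(Party("airline-agent"), Party("customer")))  # True
```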
2. Making AI implementable means taking GDPR seriously
The European Union’s General Data Protection Regulation (GDPR) is the gold standard for data privacy and a model for other jurisdictional data privacy and protection law.
GDPR requires a data subject — your customer — to consent to share their data. It requires the data processor — your company — to minimize the amount of personal data it uses and limit the purposes for which it can be used.
Right now, no one appears to be thinking about any of this; it’s a personal data-palooza. But this isn’t the web of 20 years ago. You can’t say people don’t care about data privacy when GDPR came into effect in 2018.
Again, Verifiable Credentials solve this problem. They are a privacy-by-design technology providing the customer with full control over their data. For an AI agent to access that data, the person must explicitly consent to sharing their data.
They can also share this data selectively, so you can meet the requirements of data and purpose minimization.
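Selective sharing can be pictured as deriving a minimal presentation from a fuller credential, releasing only the fields the customer consents to; the credential fields and consent list below are hypothetical:

```python
# Sketch of selective disclosure: release only consented fields from a credential.
# Field names and the consent list are hypothetical.
CREDENTIAL = {
    "fullName": "Ada Lovelace",
    "dateOfBirth": "1990-12-10",
    "loyaltyTier": "Gold",
    "homeAddress": "10 Example Street",
}

def derive_presentation(credential: dict, consented_fields: list[str]) -> dict:
    """Return only the attributes the holder has agreed to share."""
    return {k: v for k, v in credential.items() if k in consented_fields}

# The AI agent asked for loyalty tier only; nothing else leaves the wallet.
print(derive_presentation(CREDENTIAL, ["loyaltyTier"]))
```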
3. Expanding AI means taking delegated authority seriously
It’s going to be a multi-agent world. AI agents will need to talk to other AI agents to accomplish tasks. To make this work, a customer will have to give a special kind of permission to the first point of agentic contact: delegated authority.
This means a customer must explicitly consent to an agent sharing data with another agent, whether that second or third agent is inside the same company or outside.
Again, Verifiable Credentials make that kind of consent easy for the customer. On the back end, decentralized governance makes it easy for a company to implement and manage these kinds of AI agent networks.
An AI agent can hold a trust list of other AI agents it can interact with. And because all these agents also have Verifiable Credentials, every agent can authenticate each other — as if they were customers.
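A trust list can be as simple as a set of approved agent identifiers (or credential issuers) consulted before any agent-to-agent exchange; the identifiers in this sketch are hypothetical:

```python
# Sketch of an agent-to-agent trust list check before delegated data sharing.
# Agent identifiers are hypothetical.
TRUSTED_AGENTS = {
    "did:example:airline-booking-agent",
    "did:example:partner-hotel-agent",
}

def may_delegate(to_agent_id: str, customer_consented: bool) -> bool:
    """Allow delegation only to trusted agents and only with explicit consent."""
    return customer_consented and to_agent_id in TRUSTED_AGENTS

print(may_delegate("did:example:partner-hotel-agent", customer_consented=True))  # True
print(may_delegate("did:example:unknown-agent", customer_consented=True))        # False
```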
4. The additional incredibly useful benefit of Verifiable Credentials: structured data
The great thing about this technology is that it not only solves the problems you haven’t really thought about; it also helps solve the problem you’re currently focused on: the need for structured data.
Verifiable Credentials are ways to structure trustable information. If you as an airline create, say, a loyalty program credential, the information in that credential can be trusted as authentic. It comes from you; it’s not manually entered, potentially incorrectly, by the customer. It’s also digitally-signed so it cannot be altered by the user or anyone else.
So when an AI agent gets permission from a customer to access a loyalty program credential, it is able to automatically ingest that accurate, verifiable, information from the credential and immediately act on it.
Think about how easy it is now for a chatbot to interact with a passenger on a flight and provide instant access to services, or rebook a connecting flight, or connect them to a hotel agent — and then use mileage points associated with a Loyalty Program Credential to pay. (We’ve also enabled regular payments using Verifiable Credentials).
No more manual, mistyped data entry slowing things down, creating frustration and customer drop-off. The customer gets a user experience that works for them, and you deliver frictionless customer service. But only if you implement authentication and permissioned data access.
We’ve been recognized by Gartner for our innovation, we’ve been accepted into NVIDIA’s Inception Program, and we’ve partnered with NEC to create the trust layer for automated AI systems.
Contact us to make AI a secure and compliant reality.
The post Four reasons why travel companies using AI need Indicio Proven appeared first on Indicio.
“Digital identity wallet standards are having a moment. In certain circles, they’re the topic.”
The European Union is driving much of the global conversation through eIDAS2, which requires every member state to deploy digital wallets and verifiable credentials that work across borders. In the United States, things look very different: deployment is happening state by state as DMVs explore mobile driver’s licenses (mDLs) and experiment with how these new credential formats fit into existing identity systems.
For most people, this shows up as “more stuff on my phone,” though desktops matter, too. Regulators and lawmakers see more areas where they need to set rules regarding security, privacy, and usability. Developers and standards architects see something else: a complex web of digital identity wallet standards that include the layers of browsers, OS platforms, protocols, and wallets that must interoperate cleanly for anything to work at all.
We saw a version of this during the October AWS outage, where headlines immediately blamed “the DNS.” But the DNS protocol behaved exactly as it should. What failed was the implementation layer: AWS’s tooling that feeds DNS, not DNS itself.
A similar pattern is emerging around digital wallets and verifiable credentials. At last week’s W3C TPAC meeting, I lost count of how many times I heard a version of:
“It’s the Digital Credential API’s fault/responsibility.”
Just as “it’s the DNS” became shorthand for an entire problem space, “it’s the DC API” is now becoming shorthand for an entire stack of browser, OS, and wallet behavior. And that shorthand is obscuring real issues.
A Digital Identity Digest podcast episode: Digital Identity Wallet Standards, the DC API, and Politics (00:18:34). You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
Why the web has layers in the first place
Before diving into the DC API, it’s worth remembering why the ecosystem is structured this way. The web is layered by design, and when it comes to digital wallets, the layering looks like this:
Browsers enforce security boundaries between websites and users.
Operating systems manage device trust, hardware access, and application permissions.
Wallets specialize in credential storage, presentation, and user experience.
Protocols define how information moves across these components in interoperable ways.

No single layer can—or should—control the entire flow. The fragmentation is intentional. But when political or regulatory urgency collides with this architectural reality, confusion is almost guaranteed.
If you want a broader overview of the standards bodies and protocols that shape digital identity wallets, I wrote about that landscape in a post last year: “The Importance of Digital Identity Wallet Standards.” It provides helpful context for how these layers evolved and why they interact the way they do.
What the Digital Credential API actually is
The DC API is a protocol under development at the W3C. It isn’t a standard yet, it isn’t a wallet, and it isn’t an OS-level API. It’s one layer in a larger system.
What it actually does:
The browser receives a request to present a credential.
That request uses a specific presentation protocol, such as ISO/IEC 18013-7 Annex C (from the mDL ecosystem) or OpenID for Verifiable Presentations (OpenID4VP, developed in the OpenID Foundation).
The DC API passes that request to the device’s platform-specific APIs.
The OS hands the request to the wallet(s) installed on the device.
If the credential needs to move between devices, the DC API relies on the Client to Authenticator Protocol (CTAP, developed in the FIDO Alliance) for secure cross-device interactions.

That’s it. The DC API connects a request from the browser to a wallet. Everything else happens above or below it. Tim Cappalli is doing amazing work documenting this on the Digital Credentials Developers website.
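For orientation, a verifier page invokes the DC API through navigator.credentials.get(). Because the specification is still a draft, the exact dictionary member names have shifted between iterations and may change again; treat the overall shape below (a protocol identifier paired with a protocol-specific request payload) as the idea and the specific member names as assumptions.

// Sketch of a verifier page calling the DC API (draft; member names reflect one recent
// draft shape and may differ in your browser or in later drafts).
async function requestCredential(openid4vpRequest) {
  const credential = await navigator.credentials.get({
    digital: {
      // Each entry pairs a presentation protocol with its protocol-specific payload
      // (for example, an OpenID4VP authorization request built per that spec).
      requests: [
        {
          protocol: "openid4vp",  // or an ISO 18013-7 Annex C protocol identifier
          data: openid4vpRequest,
        },
      ],
    },
  });
  // The browser, OS, and wallet handle selection and consent; the page only receives
  // the protocol-specific response to forward to its verifier backend.
  return credential.data;
}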
A similar dynamic showed up years ago with Identity Provider Discovery, where some stakeholders wanted a trusted, independent way to verify that the user was being shown the correct identity providers. Some would like the DC API to offer similar guarantees for wallets and credentials. But that kind of oversight is not in scope for this layer. The DC API doesn’t govern UX, verify correctness, or audit the platform; it only bridges protocols to the OS.
Two important clarifications
As one of the co-chairs of the W3C group working on the DC API, I spend a lot of time keeping the specification tightly scoped to the layer it actually controls. Outside the working group, though, “the DC API” often becomes shorthand for every layer in the wallet stack, which results in unfortunate (at least from my perspective) confusion.
The DC API supports multiple presentation protocols, but browsers, OSs, and wallets don’t have to.
The DC API can transport both Annex C and OpenID4VP. But support across layers varies:
Google supports both protocols.
Apple supports Annex C only.
Third-party wallets choose based on product goals.
Government-built wallets align protocol choices with policy, privacy, and interoperability requirements.

So while the DC API can support multiple protocols, the ecosystem around it is not uniform. That’s a separate but very relevant problem. At this time, only one vendor fully supports this critical standard, and yet the alternative is the less secure option of custom URL schemes.
The DC API allows multiple wallets on one device, but the ecosystem isn’t ready.
In theory, multiple wallets are fine. In practice, this raises unresolved questions:
How should a device present wallet choices?
How does a wallet advertise which credentials it holds?
What happens when wallets support different protocols?

These aren’t DC API issues, but misunderstandings about them often land at the DC API’s feet. So why not make it the DC API’s responsibility? There are reasons.
Why pressure lands on the DC API
Some requirements emerging from the European Commission would require changes in how the OS platform layer behaves; that’s the layer that controls platform APIs, secure storage, inter-app communication, and hardware-backed protections.
But the OS platform layer is proprietary. No external standards body governs it, and regulators cannot directly influence it.
The EC can influence some other layers. For example, they engage actively with the OpenID Foundation’s OpenID4VP work. But OpenID4VP has already been released as a final specification. The EC can request clarifications or plan for future revisions, but they cannot reshape the protocol on the timeline required for deployment.
That leaves the DC API.
Because the DC API is still in draft, it is one of the few open, standards-based venues where the EC can place requirements for transparency, security controls, and protocol interoperability. It is, quite literally, the only part of this stack where immediate governance pressure is possible.
When pressure lands on the wrong layer
The problem arises when that pressure is directed at a layer that does not control the behaviors in question. Some EC requirements cannot be met without OS-level changes, and those changes are outside the influence of a W3C specification. The deeper issue is that governments need predictable, enforceable behavior from digital wallets—behavior that works the same way across browsers, devices, and vendors. But if support for key standards like Annex C or OpenID4VP varies by platform, and wallet behavior differs across OS ecosystems, governments are left with only two real levers: regulate platforms directly, or mandate interoperability requirements that implementations must meet. Neither of those levers sits at the DC API layer. That layer can expose capabilities, but it cannot enforce consistency across implementations.
Regulators aren’t wrong to feel frustrated. They want outcomes—stronger security, clearer transparency, technical mechanisms supporting regulatory oversight, and better privacy protections—that would require deeper hooks into the platform. But today, the only open venue available to them is the DC API, not the proprietary OS layers below it, and not a recently finalized protocol like OpenID4VP. The hope is that pressuring the DC API might encourage OS vendors to change their ways.
The missing layer: the W3C TAG’s concerns about credential abuse
Another factor complicating the landscape is that each layer is thinking about the “big picture.” The W3C Technical Architecture Group (TAG) recently published a statement, Preventing Digital Credential Abuse, outlining the risks associated with the abuse of digital credentials, highlighting how easily these technologies can be misused for tracking, surveillance, exclusion, and other harms if guardrails aren’t in place.
Their guidance is deliberately broad. They are looking across browsers, wallets, protocols, and policy environments, and raising concerns that span multiple layers. That kind of review is their mandate: they examine the technical architecture (as implied by the name of the group) and provide guidance to ensure that protocols developed for the web are as safe and useful as possible.
Unfortunately, the TAG is similar to the EC in that it can only influence so much when it comes to standards. Critical decisions about privacy and security often occur outside the remit of any single specification or standards organization. A browser can mitigate some risks. An OS can mitigate others. Wallets can add protections. Protocols can limit what they expose. But no single layer can fix everything. That said, the EC (unlike the TAG) can regulate the issue and force platforms to specific implementations, which is its own political challenge.
The missing governance layer
This is part of why there is so much political, architectural, and cultural pressure around digital credentials. Everyone is trying to look after the whole system, even though no layer actually controls it. This is why some stakeholders argue that a new layer—one explicitly designed for governance—may be required. A layer where wallet- and platform-facing behaviors are standardized in a way regulators can rely on, rather than inferred from proprietary implementation details. If governments want consistent privacy controls, consistent credential-selection behavior, or consistent transparency requirements, they will eventually have to either regulate platform behavior directly or create a standards-based layer that makes such oversight possible.
When political pressure meets technical layering
Digital wallets and verifiable credentials are being deployed globally, but the gravitational center of policy influence remains Brussels. The “Brussels Effect” is real. Discussions in the W3C Federated Identity WG, the OpenID Foundation’s DCP WG, and other standards groups frequently reference the EUDI ARF and eIDAS2 timelines.
Political pressure isn’t inherently bad. Sometimes it’s the only reason standards work moves at all. But when pressure targets the wrong layer, we get:
misaligned expectations
rushed architectural decisions
poorly targeted requirements
the temptation to fall back to less secure approaches (such as custom URL schemes)

Some EC expectations cannot be met without changes to the layering of the current technical solutions. Achieving the desired outcomes for privacy and transparency would require OS vendors to publicly disclose platform behaviors or implement new layers that afford regulatory oversight, standardized controls, or support additional protocol features.
Closing: shorthand is helpful… until it isn’t
“The DC API” is increasingly used as shorthand for the entire credential-presentation ecosystem. It’s understandable and perhaps unavoidable. But it isn’t accurate.
When inconsistencies show up across implementations, it’s tempting to assume the DC API should be redesigned to become a more prescriptive or “smarter” layer. That is technically possible, but it has little support within the working group—and for good reason. A protocol-agnostic DC API preserves permissionless innovation and prevents any one protocol or architecture from dominating the ecosystem. It is meant to expose capabilities, not dictate which protocols must be used or how wallets should behave.
If governments need consistent, enforceable behavior across platforms—and many do—those guarantees must come from a different part of the stack or through regulation. The DC API simply does not have the authority to enforce the kinds of requirements that regulators are pointing toward, nor should it be transformed into that kind of enforcement mechanism.
If we want secure, interoperable digital wallets, we need to name the right problems and assign them to the right layers. Expecting the DC API to control behaviors outside its remit won’t get us the outcomes we want.
The real work starts with a better question: “Which layer is responsible for this?” Only then can we move on to the most important question: “How do we manage these layers to best support and protect the people using the technology?”
Everything else follows from that.
If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
Welcome back. Today, we’re talking about digital identity wallets because they are definitely having a moment right now—at least among the people who spend their days thinking about standards, protocols, governance, and all the delightful complexity of making digital identity work at scale.
In many circles, digital wallets aren’t just one topic among many. Instead, they have become the topic.
Furthermore, much of this momentum is coming from the European Union. Under eIDAS 2, every EU member state must deploy digital wallets and verifiable credentials that work across borders. That’s a massive mandate, and it’s shaping the conversation far beyond Europe.
Meanwhile, in the U.S., the topic looks a bit different. Deployment is happening primarily state by state, as DMVs explore mobile driver’s licenses (MDLs). The results include:
Fragmented implementation
A lack of harmonization
State-driven experimentation

Yet this patchwork now coexists with the EU’s broad alignment efforts, while other regions adapt approaches that suit their needs.
More Wallets, More Icons, More Complexity
For most everyday users, this simply shows up as more stuff on their phones—an airline ticket here, a government wallet there, and possibly more icons appearing in the future as deployments expand.
Even though phones dominate the conversation, desktops continue to matter. Different stakeholders view the shift through different lenses:
Regulators identify more areas where intervention may be required.
Architects see a complex interplay between browsers, operating systems, protocols, and wallet design.
Product teams wrestle with the UX and expectations users bring.
Lawmakers try to understand the implications and risks.

And unsurprisingly, interoperability sometimes breaks.
A great example comes from the AWS outage back in October. Headlines blamed DNS—but DNS worked exactly as designed. The failure stemmed from the implementation layer that feeds DNS, not DNS itself.
This is important, because a similar pattern is emerging in the conversations around digital wallets and the Digital Credentials API (DC API).
The DC API Becomes a Scapegoat
At last week’s W3C TPAC meeting, “It’s the DC API’s fault” became a common refrain.
However, just as with DNS, “the DC API” has become shorthand for an entire stack—browser behavior, OS behavior, application logic, protocol integration, and wallet decisions. And that shorthand obscures real issues.
So let’s step back and revisit the basics.
Why We Have Layers
The web is layered, intentionally. Fragmentation is a feature, not a bug.
Each layer plays a different role:
Browsers enforce security boundaries and mediate how websites interact with users.
Operating systems manage device trust, hardware access, and permissions for inter-app communication.
Wallets store credentials and manage the user experience around presentation.
Protocols govern how data moves between each layer in interoperable ways.

No single layer can—or should—control the entire ecosystem.
However, political urgency tends to collide with this distributed design, creating confusion.
If you want a broader primer, check out the earlier piece The Importance of Digital Identity Wallet Standards, which explains how these layers historically evolved.
But today, our focus is the DC API.
What the DC API Actually Does
The Digital Credentials API is a protocol under development at the W3C. It is:
Not a standard yet
Not a wallet
Not an OS-level API
Just one layer with a narrow scope

Here’s the actual flow:
The browser receives a request to present a credential.
That request uses a presentation protocol—typically ISO 18013-7 (Annex C) from the MDL ecosystem, or OpenID for Verifiable Presentations (OID4VP).
The DC API passes the request to the operating system.
The OS hands it to whichever wallet(s) are installed.
If the credential needs to be presented across devices (e.g., from phone to desktop), the DC API uses CTAP from the FIDO Alliance.

As you can already see, this brings in several standards bodies:
ISO
OpenID Foundation
W3C
FIDO Alliance

The DC API simply bridges browser protocol traffic to the OS. Everything else happens above or below it.
Tim Cappalli has a great visual diagram of this—linked from the written blog post.
Echoes of Old Identity Problems
This is not the first time these issues have come up. Years ago, during the identity provider discovery debates, some parties wanted browsers to verify the “correct” identity provider.
Now, we’re hearing similar suggestions: that the DC API should verify whether the “correct” wallet is being used.
However, this is out of scope. The DC API does not:
Govern UX
Verify correctness
Audit platform behavior

It only bridges protocols.
As one of the W3C co-chairs working on the DC API, I spend a great deal of time keeping the specification tightly scoped, while the world often uses “DC API” as a catch-all term for problems elsewhere.
Clarification #1: Protocol Support Varies
The DC API supports multiple presentation protocols, but vendors do not have to.
For example:
Google supports both Annex C and OpenID4VP.
Apple supports only Annex C.
Third-party wallets choose based on their own goals.
Government wallets choose based on policy and privacy requirements.

So while the DC API is protocol-agnostic, the surrounding ecosystem is not uniform.
Without the DC API, implementers would rely on custom URI schemes, which are problematic for security and privacy.
Clarification #2: Multiple Wallets on One Device
The DC API technically allows multiple wallets on the same device, but the ecosystem is not yet ready.
Key unanswered questions include:
How should devices present wallet choices?
Should wallets advertise which credentials they hold?
What if different wallets prefer different protocols?

These are important questions—but not questions for the DC API.
So Why the Pressure on the DC API?
Much of it comes from regulatory pressure, especially from the European Commission.
Some EC requirements require OS-level changes, but OS behavior is proprietary and not governed by standards bodies—so regulators cannot influence it directly.
Protocols such as OID4VP and Annex C are already nearly finalized, leaving little wiggle room.
Therefore, regulators turn to the one venue still open: the DC API Working Group.
Unfortunately, some expectations require changes outside what the DC API controls.
Governments need:
Consistent behavior across devices
Transparent selection mechanisms
Predictable privacy protections

If platform support varies, governments must either:
Regulate platforms directly, or
Mandate compliance through policy

In practice, the choice increasingly becomes regulation.
The W3C TAG Weighs In
The W3C Technical Architecture Group (TAG) recently published Preventing Digital Credential Abuse, which outlines risks including:
Tracking and surveillance
User exclusion
Privacy harms

The TAG offers broad guidance across layers. However, the TAG cannot enforce behavior.
Browsers can mitigate some risks.
OS platforms can mitigate others.
Wallets can add protections.
Protocols can define boundaries.
But no single layer can solve them all.
The European Commission can regulate, though doing so creates additional complexities.
A Possible New Governance Layer?
Because of the competing pressures—political, architectural, and cultural—some stakeholders now wonder if the ecosystem may eventually require a new governance layer.
Such a layer could standardize:
Wallet-facing behaviors
Platform-facing behaviors
Privacy controls
Transparency requirements

This would give regulators a consistent target, rather than relying on inferred or proprietary platform behavior.
The Brussels Effect
Digital wallets and verifiable credentials are being deployed globally, but Brussels still exerts immense policy influence.
You can feel it in:
W3C Working Groups
OpenID Foundation discussions
Broader standards ecosystem conversations

Political pressure isn’t inherently bad—it often accelerates progress. But when the pressure is misaligned with where change must occur, the results are predictable:
Misaligned expectations
Rushed architecture decisions
Poorly targeted requirements
A retreat to less secure options (like custom URI schemes)

The DC API Is Not the Entire Ecosystem
The DC API often becomes shorthand for the entire credential presentation ecosystem—but that’s inaccurate.
Protocols live in different standards bodies. Implementations live in different layers. Governments and companies make different choices.

When inconsistencies appear, some suggest making the DC API more prescriptive. While technically possible, it has little support—for good reason.
A protocol-agnostic DC API:
Preserves permissionless innovation
Prevents one protocol from dominating
Exposes capabilities
Avoids dictating wallet behavior

If governments need enforceable consistency, it must come from either:
Regulation, or
A new standards-based governance layer

It won’t come from the DC API.
Closing Thoughts
If we want secure, interoperable digital wallets, we must name the right problems and assign them to the right layers.
The key question is:
Which layer is responsible for this component?
Once we answer that, we can address how to manage the layers to best support and protect users.
Everything else follows from that.
Outro
And that wraps up this week’s episode of the Digital Identity Digest.
If this helped clarify a complex topic, please share it with a colleague and connect with me on LinkedIn @hlflanagan.
Subscribe, leave a rating or review, and check out the full written post at sphericalcowconsulting.com.
Stay curious, stay engaged, and let’s keep these conversations going.
The post Digital Identity Wallet Standards, the DC API, and Politics appeared first on Spherical Cow Consulting.
The Council of the European Union announced sanctions against the Russian ruble-pegged stablecoin A7A5 and the payment service provider Payeer on October 23, 2025, for their part in "Russia's actions destabilising the situation in Ukraine". These sanctions come into effect on November 25, 2025.
Web3 horror stories, lessons learned: this summary turns scary headlines into simple education on self custody, bridge safety, venue vetting, stablecoin plans, and an incident checklist. We posted the full session on X here. If you missed it, the recap below gives you the practical habits to use Web3 with more confidence.
Note: The information below is for education only. It describes options, questions, and factors to consider.
Web3 security foundations
Blockchain in one sentence: a public ledger where many computers agree on the same list of transactions.
Private key: the secret that lets you move your coins. Whoever controls it controls the funds.
Self custody vs custodial: self custody means you hold the keys. Custodial means a platform holds them for you.
What people usually try to learn about a venue
How customer assets are held and whether segregation is documented
Whether the venue publishes proof of reserves and whether liabilities are discussed
What governance or policy controls exist for large transfers
How compliance, KYC/AML, and audits are described
Incident history and the clarity of post-incident communications
Withdrawal behavior during periods of stress

Common storage language
Hot storage: internet-connected and convenient
Cold storage: offline and aimed at reducing online attack surface

Trading and custody involve process and oversight. Public signals such as disclosures, status pages, and audit summaries help readers form their own view of venue risk.
Bridge security: moving across chains safely
Think of bridges as corridors, not parking lots. A bridge locks or escrows assets on one chain and represents them on another. Because value crosses systems, bridges can be complex and high-value points in the flow.
Typical points to check or ask about
Official interface and domain
Current status or incident notes published by the team
Fee estimates and expected timing
Any approvals a wallet is about to grant and to which contract (an example allowance check appears below)
Whether a small “test” transfer is supported and how it is verified
How the project communicates delays or stuck transfers
Whether there is a public pause or circuit-breaker policy

Terms that appear in bridge discussions
Validator and quorum or multisig: several independent signers must approve sensitive actions
Reentrancy: a contract is triggered again before it finishes updating state
Toolchain: compilers and languages a contract depends on; versions and advisories matter

Movement across chains touches multiple systems at once. Understanding interfaces, messages, and approvals can help readers evaluate their own tolerance for operational complexity.
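As one concrete example of the approvals point above, a token approval granted to a bridge contract can be read directly from the chain. This sketch uses the ethers library (v6); the RPC URL and all addresses are placeholders to replace with your own.

// Read how much of a token a bridge contract is currently allowed to spend from a
// given wallet. The RPC URL and all addresses are placeholders for your own values.
import { Contract, JsonRpcProvider, formatUnits } from "ethers";

const ERC20_ABI = [
  "function allowance(address owner, address spender) view returns (uint256)",
  "function decimals() view returns (uint8)",
];

async function checkAllowance({ rpcUrl, tokenAddress, walletAddress, bridgeAddress }) {
  const provider = new JsonRpcProvider(rpcUrl);
  const token = new Contract(tokenAddress, ERC20_ABI, provider);

  const [rawAllowance, decimals] = await Promise.all([
    token.allowance(walletAddress, bridgeAddress),
    token.decimals(),
  ]);

  // A very large allowance means the bridge contract can move that much without asking again.
  return formatUnits(rawAllowance, Number(decimals));
}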
Stablecoins: reserves, design, and plans
What a “dollar on-chain” can be backed by
Cash and short-term treasuries at named institutions
Crypto collateral with over-collateralization rules
Algorithmic or hybrid mechanisms

Questions readers often ask themselves
What assets back the stablecoin and where are they held
How concentration across banks, issuers, or designs is handled
What signals would trigger a partial swap or a wait-and-see approach
Which sources are monitored for updates during stress

Example elements of a personal depeg plan
Signals: price levels or time thresholds that prompt a review (a small example appears below)
Actions: small, incremental adjustments rather than all-or-nothing moves
Sources: issuer notices, status pages, and established news outlets

Designs behave differently under stress. Defining personal signals and information sources ahead of time can make decisions more methodical.
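To show what signals and actions can look like in practice, here is a tiny, self-contained sketch. The thresholds are arbitrary illustrative numbers, not recommendations, and the price would come from whatever sources a reader already trusts.

// Example thresholds only; these are illustrative numbers, not advice.
const DEPEG_SIGNALS = {
  reviewBelow: 0.995, // start checking issuer notices and status pages
  actBelow: 0.98,     // consider the small, incremental steps planned in advance
};

function classifyStablecoinPrice(price) {
  if (price < DEPEG_SIGNALS.actBelow) return "act-per-plan";
  if (price < DEPEG_SIGNALS.reviewBelow) return "review-sources";
  return "normal";
}

// Usage: feed in a price observed from a trusted source.
console.log(classifyStablecoinPrice(0.993)); // "review-sources"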
Human layer protection: phishing, privacy, browser hygiene
Patterns commonly seen in phishing or social engineering
Urgency or exclusivity, requests to “verify” a wallet, surprise airdrops
Lookalike domains, QR codes from unknown accounts, unsigned or opaque transactions
Requests for seed phrases or private keys (legitimate support does not request these)

Privacy points that often come up
Use of a work or pickup address for hardware deliveries
Awareness that marketing databases can leak personal details

Browser and device considerations people weigh
A separate browser profile for web3 use with minimal extensions
Regular device and wallet firmware updates
For shared funds, whether a multisig or policy-based account would add useful checks

Many losses begin with human interaction rather than code. Recognizing common patterns can help readers evaluate messages and prompts more calmly.
Web3 security glossary
Bridge: locks an asset on chain A and issues a representation on chain B
Wrapped token: an IOU on one chain representing an asset on another
Oracle: external data or price feed for smart contracts
Reentrancy: re-entering a contract before its state updates, which can enable over-withdrawal
Multisig or quorum: multiple keys must sign before funds move
Proof of reserves: an attestation that holdings cover obligations and is meaningful only if it includes liabilities
Self custody: you hold the private keys which brings more responsibility and less venue risk
Cold storage: offline key storage that is safer from online attack
KYC or AML: identity and anti-money laundering controls
Seed phrase: the words that are your wallet. Anyone with them can empty it
Keys
Where are long-term funds held?
Is there a way to verify address and network before larger transfers?
Is a small confirmation transfer practical in the current situation?

Approvals
Which contracts currently have spending permission?
Are there tools to review or remove old allowances if desired?

Bridges
Is the interface official and the status normal?
Are there recent notices about delays or upgrades?
If something looks off, where are the official communications checked?

Monitoring
Which status pages are bookmarked for wallets, bridges, and venues?
Which channels are considered primary for updates during turbulence?

Venues
Is there public information on liabilities alongside assets?
How are customer assets segregated according to the venue?
What governance and audit information is available?

Comms hygiene
How are links verified before use?
What is the process when receiving unexpected DMs or QR codes?
What information will never be shared (for example, seed phrases)?

Playbooks
What are the personal thresholds for a stablecoin price review?
What are the steps if an exchange pauses withdrawals?
What is the process if a wallet compromise is suspected?

Note for readers
This article is an educational takeaway from our community call. The full call is on X here. It is not advice. It is meant to help readers develop their own questions, checklists, and comfort levels when using web3 tools.
Web3 Horror Stories: Security Lessons Learned was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
The Ontology community has voted, and the results are in: the ONG Tokenomics Adjustment Proposal has officially passed.
After three days of voting, from October 28 to October 31, 2025 (UTC), Ontology Triones Nodes reached a unanimous decision in favor of the proposal. The proposal secured over 117 million votes in approval, signaling strong consensus within the network to move forward with the next phase of ONG’s evolution.
A Vote for Long-Term Sustainability
This proposal represents a significant step in refining ONG’s tokenomics to ensure long-term stability, strengthen staking incentives, and promote sustainable ecosystem growth.
Here’s a quick recap of what’s changing and why it matters.
Key Objectives
Cap total ONG supply at 800 million.
Lock ONT and ONG equivalent to 100 million ONG in value to strengthen liquidity and reduce circulating supply.
Rebalance incentives for ONT stakers while ensuring long-term token sustainability.
ONG Max and Total Supply will decrease from 1 billion to 800 million, with 200 million ONG burned immediately.
ONG Circulating Supply remains unchanged immediately after the event; however, circulating supply could drop to around 750 million (assuming that 1 $ONG = 1 $ONT) in the future due to the permanent lock mechanism.

Implementation Plan
Adjust ONG Release Curve
Cap total supply at 800 million ONG.
Extend total release period from 18 to 19 years.
Maintain a consistent 1 ONG per second release rate for the remaining years.

Released ONG Allocation
80% of released ONG will continue to flow to governance as ONT staking incentives.
20%, plus transaction fees, will be contributed to ecological liquidity.

Swap Mechanism
ONG will be used to acquire ONT within a set fluctuation range.
The acquired ONG and ONT will be paired to provide liquidity and generate LP tokens.
LP tokens will be burned, permanently locking the underlying assets to maintain supply discipline.

Community Q&A Highlights
Q1. How long will the ONT + ONG (worth 100 million ONG) be locked?
It’s a permanent lock.
Q2. Why extend the release period if total ONG supply decreases?
Under the previous model, the release rate increased sharply in the final years. By keeping the release rate steady at 1 ONG per second, the new plan slightly extends the schedule — from 18 to roughly 19 years — while maintaining predictable emissions.
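For intuition about that steady rate, 1 ONG per second comes out to roughly 31.5 million ONG per year; the quick arithmetic (using a 365-day year for simplicity) looks like this:

// 1 ONG per second, expressed per year (365-day year used for simplicity).
const secondsPerYear = 60 * 60 * 24 * 365; // 31,536,000 seconds
const ongPerYear = 1 * secondsPerYear;     // roughly 31.5 million ONG per year
console.log(ongPerYear); // 31536000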
Q3. Will ONT staking APY be affected?
Rewards will shift slightly, with ONG emissions reduced by around 20%. However, as ONG becomes scarcer, its market value could rise, potentially offsetting or even improving overall APY.
Q4. What does this mean for the Ontology ecosystem?
With the total supply capped, 200 million ONG burned immediately, and $ONT and $ONG worth 100 million $ONG permanently locked, effective circulating supply could drop to around 750 million (assuming that 1 $ONG = 1 $ONT). This scarcity, paired with ongoing ONG utility and swapping mechanisms, should strengthen market dynamics and improve long-term network health.
Q5. Who was eligible to vote?
All Triones nodes participated via OWallet, contributing to Ontology’s decentralized governance process.
The Vote at a Glance
Proposal: ONG Tokenomics Adjustment
Voting Period: Oct 28–31, 2025 (UTC)
Vote Status: ✅ Approved
Total Votes in Favor: 117,169,804
Votes Against: 0
Status: Finished
What Happens Next
With the proposal approved, the Ontology team will begin implementing the updated tokenomics plan according to the outlined schedule. The gradual rollout will ensure stability across the staking ecosystem and DEX liquidity pools as the new mechanisms are introduced.
This marks an important milestone in Ontology’s ongoing effort to evolve its token economy and strengthen decentralized governance.
As always, we thank our Triones nodes for participating and shaping the direction of the Ontology network.
Stay tuned for implementation updates and the next phase of Ontology’s roadmap.
ONG Tokenomics Adjustment Proposal Passes Governance Vote was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post Friendly ACH in Banking appeared first on Liminal.co.
24 Nov 2025
As part of the French Navy’s programme for the maintenance in operational condition (MCO) of its vessels, CNN MCO, an Equans France company, has announced a strategic partnership with Thales and CS GROUP, a subsidiary of Sopra Steria, to equip the amphibious helicopter carriers (AHC) with a new, highly resilient inertial navigation system designed for electronic warfare environments. Together, the three companies form a 100% French and sovereign industrial consortium dedicated to naval performance.
Following successful tests confirming the effectiveness of the selected solution, deployment has just been successfully completed aboard the Mistral, during its technical maintenance period at the Toulon naval base.
The French Navy’s Fleet Support Service (SSF) is leading this initiative to upgrade navigation systems on the PHA-type amphibious helicopter carriers and has tasked CNN MCO as its fleet support contract holder with completing the work. Awarded in 2022 for a period of eight years, CNN MCO's contract covers all onboard systems on the three PHA vessels: Mistral, Tonnerre and Dixmude. It also includes in-service support of the Somme BCR-type command and underway replenishment vessel. The objective is to ensure the availability at sea and full operational capability of these platforms in an increasingly demanding strategic environment.
To address the progressive obsolescence of certain critical systems, the contract includes more than 40 modernisation studies. Conducted across three separate vessels, these studies have already resulted in 50 upgrades in the last two years. CNN MCO is managing these phases in close coordination with the SSF, from preliminary design studies through to shipboard integration.
Autonomous, reliable navigation in the face of new threats
In a context of hybrid conflicts and widespread jamming of radionavigation systems, the French Navy is strengthening the autonomy and resilience of its navigation systems. The TopAxyz inertial navigation system developed by Thales relies solely on internally controlled data with no dependence on external sources. Using shipboard sensors such as accelerometers and gyroscopes, TopAxyz measures the vessel’s movements to continuously calculate its position, speed and heading, without the need for external signals such as GPS.
The result of over 40 years of expertise and experience, these systems have logged more than 50 million hours of operations across a wide range of applications, from space to civil and military aviation and land platforms. With this technology, enhanced by spoofing and jamming detection functionality, the Navy is strengthening its ability to navigate reliably with a high degree of precision and discretion, even in the most sensitive areas.
A tailor-made navigation system
Within the consortium, CS GROUP, a subsidiary of Sopra Steria, brings its leading expertise in navigation systems. The Thales inertial navigation units will be integrated into CS Group’s global navigation system. As the navigation system forms the backbone of a warship — ensuring positioning accuracy and manoeuvring safety — this technological partnership guarantees precision, resilience and service continuity in the most demanding maritime environments.
With over 15 years of experience, CS GROUP has mastered the integration of all navigation sensors, the development of resilient and cyber-secure navigation data distribution systems, and the serial production of rugged embedded systems for harsh environments.
The integration of these new systems, carried out by CNN MCO, complies with stringent safety requirements and leverages existing infrastructures. To date, the retrofit studies have required nearly 1,500 hours of engineering by CNN MCO’s in-house design office, in partnership with Thales and CS Group.
Phased rollout coordinated with scheduled maintenance
Installation of these navigation systems and related equipment is aligned with the SSF’s vessel maintenance schedule through to 2027. The Mistral will be the first to benefit from the new system, followed by the Dixmude and Tonnerre. Once the Thales inertial units have been integrated into the navigation modules by CS Group, the systems will be installed by CNN MCO across the three vessels, with a progressive ramp-up aligned with this timeline.
CNN MCO is responsible for integration, test supervision and equipment maintenance, working directly with the SSF teams. The aim is to optimise installation windows in order to ensure the operational availability of the systems at sea.
“This programme confirms CNN MCO’s ability to carry out complex upgrades in shipboard environments. We’re proud to contribute to the modernisation of sensitive equipment on these vessels as part of an all-French initiative, working with the SSF, Thales and CS Group.” – Céline Barazer, Deputy Director of CNN MCO, an Equans France company
“With 40 years of experience in inertial navigation systems for aircraft, space launchers, air defence systems and artillery, Thales is now bringing this know-how to naval operations. We’re proud to play a part in providing resilient navigation solutions for France's naval vessels as a prerequisite for ensuring freedom of action by armed forces at sea.” – Florent Chauvancy, Vice-President, Flight Avionics, Thales
“By contributing alongside CNN MCO and Thales to a 100% French solution, we reaffirm the importance of sovereign expertise serving the French Navy. A long-standing partner of the naval forces, CS GROUP is proud to support the fleet’s performance and safety, and to ensure the coherent and resilient integration of the entire navigation chain.”
— Frédéric Dussart, Director, Defence & Security Activities, CS Group
Press contacts:
Equans France: Léa Truchetto - +33 (0)6 03 18 42 67 – truchetto@droitdevant.fr
Thales: Camille Heck – +33 (0)6 73 78 33 63 – camille.heck@thalesgroup.com
CS Group - Sopra Steria:
Laura Bandiera – laura.bandiera@soprasteria.com – +33 (0)6 85 74 05 01 / Aurélien Flaugnatti – aurelien.flaugnatti@soprasteria.com – +33 (0)6 30 84 75 81
About CNN MCO
Founded in 2005, when in-service support contracts for French Navy vessels were first opened to competition, CNN MCO has continually expanded its know-how in the management, maintenance and in-service support of all types of naval platforms. Drawing on an extensive network of national and international expertise, this Equans France company — which this year celebrates its 20th anniversary — supports all players in the maritime sector, including the French Navy, the naval forces of other nations and civilian and state-owned ship operators. As its customer base has grown, CNN MCO has expanded its network of local facilities to serve vessels as close as possible to their home ports. In addition to its sites in Brest and Toulon, the company has built up a strong presence across France’s overseas territories, including Réunion, French Guiana, Martinique, New Caledonia and French Polynesia.
About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.
The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.
About CS Group
As a designer, integrator and operator of critical systems, CS GROUP, a subsidiary of the Sopra Steria Group, operates in demanding markets including defence and security, space, aerospace, energy and cybersecurity. With 3,000 employees combining deep technical expertise and industry knowledge, CS GROUP is a trusted partner to its clients for the integration and deployment of operational systems that ensure the control and safety of their missions.
About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.
The world is how we shape it
Sopra Steria (SOP) is listed on Euronext Paris (Compartment A) – ISIN: FR0000050809 For more information, visit us at www.soprasteria.com
We recommend redirecting users to authenticate via the Okta-hosted sign-in page powered by the Okta Identity Engine (OIE) for your custom-built applications. It’s the most secure method for authenticating. You don’t have to manage credentials in your code and can take advantage of the strongest authentication factors without requiring any code changes.
The Okta Sign-In Widget (SIW) built into the sign-in page does the heavy lifting of supporting the authentication factors required by your organization. Did I mention policy changes won’t need any code changes?
But you may think the sign-in page and the SIW are a little bland. And maybe too Okta for your needs? What if you could have a page like this?
With a bright and colorful responsive design change befitting a modern lifestyle.
Let’s add some color, life, and customization to the sign-in page.
In this tutorial, we will customize the sign-in page for a fictional to-do app. We’ll make the following changes:
Use the Tailwind CSS framework to create a responsive sign-in page layout
Add a footer for custom brand links
Display a terms and conditions modal using Alpine.js that the user must accept before authenticating

Take a moment to read this post on customizing the Sign-In Widget if you aren’t familiar with the process, as we will be expanding from customizing the widget to enhancing the entire sign-in page experience.
Stretch Your Imagination and Build a Delightful Sign-In Experience
Customize your Gen3 Okta Sign-In Widget to match your brand. Learn to use design tokens, CSS, and JavaScript for a seamless user experience.
In the post, we covered how to style the Gen3 SIW using design tokens and customize the widget elements using the afterTransform() method. You’ll want to combine elements of both posts for the most customized experience.
Table of Contents
Customize your Okta-hosted sign-in page
Use Tailwind CSS to build a responsive layout
Use Tailwind for custom HTML elements on your Okta-hosted sign-in page
Add custom interactivity on the Okta-hosted sign-in page using an external library
Customize Okta-hosted sign-in page behavior using Web APIs
Add Tailwind, Web APIs, and JavaScript libraries to customize your Okta-hosted sign-in page

Prerequisites
To follow this tutorial, you need:
An Okta account with the Identity Engine, such as the Integrator Free account
Your own domain name
A basic understanding of HTML, CSS, and JavaScript
A brand design in mind. Feel free to tap into your creativity!
An understanding of customizing the sign-in page by following the previous blog post

Let’s get started!
Before we begin, you must configure your Okta org to use your custom domain. Custom domains enable code customizations, allowing us to style more than just the default logo, background, favicon, and two colors. Sign in as an admin and open the Okta Admin Console, navigate to Customizations > Brands and select Create Brand +.
Follow the Customize domain and email developer docs to set up your custom domain on the new brand.
Customize your Okta-hosted sign-in page
We’ll first apply the base configuration using the built-in configuration options in the UI. Add your favorite primary and secondary colors, then upload your favorite logo, favicon, and background image for the page. Select Save when done. Everyone has a favorite favicon, right?
I’ll use #ea3eda and #ffa738 as the primary and secondary colors, respectively.
On to the code. In the Theme tab:
Select Sign-in Page in the dropdown menu
Select the Customize button
On the Page Design tab, select the Code editor toggle to see an HTML page

Note
You can only enable the code editor if you configure a custom domain.
You’ll see the lightweight IDE already has code scaffolded. Press Edit and replace the existing code with the following.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="robots" content="noindex,nofollow" />
<!-- Styles generated from theme -->
<link href="{{themedStylesUrl}}" rel="stylesheet" type="text/css">
<!-- Favicon from theme -->
<link rel="shortcut icon" href="{{faviconUrl}}" type="image/x-icon">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link
href="https://fonts.googleapis.com/css2?family=Inter+Tight:ital,wght@0,100..900;1,100..900&family=Manrope:wght@200..800&display=swap"
rel="stylesheet">
<title>{{pageTitle}}</title>
{{{SignInWidgetResources}}}
<style nonce="{{nonceValue}}">
:root {
--font-header: 'Inter Tight', sans-serif;
--font-body: 'Manrope', sans-serif;
--color-gray: #4f4f4f;
--color-fuchsia: #ea3eda;
--color-orange: #ffa738;
--color-azul: #016fb9;
--color-cherry: #ea3e84;
--color-purple: #b13fff;
--color-black: #191919;
--color-white: #fefefe;
--color-bright-white: #fff;
--border-radius: 4px;
--color-gradient: linear-gradient(12deg, var(--color-fuchsia) 0%, var(--color-orange) 100%);
}
{{#useSiwGen3}}
html {
font-size: 87.5%;
}
{{/useSiwGen3}}
#okta-auth-container {
display: flex;
background-image: {{bgImageUrl}};
}
#okta-login-container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
width: 50vw;
background: var(--color-white);
}
</style>
</head>
<body>
<div id="okta-auth-container">
<div id="okta-login-container"></div>
</div>
<!--
"OktaUtil" defines a global OktaUtil object
that contains methods used to complete the Okta login flow.
-->
{{{OktaUtil}}}
<script type="text/javascript" nonce="{{nonceValue}}">
// "config" object contains default widget configuration
// with any custom overrides defined in your admin settings.
const config = OktaUtil.getSignInWidgetConfig();
config.theme = {
tokens: {
BorderColorDisplay: 'var(--color-bright-white)',
PalettePrimaryMain: 'var(--color-fuchsia)',
PalettePrimaryDarker: 'var(--color-purple)',
BorderRadiusTight: 'var(--border-radius)',
BorderRadiusMain: 'var(--border-radius)',
PalettePrimaryDark: 'var(--color-orange)',
FocusOutlineColorPrimary: 'var(--color-azul)',
TypographyFamilyBody: 'var(--font-body)',
TypographyFamilyHeading: 'var(--font-header)',
TypographyFamilyButton: 'var(--font-header)',
BorderColorDangerControl: 'var(--color-cherry)'
}
}
config.i18n = {
'en': {
'primaryauth.title': 'Log in to create tasks',
}
}
// Render the Okta Sign-In Widget
const oktaSignIn = new OktaSignIn(config);
oktaSignIn.renderEl({ el: '#okta-login-container' },
OktaUtil.completeLogin,
function (error) {
// Logs errors that occur when configuring the widget.
// Remove or replace this with your own custom error handler.
console.log(error.message, error);
}
);
</script>
</body>
</html>
This code adds style configuration to the SIW elements and configures the text for the title when signing in. Press Save to draft.
We must allow Okta to load font resources from an external source, Google, by adding the domains to the allowlist in the Content Security Policy (CSP).
Navigate to the Settings tab for your brand’s Sign-in page. Find the Content Security Policy and press Edit. Add the domains for external resources. In our example, we only load resources from Google Fonts, so we added the following two domains:
*.googleapis.com
*.gstatic.com
Select Save to draft, then Publish to view your changes.
The sign-in page looks more stylized than before. If you resize the browser window, though, you’ll see it doesn’t handle different form factors well. Let’s use Tailwind CSS to add a responsive layout.
Use Tailwind CSS to build a responsive layout
Tailwind makes delivering cool-looking websites much faster than writing CSS manually. We’ll load Tailwind via CDN for our demonstration purposes.
Add the CDN to your CSP allowlist:
https://cdn.jsdelivr.net
Navigate to Page Design, then Edit the page. Add the script to load the Tailwind resources in the <head>. I added it after the <style></style> definitions before the </head>.
<script src="https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4" nonce="{{nonceValue}}"></script>
Loading external resources, like styles and scripts, requires a CSP nonce to mitigate cross-site scripting (XSS). You can read more about the CSP nonce on the CSP Quick Reference Guide.
Note
Don’t use Tailwind from NPM CDN for production use cases. The Tailwind documentation notes this is for experimentation and prototyping only, as the CDN has rate limits. If your brand uses Tailwind for other production sites, you’ve most likely defined custom mixins and themes in Tailwind. Therefore, reference your production Tailwind resources in place of the CDN we’re using in this post.
Remove the styles for #okta-auth-container and #okta-login-container from the <style></style> section. We can use Tailwind to handle it. The <style></style> section should only contain the CSS custom properties defined in :root and the directive to use SIW Gen3.
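For reference, after removing those two rules the <style> block should contain only the custom properties and the Gen3 directive, like this:
<style nonce="{{nonceValue}}">
  :root {
    --font-header: 'Inter Tight', sans-serif;
    --font-body: 'Manrope', sans-serif;
    --color-gray: #4f4f4f;
    --color-fuchsia: #ea3eda;
    --color-orange: #ffa738;
    --color-azul: #016fb9;
    --color-cherry: #ea3e84;
    --color-purple: #b13fff;
    --color-black: #191919;
    --color-white: #fefefe;
    --color-bright-white: #fff;
    --border-radius: 4px;
    --color-gradient: linear-gradient(12deg, var(--color-fuchsia) 0%, var(--color-orange) 100%);
  }
  {{#useSiwGen3}}
  html {
    font-size: 87.5%;
  }
  {{/useSiwGen3}}
</style>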
Add the styles for Tailwind. We’ll add the classes to show the login container without the hero image in smaller form factors, then display the hero image with different widths depending on the breakpoints.
The two div containers look like this:
<div id="okta-auth-container" class="h-screen flex bg-(--color-gray) bg-[{{bgImageUrl}}]">
<div id="okta-login-container" class="w-full min-w-sm lg:w-2/3 xl:w-1/2 bg-(image:--color-gradient) lg:bg-none bg-(--color-white) flex justify-center items-center"></div>
</div>
Save the file and publish the changes. Feel free to test it out!
Use Tailwind for custom HTML elements on your Okta-hosted sign-in page
Tailwind excels at adding styled HTML elements to websites. We can also take advantage of this. Let’s say you want to maintain continuity of the webpage from your site through the sign-in page by adding a footer with links to your brand’s sites. Adding this new section involves changing the HTML node structure and styling the elements.
We want a footer pinned to the bottom of the view, so we’ll need a new parent container with vertical stacking and ensure the height of the footer stays consistent. Replace the HTML node structure to look like this:
<div class="flex flex-col min-h-screen">
<div id="okta-auth-container" class="flex grow bg-(--color-gray) bg-[{{bgImageUrl}}]">
<div class="w-full min-w-sm lg:w-2/3 xl:w-1/2 bg-(image:--color-gradient) lg:bg-none bg-(--color-white) flex justify-center items-center">
<div id="okta-login-container"></div>
</div>
</div>
<footer class="font-(family-name:--font-body)">
<ul class="h-12 flex justify-evenly items-center text-(--color-azul)">
<li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Terms</a></li>
<li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Docs</a></li>
<li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com/blog">Blog</a></li>
<li><a class="hover:text-(--color-orange) hover:underline" href="https://devforum.okta.com">Community</a></li>
</ul>
</footer>
</div>
Everything redirects to the Okta Developer sites. 😊 I also maintained the style of font, text colors, and text decoration styles to match the SIW elements. CSS custom properties make consistency manageable.
Feel free to save and publish to check it out!
Add custom interactivity on the Okta-hosted sign-in page using an external library
Tailwind is great at styling HTML elements, but it’s not a JavaScript library. If we want interactive elements on the sign-in page, we must rely on Web APIs or libraries to assist us. Let’s say we want to ensure that users who sign in to the to-do app agree to the terms and conditions. We want a modal that blocks interaction with the SIW until the user agrees.
We’ll use Alpine for the heavy lifting because it’s a lightweight JavaScript library that suits this need. We add the library via the NPM CDN, as we have already allowed the domain in our CSP. Add the following to the <head></head> section of the HTML. I added mine directly after the Tailwind script.
<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js" nonce="{{nonceValue}}"></script>
Note
We’re including Alpine from the NPM CDN for demonstration and experimentation. For production applications, use a CDN that supports production scale. The NPM CDN applies rate limiting to prevent production-grade use.
Next, we add the HTML tags to support the modal. Replace the HTML node structure to look like this:
<div class="flex flex-col min-h-screen">
<div id="modal"
x-data
x-cloak
x-show="$store.modal.open"
x-transition:enter="transition ease-out duration-300"
x-transition:enter-start="opacity-0"
x-transition:enter-end="opacity-100"
x-transition:leave="transition ease-in duration-200"
x-transition:leave-start="opacity-100"
x-transition:leave-end="opacity-0 hidden"
class="fixed inset-0 z-50 flex items-center justify-center bg-(--color-black)/80 bg-opacity-50">
<div x-transition:enter="transition ease-out duration-300"
x-transition:enter-start="opacity-0 scale-90"
x-transition:enter-end="opacity-100 scale-100"
x-transition:leave="transition ease-in duration-200"
x-transition:leave-start="opacity-100 scale-100"
x-transition:leave-end="opacity-0 scale-90"
class="bg-(--color-white) rounded-(--border-radius) shadow-lg p-8 max-w-md w-full mx-4">
<h2 class="text-2xl font-(family-name:--font-header) text-(--color-black) mb-4 text-center">Welcome to to-do app</h2>
<p class="text-(--color-black) mb-6">This app is in beta. Thank you for agreeing to our terms and conditions.</p>
<button @click="$store.modal.hide()"
class="w-full bg-(--color-fuchsia) hover:bg-(--color-orange) text-(--color-bright-white) font-medium py-2 px-4 rounded-(--border-radius) transition duration-200">
Agree
</button>
</div>
</div>
<div id="okta-auth-container" class="flex grow bg-(--color-gray) bg-[{{bgImageUrl}}]">
<div class="w-full min-w-sm lg:w-2/3 xl:w-1/2 bg-(image:--color-gradient) lg:bg-none bg-(--color-white) flex justify-center items-center">
<div id="okta-login-container"></div>
</div>
</div>
<footer class="font-(family-name:--font-body)">
<ul class="h-12 flex justify-evenly items-center text-(--color-azul)">
<li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Terms</a></li>
<li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com">Docs</a></li>
<li><a class="hover:text-(--color-orange) hover:underline" href="https://developer.okta.com/blog">Blog</a></li>
<li><a class="hover:text-(--color-orange) hover:underline" href="https://devforum.okta.com">Community</a></li>
</ul>
</footer>
</div>
It’s a lot to add, but I want the smooth transition animations. 😅 The built-in enter and leave states make adding the transition animation so much easier than doing it manually.
Notice we’re using a state value to determine whether to show the modal. We’re using global state management, and setting it up is the next step. We’ll initialize the state when Alpine initializes. Find the comment // Render the Okta Sign-In Widget within the <script></script> section, and add the following code, which runs after Alpine initializes:
document.addEventListener('alpine:init', () => {
Alpine.store('modal', {
open: true,
show() {
this.open = true;
},
hide() {
this.open = false;
}
});
});
The event listener watches for the alpine:init event and runs a function that registers a store named modal with Alpine. The modal store contains a property that tracks whether the modal is open, plus helper methods for showing and hiding it.
When you save and publish, you’ll see the modal upon site reload!
The modal stays put even if the user presses Esc or selects the scrim. Users must agree to the terms to continue.
Customize Okta-hosted sign-in page behavior using Web APIs
We display the modal as soon as the webpage loads. It works, but we can also display the modal after the Sign-In Widget renders. Doing so allows us to use the nice enter and leave CSS transitions Alpine supports. We want to watch for changes to the DOM within the <div id="okta-login-container"></div>. This is the parent container that renders the SIW. We can use the MutationObserver Web API and watch for DOM mutations within the div.
In the <script></script> section, after the event listener for alpine:init, add the following code:
const loginContainer = document.querySelector("#okta-login-container");
// Use MutationObserver to watch for auth container element
const mutationObserver = new MutationObserver(() => {
const element = loginContainer.querySelector('[data-se*="auth-container"]');
if (element) {
document.getElementById('modal').classList.remove('hidden');
// Open modal using Alpine store
Alpine.store('modal').show();
// Clean up the observer
mutationObserver.disconnect();
}
});
mutationObserver.observe(loginContainer, {
childList: true,
subtree: true
});
Let’s walk through what the code does. First, we’re creating a variable to reference the parent container for the SIW, as we’ll use it as the root element to target our work. Mutation observers can negatively impact performance, so it’s essential to limit the scope of the observer as much as possible.
Create the observer
We create the observer and define the behavior for observation. The observer first looks for the element with the data attribute named se, which includes the value auth-container. Okta adds a node with the data attribute for internal operations. We’ll do the same for our internal operations. 😎
Define the behavior upon observation
Once we have an element matching the auth-container data attribute, we show the modal, which triggers the enter transition animation. Then we clean up the observer.
Identify what to observe
We start observing the DOM by passing in the element to use as the root, along with a configuration specifying what to watch for. We want to look for changes in child elements and the subtree from the root to find the SIW elements.
Lastly, let’s enable the modal to trigger based on the observer. I intentionally provided you with code snippets that force the modal to display before the SIW renders, so you could take sneak peeks at your work as we went along.
In the HTML node structure, find the <div id="modal">. It’s missing a class that hides the modal initially. Add the class hidden to the class list. The class list for the <div> should look like this:
<div id="modal"
x-data
x-cloak
x-show="$store.modal.open"
x-transition:enter="transition ease-out duration-300"
x-transition:enter-start="opacity-0"
x-transition:enter-end="opacity-100"
x-transition:leave="transition ease-in duration-200"
x-transition:leave-start="opacity-100"
x-transition:leave-end="opacity-0 hidden"
class="hidden fixed inset-0 z-50 flex items-center justify-center bg-(--color-black)/80 bg-opacity-50">
<!-- Remaining modal structure here. Compare your work to the class list above -->
</div>
Then, in the alpine:init event listener, change the modal’s open property to default to false:
document.addEventListener('alpine:init', () => {
Alpine.store('modal', {
open: false,
show() {
this.open = true;
},
hide() {
this.open = false;
}
});
});
Save and publish your changes. You’ll now notice a slight delay before the modal eases into view. So smooth!
It’s worth noting that our solution isn’t foolproof; a savvy user can hide the modal and continue interacting with the sign-in widget by manipulating elements in the browser’s debugger. You’ll need to add extra checks and more robust code for foolproof methods. Still, this example provides a general idea of capabilities and how one might approach adding interactive components to the sign-in experience.
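As one possible direction for those extra checks (not part of the original walkthrough), you could track agreement in a small additional Alpine store and re-open the modal whenever focus reaches the widget before the user has agreed. The terms store, the focusin listener, and the extra click expression below are hypothetical additions layered on top of the code above:
// Hypothetical hardening sketch: track agreement and re-open the modal if the
// user reaches the Sign-In Widget before agreeing to the terms.
document.addEventListener('alpine:init', () => {
  Alpine.store('terms', { agreed: false });
});

document.querySelector('#okta-login-container').addEventListener('focusin', () => {
  // Runs only after the widget has rendered, so Alpine is already initialized
  if (!Alpine.store('terms').agreed) {
    Alpine.store('modal').show();
  }
});
The Agree button would then record agreement before closing the modal, for example @click="$store.terms.agreed = true; $store.modal.hide()". Even then, this only raises the bar for casual tampering; anything that truly matters should be enforced server-side.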
Don’t forget to test any implementation changes to the sign-in page for accessibility. The default site and the sign-in widget are accessible. Any changes or customizations we make may alter the accessibility of the site.
You can connect your brand to one of our sample apps to see it work end-to-end. Follow the instructions in the README of our Okta React Sample to run the app locally. You’ll need to update your Okta OpenID Connect (OIDC) application to work with the domain. In the Okta Admin Console, navigate to Applications > Applications and find the Okta application for your custom app. Navigate to the Sign On tab. You’ll see a section for OpenID Connect ID Token. Select Edit, then choose Custom URL (your brand’s sign-in URL) as the Issuer value.
You’ll use the issuer value, which matches your brand’s custom URL, and the Okta application’s client ID in your custom app’s OIDC configuration.
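For context, here’s a hedged sketch of the kind of OIDC settings the sample ends up using; the exact file and field names depend on the sample app, and the issuer and client ID values below are placeholders for your own:
// Placeholder OIDC settings; substitute your brand's custom URL and your app's client ID.
const oidcConfig = {
  issuer: 'https://id.example.com',                          // your brand's custom URL, selected as the Issuer above
  clientId: '0oaexampleclientid',                            // client ID from Applications > Applications
  redirectUri: window.location.origin + '/login/callback',   // sign-in redirect URI registered on the app
  scopes: ['openid', 'profile', 'email'],
  pkce: true,
};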
Add Tailwind, Web APIs, and JavaScript libraries to customize your Okta-hosted sign-in page
I hope you found this post interesting and that it showed how much you can customize the Okta-hosted Sign-In Widget experience.
You can find the final code for this project in the GitHub repo.
If you liked this post, check out these resources.
Stretch Your Imagination and Build a Delightful Sign-In Experience
The Okta Sign-In Widget
Remember to follow us on LinkedIn and subscribe to our YouTube for more exciting content. Let us know how you customized the Okta-hosted sign-in page. We’d love to see what you came up with.
We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!
The Office of the Comptroller of the Currency has confirmed that national banks can hold certain digital assets on their balance sheets for operational purposes. In Interpretive Letter 1186, issued November 18, the OCC clarified that banks may hold native blockchain tokens like Ether (ETH) or Solana (SOL) when needed to pay network fees for permissible activities or to test blockchain-based platforms.
India’s crypto market, with over 90 million users, continues to demand infrastructure that balances innovation with regulatory compliance. Shyft Network has integrated Veriscope with Fincrypto, marking another step in bringing Travel Rule compliance to India’s evolving digital asset ecosystem.
Fincrypto is India’s first outlet exchange, pioneering a hybrid model that combines online crypto trading with physical walk-in locations. The platform provides instant INR-crypto on-ramp and off-ramp services, plus spot trading for Bitcoin, Ethereum, and major stablecoins (USDT, USDC, DAI, USDD). The Veriscope integration brings automated compliance through cryptographic proof verification without disrupting user experience.
Bridging the Trust Gap in India’s Crypto Market
As India’s regulatory framework for digital assets continues to take shape, Virtual Asset Service Providers need solutions that address both compliance requirements and user trust. Fincrypto’s unique approach — offering physical outlet locations alongside online services — tackles a critical challenge in emerging crypto markets: building user confidence through tangible presence while maintaining the efficiency of digital platforms.
The outlet exchange model is particularly relevant in India, where users often prefer the security of face-to-face transactions for significant financial decisions. By integrating Veriscope, Fincrypto ensures that both its physical and digital operations maintain consistent compliance standards, creating a seamless experience whether customers walk into an outlet or trade online.
This integration enables Fincrypto to verify wallet ownership, conduct secure data exchanges with counterparty VASPs, and maintain audit trails — all while preserving user privacy through cryptographic verification rather than centralized data storage.
Compliance for India’s Physical-Digital Hybrid
The Shyft Network-Fincrypto integration brings key capabilities to India’s first outlet exchange:
Automated Travel Rule Compliance: Cryptographic proof exchanges handle regulatory requirements without disrupting transactions, whether initiated online or at physical locations
Privacy-First Verification: User Signing technology protects customer data while enabling secure identity verification across all service channels
Scalable Infrastructure: Built-in compliance architecture supports Fincrypto’s expansion plans, including their roadmap for 100+ outlets nationwide and entry into GIFT City
Cross-Platform Consistency: Uniform security and compliance standards across the online platform and physical outlets, building trust with retail investors, traders, and businesses
Veriscope’s Growing Presence in India
With Fincrypto’s integration, Veriscope expands its presence in India’s rapidly growing crypto ecosystem, joining platforms like Nowory and Carret in building compliant infrastructure for the market. Each integration addresses different segments of India’s diverse digital asset landscape — from retail trading to institutional services to hybrid physical-digital models.
As India’s regulatory environment matures and global compliance standards become essential for market participants, Veriscope provides VASPs with the infrastructure needed to meet both domestic KYC/AML requirements and international FATF Travel Rule obligations. This positions Indian platforms for sustainable growth and international partnerships.
About Veriscope
Veriscope, built on Shyft Network, provides Travel Rule compliance infrastructure for Virtual Asset Service Providers worldwide. Using User Signing cryptographic technology, the platform enables secure wallet verification and data exchanges between VASPs while protecting user privacy. Veriscope simplifies regulatory compliance for crypto exchanges and payment platforms operating in regulated markets.
About Fincrypto
Fincrypto, operated by Digital Secure Service Private Limited, is India’s first outlet exchange combining online trading with physical walk-in locations. The platform offers instant INR-crypto on/off ramps and spot trading for Bitcoin, Ethereum, and stablecoins (USDT, USDC, DAI, USDD). All services are KYC-verified and compliant with Indian regulations, providing transparent pricing and instant settlement for retail and business customers.
Visit Shyft Network, subscribe to our newsletter, or follow us on X, LinkedIn, Telegram, and Medium.
Book a consultation at calendly.com/tomas-shyft or email bd@shyft.network
Shyft Network Brings Travel Rule Compliance to India’s First Outlet Exchange, Fincrypto was originally published in Shyft Network on Medium.
State governments today face pressure to modernize their digital identity systems, whether for mobile driver’s licenses, benefits access, permitting, or online services. As states move from pilot projects to statewide deployments, a recurring challenge emerges: how do we ensure that systems built today can work with systems built tomorrow, and with systems in other jurisdictions?
That challenge is interoperability.
Interoperability is often treated as a technical choice: which protocol to adopt, which standard to implement, or which vendor to select. But technology alone doesn’t create interoperability. States do, through the policy decisions they encode into procurement requirements, certification criteria, governance structures, and vendor evaluation. These decisions determine whether identity systems remain flexible, competitive, and future-proof, or whether they lock in silos that will be costly to unwind.
This blog post outlines why interoperability should be designed as a policy requirement from the outset and provides practical recommendations for state CIOs, digital services leaders, and procurement teams. The goal is to help states build digital identity infrastructure that can evolve, avoid vendor lock-in, and integrate across agencies and jurisdictions.
Wayne Chang, SpruceID CEO, at Utah's 2025 SEDI Summit
Designing Openness Into Procurement
We recommend that states encourage a diverse ecosystem of vendors. This can be achieved by maintaining open certification and procurement processes that don’t exclude smaller companies and startups. This ensures that the market does not consolidate around a single provider, allowing innovation and healthy competition to flourish.
Specifically for states considering a statewide digital identity program, there should be no requirement that every credential type be issued using the same vendor software. Instead, multiple issuers and technology providers should be able to participate so long as they comply with a common trust framework, a state digital identity profile, and certification standards. This allows, for example, one vendor to issue digital driver licenses and another to issue Veteran ID, giving residents flexibility while keeping the overall system interoperable and cohesive.
Open Standards Are the Foundation for Interoperability
States should prefer that the technical standards used in a state digital identity program be open, freely available, and implementable by the public and private sector without proprietary licensing restrictions. Open standards are critical to ensuring transparency, interoperability, and long-term sustainability.
States that rely on proprietary or niche technologies often find that what looks like a shortcut quickly becomes a long-term constraint. These systems introduce hidden costs—rising licensing fees, custom integration work, and unpredictable upgrade cycles—that drain budgets and slow modernization. Even worse, they create “digital islands” that can’t communicate with the rest of the state’s infrastructure, reinforcing data silos and complicating everything from inter-agency coordination to seamless resident services.
For digital identity programs, the stakes are even higher. A credential that isn’t interoperable across jurisdictions or recognized by key private-sector partners loses much of its value. And beneath it all is the biggest risk of all: vendor lock-in. When states depend on a single provider for ongoing maintenance and upgrades, they lose leverage and flexibility—while niche or unsupported systems introduce real security risks. End-of-life or obscure software is far more likely to harbor vulnerabilities, making vendor-neutral, standards-based approaches essential for any resilient digital government strategy.
Building on open standards is the most effective way for states to avoid vendor lock-in, improve security, and future-proof their infrastructure. Open standards—like ISO mDL, W3C Verifiable Credentials, and IETF SD-JWTs—ensure interoperability, portability, and transparent community-led governance. This approach breaks down data silos, strengthens security through broad peer review, reduces costs by fostering competitive markets, and gives states the flexibility to upgrade or integrate new technologies without starting from scratch.
With most organizations now increasing their reliance on open-source and standards-based tools, it’s clear that open foundations create a more resilient, innovative, and cost-effective path for digital government. We discuss this more in our blog, “A Practical Checklist to Future-Proof Your State’s Digital Infrastructure.”
Using Certification to Operationalize Policy
Interoperability is best achieved by enforcing statutory principles rather than mandating a single technology. When states ground their ecosystems in requirements like privacy, minimal disclosure, unlinkability, and user control, they create a resilient framework that can adapt over time. This approach ensures that interoperability endures even as vendors, tools, and industry practices change.
To put these principles into practice, states can require wallets and issuers to actively demonstrate compliance. This shifts the focus from choosing a preferred technology to verifying measurable outcomes. It also creates a clear baseline of expectations that every participant in the ecosystem must meet.
A certification framework strengthens this model even further. By standardizing safeguards across vendors, it provides a transparent way to evaluate and compare solutions. It also encourages competition, supports accountability, and keeps the system aligned with policy goals as technologies continue to evolve.
For a deeper look at how certification utilizes these principles, read our post, “Digital Wallet Certification: The Foundation for Interoperable State Identity Systems.”
Preserving Future Flexibility and Governance
To maintain relevance and adaptability, states should also establish a governance process for updating their digital identity profile. This process should include structured input from public agencies, private vendors, civil society, and technical experts, and ensure that updates are guided by statutory principles, technology maturity, and real-world market adoption.
States should also create structured avenues for ongoing engagement and collaboration with partners. This includes convening advisory groups with representatives from financial services, healthcare, education, retail, and consumer advocacy and social welfare organizations, as well as with federal and interstate partners.
By framing requirements around these principles, states can remain open to innovation, avoid vendor or protocol lock-in, and maximize interoperability across jurisdictions and sectors. This approach ensures that vendors can propose solutions aligned with mature, well-supported ecosystems while also leaving room for emerging technologies to demonstrate value.
Bringing Policy and Practice Together
Interoperability isn’t a byproduct of technology; it’s the result of clear policy choices. By incorporating openness into procurement, enforcing standards through certification, and refining governance through practical implementation, states can make interoperability a built-in feature of the system rather than an afterthought. By grounding digital identity programs in statutory principles such as privacy, minimal disclosure, and open standards, states maintain healthy competition and an adaptable ecosystem.
For agencies defining digital identity strategy, SpruceID can help operationalize privacy, interoperability, and open standards in your state’s identity ecosystem.
Get in Touch
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.
The Financial Stability Board (FSB) published its first comprehensive assessment of global crypto regulation, revealing a sector racing ahead of its regulatory framework. While crypto market capitalization surged to $4 trillion in early August 2025, the FSB's October 2025 thematic review shows that regulatory implementation remains incomplete, fragmented and insufficient to address mounting financial stability risks.
The Argonaut lander will fly to the Moon and land on its surface, ensuring European autonomous access to the Moon.
Cologne, November 20th, 2025 - Thales Alenia Space, a joint venture between Thales (67%) and Leonardo (33%), has signed multiple contracts shaping the core industrial team that will build the European Space Agency (ESA) Argonaut Lunar Descent Element. ESA’s Argonaut Mission, planned for launch from the 2030s, will deliver cargo, infrastructure and scientific instruments to the Moon’s surface.
These contracts follow the one already signed between ESA and Thales Alenia Space in January 2025. It referred to the design, development and delivery of the Lunar Descent Element (LDE), including responsibility for mission design and integration.
As prime contractor and system integrator of the LDE, Thales Alenia Space in Italy leads the industrial consortium that is responsible for the system, the entry descent and landing aspects, as well as the general and specific architectures of the thermomechanical, avionics and software chains.
The core industrial team is made up of Thales Alenia Space in Italy, Thales Alenia Space in France, Thales Alenia Space in the UK as well as OHB System AG and Nammo, part of the consortium as strategic subcontractor for the propulsion.
“The creation of this consortium led by Thales Alenia Space represents a significant milestone in this challenging Argonaut mission,” said Giampiero Di Paolo, Deputy CEO and Senior Vice President, Observation, Exploration and Navigation at Thales Alenia Space. “Under the leadership of the European Space Agency and alongside the consortium partners, Thales Alenia Space is playing a pioneering role to enable the European autonomous access to the Moon”.
“Thales Alenia Space has supplied a significant proportion of the International Space Station’s pressurized volume and is also playing a major role on board Artemis, manufacturing key elements of Orion’s European service module and leading flagship transportation programs, thus confirming once more that our company is a major player at the forefront of exploration and space transportation systems”, Thales Alenia Space President and CEO Hervé Derrey added.
The consortium at a glance:
Thales Alenia Space in Italy: prime contractor and end-to-end system integrator including architectures definition, final verification and validation as well as assembly integration and testing.
Thales Alenia Space in France: responsible for the design, development, and validation of the Data Handling Sub-System, including Middleware software, as well as the procurement of its component equipment including On-Board Computers.
Thales Alenia Space in the UK: responsible for the Propulsion subsystem development and for the procurement of main components, in particular the propellant tanks and thrusters.
OHB System AG: responsible for the guidance, navigation and control (GNC), electrical power system (EPS) and telecommunications (TT&C) subsystems, as well as procurement of their component equipment (solar array, batteries, LIDAR, and a series of transponders).
Nammo (Nordic Ammunition Company): responsible for the design and procurement of the Main Engine, a critical asset not only for the propulsion subsystem but for the entire Argonaut LDE end item.
About Argonaut:
The Argonaut spacecraft consists of three main elements: the Lunar Descent Element (LDE), which flies to the Moon and lands on the target; the cargo platform, which is the interface between the lander and its payload; and finally, the payload element that the mission designers want to send to the Moon.
Adaptability is a key element of Argonaut's design, which is why the cargo platform is designed to accept any mission profile: cargo for astronauts near the landing site, a rover, technology demonstration packages, production facilities using lunar resources, a lunar telescope or even a power station.
The project will strengthen Thales Alenia Space’s skills in several technological areas essential to space exploration beyond the Moon. The future space ecosystem requires new solutions dedicated to the transport and return of cargo from low Earth orbit and lunar orbit, as well as crew transport to low Earth orbit. Thales Alenia Space is ready to put in place what is needed to prepare for humanity’s future life and presence in Space, laying the foundations for the post-ISS era and meeting new economic needs for research and science.
About Thales Alenia Space
Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental monitoring, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the Space Alliance, which offers a complete range of solutions including services. Thales Alenia Space posted consolidated revenues of €2.23 billion in 2024 and has more than 8,100 employees in 7 countries with 14 sites in Europe. www.thalesaleniaspace.com
We’re excited to announce that the official House Party Protocol (HPP) migration portal is now open, marking a key milestone in the Aergo → HPP transition.
[Official Migration Portal and Bridge]
This portal will allow legacy Aergo/AQT holders to convert their tokens into HPP ahead of upcoming listings on GOPAX and Coinone, so users can be fully prepared when trading goes live.
As we enter this next phase, our priority is to ensure everyone can migrate and move liquidity smoothly across exchanges. The guide below explains everything you need to know.
❌ Before You Trade: Deposit HPP (Mainnet) Only, Not AERGO or AQT ❌
Both GOPAX and Coinone support only HPP (Mainnet) deposits and withdrawals. That means:
Do NOT deposit AERGO.
Do NOT deposit AQT.
Only HPP (Mainnet) tokens will be accepted for trading on both exchanges.
If you currently hold AERGO or AQT, you must first migrate them to HPP via the two-step process (Migration → Bridge) before depositing them to exchanges.
[Token Swap & Migration Guide]
Migration Guide
Your migration path depends on which network your AERGO tokens are on and where they are held. Below are the two most common cases.
Case 1: AERGO (ETH). If your AERGO tokens are on ERC-20 (held on other exchanges or private wallets):
AERGO (ETH) → Migration Portal → HPP (ETH) → HPP Bridge → HPP (Mainnet)
Case 2: AERGO (Mainnet). If your AERGO is on Mainnet, such as staking, Bithumb, or any exchange that supports Aergo Mainnet:
AERGO (Mainnet) → Migration Portal / Aergo Bridge → HPP (ETH) → HPP Bridge → HPP (Mainnet)
Only after you complete both steps (Migration and Bridge) can you deposit your HPP (Mainnet) to GOPAX or Coinone. This route ensures your legacy Mainnet tokens convert properly into HPP.
Final Step: Trade on GOPAX & Coinone
Once your tokens are on HPP (Mainnet), you can deposit them to either exchange and begin trading immediately. Both GOPAX and Coinone support the same HPP Mainnet token, ensuring consistent trading and liquidity across platforms.
❌ Do NOT send HPP (ETH) to the exchanges. Only HPP (Mainnet) deposits are supported.
HPP Migration Backed by Coinone and GOPAX, With Further Updates Ahead
What’s Next
More global exchanges will support HPP as the migration continues. We’ll provide additional guides, visuals, and tutorials to help users transition without confusion.
Welcome to the House Party Protocol, and thank you for being part of the journey!
📢 HPP Migration Portal Is Now Live Ahead of GOPAX and Coinone Listings was originally published in Aergo (HPP) on Medium.
Did you know that for every minute you spend online, 30 ads are silently stalking you and slowing down your device? From intrusive pop-ups to autoplay videos, ads are not just annoying – they drain your data and invade your privacy. It’s time to say “No” to it all. This article reveals how Herond Browser helps you completely eliminate this digital “pollution.” Discover now to experience faster, cleaner, and uninterrupted web browsing.
Why Current Adblockers Are Failing You
The Failure of Traditional Adblockers (Filter List Reliance)
Outdated Mechanism: Traditional adblockers rely heavily on fixed Filter Lists to identify and hide ads.
Easily Bypassed: This method is easily circumvented by major platforms when they update their code structures.
The Biggest Challenge: They cannot effectively counter Server-Side Ad Insertion (SSAI) technology – where ads are stitched directly into the content stream from the server – making blocking attempts futile.
Result: Ad-blocking performance is severely compromised, leading to a disrupted user experience.
The Threat of Malvertising and Tracking
Malware Risks (Malvertising): Ads are often gateways for Malvertising, allowing viruses or malware to infiltrate your device the moment an ad loads, even without a click.
Privacy Issues: Ads come with countless invisible Trackers that continuously collect data on your browsing habits, interests, and location.
The Goal: This tracking creates a detailed personal profile, severely violating your privacy to serve hyper-personalized ads.
Result: Browsing without protection means you are voluntarily trading your safety and personal information.
The Ultimate Solution: Herond Browser’s “No Compromise” Mechanism
Next-Gen Blocking Engine
Core Mechanism: Herond utilizes a highly efficient ad-blocking mechanism at the Network/DNS level.
How It Works: Ads are blocked before they can even load onto the browser, optimizing speed and saving system resources.
Defeating SSAI: Herond is capable of processing and bypassing complex ad streams like Server-Side Ad Insertion (SSAI).
Smart Content Handling: Similar to SponsorBlock, Herond intelligently identifies and skips sponsored segments or irrelevant content, delivering a seamless entertainment experience.
Security and Privacy-First Features
Anti-Tracking: Proactively blocks invisible trackers, pixels, and third-party cookies.
Privacy Protection: Ensures your browsing activity remains absolutely private, with no personal data collection.
Anti-Malvertising: Integrated protection against malicious websites with automatic Phishing alerts.
Absolute Safety: Creates a secure browsing environment, shielding you from potential malware threats.
Built-in VPN/Proxy: Enhances security by encrypting traffic and hiding your real IP address.
Unmatched Speed and Efficiency
Superior Page Load Speed: Achieves significantly faster load times by eliminating the need to download heavy scripts, images, and ad videos.
Data Savings: A clear benefit for mobile users, significantly reducing monthly 3G/4G/5G data consumption.
Battery Saver: Reduces the load on your CPU and memory by not processing ads, extending your device’s battery life.
How to Activate and Experience an Ad-Free Internet with Herond Browser
Ready to reclaim your internet? Follow these simple steps to install Herond Browser:
Step 1: Visit herond.org and click on “Download Herond”.
Step 2: Select the Herond version compatible with your current device configuration.
Step 3: Click “Download”.
Step 4: Launch Herond and start browsing safely!
Conclusion: The Future is in Your Hands, with Herond Browser
In summary, Herond Browser is more than just a browser; it is a comprehensive solution offering three core benefits: Superior Speed + Absolute Security + A Completely Ad-Free Experience.
Gone are the days of accepting 30 ads stalking you every minute and trading away your privacy. Herond puts control of the Internet back in your hands.
Don’t wait any longer. Download Herond Browser today to start experiencing web browsing as it was meant to be: fast, safe, and uninterrupted.
About Herond Browser
Herond is a browser dedicated to blocking ads and tracking cookies. With lightning-fast page load speeds, it allows you to surf the web comfortably without interruptions. Currently, Herond features two core products:
Herond Shield: Advanced software for ad-blocking and user privacy protection.
Herond Wallet: A multi-chain, non-custodial social wallet.
Herond Browser aims to bring Web 3.0 closer to global users. We hope that in the future, everyone will have full control over their own data. The browser app is now available on Google Play and the App Store, delivering a convenient experience for users everywhere.
Follow our upcoming posts for more useful information on safe and effective web usage. If you have any suggestions or questions, contact us immediately on the following platforms:
Telegram: https://t.me/herond_browser
X (Twitter): @HerondBrowser
The post Every Minute You Spend Online, 30 Ads Are Stalking You – Herond Browser Says “No” appeared first on Herond Blog.
Every day you surf the web, your personal data is being collected without your consent. According to research from the Pew Research Center (2025), 90% of global internet users are having their online activities tracked.
They watch everything from your search history and shopping habits to your precise geographic location. With every click and every website visit, hundreds of hidden trackers are recording your behavior to serve targeted ads. Are you being “stalked” online without realizing it? Don’t lose your privacy before it’s too late!
The Reality of Online Tracking
You are being tracked – right now.
Whether you’re scrolling through TikTok or using payment apps, hundreds of cookies and trackers are silently recording your behavior. According to Statista (2025), 72% of global users are worried about their privacy. In Vietnam, this number is even higher.
Cisco (2025) revealed a shocking statistic: 85% of users do not know their personal data is being sold to advertising companies and even hackers.
The Consequences of Being Tracked
If you don’t act today, your data will be exploited every single second. So, what can we do?
Being tracked isn’t just “annoying” – it is a real threat.
Financial Risk: McKinsey (2025) reports that 71% of users would stop shopping with a brand if they knew their data was being abused.
Personal Harassment: Pew Research (2025) warns that 44% of users in Vietnam have faced online harassment due to leaked personal information. This ranges from phishing messages and “stalker” ads to the theft of bank accounts.
The Solution is Coming
Millions have lost their data permanently. Do you want to be the next victim?
Herond Browser – The Breakthrough Ad-Blocking Browser of 2025 is Launching Soon!
In just a few days, we will officially launch Herond Browser – the ultimate tool to block 100% of trackers, stop global surveillance, boost browsing speed by 3x, and eliminate intrusive ads on all devices: mobile, desktop, and tablet.
Absolute Security: No cookies, no traces left behind.
Ad-Free Experience: Browse the web clean and fast.
Herond Shield Integration: Experience smooth, A-Z safety with our built-in protection suite.
Conclusion
90% of Internet users are being tracked without knowing it – and you could be one of them.
Every click, every transaction, and every message is being silently recorded by hundreds of trackers, sold to advertisers, or falling into the hands of hackers. You don’t have much time left. Act now to protect yourself in the online space!
About Herond
Herond is a browser dedicated to blocking ads and tracking cookies. With lightning-fast page load speeds, it allows you to surf the web comfortably without interruptions. Currently, Herond features two core products:
Herond Shield: Advanced software for ad-blocking and user privacy protection.
Herond Wallet: A multi-chain, non-custodial social wallet.
Herond Browser aims to bring Web 3.0 closer to global users. We hope that in the future, everyone will have full control over their own data. The browser app is now available on Google Play and the App Store, delivering a convenient experience for users everywhere.
Follow our upcoming posts for more useful information on safe and effective web usage. If you have any suggestions or questions, contact us immediately on the following platforms:
Telegram: https://t.me/herond_browser
X (Twitter): @HerondBrowser
The post 90% of Internet Users Are Being Tracked Without Knowing It – Are You One of Them? appeared first on Herond Blog.
20 Nov 2025
In line with the UAE’s vision to enhance National Cyber Sovereignty, Thales and the UAE Cyber Security Council (CSC) have signed a Memorandum of Understanding (MoU) to strengthen and accelerate the country’s cyber capabilities. This partnership includes the creation of three major capabilities within the framework of the Cyber Centre of Excellence: a Space META Security Operation Centre (SOC), a Cyber Evaluation Lab, and a Crypto Lab. This long-term strategic collaboration focuses on building local capacities, driving innovation, and strengthening sovereign cyber defence capabilities, with the space sector as the first area of interest.
On the occasion of Dubai Air Show, Thales and the UAE Cyber Security Council (CSC) have signed a Memorandum of Understanding (MoU) to establish a long-term strategic partnership aimed at developing a Cyber Centre of Excellence in the UAE.
This agreement covers the co-development and establishment of three key projects:
Space META-SOC (Security Operation Centre): a specialised centre dedicated to the cybersecurity of space infrastructures, developed in cooperation with a local industrial partner. Connected to the national SOC, it will integrate advanced technical capabilities and enable knowledge transfer through training focused on satellite constellations and ground system monitoring.
Cyber Evaluation Lab: a testing and evaluation laboratory for software and hardware assets, designed to be operated by UAE nationals. This lab will also support the development of policies, standards, and governance frameworks for the UAE’s critical domains, with the space sector as the initial area of interest, positioning the country as a regional showcase for cyberspace excellence.
Crypto Lab: a platform for designing, testing, implementing and validating cryptographic solutions for the space environment, with a focus on Post-Quantum Cryptography (PQC) and Quantum Key Distribution (QKD). Knowledge transfer and advanced training will be provided by Thales experts, leveraging experience gained from European Space Agency (ESA) projects.
These projects will be part of the Cyber Centre of Excellence, an initiative led by His Excellency Dr. Al Kuwaiti, Head of the UAE Cyber Security Council. This collaboration aims to support the UAE’s vision for technological sovereignty, promote local research and development, and contribute to building sustainable Emirati expertise in space cybersecurity.
“This partnership with the UAE Cyber Security Council marks a major milestone in our joint commitment to advancing the UAE’s sovereign, secure and sustainable Cyber Security ecosystem. Together, we combine our complementary expertise and shared ambition to shape the future of Cyber Security, starting with the Space domain” said Christophe Salomon, Executive Vice-President, Secure Communications & Information Systems, Thales.
With the Cyber Center of Excellence, Thales and the Cyber Security Council will leverage joint research, development, and innovation, to strengthen the UAE’s existing capabilities and build new ones. The collaboration will also extend to other strategic domains beyond space, with both parties jointly addressing international markets.
Thales is present at the Dubai Airshow on stand #740. To find out more about Thales in the United Arab Emirates, please go to: Thales in the UAE | Thales Group.
About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.
The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.
Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.
MEDIA CONTACT
Thales in the UAE:
Tarek.Solimane@thalesgroup.com
Dhwani.Sanganee@thalesgroup.com
Corporate media relations:
Marion.bonnet@thalesgroup.com
Thales and the UAE Cyber Security Council join forces to develop a Cyber Centre of Excellence
The crypto world is constantly evolving, and 2025 has thrown a new curveball: XRP Meme Coins. While Meme Coins have always relied on hype and community, leveraging the foundation and massive community of the XRP Ledger (XRPL) creates an entirely new dynamic. Is this just another passing craze, or is the combination of meme culture and XRPL’s efficiency truly The Next Big Crypto Trend? Don’t dive into the hype blindly. This essential guide from Herond cuts through the noise, explaining exactly what XRP Meme Coins are, how they work on the XRPL, and the risks and rewards every investor must understand.
Ready to find out if these assets belong in your portfolio?
What Are XRP Meme Coins?
XRP Meme Coins represent the fusion of the highly speculative, community-driven nature of traditional meme coins (like Dogecoin or Shiba Inu) and the underlying technology of the XRP Ledger (XRPL). Unlike assets built on Ethereum or Solana, these tokens leverage XRPL’s core advantages: fast, near-zero-cost transactions and inherent decentralized exchange (DEX) functionality. Essentially, they are digital tokens created purely for entertainment and social virality, but they benefit from XRPL’s robust foundation, giving them greater transactional utility and often, a faster route to adoption among the dedicated XRP community. They are fundamentally driven by social trends, but technically empowered by one of the fastest blockchains in crypto.
Why XRP Meme Coins Are Trending in 2025
XRP surge: $3 high (Jan 2025) boosts ecosystem (AMM auto-adjusts meme values)
This rising tide directly benefits Meme Coins. The XRPL’s Automated Market Maker (AMM) feature ensures that as core XRP value grows, the perceived value and stability of its derivative meme tokens are instantly and automatically validated, driving up investor interest.
Low barriers: XRPL scalability (1,500 TPS) vs. ETH gas wars
XRPL completely bypasses the cost and congestion problem. This low barrier allows anyone to trade XRP Meme Coins frequently and cheaply. Near-zero transaction fees mean even the smallest, most speculative trades are economically viable, fueling high-volume viral trading.
Community: Hype from XRP Army, tokens like DROP/SIGMA with governance
Tokens like DROP or SIGMA don’t have to build an audience from scratch. They instantly tap into the massive XRPL community, leveraging its hype and social coordination. Furthermore, the introduction of token-based governance gives these meme assets actual political weight within the ecosystem, converting hype into real influence.
Top 5 XRP Meme Coins to Watch
ARMY (Market Cap: $107M)
Identity: The flagship token representing the vast and passionate XRP Army community.
Value Proposition: Built-in network effect with one of crypto’s most dedicated user bases. Offers utility via staking rewards, incentivizing long-term holding.
PHNIX (Market Cap: $45M)
Identity: Features the iconic Phoenix rebirth meme narrative.
Value Proposition: Exhibits strong market activity ($1M daily volume), suggesting high liquidity. Appeals to collectors with integrated NFT ties, offering multi-asset utility.
PONGO / RIPPY
Identity: Fun, pure-meme tokens featuring ape and dog themes.
Value Proposition: High-risk, high-reward plays. Attracts speculative investors due to the low entry price ($0.0001) and the potential for explosive 100x gains common in the meme coin category.
DROP / SIGMA
Identity: Meme coins focused on adding tangible value to the XRPL.
Value Proposition: These tokens elevate meme coins to utility assets by offering governance voting rights and enabling DeFi integration. They turn social hype into real influence within the ecosystem.
LIHUA (Market Cap: $35M)
Identity: A Chinese cat-themed token aligning with Asian market aesthetics.
Value Proposition: Positioned for growth by capitalizing on the rapid Asia adoption of the XRPL. Its market cap suggests high potential for regional viral growth and investor interest.
How to Buy & Trade XRP Meme Coins Safely
Download Herond (iOS/Android).
Buy XRP via Fiat on-ramp.
Connect to XRPL DEX (e.g., Sologenic).
Swap for memes (e.g., XRP -> ARMY).
Stake/monitor in wallet.
Risks & Predictions
Risks: Volatility, Rugs, and Illiquidity
The Reality Check: Meme coins, regardless of their chain, are highly speculative. Prepare for extreme volatility, where price drops of 90% are common.
Predictions: Institutional Validation and Growth
Future Forecast: Industry reports are bullish on XRPL’s meme sector. A hypothetical McKinsey report suggests that meme coins could account for 20% of the XRPL’s total volume by 2030.
Conclusion
The emergence of XRP Meme Coins represents a powerful blend: the raw virality of meme culture merged with the transactional efficiency of the XRPL. While the potential for 50x to 100x gains is undeniable – especially with institutional tailwinds like a potential XRP ETF – the inherent risks of extreme volatility and rug pulls demand caution. Herond urges every collector to use this guide as a roadmap: prioritize assets with governance utility, leverage the XRPL’s low fees, and always invest with a strong understanding of your risk tolerance. Is this the next big crypto trend? It certainly has the technical foundation and community muscle to be, but only time, and careful analysis, will tell.
Ready to secure your crypto transactions as you explore this volatile market?
About Herond
Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.
To enhance user control over their digital presence, Herond offers two essential tools:
Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.
As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.
Have any questions or suggestions? Contact us:
On Telegram https://t.me/herond_browser
DM our official X @HerondBrowser
Technical support topic on https://community.herond.org
The post XRP Meme Coins Explained: The Next Big Crypto Trend? appeared first on Herond Blog.
Dubai – 20th November 2025: Technology Innovation Institute (TII), the applied research entity of Abu Dhabi’s Advanced Technology Research Council (ATRC), has signed a Collaboration Agreement with global technology leader Thales in order to accelerate the development and deployment of new technologies in the areas of quantum, autonomous systems and directed energy. The agreement, signed at the Dubai Airshow, establishes a long-term framework for joint R&D that drives innovation and real-world applications.
Technology Innovation Institute and Thales Partner to Advance Research in Quantum, Autonomy, and Directed Energy ©Flat Earth
This partnership exemplifies the UAE’s commitment to building sovereign R&D capabilities, while supporting the country's growing role as a catalyst for global technology collaboration. Working with the Paris-headquartered multinational organisation Thales presents an opportunity to strengthen the UAE–France cooperation in advanced science and technology. The collaboration underscores TII’s role as a global hub for frontier R&D and Thales’ commitment to advancing applied science through partnership.
The agreement focuses on several advanced technology areas where both parties have complementary expertise:
Quantum Sensing: Development of ultra-precise sensing technologies (gravimetry, magnetometry, and navigation) using quantum physics principles.
Advanced Autonomous Systems: Cross-domain robotics and autonomous mobility (air, land, sea, and space), focusing on swarm coordination, adaptive autonomy, AI-driven mission execution and advanced Command and Control Systems.
Advanced Laser and Power Generation Technologies: Joint R&D on high-energy laser systems, advanced beam control, and compact, scalable power sources for future power beaming and defence applications.
Dr. Najwa Aaraj, CEO of TII, said: “We are on a mission to accelerate R&D in frontier technologies and spark global collaboration by uniting with bold organisations that turn breakthrough ideas into real-world impact. In partnership with Thales, we have brought together two global leaders in applied research, as we are driven by a shared ambition to push the boundaries of science and shape the next wave of technological progress and value creation.”
The focus of the collaboration is to accelerate the transition from research to deployment and to serve clients’ operational needs. By convening leading researchers from both organisations, the time to market for next-generation technologies will be brought forward, increasing the potential for enhanced workflows and further industry breakthroughs.
Dr. Chaouki Kasmi, Chief Innovation Officer at TII said: “At TII, we see innovation as a bridge between scientific discovery and societal progress. By joining forces with Thales, we are strengthening that bridge, advancing technologies that not only define new frontiers in quantum, autonomy, and photonics, but also deliver solutions that shape a safer, smarter, and more sustainable world.”
Abdelhafid Mordi, Vice President of Thales in UAE and Iraq, and CEO of Thales Emarat Technologies said: “Thales’s partnership with the Technology Innovation Institute builds on shared strengths in quantum, autonomy, and photonics. These are areas critical to the next generation of secure and intelligent systems. Together, we are translating scientific excellence into applied innovation that supports the UAE’s vision for a future-ready, knowledge-based, sovereign economy.”
The signing of the agreement at Dubai Airshow 2025 highlights the industry’s heritage in global innovation and strategic cooperation. With the UAE’s vision for advanced technology leadership, this partnership will help the country make significant strides forward.
Thales is present at the Dubai Airshow on stand #740. To find out more about Thales in the United Arab Emirates, please go to: Thales in the UAE | Thales Group.
ABOUT TECHNOLOGY INNOVATION INSTITUTE (TII)
The Technology Innovation Institute (TII) is the dedicated applied research pillar of Abu Dhabi’s Advanced Technology Research Council (ATRC). TII is a pioneering global research and development center that focuses on applied research and new-age technology capabilities. The Institute has 9 dedicated research centers in advanced materials, autonomous robotics, cryptography, AI and digital science, directed energy, quantum, secure systems, propulsion and space, and renewable and sustainable energy. By working with exceptional talent, universities, research institutions, and industry partners from all over the world, TII connects an intellectual community and contributes to building an R&D ecosystem that reinforces the status of Abu Dhabi and the UAE as a global hub for innovation.
For more information, visit www.tii.ae
PRESS CONTACTS
Jinan Warrayat: +971 50 471 3552 – jinan.warrayat@tii.ae
ABOUT THALES
Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.
The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.
Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.
PRESS CONTACTS
Tarek Solimane: Tarek.Solimane@thalesgroup.com
Dhwani Sanganee: Dhwani.Sanganee@thalesgroup.com
20 Nov 2025
Dubai Air Navigation Services, Emirates Airline, and Thales Sign Collaborative Research Agreement to advance flight efficiency through innovative solutions
The initiative aims to support air traffic controllers and airlines to significantly reduce fuel burn and to deliver more on-time arrivals.
The research partnership aims to advance operations via an AI-powered digital solution that can mitigate up to 40% of holding patterns at Dubai’s arrival airport.
The new system will predict UAE airspace congestion and generate proactive recommendations to modify flight plans whenever appropriate and reduce holding.
Dubai Air Navigation Services (DANS), Emirates Airline and Thales have signed a Collaborative Research Agreement (CRA) to pioneer cutting-edge research aimed at reducing holding patterns for all flights arriving at Dubai International, supporting safer and more efficient UAE airspace management. The collaboration also reinforces the UAE's Net Zero by 2050 target.
Holding patterns provide a safe, orderly system to sequence arriving aircraft during peak traffic and runway congestion. Aircraft fly carefully choreographed racetrack-shaped circuits at designated altitudes to ensure smooth arrivals. While holding patterns are essential for safety and operational flow, minimizing their occurrence reduces fuel consumption and associated carbon emissions.
Under this agreement, the three partners will work together to research innovative ways to optimise aviation traffic management. Central to their efforts will be the integration of advanced AI technologies, which will play a pivotal role in identifying potential congestion areas and providing recommendations to stakeholders. By leveraging these AI-driven insights, the collaboration aims to help air traffic controllers and airlines minimise delays, enhance predictability, and boost operational efficiency throughout the entire airspace network.
“This innovative partnership with Dubai Air Navigation Services and Emirates Airline will allow us to strengthen the UAE’s airspace management technologies. Thales will bring its advanced capabilities in digital technologies, Artificial Intelligence, and air traffic management, to support the UAE’s vision of a more secure, connected, and sustainable aviation sector.” Abdelhafid Mordi, Vice President of Thales in UAE and Iraq, and CEO of Thales Emarat Technologies.
“This research agreement with Thales and DANS represents a practical application of AI to solve a real operational challenge. By predicting congestion and adjusting cruise speeds proactively, we not only reduce fuel burn associated with holding patterns but are also optimising our operational efficiencies. If successful, this solution can benefit other airlines, turning a shared problem into an opportunity for improvement." Adel Al Redha, Deputy President and Chief Operating Officer Emirates Airline.
Thales is present at the Dubai Airshow on stand #740. To find out more about Thales in the United Arab Emirates, please go to: Thales in the UAE | Thales Group.
About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.
The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.
The NFT market has evolved far beyond simple JPEGs – it’s now defined by tiers of quality and utility. As a collector, you face a critical question: What truly differentiates a ‘Premium NFT’ from the noise? At Herond, we cut through the speculation to deliver clarity. This Ultimate Guide is your essential roadmap to understanding the metrics, rarity factors, and exclusive benefits that define a truly Premium NFT. We’ll show you how to identify high-value assets, minimize risk, and build a collection with lasting worth. Stop guessing and start collecting with confidence! Are you ready to dive into the world of blue-chip digital assets?
What Makes an NFT “Premium”?
Verified Creator (Blue-Chip Artists)
Beeple, Pak, or Established Studios: These names don’t just ensure artistic quality; they guarantee the historical significance and sustained market commitment of the digital asset.
Real Utility (Staking, Governance, Access)
Beyond the PFP: A Premium NFT must offer value that extends past its visual appeal. Real utility transforms the NFT into an exclusive membership token.
Scarcity (<1,000 Supply)
The Law of Supply and Demand: A collection with a maximum supply cap of under 1,000 NFTs is a clear indicator of inherent scarcity. Even 10K blue-chip collections must feature defined “1/1” or “Genesis” traits to qualify individual pieces as Premium.
Provenance (On-Chain History)
OpenSea Verification & Clean History: Provenance serves as the concrete evidence of an NFT’s authenticity and ownership lineage.
Market Momentum (Floor Price >0.5 ETH)
Sustained Floor Price: A stable, high floor price is the clearest market validation of community confidence and widespread acceptance. (A rough screening sketch based on these criteria follows at the end of this guide.)
What Is Premium NFT? – Types of Premium NFTs in 2025
Art & Collectibles (e.g., Chromie Squiggle – $50K+)
Examples: Chromie Squiggle, Fidenza (Generative Art). These NFTs are valued purely on their provenance, aesthetic quality, and scarcity. Their floor prices often start at $50,000+.
PFP Projects (Profile Picture)
Examples: Bored Ape Yacht Club (BAYC), Azuki. While they start as profile pictures, their true Premium value lies in the Community and Perks.
Real-World Assets (RWA)
Examples: Tokenized fractional ownership of real estate, gold, fine wine, or blue-chip stocks. These bridge the gap between physical wealth and blockchain efficiency.
Gaming NFTs (Yield-Bearing Assets)
Examples: In-game land in major metaverses (e.g., Otherside, Sandbox), or yield-bearing characters/weapons.
Membership NFTs (Exclusive Clubs & Events)
Examples: VeeFriends (Gary Vaynerchuk), or exclusive tokens granting access to specific investment groups or private masterminds.
AI-Generated Art (Dynamic, Evolving)
Examples: Art pieces that evolve and change based on environmental data, user interaction, or on-chain activity.
What Is Premium NFT? – Step-by-Step: Buy Your First Premium NFT
Download Herond (iOS/Android): Set up your wallet in 60 seconds. We eliminate technical barriers to ensure your entry into the Premium NFT market is instant and highly secure on any device.
Fund with USDT (fiat on-ramp): Our integrated Fiat On-Ramp allows you to fund your wallet instantly with a credit card or bank transfer, ensuring you are always ready to capitalize on quick floor price dips.
Connect to OpenSea/Magic Eden (1-tap): Utilize our seamless, 1-tap connection to OpenSea, Magic Eden, and other blue-chip platforms. This integration guarantees faster, safer transactions and bypasses the security risks of external wallet extensions.
Buy & auto-store in Herond vault: Once purchased, your high-value assets are automatically moved and secured into the encrypted Herond Vault, separate from your daily transaction balance. This is peak security for blue-chip assets.
Conclusion
This Ultimate Guide for What Is Premium NFT? equips serious collectors with the power to identify true Premium NFTs—those defined by Real Utility and verifiable market strength, not hype. At Herond, we simplify the complex process, offering the secure, 1-tap access needed to transact safely.
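To make the checklist at the top of this guide concrete, here is a minimal, hypothetical Python sketch that scores a collection against the five criteria listed above (verified creator, real utility, supply under 1,000, clean on-chain provenance, floor price above 0.5 ETH). The field names and thresholds are assumptions for illustration, not part of any marketplace API, and the score is a screening aid rather than financial advice.

from dataclasses import dataclass

@dataclass
class Collection:
    # Hypothetical fields; real data would come from a marketplace or indexer.
    creator_verified: bool
    has_utility: bool          # staking, governance, or access perks
    max_supply: int
    provenance_clean: bool     # no flagged transfers in the on-chain history
    floor_price_eth: float

def premium_score(c: Collection) -> int:
    """Count how many of the five 'Premium NFT' criteria a collection meets."""
    checks = [
        c.creator_verified,
        c.has_utility,
        c.max_supply < 1_000,
        c.provenance_clean,
        c.floor_price_eth > 0.5,
    ]
    return sum(checks)

if __name__ == "__main__":
    example = Collection(True, True, 999, True, 0.8)   # illustrative values only
    score = premium_score(example)
    print(f"{score}/5 criteria met -> {'premium candidate' if score >= 4 else 'needs more research'}")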
About Herond
Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.
To enhance user control over their digital presence, Herond offers two essential tools:
Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.
As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.
Have any questions or suggestions? Contact us:
On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org
The post What Is Premium NFT? The Ultimate Guide for Collectors appeared first on Herond Blog.
Tired of intrusive ads slowing down your iPhone and invading your privacy? Pop-ups, trackers, and auto-play videos drain your battery, waste your data, and compromise your personal information every time you browse. Herond brings you this comprehensive guide to the best ad blockers for the iPhone in 2025 – comparing speed, privacy protection, and ease of use to help you choose the perfect solution. Whether you want simple one-tap blocking or advanced customization, we’ll show you how to transform your browsing into a faster, cleaner, and more private experience. Say goodbye to annoying ads and reclaim control of your iPhone today by starting with the best ad blocker for iPhone.
You Need an Ad Blocker on the iPhone
Data Drain: Ads Consume 30-50% of Your Mobile Data
Ads account for 30-50% of total mobile data consumption, according to Statista 2025 research.
Every webpage loads multiple ad scripts, high-resolution images, auto-play videos, and tracking pixels that waste bandwidth.
You’re paying for ads with your data plan – content you never wanted to see.
Video ads are the worst offenders, consuming 5-10MB each and often loading in HD quality by default.
Over a month, wasted data can add up to several gigabytes, potentially triggering overage charges.
Ad blockers eliminate this waste, letting you browse 2-3x more content with the same data plan.
Battery Drain: Video Ads Cut 2-3 Hours from Your iPhone Battery Life
Auto-play video ads drain battery rapidly by forcing your processor and display to work harder.
Aggressive advertising can reduce iPhone battery life by 2-3 hours daily.
Video ads run in the background even when you scroll past, consuming CPU and memory continuously.
Ad animations and interactive elements require constant GPU rendering, accelerating battery drain.
Users report 20-30% longer battery life after installing quality ad blockers.
Blocking ads lets your iPhone allocate resources to actual content instead of unwanted marketing.
Privacy Invasion: Trackers Collect Your Location, Browsing History, and Even Voice Data
Modern ad networks deploy sophisticated tracking beyond just website visits.
Location tracking uses GPS and Wi-Fi to pinpoint your exact location and movement patterns.
Browsing surveillance follows you across websites and apps to build comprehensive behavior profiles.
Cross-device fingerprinting links your iPhone to other devices for complete digital profiling.
Some ad networks request microphone access to listen for keywords and serve targeted ads.
Personal data is sold to hundreds of third-party companies without explicit consent.
Ad blockers stop trackers before they collect anything, keeping your information truly private.
Herond’s integrated blocking prevents tracking scripts from executing, ensuring your activity stays yours alone.
Top 5 Best Ad Blockers for iPhone – Ranked
#1 Herond Browser – Best Ad Blocker for iPhone
100% Ad-Free Experience: Block Pop-ups, Banners, and Even YouTube Ads
Complete ad elimination across all websites and platforms – no pop-ups, banners, or intrusive overlays.
YouTube ads completely blocked – enjoy uninterrupted videos without pre-roll, mid-roll, or banner ads.
Blocks all ad formats: display ads, video ads, native advertising, sponsored content, and auto-play media.
No “acceptable ads” exceptions – Herond doesn’t whitelist advertisers or accept payment to show ads.
Pages load faster and look cleaner without ad clutter consuming screen space.
Automatic protection: once installed, blocks all ads without requiring manual updates or configuration.
2.3x Faster Than Safari: Lightning-Speed Performance
Independent tests prove Herond loads pages 2.3x faster than Safari by eliminating ad-related code.
Websites that take 8-10 seconds on Safari open in just 3-4 seconds with Herond.
Fewer HTTP requests and reduced JavaScript mean dramatically lighter page weight.
Works efficiently even on slower 3G/4G connections with reduced data transfer.
Lower memory usage – runs smoothly with multiple tabs open without lag.
Battery-efficient architecture delivers speed without typical browser power drain.
Top Best Ad Blockers for iPhone – Comparison
#2 AdGuard – Pros: comprehensive ad blocking.
We’ve established the core factors for choosing the ideal ad blocker: speed, privacy, and ease of use – all non-negotiables for the premium iOS experience. The era of slow loading times and constant battery drain from intrusive ads is over. By implementing the best solutions, you don’t just eliminate pop-ups and banners; you proactively protect your personal data from hidden trackers. Stop wasting time and mobile data! It’s time to elevate your iPhone browsing to the next level: faster, cleaner, and absolutely secure.
About Herond
Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.
To enhance user control over their digital presence, Herond offers two essential tools:
Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.
As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.
Have any questions or suggestions? Contact us:
On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org
The post Best Ad Blocker for iPhone: Fast, Private & Easy to Use appeared first on Herond Blog.
The post Redefining Age Assurance appeared first on Liminal.co.
On November 19, 2025, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) sanctioned Media Land, Aeza Group’s front companies and associated individuals and entities of both parties for their part in providing bulletproof hosting (BPH) services for illicit activities. OFAC also listed one bitcoin address belonging to Media Land’s General Director Alexander Alexandrovich Volosovik.
By Heather Dahl
The future is automated, permissioned, seamless, and secure. Whether you’re managing a border or a bank, decentralized identity solutions are radically simplifying how data is shared, authenticated and used.
At Indicio, we’ve been leading this revolution in simplicity with the world’s most powerful decentralized identity platform, Indicio Proven®. And yes, the clue is in the title. It’s a way to instantly prove that digital data is authentic and can be trusted — and for that data to go anywhere and be verified anywhere.
We’re watching, in real-time, how our customers are using our technology to do rapid, remote KYC and account access, create Digital Travel Credentials for seamless border crossing, automate hotel check-in, manage payments for tourist services, trade digital assets, and issue all kinds of documents as Verifiable Credentials which can’t be stolen, altered, or faked, even by AI.
In a way, it’s like data teleportation. We’ve removed the friction around authentication, both in back-end systems and on the front end with people, passengers, and customers. We’ve removed the vulnerabilities in legacy tech that are costing businesses hundreds of billions of dollars a year and driving consumers multifactor mad.
Information that can be instantly trusted anywhere is rocket fuel for business. Think about it: no more customer drop-off in enrollment or payments online. No more chargeback fraud. No need for customers to use logins or passwords, or even to store their data if you don’t want to.
But think about how this instant, portable, digital trust is able to create and scale markets — better yet, see this through the eyes of one of our forward-thinking customers. Want a single market for tourists effortlessly managed through an app? Add a Digital Travel Credential with authenticated biometrics, leverage the trust from a “government-grade” digital identity, and make product and service integration, access, and payment frictionless.
You can organize an industry vertical or an entire economy around a unified customer or partner identity.
This is Web3. It’s here, it’s happening, and you can have it too. We’ve solved all the hard problems around interoperability and scale; we’ve created an all-in-one product for use anywhere; and we’ve made it easy to drop into your existing systems and get up and running in days to weeks.
Now here’s the kicker, guess what’s on the heels of Web3? The internet of AI. This is where we’re most excited about Indicio Proven, having just joined NVIDIA’s Inception Program and partnered with NEC.
We’re using Verifiable Credentials and decentralized governance to structure data and permission access so that it can be used effectively by AI to solve simple but important vertical use cases.
For example, think of a chatbot interacting with a customer. With Verifiable Credentials, the customer and the chatbot can authenticate each other right away. The chatbot then asks for permission to use the customer’s data, keeping everything aligned with GDPR. Once approved, the chatbot can access that data automatically. The information is authenticated, and structured for immediate use.
Customers can even delegate authority so a chatbot or AI agent can pass verified data to another agent and the agents can authenticate each other before any data is ever shared!
Try doing this with technology built for the Web2 era. Actually, don’t, because it won’t be secure.
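To make the chatbot flow described above concrete, here is a minimal, hypothetical Python sketch of two agents exchanging verifiable credential presentations and recording consent before any data is used. The class and method names are invented for illustration; they are not Indicio Proven APIs, and the "signatures" stand in for real cryptographic proofs checked against a trust registry.

from dataclasses import dataclass, field

@dataclass
class Presentation:
    holder: str
    claims: dict
    issuer_signature: str   # in a real system, a cryptographic proof

@dataclass
class Agent:
    name: str
    trusted_issuers: set = field(default_factory=set)

    def verify(self, presentation: Presentation) -> bool:
        # Placeholder check; a real verifier validates the issuer's proof
        # against a decentralized trust registry.
        return presentation.issuer_signature in self.trusted_issuers

# Mutual authentication: each party verifies the other's credential first.
customer_agent = Agent("customer-wallet", trusted_issuers={"sig:acme-bank"})
chatbot_agent = Agent("support-chatbot", trusted_issuers={"sig:gov-id"})

customer_vp = Presentation("alice", {"kyc_level": "verified"}, "sig:gov-id")
chatbot_vp = Presentation("acme-support-bot", {"operator": "Acme Bank"}, "sig:acme-bank")

if chatbot_agent.verify(customer_vp) and customer_agent.verify(chatbot_vp):
    # Only after mutual verification does the chatbot ask consent to use data.
    consent = {"purpose": "account support", "granted": True}   # GDPR-style record
    print("Both parties verified; proceeding with consented data:", customer_vp.claims, consent)
else:
    print("Verification failed; no data shared.")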
In many ways, decentralized identity is shorthand for dropping an entirely new way of creating, sharing, and verifying data onto Web2, and letting it organically remove all the bits that are broken and obsolete — as if you could inject a new framework into a decaying shack and watch it turn into a skyscraper with minimal effort and cost.
Those who move fast will rapidly build the next era of the internet and monetize verification in the process. We are already seeing a verification economy emerge with our customers. They’ll also win the automation race by making it work in practice. That means structured data, portable trust, decentralized governance, and delegated authority.
Are you ready to move fast and get to market quickly with product?
We’ve taken the hard work out of implementation, interoperability, and scale so your team doesn’t get stuck in months — or years — of development. We designed Indicio Proven to liberate teams so they can focus on product. Contact us to learn how easy it is to start right now.
The post Win Web3 with Verifiable Credentials from Indicio appeared first on Indicio.
Regulatory frameworks around the world require financial institutions to maintain ongoing due diligence on their customers. When it comes to cryptoassets, this means systematically rescreening wallet addresses and transactions to identify material changes in risk exposure over time. But there's a critical question to consider when evaluating rescreening solutions: What actually gets recalculated during each rescreening event?
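To ground that question, here is a minimal, hypothetical Python sketch of a periodic rescreening job that recomputes a wallet's risk score and flags material changes. The scoring function, exposure categories, and threshold are placeholders for illustration, not a description of any particular vendor's engine.

import datetime

# Hypothetical stand-in for a risk engine; a real one would re-traverse
# on-chain exposure (counterparties, sanctioned addresses, mixers, etc.).
def compute_risk_score(address: str) -> float:
    exposure_weights = {"sanctioned": 1.0, "mixer": 0.6, "exchange": 0.1}
    observed = ["exchange", "mixer"]          # placeholder exposure for the example
    return min(1.0, sum(exposure_weights[e] for e in observed))

def rescreen(address: str, previous_score: float, threshold: float = 0.2) -> dict:
    """Recalculate the score and report whether risk changed materially."""
    new_score = compute_risk_score(address)
    return {
        "address": address,
        "rescreened_at": datetime.datetime.utcnow().isoformat(),
        "previous_score": previous_score,
        "new_score": new_score,
        "material_change": abs(new_score - previous_score) >= threshold,
    }

print(rescreen("bc1q-example-address", previous_score=0.1))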
In today’s fast-moving digital world, knowing the real age of your users is more than a safety measure. It is a responsibility. Businesses in gaming, e-commerce, adult platforms, retail, and social media, especially those growing rapidly in India’s digital economy, are under pressure to verify users quickly and accurately. Fake IDs such as edited Aadhaar cards, PAN cards, or altered Driving Licences are becoming dangerously good. Regulations are becoming stricter. And minors are finding new creative ways to slip into spaces they should not be in.
This is where a modern age verification system becomes a lifesaver. It gives companies the confidence that every interaction is genuine, compliant, and safe.
Artificial intelligence has pushed online age verification into a new era. Whether it is age-checking software, an AI age-detection tool, a face age checker, or a complete age verification solution, the goal remains the same. Keep users safe. Keep platforms compliant. And keep things running smoothly without slowing down real adults.
The Evolution of Age Identification Technology in India’s Growing Digital Landscape
Age verification has changed dramatically over the years. Not long ago, websites relied on a simple checkbox that asked users if they were eighteen. Teenagers checked that box faster than they could finish a bag of chips. That system lasted only because there were no better options.
Today the situation is very different. Powerful AI models examine IDs, faces, devices, and behavior patterns to determine whether a user is genuinely of legal age. Modern tools used for age identification can verify a person in milliseconds while following global data standards such as the rules detailed in the GDPR compliance guidelines.
This matters even more in India, where millions of new users join digital platforms every month.
Most systems combine document scanning with facial comparison. AI analyzes facial landmarks to match them with ID photos. It checks lighting, angles, micro textures, and even subtle facial changes. Performance studies published in the NIST Face Recognition Vendor Test show that advanced models now reach extremely high accuracy levels in identity and age validation.
These advances prove that digital age validation is no longer a basic checkbox. It is a critical layer of online trust.
Why Indian Businesses Are Switching to Smart Age Verification Solutions
Manual checks are slow, inconsistent, and easy to fool. A teenager with enough determination can outsmart a distracted employee or upload a slightly edited Indian ID. A sturdy digital age verification solution removes these weak spots.
Businesses across retail, entertainment, gaming, and online marketplaces use automated checks to avoid compliance issues and keep their platforms safe. This is especially relevant in India’s mobile-first ecosystem, where large onboarding volumes require fast, accurate verification.
Here is why companies are embracing this approach:
Faster verification: Users get approved in seconds.
Higher accuracy: AI detects manipulated IDs that humans would overlook.
Better compliance: Aligned with global privacy and safety requirements.
Strong fraud prevention: Impossible to bypass with a borrowed or stolen ID.
Smoother user experience: No long forms or awkward verification steps.
Many organizations explore these capabilities through the ID Document Recognition SDK, which lets them integrate age checks into onboarding flows effortlessly.
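As a rough illustration of what such an onboarding integration might look like, the sketch below wires a document check and age extraction into a sign-up flow. The scan_id_document and extract_age helpers are invented placeholders standing in for a vendor SDK, not its actual interface, and the sample output is fabricated for the example.

from datetime import date

# Placeholder helper; a real integration would call the vendor SDK here.
def scan_id_document(image_bytes: bytes) -> dict:
    return {"authentic": True, "date_of_birth": "2001-05-14"}   # illustrative output

def extract_age(dob_iso: str, today: date | None = None) -> int:
    today = today or date.today()
    dob = date.fromisoformat(dob_iso)
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def onboard(image_bytes: bytes, minimum_age: int = 18) -> str:
    result = scan_id_document(image_bytes)
    if not result["authentic"]:
        return "rejected: document failed authenticity checks"
    if extract_age(result["date_of_birth"]) < minimum_age:
        return "rejected: under minimum age"
    return "approved: continue onboarding"

print(onboard(b"fake-image-bytes"))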
How Device-Based Age Verification Strengthens Security for Indian Platforms
Not every fraud attempt happens inside a document. Sometimes the red flag is hidden inside the device itself. That is why many companies use device-based age verification to detect suspicious behavior before any user reaches the final verification screen.
This system analyzes device history, repeated identity switching, login patterns, and other signals that reveal risky attempts. It stops situations where minors use the IDs of older siblings or attempt multiple logins on the same phone, a common pattern in many Indian households where devices are shared.
Combined with automated checks, device intelligence creates a multi-layered shield that keeps platforms safe from repeat offenders.
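A simplified sketch of such device-level risk scoring is shown below. The signals and weights are illustrative assumptions rather than a production rule set; a real system would tune them against observed fraud patterns.

def device_risk_score(device_events: list[dict]) -> float:
    """Score a device from 0 (clean) to 1 (high risk) using simple heuristics."""
    identities = {e["identity_id"] for e in device_events}
    failed_checks = sum(1 for e in device_events if e.get("verification_result") == "failed")
    night_logins = sum(1 for e in device_events if e.get("hour", 12) < 5)

    score = 0.0
    score += 0.4 if len(identities) > 3 else 0.0        # repeated identity switching
    score += min(0.4, 0.1 * failed_checks)              # prior failed verifications
    score += 0.2 if night_logins > 5 else 0.0           # unusual login pattern
    return min(score, 1.0)

events = [{"identity_id": i, "verification_result": "failed", "hour": 2} for i in range(5)]
print(f"device risk: {device_risk_score(events):.2f}")   # a high score triggers step-up verification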
How AI Age Detection Helps Screen Users in Seconds
AI age detection has become a powerful screening tool. It gives platforms a quick estimate of a user’s age based on facial features. This does not require storing full images. Instead, the system evaluates the face momentarily and keeps only an age range.
Many companies rely on benchmark results shared in the NIST FRVT evaluations, which show how modern models estimate ages with impressive consistency. This is especially useful for platforms that offer age verification for adult content, including those operating in India where age-restricted content rules are strict.
AI age estimation is not a final verification step. It is an intelligent filter that helps determine whether a user is likely old enough before they undergo full verification.
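The routing logic described here can be sketched as follows. estimate_age_range is a hypothetical stand-in for a facial age-estimation model that returns only an age range, and the buffer around the legal age is an assumption chosen for the example.

def estimate_age_range(selfie_bytes: bytes) -> tuple[int, int]:
    # Placeholder for a model that returns an estimated age range, not an image.
    return (22, 28)   # illustrative output

def route_user(selfie_bytes: bytes, legal_age: int = 18, buffer: int = 3) -> str:
    low, high = estimate_age_range(selfie_bytes)
    if high < legal_age:
        return "block"                    # clearly under age
    if low >= legal_age + buffer:
        return "fast-track onboarding"    # clearly an adult; skip friction
    return "full document verification"   # borderline; escalate to ID + liveness

print(route_user(b"selfie"))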
Why an Adult Verification System Protects Businesses in India
A reliable adult verification system shields companies from violations, helps them avoid legal trouble, and ensures that minors never gain access to restricted environments. It strengthens safety while building trust with users who want a protected community.
A complete adult verification workflow includes the following:
Document authenticity scanning
Facial biometric matching
Liveness checks
Age estimation
Device intelligence
Fraud behavior monitoring
Liveness detection plays a major role because it ensures that a real person is present. Many minors attempt to trick systems using printed images or digital masks. This is why companies often rely on tools powered by the ID Document Liveness Detection SDK, which tests for motion, depth, and natural facial movement.
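Put together, the workflow above behaves like a short-circuiting pipeline: each layer must pass before the next runs. The sketch below illustrates that composition with hypothetical check functions standing in for the real document, biometric, liveness, age, and device modules.

from typing import Callable

# Each check returns True/False; real implementations would call the
# document, face-match, liveness, age-estimation, and device modules.
def document_ok(user): return True
def face_match_ok(user): return True
def liveness_ok(user): return True
def age_ok(user): return True
def device_ok(user): return True

PIPELINE: list[tuple[str, Callable]] = [
    ("document authenticity", document_ok),
    ("facial biometric match", face_match_ok),
    ("liveness", liveness_ok),
    ("age estimation", age_ok),
    ("device intelligence", device_ok),
]

def verify_adult(user: dict) -> tuple[bool, str]:
    for name, check in PIPELINE:
        if not check(user):
            return False, f"failed at {name}"   # stop early; no later data is touched
    return True, "all layers passed"

print(verify_adult({"id": "demo-user"}))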
Ensuring Privacy and Meeting Global Standards Relevant to India
Privacy is one of the biggest concerns surrounding digital verification. Companies must follow strict guidelines to keep data protected. Responsible systems use encrypted templates rather than raw images. They only store what is necessary and follow regulated retention policies.
Organizations that follow the compliance rules outlined in the GDPR regulatory framework can confidently use age verification tools without compromising user rights. This approach also aligns well with the expectations of Indian internet users, who are becoming increasingly privacy-aware.
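One data-minimization pattern consistent with the approach described here is to store only a derived, keyed digest of a template together with an explicit retention deadline, never the raw image. The sketch below shows that idea with Python's standard library; it is a simplified illustration of minimization and retention, not a matchable biometric template scheme, and the key handling and retention window are assumptions.

import hashlib, hmac, os
from datetime import datetime, timedelta, timezone

SECRET_KEY = os.urandom(32)          # in practice, managed by a KMS or HSM
RETENTION = timedelta(days=30)       # assumed policy window

def store_template(raw_biometric: bytes) -> dict:
    """Keep a keyed digest of the biometric data plus a deletion deadline."""
    digest = hmac.new(SECRET_KEY, raw_biometric, hashlib.sha256).hexdigest()
    return {
        "template_digest": digest,
        "delete_after": (datetime.now(timezone.utc) + RETENTION).isoformat(),
    }

record = store_template(b"feature-vector-bytes")
print(record["template_digest"][:16], "retain until", record["delete_after"])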
How a Face Age Checker Enhances Accuracy and Flow
A face age checker offers an easy way to screen users before requesting full identity documents. If a user appears to be too young, the system redirects them to a thorough verification step. If they look old enough, onboarding remains quick and smooth.
This approach keeps user experience intact while maintaining strict age control. Developers often explore this process through the ID Document Verification Playground, where different checks can be tested in real time.
Real-World Use Cases Where Age Verification Matters Most in India
1. Age-restricted content
Platforms offering adult material use layered checks to block minors and avoid strict penalties.
2. Retail and e-commerce
Stores selling alcohol, tobacco, vapes, or other restricted products use age scanners to prevent misuse, an important requirement as online deliveries rise in India.
3. Financial services
Banks use identity verification to confirm eligibility and reduce fraud.
4. Streaming platforms
Video services with adult categories rely on automated screening to stay within regulations.
Each of these industries benefits from AI-powered identity tools that improve both safety and trust.
Comparing Different Types of Digital Age Verification Tools
Every platform has different needs. Some need fast screening. Others need top-level fraud resistance. Some must meet strict regulations. A quick look at the most common verification methods helps businesses decide which approach works best for their users.
Below is a simple comparison table that keeps things clear and easy to digest.
Verification Method | What It Checks | Strengths | Best Use Case
Document Scanning | ID authenticity, security markers, text accuracy | Accurate detection of fake or altered IDs | Platforms that need strong identity proof before access
AI Age Estimation | Facial features that predict approximate age | Fast screening, low friction, works well for high-volume signups | Apps that want quick checks before full verification
Face Match and Biometric Analysis | Compares selfie with ID photo | Very hard to bypass, stops stolen or borrowed IDs | Services offering sensitive or restricted content
Device-Based Age Signals | Device history, login behavior, repeated identity switches | Stops repeat attempts and risky patterns | Platforms experiencing multiple fraud attempts from the same device
Liveness Detection | Confirms the user is a real person and not a photo or mask | Blocks deepfakes, printed images, and replay tricks | Businesses facing high levels of identity spoofing
Challenges and Ethical Responsibilities in India’s Diverse Digital Environment
Even advanced systems must handle challenges. AI models need constant training to avoid bias. Lighting conditions, camera quality, and access environments vary widely, especially in a country as diverse as India. Fraud attempts also evolve, pushing developers to create better defenses.
Ethical responsibility plays a major part too. Users must know when and why their data is being collected. Clear communication builds confidence. Initiatives like the open research shared on the Recognito GitHub page help encourage transparency and innovation across the industry.
The Future of Age Verification and Safe Digital Access in India
As technology improves, age verification tools will only grow more advanced. Future systems may combine 3D sensors, behavioral analytics, improved document checks, and deeper fraud detection to stay ahead of misuse.
The goal is not to replace traditional verification. It is to enhance it. These innovations will build a digital ecosystem where safety is automatic and effortless.
Building Trust in the Era of Intelligent Verification
Trust is the foundation of every online platform. A smart and well-designed age verification system gives businesses the confidence that every user is genuine. With AI-driven tools, global compliance, and layered protection, companies can keep minors away from restricted spaces while creating a smooth and secure experience for real adults.
Recognito continues helping organizations build this trust by offering AI-powered verification tools that combine accuracy, privacy, and confidence at every step.
Frequently Asked Questions
1. How does an age verification system detect fake IDs?
AI checks ID security features, patterns, and micro-details to spot tampering quickly and accurately.
2. Why is AI age estimation important?
It offers a fast age check from facial features, helping filter minors before full verification.
3. Can age verification help with compliance?
Yes, it follows global data rules like GDPR to keep verification secure and legally compliant.
4. What makes liveness detection essential?
It confirms the user is real by detecting natural movements, blocking photos, masks, and deepfakes.
The snow is falling, the hot cocoa is brewing, and the one crucial holiday mission begins: choosing the perfect movie lineup. With so many seasonal options – from black-and-white classics to modern streaming hits – the question isn’t if you should watch a Christmas movie, but which ones belong in your essential marathon? We’ve done the heavy lifting to bring you the definitive list of the Best Christmas Movies of All Time. This comprehensive guide is your one-stop shop for festive viewing, designed to power your Ultimate Holiday Binge from Thanksgiving through New Year’s Day.
Forget endless scrolling! We’ve organized the most iconic films into clear, easy-to-navigate categories. You’ll find the tear-jerking Undisputed Christmas Classics, the feel-good Modern Christmas Hits, and even deep dives into Genre Categories like the best holiday comedies and must-see unconventional action picks. Get your popcorn ready – it’s time to start binging!
The Hall of Fame: Undisputed Christmas Classics
The Traditional Corner
For an authentic taste of holiday nostalgia, your binge must start with these enduring masterpieces. These are the best Christmas movies of all time that truly set the standard for festive filmmaking.
It’s a Wonderful Life (1946): The quintessential holiday film. It delivers a powerful message about community and purpose, making it a perfect, tear-jerking centerpiece for any binge.
Miracle on 34th Street (1947): A timeless story that centers on the very real question of believing in Santa Claus, offering charm and warmth that define the era of black and white holiday films.
White Christmas (1954): Starring Bing Crosby and Danny Kaye, this early color musical is pure spectacle, packed with catchy tunes and a deeply heartwarming story about friendship and service.
The Beloved Family Staples
These are the films and specials that define childhood Christmases. Perfect for multi-generational viewing, these iconic holiday specials are essential viewing for your ultimate holiday binge.
A Christmas Story (1983): An irresistibly funny tale of Ralphie’s quest for a Red Ryder BB Gun. This movie is the king of nostalgic family viewing and mandatory watching for many.
Rudolph the Red-Nosed Reindeer (Rankin/Bass, 1964): The groundbreaking stop-motion classic that introduced generations to misfit toys and the power of being yourself. It remains one of the best family Christmas movies.
A Charlie Brown Christmas (1965): The simple, honest animated special about finding the true meaning of Christmas, featuring Vince Guaraldi’s unforgettable jazz score. A short, sweet, and perfect addition to any marathon.
Modern Christmas Hits: The Post-2000 Essentials
The Ultimate Comfort Watches
These best Christmas movies of all time have achieved classic status in a short time, offering high production value and an unbeatable feel-good factor perfect for repeating your ultimate holiday binge.
Elf (2003): Will Ferrell’s hilarious performance makes this the ultimate feel-good comedy. It’s highly rewatchable and mandatory for setting a silly, joyful tone.
The Polar Express (2004): A visually stunning, high-production animated adventure that captures the wonder of childhood belief. It remains a top choice for a magical, immersive holiday binge.
The Holiday (2006): Starring Cameron Diaz and Kate Winslet, this cozy holiday romance is the perfect winter escapism, blending two beautiful love stories across continents.
Animated Favorites
Don’t skip the modern animation that has quickly become essential for any holiday marathon. These Disney/Netflix hits bring beautiful visuals and fresh stories to the season.
Klaus (2019): This gorgeous, hand-drawn-style film from Netflix offers a stunning, original origin story for Santa Claus. It’s one of the best modern Christmas movies and perfect for binge-watching.
The Grinch (2018): A highly popular, vibrant update of the Dr. Seuss classic, featuring lush animation and a high-energy pace. A fun, quick watch for the whole family.
Frozen (2013): While not strictly a “Christmas” movie, its themes of winter, sisterly love, and holiday-adjacent magic make it a mandatory, snow-filled addition to any holiday binge.
The Best Christmas Comedies (The Laugh Track Binge)
Need a break from the sentimental stuff? These are the funniest Christmas movies ever made, perfect for a lighthearted night of holiday comedies.
Home Alone (1990): The ultimate slapstick classic. Kevin McCallister’s battle against the Wet Bandits is timeless, hilarious, and an absolute requirement for any holiday binge.
National Lampoon’s Christmas Vacation (1989): The king of relatable holiday chaos. Clark Griswold’s disastrous attempt at a “perfect” family Christmas delivers non-stop laughs and iconic quotes.
The Santa Clause (1994): Tim Allen’s sarcastic transformation into St. Nick blends cynicism with magic, creating one of the most clever and entertaining Christmas comedies of the 90s.
Unconventional & Action-Packed Holiday Movies
Spice up your marathon with these high-octane picks. While the debate rages on about whether these are true unconventional holiday films, their Christmas settings make them essential viewing for action lovers.
Die Hard (1988): The king of Christmas action movies. John McClane’s battle to save hostages at a holiday office party is the ultimate counter-programming to sentimental classics.
Lethal Weapon (1987): Set entirely during the holiday season, this definitive buddy-cop movie uses Christmas as a backdrop for redemption, action, and an iconic opening fight scene set to “Jingle Bell Rock.”
Kiss Kiss Bang Bang (2005): This neo-noir crime comedy embraces the holiday aesthetic wholeheartedly. It’s a sharp, witty, and underrated gem that perfectly fits the mold of an alternative Christmas movie.
Tear-Jerking Holiday Romance
Get the tissues ready for these emotional favorites. These best Christmas romance movies are essential viewing for anyone seeking love under the mistletoe.
Love Actually (2003): The definitive romantic holiday film. This ensemble masterpiece weaves together multiple stories of love and heartbreak in London, covering every emotion of the season.
While You Were Sleeping (1995): Sandra Bullock shines in this cozy 90s classic. It’s a sweet, heartwarming tale of mistaken identity and finding love where you least expect it.
Last Christmas (2019): A modern tear-jerker featuring the music of George Michael. This bittersweet story offers a surprising twist and a touching message about healing during the holidays.
Conclusion: The Final Word on Festive Films
From the black-and-white nostalgia of It’s a Wonderful Life to the high-octane thrills of Die Hard, this list proves there is truly a perfect holiday film for every mood. Whether you’re planning a cozy family night with animated staples or a laugh-out-loud marathon with the Griswolds, you now have the roadmap for the Ultimate Holiday Binge.
The best part of the season is sharing these traditions. Did your personal favorite make the list, or is there a hidden gem we missed? Drop your top pick in the comments below and let the debate begin! Now, grab the remote, pour the hot cocoa, and let the marathon begin. Happy holidays and happy watching!
About Herond
Herond Browser is a Web browser that prioritizes users’ privacy by blocking ads and cookie trackers, while offering fast browsing speed and low bandwidth consumption. Herond Browser features two built-in key products:
Herond Shield: an adblock and privacy protection tool;
Herond Wallet: a multi-chain, non-custodial social wallet.
Herond aims at becoming the ultimate Web 3.0 solution, heading towards the future of mass adoption. Herond has now released the mobile version on CH Play and App Store. Join our Community!
The post The Ultimate Holiday Binge: The Best Christmas Movies of All Time appeared first on Herond Blog.
Is your phone or desktop screen still looking like October? It’s time to trade in the boring backgrounds for a dash of pure holiday magic! The search for the perfect festive background ends here. We’ve scoured the web to bring you the definitive collection of Cute Christmas Wallpapers 2025 – all completely free and available in stunning, high-quality resolutions for instant download. Whether you’re looking for a cozy, minimalist aesthetic, a touch of classic charm with Santa and snowmen, or the pure joy of festive furry friends, we have the perfect image for spreading holiday cheer across every device you own.
The Cute Christmas Wallpaper Themes of 2025
Cozy & Aesthetic: Minimalist Holiday Vibes
If your personal style leans toward clean lines and calming palettes, you’ll fall in love with our collection of aesthetic Christmas wallpaper. These aren’t the busy, chaotic backgrounds of yesteryear; this year’s trend is all about minimalist holiday background designs that bring a sense of tranquility to your device.
The charm of the cozy Christmas lock screen lies in its simplicity. We focus on soft, muted color schemes – think creams, dusty sage greens, and warm terracotta reds, instead of harsh primary colors. You’ll find high-quality images featuring simple line drawings of pine branches, delicate reindeer silhouettes, and stylized snowflakes.
For maximum coziness, look for wallpapers that incorporate texture and light:
Knitted Patterns: Wallpapers that mimic the look of chunky knit sweaters or wool blankets.
Bokeh Lights: Soft, blurred backgrounds of warm twinkling lights that create a gentle, magical ambiance.
Hot Cocoa Scenes: Simple, overhead shots of steaming mugs, perfect for setting a relaxing winter mood.
Classic Charm: Santa, Reindeer, and Snowmen
Embrace the nostalgic, timeless appeal of the holidays with our Classic Charm collection. We focus on the playful, traditional side of the season, ensuring these images bring an immediate, feel-good holiday spirit to your device.
What You’ll Find in the Classic Charm Collection:
Timeless Appeal: Downloads feature the best of the season’s nostalgic, traditional charm.
Santa’s Favorites: The perfect cute Santa wallpaper showcasing cheerful, classic designs.
Festive Friends: Adorable reindeer and genuinely joyful scenes, including your new favorite festive snowman wallpaper.
Cartoon Backgrounds: High-quality, playful Christmas cartoon background downloads that capture the true magic of Christmas.
High Resolution: All images are available in crisp quality to ensure they look stunning on any screen.
Pet Power: Festive Furry Friends
Nothing melts the heart quite like an animal dressed for the holidays. For all animal lovers, our Pet Power collection is a must-download for maximum cuteness!
Feline Festivities: Discover the most adorable cute Christmas cat wallpaper featuring felines nestled in tinsel and cozy scenes.
Canine Cheer: Find a playful dog Christmas background where pups sport tiny antlers, festive scarves, and Santa hats.
Instant Cuteness: These free Christmas backgrounds are guaranteed to be conversation starters and deliver ultimate holiday joy to your screen.
Simple Download: Easily grab your favorite pet holiday wallpaper and celebrate the season with your furry friends!
Sweet Treats: Gingerbread and Candy Canes
Indulge your sweet tooth without the calories! This vibrant collection focuses on the most delicious icons of the season.
Edible Aesthetics: Find the perfect sweet Christmas wallpaper featuring colorful sweets, festive hot chocolate, and baked goods.
Classic Comfort: Easily download charming gingerbread man wallpaper designs, from cute cartoons to detailed photorealistic textures.
Striped Fun: Discover high-quality, free candy cane background images that add a pop of cheerful red and white to your screen.
Instant Joy: These free backgrounds are the simplest way to get into the joyful spirit of holiday baking and treats!
Optimizing Your Wallpaper for Every Device
Desktop & Laptop Backgrounds
To ensure your new free Christmas backgrounds for laptop or desktop look crisp and perfect, keep these optimization tips in mind:
Aspect Ratio is Key: All our Christmas desktop wallpaper 4k downloads are optimized for common desktop resolutions, primarily 16:9 and 16:10. This prevents stretching or distortion on most modern screens.
Avoid Scaling Issues: If an image doesn’t perfectly fit, choose the “Fit” or “Fill” option in your display settings rather than “Stretch” to maintain image quality and prevent blurry visuals.
Maximize Holiday Cheer: For continuous festivity, utilize your system’s built-in feature to set a dynamic wallpaper or a rotating slideshow. This lets you cycle through multiple cute designs automatically throughout the day.
Mobile Phone Lock Screens
Optimizing for your phone is crucial, as the clock and icons can easily cover the cutest parts of your background. Follow these tips for a perfect holiday lock screen free download:
Vertical Layout is Essential: All our cute Christmas phone wallpaper images are provided in vertical layouts (typically 9:16 or 20:9 aspect ratios) to fit modern smartphones perfectly.
Mind the Clock: When setting your image, ensure the main subject is positioned in the center or lower half of the screen. This prevents critical details from being hidden by the clock, date, and notification icons at the top.
Disable Parallax: For a static and sharp background, we recommend disabling the “Parallax” or “Perspective Zoom” effect (common on iPhone Christmas background settings). This stops the image from subtly moving and ensures the perfect composition you selected remains fixed.
Tablet & iPad Wallpapers
Tablets present a unique challenge: rotation. Since your device often switches between portrait and landscape modes, the image needs to be flexible. Use these tips to optimize your iPad Christmas wallpaper and other tablet holiday background downloads:
Go Wider, Not Taller: When choosing your free Christmas backgrounds, look for images that are inherently wider (designed for desktop) or have a central subject. This ensures the main focus remains visible when shifting between landscape and portrait wallpaper modes.
The Centerpiece Rule: Select wallpapers where the main design element (e.g., a cute snowman or a gingerbread house) is positioned near the center of the image. This prevents cropping issues when the screen rotates.
Utilize “Fit” Settings: Always choose the “Fit” or “Center” option in your tablet’s display settings. This prevents the image from zooming in too aggressively or being cut off when you switch orientations.
How to Download and Set Your New Wallpaper
A simple, step-by-step instructional section for beginners.
Step 1: Click the download link/button.
Step 2: Save the image to your device (Gallery/Downloads folder).
Step 3: Access device settings (Windows/Mac/iOS/Android).
Step 4: Navigate to Wallpaper/Background settings and select the new image.
Step 5: Adjust positioning (Fit, Fill, Center).
Conclusion
You’ve now explored the ultimate collection of Cute Christmas Wallpapers 2025, curated specifically to make your devices shine. Whether you chose a calming minimalist holiday background, a playful Christmas cartoon background featuring Santa, or one of our adorable pet holiday wallpaper designs, you are now fully equipped to spread the festive joy.
Remember, every single download here is 100% free and available in high-resolution, ensuring a crisp, beautiful look on your phone, tablet, or desktop.
Don’t wait – the holiday season moves fast! Start downloading your favorites now and instantly transform your digital world. If you love the new look, be sure to share this page with friends and check back often; we’ll be adding new aesthetic designs and trending cuts throughout the rest of the season!
About Herond
Herond Browser is a Web browser that prioritizes users’ privacy by blocking ads and cookie trackers, while offering fast browsing speed and low bandwidth consumption. Herond Browser features two built-in key products:
Herond Shield: an adblock and privacy protection tool;
Herond Wallet: a multi-chain, non-custodial social wallet.
Herond aims at becoming the ultimate Web 3.0 solution, heading towards the future of mass adoption. Herond has now released the mobile version on CH Play and App Store. Join our Community!
The post Cute Christmas Wallpaper 2025: Best Free Downloads for Holiday Cheer appeared first on Herond Blog.
Our community has doubled in size in the last year. It has grown from a collection of small communities to a global gathering place to talk about what’s happening. People are coming to Bluesky because they want a place where they can have conversations online again. Most platforms are now just media distribution engines. We are bringing social back to social media.
On Bluesky, people are meeting and falling in love, being discovered as artists, and having debates on niche topics in cozy corners. At the same time, some of us have developed a habit of saying things behind screens that we'd never say in person. To maintain a space for friendly conversation as well as fierce disagreement, we need clear standards and expectations for how people treat each other. In October, we announced our commitment to building healthier social media and updated our Community Guidelines. Part of that work includes improving how users can report issues, holding repeat violators accountable and providing greater transparency.
Today, we're introducing updates to how we track Community Guidelines violations and enforce our policies. We're not changing what's enforced - we've streamlined our internal tooling to automatically track violations over time, enabling more consistent, proportionate, and transparent enforcement.
What's Improving
As Bluesky grows, so must the complexity of our reporting system. In Bluesky’s early days, our reporting system was simple, because it served a smaller community. Now that we’ve grown to 40 million users across the world, regulatory requirements apply to us in multiple regions.
To meet those needs, we’re rolling out a significant update to Bluesky’s reporting system. We’ve expanded post-reporting options from 6 to 39. This update offers you more precise ways to flag issues, provides moderators better tools to act on reports quickly and accurately, and strengthens our ability to address our legal obligations around the world.
Our previous tools tracked violations individually across policies. We've improved our internal tooling so that when we make enforcement decisions, those violations are automatically tracked in one place and users receive clear information about where they stand.
This system is designed to strengthen Bluesky’s transparency, proportionality, and accountability. It applies our existing Community Guidelines with better infrastructure for tracking violations accurately and communicating outcomes clearly.
In-app Reporting Changes
Last month, we introduced updated Community Guidelines that provide more detail on the minimum requirements for acceptable behavior on Bluesky. These guidelines were shaped by community feedback along with regulatory requirements. The next step is aligning the reporting system with that same level of clarity.
Starting with the next app release, users will notice new reporting categories in the app. The old list of broad options has been replaced with more specific, structured choices.
For example:
You can now report Youth Harassment or Bullying, or content such as Eating Disorders directly, to reflect the increased need to protect youth from harm on social media.
You can flag Human Trafficking content, reflecting requirements under laws like the UK’s Online Safety Act.
This granularity helps our moderation systems and teams act faster and with greater precision. It also allows for more accurate tracking of trends and harms across the network.
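As a toy illustration of how finer-grained categories can speed up triage, the sketch below routes a report to a moderation queue based on its category. The queue names and priorities are invented for the example; only the category labels come from the post above.

# Hypothetical routing table; only the category names appear in Bluesky's post.
ROUTING = {
    "Youth Harassment or Bullying": {"queue": "youth-safety", "priority": 1},
    "Eating Disorders": {"queue": "youth-safety", "priority": 1},
    "Human Trafficking": {"queue": "legal-escalation", "priority": 0},
}

def route_report(category: str) -> dict:
    # Unknown categories fall back to a general queue for human review.
    return ROUTING.get(category, {"queue": "general", "priority": 2})

for category in ("Human Trafficking", "Spam-looking reply"):
    print(category, "->", route_report(category))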
Strike System Changes
When content violates our Community Guidelines, we assign a severity rating based on potential harm. Violations can result in a range of actions - from warnings and content removal for first-time, lower-risk violations to immediate permanent bans for severe violations or patterns demonstrating intent to abuse the platform.
Severity Levels
Critical Risk: Severe violations that threaten, incite, or encourage immediate real-world harm, or patterns of behavior demonstrating intent to abuse the platform → Immediate permanent ban
High Risk: Severe violations that threaten harm to individuals or groups → Higher penalty
Moderate Risk: Violations that degrade community safety → Medium penalty
Low Risk: Policy violations where education and behavior change are priorities → Lower penalty
Account-Level Actions
As violations accumulate, account-level actions escalate from temporary suspensions to permanent bans.
Not every violation leads to immediate account suspension - this approach prioritizes user education and gradual enforcement for lower-risk violations. But repeated violations escalate consequences, ensuring patterns of harmful behavior face appropriate accountability.
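The escalation logic described here can be pictured with a small sketch. The severity-to-penalty mapping follows the levels listed above, but the numeric strike weights, thresholds, and suspension lengths are assumptions for illustration, not Bluesky's actual values.

SEVERITY_PENALTY = {
    "critical": "permanent ban",   # immediate, regardless of history
    "high": 3,                     # illustrative strike weights
    "moderate": 2,
    "low": 1,
}

# Hypothetical escalation ladder keyed by accumulated strike count.
ESCALATION = [(4, "7-day suspension"), (7, "30-day suspension"), (10, "permanent ban")]

def apply_violation(total_strikes: int, severity: str) -> tuple[int, str]:
    penalty = SEVERITY_PENALTY[severity]
    if penalty == "permanent ban":
        return total_strikes, "permanent ban"
    total_strikes += penalty
    action = "warning / content removal"
    for threshold, outcome in ESCALATION:
        if total_strikes >= threshold:
            action = outcome
    return total_strikes, action

strikes = 0
for sev in ("low", "moderate", "high"):
    strikes, action = apply_violation(strikes, sev)
    print(f"after {sev} violation: {strikes} strikes -> {action}")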
In the coming weeks, when we notify users about enforcement actions, we will provide more detailed information, including:
Which Community Guidelines policy was violated
The severity level of the violation
The total violation count
How close the user is to the next account-level action threshold
The duration and end date of any suspension
Every enforcement action can be appealed. For post takedowns, email moderation@blueskyweb.xyz. For account suspensions, appeal directly in the Bluesky app. Successful appeals undo the enforcement action – we restore your account standing and end any suspension immediately.
Looking Ahead
As Bluesky grows, we’ll continue improving the tools that keep the network safe and open. An upcoming project will be a moderation inbox to move notifications on moderation decisions from email into the app. This will allow us to send a higher volume of notifications, and be more transparent about the moderation decisions we are taking on content.
These updates are part of our broader work on community health. Our goal is to ensure consistent, fair enforcement that holds repeat violators accountable while serving our growing community as we continue to scale.
As always, thank you for being part of this community and helping us build better social media.
The past decade has delivered extraordinary innovation across retail, media, loyalty, payments, and consumer experience. As brands scale their Retail Media ambitions to the global tune of $180 billion; as consumers adopt loyalty apps, rewards programs, and AI-powered tools; and as merchants modernize in-store operations with self-checkout, digital signage, and mobile payments, every layer of this ecosystem becomes increasingly interconnected.
Unfortunately, none of those connections fully work unless the system can confirm deterministically that a shopper is actually in front of a screen, in an aisle, at a drive-thru, or at the point of sale.
Volumes of customer data can yield a loose, probabilistic approximation, but without that verified moment in time and space, the promise of modern commerce ultimately falls apart. Personalization falters, attribution weakens, AI agents can only guess, loyalty fails to engage, and omnichannel experiences remain fragmented.
This is why so many of the industry’s most persistent challenges, from Retail Media skepticism, to in-store attribution gaps, to broken loyalty journeys, to disjointed checkout flows, can all be traced back to the same root cause: the absence of a universal, real-world, proof-of-presence signal. Until that gap is closed, the entire stack above it can only approximate value instead of delivering it with confidence.
Three Players, Three Layers, One Missing Signal
Modern commerce is a triangle built around three core players:
Brands, who need to understand which touchpoints genuinely influence behavior
Merchants, who need to recognize customers, honor loyalty, and connect digital identity to physical action
Consumers, who expect seamless experiences: rewards that travel with them, merchants that recognize them, and journeys that don’t break between screens
and three layers of activity that bind them together:
The Transaction & Fulfillment Layer (AI agents, e-commerce, in-store payments, BOPIS)
The Marketing Activation Layer (RMN, DOOH, Digital Signage)
The Loyalty & Engagement Layer (Perks, Status, Rewards)
Each layer exists to provide value for the three players, but all three layers share a common constraint keeping them from maximizing that value: they lack a shared, deterministic proof of presence.
Why Existing Signals Still Fall Short: Probabilistic Presence
Today’s technologies can approximate presence, but they never truly verify it. GPS is too coarse, too drifty, and unreliable indoors; BLE beacons are prone to overlap, spoofing, and signal bleed; QR codes depend entirely on user action and interrupt the journey; and Wi-Fi-based signals are noisy, shared, and often tied to the wrong device, rendering them unsuitable for precise indoor presence verification.
Each can get “close.” But when 80% of purchasing decisions happen in-store, close is not enough. Without deterministic proof of presence:
The Transaction Layer can’t optimize what it can’t verify.
The Marketing Layer can’t attribute what it can’t observe.
The Loyalty Layer can’t reward what it can’t confirm.
This is where LISNR’s Radius technology enters the picture: an ultrasonic proximity protocol built to verify presence in environments where traditional signals fail, such as stores, transit, venues, drive-thrus, checkout lanes, and everyday physical spaces.
How LISNR Radius Unlocks the Full Potential of Modern Commerce: Deterministic Presence
Radius stabilizes the existing stack instead of replacing it, giving marketers, retailers, and product teams a deterministic signal they can trust across environments and devices. It turns proximity from a probabilistic guess into a verifiable event. This kind of low-level, deterministic context signal is the missing “substrate” that indoor positioning and context-aware computing experts have argued is necessary for higher-level services, like personalization and attribution, to perform consistently. Radius enables every layer–from loyalty, to media, to transactions–to operate with clarity instead of assumption.
For brands, this means every in-store interaction can finally be tied to authenticated exposure and incremental lift without depending on multiple modalities, stitched-together proxies, or platform-bounded measurement. For merchants and infrastructure providers, it means digital identity and loyalty recognition at the point of entry, not just the point of purchase, and the ability to measure dwell time throughout the physical space with confidence. And for consumers, it means experiences “just work”: loyalty triggers automatically, offers follow them, and AI agents can act on their behalf because the system knows where they are—accurately and securely.
Deterministic proof of presence isn’t simply a technical upgrade; it’s the essential signal that enables the next generation of omnichannel commerce. And Radius is the first technology built to provide it at scale.
Proof of Presence: The Most Valuable Signal in Commerce
We’re entering a new era in which the most valuable signal in commerce is not the click, the impression, or even the transaction.
The most valuable signal is the proof that a shopper is there: physically, verifiably, and in real time.
That signal unlocks confidence in spend, precision in personalization, and alignment across every stakeholder in the modern retail ecosystem. Marketing becomes measurable. Loyalty becomes composable. Transactions become anticipatory. Omnichannel finally behaves like a single channel and not a patchwork of disconnected systems.
This shift is underway now, and the organizations that ground their media, loyalty, and transactional systems in deterministic presence will define the future standards everyone else has to follow.
This article is the first entry in a five-part exploration of how presence verification, and its role in maximizing the value of retail media, loyalty, and transactional systems, is becoming the foundational signal of modern commerce. The next article examines why legacy signals fall short in real-world environments and what LISNR’s proof-of-presence enables for brands, merchants, and consumers alike.
The post Your Omnichannel Promise Has a Presence Problem appeared first on LISNR.
Rushing into AML software implementation without fully understanding the specific compliance obligations involved can put firms at significant risk of regulatory scrutiny, financial penalties, and even reputational damage.
The post 5 Critical Errors to Dodge in AML Software Implementation first appeared on ComplyCube.
The cryptoasset ecosystem has become multichain. Different blockchains have emerged to serve distinct use cases: Ethereum for DeFi infrastructure, Solana for high-throughput applications, Bitcoin as a secure store of value and for trading, and so on. The industry has recognized that no single blockchain optimally serves every use case.
“In my last post, I wrote that resilience demands choice and that some dependencies matter more than others, and pretending otherwise spreads resources too thin. I joked that I was glad I didn’t have to make those decisions, but of course, I do.”
We all do, in our own way. Every time we vote, renew a passport, or trust a digital service, we’re participating in a system of priorities. Someone, somewhere, has decided what counts as critical and what doesn’t.
That realization sent me down another research rabbit hole about who decides what counts as critical, and what happens once something is labeled that way. How does accountability become infrastructure, and how do those rules, once written for slower systems, now shape the limits of resilience?
Critical infrastructure sounds like a technical category, but it’s really a political one. It represents a social contract between governments, markets, and citizens; a way of saying, “these systems matter enough to regulate.” The problem is that the rules we inherited for defining and managing critical infrastructure were written for a different kind of world. They were designed for slower systems, clearer ownership, and risks that respected national borders.
A Digital Identity Digest: The Regulator’s Dilemma (00:14:51). You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
Generational change
This brings me to something a friend and longtime identity thinker, Ian Glazer, has been discussing lately: Generational Change: Building Modern IAM Today (great talk at Authenticate 2025). His premise is simple: every major shift in how we manage identity begins with a crisis of accountability.
For our generation, at least in the U.S., that story starts twenty-five years ago with Enron. When the company collapsed under the weight of its own fraud, it triggered the Sarbanes-Oxley Act (SOX), a sweeping effort to rebuild public trust through enhanced oversight and auditing. SOX didn’t just transform corporate governance; it rewired the architecture of digital systems.
To prove they weren’t lying to their shareholders, companies had to prove who had access to financial systems, when they had it, and why. That single regulatory demand gave birth to the modern identity and access management (IAM) industry. User provisioning, quarterly access reviews, and separation-of-duties rules were not technical innovations for their own sake. They were compliance artifacts.
Accountability as a system requirement.
It worked, mostly. However, it also froze an entire generation of identity practices in a compliance mindset designed for static environments—the kind of world where servers sat in data centers, applications were monolithic, and auditors could literally count user accounts.
That world no longer exists. I’m not sure it ever really did, but it came close enough for compliance purposes.
Today’s infrastructure is a living network of APIs, ephemeral containers, and machine-to-machine connections. Permissions change constantly; roles are inferred rather than assigned. The SOX model of accountability—document-based, periodic, human-verified—cannot keep up with the speed and fluidity of digital operations.
Yet we still design our controls as if that old world were intact. Every new regulation borrows the same logic: prove compliance through evidence after the fact. In an API economy, that’s like measuring river depth with a snapshot.
This is the essence of what it now means to be a regulated industry: to operate in a world where compliance frameworks lag behind reality, and where the very act of proving control can undermine the flexibility that keeps systems running.
The challenge ahead is to re-imagine accountability for systems that no human can fully audit anymore.
Who decides what’s critical
The SOX era showed us what happens when accountability becomes infrastructure. Once something is declared essential to the public good, the expectations around it change. Auditors appear. Processes multiply. Documentation becomes proof of virtue. The thing itself—energy, banking, cloud, identity—may not inherently change, but the burden of accountability does, and how that accountability is handled influences how much innovation is allowed to happen.
That’s the quiet tension at the heart of every critical infrastructure discussion: as systems become more indispensable, the scrutiny around them intensifies. The stakes rise, and so do the checklists.
When oversight can’t keep up
At the top of that pyramid sits government. Regulation is, in theory, how society expresses its collective expectations—how we agree that safety and reliability matter more than speed or convenience. But in practice, the model of oversight we keep reusing comes from a slower era: a world where an inspector could show up with a clipboard, verify that the valves were turned the right way, and sign off.
In digital infrastructure, that model doesn’t scale. Governments can loom over the industry’s figurative shoulder, but they can’t keep up with its velocity. The old rituals of control—compliance reports, annual audits, quarterly attestations—look increasingly ceremonial when infrastructure changes by the second.
And yet, the instinct to regulate through prescription persists. When governments define something as critical, they tend to follow with a detailed checklist of how to do the job, codifying procedures in the name of safety. It’s a natural response to risk, but one that struggles to survive contact with continuous deployment pipelines and automated policy engines.
So maybe the harder question isn’t what counts as critical, but how much definition it requires. Can we acknowledge essentiality without turning it into bureaucracy? Can we create accountability without demanding omniscience? What governments introduce is another common paradox: the need to be specific without actually suggesting what tools to use to get there from here.
If the first generation of regulation hard-coded accountability into organizations, the next will need to hard-code trust into systems—without pretending that trust can be reduced to a form.
Accountability without omniscience
Declaring something “critical” has always carried the weight of oversight. The assumption is that governments, or their proxies, can both understand and manage the risk. But as infrastructure becomes increasingly digital and interconnected, that assumption begins to fail.
The OECD’s Government at a Glance 2025 report states that most countries now recognize that infrastructure resilience demands a whole-of-government, system-wide approach—one that acknowledges interdependencies, information-sharing, and trust as policy instruments in their own right. Yet the governance structures built for power grids and pipelines aren’t well-suited to cloud platforms, APIs, or digital identity systems. The more critical these become, the less feasible it is for any single authority to monitor and manage every component.
That’s the paradox of modern accountability: the more connected systems get, the harder it is to define responsibility in a way that scales. The critical infrastructure lab and the Research Network on Internet Governance (REDE), funded by the Internet Society, made the case that sovereignty and resilience now depend less on control and more on coordination—on being able to share risk data and dependencies transparently across borders and sectors. In principle, it’s the right move. In practice, it’s a trust exercise that few institutions are prepared for.
The idea of distributed accountability sounds progressive, but it also has a familiar flaw. When everyone is accountable, no one is accountable. The result is a kind of modern Bystander Effect: every actor assumes someone else will notice, intervene, or fix the problem. The chain of command dissolves into a web of good intentions.
This is the point where governance runs into the limits of imagination. Most people—and most regulators—can picture centralized oversight. They can picture privatized responsibility. But a shared model of accountability, one that is collaborative without being amorphous, is much harder to design. And yet that’s exactly what digital infrastructure demands.
We don’t need omniscience. But we do need visibility, and a clear sense of who moves first when something goes wrong.
When visibility becomes control
The inability to imagine shared accountability has consequences. When coordination feels uncertain, governments reach for the tools they already understand: classification, jurisdiction, and control.
It’s an understandable impulse. No one wants to be caught watching a crisis unfold with no clear authority to act. So when infrastructure becomes essential, the default response is to anchor it to sovereignty—to say, “this part of the network belongs to us.” Visibility becomes control.
But this is also where the governance model for critical infrastructure collides with the architecture of the Internet. Digital systems don’t map neatly onto national boundaries, and yet the instinct to assert jurisdiction persists.
The European Union’s NIS2 and CER directives, the United States’ NSM-22, India’s CERT-In, and a growing list of regional cybersecurity laws all share the same structure: protect the systems that matter most within the territory you can regulate.
Each framework makes sense in isolation. Together, they create a patchwork of compliance zones that overlap but rarely align. The more governments move to secure their slice of the Internet, the more the global system fragments. Resilience becomes something you defend domestically rather than something you coordinate internationally.
There are various ways to interpret this, as scholars like Niels ten Oever and others exploring “infrastructural power” have noted. I’ll refer to it as the politics of dependencies—states now manage risk not only by building capacity, but by redefining what (and whom) they depend on. It’s a rational strategy in an interdependent world, but it comes with trade-offs. Limiting dependency also limits collaboration. A jurisdiction that can’t tolerate external risk soon finds itself isolated from shared innovation and shared recovery.
This is what makes the regulator’s dilemma so difficult. The very act of governing risk can create new vulnerabilities. The more states assert control over digital infrastructure, the more brittle global interdependence becomes. And yet, doing nothing isn’t an option either.
Doing something (without breaking everything)
If doing nothing isn’t an option, what does doing something look like?
The compliance model
The easiest path is the one we know: expand the existing machinery of audits, attestations, and oversight. This approach offers the comfort of familiar processes, which provide defined responsibilities, measurable outcomes, and the illusion of control.
Unfortunately, as discussed, the compliance model is self-limiting. Checklists don’t scale well when the systems they’re meant to protect evolve faster than the paperwork. It works locally but drags globally, slowing innovation in the name of assurance.
The sovereignty model
The second path is already well underway. Nations reassert control by treating digital infrastructure as an expression of sovereignty. Clouds become domestic. Data stays home. Dependencies are pruned in the name of national security.
This approach can strengthen internal resilience, but only within its own borders. The cost of the sovereignty model is interoperability. The more countries pursue sovereign safety, the more brittle cross-border systems become, and the more the Internet looks like a federation of incompatible jurisdictions.
The coordination model (and its limits)
The third path—shared coordination—remains the ideal of any globalist like me, but it’s also the least likely.
True collaboration demands transparency, and transparency means exposing dependencies that are strategically or commercially sensitive. In a world leaning toward self-protection, that kind of openness is rare.
So coordination won’t vanish, but it will devolve by shifting from global alignment to regional or sector-based pacts, where trust is built within smaller, semi-compatible networks. That’s not the open Internet we once imagined, but it may be the one we have to learn to live with.
Each of these paths has trade-offs.
Compliance centralizes process. Sovereignty centralizes power. Coordination, when it happens, centralizes trust, and that is becoming a scarce resource. The challenge now is not to prevent fragmentation, but to make it survivable.
The next generation of accountability
Ian Glazer’s idea of Generational Change in Identity has changed how I see the evolution of regulation and infrastructure. Every generation inherits a crisis it didn’t design and a set of controls that no longer fit.
For ours, that crisis isn’t fraud or corporate malfeasance. It’s fragility; the uneasy recognition that we’ve built systems so interdependent that no one can fully explain how they work, let alone govern them coherently.
If the last generation of regulation was born from the failure of a few companies, the next will emerge from the failure of shared systems. We need to start thinking about what kind of accountability we design in response. Do we double down on compliance and centralization (something that would make me Very Sad), or accept that resilience must now be negotiated—sometimes awkwardly, sometimes locally—among the people and institutions who can still see each other across the network?
We may not get a global framework this time. We may get overlapping, regionally aligned regimes that reflect the trade-offs each society is willing to make between openness, autonomy, and control. That’s not necessarily a failure. It’s the kind of adaptation that complex systems make when they outgrow the rules that shaped them.
If the last generation of accountability was about proving control, the next must be about sharing it: building systems where visibility replaces omniscience, and cooperation replaces the illusion of total oversight.
That’s the generational change we’re living through: the slow shift from auditing the past to governing the very immediate present. And if we’re good, we’ll learn to design for accountability the way we once designed for uptime.
If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
Introduction
[00:00:30] Welcome back.
In my last post, The Paradox of Protection, I argued that resilience requires choice — that some dependencies matter far more than others. And pretending otherwise only spreads resources too thin.
I also joked — particularly in the written post — that I was very glad I personally did not have to make those decisions.
But of course, the truth is that we all do.
Every time we vote, renew a passport, or trust a digital service, we participate in a system of priorities. Someone, somewhere, has already decided what counts as critical and what doesn’t.
That realization sent me down a research rabbit hole:
Who decides what’s critical — and what happens once something earns that label?
Because the moment a system becomes critical, it doesn’t just get protection.
It gets rules, institutions, oversight, and redefined accountability.
[00:01:24] Accountability becomes surprisingly rigid.
[00:01:28] Today, let’s talk about the regulator’s dilemma — and how the very act of governing risk creates new vulnerabilities.
A few weeks ago, my friend and longtime identity thinker Ian Glazer gave a talk at Authenticate 2025 called Generational Change: Building Modern IAM Today. One idea from his opening really stuck with me:
Every major shift in how we manage identity begins with a crisis of accountability.
[00:02:01] For our generation — the last 25 years — that story begins with Enron.
Its collapse triggered the Sarbanes–Oxley Act (SOX), a sweeping effort to rebuild trust through oversight and auditing. And SOX didn’t just transform corporate governance — it rewired digital architecture.
To prove honesty to shareholders, companies had to prove:
Who had access
When they had it
And why
That requirement gave birth to modern identity and access management (IAM).
Key IAM practices that emerged:
User provisioning
Quarterly access reviews
Separation of duties
Access certification
Not because they were fun technical innovations, but because compliance required them.
And to be fair, it worked. Mostly.
But it also froze an entire generation of identity practices into a compliance mindset built for:
Static servers
Data centers
Monolithic applications
Human auditors counting accounts
A world that no longer exists.
The Accountability Lag
[00:03:21] Today’s infrastructure is a living network — APIs, ephemeral containers, and machine-to-machine connections.
[00:03:28] Permissions shift constantly.
Roles are inferred.
Identity is dynamic.
[00:03:33] And yet we still audit like it’s 2005.
The SOX model — document-based, periodic, human-verified — cannot keep up with the speed of cloud and automation.
Ironically, the more we try to prove control… the more we slow down the systems we’re trying to protect.
Compliance starts to undermine resilience.
It’s like measuring a rushing river using a still photograph.
You can capture the moment — but not the motion.
Once something becomes critical, auditors appear, processes multiply, and documentation becomes proof of virtue. Not capability.
The Weight of Critical Infrastructure
[00:04:42] As systems become indispensable, scrutiny intensifies.
[00:04:47] Stakes rise.
[00:04:50] Checklists expand.
At the top of all this sits government regulation — society’s way of expressing collective expectations around safety and reliability.
In theory, it’s how we prioritize the public good.
In practice, oversight is modeled on a world where inspectors carried clipboards and verified valves.
[00:05:21] That model doesn’t scale to digital infrastructure.
Government can loom over an industry’s shoulder — but it cannot see fast enough or deep enough to match the pace of automation.
[00:05:31] Annual audits and quarterly attestations become ceremonial when infrastructure shifts every second.
And yet the instinct to regulate through prescriptive checklists persists.
But prescriptive rules do not survive contact with:
Continuous deployment
API-driven systems
Automated policy engines
So the real question becomes:
[00:06:01] What truly counts as critical — and how much definition is necessary?
Can we acknowledge essentiality without creating bureaucratic drag?
Can we create accountability without pretending omniscience?
When Accountability Becomes Trust
[00:06:25] If the last generation of regulation hard-coded accountability into organizations…
[00:06:30] The next will have to hard-code trust into systems.
But not the kind of trust that can be reduced to a form or a checklist.
[00:06:39] Declaring something critical always invites oversight — and assumes governments can understand and manage the risk.
Increasingly, they can’t.
The OECD’s Government at a Glance 2025 notes that resilience now demands a whole-of-government approach, treating information sharing and trust as policy instruments.
Yet our governance frameworks were built for:
Power grids
Pipelines
Physical infrastructure
Not cloud platforms, APIs, or digital identity.
[00:07:15] The more critical digital systems become, the less feasible it is for any single authority to monitor them.
It’s the paradox of modern accountability.
[00:07:23] More connectivity = harder definitions of responsibility.
Research from the Critical Infrastructure Lab and the Internet Society’s REDE project highlights this shift:
Resilience now depends less on control and more on coordination.
But coordination has a flaw:
When everyone is accountable, no one is accountable.
Governance as a Trust Exercise
[00:08:03] When every actor assumes someone else will intervene, the chain of command dissolves.
[00:08:10] Governance then becomes imagination:
Regulators can picture centralization.
They can picture privatization.
But shared accountability — structured collaboration — is harder to design.
Yet digital infrastructure demands exactly that.
We don’t need omniscience.
We need visibility and clarity about who moves first when something goes wrong.
When coordination feels uncertain, governments default to familiar tools:
Classification
Jurisdiction
Control
Because no one wants to watch a crisis unfold without clear authority to act.
Sovereignty and Fragmentation
[00:08:50] When infrastructure becomes essential, governments anchor to sovereignty:
“This part of the network is ours.”
But digital systems ignore borders.
Still, the instinct persists.
The result is a wave of regional cybersecurity laws:
EU NIS2
U.S. NSM-22
India’s CERT rules
Regional data residency mandates
Each makes sense alone.
Together, they form a patchwork of compliance zones that rarely align.
The more governments secure their slice of the Internet, the more the global system fragments.
Resilience becomes domestic instead of international.
The Three Paths of Modern Governance
[00:09:55] When doing nothing isn’t an option, what does doing something look like?
There are three paths:
Compliance
Comfortable, measurable, familiar — but self-limiting.
Checklists don’t scale.
Paperwork slows innovation.
Sovereignty
Domestic clouds. Localized data.
Strengthens internal resilience but fractures interoperability.
Coordination
Shared governance, mutual visibility, collective risk management.
Globally the best path — but increasingly rare because it requires uncomfortable transparency.
And transparency exposes dependencies that many institutions don’t want exposed.
So coordination shrinks:
From global to regional
From universal to sectoral
From open to semi-compatible
[00:11:29] Not the Internet we imagined.
But possibly the one we have to live with.
Each governance path centralizes something:
Compliance → process
Sovereignty → power
Coordination → trust
And trust is scarce.
This brings us back to Internet fragmentation:
We cannot prevent fragmentation, but we can make it survivable.
Identity governance is a generational story.
Each generation inherits a crisis it didn’t design and controls that don’t fit anymore.
The crisis today isn’t fraud.
It’s fragility — and our recognition that our systems are too interdependent to fully understand.
[00:12:32] The next regulatory wave will emerge from failures in shared systems.
We must choose what kind of accountability to design:
Double down on compliance and centralization? Or negotiate resilience — sometimes awkwardly, sometimes locally — with the people who can still see each other across the network?
Likely we’ll see:
Overlapping
Regionally aligned
Sector-specific
Regimes that reflect societal trade-offs between openness, autonomy, and control.
This isn’t failure.
It’s adaptation.
If the last generation of accountability was about proving control…
The next must be about sharing it.
Building systems where:
Visibility replaces omniscience
Cooperation replaces total oversight
Resilience is negotiated, not dictated
This is the shift from auditing the past to governing the present.
[00:13:41] And if we’re good — if we learn from trade-offs without repeating them — we might design accountability the way we once designed for uptime.
We’ll see how it goes.
Closing Thoughts
There’s more in the written blog, including research links that informed this episode.
If you’d like to reflect or push back, I’d love to continue the conversation.
[00:14:07] Have a great rest of your day.
Outro
If this episode helped make things clearer — or at least more interesting — share it with a friend or colleague.
Connect with me on LinkedIn: @hlflanagan
And if you enjoyed the show, please subscribe and leave a rating and review on your favorite podcast platform.
You’ll find the full written post at sphericalcowconsulting.com.
Stay curious. Stay engaged. Let’s keep these conversations going.
The post The Regulator’s Dilemma appeared first on Spherical Cow Consulting.
Building secure and seamless sign-in experiences is a core challenge for today’s iOS developers. Users expect authentication that feels instant, yet protects them with strong safeguards like multi-factor authentication (MFA). With Okta’s DirectAuth and push notification support, you can achieve both – delivering native, phishing-resistant MFA flows without ever leaving your app.
In this post, we’ll walk you through how to:
Set up your Okta developer account
Configure your Okta org for DirectAuth and push notification factor
Enable your iOS app to drive DirectAuth flows natively
Create an AuthService with the support of DirectAuth
Build a fully working SwiftUI demo leveraging the AuthService
Note: This guide assumes you’re comfortable developing in Xcode using Swift and have basic familiarity with Okta’s identity flows.
If you want to skip the tutorial and run the project, you can follow the instructions in the project’s README.
Table of Contents
Use Okta DirectAuth with push notification factor
Prefer phishing-resistant authentication factors
Set up your iOS project with Okta’s mobile SDKs
Authenticate your iOS app using Okta DirectAuth
Add the OIDC configuration to your iOS app
Add authentication in your iOS app without a browser redirect using Okta DirectAuth
Secure, native sign-in in iOS
Sign-out users when using DirectAuth
Refresh access tokens securely
Display the authenticated user’s information
Build the SwiftUI views to display authenticated state
Read ID token info
View the authenticated user’s profile info
Keeping tokens refreshed and maintaining user sessions
Build your own secure native sign-in iOS app
Use Okta DirectAuth with push notification factor
The first step in implementing Direct Authentication with push-based MFA is setting up your Okta org and enabling the Push Notification factor. DirectAuth allows your app to handle authentication entirely within its own native UI – no browser redirection required – while still leveraging Okta’s secure OAuth 2.0 and OpenID Connect (OIDC) standards under the hood.
This means your app can seamlessly verify credentials, obtain tokens, and trigger a push notification challenge without switching contexts or relying on the SafariViewController.
Before you begin, you’ll need an Okta Integrator Free Plan account. To get one, sign up for an Integrator account. Once you have an account, sign in to your Integrator account. Next, in the Admin Console:
Go to Applications > Applications
Select Create App Integration
Select OIDC - OpenID Connect as the sign-in method
Select Native Application as the application type, then select Next
Enter an app integration name
Configure the redirect URIs:
Redirect URI: com.okta.{yourOktaDomain}:/callback
Post Logout Redirect URI: com.okta.{yourOktaDomain}:/ (where {yourOktaDomain}.okta.com is your Okta domain name). Your domain name is reversed to provide a unique scheme to open your app on a device.
Select Advanced.
Select the OOB and MFA OOB grant types.
In the Controlled access section, select the appropriate access level
Select Save
NOTE: When using a custom authorization server, you need to set up authorization policies. Complete these additional steps:
In the Admin Console, go to Security > API > Authorization Servers
Select your custom authorization server (default)
On the Access Policies tab, ensure you have at least one policy:
If no policies exist, select Add New Access Policy
Give it a name like “Default Policy”
Set Assign to “All clients”
Click Create Policy
For your policy, ensure you have at least one rule:
Select Add Rule if no rules exist
Give it a name like “Default Rule”
Set Grant type is to “Authorization Code”
Select Advanced and enable “MFA OOB”
Set User is to “Any user assigned the app”
Set Scopes requested to “Any scopes”
Select Create Rule
For more details, see the Custom Authorization Server documentation.
Where are my new app's credentials?
Creating an OIDC Native App manually in the Admin Console configures your Okta Org with the application settings.
After creating the app, you can find the configuration details on the app’s General tab:
Client ID: Found in the Client Credentials section
Issuer: Found in the Issuer URI field for the authorization server that appears by selecting Security > API from the navigation pane.
For example:
Issuer: https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6
NOTE: You can also use the Okta CLI Client or Okta PowerShell Module to automate this process. See this guide for more information about setting up your app.
Prefer phishing-resistant authentication factors
When implementing DirectAuth with push notifications, security remains your top priority. Every new Okta Integrator Free Plan account requires admins to configure multi-factor authentication (MFA) using Okta Verify by default. We’ll keep these default settings for this tutorial, as they already support Okta Verify Push, the recommended factor for a native and secure authentication experience.
Push notifications through Okta Verify provide strong, phishing-resistant protection by requiring the user to approve sign-in attempts directly from a trusted device. Combined with biometric verification (Face ID or Touch ID) or device PIN enforcement, Okta Verify Push ensures that only the legitimate user can complete the authentication flow – even if credentials are compromised.
By default, the push notification factor isn’t enabled in the Integrator Free Plan org. Let’s enable it now.
Navigate to Security > Authenticators. Find Okta Verify and select Actions > Edit. In the Okta Verify modal, find Verification options and select Push notification (Android and iOS only). Select Save.
Set up your iOS project with Okta’s mobile SDKs
Before integrating Okta DirectAuth and Push Notification MFA, make sure your development environment meets the following requirements:
Xcode 15.0 or later – This guide assumes you’re comfortable developing iOS apps in Swift using Xcode.
Swift 5+ – All examples use modern Swift language features.
Swift Package Manager (SPM) – Dependencies are managed through SPM, which is built into Xcode.
Once your environment is ready, create a new iOS project in Xcode and prepare it for integration with Okta’s mobile libraries.
Authenticate your iOS app using Okta DirectAuth
If you are starting from scratch, create a new iOS app:
Open Xcode
Go to File > New > Project
Select iOS App and select Next
Enter the name of the project, such as “okta-mfa-direct-auth”
Set the Interface to SwiftUI
Select Next and save your project locally
To integrate Okta’s Direct Authentication SDK into your iOS app, we’ll use Swift Package Manager (SPM) – the recommended and modern way to manage dependencies in Xcode.
Follow these steps:
Open your project in Xcode (or create a new one if needed)
Go to File > Add Package Dependencies
In the search field at the top-right, enter https://github.com/okta/okta-mobile-swift and press Return. Xcode will automatically fetch the available packages.
Select the latest version (recommended) or specify a compatible version with your setup
When prompted to choose which products to add, ensure that you select your app target next to OktaDirectAuth and AuthFoundation
Select Add Package
These packages provide all the tools you need to implement native authentication flows using OAuth 2.0 and OpenID Connect (OIDC) with DirectAuth, including secure token handling and MFA challenge management – without relying on a browser session.
Once the integration is complete, you’ll see OktaMobileSwift and its dependencies listed under your project’s Package Dependencies section in Xcode.
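If you ever consume the SDK from a standalone Swift package target rather than an Xcode app project, the equivalent manifest declaration is sketched below. This is only an illustration, not part of the tutorial: the package and target names are placeholders, and the version requirement is an example, so check the okta-mobile-swift releases for the version that matches your setup.
// swift-tools-version:5.9
// Package.swift (sketch): depends on the same two products used in this tutorial.
import PackageDescription

let package = Package(
    name: "OktaDirectAuthDemo",            // placeholder package name
    platforms: [.iOS(.v16)],
    dependencies: [
        // Example version requirement; pin to a release that fits your project.
        .package(url: "https://github.com/okta/okta-mobile-swift", from: "2.0.0")
    ],
    targets: [
        .target(
            name: "OktaDirectAuthDemo",     // placeholder target name
            dependencies: [
                .product(name: "OktaDirectAuth", package: "okta-mobile-swift"),
                .product(name: "AuthFoundation", package: "okta-mobile-swift")
            ]
        )
    ]
)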
Add the OIDC configuration to your iOS app
The cleanest and most scalable way to manage configuration is to use a property list file for Okta stored in your app bundle.
Create the property list for your OIDC and app config by following these steps:
Right-click on the root folder of the project
Select New File from Template (New File in legacy Xcode versions)
Ensure you have iOS selected on the top picker
Select Property List template and select Next
Name the template Okta and select Create to create an Okta.plist file
You can edit the file in XML format by right-clicking and selecting Open As > Source Code. Copy and paste the following code into the file.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>scopes</key>
<string>openid profile offline_access</string>
<key>redirectUri</key>
<string>com.okta.{yourOktaDomain}:/callback</string>
<key>clientId</key>
<string>{yourClientID}</string>
<key>issuer</key>
<string>{yourOktaDomain}/oauth2/default</string>
<key>logoutRedirectUri</key>
<string>com.okta.{yourOktaDomain}:/</string>
</dict>
</plist>
Replace {yourOktaDomain} and {yourClientID} with the values from your Okta org.
With this file in place, the SDK can load your configuration automatically: when you initialize the DirectAuth flow later in this tutorial, it reads Okta.plist and is ready to handle authentication requests.
Now that you’ve added the SDK and property list file, let’s implement the main authentication logic for your app.
We’ll build a dedicated service called AuthService, responsible for logging users in and out, refreshing tokens, and managing session state.
This service will rely on OktaDirectAuth for native authentication and AuthFoundation for secure token handling.
To set it up, create a new folder named Auth under your project’s folder structure, then add a new Swift file called AuthService.swift.
Here, you’ll define your authentication protocol and a concrete class that integrates directly with the Okta SDK – making it easy to use across your SwiftUI or UIKit views.
import AuthFoundation
import OktaDirectAuth
import Observation
import Foundation
protocol AuthServicing {
// The accessToken of the logged in user
var accessToken: String? { get }
// State for driving SwiftUI
var state: AuthService.State { get }
// Sign in (Password + Okta Verify Push)
func signIn(username: String, password: String) async throws
// Sign out & revoke tokens
func signOut() async
// Refresh access token if possible (returns updated token if refreshed)
func refreshTokenIfNeeded() async throws
// Getting the userInfo out of the Credential
func userInfo() async throws -> UserInfo?
}
With this added, you will get an error that AuthService can’t be found. That’s because we haven’t created the class yet. Below this code, add the following declarations of the AuthService class:
@Observable
final class AuthService: AuthServicing {
}
After doing so, we next need to conform the AuthService class to the AuthServicing protocol and also create the State enum, which will hold all the states of our authentication process.
To do that, first let’s create the State enum inside the AuthService class like this:
@Observable
final class AuthService: AuthServicing {
enum State: Equatable {
case idle
case authenticating
case waitingForPush
case authorized(Token)
case failed(errorMessage: String)
}
}
The new code resolved the two errors about the AuthService and the State enum. We only have one error left to fix: conforming the class to the protocol.
We will start implementing the functions top to bottom. Let’s first add the two variables from the protocol, accessToken and state. After the definition of the enum, we will add the properties:
@Observable
final class AuthService: AuthServicing {
enum State: Equatable {
case idle
case authenticating
case waitingForPush
case authorized(Token)
case failed(errorMessage: String)
}
private(set) var state: State = .idle
var accessToken: String? {
return nil
}
}
For now, we will leave the accessToken getter with a return value of nil, as we are not using the token yet. We’ll add the implementation later.
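If you prefer to wire it up now, a minimal sketch can read the token from the credential stored during sign-in; this mirrors how the tutorial reads the ID token from Credential.default later on:
var accessToken: String? {
    // Return the access token from the stored credential, if the user is signed in.
    Credential.default?.token.accessToken
}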
Next, we’ll add a private property to hold a reference to the DirectAuthenticationFlow instance.
This object manages the entire DirectAuth process, including credential verification, MFA challenges, and token issuance. The object must persist across authentication steps.
Insert the following variable between the existing state and accessToken properties:
private(set) var state: State = .idle
@ObservationIgnored private let flow: DirectAuthenticationFlow?
var accessToken: String? {
return nil
}
To allocate the flow variable, we will need to implement an initializer for the AuthService class. Inside, we’ll allocate the flow using the PropertyListConfiguration that we introduced earlier. Just after the accessToken getter, add the following function:
// MARK: Init
init() {
// Prefer PropertyListConfiguration if Okta.plist exists; otherwise fall back
if let configuration = try? OAuth2Client.PropertyListConfiguration() {
self.flow = try? DirectAuthenticationFlow(client: OAuth2Client(configuration))
} else {
self.flow = try? DirectAuthenticationFlow()
}
}
This will try to fetch the Okta.plist file from the project’s folder, and if not found, will fall back to the default initializer of the DirectAuthenticationFlow. We have now successfully allocated the DirectAuthenticationFlow, and we can proceed with implementing the next functions of the protocol.
Let’s move down to the first function in the protocol: signIn(username:password:).
The signIn method below performs the full authentication flow using Okta DirectAuth and Auth Foundation.
It authenticates a user with their username and password, handles MFA challenges (in this case, Okta Verify Push), and securely stores the resulting token for future API calls. Add the following code just under the Init that we just added.
// MARK: AuthServicing
func signIn(username: String, password: String) {
Task { @MainActor in
// 1️⃣ Start the Sign-In Process
// Update UI state and begin the DirectAuth flow with username/password.
state = .authenticating
do {
let result = try await flow?.start(username, with: .password(password))
switch result {
// 2️⃣ Handle Successful Authentication
// Okta validated credentials, return access/refresh/ID tokens.
case .success(let token):
let newCred = try Credential.store(token)
Credential.default = newCred
state = .authorized(token)
// 3️⃣ Handle MFA with Push Notification
// Okta requires MFA, wait for push approval via Okta Verify.
case .mfaRequired:
state = .waitingForPush
let status = try await flow?.resume(with: .oob(channel: .push))
if case let .success(token) = status {
Credential.default = try Credential.store(token)
state = .authorized(token)
}
default:
break
}
} catch {
// 4️⃣ Handle Errors Gracefully
// Update state with a descriptive error message for the UI.
state = .failed(errorMessage: error.localizedDescription)
}
}
}
Let’s break down what’s happening step by step:
1. Start the sign-in process
When the function is called, it launches a new asynchronous Task and sets the UI state to .authenticating. It then initiates the DirectAuth flow using the provided username and password:
let result = try await flow?.start(username, with: .password(password))
This sends the user’s credentials to Okta’s Direct Authentication API and waits for a response.
2. Handle successful authentication
If Okta validates the credentials and no additional verification is needed, the result will be .success(token).
The returned Token object contains access, refresh, and ID tokens.
We securely persist the credentials using AuthFoundation:
let newCred = try Credential.store(token)
Credential.default = newCred
state = .authorized(token)
This marks the user as authenticated and updates the app state, allowing your UI to transition to the signed-in experience.
3. Handle MFA with push notification
If Okta determines that an MFA challenge is required, the result will be .mfaRequired. The app updates its state to .waitingForPush, prompting the user to approve the login on their Okta Verify app:
state = .waitingForPush
let status = try await flow?.resume(with: .oob(channel: .push))
The .oob(channel: .push) parameter resumes the authentication flow by waiting for the push approval event from Okta Verify.
Once the user approves, Okta returns a new token:
if case let .success(token) = status {
Credential.default = try Credential.store(token)
state = .authorized(token)
}
4. Handle errors
If any step fails (e.g., invalid credentials, network issues, or push timeout), the catch block updates the UI to show an error message:
state = .failed(errorMessage: error.localizedDescription)
This failure state allows your app to display user-friendly error messages while preserving the underlying error details for debugging.
Secure, native sign-in in iOS
This function demonstrates a complete native sign-in experience with Okta DirectAuth: no web views, no redirects.
It authenticates the user, manages token storage securely, and handles push-based MFA all within your app’s Swift layer – making the authentication flow fast, secure, and frictionless.
The following diagram illustrates how the authentication flow works under the hood when using Okta DirectAuth with push notification authentication factor:
Sign-out users when using DirectAuth
The next function in the protocol is the sign-out method. This method provides a clean and secure way to log the user out of the app.
It revokes the user’s active tokens from Okta and resets the local authentication state, ensuring that no stale credentials remain on the device. Add the following code right below the signIn method:
func signOut() async {
if let credential = Credential.default {
try? await credential.revoke()
}
Credential.default = nil
state = .idle
}
Let’s look at what each step does:
1. Check for an existing credential
if let credential = Credential.default {
The method first checks if a stored credential (token) exists in memory.
Credential.default represents the current authenticated session created earlier during sign-in.
2. Revoke the tokens from Okta
try? await credential.revoke()
This line tells Okta to invalidate the access and refresh tokens associated with that credential.
Calling revoke() ensures that the user’s session terminates locally and in the authorization server, preventing further API access with those tokens.
The try? operator is used to safely ignore any errors (e.g., network failure during logout), since token revocation is a best-effort operation.
3. Clear local credential data
Credential.default = nil
After revoking the tokens, the app clears the local credential object.
This removes any sensitive authentication data from memory, ensuring that no valid tokens remain on the device.
4. Reset the authentication state
state = .idle
Finally, the app updates its internal state back to .idle, which tells the UI that the user is now logged out and ready to start a new session.
You can use this state to trigger a transition back to the login screen or turn off authenticated features.
The protocol conformance is almost complete; we only have two functions remaining to implement.
Refresh access tokens securely
Access tokens issued by Okta have a limited lifetime to reduce the risk of misuse if compromised. OAuth clients that can’t maintain secrets, like mobile apps, require short access token lifetimes for security.
To maintain a seamless user experience, your app should refresh tokens automatically before they expire.
The refreshTokenIfNeeded() method handles this process securely using AuthFoundation’s built-in token management APIs.
Let’s walk through what it does. Add the following code right after the signOut method:
func refreshTokenIfNeeded() async throws {
guard let credential = Credential.default else { return }
try await credential.refresh()
}
1. Check for an existing credential
guard let credential = Credential.default else { return }
Before attempting a token refresh, the method checks whether a valid credential exists. If no credential is stored (e.g., the user hasn’t signed in yet or has logged out), the method exits early.
2. Refresh the token
try await credential.refresh()
This line tells Okta to exchange the refresh token for a new access token and ID token.
The refresh() method automatically updates the Credential object with the new tokens and securely persists them using AuthFoundation.
If the refresh token has expired or is invalid, this call throws an error – allowing your app to detect the issue and prompt the user to sign in again.
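One way a caller might react to that error is to treat a failed refresh as the end of the session and force a fresh sign-in. The helper below is a hypothetical sketch, not part of the tutorial:
// Hypothetical helper inside AuthService: refresh if possible, otherwise clear
// the session so the UI falls back to the login form.
func ensureValidSession() async {
    do {
        try await refreshTokenIfNeeded()
    } catch {
        // The refresh token is expired or invalid; sign out and start over.
        await signOut()
    }
}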
Display the authenticated user’s information
Lastly, let’s look at the userInfo() function. After authenticating, your app can access the user’s profile information – such as their name, email, or user ID – from Okta using a standard OIDC endpoint.
The userInfo() method retrieves this data from the ID token or by calling the authorization server’s /userinfo endpoint. The ID token doesn’t necessarily include all of the profile information though, as the ID token is intentionally lightweight.
Here’s how it works. Add the following code after the end of refreshTokenIfNeeded():
func userInfo() async throws -> UserInfo? {
if let userInfo = Credential.default?.userInfo {
return userInfo
} else {
do {
guard let userInfo = try await Credential.default?.userInfo() else {
return nil
}
return userInfo
} catch {
return nil
}
}
}
1. Return the cached user info
if let userInfo = Credential.default?.userInfo {
return userInfo
}
If the user’s profile information has already been fetched and stored in memory, the method returns it immediately.
This avoids unnecessary network calls, providing a fast and responsive experience.
2. Fetch user info
guard let userInfo = try await Credential.default?.userInfo() else {
return nil
}
If the cached data isn’t available, the method fetches it directly from Okta using the UserInfo endpoint.
This endpoint returns standard OpenID Connect claims such as:
sub (the user's unique ID)
name
email
preferred_username
etc...
The AuthFoundation SDK handles the request and parsing for you, returning a UserInfo object.
3. Handle errors gracefully
catch {
return nil
}
If the request fails (for example, due to a network issue or expired token), the function returns nil.
This prevents your app from crashing and allows you to handle the error by displaying a default user state or prompting re-authentication.
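As a usage sketch (not part of the tutorial), a caller could surface a display name from the returned claims. The name property is assumed here from AuthFoundation’s standard OIDC claim accessors; substitute whichever claims your app actually needs:
// Hypothetical usage: fall back to a placeholder when the claim is missing
// or the lookup fails.
let displayName = (try? await authService.userInfo())?.name ?? "Unknown user"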
With this implemented, you’ve resolved all the errors and should be able to build the app. 🎉
Build the SwiftUI views to display authenticated state
Now that we’ve built the AuthService to handle sign-in, sign-out, token management, and user info retrieval, let’s see how to integrate it into your app’s UI.
To maintain consistency in your architecture, rename the default ContentView to AuthView and update all references accordingly.
This clarifies the purpose of the view – it will serve as the primary authentication interface.
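Updating references includes your app’s entry point, which should now present AuthView as the root view. A minimal sketch looks like this (the App type name below simply follows the example project name and may differ in your project):
import SwiftUI

@main
struct OktaMFADirectAuthApp: App {
    var body: some Scene {
        WindowGroup {
            // AuthView (formerly ContentView) is the root of the app.
            AuthView()
        }
    }
}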
Then, create a Views folder under your project’s folder, drag and drop the AuthView into the newly created folder, and create a new file named AuthViewModel.swift in the same folder.
The AuthViewModel will encapsulate all authentication-related state and actions, acting as the communication layer between your view and the underlying AuthService.
Add the following code in AuthViewModel.swift:
import Foundation
import Observation
import AuthFoundation
/// The `AuthViewModel` acts as the bridge between your app's UI and the authentication layer (`AuthService`).
/// It coordinates user actions such as signing in, signing out, refreshing tokens, and fetching user profile data.
/// This class uses Swift's `@Observable` macro so that your SwiftUI views can automatically react to state changes.
@Observable
final class AuthViewModel {
// MARK: - Dependencies
/// The authentication service responsible for handling DirectAuth sign-in,
/// push-based MFA, token management, and user info retrieval.
private let authService: AuthServicing
// MARK: - UI State Properties
/// Stores the user's token, which can be used for secure communication
/// with backend services that validate the user's identity.
var accessToken: String?
/// Represents a loading state. Set to `true` when background operations are running
/// (such as sign-in, sign-out, or token refresh) to display a progress indicator.
var isLoading: Bool = false
/// Holds any human-readable error messages that should be displayed in the UI
/// (for example, invalid credentials or network errors).
var errorMessage: String?
/// The username and password properties are bound to text fields in the UI.
/// As the user types, these values update automatically thanks to SwiftUI's reactive data binding.
/// The view model then uses them to perform DirectAuth sign-in when the user submits the form.
var username: String = ""
var password: String = ""
/// Exposes the current authentication state (idle, authenticating, waitingForPush, authorized, failed)
/// as defined by the `AuthService.State` enum. The view can use this to display the correct UI.
var state: AuthService.State {
authService.state
}
// MARK: - Initialization
/// Initializes the view model with a default instance of `AuthService`.
/// You can inject a mock `AuthServicing` implementation for testing.
init(authService: AuthServicing = AuthService()) {
self.authService = authService
}
// MARK: - Authentication Actions
/// Attempts to authenticate the user with the provided credentials.
/// This triggers the full DirectAuth flow -- including password verification,
/// push notification MFA (if required), and secure token storage via AuthFoundation.
@MainActor
func signIn() async {
setLoading(true)
defer { setLoading(false) }
do {
try await authService.signIn(username: username, password: password)
accessToken = authService.accessToken
} catch {
errorMessage = error.localizedDescription
}
}
/// Signs the user out by revoking active tokens, clearing local credentials,
/// and resetting the app's authentication state.
@MainActor
func signOut() async {
setLoading(true)
defer { setLoading(false) }
await authService.signOut()
}
// MARK: - Token Handling
/// Refreshes the user's access token using their refresh token.
/// This allows the app to maintain a valid session without requiring
/// the user to log in again after the access token expires.
@MainActor
func refreshToken() async {
setLoading(true)
defer { setLoading(false) }
do {
try await authService.refreshTokenIfNeeded()
accessToken = authService.accessToken
} catch {
errorMessage = error.localizedDescription
}
}
// MARK: - User Info Retrieval
/// Fetches the authenticated user's profile information from Okta.
/// Returns a `UserInfo` object containing standard OIDC claims (such as `name`, `email`, and `sub`).
/// If fetching fails (e.g., due to expired tokens or network issues), it returns `nil`.
@MainActor
func fetchUserInfo() async -> UserInfo? {
do {
let userInfo = try await authService.userInfo()
return userInfo
} catch {
errorMessage = error.localizedDescription
return nil
}
}
// MARK: - UI Helpers
/// Updates the `isLoading` property. This is used to show or hide
/// a loading spinner in your SwiftUI view while background work is in progress.
private func setLoading(_ value: Bool) {
isLoading = value
}
}
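Because the initializer accepts any AuthServicing, you can substitute a lightweight mock in SwiftUI previews or unit tests. The sketch below is illustrative and not part of the tutorial:
import AuthFoundation

// Hypothetical mock that satisfies AuthServicing without contacting Okta.
final class MockAuthService: AuthServicing {
    var accessToken: String? = "mock-access-token"
    private(set) var state: AuthService.State = .idle

    func signIn(username: String, password: String) async throws {
        state = .waitingForPush   // simulate a pending Okta Verify push challenge
    }

    func signOut() async {
        state = .idle
    }

    func refreshTokenIfNeeded() async throws { }

    func userInfo() async throws -> UserInfo? { nil }
}

// Usage in a preview or test:
// let viewModel = AuthViewModel(authService: MockAuthService())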
With the view model in place, the next step is to bind it to your SwiftUI view.
The AuthView will observe the AuthViewModel, updating automatically as the authentication state changes.
It will show the user’s ID token when authenticated and provide controls for signing in, signing out, and refreshing the token.
Open AuthView.swift, remove the existing template code, and insert the following implementation:
import SwiftUI
import AuthFoundation
/// A simple wrapper for `UserInfo` used to present user profile data in a full-screen modal.
/// Conforms to `Identifiable` so it can be used with `.fullScreenCover(item:)`.
struct UserInfoModel: Identifiable {
let id = UUID()
let user: UserInfo
}
/// The main SwiftUI view for managing the authentication experience.
/// This view observes the `AuthViewModel`, displays different UI states
/// based on the current authentication flow, and provides controls for
/// signing in, signing out, refreshing tokens, and viewing user or token information.
struct AuthView: View {
// MARK: - View Model
/// The view model that manages all authentication logic and state transitions.
/// It uses `@Observable` from Swift's Observation framework, so changes here
/// automatically trigger UI updates.
@State private var viewModel = AuthViewModel()
// MARK: - State and Presentation
/// Holds the currently fetched user information (if available).
/// When this value is set, the `UserInfoView` is displayed as a full-screen sheet.
@State private var userInfo: UserInfoModel?
/// Controls whether the Token Info screen is presented as a full-screen modal.
@State private var showTokenInfo = false
// MARK: - View Body
var body: some View {
VStack {
// Render the UI based on the current authentication state.
// Each case corresponds to a different phase of the DirectAuth flow.
switch viewModel.state {
case .idle, .failed:
loginForm
case .authenticating:
ProgressView("Signing in...")
case .waitingForPush:
// Waiting for Okta Verify push approval
WaitingForPushView {
Task { await viewModel.signOut() }
}
case .authorized:
successView
}
}
.padding()
}
}
// MARK: - Login Form View
private extension AuthView {
/// The initial sign-in form displayed when the user is not authenticated.
/// Captures username and password input and triggers the DirectAuth sign-in flow.
private var loginForm: some View {
VStack(spacing: 16) {
Text("Okta DirectAuth (Password + Okta Verify Push)")
.font(.headline)
// Email input field (bound to view model's username property)
TextField("Email", text: $viewModel.username)
.keyboardType(.emailAddress)
.textContentType(.username)
.textInputAutocapitalization(.never)
.autocorrectionDisabled()
// Secure password input field
SecureField("Password", text: $viewModel.password)
.textContentType(.password)
// Triggers authentication via DirectAuth and Push MFA
Button("Sign In") {
Task { await viewModel.signIn() }
}
.buttonStyle(.borderedProminent)
.disabled(viewModel.username.isEmpty || viewModel.password.isEmpty)
// Display error message if sign-in fails
if case .failed(let message) = viewModel.state {
Text(message)
.foregroundColor(.red)
.font(.footnote)
}
}
}
}
// MARK: - Authorized State View
private extension AuthView {
/// Displayed once the user has successfully signed in and completed MFA.
/// Shows the user's ID token and provides actions for token refresh, user info,
/// token details, and sign-out.
private var successView: some View {
VStack(spacing: 16) {
Text("Signed in 🎉")
.font(.title2)
.bold()
// Scrollable ID token display (for demo purposes)
ScrollView {
Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)")
.font(.footnote)
.textSelection(.enabled)
.padding()
.background(.thinMaterial)
.cornerRadius(8)
}
.frame(maxHeight: 220)
// Authenticated user actions
signoutButton
}
.padding()
}
}
// MARK: - Action Buttons
private extension AuthView {
/// Signs the user out, revoking tokens and returning to the login form.
var signoutButton: some View {
Button("Sign Out") {
Task { await viewModel.signOut() }
}
.font(.system(size: 14))
}
}
With this added, you will see an error stating that WaitingForPushView can't be found in scope. To fix this, we need to add that view next. Add a new empty Swift file in the Views folder, name it WaitingForPushView, and then add the following implementation inside:
import SwiftUI
struct WaitingForPushView: View {
let onCancel: () -> Void
var body: some View {
VStack(spacing: 16) {
ProgressView()
Text("Approve the Okta Verify push on your device.")
.multilineTextAlignment(.center)
Button("Cancel", action: onCancel)
}
.padding()
}
}
Now you can run the application on a simulator. It should first present the option to log in with a username and password. After selecting Sign In, it will move to the "Waiting for push notification" screen and remain there until you acknowledge the request in the Okta Verify app. Once you're logged in, you'll see the ID token and a sign-out button.
Read ID token info
Once your app authenticates a user with Okta DirectAuth, the resulting credentials are securely stored in the device's keychain through AuthFoundation.
These credentials include access, ID, and (optionally) refresh tokens – all essential for securely calling APIs or verifying user identity.
In this section, we’ll create a skeleton TokenInfoView that reads the current tokens from Credential.default and displays them in a developer-friendly format.
This view helps you visualize the credential in the store, inspect its scopes, and verify that the authentication flow works.
Create a new Swift file in the Views folder and name it TokenInfoView. Add the following code:
import SwiftUI
import AuthFoundation
/// Displays detailed information about the tokens stored in the current
/// `Credential.default` instance. This view is helpful for debugging and
/// validating your DirectAuth flow -- confirming that tokens are correctly
/// issued, stored, and refreshed.
///
/// ⚠️ **Important:** Avoid showing full token strings in production apps.
/// Tokens should be treated as sensitive secrets.
struct TokenInfoView: View {
/// Retrieves the current credential object managed by `AuthFoundation`.
/// If the user is signed in, this will contain their access, ID, and refresh tokens.
private var credential: Credential? { Credential.default }
/// Used to dismiss the current view when the close button is tapped.
@Environment(\.dismiss) var dismiss
var body: some View {
ScrollView {
VStack(alignment: .leading, spacing: 20) {
// MARK: - Close Button
// Dismisses the token info view when tapped.
Button {
dismiss()
} label: {
Image(systemName: "xmark.circle.fill")
.resizable()
.foregroundStyle(.black)
.frame(width: 40, height: 40)
.padding(.leading, 10)
}
// MARK: - Token Display
// Displays the token information as formatted monospaced text.
// If no credential is available, a "No token found" message is shown.
Text(credential?.toString() ?? "No token found")
.font(.system(.body, design: .monospaced))
.padding()
.frame(maxWidth: .infinity, alignment: .leading)
}
}
.background(Color(.systemGroupedBackground))
.navigationTitle("Token Info")
.navigationBarTitleDisplayMode(.inline)
}
}
// MARK: - Credential Display Helper
extension Credential {
/// Returns a formatted string representation of the stored token values.
/// Includes access, ID, and refresh tokens as well as their associated scopes.
///
/// - Returns: A multi-line string suitable for debugging and display in `TokenInfoView`.
func toString() -> String {
var result = ""
result.append("Token type: \(token.tokenType)")
result.append("\n\n")
result.append("Access Token: \(token.accessToken)")
result.append("\n\n")
result.append("Scopes: \(token.scope?.joined(separator: ",") ?? "No scopes found")")
result.append("\n\n")
if let idToken = token.idToken {
result.append("ID Token: \(idToken.rawValue)")
result.append("\n\n")
}
if let refreshToken = token.refreshToken {
result.append("Refresh Token: \(refreshToken)")
result.append("\n\n")
}
return result
}
}
To view this on screen, we need to instruct SwiftUI to present it. We added a State variable named showTokenInfo to the AuthView for this purpose. Next, we need a button that presents the TokenInfoView. Go to AuthView.swift, scroll down to the last private extension marked "Action Buttons", and add the following button:
/// Opens the full-screen view showing token info.
var tokenInfoButton: some View {
Button("Token Info") {
showTokenInfo = true
}
.disabled(viewModel.isLoading)
}
Now that this is in place, we need to tell SwiftUI that we want to present TokenInfoView whenever the showTokenInfo boolean is true. In the AuthView, find the body and add this code at the end below the .padding():
// Show Token Info full screen
.fullScreenCover(isPresented: $showTokenInfo) {
TokenInfoView()
}
If you build and run the app now, you still won't see the Token Info button when logged in. To make the button visible, we also need to reference the tokenInfoButton in the successView. In the AuthView file, scroll down to "Authorized State View" (successView) and reference the button just above the signoutButton like this:
private var successView: some View {
VStack(spacing: 16) {
Text("Signed in 🎉")
.font(.title2)
.bold()
// Scrollable ID token display (for demo purposes)
ScrollView {
Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)")
.font(.footnote)
.textSelection(.enabled)
.padding()
.background(.thinMaterial)
.cornerRadius(8)
}
.frame(maxHeight: 220)
// Authenticated user actions
tokenInfoButton // this is added
signoutButton
}
.padding()
}
Try building and running the app. You should now see the Token Info button after logging in. Tapping the button should open the Token Info View.
View the authenticated user's profile info
Once your app authenticates a user with Okta DirectAuth, it can use the stored credentials to securely request profile information from the UserInfo endpoint.
This endpoint returns standard OpenID Connect (OIDC) claims, including the user’s name, email address, and unique identifier (sub).
In this section, you’ll add a User Info button to your authenticated view and implement a corresponding UserInfoView that displays these profile details.
This is a quick and powerful way to confirm the validity of the access token and that your app can retrieve user data after sign-in.
Create a new empty Swift file in the Views folder and name it UserInfoView. Then add the following code:
import SwiftUI
import AuthFoundation
/// A view that displays the authenticated user's profile information
/// retrieved from Okta's **UserInfo** endpoint.
///
/// The `UserInfo` object is provided by **AuthFoundation** and contains
/// standard OpenID Connect (OIDC) claims such as `name`, `preferred_username`,
/// and `sub` (subject identifier). This view is shown after the user has
/// successfully authenticated, allowing you to confirm that your access token
/// can retrieve user data.
struct UserInfoView: View {
/// The user information returned by the Okta UserInfo endpoint.
let userInfo: UserInfo
/// Used to dismiss the view when the close button is tapped.
@Environment(\.dismiss) var dismiss
var body: some View {
ScrollView {
VStack(alignment: .leading, spacing: 20) {
// MARK: - Close Button
// Dismisses the full-screen user info view.
Button {
dismiss()
} label: {
Image(systemName: "xmark.circle.fill")
.resizable()
.foregroundStyle(.black)
.frame(width: 40, height: 40)
.padding(.leading, 10)
}
// MARK: - User Information Text
// Displays formatted user claims (name, username, subject, etc.)
Text(formattedData)
.font(.system(size: 14))
.frame(maxWidth: .infinity, alignment: .leading)
.padding()
}
}
.background(Color(.systemBackground))
.navigationTitle("User Info")
.navigationBarTitleDisplayMode(.inline)
}
// MARK: - Data Formatting
/// Builds a simple multi-line string of readable user information.
/// Extracts common OIDC claims and formats them for display.
private var formattedData: String {
var result = ""
// User's full name
result.append("Name: " + (userInfo.name ?? "No name set"))
result.append("\n\n")
// Preferred username (email or login identifier)
result.append("Username: " + (userInfo.preferredUsername ?? "No username set"))
result.append("\n\n")
// Subject identifier (unique Okta user ID)
result.append("User ID: " + (userInfo.subject ?? "No ID found"))
result.append("\n\n")
// Last updated timestamp (if available)
if let updatedAt = userInfo.updatedAt {
let dateFormatter = DateFormatter()
dateFormatter.dateStyle = .medium
dateFormatter.timeStyle = .short
let formattedDate = dateFormatter.string(for: updatedAt)
result.append("Updated at: " + (formattedDate ?? ""))
}
return result
}
}
Once again, to display this in our app, we need to add a new button that shows the new view. Open AuthView.swift, scroll down to the last private extension marked "Action Buttons", and add the following button just below the tokenInfoButton:
/// Loads user info and presents it full screen.
@MainActor
var userInfoButton: some View {
Button("User Info") {
Task {
if let user = await viewModel.fetchUserInfo() {
userInfo = UserInfoModel(user: user)
}
}
}
.font(.system(size: 14))
.disabled(viewModel.isLoading)
}
Next, we need to add the button to the successView, as we did with the tokenInfoButton, and then use the userInfo property we added to the AuthView at the start. Navigate to AuthView.swift, find the successView under the "Authorized State View" mark, and reference the userInfoButton after the tokenInfoButton like this:
private var successView: some View {
VStack(spacing: 16) {
Text("Signed in 🎉")
.font(.title2)
.bold()
// Scrollable ID token display (for demo purposes)
ScrollView {
Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)")
.font(.footnote)
.textSelection(.enabled)
.padding()
.background(.thinMaterial)
.cornerRadius(8)
}
.frame(maxHeight: 220)
// Authenticated user actions
tokenInfoButton
userInfoButton // this is added
signoutButton
}
.padding()
}
We need to tell SwiftUI to open a UserInfoView whenever the value of the userInfo property changes. To do so, open the AuthView, find the body variable, and add the following modifier after the Token Info .fullScreenCover:
// Show User Info full screen
.fullScreenCover(item: $userInfo) { info in
UserInfoView(userInfo: info.user)
}
The body of your AuthView should look like this now:
var body: some View {
VStack {
// Render the UI based on the current authentication state.
// Each case corresponds to a different phase of the DirectAuth flow.
switch viewModel.state {
case .idle, .failed:
loginForm
case .authenticating:
ProgressView("Signing in...")
case .waitingForPush:
// Waiting for Okta Verify push approval
WaitingForPushView {
Task { await viewModel.signOut() }
}
case .authorized:
successView
}
if viewModel.isLoading {
ProgressView()
}
}
.padding()
// Show Token Info full screen
.fullScreenCover(isPresented: $showTokenInfo) {
TokenInfoView()
}
// Show User Info full screen
.fullScreenCover(item: $userInfo) { info in
UserInfoView(userInfo: info.user)
}
}
Keeping tokens refreshed and maintaining user sessions
Access tokens have a limited lifetime to keep your app secure. When a token expires, the user shouldn't have to sign in again; instead, your app can request a new access token using the refresh token stored in the credential.
In this section, you’ll add support for token refresh, allowing users to stay authenticated without repeating the entire sign-in and MFA flow.
You'll add an action in the UI that calls the refreshTokenIfNeeded() method from your AuthService, which silently exchanges the refresh token for a new set of valid tokens. We're making this call manually, but you can also watch for upcoming expiry and refresh the token preemptively, as sketched below. While we don't show it here, you should use Refresh Token Rotation to ensure refresh tokens are also short-lived as a security measure.
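If you want to experiment with preemptive refresh, here is a minimal sketch. It is not part of the tutorial's code: the TokenRefreshScheduler type and the expiryProvider closure are hypothetical names introduced here, and the sketch assumes you track the access token's expiry date yourself (for example, by recording it when the token is issued). The refresh closure would simply call the view model's refreshToken(), which wraps AuthService.refreshTokenIfNeeded().
import Foundation
/// A minimal sketch of preemptive token refresh (hypothetical helper, not part of the tutorial).
final class TokenRefreshScheduler {
    private var task: Task<Void, Never>?
    /// Checks once a minute and triggers `refresh` when the token is within
    /// five minutes of expiring.
    /// - Parameters:
    ///   - expiryProvider: Returns the current access token's expiry date, if known (assumed to be tracked by your app).
    ///   - refresh: An async closure that performs the refresh, e.g. `{ await viewModel.refreshToken() }`.
    func start(expiryProvider: @escaping () -> Date?,
               refresh: @escaping () async -> Void) {
        task?.cancel()
        task = Task {
            while !Task.isCancelled {
                if let expiry = expiryProvider(),
                   expiry.timeIntervalSinceNow < 5 * 60 {
                    await refresh()
                }
                try? await Task.sleep(for: .seconds(60))
            }
        }
    }
    /// Stops the background check, for example when the user signs out.
    func stop() {
        task?.cancel()
        task = nil
    }
}
You might start the scheduler once the state becomes authorized and stop it on sign-out; treat this only as a starting point under the stated assumptions.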
First, we need to add the refreshTokenButton, which we’ll add to the AuthView. Open the AuthView, scroll down to the last private extension in the “Action Buttons” mark, and add the following button at the end of the extension:
/// Refresh Token if needed
var refreshTokenButton: some View {
Button("Refresh Token") {
Task { await viewModel.refreshToken() }
}
.font(.system(size: 14))
.disabled(viewModel.isLoading)
}
Next, we need to reference the button somewhere in our view. We will do that inside the successView, like we did with the other buttons. Find the successView and add the button. Your successView should look like this:
private var successView: some View {
VStack(spacing: 16) {
Text("Signed in 🎉")
.font(.title2)
.bold()
// Scrollable ID token display (for demo purposes)
ScrollView {
Text(Credential.default?.token.idToken?.rawValue ?? "(no id token)")
.font(.footnote)
.textSelection(.enabled)
.padding()
.background(.thinMaterial)
.cornerRadius(8)
}
.frame(maxHeight: 220)
// Authenticated user actions
tokenInfoButton
userInfoButton
refreshTokenButton // this is added
signoutButton
}
.padding()
}
Now, if you run the app and tap the Refresh Token button, you should see the token change in the token preview label.
One property we left with a default implementation returning nil is accessToken on the AuthService. Navigate to the AuthService, find the accessToken property, and replace the code so it looks like this:
var accessToken: String? {
switch state {
case .authorized(let token):
return token.accessToken
default:
return nil
}
}
Currently, if you restart the app, you'll be prompted to log in each time. This is not a good user experience; the user should remain logged in. We can add this behavior in the AuthService initializer. Open your AuthService class and replace the init function with the following:
init() {
// Prefer PropertyListConfiguration if Okta.plist exists; otherwise fall back
if let configuration = try? OAuth2Client.PropertyListConfiguration() {
self.flow = try? DirectAuthenticationFlow(client: OAuth2Client(configuration))
} else {
self.flow = try? DirectAuthenticationFlow()
}
// Added
if let token = Credential.default?.token {
state = .authorized(token)
}
}
Build your own secure native sign-in iOS app
You’ve now built a fully native authentication flow on iOS using Okta DirectAuth with push notification MFA – no browser redirects required. You can check your work against the GitHub repo for this project.
Your app securely signs users in, handles multi-factor verification through Okta Verify, retrieves user profile details, displays token information, and refreshes tokens to maintain an active session.
By combining AuthFoundation and OktaDirectAuth, you’ve implemented a modern, phishing-resistant authentication system that balances strong security with a seamless user experience – all directly within your SwiftUI app.
If you found this post interesting, you may want to check out these resources:
How to Build a Secure iOS App with MFA
Introducing the New Okta Mobile SDKs
A History of the Mobile SSO (Single Sign-On) Experience in iOS
Follow OktaDev on Twitter and subscribe to our YouTube channel to learn about secure authentication and other exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!
The post FedRAMP High Authorization: What It Is & What It Means for 1Kosmos appeared first on 1Kosmos.
The European Digital Identity Wallet is entering one of the most consequential phases of its rollout, and few people are closer to the work than Esther Makaay, VP of Digital Identity at Signicat. After spending the last three years at the center of the European Identity Wallet Consortium (EWC), Esther joined us for a deep-dive presentation on what the Large-Scale Pilots have actually delivered, and how ready Europe truly is for the 2026 deadline.
Across payments, travel, and organizational identity, Esther walked through the key results from the pilots: what worked, what didn’t, which technical and regulatory pieces are still missing, and the biggest barriers to adoption that Member States and the private sector now need to solve. She also examined interoperability tests, trust infrastructure gaps, signing flows, business models, governance frameworks, and the early findings from user adoption research.
Below are the core takeaways from her presentation, distilled into the critical insights anyone working in digital identity, payments, or IAM needs to understand as Europe moves into the final countdown toward the EUDI Wallet becoming a reality.
The global supply chain, the backbone of international trade, continues to face persistent challenges in transparency, security, and traceability. Traditional systems, often fragmented and reliant on trust between intermediaries, remain vulnerable to fraud, inefficiencies, and high administrative costs. The emergence of blockchain technology offers a disruptive solution, and the Ontology platform is at the forefront of this transformation with its modular approach centered on decentralized identity.
Challenges of Traceability in a Connected World
In a traditional supply chain, tracking a product from its origin to its final destination is a complex process. Data is stored in silos, making it difficult to establish a single source of truth. This lack of transparency leads to several key issues:
Vulnerability to Counterfeiting: Without tamper-proof verification of product origin and movement, counterfeit goods can easily enter the market.
Lack of Consumer Trust: Consumers increasingly demand to know where their products come from, especially regarding ethical sourcing and sustainability.
Operational Inefficiencies: Product recalls and dispute resolutions become lengthy and costly due to the inability to quickly identify the point of failure.
Blockchain technology, with its distributed and immutable ledger, provides a natural solution. It enables every transaction and step in a product's lifecycle to be recorded securely and transparently.
Ontology's Modular Toolkit: A Targeted Approach
Instead of offering a monolithic solution, Ontology has developed a set of interconnected tools forming a true Modular Toolkit for Supply Chain Management. This toolkit leverages blockchain's power to specifically address the crucial needs of identity and verification, which are essential for traceability.
ONT ID (Decentralized Identity)
Description: A self-sovereign identity (SSI) system allowing users, businesses, and even IoT devices to own and control their digital identity.
Role in the Supply Chain: Authentication to ensure every actor (supplier, carrier, product) is a verified and unique entity.
Verifiable Credentials (VCs)
Description: Cryptographically secured digital attestations that prove facts such as quality certification, harvest date, or regulatory compliance.
Role in the Supply Chain: Traceability & Proof to certify the product’s origin, condition, and key milestones with tamper-proof verification.
Description: A high-performance public blockchain featuring sub-second transactions and minimal costs, crucial for handling large volumes of logistics data.
Role in the Supply Chain: Security & Performance, provided by the immutable, distributed ledger for securely recording traceability data.
Integrating these components creates an ecosystem where trust is no longer presumed but cryptographically verified.
Toward Transparent and Secure Traceability
Applying this modular toolkit transforms traceability into a transparent and secure process:
Anti-Counterfeiting Security
Ontology's Modular Toolkit represents a major step forward in the digital transformation of the supply chain. By placing decentralized identity and verifiable credentials at its core, Ontology does more than enhance traceability — it establishes a new layer of digital trust for global commerce. This approach, combining high performance, low cost, and cryptographic security, is the key to building truly transparent, resilient, and Web3-ready supply chains.
Revolutionizing the Supply Chain with Ontology’s Modular Toolkit was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
When most people hear “digital ID wallet”, they picture an app that looks like an Apple Wallet, filled with digital cards for IDs, licenses, and credentials.
But that mental image is limiting.
Because an identity wallet isn’t about digital cards, or even about apps.
It’s simply a secure way to store and share verified data, data that’s cryptographically signed and privacy-protected.
In reality, a wallet can take many forms depending on the use case.
🎥 Watch on YouTube 🎥
🎧 Listen On Spotify 🎧
🎧 Listen On Apple Podcasts 🎧
Can you cryptographically sign a lie? Yes, and that single fact exposes a major flaw in how digital trust works today.
In this episode of The SSI Orbit Podcast, host Mathieu Glaude speaks with Scott Perry, CEO of the Digital Governance Institute, about why cryptography alone cannot solve the growing crisis of misinformation, AI-generated content, and digital manipulation.
The conversation centers on C2PA, a global standard that embeds a “nutrition label” into digital content at the moment it is created. This provenance data reveals how a digital object was generated, whether it has been altered, and which tools were used, giving people the context they need to judge trustworthiness.
However, as Scott explains, technical tools are only half of the solution. True digital trust requires governance, including transparent conformance programs, certificate authorities, and accountability frameworks that ensure consistency, security, and fairness across all participating products and industries.
The episode also explores the next layers of the trust stack:
• Creator Assertions, which allow individuals to add identity-backed claims to their content
• JPEG Trust, which adds rights and ownership information for legal clarity and compensation
With fraud, deepfakes, and impersonation rising across journalism, insurance, entertainment, and politics, these combined layers of provenance, identity, rights, and governance represent the new trust infrastructure the internet urgently needs.
Key Insights
Cryptography is not enough to guarantee truth. Cryptographic signatures can prove integrity and origin, but they cannot determine whether the content itself is accurate or honest.
AI has amplified the urgency for content provenance. Traditional methods like CAPTCHA are no longer reliable because AI can pass them. This accelerates the need for cryptographic provenance systems.
C2PA acts as a global provenance standard for digital media. It embeds a signed manifest into images, videos, audio, and other digital objects at the moment of creation, functioning like a “nutrition label” for content.
Generator products must meet strict governance and conformance requirements. Phones, cameras, and software tools must obtain approved signing certificates through the C2PA conformance program.
Certificate authorities play a central role. Public CAs and enterprise-grade CAs issue the X.509 certificates used for content credential signing. They must meet the requirements outlined in the C2PA certificate policy.
Creator Assertions allow individuals and organizations to add identity-backed claims. This layer, governed by the Creator Assertions Working Group under DIF, enables people to add context and metadata to content.
Rights and ownership require an additional governance layer. JPEG Trust extends the system to define legal rights, IP claims, and ownership for use in court or licensing contexts.
Industry self-regulation is essential. Sectors like journalism, entertainment, insurance, and brand management are expected to police their own registries and authorized signers.
Fraud prevention is a major driver. AI-manipulated images are already causing real financial losses in industries like insurance.
Digital identity credentials will eventually enable end users to sign their own assertions. Verifiable credentials will allow creators to link identity claims to content in a trustworthy way.
Governance must be transparent and fair. Oversight, checks and balances, and multi-party decision making are essential to avoid exclusion or bias.
Strategies
Use cryptography combined with governance, not cryptography alone. Provenance, conformance programs, and accountability frameworks must work together.
Adopt C2PA provenance for any digital content creation flow. Integrate C2PA manifests at the point of generation for images, video, audio, and documents.
Obtain signing certificates only from trusted certificate authorities. Use public CAs or enterprise-grade CAs approved by the C2PA program.
Implement secure software practices and continuous attestation. Higher assurance levels require proof of updated patches, secure architecture, and verified implementation.
Document generator product architecture using the C2PA template. Applicants must clearly describe all components involved in creating and signing content.
Leverage creator assertions for identity and contextual claims. Individuals or organizations can add structured, signed metadata throughout a content asset’s lifecycle.
Use provenance and rights frameworks to combat fraud. Industries like insurance and media should implement provenance tools to detect manipulation and support claims assessment.
Rely on industry-specific trust registries. Fields such as journalism already use trusted lists to validate authorized contributors.
Build governance frameworks that emphasize transparency and fairness. Prevent exclusion by maintaining multi party oversight and clearly documented decision making.
Additional resources:
Episode Transcript
Digital Governance Institute
C2PA (Coalition for Content Provenance and Authenticity)
Creator Assertions Working Group (hosted by the Decentralized Identity Foundation)
JPEG Trust
NIST Post Quantum Cryptography Program
X.509 Certificate Standard
Trust Over IP Foundation and the Governance Meta Model
About Guest
Scott Perry is a longtime expert in digital trust and governance who has spent his career helping organizations make technology more reliable and accountable. He leads the Digital Governance Institute, where he advises on cyber assurance, conformance programs, and how to build trust into digital systems.
Scott plays a key role in the C2PA as the Conformance Program Administrator, making sure content-generating products and certificate authorities meet high standards for provenance and authenticity. He also co-leads the Creator Assertions Working Group and contributes to governance work at the Trust Over IP Foundation, focusing on how identity and metadata shape trust in digital content.
With a background in IT audit and deep experience with cryptography and certification authorities, Scott brings a practical, real-world approach to governance, assurance, and digital identity. LinkedIn
The post You Can Cryptographically Sign a Lie: Why Digital Trust Needs Governance (with Scott Perry) appeared first on Northern Block | Self Sovereign Identity Solution Provider.
Ontology will begin the MainNet v3.0.0 upgrade and Consensus Nodes upgrade on December 1, 2025. This release improves network performance and implements the approved ONG tokenomics update.
Ontology blockchain users and stakers will not be affected.
Upgrade Overview
This upgrade will be completed in two phases with two public releases:
November 27, 2025: v2.7.0
December 1, 2025: v3.0.0
All nodes and dApp nodes on Ontology's MainNet will upgrade in step.
Timeline and Requirements
Phase 1: v2.7.0 (Nov 27, 2025)
Who must upgrade: Consensus nodes
Deadline: Before Dec 1, 2025
Included:
Consensus mechanism optimization
Gas limit optimization
Phase 2: v3.0.0 (Dec 1, 2025)
Who must upgrade: Triones nodes
Included:
ONG tokenomics update
Consensus mechanism optimization
Please complete the upgrade step by step to avoid any synchronization pause.
About the ONG Tokenomics Update
The tokenomics changes, approved by consensus nodes, are designed to enhance long-term sustainability and align incentives:
Cap total ONG supply at 800 million
Permanently lock ONT + ONG equal to 100 million ONG in value
Extend the release schedule from 18 to 19 years
Direct 80% of released ONG to ONT staking incentives
Read the full governance summary here.
Ontology MainNet Upgrade Announcement was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
We are pleased to share that DAXA-member exchanges Coinone and GOPAX have confirmed their support for the House Party Protocol (HPP) migration. We’re deeply grateful for their support — and this is only the beginning.
For your safety, rely only on the official notices published by these exchanges and ignore any messages from third parties. Use only the official bridge, migration portal, and the exchanges we announce to keep your assets secure.
What You Should Do
If your assets are on the exchange:
Make sure your assets remain on Coinone or GOPAX before the deadlines noted in each exchange's notice. No further action is required on your side.
If you self-custody:
Use the official HPP migration portal once it opens, and always double-check that you are on the correct URL. We will never ask you to transfer tokens to a specific address through DMs or private messages.
Safety first
1) Avoid deposits during suspension: Do not transfer assets to legacy (old) deposit addresses during suspension. Wait for the official "HPP deposit/withdraw open" notice.
2) Double-check the deposit addresses and the official contract details on each exchange’s asset page before depositing or trading. Sending AERGO tokens to HPP addresses will result in permanent loss.
What to Know Going Forward
Once the migration and listings are underway, we'll share the next steps for the HPP ecosystem:
New Staking Portal
More announcements are coming soon — stay tuned!
HPP Migration Backed by Coinone and GOPAX, With Further Updates Ahead was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.
Digital identity is shifting from individual credentials to full ecosystems. As states modernize, the question is no longer which digital ID to issue, it’s how to build the underlying framework that every credential can rely on. A flexible, standards-aligned model provides states with room to evolve, protects resident choice, and maintains consistent privacy controls across various use cases.
This blog post examines why a framework, rather than a single credential, provides the strongest foundation for a long-term, interoperable digital identity.
The Framework Approach: Built for Evolution
We recommend that states define their digital identity initiative as a framework of technical and statutory controls, not as a single credential. A statewide identity framework would apply to many possible credentials, whether a digital identification card issued by the DMV or a digital veteran's ID card issued by the UT VMA, that represent the State's highest-assurance digital identity and meet all required protections outlined in the framework, such as unlinkability, minimal disclosure, and individual control. At the same time, the framework can be referenced piecewise to guide the design of other state-issued credentials—such as professional licenses, permits, benefit eligibility records, or guardianship credentials—by allowing them to adopt relevant controls where appropriate. This framework approach ensures consistency across credential types, maximizes flexibility, enables interoperability with existing credential ecosystems, and preserves resident choice.
Under a statewide digital identity framework, these controls must apply in full to credentials appropriate for foundational identity use cases, ensuring strong privacy, unlinkability, minimal disclosure, and individual control. The same framework can also be applied piecewise to other credential types, allowing states to deploy the right safeguards for the right context. For example, a certain credential may require revocation mechanisms but not the full set of high-assurance controls, while all credential types might consistently adopt "no phone-home" verification as a baseline requirement.
Why Flexibility Matters
This approach ensures that states can deploy the right kind of framework-compliant credential for the right use case. The framework would allow states to issue digital-only credentials that would not be practical in the physical world. It also allows the state government to move iteratively, prototyping, piloting, and refining credential types individually without requiring the entire ecosystem to be redesigned each time. By adopting a framework model, states gain the flexibility to meet federal requirements where needed, adapt to unsolved challenges like guardianship, and maintain interoperability with existing verifier ecosystems both inside and outside the state.
Contrasting Models: The Trade-offs
A composite framework model lowers barriers to entry by allowing residents to opt in to the credentials most relevant to their lives, from driver licenses and professional licenses to permits and benefit eligibility proofs.
When we compare approaches, the distinctions become clear:
Framework Model (Recommended):
Budget: Incremental and use-case driven. New framework-compliant credential types can be introduced iteratively without disrupting existing ones, reducing long-term risk. Older credential types may be gradually deprecated as newer technologies become available, such as to add post-quantum cryptography. A subset of framework controls can be applied to additional non-framework credentials where appropriate.
User Choice: Holders can select among framework-compliant credentials, all of which meet baseline security and privacy requirements. For example, one resident might opt for a state-endorsed Veteran ID, while another prefers a digital driver's license. States should guide residents on available options and their tradeoffs (e.g. usability, privacy, security) so individuals can make informed choices.
Scalability: Horizontally scalable by use case. New compliant credential types can be added without disrupting existing ones, as long as they meet framework controls. Different framework credential implementations might also compete on speed and efficiency.
Flexibility/Adaptability: Highly adaptable by adding new credential types while still enforcing common controls. Supports mirrored credentials as well as novel, unsolved use cases (e.g. guardianship) via iterative pilots and evolving standards.
Monolithic Credential Model:
Budget: Likely lower short-term cost and complexity for a single credential. Long-term changes would require costly, system-wide redesigns.
User Choice: Holders either accept the canonical credential or are excluded from framework benefits.
Scalability: Scales only within the limits of the chosen protocol; new use cases may require major redesign.
Adoption: Riskier as an all-or-nothing approach if residents resist that credential (e.g., due to privacy concerns or technical barriers) or if it fails to meet diverse needs.
Building the Market
The ultimate strength of a statewide digital identity framework approach is that the technical controls, which must all be implemented for foundational credentials, can be used piecewise as appropriate for other credential types, for use cases like permitting and licensing, or even in the private sector. This also creates the opportunity for firms to specialize in managing aspects of the framework, which lowers the barriers to market entry and creates opportunities, especially for smaller and newer vendors, to work on particular problem sets. It also allows deep specialization to produce best-in-class products.
We believe the framework approach would produce a larger, more diverse market, ultimately allowing states, agencies, and the private sector to have many choices from vendors, including those that are local, who can specialize in the technologies. This ability to create the best possible products extends even across non-framework credentials.
By having a composable set of framework controls that can be repurposed for other use cases as appropriate, we can accelerate privacy, security, and usability within government agencies and the private sector alike. This increases optionality and quality, and lowers cost, for framework credentials and for all other credentials built on an aligned framework.
Standards-Based, Vendor-NeutralWe recommend a multi-format issuance strategy that aligns with national and international standards. By relying on open, widely adopted standards, states can avoid vendor lock-in, accelerate adoption by verifiers, and maximize cross-jurisdictional acceptance.
The role of a statewide digital identity framework should be to clearly outline and encode policy goals (e.g., security, privacy, unlinkability, minimal disclosure, and use of open standards), describe technical controls ("framework controls") that have been proposed, reviewed, and confirmed to achieve those policy goals without "picking winners" in the market of technology, and leave flexibility in how they are achieved. Each foundational identity use case can then adopt the technical standard best suited to its context, provided it complies with the framework's required controls.
Governance as Public Infrastructure
Finally, we highlight the need for sustainable governance. States should treat their digital identity frameworks as public infrastructure, funded as a shared good rather than a private service. By certifying multiple wallets and issuers against a published state digital identity profile, a state can foster a healthy ecosystem of vendors, spur innovation, and ensure residents retain choice while benefiting from consistent protections.
Looking Ahead
A framework approach gives states the structure they need to modernize responsibly. It enables agencies to introduce new credential types without redesigning the entire system, fosters a healthy market of vendors, and ensures that privacy requirements are consistently applied across every use case. By grounding digital identity in open standards and clear statutory controls, states can support today's needs while preparing for what comes next.
Treating digital identity as public infrastructure ensures it remains flexible, durable, and rights-preserving. Residents keep meaningful control, agencies gain reliable tools, and innovation can happen without compromising trust.
If your state is evaluating how to structure its digital identity strategy, SpruceID can help. We support governments building open, privacy-preserving identity frameworks, from mobile driver’s licenses to multi-credential ecosystems.
Contact Us
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID's technology to issue, verify, and manage digital credentials based on open standards.
How many dormant accounts are quietly eroding your cyber defenses? What’s your true mean time to remediate (MTTR) a privilege creep?
Organizations juggle sprawling cloud apps and siloed directories. Risk-averse CISOs track outcome-driven indicators: cutting orphaned identities, slashing MFA exceptions, and speeding up risk fixes. Together, these indicators reveal the true size of your attack surface, where misconfigurations, dormant accounts, and inconsistent access policies quietly expand risk.
According to the Gartner® report, Reduce Your IAM Attack Surface Using Visibility, Observability, and Remediation (Rebecca Archambault, 2025), IAM leaders can strengthen security across centralized and decentralized environments by focusing on three key pillars: visibility, observability, and remediation. Today's IAM ecosystems are often fragmented across numerous directories, identity providers, and access systems. Business units may configure tools independently, resulting in inconsistent policies and poor oversight.
Common symptoms include:
Disabled multifactor authentication (MFA)
Orphaned or dormant accounts
Exposed machine credentials
Over-privileged service accounts
These gaps are rarely visible in real time, leaving organizations vulnerable to misuse and lateral movement. As Gartner notes, the market for IAM posture, hygiene, and identity threat detection tools is crowded, yet many offerings address only part of the problem — making it difficult for security leaders to measure progress or understand the full scope of their attack surface.
The Solution: A Continuous Loop of Unify → Observe → Act
At Radiant Logic, we believe reducing IAM risk starts with a closed-loop process: Unify → Observe → Act. This model provides the visibility and feedback necessary to continuously measure and improve your identity security posture.
1. Unify: Break Down Silos and Establish a Trusted Identity Fabric
The first step is to unify human, non-human and agentic AI identity data across all sources — on-premises directories, cloud platforms, HR systems, and custom applications — into a single, consistent view. RadiantOne's Identity Data Management layer ingests, correlates, and normalizes identity attributes to create a complete, authoritative profile for every user, device, and service.
This unified data foundation eliminates blind spots and provides accurate, consistent information that downstream tools need to enforce policy and evaluate risk. Without unification, observability is fragmented — and remediation becomes guesswork.
2. Observe: Gain Real-Time Insight into Identity Hygiene, Posture, and Risk
Once data is unified, organizations can observe how identities interact across systems and where exposures lie. Dashboards and analytics help teams visualize dormant accounts, privilege creep, and inactive entitlements. Outcome-driven metrics (ODMs) replace simple control counts with measurable results — such as the percentage of risky permissions removed or the reduction in mean time to remediate.
Radiant Logic’s observability capabilities make it possible to quantify security progress and track attack-surface reduction over time. These insights allow IAM and security teams to shift from reactive audits to proactive defense, aligning security metrics with business outcomes.
3. Act: Remediate Identity Risks and Automate with Confidence
Visibility is only valuable if it leads to action. The final step in the loop is to act — automating remediation workflows and runtime responses that address risks as soon as they are discovered.
Using RadiantOne’s integration and orchestration capabilities, organizations can trigger alerts, open tickets, or execute corrective actions automatically. For example, if a risky entitlement is detected or a service account behaves abnormally, RadiantOne can inform the appropriate system to disable access or enforce MFA. Integration with runtime protocols such as the Continuous Access Evaluation Profile (CAEP) also enables dynamic policy enforcement — terminating or quarantining suspect sessions until investigation is complete.
Measuring What Matters
We believe Gartner emphasizes the importance of outcome-driven metrics to evaluate IAM effectiveness. Rather than focusing on the number of controls deployed, organizations should measure tangible improvements such as:
Fewer orphaned or dormant accounts
Reduced over-privileged access
Shorter remediation times for risky identities
Lower rates of MFA exceptions
Documented decreases in IAM-related audit findings
From Visibility to Value
By tracking these outcomes over time, IAM teams can quantify their progress in shrinking the attack surface and demonstrate real value to business leadership. Radiant Logic enables these measurements through centralized visibility and continuous feedback loops.
As Gartner notes, Identity Visibility and Intelligence Platforms (IVIPs) represent a major innovation in the IAM market — providing rapid integration, analytics, and a single view of identity data, activity, and posture. We believe Radiant Logic’s inclusion in Hype Cycle for Digital Identity, 2025 underscores our position in this emerging category.
By implementing the Unify → Observe → Act loop, organizations can:
Eliminate identity data silos
Reveal hidden access risks across environments
Automate policy enforcement and remediation
Quantify security improvements with outcome-driven metrics
This continuous cycle transforms identity security from a static process into a dynamic system of improvement — one that strengthens Zero Trust architectures and aligns security outcomes with measurable business value.
Start Closing IAM Security Gaps with Radiant Logic
Reducing your IAM attack surface begins with unified visibility. Radiant Logic helps organizations integrate and understand their identity data, observe it in context, and act with precision. The result is not just stronger security — it's a measurable path to risk reduction and operational resilience.
Disclosure
Gartner, Reduce Your IAM Attack Surface Using Visibility, Observability, and Remediation, Rebecca Archambault, 8 October 2025
Gartner, Hype Cycle for Digital Identity, 2025, Nayara Sangiorgio, Nathan Harris, 14 July 2025
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Hype Cycle is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
The post Shrinking the IAM Attack Surface: How Unify, Observe, Act Transforms Identity Security appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
By Helen Garneau
Phocuswright 2025 is around the corner, and Indicio is ready for a dynamic conversation in travel technology as our CEO, Heather Dahl, joins experts across the industry at the Technology Industry Roundtable to discuss the future of travel innovation.
From digital identity to AI and data privacy, technology is transforming how travelers move, interact with, and experience the world — and Indicio is spearheading this change through its market-leading position in decentralized identity solutions and global partnerships.
We’re defining how secure, seamless, privacy-preserving digital travel works with ground-breaking deployments that give our customer first-mover advantages.
Here are the conversations we’re most excited for at Phocuswright.
Digital Travel Credentials (DTCs)
Digital Travel Credentials have moved from theory to deployment, and Indicio customers are using them to move passengers through borders and checkpoints faster and to reduce manual verification costs and errors while improving security.
By combining document validation, liveness checking, and face mapping in a Verifiable Credential aligned with ICAO DTC specifications, Digital Travel Credentials represent the strongest form of digital identity available. Taking just minutes to create and reusable until they expire, DTCs enable pre-authorized travel, seamless authentication across the travel ribbon, and border crossing that's up to four times faster than any other solution.
Added to a travel app, and with simple mobile verifier software, they also provide a way to fully integrate a travel or tourist experience with all the services a traveler or tourist might need from hotel check-in to event ticketing and provable authorization for payments (negating the risk of chargeback fraud).
And by combining authenticated biometrics with vital guest or passenger information, many staff-intensive processes can be automated with the assurance that the credential presented is bound to the person presenting it.
The benefits are a secure, seamless, privacy-preserving experience for travelers all manageable from their mobile device without fuss or friction or waiting in line.
AI and Verifiable Credentials for travel customer service
AI is transforming customer service across the travel industry, but without verifiable identity, it's just a guessing game. When AI systems and people can't confirm who's on the other end, it's a recipe for disaster — with the added stakes that AI now has access to all their data.
Indicio Proven is both an AI-resistant way to authenticate identity and data and a way to secure AI agents and systems from identity fraud.
Here’s how it works: we give AI agents Verifiable Credentials so that they can prove they belong to your organization. And we give your customers Verifiable Credentials so they can prove to the AI agents that they are your customers. They instantly verify each other; then the agent asks permission to access the customer’s data.
Remember, the nature of a Verifiable Credential is that you can’t fake where it came from and you can’t alter the information inside it without the credential malfunctioning.
With this trust, and with the addition of delegated authority (the AI agent asks permission to share your data with other authenticated AI agents), airlines, hotels, booking platforms have a complete end-to-end trust network for AI agents to perform their tasks with maximal efficiency and effectiveness and minimal customer input. The kicker: it also makes compliance with GDPR so much simpler. At every step, the AI agent has auditable consent to share the customer’s data.
Age assurance technology in travel
From booking family vacations to verifying eligibility for youth or senior discounts, age assurance plays a bigger role in travel than most people likely realize.
Indicio’s Verifiable Credential technology makes age-verification instant, assured, and auditable.
A person can confirm their age without revealing unnecessary personal details — or even their actual age (some Indicio credentials allow zero-knowledge proofs).
It’s a simple, privacy-preserving solution that helps travel companies meet GDPR requirements for data and purpose minimization — and a powerful way to build trust with customers.
Government DTC-2 type deployments
The next step in Digital Travel Credentials is direct government issuance along with a person's physical passport.
Having been the first company to successfully deploy a Digital Travel Credential for international travel derived by a person from their passport (aligning with the International Civil Aviation Organization’s DTC-1 specification), Indicio is now at the forefront of deploying credentials aligned with ICAO DTC-2 specifications for border management.
At Phocuswright, we’re eager to talk to policymakers, regulators, and innovators about our DTC-2 type deployments, and to advance this next phase of digital travel infrastructure.
Building for the European Union Digital Identity (EUDI)
Indicio's partners and customers are using our technology to issue and verify digital credentials that align with the European Union's Digital Identity and wallet requirements.
Why do they choose Indicio? Because we make it easy to connect the world to Europe and Europe to the world. Indicio was the first to combine IATA’s OneID with a DTC-1 type credential for international travel and border crossing, using one digital wallet in one seamless journey.
Not every country is going to use a single credential format. Canada and India have chosen a different format to the EU. Many people will also start using mobile driver’s licenses (mDL) for domestic travel.
Interoperability is the key to preventing a Babel of digital wallets and credentials. And that is where Indicio leads. We’re making the world easily connectable, so travelers can seamlessly authenticate their identity no matter where they are.
We make this easy for customers by providing all the most popular credential formats, a white-label digital wallet compatible with EUDI and global standards and specifications, and a mobile SDK that allows you to add Verifiable Credentials to your Android and iOS apps.
We also have developed — in tandem with the Decentralized Identity Foundation — a flexible, decentralized way to ensure credential trust (aka governance) across jurisdictions.
It’s this combination of features — authenticated biometrics, document validation, identity for AI agents — and components that enable our customers to rapidly implement this technology and rapidly see results.
If you’re heading to Phocuswright this year, don’t miss the chance to meet with Heather Dahl and talk about what’s next for digital identity in travel — because with Indicio, the future is already happening.
Book time with Heather for a one-on-one meeting at Phocuswright 2025.
The post 5 things Indicio is excited about at Phocuswright 2025 appeared first on Indicio.
Summary
In this episode of the Analyst Brief podcast, Simon Moffatt and David Mahdi discuss the latest trends in identity security, including recent funding rounds and acquisitions. They explore the growing importance of identity governance, the intersection of security and identity management, and the role of trust in the age of AI. The conversation also touches on the significance of ITDR and the implications of recent acquisitions for the market. The hosts reflect on the future of identity security and the need for continuous innovation in this evolving landscape.
Chapters
00:00 Introduction to the Analyst Brief Podcast
03:03 Autumn Conference Season Insights
06:05 Funding and Acquisitions in Identity Governance
08:59 The Growing Complexity of Identity Governance
12:01 The Intersection of Security and Identity
14:49 The Future of Cybersecurity and Identity Integration
17:42 Understanding the Broader Ecosystem of Cybersecurity
21:04 The Importance of Protecting All Identities
23:53 First Principles in Cybersecurity Strategy
29:10 Navigating Resilience and Availability in Security
29:56 Funding Trends in Identity Security
31:45 The Impact of Acquisitions on Identity Security
32:13 Twilio's Acquisition of Stitch: A New Era in Identity
36:33 Building Trust in the Age of AI
39:21 Zero Trust: Establishing and Maintaining Trust
44:14 Ping Identity's Acquisition of Keyless: Innovations in Biometric Authentication
55:11 JumpCloud Acquires Breeze Security: Enhancing ITDR Solutions
59:54 Imprivata's Strategic Moves in Identity Security
Keywords
identity security, funding, acquisitions, AI, trust, governance, ITDR, cybersecurity, authentication, market trends
Safle has fully integrated the Concordium blockchain into its wallet, introducing native support for Protocol-Level Tokens (PLTs), CCD transfers, staking and delegation — all within a single, non-custodial interface.
This update not only expands Safle’s multi-chain capabilities but also reinforces a shared vision with Concordium: to make secure, compliant, and scalable blockchain access simple for everyone — from everyday users to enterprise developers.
Why PLTs Matter
The integration of Protocol-Level Tokens (PLTs) marks a major step forward in Safle’s mission to support next-generation blockchain standards. Unlike tokens issued through smart contracts, PLTs are built directly into Concordium’s protocol layer, giving them stronger foundations for performance, security, and compliance.
By incorporating PLTs, Safle enables:
Native efficiency: Token operations like minting, transferring, or burning are handled by the protocol itself, reducing friction and network costs. Enhanced security: No external contract code means fewer vulnerabilities and lower attack surfaces. Compliance by design: Each token can integrate Concordium’s identity verification framework, aligning with the growing need for compliant on-chain assets.Through PLT integration, Safle becomes a key access point for Concordium’s PayFi ecosystem — enabling users to interact with programmable, regulation-ready assets without compromising on control or usability.
Key Highlights of the Integration
Beyond wallet creation and identity support from the earlier release, users can now manage PLTs, transfer CCD, and stake or delegate — all within the Safle Wallet.
1. Access to Protocol-Level Tokens (PLTs)

PLT management: Store, send, and receive PLTs seamlessly through Concordium’s protocol.
Lower risk and cost: Direct protocol execution eliminates complex contract dependencies.
Future-ready assets: Connect to Concordium’s network of regulated tokens and PayFi utilities.

2. Staking and Delegation

Earn directly within Safle: Stake or delegate CCD without leaving the app.
Flexible options: Choose between Passive Delegation or Validator Pools based on your goals.
Transparent control: Manage, update, or stop delegations anytime — with full asset ownership preserved.

3. Seamless CCD and PLT Transfers

Instant transactions: Transfer CCD and PLTs in seconds using wallet addresses or QR codes.
Clear visibility: Track real-time balances and transaction histories in one place.
Multi-account access: Effortlessly manage multiple Concordium wallets within Safle.

Looking Ahead
The integration of Concordium within Safle represents more than just added functionality — it brings the complete Concordium experience into a single, secure environment.
This milestone strengthens the foundation for ongoing collaboration between the two ecosystems, paving the way for deeper protocol support, enhanced developer tools, and wider adoption of Concordium’s trust-driven technology.
Explore Concordium on Safle today — your gateway to secure, compliant, and effortless blockchain access.
Download the Safle Wallet
Download for iOS | Download for Android
Join Our Community
Stay connected for updates and announcements:
Concordium: x.com/ConcordiumNet Safle: x.com/GetSafle
When you choose Okta as your IAM provider, one of the features you get access to is customizing your Okta-hosted Sign-In Widget (SIW), which is our recommended method for the highest levels of identity security. It’s a customizable JavaScript component that provides a ready-made login interface you can use immediately as part of your web application.
The Okta Identity Engine (OIE) utilizes authentication policies to drive authentication challenges, and the SIW supports various authentication factors, ranging from basic username and password login to more advanced scenarios, such as multi-factor authentication, biometrics, passkeys, social login, account registration, account recovery, and more. Under the hood, it interacts with Okta’s APIs, so you don’t have to build or manage complex auth logic yourself. It’s all handled for you!
One of the perks of using the Okta SIW, especially with the 3rd Generation Standard (Gen3), is that customization is a matter of configuration, thanks to design tokens, so you don’t have to write CSS to style the widget elements.
Style the Okta Sign-In Widget to match your brand
In this tutorial, we will customize the Sign-In Widget for a fictional to-do app. We’ll make the following changes:
Replace the font selections
Define border, error, and focus colors
Remove elements from the SIW, such as the horizontal rule, and add custom elements
Shift the control to the start of the page and add a background panel

Without any changes, when you try to sign in to your Okta account, you see something like this:
At the end of the tutorial, your login screen will look something like this 🎉
We’ll use the SIW gen3 along with new recommendations to customize form elements and style using design tokens.
Table of Contents
Style the Okta Sign-In Widget to match your brand
Customize your Okta-hosted sign-in page
Understanding the Okta-hosted Sign-In Widget default code
Customize the UI elements within the Okta Sign-In Widget
Organize your Sign-In Widget customizations with CSS Custom properties
Extending the SIW theme with a custom color palette
Add custom HTML elements to the Sign-In Widget
Overriding Okta Sign-In Widget element styles
Change the layout of the Okta-hosted Sign-In page
Customize your Gen3 Okta-hosted Sign-In Widget

Prerequisites

To follow this tutorial, you need:

An Okta account with the Identity Engine, such as the Integrator Free account. The SIW version in the org we’re using is 7.36.
Your own domain name
A basic understanding of HTML, CSS, and JavaScript
A brand design in mind. Feel free to tap into your creativity!

Let’s get started!
Customize your Okta-hosted sign-in page
Before we begin, you must configure your Okta org to use your custom domain. Custom domains enable code customizations, allowing us to style more than just the default logo, background, favicon, and two colors. Sign in as an admin and open the Okta Admin Console, navigate to Customizations > Brands and select Create Brand +.
Follow the Customize domain and email developer docs to set up your custom domain on the new brand.
You can also follow this post if you prefer.
A Secure and Themed Sign-in Page
Redirecting to the Okta-hosted sign-in page is the most secure way to authenticate users in your application, but the default configuration yields a very neutral sign-in page. This post walks you through customization options and setting up a custom domain so the personality of your site shines through the user's experience.
Alisa Duncan

Once you have a working brand with a custom domain, select your brand to configure it. First, navigate to Settings and select Use third generation to enable the SIW Gen3. Save your selection.
⚠️ Note
The code in this post relies on using SIW Gen3. It will not work on SIW Gen2.
Navigate to Theme. You’ll see a default brand page that looks something like this:
Let’s start making it more aligned with the theme we have in mind. Change the primary and secondary colors, then replace the logo and favicon images with your preferred options.
To change either color, click on the text field and enter the hex code for each. We’re going for a bold and colorful approach, so we’ll use #ea3eda as the primary color and #ffa738 as the secondary color, and upload the logo and favicon images for the brand. Select Save.
Take a look at your sign-in page now by navigating to the sign-in URL for the brand. With your configuration, the sign-in widget looks more interesting than the default view, but we can make things even more exciting.
Let’s dive into the main task, customizing the sign-in page. On the Theme tab:
Select Sign-in Page in the dropdown menu
Select the Customize button
On the Page Design tab, select the Code editor toggle to see an HTML page

Understanding the Okta-hosted Sign-In Widget default code

Note: You can only enable the code editor if you configure a custom domain.
If you’re familiar with basic HTML, CSS, and JavaScript, the sign-in code appears standard, although it’s somewhat unusual in certain areas. There are two major blocks of code we should examine: the top of the body tag on the page and the sign-in configuration in the script tag.
The first one looks something like this:
<div id="okta-login-container"></div>
The second looks like this:
var config = OktaUtil.getSignInWidgetConfig();
// Render the Okta Sign-In Widget
var oktaSignIn = new OktaSignIn(config);
oktaSignIn.renderEl({ el: '#okta-login-container' },
OktaUtil.completeLogin,
function(error) {
// Logs errors that occur when configuring the widget.
// Remove or replace this with your own custom error handler.
console.log(error.message, error);
}
);
Let’s take a closer look at how this code works. In the HTML, there’s a designated parent element that the OktaSignIn instance uses to render the SIW as a child node. This means that when the page loads, you’ll see the <div id="okta-login-container"></div> in the DOM with the HTML elements for SIW functionality as its child within the div. The SIW handles all authentication and user registration processes as defined by policies, allowing us to focus entirely on customization.
To create the SIW, we need to pass in the configuration. The configuration includes properties like the theme elements and messages for labels. The method renderEl() identifies the HTML element to use for rendering the SIW. We’re passing in #okta-login-container as the identifier.
The #okta-login-container is a CSS selector. While any correct CSS selector works, we recommend you use the ID of the element. Element IDs must be unique within the HTML document, so this is the safest and easiest method.
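If you want to confirm that the widget actually rendered into that container, a quick check from the browser console is enough. This is purely a debugging aid, not part of the customization; it only relies on the okta-login-container ID shown above.

// Debugging aid: confirm the SIW rendered inside the designated container.
const container = document.getElementById('okta-login-container');
console.log(container && container.children.length > 0
  ? 'Sign-In Widget rendered'
  : 'Container is empty');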
Now that we have a basic understanding of how the Okta Sign-In Widget works, let’s start customizing the code. We’ll start by customizing the elements within the SIW. To manipulate the Okta SIW DOM elements in Gen3, we use the afterTransform method. The afterTransform method allows us to remove or update elements for individual or all forms.
Find the Edit button in the Code editor view. Selecting it makes the code editor editable, and the editor behaves like a lightweight IDE.
Below the oktaSignIn.renderEl() method within the <script> tag, add
oktaSignIn.afterTransform('identify', ({ formBag }) => {
const title = formBag.uischema.elements.find(ele => ele.type === 'Title');
if (title) {
title.options.content = "Log in and create a task";
}
const help = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'help');
const unlock = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'unlock');
const divider = formBag.uischema.elements.find(ele => ele.type === 'Divider');
formBag.uischema.elements = formBag.uischema.elements.filter(ele => ![help, unlock, divider].includes(ele));
});
This afterTransform hook runs only for the ‘identify’ form, before the widget renders it. We can find and target UI elements using the FormBag. The afterTransform hook is a more streamlined way to manipulate DOM elements within the SIW before rendering the widget. For example, we can search elements by type to filter them out of the view before rendering, which is more performant than manipulating DOM elements after the SIW renders. In this snippet, we filtered out elements such as the unlock account element and the divider.
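If you’re not sure which elements a given form exposes, you can temporarily log them from the hook while you explore. This is an exploration-only sketch that assumes nothing beyond the formBag structure shown above; swap your real hook back in afterwards.

// Exploration only: list each element's type and data-se identifier on the
// 'identify' form so you can see what is available to retitle, remove, or restyle.
oktaSignIn.afterTransform('identify', ({ formBag }) => {
  formBag.uischema.elements.forEach((ele) => {
    console.log(ele.type, ele.options && ele.options.dataSe);
  });
});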
Let’s take a look at what this looks like. Press Save to draft and Publish.
Navigate to your sign-in URL for your brand to view the changes you made. When we compare to the default state, we no longer see the horizontal rule below the logo or the “Help” link. The account unlock element is no longer available.
We explored how we can customize the widget elements. Now, let’s add some flair.
Organize your Sign-In Widget customizations with CSS Custom properties
At its core, we’re styling an HTML document. This means we operate on the SIW customization in the same way as we would any HTML page, and code organization principles still apply. We can define customization values as CSS Custom properties (also known as CSS variables).
Defining styles using CSS variables keeps our code DRY. Setting up style values for reuse even extends beyond the Okta-hosted sign-in page. If your organization hosts stylesheets with brand color defined as CSS custom properties publicly, you can use the colors defined there and link your stylesheet.
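If you go that route, one option is to read the published custom properties at runtime and pass the resolved values into the widget configuration. A minimal sketch, assuming your hosted stylesheet defines a property named --brand-primary on :root (a placeholder name, not something Okta provides):

// Read a brand color published as a CSS custom property on :root.
// '--brand-primary' is a hypothetical property name from your own stylesheet.
const rootStyles = getComputedStyle(document.documentElement);
const brandPrimary = rootStyles.getPropertyValue('--brand-primary').trim();
console.log(brandPrimary || 'custom property not defined on :root');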
Before making code edits, identify the fonts you want to use for your customization. We found a header and body font to use.
Open the SIW code editor for your brand and select Edit to make changes.
Import the fonts into the HTML. You can <link> or @import the fonts based on your preference. We added the <link> instructions to the <head> of the HTML.
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter+Tight:ital,wght@0,100..900;1,100..900&family=Poiret+One&display=swap" rel="stylesheet">
Find the <style nonce="{{nonceValue}}"> tag. Within the tag, define your properties using the :root selector:
:root {
--color-gray: #4f4f4f;
--color-fuchsia: #ea3eda;
--color-orange: #ffa738;
--color-azul: #016fb9;
--color-cherry: #ea3e84;
--color-purple: #b13fff;
--color-black: #191919;
--color-white: #fefefe;
--color-bright-white: #fff;
--border-radius: 4px;
--font-header: 'Poiret One', sans-serif;
--font-body: 'Inter Tight', sans-serif;
}
Feel free to add new properties or replace the property value for your brand. Now is a good opportunity to add your own brand colors and customizations!
Let’s configure the SIW with our variables using design tokens.
Find var config = OktaUtil.getSignInWidgetConfig();. After this line of code, set the values of the design tokens using your CSS Custom properties. You’ll use the var() function to access your variables:
config.theme = {
tokens: {
BorderColorDisplay: 'var(--color-bright-white)',
PalettePrimaryMain: 'var(--color-fuchsia)',
PalettePrimaryDarker: 'var(--color-purple)',
BorderRadiusTight: 'var(--border-radius)',
BorderRadiusMain: 'var(--border-radius)',
PalettePrimaryDark: 'var(--color-orange)',
FocusOutlineColorPrimary: 'var(--color-azul)',
TypographyFamilyBody: 'var(--font-body)',
TypographyFamilyHeading: 'var(--font-header)',
TypographyFamilyButton: 'var(--font-body)',
BorderColorDangerControl: 'var(--color-cherry)'
}
}
Save your changes, publish the page, and view your brand’s sign-in URL. Yay! You’ll see there’s no border outline, the border radius of the widget and its HTML elements has changed, the focus color is different, and element outlines use a different color when there’s a form error. You can inspect the HTML elements and view the computed styles, or, if you prefer, update the CSS variables to something more visible.
When you inspect your brand’s sign-in URL site, you’ll notice that the fonts aren’t loading properly and that there are errors in your browser’s debugging console. This is because you need to configure Content Security Policies (CSP) to allow resources loaded from external sites. CSPs are a security measure to mitigate cross-site scripting (XSS) attacks. You can read An Overview of Best Practices for Security Headers to learn more about CSPs.
Navigate to the Settings tab for your brand’s Sign-in page. Find the Content Security Policy and press Edit. Add the domains for external resources. In our example, we only load resources from Google Fonts, so we added the following two domains:
*.googleapis.com
*.gstatic.com
Press Save to draft and press Publish to view your changes. The SIW now displays the fonts you selected!
Extending the SIW theme with a custom color palette
In our example, we selectively added colors. The SIW design system adheres to WCAG accessibility standards and relies on Material Design color palettes.
Okta generates colors based on your primary color that conform to accessibility standards and contrast requirements. Check out Understand Sign-In Widget color customization to learn more about color contrast and how Okta color generation works. You must supply accessible colors to the configuration.
Material Design supports themes by customizing color palettes. The list of all configurable design tokens displays all available options, including Hue* properties for precise color control. Consider exploring color palette customization options tailored to your brand’s specific needs. You can use Material palette generators such as this color picker from the Google team or an open source Material Design Palette Generator that allows you to enter a HEX color value.
Don’t forget to keep accessibility in mind. You can run an accessibility audit using Lighthouse in the Chrome browser and the WebAIM Contrast Checker. Our selected primary color doesn’t quite meet contrast requirements. 😅
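If you want a quick programmatic sanity check before committing to a palette, the WCAG 2.1 contrast formula is easy to compute yourself. The helper below is not part of the Okta SDK or the SIW; it is a standalone sketch you can run in the browser console, using our example hex values. AA for normal text expects a ratio of at least 4.5.

// Standalone WCAG 2.1 contrast check (not part of the SIW).
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const channel = parseInt(hex.slice(i, i + 2), 16) / 255;
    return channel <= 0.03928 ? channel / 12.92 : Math.pow((channel + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}
function contrastRatio(hexA, hexB) {
  const [lighter, darker] = [relativeLuminance(hexA), relativeLuminance(hexB)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}
// Our example primary color against white button text:
console.log(contrastRatio('#ea3eda', '#ffffff').toFixed(2));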
Add custom HTML elements to the Sign-In Widget
Previously, we filtered HTML elements out of the SIW. We can also add new custom HTML elements to the SIW. We’ll experiment by adding a link to the Okta Developer blog. Find the afterTransform() method and update it to look like this:
oktaSignIn.afterTransform('identify', ({formBag}) => {
const title = formBag.uischema.elements.find(ele => ele.type === 'Title');
if (title) {
title.options.content = "Log in and create a task";
}
const help = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'help');
const unlock = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'unlock');
const divider = formBag.uischema.elements.find(ele => ele.type === 'Divider');
formBag.uischema.elements = formBag.uischema.elements.filter(ele => ![help, unlock, divider].includes(ele));
const blogLink = {
type: 'Link',
contentType: 'footer',
options: {
href: 'https://developer.okta.com/blog',
label: 'Read our blog',
dataSe: 'blogCustomLink'
}
};
formBag.uischema.elements.push(blogLink);
});
We created a new element named blogLink and set properties such as the type, where the content resides, and options related to the type. We also added a dataSe property that adds the value blogCustomLink to an HTML data attribute. Doing so makes it easier for us to select the element for customization or for testing purposes.
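For example, once the ‘identify’ form has rendered, the link can be located by that data attribute, which is handy in the browser dev tools or in a UI test. A small sketch you could paste into the console, assuming nothing beyond the dataSe value we just set:

// Run in the browser console after the 'identify' form renders:
// select the custom footer link by its data-se attribute.
const customLink = document.querySelector('a[data-se="blogCustomLink"]');
if (customLink) {
  console.log(customLink.textContent, customLink.href);
}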
Save and publish your changes, then view the sign-in page; the new footer link appears on the ‘identify’ form. When you continue past the ‘identify’ form in the sign-in flow, you’ll no longer see the link to the blog.
Overriding Okta Sign-In Widget element styles
We should use design tokens for customizations wherever possible. In cases where a design token isn’t available for your styling needs, you can fall back to defining styles manually.
Let’s start with the element we added, the blog link. Let’s say we want to display the text in capital casing. For accessibility, it’s not good practice to define the label value itself in capital casing; we should use CSS to transform the text instead.
In the styles definition, find the #login-bg-image-id selector. After the styles for the background image, add a style that targets the blogCustomLink data attribute and defines the text transform like this:
a[data-se="blogCustomLink"] {
text-transform: uppercase;
}
Save and publish the page to check out your changes.
Now, let’s say you want to style an Okta-provided HTML element. Use design tokens wherever possible, and make style changes cautiously as doing so adds brittleness and security concerns.
Here’s a terrible example of styling an Okta-provided HTML element that you shouldn’t emulate, as it makes the text illegible. Let’s say you want to change the background of the Next button to be a gradient. 🌈
Inspect the SIW element you want to style. We want to style the button with the data attribute save.
After the blogCustomLink style, add the following:
button[data-se="save"] {
background: linear-gradient(12deg, var(--color-fuchsia) 0%, var(--color-orange) 100%);
}
Save and publish the site. The button background is now a gradient.
However, style the Okta-provided SIW elements with caution. The dangers of this approach are three-fold:

The Okta Sign-In Widget undergoes accessibility audits, and changing styles and behavior manually may decrease accessibility thresholds
The Okta Sign-In Widget is internationalized, and changing styles around text layout manually may break localization needs
Okta can’t guarantee that the data attributes or DOM elements remain unchanged, which can break your customizations

In the rare case where you do style an Okta-provided SIW element, you may need to pin the SIW version so your customizations don’t break from under you. Navigate to the Settings tab and find the Sign-In Widget version section. Select Edit and select the most recent version of the widget, as this one should be compatible with your code. We are using widget version 7.36 in this post.
⚠️ Note

When you pin the widget, you won’t get the latest and greatest updates from the SIW without manually updating the version. Pinning the version prevents any forward progress in the evolution and extensibility of the end-user experiences. For the most secure option, allow SIW to update automatically and avoid overly customizing the SIW with CSS. Use the design tokens wherever possible.

Change the layout of the Okta-hosted Sign-In page
We left the HTML nodes defined in the SIW customization unedited so far. Changing the layout of the default <div> containers can make a significant impact, for example by changing the display CSS property to use Flexbox or CSS Grid. I’ll use Flexbox in this example.
Find the div for the background image container and the okta-login-container. Replace those div elements with this HTML snippet:
<div id="login-bg-image-id" class="login-bg-image tb--background">
<div class="login-container-panel">
<div id="okta-login-container"></div>
</div>
</div>
We moved the okta-login-container div inside another parent container and made it a child of the background image container.
Find the #login-bg-image-id style. Add the display: flex; property. The styles should look like this:
#login-bg-image-id {
background-image: {{bgImageUrl}};
display: flex;
}
We want to style the okta-login-container’s parent <div> to set the background color and to center the SIW on the panel. Add new styles for the login-container-panel class:
.login-container-panel {
background: var(--color-white);
display: flex;
justify-content: center;
align-items: center;
width: 40%;
min-width: 400px;
}
Save your changes and view the sign-in page. What do you think of the new layout? 🎊
⚠️ Note
Flexbox and CSS Grid are responsive, but you may still need to add properties handling responsiveness or media queries to fit your needs.
Your final code might look something like this:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="robots" content="noindex,nofollow" />
<!-- Styles generated from theme -->
<link href="{{themedStylesUrl}}" rel="stylesheet" type="text/css">
<!-- Favicon from theme -->
<link rel="shortcut icon" href="{{faviconUrl}}" type="image/x-icon">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter+Tight:ital,wght@0,100..900;1,100..900&family=Poiret+One&display=swap" rel="stylesheet">
<title>{{pageTitle}}</title>
{{{SignInWidgetResources}}}
<style nonce="{{nonceValue}}">
:root {
--font-header: 'Poiret One', sans-serif;
--font-body: 'Inter Tight', sans-serif;
--color-gray: #4f4f4f;
--color-fuchsia: #ea3eda;
--color-orange: #ffa738;
--color-azul: #016fb9;
--color-cherry: #ea3e84;
--color-purple: #b13fff;
--color-black: #191919;
--color-white: #fefefe;
--color-bright-white: #fff;
--border-radius: 4px;
}
{{ #useSiwGen3 }}
html {
font-size: 87.5%;
}
{{ /useSiwGen3 }}
#login-bg-image-id {
background-image: {{bgImageUrl}};
display: flex;
}
.login-container-panel {
background: var(--color-white);
display: flex;
justify-content: center;
align-items: center;
width: 40%;
min-width: 400px;
}
a[data-se="blogCustomLink"] {
text-transform: uppercase;
}
</style>
</head>
<body>
<div id="login-bg-image-id" class="login-bg-image tb--background">
<div class="login-container-panel">
<div id="okta-login-container"></div>
</div>
</div>
<!--
"OktaUtil" defines a global OktaUtil object
that contains methods used to complete the Okta login flow.
-->
{{{OktaUtil}}}
<script type="text/javascript" nonce="{{nonceValue}}">
// "config" object contains default widget configuration
// with any custom overrides defined in your admin settings.
const config = OktaUtil.getSignInWidgetConfig();
config.theme = {
tokens: {
BorderColorDisplay: 'var(--color-bright-white)',
PalettePrimaryMain: 'var(--color-fuchsia)',
PalettePrimaryDarker: 'var(--color-purple)',
BorderRadiusTight: 'var(--border-radius)',
BorderRadiusMain: 'var(--border-radius)',
PalettePrimaryDark: 'var(--color-orange)',
FocusOutlineColorPrimary: 'var(--color-azul)',
TypographyFamilyBody: 'var(--font-body)',
TypographyFamilyHeading: 'var(--font-header)',
TypographyFamilyButton: 'var(--font-body)',
BorderColorDangerControl: 'var(--color-cherry)'
}
}
// Render the Okta Sign-In Widget
const oktaSignIn = new OktaSignIn(config);
oktaSignIn.renderEl({ el: '#okta-login-container' },
OktaUtil.completeLogin,
function (error) {
// Logs errors that occur when configuring the widget.
// Remove or replace this with your own custom error handler.
console.log(error.message, error);
}
);
oktaSignIn.afterTransform('identify', ({ formBag }) => {
const title = formBag.uischema.elements.find(ele => ele.type === 'Title');
if (title) {
title.options.content = "Log in and create a task";
}
const help = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'help');
const unlock = formBag.uischema.elements.find(ele => ele.type === 'Link' && ele.options.dataSe === 'unlock');
const divider = formBag.uischema.elements.find(ele => ele.type === 'Divider');
formBag.uischema.elements = formBag.uischema.elements.filter(ele => ![help, unlock, divider].includes(ele));
const blogLink = {
type: 'Link',
contentType: 'footer',
options: {
href: 'https://developer.okta.com/blog',
label: 'Read our blog',
dataSe: 'blogCustomLink'
}
};
formBag.uischema.elements.push(blogLink);
});
</script>
</body>
</html>
You can also find the code in the GitHub repository for this blog post. With these code changes, you can connect this with an app to see how it works end-to-end. You’ll need to update your Okta OpenID Connect (OIDC) application to work with the domain. In the Okta Admin Console, navigate to Applications > Applications and find the Okta application for your custom app. Navigate to the Sign On tab. You’ll see a section for OpenID Connect ID Token. Select Edit and select Custom URL for your brand’s sign-in URL as the Issuer value.
You’ll use the issuer value, which matches your brand’s custom URL, and the Okta application’s client ID in your custom app’s OIDC configuration. If you want to try this and don’t have a pre-built app, you can use one of our samples, such as the Okta React sample.
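For reference, here is a minimal sketch of what that client-side configuration might look like with the @okta/okta-auth-js package. The issuer and client ID shown are placeholders; use your brand’s custom URL and your Okta application’s client ID.

// Placeholder values: replace the issuer with your brand's custom URL and the
// clientId with the client ID from your Okta OIDC application.
import { OktaAuth } from '@okta/okta-auth-js';

const oktaAuth = new OktaAuth({
  issuer: 'https://id.example.com',
  clientId: '{yourClientId}',
  redirectUri: window.location.origin + '/login/callback',
  scopes: ['openid', 'profile', 'email'],
});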
Customize your Gen3 Okta-hosted Sign-In Widget
I hope you enjoyed customizing the sign-in experience for your brand. Using the Okta-hosted Sign-In Widget is the best, most secure way to add identity security to your sites. With all the configuration options available, you can have a highly customized sign-in experience with a custom domain without anyone knowing you’re using Okta.
If you like this post, there’s a good chance you’ll find these links helpful:
Create a React PWA with Social Login Authentication
Secure your first web app
How to Build a Secure iOS App with MFA

Remember to follow us on Twitter and subscribe to our YouTube channel for fun and educational content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below! Until next time!
California has returned to the Zero-Trust front line. When Assemblymember Jacqui Irwin re-introduced the mandate this year as AB 869, she rewound the clock only far enough to give agencies a fighting chance: every executive-branch department must show a mature Zero-Trust architecture by June 1, 2026.
The bill sailed through the Assembly without a dissenting vote and now sits in the Senate Governmental Organization Committee with its first hearing queued for early July. Momentum looks strong: the measure already carries public endorsement from major players in the security space such as Okta, Palo Alto Networks, Microsoft, TechNet, and Zscaler, along with a unanimous fiscal-committee green light.
The text itself is straightforward. It lifts the same three pillars that the White House spelled out in Executive Order 14028—multi-factor authentication everywhere, enterprise-class endpoint detection and response, and forensic-grade logging—and stamps a date on each pillar. Agencies that fail will be out of statutory compliance, but, as the committee’s analysis warns, the real price tag is the downtime, ransom and public-trust loss that follow a breach.
Why Unifying Identity Data Is the Real Challenge in Zero Trust
California has spent four years laying technical groundwork. The Cal-Secure roadmap already calls for continuous monitoring, identity lifecycle discipline and tight access controls. Yet progress has stalled because most departments still lack a single, authoritative view of who and what is touching their systems. Identity data lives in overlapping Active Directory forests, SaaS directories, HR databases and contractor spreadsheets. When job titles lag three weeks behind reality or an account persists after its owner leaves, even the best MFA prompt or EDR sensor can’t make an accurate determination.
Identity Data Fabric and the RadiantOne Platform: How Radiant Logic Creates a Single Source of Identity Truth
Radiant Logic addresses this obstacle at its root. The platform connects to every identity store—on-prem, cloud, legacy or modern—then correlates, cleans and serves a real-time global profile for every person and device. That fabric becomes the single source of truth that each Zero-Trust control needs and consumes:
MFA tokens draw fresh role and device attributes, so “adaptive” policies really do adapt.
EDR and SIEM events carry one immutable user + device ID, letting analysts trace lateral movement in minutes instead of days.
Log files share the same identifier, turning post-incident forensics into a straight line instead of a spider web.

The system’s built-in hygiene analytics spotlight dormant accounts, stale entitlements and toxic combinations—precisely the gaps auditors test when they judge “least-privilege” maturity.
A Concrete, 12-Month Playbook: What an Identity Data Fabric Does in Practice

1. Connect all identity sources. Map and connect every authoritative and shadow identity source to RadiantOne. No production system needs to stop; the platform operates as an overlay.
2. Redirect authentication flows—IdPs, VPNs, ZTNA gateways—so their policy engines read from the new identity fabric. Legacy applications gain modern, attribute-driven authorization without code changes.
3. Stream context into security tools. By streaming enriched context into existing EDR and SIEM pipelines, alerts can now include the “who, what and where” information that incident responders crave.
4. Run hygiene dashboards to purge inactive or over-privileged accounts. The same reports double as proof of progress for the annual OIS maturity survey.

Teams that follow the sequence typically see two wins long before the statutory deadline: faster mean-time-to-detect during adversarial red-teaming exercises, and a dramatic cut in audit questions that start with, “How do you know…?”
Beyond Compliance: Why Zero Trust Is More than a Checkbox
AB 869 may be the nudge, but the destination is bigger than a checkbox. When identity is the de facto new perimeter—and when that identity is always current, complete and trustworthy—California’s digital services stay open even on the worst cyber day. Radiant Logic provides the identity fabric that makes Zero-Trust controls smarter, cheaper and easier to prove.
The countdown ends June 1, 2026. The journey can start with a single connection to your first directory.
REFERENCES
https://cdt.ca.gov/wp-content/uploads/2021/10/Cybersecurity_Strategy_Plan_FINAL.pdf
https://calmatters.digitaldemocracy.org/bills/ca_202520260ab869
The post California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
In many places in the world, Zero Trust has shifted from being a security philosophy to a regulatory mandate, including in the U.S., as discussed in California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic. Gartner’s 2025 Hype Cycle for Zero Trust Technology highlights identity as the foundation for Zero Trust success and names Radiant Logic as a Sample Vendor enabling that foundation in the AI for Access Administration category.
Regulatory Mandates Are Accelerating Zero Trust Adoption
Across both public and private sectors, the push for implementing Zero Trust is accelerating. California’s Assembly Bill 869, for example, requires every executive-branch agency to demonstrate a mature Zero Trust architecture by June 2026. This is one example of how regulations are putting firm dates on adoption. Gartner’s recognition underscores why Radiant Logic matters in this context.
Zero Trust depends not only on reliable identity data but also on making that data accessible. The challenge for most organizations is not the lack of Zero Trust tools but the difficulty of getting the right identity data. Attributes, context, and relationships all need to be delivered to those tools in a format they can actually use.
Without that foundation, Zero Trust efforts typically stall.
Why Identity Is Central to Zero Trust
The National Institute of Standards and Technology (NIST) defines Zero Trust around a simple idea: never trust, always verify. Every request must be authenticated and authorized in its context. Yet in most enterprises, identity data is fragmented across directories, cloud services, HR systems, and contractor databases. This is the reality of what we call identity sprawl. When accounts linger after employees leave or when attributes are out of date, even the best MFA solutions or EDR policies falter.
Gartner cautions that organizations lacking visibility into their identity data face both elevated security risks and operational inefficiencies. Zero Trust controls cannot deliver on their promise if they operate on incomplete or inconsistent input. The result is only as good as the underlying identity data.
Radiant Logic’s Role
RadiantOne unifies identity data from every source into a single, correlated view of every identity, whether human or non-human. That fabric becomes the authoritative context that Zero Trust controls need to be successful. This foundation lets MFA policies adapt dynamically to current identity and device signals while, at the same time, unifying log files under a single identifier and enabling Zero Trust access, network segmentation, and more. Why is this important? Many regulatory initiatives are tightening breach-reporting requirements; correlating identities into a single view streamlines forensic work and allows for swift signaling or reporting to a competent authority.
Identity data hygiene matters because it allows organizations to detect dormant accounts, stale entitlements, and toxic combinations before auditors or adversaries find them.
The Business Impact
Maintaining this hygiene is critical to mitigating risk and ensuring that Zero Trust policies are enforced on accurate, trustworthy data. By ensuring Zero Trust policies run on clean, governed identity data, Radiant Logic enables organizations to enforce least privilege, reduce the attack surface, and meet compliance obligations in a timely fashion.
For CISOs, this reduces risk by closing identity gaps before attackers exploit them.
For CIOs, it modernizes access controls without disrupting legacy systems.
For compliance leaders, it provides defensible evidence for regulatory audits and mandates and, in case of a breach, a swift response to regulators signaling and reporting requirements.
Zero Trust is no longer an academic, philosophical idea — it is an operational requirement of modern security. Gartner’s recognition of Radiant Logic validates our role in making it achievable, practical, and provable.
Learn More
The full report can be downloaded here. Discover how Radiant Logic strengthens Zero Trust initiatives with unified, real-time identity data and intelligence. To discuss with an identity and Zero Trust expert, contact us today.
The post Gartner® Recognizes Radiant Logic in the 2025 Hype Cycle™ for Zero Trust appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
In today’s healthcare ecosystem, seconds can mean the difference between life and death. Clinicians need instant access to systems, patient records, and tools that guide treatment decisions. But too often, identity and access management (IAM) becomes a silent bottleneck—slowing workflows, increasing frustration, and opening new avenues for attackers.
Key Takeaway: In modern healthcare, fast, secure identity access isn’t an IT issue—it’s a patient safety issue.

The Legacy Identity Problem in Healthcare

Identity is not just an IT function. It is the connective tissue between operational efficiency and strong security. When access works seamlessly, clinicians focus on patients. When it falters, care delivery stalls. The stakes are that high.

Common IAM Pain Points for Healthcare Providers
Healthcare organizations carry a legacy burden that includes identity infrastructures stitched together from mergers, acquisitions, and outdated systems. The results are familiar and painful:
Slow onboarding: Clinicians wait days or weeks to access EHRs, e-prescribing platforms, or HR systems
Siloed systems: Contractors, vendors, and students are often tracked manually or inconsistently, creating blind spots
Fragmented logins: Multiple usernames and passwords drain productivity, encourage weak credential practices, and create security risks

Why Fragmented Systems Put Patients and Data at Risk

Each inefficiency cascades into operational and security problems. In a shared workstation environment where multiple staff members rotate across terminals, the friction of multiple logins is more than inconvenient—it is unsafe.

How the “Persona Problem” Impacts Clinicians
Modern clinicians often wear many hats: surgeon, professor, and clinic practitioner. Each role demands different entitlements, application views, and permissions. Legacy IAM systems struggle to keep pace, forcing clinicians into frustrating workarounds that compromise both care and compliance.
A modern identity data foundation solves this “persona problem” by enabling:
Multi-persona profiles: A unified identity that captures every role under one credential
Contextual access: Role-specific entitlements delivered at the point of authorization
Streamlined governance: Fewer duplicates, cleaner oversight, and enforced least privilege

The result? Clinicians move seamlessly across their responsibilities without juggling multiple logins, and security teams gain a clearer, more manageable access model.
Identity as the Frontline of Healthcare Cybersecurity
Disconnected directories, inconsistent access records, and orphaned accounts create fertile ground for attackers. The 2024 Change Healthcare ransomware incident, traced back to compromised remote access credentials, highlighted the catastrophic impact that a single identity failure can unleash.
The Compliance Consequences of Poor Identity Hygiene
Poor IAM hygiene doesn’t just slow down care—it invites compliance nightmares. Regulations like HIPAA require clear evidence of least-privilege access and timely de-provisioning, but piecing that evidence together from fractured systems is a losing battle.
Temporary fixes and one-off integrations won’t cure healthcare’s identity problem. What is needed is a modern identity data foundation that:
unifies identity data from HR systems, AD domains, credentialing databases, cloud apps, and more
rationalizes and correlates records into a single, authoritative profile for each user
delivers tailored views to each consuming system—EHR, tele-health, billing, scheduling—through standard protocols like LDAP, SCIM, and REST
strengthens ISPM by ensuring security policies, risk analytics, and compliance reporting all act on the same high-quality identity data

RadiantOne provides that foundation. Acting as a universal adapter and central nervous system, it abstracts away complexity, enables day-one M&A integration, supports multi-tenant models for affiliated clinics, and reduces costly manual cleanup.
Healthcare’s identity challenge is not theoretical. It is visible every day in delayed access, clinician frustration, regulatory fines, and high-profile breaches. But it doesn’t have to be this way.
With a unified identity data foundation, healthcare organizations can:
accelerate clinician onboarding
reduce operational bottlenecks
strengthen identity security posture
simplify compliance
empower caregivers with seamless, secure access

The question is no longer whether identity impacts care delivery and security: it is whether your identity infrastructure is helping or holding you back.
Download the white paper, The Unified Identity Prescription: Securing Modern Healthcare & Empowering Caregivers, to explore how a unified identity data foundation can power better care and stronger security.
The post Identity: The Lifeline of Modern Healthcare appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
Gartner’s 2025 Hype Cycle for Digital Identity and Hype Cycle for Zero-Trust Technology, 2025 highlight AI for Access Administration as an emerging innovation with high potential, or, as Gartner calls it, an “Innovation Trigger.” The promise to automate entitlement reviews, streamline least-privilege enforcement and replace months of manual cleanup with intelligent, adaptive identity governance is very compelling.
But as Gartner cautions, “AI is no better than human intelligence at dealing with data that doesn’t exist.”
When it comes to AI, the limiting factor is not the algorithms: it’s the data. Fragmented directories, inconsistent entitlement models, and dormant accounts create blind spots that undermine any attempt at automation. Without a reliable identity foundation, AI has little to work with and what it does work with is riddled with flaws and problems.
Key Takeaway: The barrier to AI success in access governance isn’t algorithms—it’s bad identity data.

Identity-Driven Attacks Are Outpacing Traditional IAM Processes
Verizon’s 2025 DBIR confirms credential misuse as the leading breach vector, with attackers increasingly exploiting valid accounts rather than brute-forcing their way in. IBM X-Force highlights that the complexity of responding to identity-driven incidents nearly doubles compared to other attack types. Trend Micro adds that risky cloud app access and stale accounts remain among the most common exposure points. These are just three out of many prominent organizations voicing their concern.
What This Means: Static certifications and spreadsheet-based entitlement reviews cannot keep pace with adversaries who are already automating their side of the equation.

Making Identity Data AI-Ready
Radiant Logic is recognized in Gartner’s Hype Cycle as a Sample Vendor for enabling AI for Access Administration. Our role is foundational—we provide the trustworthy identity data layer that AI systems require to function effectively.
The RadiantOne Platform unifies identity information from directories, HR systems, cloud services, and databases into one semantic identity layer. This layer ensures that access intelligence operates on clean, normalized, and correlated data. The result is an explainable and auditable basis for AI-driven recommendations and automation.
From Episodic to Continuous Access Intelligence
With this semantic identity layer in place, AI can shift access administration from episodic to continuous monitoring, detecting entitlement drift, rationalizing excessive access, and adapting policies in near real time.
Enabling Agentic AI in Access Governance
Radiant Logic is investing deeply in advancing the field of Agentic AI and has already delivered tangible innovations for customers through AIDA and fastWorkflow.
What Is AIDA (AI Data Assistant)?
AIDA (AI Data Assistant) is a core capability of the platform. It is presented as a virtual assistant to simplify user interactions, improve operational efficiency and help to make more informed decisions.
How AIDA Simplifies Access Reviews
For example, AIDA is used to address one of the most resource-heavy processes in IAM: user access reviews. Instead of overwhelming reviewers with raw data, AIDA highlights isolated access, surfaces over-privileged or dormant accounts, and proposes remediations in plain language. Each suggestion is linked to the underlying identity relationships, ensuring decisions remain auditable and defensible.
The result is a faster review cycle with less fatigue for reviewers, while giving compliance teams confidence that AI assistance does not compromise accountability.

What Is fastWorkflow and Why It Matters
At its core, AIDA leverages fastWorkflow, a reliable agentic Python framework.
fastWorkflow aims to address common challenges in AI agent development such as intent misunderstanding, incorrect tool calling, parameter extraction hallucinations, and difficulties in scaling.
The outcome is much faster agent development, providing deterministic results even when leveraging smaller (and cheaper) AI models.
Open-Sourcing fastWorkflow for the Community
Radiant Logic has released fastWorkflow to the open-source community under the permissive Apache 2.0 license, enabling developers to accelerate their AI initiatives with a flexible and proven framework.
If you are interested in learning more about fastWorkflow, this article series is available, and you can access the project and code for fastWorkflow on GitHub.

These capabilities are the first public expressions of our broader Agentic AI strategy, moving AI beyond theoretical promise into operational reality. These innovations are part of a larger roadmap exploring how intelligent agents can fundamentally transform the way enterprises secure and govern identity data.
Our recognition in Gartner’s Hype Cycle for Digital Identity reflects why this matters: most AI initiatives in IAM fail not because of algorithms, but because of poor data quality and unreliable execution. By unifying identity data, enabling explainable guidance through AIDA, and ensuring safe, reliable execution with fastWorkflow, we are making Agentic AI practical for access governance today—while laying the foundation for what comes next.
The Business Impact
For CISOs, this means reducing exposure by closing gaps before they are exploited. For CIOs, it delivers modernization without breaking legacy systems. For compliance leaders, it simplifies audits with data-backed, explainable decisions.
AI for Access Administration will not replace governance programs, but it will change their tempo. What was once a quarterly campaign becomes a continuous process. What was once a compliance checkbox becomes a dynamic part of security posture. This is closely in line with regulatory initiatives where a continuous risk-based security posture is critical.
Radiant Logic provides the missing foundation: unified, governed, and observable identity data.
See how you can shift from a reactive identity security posture to a proactive, data-centric, AI-driven approach: contact us today.
The post AI for Access Administration: From Promise to Practice appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
By Trevor Butterworth
PhocusWire’s annual Hot 25 Startups identifies the most dynamic and disruptive companies in the travel, tourism and hospitality sectors — and for 2026, Indicio is on its hot list, thanks to our award-winning decentralized identity technology, Indicio Proven®.
Indicio Proven is providing trusted digital identity for travelers in Africa and the Caribbean (and soon, Europe), enabling pre-authorized travel, seamless border crossing, and shorter lines at airports, all while delivering superior security, data privacy, and better traveler experiences.
A member of Nvidia’s Inception Program and recently partnered with NEC, Indicio is also pioneering the use of decentralized identity in agentic AI systems, enabling AI agents to hold verifiable digital identities so that they can prove who they represent to travelers, ask permission to access data, share that data with other agents, and deliver next-generation customer service across travel systems, loyalty programs, and tourist services.
So yes, we agree with PhocusWire: We’re hot!
How Indicio Proven brings the heat
Indicio Proven is a platform that plugs into your platform to make things easier, faster, simpler on the front and on the back end.
What things? Identity authentication, access management, data sharing, integration, compliance, security.
Decentralized identity isn’t just for identifying people, it’s also for identifying organizations, devices, AI agents, and being able to securely share and authenticate the data they need to act on.
Decentralized identity uses Verifiable Credentials to make all this possible. They make data fully portable and instantly verifiable anywhere — even offline.
With Indicio Proven, you are able to add an authenticated biometric to a Verifiable Credential and create a “government grade” digital identity in minutes.
Indicio Proven transforms travel, tourism, hospitality
More and more people are traveling and want to travel. According to a recent report by the World Travel Council, 14 billion passengers will be flying annually by 2035 — a forecast that exceeds previous estimates and 2025 passenger numbers (9.8 billion).
How are airports, airlines, border control, and every related service to contend with this increased demand? How is the tourist sector to manage a world where everyone wants to go on vacation but fewer and fewer people are available to serve them?
At the same time as travel has been democratized, people want experiences over things. Waiting in endless lines or struggling through complex online processes are not what they have in mind. But that’s what they are facing if we can’t simplify operations and increase capacity.
This is where Indicio Proven comes in. With Indicio Proven, you can build automated operations around trusted data. This means using Digital Travel Credentials (DTC) to pre-authorize travel long before a passenger leaves for the airport. This means being able to leverage that trusted data for seamless authentication throughout the airport — and all the way to hotel check in and beyond.
It also means being able to do all this in the most secure way possible, by removing all the vulnerabilities that are fueling the explosion in identity fraud. For example, the cryptography that provides assurance of proof in a Verifiable Credential is AI-resistant; the authenticated biometrics in a DTC provide a simple way to avoid deepfakes.
This makes challenges, like automating hotel check-in, simple. Chargeback fraud, on the other hand, becomes difficult because you can’t share or steal a Verifiable Credential and falsely authorize a payment.
New kinds of services around data and real-time data become easy to implement because passengers and guests can consent to share their data, simplifying GDPR compliance.
Indicio Proven and decentralized identity are a minimalist’s dream of lego: how to build a network for sharing and using trusted data with the fewest number of parts, the least friction, the least expense, and with the most flexibility to add new elements and scale.
And the neat thing? You can plug this lego into your existing systems and get going in weeks.
Global interoperability ensures that travelers equipped with Indicio Proven credentials can go from anywhere to anywhere.
Why travel AI systems need Indicio Proven
While AI in travel isn’t new, it often shows itself to be not very useful. We’ve all dealt with chatbots that can tell you your flight time but not much else. Or they paralyze you with authentication requests that require multiple data points from different apps and emails.
Indicio Proven takes the same seamless approach to crossing a border and applies it to interacting with an AI agent.
When a passenger and an AI agent interact, each mutually authenticates the other’s identity before any data is shared.
This is absolutely critical for security and trust. A traveler has to trust that the AI agent is legitimate and not spoofed. But the AI agent also needs to verify that the traveler’s identity is real too.
Only decentralized identity provides a secure way to do this. Legacy, centralized approaches cannot deliver the security needed for deploying AI agents.
After mutual authentication, the traveler consents to share their data. This is vital for GDPR compliance.
The AI agent then accesses structured data that has been digitally signed to make it tamper-proof (even from malicious AI).
The AI Agent can also ask for permission to share the passenger’s data with one or more other agents, and with that consent can execute increasingly complex tasks — for example, an airline agent could interact with a railway agent to book a ticket using the traveler’s air miles.
We call this delegated authority and it’s essential to both enabling AI to scale and to do so in a compliant way.
The result: frictionless, next-gen customer service. Travelers can carry their identity in a way that’s verified, private, and reusable, and AI can personalize responses and services in real time because it’s working with actual data, not guesses or outdated records.
Other analysts are also taking note
Gartner, Forrester, IDC, S&P Global, and Juniper Research have all recognized Indicio as a company driving digital transformation through decentralized identity.
Our team is leading the work to help develop and support open standards, like OpenID4VC, and to drive global interoperability.
There isn’t going to be one digital credential that will cover every possible use case; that’s fine. The key is to make today’s — and tomorrow’s — digital credentials interoperate with each other seamlessly, so it’s just not an issue for anyone.
Thank you, PhocusWire
If you think about what we’re doing, we’re enabling data to travel anywhere, independently. We allow data to be created so that any border — a device, a system — can verify where it came from, whether it has been manipulated, and whether it is to be trusted. You could say we’re kinda like a travel company for the next era of the internet. And we’re tremendously grateful to PhocusWire for recognizing all this work. It brings the future even closer.
Contact us
If you are in the travel, tourism, or hospitality business, we’d love to show you how easy this all is to implement and to talk about how our customers are using Indicio Proven to extend their leadership in their sectors (and — no pressure — possibly yours).
We have solutions for hotel check-in, payments, ticketing, customer service, employee access, and reducing chargeback fraud, along with the building blocks for deploying them: a white-label digital wallet and mobile SDK.
You need trusted data to master the future? We’ve got you covered.
Read the full PhocusWire announcement here.
The post Indicio wins spot on PhocusWire’s Hot 25 startups for 2026 appeared first on Indicio.
“Last month’s AWS outage did more than interrupt chats and scramble payment systems. It reignited a political argument that has been simmering for years: whether cloud platforms have become too essential to be left in private hands.”
In the U.K., calls for digital sovereignty resurfaced almost immediately. Across Europe, people again questioned their dependence on U.S. providers. Even for companies that weren’t directly affected, the incident felt uncomfortably close.
In The Infrastructure We Forgot We Built, I pointed out that private infrastructure now performs public functions. The question isn’t whether these systems are critical—demonstrably, they are—it’s what happens when everything is critical. Governments continue to expand their definitions of “critical infrastructure,” extending the term to encompass finance, cloud, data, and communications. Each new addition feels justified, but the result is an ever-growing list that no one can fully protect.
Declaring something “critical” once meant ensuring its safety. Now it often means claiming jurisdiction. It creates an uncomfortable paradox: the more we classify, the more we appear to protect, and the less effective we become at coordinating a response when the next outage arrives.
Let’s poke at some interesting ramifications of classifying a service as critical.
A Digital Identity Digest: The Paradox of Protection (podcast episode, 00:14:09). You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
The American model: expanding scope, dispersing responsibility
Nowhere is this inflation more visible than in the United States, where “critical infrastructure” has evolved from a short list of sixteen sectors, including energy, water, transportation, and communications, to a sprawling catalog of national functions. The Cybersecurity and Infrastructure Security Agency (CISA) calls them National Critical Functions: over fifty interconnected capabilities that “enable the nation to function.” It’s an attempt to capture the web of dependencies that tie one system to another, but the list is so long that prioritization becomes impossible.
At the same time, National Security Memorandum 22 (NSM-22) shifted much of the responsibility for protecting those functions away from federal oversight. Under NSM-22, agencies and private operators were expected to manage their own resilience planning. In theory, decentralization builds flexibility; in practice, it creates a policy map with thousands of overlapping boundaries. The government defines criticality broadly, but control over what that means in practice is increasingly diffuse.
As of 2025, the current U.S. administration is reviewing NSM-22 and several other cybersecurity and infrastructure policies in an effort to clarify lines of responsibility and modernize federal strategy. According to Inside Government Contracts, this review could lead to significant revisions in how critical infrastructure is defined and governed, though the direction remains uncertain.
What’s unlikely to change is the underlying trend: expansion without coordination. The more functions labeled critical, the thinner the resources spread to defend them. If everyone is responsible, no one really is.
The European model: bureaucracy as resilience
Europe has taken almost the opposite approach. Where the U.S. delegates, the European Union codifies. The NIS2 Directive and the Critical Entities Resilience (CER) Directive bring a remarkable range of organizations, such as cloud providers, postal services, and wastewater plants, under the umbrella of “essential” or “important” entities. Each must demonstrate compliance with a thick stack of risk-management, incident-reporting, and supply-chain-security obligations.
It’s tempting to see this as overreach, but there’s a strange effectiveness to it. A friend recently observed that bureaucracy can be a form of resilience: it forces repeatable, auditable behavior, even when it slows everything down. Under NIS2, an outage may still occur, but the process for recovery is at least predictable. Europe’s system may be cumbersome, but it institutionalizes the habit of preparedness.
If the U.S. model risks diffusion, the European one risks inertia. Both confuse activity with assurance. To put it another way, expanding oversight doesn’t guarantee protection; it guarantees paperwork. Protection might just be a happy accident.
Interdependence cuts both ways
Underlying both approaches is the same dilemma: interdependence magnifies both stability and fragility. The OECD warns about “systemic risk” in its 2025 Governments at a Glance report. Similarly, the WEF describes this characteristic as “interconnected risk” in their Global Risks Report 2024. In both cases, they are talking about how a disturbance in one sector can ripple instantly into others, turning what should be a local failure into a global one.
But interdependence also enables the efficiencies that modern economies depend on. The same cloud architectures that expose organizations to shared risk also deliver shared recovery. If an AWS region goes down, another can often pick up the load within minutes. That doesn’t make the system invulnerable; it makes it tightly coupled, which is both a feature and a flaw.
That is the paradox of microservice design: locally resilient, globally fragile. The further we distribute responsibility, the more brittle the whole becomes. Managing that trade-off is less about eliminating interdependence than about deciding which dependencies are worth keeping.
Coordination in a fragmented world
The Carnegie Endowment’s recent report on global cybersecurity clearly frames the problem: the challenge is no longer whether to protect critical systems, but how to coordinate that protection across borders. The Internet made infrastructure transnational; regulation still stops at the border.
That tension was at the center of my earlier series, The End of the Global Internet. Fragmentation, through data-localization mandates, competing technical standards, and geopolitical distrust, is shrinking the space for cooperation. The systems that most need collective protection are emerging at the moment when collective action is least feasible.
That was made more than clear during the October 2025 AWS outage.
In the U.K., it reignited arguments about tech sovereignty, with commentators and MPs warning that reliance on U.S. providers left the country strategically exposed. In Brussels, the outage reinforced calls to accelerate the European Cloud Federation and “limit reliance on American hyperscalers.” Tech.eu put it bluntly: “A global AWS outage exposes fragile digital foundations.” They are not wrong.
A technical event at this scale offers impressive political ammunition. The debate becomes about more than just uptime. It’s also about who controls the tools a society can’t seem to function without.
Labeling platforms as critical infrastructure amplifies that instinct. Once something is “critical,” every government wants jurisdiction. Every region seeks its own version. The intent is to strengthen sovereignty, but the result is a more fragmented Internet. Protection turns into partition.
Openness vs. control: lessons from digital public infrastructure
This tension between openness and control shows up again in global discussions around Digital Public Infrastructure (DPI). A recent G20 policy brief argues that while DPI and Critical Information Infrastructure (CII) both serve public purposes, they arise from opposite design instincts. DPI emphasizes inclusion, interoperability, and openness; CII emphasizes security, restriction, and control.
Some systems are designated critical only after they become indispensable. India’s Aadhaar identity platform is a great example. The Central Identities Data Repository (CIDR) was declared a Protected System under the country’s CII rules in 2015—five years after Aadhaar’s rollout—adding compliance obligations to what began as open, widely used public infrastructure. Those regulations were and are necessary, but it’s reasonable to ask whether a system managing such sensitive data should ever have operated without that protection in the first place.
The challenge isn’t simply timing. Too early can stifle innovation; too late can amplify harm. The real question is how societies decide when openness must yield to oversight, and whether that transition preserves the trust that made the system valuable in the first place.
The politics of protection
Critical infrastructure has always been political. As the Brookings Institution observed more than a decade ago, infrastructure investment—and, by extension, classification—has always reflected political will as much as technical necessity. The same logic applies online. Designating something “critical” can attract funding, exemptions, or strategic leverage. In a digital economy where perception drives policy, criticality itself becomes a form of currency.
The temptation to leverage the classification of “critical” is understandable: declaring something critical signals seriousness. But it also invites lobbying, nationalization, and regulatory capture. In the analog era, the line between public good and private gain was already blurry; the digital era simply made it blur faster and more broadly.
Criticality has become a negotiation, and as with all negotiations, outcomes depend less on evidence than on who has the microphone.
The discipline of selective resilience
If the first post in this series leaned toward recognizing new kinds of critical infrastructure, this one argues for restraint in doing so. Declaring everything critical doesn’t make societies safer; it makes prioritization impossible. Resilience requires hierarchy, specifically knowing what must endure, what can fail safely, and how systems recover in between.
That’s an uncomfortable truth for both policymakers and providers. (I would say I’m glad I don’t have that job, but I kind of do, as a voting member of society.) Safety sounds equitable; prioritization sounds elitist. But in practice, resilience demands choice. It asks us to acknowledge that some dependencies matter more than others, and to build systems that tolerate loss rather than pretending loss is preventable.
The more we classify, the more we appear to protect, and the less effective we become at coordinating when the next outage arrives. The task ahead isn’t expanding the list. It’s learning to live with a smaller one.
If you’d rather get a notification when a new blog post is published than hope to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript[00:00:30]
Welcome back. Last month’s AWS outage did more than just interrupt chats and scramble payment systems — it ignited a long-simmering argument about whether cloud platforms have become too essential to be left entirely in private hands.
In the UK, calls for digital sovereignty resurfaced almost immediately. Across Europe, governments and enterprises once again questioned their dependence on U.S. providers. And even for organizations that weren’t directly affected, the outage felt uncomfortably close. The internet wobbled — and everybody noticed.
Defining What’s “Critical”
In my post last week, The Infrastructure We Forgot We Built, I argued that private infrastructure now performs public functions.
That’s the heart of the question here — not whether these systems are critical infrastructure (they are), but what happens when everything becomes critical?
When every failure becomes a matter of national concern, the language of protection starts collapsing under its own weight.
So, what do we actually mean when we say critical infrastructure? The phrase sounds straightforward, but it isn’t. Every jurisdiction defines it differently. Broadly speaking, critical infrastructure refers to assets, systems, and services essential for society and the economy — things whose disruption would cause harm to public safety, economic stability, or national security.
That definition works for power grids and water systems, but it gets complicated when we start talking about DNS, payments, or authentication services — the digital glue holding everything together.
Today, critical is no longer just about physical survival. It’s about functional continuity and keeping society running.
When Everything Is Critical, Nothing Is
Each country’s list of what’s critical keeps getting longer — and fuzzier. Declaring something critical once meant ensuring its safety. Now, it feels more like staking a claim to control.
That’s the paradox. The more we classify, the more we appear to protect — but the less effective we become when the next outage hits.
This tension is especially visible in the United States. Critical infrastructure once referred to 16 sectors — energy, water, transportation, communications — things you could point to in the real world.
Today, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recognizes more than 50 “national critical functions.” These include both government and private-sector operations so vital that their disruption could debilitate the nation.
It’s a noble definition — but a recipe for paralysis. Because if everything is critical, then nothing truly is.
Expansion Without Coherence
The National Security Memorandum 22 (NSM-22) was intended to modernize how those functions are managed. In theory, it decentralizes responsibility, allowing agencies and private operators to tailor protections to their own risk environments.
In practice, it’s become a policy map full of overlapping boundaries — blurry accountability, scattered resources, and fragmented oversight.
It’s a patchwork: agencies, regulators, and corporate partners each hold a piece of the responsibility, but no one has the full picture.
While the U.S. administration is reviewing these policies, the underlying trend remains: we keep expanding the definition of “critical” without improving coordination.
The result?
Expansion without coherence. Protection without prioritization. A system too diffused to defend.
It’s the digital version of the bystander effect: if everyone is responsible, no one truly is.
Bureaucracy as Resilience
Let’s shift to the European model, which takes almost the opposite approach. Where the U.S. delegates, the EU codifies — through the NIS 2 Directive and the Critical Entities Resilience Directive.
These cover a wide range of organizations — from cloud providers to waste-water plants — all classified as “essential” or “important.” Each must prove compliance with risk management, incident reporting, and supply-chain security requirements.
It’s easy to dismiss that as bureaucratic overreach — and in part, it is.
But it’s also effective in its own way. Bureaucracy, for all its flaws, enforces repeatable, auditable behavior even as it slows things down.
Under NIS 2, an outage may still occur, but the recovery process is predictable. You may not like the paperwork, but you’ll have it — and sometimes, that’s half the battle.
Still, the EU’s model has limits. If the U.S. risks diffusion, the EU risks inertia. Both can be mistaken for resilience, but neither guarantees protection. What bureaucracy guarantees is documentation, not defense.
Interdependence and Fragility
Both systems face the same dilemma: interdependence.
It magnifies both stability and fragility. A local failure can ripple across sectors and become a global event — yet shared infrastructure also provides recovery pathways.
When an AWS region fails, another often takes over. That’s designed resilience, but it isn’t limitless. As we’ve seen, microservice architecture provides local stability but global fragility. The more distributed a system becomes, the harder it is to understand its failure points.
When everything depends on everything else, “critical infrastructure” starts to lose meaning.
The goal isn’t to eliminate dependencies — that’s impossible — but to decide which ones we can live with.
The Coordination Gap
Coordination, or the lack of it, is the real challenge.
A recent Carnegie Endowment report put it plainly: the issue isn’t whether to protect critical systems, but how to coordinate that protection across borders.
The internet made infrastructure transnational.
Regulation, however, still stops at the border. The wider that gap grows, the more fragile the entire system becomes.
We’re trying to protect a global network at a time when global cooperation is at a low point.
During the October AWS outage, responses were swift — and revealing:
In the UK, debates about tech sovereignty resurfaced. In Brussels, attention turned to reducing dependence on U.S. hyperscalers. Across tech journalism, the consensus was clear: a global AWS outage exposes fragile digital foundations.
And they’re right. But this technical failure has become political ammunition. The debate has shifted from uptime to control — who controls the tools we can’t function without?
From Protection to Fragmentation
Once something is labeled critical, every government wants jurisdiction.
Every region wants its own version. The intent is protection; the result is fragmentation.
This same tension shows up in debates about Digital Public Infrastructure (DPI) versus Critical Information Infrastructure (CII).
DPI emphasizes inclusion, interoperability, and openness. CII emphasizes security, restriction, and control.
Both serve public goals — they just stem from different design instincts.
For example, India’s Aadhaar identity system began as an open platform for inclusion. Five years later, it was reclassified as protected critical infrastructure. That shift was probably necessary, but it raises an uncomfortable question:
Should systems managing that level of personal data ever have operated without such protections?
Move too early, and you stifle innovation.
Move too late, and you amplify harm.
The challenge is timing — and trust.
How do we decide when openness must yield to oversight, and how do we maintain public confidence when that shift happens?
Declaring something critical is never neutral. It’s a political act.
In the digital economy, criticality itself becomes a kind of currency — attracting investment, lobbying, and influence.
If a nation declares a platform critical, is it for resilience or for leverage?
Realistically, it’s both.
If The Infrastructure We Forgot We Built was about recognizing new kinds of critical systems, this reflection argues for restraint.
Declaring everything critical doesn’t make us safer — it makes prioritization impossible.
Resilience requires hierarchy: knowing what must endure, what can fail safely, and how recovery happens in between.
That’s uncomfortable for policymakers. Safety sounds equitable; prioritization sounds elitist.
But resilience demands choice. It asks us to build systems that tolerate failure rather than pretending it won’t happen.
The more we classify, the more we appear to protect — and the less control we have when it matters most.
Maybe the real task isn’t expanding the list of critical infrastructure, but learning to live with a smaller one.
Because protection is ultimately about trade-offs:
The harder we try to protect everything, the more fragile we make the whole.
[00:13:33]
That’s it for this week’s episode of The Digital Identity Digest.
[00:13:38]
If this helped make things clearer — or at least more interesting — share it with a friend or colleague.
Connect with me on LinkedIn @hlflanagan, and if you enjoyed the show, please subscribe and leave a rating on your favorite podcast platform.
You can also read the full post at sphericalcowconsulting.com.
Stay curious, stay engaged, and let’s keep these conversations going.
The post The Paradox of Protection appeared first on Spherical Cow Consulting.
Romance scams. Spear phishing. Authorized Push Payment fraud (APP fraud). These social engineering attacks are no longer marginal threats. For banks and financial institutions, they represent one of the fastest-growing forms of fraud – costing billions each year and eroding customer trust and institutional reputation.
In our first article of our fraud series, we revealed who is behind this global enterprise worth over $1 trillion and looked inside their vast complexes around the world, housing hundreds to thousands of trafficked workers. Now, we turn our focus towards how scam compounds operate; how they replicate corporate structure, scale with technology, deploy Fraud-as-a-Service (FaaS) and drive threats that risk not just money, but reputation and trust.
Fraud Inc.: Departments like real companies
Step inside a scam compound and what you’ll find looks less like a criminal hideout and more like a corporate headquarters. Inside, these operations function as fully fledged business ecosystems.
It all begins with procurement, the recruitment process that fuels the enterprise. Recruiters post fake job ads on social media and employment platforms, offering high salaries and promising conditions. Many who apply are students, retirees, or people in vulnerable economic situations. Few realize they’re being drawn into a human trafficking network. Once they arrive at what they believe is their new workplace, they find themselves trapped within guarded compounds and forced into labour – trained and deployed to defraud victims around the world.
From there, new arrivals enter structured training academies that mirror legitimate corporate onboarding. They are given scripts, coached on tone and persuasion, and taught to impersonate trusted individuals or institutions. They learn how to overcome objections, create urgency, and craft convincing messages and emails – all the hallmarks of professional sales training, repurposed for deception.
Once trained, recruits join the call centres, the heart of the operation. Floor after floor of desks are filled with “sales teams” executing scams around the clock. Performance is tracked obsessively: conversion rates, value per victim, number of successful interactions, and response times to leads. High performers are rewarded. Those who fall behind face severe punishment.
Underpinning it all are the operations and IT teams, ensuring the smooth running of the criminal enterprise. Infrastructure is maintained, systems monitored, and data managed. Meanwhile, payroll and accounting functions handle the proceeds, laundering the fraudulently obtained funds and reinvesting them to expand and sustain the operation further.
But perhaps most sophisticated is the R&D unit: its sole purpose is to stay one step ahead of banks’ fraud prevention measures. These teams constantly evolve and fine-tune new attack methods to bypass the latest defenses. They test social engineering workflows, refine bypasses for two-factor authentication and explore how to exploit gaps in identity verification. Increasingly, they use AI tools to deepen deception with deepfake voice impersonations, synthetic IDs or AI-generated phishing platforms.
On paper, you would not be able to distinguish the internal structure from that of a legitimate company.
Scaling fraud with AI & FaaS
No single compound has to reinvent the wheel – and increasingly, these large criminal enterprises are even franchising out their operations. Through Fraud-as-a-Service (FaaS) models, they sell or lease “pluggable fraud kits” on the dark web. These contain identity-spoofing services, exploit packages, and script libraries, all available with a few clicks, making it easier for individuals to deploy sophisticated APP scams or impersonation attacks without any prior technical or scam background. It’s a franchise model for cybercrime.
Using software and AI to streamline scams
Scammers must reach high call-volume KPIs every day, and to do so, they rely on Voice-over-IP (VoIP) services. VoIP allows them to make international calls cheaply via the internet while spoofing caller IDs with UK or EU country codes to appear more credible. These tools also provide a steady supply of fresh phone numbers when agents’ numbers get blacklisted as spam.
Scammers also use software stacks that mirror legitimate corporate tools. CRM-style dashboards track leads and capture victim information like investment experience, call history and personal details. Stolen identity databases enable highly personalised attacks, and increasingly, AI chatbots automate message personalisation and generate deepfakes. Tools like ChatGPT are actively deployed inside compounds to craft convincing investor narratives and sustain prolonged, trust-building conversations with victims.
Why banks must look beyond the transaction
Fraud losses are exploding. In 2024, consumers in the U.S. lost over $12.5 billion to scams, with investment and imposter scams alone making up most of it. In Norway, losses from social engineering rose 21% between 2021 and 2022, reaching NOK 290.3 million ($25-30 million USD) as more users were manipulated into authorizing payments – a trend that has also been noted by European banks, which saw digital payment fraud rise by 43% in 2024 compared to 2023, with social engineering tactics increasing by 156% and phishing by 77%.
These operations hurt banks in far more ways than immediate financial loss. Each successful scam erodes trust – from customers, regulators and the public. When customers believe their bank can’t protect them, they may flee to competitors or lose faith. Regulatory scrutiny and fines also increase, especially as social engineering becomes the fraud vector regulators are watching most closely.
The human toll and what can be done
It is apparent that fraud is shifting from solely technical compromises to manipulations of human trust – and not only the trust of those deceived into sending money. Many scammers are recruited under false pretenses, trafficked, or working under duress – a grim reality upon which these industrial fraud machines are built.
Tools to fight (social engineering) fraud
Social engineering scams are among the most challenging threats banks face today. Unlike traditional fraud such as forged documents, these scams manipulate genuine customers into authorizing payments or sharing sensitive information – often without realizing they’re being deceived. This is especially true in cases like APP fraud, where the victim is tricked into sending the money themselves. Because the transaction appears legitimate and is initiated by the account holder, detecting these scams demands a new level of vigilance and smarter technology.
To combat this, banks need tools that go beyond standard identity checks. Solutions must be able to spot subtle signs of coercion and manipulation in real time. Video-based verification solutions are purpose-built for this: they are the only verification method capable of detecting social engineering attempts through dynamic, human-led interactions and social-engineering–style questioning that reveal behavioral inconsistencies or signs of distress indicating a customer is being manipulated by a scammer.
With social engineering, the focus shifts from verifying identity to understanding intent. That’s where platforms like the IDnow Trust Platform come in. By integrating behavioral biometrics and signals such as erratic transaction histories, geographical inconsistencies, and device switching, the platform flags suspicious patterns and enables real-time risk assessment throughout the entire customer lifecycle, not just at onboarding.
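As a rough illustration of how such signals might be combined, here is a hypothetical rule-based risk scorer. The field names, weights, and thresholds are invented for the example and do not reflect the IDnow Trust Platform’s actual model.

```python
# Hypothetical rule-based risk scorer combining the behavioral signals mentioned above.
# Weights, thresholds, and field names are illustrative only.
from dataclasses import dataclass


@dataclass
class SessionSignals:
    txn_amount_zscore: float      # how unusual this payment is vs. the customer's history
    country_changed: bool         # geographic inconsistency since last login
    new_device: bool              # device switching
    coached_typing_pattern: bool  # typing cadence consistent with someone dictating instructions


def risk_score(s: SessionSignals) -> float:
    """Return a 0..1 score; higher means social engineering is more likely in play."""
    score = 0.0
    score += min(abs(s.txn_amount_zscore) / 5.0, 1.0) * 0.4
    score += 0.2 if s.country_changed else 0.0
    score += 0.2 if s.new_device else 0.0
    score += 0.2 if s.coached_typing_pattern else 0.0
    return round(score, 2)


def decide(s: SessionSignals) -> str:
    """Map the score to an action: allow, step up to video verification, or block and review."""
    r = risk_score(s)
    if r >= 0.7:
        return "block_and_review"
    if r >= 0.4:
        return "step_up_video_verification"
    return "allow"


print(decide(SessionSignals(txn_amount_zscore=4.2, country_changed=True,
                            new_device=True, coached_typing_pattern=False)))
```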
In addition, end-user education is a critical pillar: in the UK, where APP fraud losses have been especially high, banks are now required to reimburse victims up to £85,000. With prevention efforts now in place, case volumes have fallen by 15%.
Together, these capabilities transform fraud prevention from reactive patching to proactive defense.
Social engineering has always existed but in today’s digital, hyperconnected world, it has evolved into a global trade. What once were isolated scams have become industrialized operations running 24/7, powered by automation and scale. Fraud factories exploit the weakest link in the chain – human vulnerability – making them harder to detect and the biggest threat to banks today. For financial institutions, the challenge is no longer about patching single points of failure, it’s about dismantling entire production lines of deception. Understanding what happens inside these operations is now the first line of defense in a war that criminals are currently winning.
Interested in more stories like this one? Check out:
The true face of fraud #1: The masterminds behind the $1 trillion crime industry – to explore who is behind the fastest-growing schemes today, where the hubs of so-called scam compounds are located, and what financial organizations must understand about their opponent.
The rise of social media fraud: How one man almost lost it all – to learn about everything from romance fraud to investment scams and the multitude of ways that fraudsters use social media to target victims.
By
Nikita Rybová
Customer & Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn
Belém, Brazil, 11 November 2025. Viable Cities, in collaboration with UN-Habitat and Dark Matter Labs, is inviting urban and funding partners — including governments, development donors, philanthropies, foundations and investors — to co-design a new grant call for climate-neutral, resilient, and future-ready cities.
Following the successful 2021 Climate Smart Cities Challenge, developed in collaboration with UN Habitat, Viable Cities has implemented a programme of system demonstrators in Sweden as part of its mission to support cities in becoming climate neutral by 2030. Exploring multi-actor climate city contracts, integrated action and investment plans, as well as national-local collaboration, this initiative provided the building blocks for the current European 100 Climate-neutral and Smart Cities Mission. Combining breakthrough actions across key emission sectors, Swedish cities have started demonstrating how building portfolio approaches and public-private ecosystems can scale game changing interventions and create the conditions for a new climate neutral normal.
System demonstrators, accelerating and scaling the transition
System demonstrators are designed to test how whole-systems approaches across governance, finance, infrastructure, data, and citizen engagement can accelerate the transition to climate neutral and resilient cities that are futureproof. They also explore how to create an agile and derisked operating framework for public and private actors to design and implement viable businesses and value cases at scale. Using key societal priorities such as the energy transition, affordable housing, and aggregated purchasing power to help launch new innovations, technologies and markets as initial wedges, they start a transition journey building momentum, partnerships, and long term impact. Early examples include CoAction in Lund and STOLT in Stockholm, which focus on the nexus of energy, housing, and mobility and the realisation of emission-free inner cities, while also exploring new ways of organising collaboration and investment to achieve climate-neutrality at city scale.
Building on this work, Viable Cities, Dark Matter Labs, and UN-Habitat have since 2021 been collaborating with cities in Brazil, Uganda, Colombia, and the UK to apply and adapt the system demonstrator approach. The partnership works with cities including Curitiba, Makindye Ssabagabo, Bogotá, and Bristol to explore how systemic innovation can help them transition. Together, these efforts aim to generate practical learning about how cities can transform toward climate neutrality and resilience through coordinated, system-wide action.
Game changers, driven by local leadership
In Lund, Sweden, a game changer approach was set up as part of the system demonstrator: With EnergyNet connecting deregulation to deployable infrastructure, the green transition becomes commercially possible.
EnergyNet is a new way to manage the distribution of electricity. This is important because today’s electricity grids have major challenges in managing local production, storage and sharing. The system is suitable for use in energy communities, but can also be used outside. EnergyNet makes it possible to connect an unlimited amount of local energy resources, which creates completely new conditions for low electricity prices for large quantities of green electricity. How does it work? EnergyNet is developed according to the same principles as the Internet. It is therefore decentralized, which makes it significantly more resistant to disruptions. Through new types of power electronics, electricity distribution can now be completely controlled by software. Classic challenges for the electricity grid such as frequency and balance are no longer blocking. The new networks are not only decentralized but also distributed, which makes it easier to solve electricity needs as close to the consumer as possible.
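As a purely conceptual illustration of the “as close to the consumer as possible” principle described above, the toy sketch below matches a consumer’s demand to the nearest local producers first. It is not EnergyNet code; the producer names, distances, and greedy allocation rule are invented for the example.

```python
# Conceptual toy model of distributed, software-controlled allocation: meet each
# consumer's demand from the nearest local producers first. Purely illustrative;
# not EnergyNet's actual control logic.
from dataclasses import dataclass


@dataclass
class Producer:
    name: str
    location_km: float   # position along a simple 1-D grid
    available_kw: float


def allocate(demand_kw: float, consumer_km: float, producers: list[Producer]) -> list[tuple[str, float]]:
    """Greedily draw power from the closest producers until demand is met."""
    plan = []
    remaining = demand_kw
    for p in sorted(producers, key=lambda p: abs(p.location_km - consumer_km)):
        if remaining <= 0:
            break
        draw = min(p.available_kw, remaining)
        if draw > 0:
            plan.append((p.name, draw))
            remaining -= draw
    return plan


producers = [
    Producer("rooftop-solar-A", location_km=0.4, available_kw=3.0),
    Producer("battery-B", location_km=1.2, available_kw=5.0),
    Producer("wind-C", location_km=6.0, available_kw=20.0),
]
print(allocate(demand_kw=6.0, consumer_km=0.0, producers=producers))
# -> [('rooftop-solar-A', 3.0), ('battery-B', 3.0)]
```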
The EnergyNet in Lund, driven by the CoAction initiative, represents a breakthrough in integrating multiple energy solutions into a unified, city-wide system. It was set up as a collaborative multi-stakeholder platform bringing together public authorities, businesses, and citizens to co-design a sustainable energy network. The approach integrates local renewable energy sources, smart grids, and demand-side management to optimize energy use across districts. This collaborative governance model fosters cross-sector partnerships and supports a data-driven approach to managing energy efficiency, helping Lund meet its climate targets while offering a model for scalable urban energy transitions globally.
Bristol’s Affordable Housing Initiative, driven by the Housing Festival, represents a game-changing approach to addressing the city’s housing crisis, combining climate-smart and social rent housing solutions. In response to a chronic social housing deficit, with 18,000 people on the waiting list and over 1,000 families in temporary accommodation, the initiative focused on aggregating small brownfield sites across the city to enhance housing viability using Modern Methods of Construction (MMC). The initiative’s main goal was to demonstrate how aggregation of these sites could help create net-zero social homes on small plots that are often seen as unviable for traditional housing projects.
The housing system demonstrator aims to test this aggregation model by building 25 zero-carbon social rent homes across six small sites in Bristol, which would not have been feasible individually. A digital tool was developed to help identify these sites and assess their viability, while a collaborative multi-stakeholder approach — involving Bristol City Council, Atkins Realis, Edaroth, and Lloyds Banking Group — was key to moving from concept to implementation. The project’s unique approach also includes a redefined notion of ‘viability’, integrating social infrastructure investments alongside traditional capital repayment models.
The initiative’s innovative approach has garnered support for scaling through the Small Sites Aggregator program, which aims to unlock thousands of small, underutilized brownfield sites across the UK. This strategy is seen as a path towards building 10,000 homes annually and addressing wider housing shortages, with ongoing testing in cities such as Bristol, Sheffield, and London’s Lewisham Borough. Through this work, the Housing Festival has created the Social Housing at Pace Playbook, which outlines a replicable ecosystem solution to deliver affordable, climate-smart housing at scale. This initiative has demonstrated the potential for collaboration across sectors, innovative financing, and climate-conscious design to provide a pathway for cities worldwide facing similar housing challenges.
Bristol’s Affordable Housing Initiative, driven by the Housing Festival, is an innovative model for tackling housing affordability through community-led co-design and collaborative financing. The initiative brought together local authorities, housing developers, social enterprises, and citizens to explore new ways of creating affordable, sustainable homes. The Housing Festival served as a platform for crowdsourcing ideas, testing alternative financial models, and showcasing eco-friendly building techniques. By blending public, private, and philanthropic investment, the initiative created a dynamic ecosystem that accelerates the delivery of affordable housing, prioritizing local engagement and long-term sustainability. This approach is revolutionizing how cities can rethink housing challenges by embedding innovation into the policy framework.
Looking forward: launching a global and distributed system demonstrator initiative
The first step in program alignment is the development of a System Demonstrator Grant Call, as part of the new Viability Fund for Cities. Building on the experiences of system demonstrators in Europe, Latin America and Africa, the ambition is to develop a new standard system demonstrator global grant call. The first phase will prioritise Brazil, California, India, Sweden, Ukraine and selected global programmes. The goal is to create a shared practical framework that funders can adapt and apply in their own contexts to support system demonstrator initiatives, but which at the same time allows for joint learning, implementation, and demand-side aggregation.
Between January and March 2026, three co-design meetings will be held, with drafting and review work taking place in between. Through this process, participating organisations will jointly develop a general, open source, call text and an operating and fundraising structure that can be used to launch coordinated calls for proposals in multiple countries. In April 2026, Viable Cities, Dark Matter Labs, UN-Habitat and other partners will reflect and deliberate on the outcomes of the dialogues to decide on the launch of an international call for system demonstrators.
Organisations interested in taking part in this collaborative process are invited to submit an expression of interest. Participation is flexible, and actors can step in or out at any time before March 2026.
Submit your expression of interest to join the co-design process and help shape the future of system demonstrator funding.
https://form.typeform.com/to/JOGXlMai
For more information, contact systemdemo@viabilityfund.org
The post The Best Identity Verification Software Providers in 2026 appeared first on 1Kosmos.
The post Hot 25 Travel Startups for 2026: Indicio appeared first on Indicio.
In our recent live workshop, Introduction to Decentralized Identity, Richard Esplin (Dock Labs' Head of Product) and Agne Caunt (Dock Labs' Product Owner) explained how digital identity has evolved over the years and why decentralized identity represents such a fundamental shift.
If you couldn’t attend, here’s a quick summary of the three main identity models they covered:
For years, HYPR and Yubico have stood shoulder to shoulder in the mission to eliminate passwords and improve identity security. Yubico’s early and sustained push for FIDO-certified hardware authenticators and HYPR’s leadership as part of the FIDO Alliance mission to reduce the world’s reliance on passwords have brought employees and customers alike into the era of modern authentication.
Today, that partnership continues to expand. As enterprise adoption of YubiKeys continues to accelerate worldwide, HYPR and Yubico are proud to announce innovations that help enterprises to further validate that the employees receiving or using their YubiKeys are assured to the highest levels of identity verification.
HYPR Affirm, a leading identity verification orchestration product, now integrates directly with Yubico’s provisioning capabilities, enabling organizations to securely verify, provision, and deploy YubiKeys to their distributed workforce with full confidence that each key is used by the right, verified individual.
Secure YubiKey Provisioning for Hybrid Teams
Security leaders routinely purchase YubiKeys by the hundreds or thousands, only to confront a stubborn challenge: securely provisioning those keys to a remote or hybrid workforce quickly and verifiably.
Manual processes, from shipment tracking to recipient activation, are no longer adequate for modern security. The current setup, while seemingly robust, lacks the critical identity assurance needed to withstand today's threats. Even the most advanced hardware security key is compromised if it's issued or activated by an unverified individual. What’s needed is not just faster fulfillment, but a secure, automated bridge that links verified identity directly with hardware credentialing.
What YubiKey Provisioning with HYPR Affirm Delivers
Enterprises can now link a verified human identity to a hardware-backed, phishing-resistant credential before a device is shipped. Yubico provisions a pre-registered FIDO credential to the YubiKey, binds it to the organization’s identity provider (IdP), and ships the key directly to the end user - no IT or security team intermediation required. The user receives a key that’s ready to activate in minutes - no shared secrets over insecure communications, no guesswork, zero gaps of trust. This joint approach streamlines operations while preserving Yubico’s gold-standard hardware security and user experience.
How It Works: Pre-Register → Verify → Activate
The flow is seamless. To activate a YubiKey, HYPR Affirm verifies that the intended user is, in fact, the right individual through high-assurance identity verification. Its orchestration capabilities include options such as government ID scanning, facial biometrics with liveness detection, location data, and even live video verification with peer-based attestation. Policy settings can be easily grouped by role and responsibility.
Once verified, the user is issued a PIN to activate the pre-registered, phishing-resistant credential on the YubiKey, linked to the organization’s identity provider. When the user receives their key, activation is simple, secure, and immediate.
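For readers who think in code, here is a minimal sketch of the pre-register, verify, and activate steps described above. Every name in it is hypothetical; it is not HYPR Affirm’s or Yubico’s API, just a way to show how identity verification gates PIN issuance.

```python
# Illustrative sketch of the Pre-Register -> Verify -> Activate flow described above.
# All names are hypothetical; this is not HYPR's or Yubico's actual API.
import secrets
from dataclasses import dataclass
from typing import Optional


@dataclass
class PreRegisteredKey:
    serial: str
    credential_id: str              # FIDO credential pre-registered against the IdP
    bound_user: str                 # the employee the key was provisioned for
    activation_pin: Optional[str] = None
    active: bool = False


def pre_register(serial: str, user: str) -> PreRegisteredKey:
    """Step 1: the credential is provisioned and bound to the IdP before the key ships."""
    return PreRegisteredKey(serial=serial, credential_id=secrets.token_hex(16), bound_user=user)


def verify_identity(id_document_ok: bool, liveness_ok: bool) -> bool:
    """Step 2: stand-in for high-assurance verification (ID document scan + liveness check)."""
    return id_document_ok and liveness_ok


def activate(key: PreRegisteredKey, user: str, id_document_ok: bool, liveness_ok: bool) -> str:
    """Step 3: only the verified, matching user receives the activation PIN."""
    if user != key.bound_user:
        raise PermissionError("Key was pre-registered for a different employee.")
    if not verify_identity(id_document_ok, liveness_ok):
        raise PermissionError("Identity verification failed; activation withheld.")
    key.activation_pin = f"{secrets.randbelow(10**6):06d}"
    key.active = True
    return key.activation_pin


key = pre_register(serial="YK-001234", user="alice@example.com")
pin = activate(key, user="alice@example.com", id_document_ok=True, liveness_ok=True)
print("Activation PIN issued:", pin)
```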
The result is an end-to-end, verifiable trust chain that gives IT, security, and compliance teams the assurance that:
The YubiKey was issued to a verified user. The credential was provisioned securely and cannot be intercepted. An auditable record ties the verified identity to the hardware-backed credential.
Scalable Remote Distribution and Faster Rollouts
This is built for the real world: companies that buy 100, 1,000, or 10,000 keys and need to deploy them across regions, time zones, and employment types. By anchoring every key to a verified user before it ships, organizations reduce failed enrollments, eliminate back-and-forth helpdesk tickets, and accelerate time-to-protection for global teams.
Beyond Day One: Resets, Re-issues, and Role Changes
Implementing automated identity verification checks into the YubiKey provisioning process streamlines initial deployment, but the same model applies after initial rollout. When a new employee is being onboarded, or a key is lost, damaged, or reassigned, HYPR Affirm can re-verify identity at the moment of risk, and Yubico can provision a replacement credential with the same tight linkage between proofing and issuance. This reduces social-engineering exposure during high-risk helpdesk moments and keeps lifecycle events as deterministic as day one.
Building a Future of Trusted, Effortless Authentication
Yubico set the global benchmark for hardware-backed, phishing-resistant authentication. HYPR is extending that foundation to unlock identity assurance at scale - ensuring every YubiKey is ready to protect access from day one.
Together, we’re transforming what has traditionally been a manual, trust-based process into a verifiable, automated, and user-friendly standard for enterprise security.
From my perspective, this partnership represents something bigger than integration. It’s a proof point that security and simplicity can coexist at scale - and that’s what excites me most. We’re helping organizations move faster toward a passwordless future where verified identity and hardware-backed trust work seamlessly, everywhere.
Learn more about how HYPR and Yubico are redefining workforce identity and authentication for the modern era: Explore the Integration.
HYPR and Yubico FAQ
Q: What changes with this new HYPR and Yubico partnership?
A: Identity verification and YubiKey provisioning are now tightly connected, so each key is pre-registered to a user before shipment and is activated through identity verification upon arrival.
Q: How does this improve remote rollouts?
A: Enterprises can ship keys globally with proof that intended recipients are the ones who activate the device, reducing logistics friction and failed enrollments.
Q: What compliance benefits does this provide?
A: The verified identity event is linked to the cryptographic credential, producing a clear audit trail and aligning with NIST 800-63-3’s assurance model (IAL for proofing, AAL for authentication) while enabling AAL3 from first use.
Q: Does this help with loss, replacement, or re-enrollment?
A: Yes. HYPR Affirm can trigger re-verification for high-risk events (like replacement or role change) before provisioning, reducing social-engineering risk and maintaining assurance over time. Yubico Enterprise Delivery allows organizations to seamlessly replace lost authenticators in a secure and simple workflow.
Q: What is the end-user experience like?
A: Users receive a pre-registered YubiKey and activate it with a simple identity verification. They then log in with phishing-resistant passkeys - no passwords or complex setup.
In today’s cloud-first enterprise landscape, organizations face unprecedented challenges in managing identity and access across distributed, hybrid environments. Traditional on-premises IAM systems have become operational bottlenecks, with deployment cycles measured in weeks rather than hours, security vulnerabilities emerging from static configurations, and scaling limitations that can’t keep pace with business growth. As enterprises accelerate their digital transformation and embrace cloud-native architectures, these legacy constraints threaten competitive advantage and operational resilience.
At Radiant Logic, we recognized these industry-wide pain points weren’t just technical challenges—they represented a fundamental shift in how IAM must be delivered and managed in the cloud era.
Addressing the Cloud-Native IAM GapThe enterprise IAM landscape has been stuck in a legacy mindset while the infrastructure beneath it has transformed completely. Organizations are migrating critical workloads to Kubernetes clusters, embracing microservices architectures, and demanding the same agility from their IAM infrastructure that they have achieved in their application delivery pipelines. Yet most IAM solutions still operate with monolithic deployment models, manual configuration processes, and reactive monitoring approaches that belong to the pre-cloud era. Setting up new environments can take weeks, and keeping everything secure and compliant is a constant battle with the rollout of version patches and updates.
The Three Critical Gaps in Traditional IAM Delivery
Through our extensive work with enterprise customers, we identified the following critical gaps in traditional IAM delivery:
Deployment velocity: enterprises need IAM environments provisioned in hours, not weeks, to match the pace of modern DevOps practices.
Operational resilience: IAM systems must be designed for failure, with automatic healing capabilities and zero-downtime updates.
Real-time observability: security teams need continuous visibility into IAM performance, usage patterns, and potential threats as they emerge.
Radiant Logic’s cloud-native IAM approach addresses these gaps by fundamentally reimagining how IAM infrastructure is delivered, managed, and operated in cloud-native environments.
Re-Imagining Your IAM Operations with a Strategic Cloud-Native Architecture
Our Environment Operations Center (EOC) is exclusively available as part of our SaaS offering, representing our commitment to cloud-native IAM delivery. This isn’t simply hosting traditional software in the cloud—it is a ground-up reimagining of IAM operations leveraging Kubernetes orchestration, microservices architecture, and cloud-native design principles.
Why EOC Is Different from Traditional Cloud Hosting
Every EOC deployment provides customers with their own private, isolated cloud environment built on Kubernetes foundations. This cloud-native, container-based approach delivers four strategic advantages that traditional IAM deployments simply cannot match.
Agility through microservices architecture: Each component of the IAM stack operates as an independent service that can be updated, scaled, or modified without affecting other system elements. This eliminates the risk of monolithic upgrades that have historically plagued enterprise IAM deployments and enables continuous delivery of new features and security patches.
Resilience through Kubernetes orchestration: The EOC leverages Kubernetes’ self-healing capabilities, automatically detecting and recovering from failures at the container, pod, and node levels. This means your IAM infrastructure maintains availability even when individual components experience issues, providing the operational resilience that modern enterprises demand.
Automation through cloud-native tooling: Manual configuration and deployment processes are replaced by automated workflows that provision, configure, and maintain IAM environments according to defined policies. This reduces human error, accelerates deployment cycles, and ensures consistent security posture across all environments.
Real-time observability through integrated monitoring: The EOC provides comprehensive visibility into system health, performance metrics, and security events through cloud-native observability tools that integrate seamlessly with existing enterprise monitoring infrastructure.
Key Takeaway: Cloud-native IAM replaces static deployments with flexible, self-healing, continuously observable environments.
The EOC’s cloud-native architecture enables sophisticated AI-driven operations management that goes far beyond traditional monitoring approaches. Our platform continuously analyzes metrics including CPU utilization, memory consumption, network traffic patterns, and application response times across your Kubernetes-based IAM infrastructure.
How AI Can Detect and Resolve Issues Automatically
When our AI detects anomalous patterns—such as unexpected spikes in authentication requests, unusual network traffic flows, or resource consumption trends that indicate potential security threats—it doesn’t just alert operators. The system automatically triggers remediation actions, such as scaling pod replicas to handle increased load, reallocating resources to maintain performance, or isolating potentially compromised components while maintaining overall system availability.
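As a simplified illustration of this kind of automated remediation, the sketch below scales a deployment when an observed metric crosses a threshold, using the official Kubernetes Python client. The deployment name, namespace, and threshold are assumptions for the example; this is not the EOC’s implementation.

```python
# Minimal sketch of metric-triggered scaling, the kind of remediation described above.
# Deployment name, namespace, and thresholds are assumptions for illustration;
# this is not the EOC's implementation.
from kubernetes import client, config


def scale_if_overloaded(auth_requests_per_sec: float,
                        deployment: str = "idp-frontend",
                        namespace: str = "iam",
                        threshold: float = 500.0,
                        max_replicas: int = 10) -> None:
    """If observed load crosses the threshold, add a replica (up to a cap)."""
    config.load_kube_config()                # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()

    current = apps.read_namespaced_deployment(deployment, namespace).spec.replicas or 1
    if auth_requests_per_sec > threshold and current < max_replicas:
        apps.patch_namespaced_deployment_scale(
            name=deployment,
            namespace=namespace,
            body={"spec": {"replicas": current + 1}},
        )
        print(f"Scaled {deployment} to {current + 1} replicas")
    else:
        print("No scaling action needed")


if __name__ == "__main__":
    scale_if_overloaded(auth_requests_per_sec=750.0)
```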
This proactive approach to operations management represents a fundamental shift from reactive problem-solving to predictive optimization. Instead of waiting for issues to impact users, the EOC identifies and addresses potential problems before they affect service delivery.
Unified Management: Purpose-Built for Enterprise Operations
The EOC consolidates all aspects of IAM operations management into a single, intuitive interface designed specifically for enterprise security and IT teams. Our dashboards provide real-time visibility into system health, performance trends, and security posture across your entire IAM infrastructure.
Streamlining Everyday IAM Operations Through One Interface
Critical operations such as application version management, automated backup orchestration, and security policy enforcement are streamlined through purpose-built workflows that integrate naturally with existing enterprise tools. The platform’s responsive design ensures full functionality whether accessed from desktop workstations or mobile devices, enabling operations teams to maintain visibility and control regardless of location.
Because the EOC is built specifically for our SaaS offering, it includes deep integration with Radiant Logic’s IAM capabilities while maintaining compatibility with your existing identity, monitoring, logging, and security infrastructure. This ensures seamless operations without requiring wholesale replacement of existing tooling.
Future-Ready: Adaptive Security and Compliance
The EOC’s cloud-native foundation enables adaptive security capabilities that automatically adjust protection levels based on real-time risk assessment. Our compliance management tools leverage automation to maintain regulatory adherence across dynamic, distributed environments, reducing the manual overhead traditionally associated with compliance reporting and audit preparation.
As enterprises continue their cloud transformation journey, the EOC evolves alongside changing requirements, leveraging Kubernetes’ extensibility and our continuous delivery capabilities to introduce new features and capabilities without disrupting ongoing operations.
Transform Your IAM Operations
By delivering cloud-native IAM infrastructure through our SaaS platform, we are helping enterprises achieve the agility, resilience, and security required to compete in the cloud era.
Ready to see how to transform your identity and access management operations? Contact Radiant Logic for a demo and discover how our cloud-native SaaS innovation can accelerate your organization’s digital transformation journey.
The post Rethinking Enterprise IAM Deployments with Radiant Logic's Cloud-Native SaaS Innovation appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
The post Returns Abuse in E-Commerce | Link Index Report appeared first on Liminal.co.
In today’s connected world, your identity is more than your name or password. It’s your access key to everything from your bank to your email to your online shopping carts. But while technology has made life easier, it has also opened new doors for identity theft.
Fortunately, trusted biometric solutions are here to close those doors. These systems don’t just protect your data. They protect you using your unique traits to make identity theft nearly impossible.
The Growing Problem of Identity Theft
Identity theft isn’t a problem for tomorrow. It’s happening right now. According to cybersecurity analysts, global cases of online identity theft have jumped by more than 35% in just a year. Hackers now use deepfakes, AI-generated profiles, and synthetic data to impersonate real people.
Once your information is stolen, trying to recover it can be like chasing a ghost.
The most common forms of identity fraud include:
Financial theft: Stealing banking or credit details for unauthorized use Medical identity theft: Using stolen identities for treatment or prescriptions Synthetic identities: Creating fake people from pieces of real data Social or digital impersonation: Cloning accounts to scam othersIt’s not just about losing money. Victims spend months repairing their reputation, accounts, and credit. The best way to win this fight is by stopping it before it starts and biometric identity theft protection does exactly that.
Why Biometrics Are the Future of Identity Theft Protection
Biometrics use your unique physical and behavioral features, like your face, fingerprint, or voice, to verify your identity. Unlike passwords or PINs, they can’t be stolen, guessed, or shared.
Modern systems powered by AI are incredibly accurate. The NIST Face Recognition Vendor Test reports that advanced facial recognition models reach over 99% accuracy. That means they can verify you faster and more securely than traditional login methods.
Biometric security isn’t just the future of identity theft protection services. It’s becoming the standard for how we protect everything we value online.
How Biometric Identity Monitoring Services Work
Traditional identity theft monitoring only tells you something went wrong after it happens. But biometric protection acts before any damage occurs. It’s active, precise, and nearly foolproof.
Here’s how it works step by step:
1. CaptureThe system starts by securely capturing your biometric data, such as a face scan. It’s quick, natural, and effortless. This becomes your digital signature, a personal identity key that no one else can copy.
2. EncryptionYour biometric data is instantly encrypted. Instead of storing your actual face or fingerprint, it’s turned into coded data that even a hacker couldn’t understand. This is where real identity theft prevention begins.
3. MatchingWhenever you try to log in or verify your identity, the system compares your live scan with your stored encrypted data. If it matches, access is granted. If it doesn’t, the system blocks entry and triggers identity fraud detection to check for suspicious behavior.
4. AlertIf the system spots something unusual, it alerts you immediately or locks down access. This rapid response stops fraud before a breach can occur.
You can see how this works by trying Recognito’s Face Biometric Playground. It’s a fun, interactive way to see how biometric verification distinguishes real people from imposters in real time.
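To make the four steps above concrete, here is a minimal, illustrative Python sketch of the capture, protect, match, and alert flow. The embedding function is a stand-in for a real face-recognition model, the “protection” step is symbolic rather than production cryptography, and the names, key handling, and threshold are assumptions for illustration only.

```python
# Illustrative sketch of the capture -> encrypt -> match -> alert flow.
# extract_embedding() stands in for a real face-recognition model; protect()
# only demonstrates that raw templates are never stored in the clear.
import hashlib
import math
import secrets

STORAGE_KEY = secrets.token_bytes(64)   # per-deployment key (illustrative)
enrolled_templates = {}                 # user_id -> protected template bytes

def extract_embedding(image: bytes):
    # Stand-in for an ML model: hash the image into a 16-value pseudo-embedding.
    digest = hashlib.sha256(image).digest()
    return [b / 255.0 for b in digest[:16]]

def protect(embedding):
    # "Encrypt": serialize and XOR with a key so the raw vector is never stored.
    raw = bytes(round(x * 255) for x in embedding)
    return bytes(a ^ b for a, b in zip(raw, STORAGE_KEY))

def recover(blob):
    raw = bytes(a ^ b for a, b in zip(blob, STORAGE_KEY))
    return [b / 255.0 for b in raw]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def enroll(user_id: str, image: bytes):
    # Capture: turn the enrollment scan into a protected template.
    enrolled_templates[user_id] = protect(extract_embedding(image))

def verify(user_id: str, live_image: bytes, threshold: float = 0.95) -> bool:
    # Match: compare the live scan against the stored template; Alert on mismatch.
    score = cosine(recover(enrolled_templates[user_id]), extract_embedding(live_image))
    if score < threshold:
        print(f"ALERT: verification failed for {user_id} (score={score:.2f})")
        return False
    return True

enroll("alice", b"alice-enrollment-photo")
verify("alice", b"alice-enrollment-photo")   # same "scan": passes
verify("alice", b"someone-else-entirely")    # different "scan": typically triggers the alert
```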
The Best Identity Theft Protection Uses Biometrics
The best identity theft protection doesn’t wait for alerts. It stops fraud before it starts. That’s what biometrics do so well: they make your physical presence part of the security process.
Modern systems use:
Facial recognition to instantly confirm identity
Liveness detection to ensure it’s a real person, not a photo or deepfake
Behavioral biometrics to monitor how users type or move
Voice recognition for call-based verification
Businesses can integrate these tools using Recognito’s Face Recognition SDK and Face Liveness Detection SDK. Together, they form the core of intelligent identity monitoring services that protect users from digital fraud without adding friction.
Real-Life Examples of Biometric Identity Fraud Prevention
1. Banking and Fintech
A global bank implemented facial verification to confirm customer logins. Within months, they prevented hundreds of fraudulent account openings. Fraudsters tried using edited selfies, but the liveness detection caught every fake.
2. E-commerce
Online retailers now use face recognition at checkout to confirm identity. Even if a hacker has your card details, they can’t mimic your live face or expressions.
3. Healthcare
Hospitals are starting to use biometrics for patient verification. This prevents criminals from using stolen identities for prescriptions or insurance fraud.
These are real examples of identity fraud protection at work. It’s fast, accurate, and much harder for scammers to outsmart.
Compliance and Data Security Come First
The rise of biometrics comes with responsibility. Ethical systems never store your photo or raw data. Instead, they keep encrypted templates that can’t be reverse-engineered.
This approach complies with the GDPR and other global privacy standards. It also promotes transparency, with open programs like FRVT 1:1 and community-driven research such as Recognito Vision’s GitHub. These efforts ensure fairness, security, and accountability across the biometric industry.
How Biometrics Stop Online Identity Theft
Online identity theft has become one of the fastest-growing cybercrimes in the world. Phishing scams, deepfakes, and password breaches make it easy for hackers to impersonate you online.
Biometric technology makes that nearly impossible. Even if criminals get your password, they can’t fake your face, voice, or live presence. AI-powered identity theft prevention systems recognize you using micro-expressions, natural movement, and behavioral patterns.
It’s no wonder that industries like banking, insurance, and remote onboarding are rapidly adopting these systems. They offer the perfect blend of convenience and unbeatable security.
Traditional vs Biometric Identity Protection
Feature | Traditional Protection | Biometric Protection
Verification | Based on what you know (passwords, PINs) | Based on who you are
Speed | Slower, manual authentication | Instant, automated
Accuracy | Prone to errors or guessing | Over 99% accurate
Fraud Prevention | Reactive after breaches | Proactive before breaches
User Experience | Complex and time-consuming | Seamless and secure
If traditional methods are locks, biometrics are smart vaults that open only for their rightful owner.
The Future of Identity Theft Protection Services
The next generation of identity theft protection services will utilize a combination of AI, blockchain, and multi-biometric authentication for comprehensive digital security. Imagine verifying yourself anywhere in seconds, without sharing sensitive personal data.
Future systems will likely combine:
Face recognition for instant authentication
Voice and gesture biometrics for multi-layered security
Blockchain-backed identity to make personal data tamper-resistant
Regulators and innovators are already working together to ensure these systems stay ethical, inclusive, and bias-free. The goal is simple: a safer, more personal internet for everyone.
Staying Ahead with Recognito
Ultimately, identity theft protection is about trust. Biometrics provides that trust by using something only you have.
If you want to explore how biometric security can protect you or your business, learn how Recognito helps organizations secure users through advanced facial recognition and liveness technology, keeping identities safe while making the user experience simple.
Because in the digital world, there’s only one you, and Recognito makes sure it stays that way.
Frequently Asked Questions
1. How does biometric technology prevent identity theft?
Biometric technology uses your unique traits, like your face or voice, to verify your identity. It stops criminals from using stolen passwords or fake profiles, providing stronger identity theft protection than traditional methods.
2. Are biometric identity monitoring services secure?
Yes. Identity monitoring services that use biometrics encrypt your data, so your face or fingerprint is never stored as an image. This makes them safe, private, and nearly impossible for hackers to exploit.
3. What is the best way to protect yourself from online identity theft?
The best identity theft protection combines biometric verification with secure passwords and regular monitoring. Using facial recognition and liveness detection makes it much harder for cybercriminals to impersonate you online.
4. Can biometrics detect identity fraud in real time?
Yes. Modern identity fraud detection systems can instantly recognize fake attempts using AI and liveness checks. They verify real human presence and block fraud before any damage occurs.
The 2025 Gartner Hype Cycle for Digital Identity talks about the growing need for standardization in identity management—especially as organizations navigate fragmented directories, cloud sprawl, and increasingly complex hybrid environments. Among the mentioned technologies, SCIM (System for Cross-domain Identity Management) stands out as a foundational protocol for modern, scalable identity lifecycle management.
Radiant Logic is proud to be recognized in this report. Our platform’s robust SCIMv2 support positions RadiantOne as a key enabler of identity automation, built on open standards and enterprise-proven architecture.
Why Standardized Identity Management MattersSCIM was introduced to replace earlier models like SPML, offering a RESTful, schema-driven protocol to streamline identity resource management across systems. It defines a consistent structure and a set of operations for creating, reading, updating, and deleting (CRUD) identity resources such as User and Group.
Today, SCIM is broadly adopted by SaaS and IAM platforms alike. It reduces manual effort, eliminates brittle custom integrations, and strengthens governance and compliance through standardized lifecycle operations.
Without SCIM—or a consistent identity abstraction layer behind it—organizations are forced to manage identities with ad hoc connectors, divergent schemas, and fragile provisioning scripts. Gartner rightly identifies SCIM as essential to achieving identity governance at scale, enabling consistent policy enforcement and lowering operational risk.
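As a concrete illustration of those lifecycle operations, the sketch below creates a user and later deactivates it against a generic SCIM v2 service using the Python `requests` library. The schema URNs, PatchOp message, and content type are standard SCIM 2.0; the base URL, bearer token, and attribute values are placeholders, and nothing here is specific to RadiantOne’s implementation.

```python
# Generic SCIM v2 provisioning sketch: create a User, then deactivate it with a
# PATCH. Base URL and token are placeholders; schemas are standard SCIM 2.0.
import requests

BASE = "https://scim.example.com/v2"          # placeholder SCIM service root
HEADERS = {
    "Authorization": "Bearer <token>",        # placeholder credential
    "Content-Type": "application/scim+json",
}

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
    "active": True,
}
created = requests.post(f"{BASE}/Users", json=new_user, headers=HEADERS).json()
user_id = created["id"]                       # server-assigned SCIM resource id

# Lifecycle change (e.g. a leaver event): deactivate rather than delete.
deactivate = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "replace", "path": "active", "value": False}],
}
requests.patch(f"{BASE}/Users/{user_id}", json=deactivate, headers=HEADERS)
```

Because every SCIM-enabled application expects the same resource shapes and operations, the same small client can provision into many downstream systems without per-application connectors.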
Radiant Logic’s SCIM ImplementationRadiantOne delivers full SCIMv2 support, allowing organizations to extend standardized provisioning across their entire environment—cloud, on-prem, and hybrid—without rearchitecting existing infrastructure.
As both a SCIM client and server, RadiantOne can expose enriched identity views to downstream applications or ingest SCIM-based data from external sources for correlation and normalization. This bidirectional flexibility eliminates the need for custom connectors and hardcoded integrations.
At the core is RadiantOne’s semantic identity layer, which unifies identity data across sources, ensures consistency, and drives intelligent automation. This data foundation supports not only SCIM-based lifecycle management, but also Zero Trust access control, governance workflows, and AI-driven analytics.
Where RadiantOne and SCIM Deliver Real ValueHere are six practical use cases where Radiant Logic’s SCIM support drives immediate impact:
Accelerated Onboarding with Trusted Identity Data: RadiantOne consolidates authoritative sources—HR, AD, ERP, SaaS—into a single, richly structured identity record. That record is exposed over SCIM v2 (or any preferred connector) to the customer’s existing join-move-leave engines—IGA, ITSM, or custom workflows—so they grant birthright access through the tools already built for approvals and fulfillment. Offering complete and accurate provisioning with minimal integration effort, RadiantOne stays focused on delivering clean, governed identity data rather than duplicating workflow logic.
From SSO to Lifecycle Management: SSO controls access, but SCIM controls who gets access. RadiantOne aggregates and enriches identity data from sources like Active Directory, LDAP, and HR systems, making it available to SCIM-enabled applications. Provisioning decisions are based on accurate, policy-aligned identity, ensuring access is granted appropriately from the start. This closes the gap between authentication and authorization, reducing overhead and aligning with Zero Trust principles.
Simplifying Application Migrations: RadiantOne delivers a clean, normalized identity record and, through its enriched SCIM v2 interface, maps every attribute name and value to the exact schema and format the target expects. This built-in translation removes custom scripts, connector rewrites, and brittle middleware, so admins can load thousands of users into new SaaS platforms quickly during M&A, re-platforming, or app consolidation.
Real-Time Updates as Identity Changes: RadiantOne keeps identity data current as roles change or users depart. Apps simply ask RadiantOne via SCIM v2 for the latest record—no custom sync jobs or code—so they can enforce least-privilege access and de-provision on time while their own workflows remain untouched.
Precision Access for Governance and PAM: Provisioning isn’t just account creation—it’s about controlled access. RadiantOne adds business context to identity data, such as org structure, clearance, and location, so SCIM can support fine-grained entitlements. This aligns with PAM policies, improves audit readiness, and enhances IGA and analytics accuracy.
Keeping Workflows and Business Logic in Sync: SCIM also supports operational workflows. RadiantOne keeps identity attributes—like manager relationships, email, or job status—accurate across systems. This ensures approval chains, directories, and collaboration tools function correctly without manual updates.
Conclusion: Radiant Logic’s SCIM implementation is already powering identity automation in some of the world’s most complex IT environments, proving its value in delivering standards-based, high-integrity identity infrastructure. Book a demo to explore how Radiant Logic’s SCIM-enabled identity platform can transform your organization’s identity management practices, drive operational excellence, and secure your digital identity future.
Disclaimers:
Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
GARTNER is a registered trademark and service mark of Gartner and Hype Cycle is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
The post Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
The post Indicio secures investment from NEC X, accelerating a new era of user-controlled digital identity appeared first on Indicio.
As states modernize public services, privacy must move from principle to practice. Laws define what to protect, but it’s technology that enforces how. Embedding statutory safeguards, such as unlinkability, data minimization, and selective disclosure, directly into the architecture is critical to making privacy protection real and reliable. In this blog post, we explore recommendations around translating privacy law into digital architecture.
Compliance by DesignCompliance with a state’s privacy statutes should be embedded directly into the design and governance of state digital identity, ensuring that protections are enforced through both technology and policy.
One approach is to use personal data licenses, where every credential presentation carries machine-readable terms that specify how the data may be used, for how long it may be retained, and whether it may be shared. Wallets can enforce these licenses automatically, creating automated privacy compliance that is consistent with statutory requirements and reducing reliance on after-the-fact enforcement.
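As a rough illustration of the idea, the sketch below shows one way a wallet might encode and enforce machine-readable license terms before releasing a presentation. The field names and policy checks are hypothetical and are not drawn from any particular SpruceID product or standards schema.

```python
# Hypothetical personal data license attached to a credential presentation,
# plus a wallet-side check that refuses release when the verifier's stated
# use exceeds the license terms. All field names are illustrative.
from datetime import datetime, timedelta, timezone

license_terms = {
    "purpose": "age_verification",        # what the data may be used for
    "retention_days": 0,                  # 0 = verify-and-discard
    "resharing_allowed": False,
    "expires_at": (datetime.now(timezone.utc) + timedelta(minutes=5)).isoformat(),
}

def wallet_approves(request: dict, terms: dict) -> bool:
    """Release the presentation only if the verifier's stated use fits the license."""
    if request["purpose"] != terms["purpose"]:
        return False
    if request.get("retention_days", 0) > terms["retention_days"]:
        return False
    if request.get("will_reshare", False) and not terms["resharing_allowed"]:
        return False
    return True

print(wallet_approves({"purpose": "age_verification", "retention_days": 0}, license_terms))   # True
print(wallet_approves({"purpose": "marketing", "retention_days": 365}, license_terms))        # False
```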
Establishing Reasonable Disclosure NormsStates could also establish the principle of reasonable disclosure, defining contextual norms for when certain attributes may be shared. For example, in a bar setting, presenting “over 21” is a reasonable disclosure, but if the bar requests an email address, that exceeds the scope of the transaction and must be flagged or presented differently.
An insurance company might ask for someone’s basic history, but additionally requesting genetic indicators of future disease may be considered unlawful or predatory. Embedding these rules into wallet UX and verifier obligations ensures that disclosures remain consistent with a state’s privacy laws while still supporting legitimate use cases.
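A wallet-side check for this kind of contextual norm could look something like the sketch below. The norms table and attribute names are invented for illustration; in practice they would come from the governance arrangements discussed next, and anything outside the expected scope would be surfaced to the user rather than silently released.

```python
# Hypothetical "reasonable disclosure" check: each transaction context has a set
# of attributes considered in scope, and anything beyond that is flagged for
# explicit user review. The norms table here is illustrative only.
DISCLOSURE_NORMS = {
    "bar_age_check": {"age_over_21"},
    "hotel_check_in": {"full_name", "document_number"},
    "insurance_quote": {"full_name", "date_of_birth", "basic_history"},
}

def review_request(context: str, requested_attributes: set) -> dict:
    allowed = DISCLOSURE_NORMS.get(context, set())
    excessive = requested_attributes - allowed
    return {
        "auto_release": sorted(requested_attributes & allowed),
        "flag_for_user": sorted(excessive),   # e.g. a bar asking for an email address
    }

print(review_request("bar_age_check", {"age_over_21"}))
print(review_request("bar_age_check", {"age_over_21", "email_address"}))
```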
Governance and Decentralized EnforcementIt is a very difficult but important task to determine the proper governance around agreeing upon “reasonable disclosure” across many different industry use cases. We believe that one entity would not be able to make good judgements across all industry verticals, and so industry engagement is critical for this to be successful.
Further, it remains unclear if a government agency is the best entity to coordinate these efforts, versus non-profits, cooperatives, or even private companies specializing in digital reputation management. This is a hard and open problem in decentralized identity, but necessary to create the benefits while managing the risks of increased user control.
Balancing User Autonomy and System SafetyIt’s our opinion that this should operate in a decentralized manner, with wallets mediating requests and issuers not serving as intermediaries for every transaction. We believe that enforcement of these reasonable disclosure frameworks should be composable across many different sources and list maintainers, and ultimately configured at the wallet level.
We should “push decision-making towards the edges” as much as possible, while ensuring reasonable defaults which provide an acceptable trade-off between user choice and safety.
Incentivizing Privacy by DesignTo further protect residents, states could consider imposing an insurance requirement on verifiers or entities that retain personally identifiable information (PII). This creates a financial incentive to minimize data collection and retention, while ensuring that residents are protected if breaches occur.
States could also consider tightly restricting the criteria under which a request may transmit PII and result in full identification. Finally, it would also be possible for wallet providers to align on a privacy-preserving fraud-signal mechanism, where relying parties that overcollect data are detected through anonymized, aggregated reporting so that investigations and enforcement can take suitable action.
Putting Privacy Law Into ActionTranslating privacy law into digital architecture is both a technical and civic responsibility. It demonstrates how statutory principles, such as unlinkability, minimal disclosure, and individual control, can be implemented in real systems. When wallets enforce policy through personal data licenses and reasonable disclosure frameworks, compliance becomes built-in and verifiable.
By embedding privacy into the core architecture, governments and institutions can establish a new standard for privacy-by-design governance that protects individuals and fosters confidence in digital services. SpruceID enables governments and organizations to turn privacy principles into secure, trusted digital systems. To learn more, contact our team.
Contact Us
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.
We sat down with Christopher Krause, Senior Manager Customer Finance at congstar, to discuss how the company’s partnership with IDnow enables seamless eSIM activations and postpaid sign-ups in seconds through AI-powered identity verification, why digital trust has become as important as price or network speed, and why telcos are uniquely positioned to become anchors of digital identity in the era of EU Digital Wallets and decentralized credentials.
congstar is known for fair, transparent tariffs and a simple, digital customer experience. What role does identity verification play in keeping the onboarding process both secure and smooth?We put convenience and security on an equal pedestal. When a customer signs up for a postpaid plan, we offer a range of identification options, especially automated identity verifications so that only real, eligible customers get through – this protects against fraud and satisfies legal requirements in Germany.
When customers sign up for postpaid plans or activate eSIMs, how do you ensure the process feels instant and seamless, while still meeting all regulatory and fraud-prevention requirements?For postpaid or eSIM activation, speed is king, but we never cut corners on compliance. A perfect example is our sub-brand fraenk, our fully digital mobile phone app tariff. Here we rely on fully digital KYC tools, in our case IDnow’s automated identity verification solution, that scan a photo ID and run a quick selfie/liveness check. These AI-powered checks turn around in seconds, replacing old-school video calls or lengthy paperwork. As a result, signing up feels almost instant to the customer – yet it still meets all legal requirements.
Another good example is our congstar website, where we have incorporated the verification step into the checkout process. By doing the identity check with a fast, in-browser process (no extra app needed) and clearly explaining each step, customers hardly feel any friction.
In short, we ensure our customers are safe while keeping the process simple and transparent – a key part of our “fair and worry-free” brand promise.
You’ve been working with IDnow for identity verification. How does this collaboration support congstar’s goals for digital efficiency, compliance and customer satisfaction?Our partnership with IDnow is a cornerstone of this approach. IDnow’s automated identity verification solution is AI-powered and designed exactly for telco onboarding. It lets us verify identities fully automatically using just a smartphone. The benefit is twofold: it accelerates the process for users, and it guarantees compliance with strict regulations.
Thanks to that, we can scale up our digital sales without bottlenecks – maintaining our light, digital touch while staying on the right side of the law. In practice, this means high customer satisfaction because sign-ups are almost instant, and our operations save time – all contributing to our goal of a smooth, yet secure and digital experience.
With growing eSIM adoption and automated onboarding, where do you see the biggest opportunities or challenges for congstar in the next few years?
Looking ahead, the rise of eSIMs and automated onboarding is a big opportunity for us. Analysts expect eSIMs to boom soon. For us, this means we can offer even more flexible, instant activations. It also cuts costs – no more plastic SIM cards or waiting for mail delivery. The flip side: as onboarding goes 100% digital, we need to stay vigilant against evolving fraud, like SIM swap attacks or deepfakes. We’re preparing by continuously improving our automated checks and monitoring tools. Overall, the shift is positive – it lets us focus on the best customer experience and leaves us more bandwidth to innovate on products and services. The main challenge is simply staying one step ahead of bad actors as we grow digitally.
Do you see digital trust as a new differentiator in the telecom market, similar to how speed or price once defined competition?Absolutely – we see digital trust becoming a real differentiator. In a mature market like ours, price and speed are table stakes. What sets a brand apart now is how much customers trust it with their heart, their data and security. Trust wins loyalty: research shows that trusted telcos gain more market share, foster long-term customers and are recommended more than others. Our brand is built on transparency and fairness, so emphasizing trust feels natural for us.
When a customer goes through an identity check, what do you want them to feel – safety, simplicity, control?In the identity check itself, we want customers to feel a sense of calm and confidence. They should feel that the process is simple, because we guide them clearly through each step, and respectful, because they decide what information to share. Altogether, we want people to walk away thinking: “That was easy, and I know my account is protected.”
How does your verification journey contribute to that emotional experience?Our verification flow is designed to build those positive feelings. For example, we use the in-app browser for IDnow’s automated identity verification solution in our fraenk app, which keeps the process friendly and fast. The user sees clear instructions and immediate feedback, so they never feel lost. Every step is optimized for transparency: we show progress bars, explain why we need each check, and never ask for data twice. The result is a consistent, reassuring experience that strengthens the feeling of security and control.
How do you prepare for upcoming regulations like eIDAS 2.0 and the EUDI Wallet, and what opportunities do these create for telcos?
We’re monitoring the developments around eIDAS 2.0 and the EU Digital Wallet and we are seeing them as enablers rather than headaches. As the regulations come into force, we review them with our identification partners and examine how we can further improve identification with the new options available. For telcos, the opportunity could be big: according to experts, eIDAS-compliant credentials mean we can verify any EU customer’s identity seamlessly and with reduced risk of fraud.
In a world of digital wallets and decentralised identity, how do you see the telco’s role in verifying and protecting digital identity?Telcos have a vital role to play. We already have something others don’t have: a verified link between a real person and a SIM card. That makes us natural authorities for certain credentials – for instance, confirming that a person is a current mobile subscriber, or verifying age to enable services. Industry analysts note that telcos are well-positioned as “mobile-SIM-anchored” issuers of digital credentials.
How important is orchestration, i.e. connecting verification, fraud and user experience, to achieving a scalable, future-proof onboarding process?
Orchestration is absolutely critical for scaling securely. We can’t treat identity checks, fraud detection and user experience as separate silos. Instead, we tie them together. For example, if our system flags an order as high-risk, it immediately triggers additional steps. Conversely, if everything looks legitimate, the user sails through. This end-to-end coordination (identification, device risk profiling and behavior analytics) is what lets us grow quickly without ballooning costs.
How do data-sharing initiatives or consortium-based approaches help strengthen fraud prevention across the telecom sector?Industry-wide collaboration is a force-multiplier against fraud because fraudsters don’t respect company boundaries. For instance, telecoms worldwide have started exchanging fraud intelligence through platforms like the Deutsche Fraud Forum. In addition, regular and transparent communication with government authorities such as the BNetzA and LKA is essential to set uniform industry standards and combat potential fraud equally.
How do you see AI helping to detect fraud in real time without adding friction for genuine users?Finally, AI is becoming essential to catch clever fraud without inconveniencing users. We use AI and machine learning models that watch behind the scenes. The smart part is that genuine customers hardly notice: the system learns normal behavior and only steps in (with an extra check or block) when something truly stands out. This adaptive learning means false alarms drop over time, reducing friction for legitimate users. We also benefit by deploying solutions like IDnow’s automated identity verification solution, which already uses AI trained on millions of data points to verify identities. In network operations, we complement that with risk scores on each transaction. The net effect is real-time fraud defense that locks out attackers but lets loyal customers pass through hassle-free.
About congstar:Founded in 2007 as a subsidiary of Telekom Deutschland GmbH, congstar offers flexible, transparent, and fair mobile and DSL products tailored for digital-savvy customers. Known for its customer-first approach and innovative app-based brand fraenk, congstar continues to redefine simplicity and security in Germany’s telecom market.
Interested in more from our customer interviews? Check out: Docusign’s Managing Director DACH, Kai Stuebane, sat down with us to discuss how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape. DGGS’s CEO, Florian Werner, talked to us about how strict regulatory requirements are shaping online gaming in Germany and what it’s like to be the first brand to receive a national slot licence.
By Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn
This guide explains how biometrics identity verification systems use facial recognition, liveness detection, and AI to stop fraud, meet global compliance standards, and streamline secure digital onboarding at scale.
The post What to Look for in a Biometrics Identity Verification System first appeared on ComplyCube.
As blue confetti settles in Los Angeles after an historic World Series win, we close the chapter on another electric sports event on Bluesky. It’s during these cultural flashpoints when Bluesky is at its best – when stadium crowds are going wild, you can feel that same excitement flooding into posts.
Numbers can help describe the scale of that feeling. There were approximately 600,000 baseball posts made during the World Series, based on certain key terms. (note: we’re pretty sure that this number is an undercount, as it’s hard for us to accurately attribute to baseball the flood of “oh shit” posts that came in during Game 7)
At least 3% of all posts made on November 1 (Game 7) were about baseball. The final game also resulted in a +30% bump in traffic to Bluesky, with engagement spikes up to +66% from previous days.
We loved seeing World Series weekend in action, but it wasn’t a total surprise. In the last three months, sports generated the third-highest number of unique posts of any topic. Sports posters are also seeing greater engagement rates from posting on Bluesky than on legacy social media apps - up to ten times better.
But in a world of analytics, it’s easy to lose the love of the game. In that regard, we’re fortunate to have a roster of posters who bring the intangibles. They genuinely care about sports. Less hate, more substance and celebration.
yep, this is the baseball place now
— Keith Law (@keithlaw.bsky.social) November 1, 2025 at 10:28 PM
That was the greatest baseball game I’ve ever seen.
— Molly Knight (@mollyknight.bsky.social) November 1, 2025 at 9:19 PM
If this World Series proved anything, it’s that big moments are more enjoyable when they unfold in real time, with real people. Sports has the juice on Bluesky — and every high-stakes game is bringing more fans into the conversation.
Go-to-market teams today face a frustrating challenge: they know more about their market than ever before, but struggle to act on that intelligence fast enough. Teams have the data, but not the coordination. They know who’s buying, but not when or why. And they often watch opportunities slip simply because their systems and workflows can’t keep pace with how fast buyer behavior changes.
That’s why we built Scout, a new product module inside our Link platform designed to help teams move from insight to execution in real time. Scout connects live market, competitor, and buyer signals directly to the tools teams already use—sales, marketing, and planning—turning static intelligence into coordinated GTM action.
Announced during Money20/20 USA 2025, Scout helps organizations in fraud prevention, cybersecurity, risk, and trust-critical sectors close the gap between strategy and execution. Rather than relying on static reports or siloed data sources, teams can now build programs that evolve in lockstep with the market itself.
From signals to outcomesMost GTM teams are drowning in signals but starving for clarity. Buyer demand shifts fast, contacts decay, and generic messaging rarely resonates. Scout addresses this directly by surfacing verified buyer signals, mapping decision-makers, and activating guided plays across the platforms teams already rely on—from outbound tools and CRMs to marketing automation and enablement systems.
“Organizations are drowning in signals but starving for clarity,” said Travis Jarae, CEO of Liminal. “With Scout, the intelligence our customers already trust inside Link becomes immediately actionable. We connect what teams know about the market to what they can actually do about it, right where they already work.”
Why Scout matters for go-to-market teamsScout builds on the same proprietary intelligence graph that powers Link, continuously mapping relationships between vendors, buyers, and technologies to maintain a live, contextual view of the market. Predictive signals trigger guided playbooks that tell teams who to target, what to say, and which proof points will land. The result is faster, more precise go-to-market programs that stay aligned with real buyer intent and market movement.
“Liminal’s approach to data quality and context is unlike anything else in the industry,” said Matthew Thompson, CRO of Socure. “They help teams cut through the noise and make decisions based on what actually matters.”
Early users are already seeing measurable impact: 70–90% account coverage within target segments, roughly 97% automation of repetitive workflows, and 2× faster pipeline conversion compared with traditional outbound approaches. When Liminal deployed Scout across its own campaigns, the platform identified 570 target accounts and delivered personalized outreach to more than 2,400 verified contacts within minutes—demonstrating its ability to turn intelligence into coordinated execution.
How Scout worksScout moves teams from intelligence to impact through four connected layers: Build, Align, Engage, and Scale.
BuildScout gives teams the flexibility to start from scratch or plug into existing account strategies and data models. Whether you’re building new target lists or optimizing current ABM programs, Scout automatically unifies your market intelligence, contact data, and buyer signals into a single activation layer. With deep vertical context, teams can capture market movements as they happen, navigate complex GTM motions, and deploy enterprise-scale campaigns without manual lift.
AlignScout compresses months of planning into automated Account Strategy Plans—a dynamic activation blueprint that clarifies market context, use-case alignment, key purchase criteria, and the campaign-aligned path to close. Each plan includes live buyer insights, prioritized talking points, and step-by-step execution guidance for every account.
EngageNo person is just one person. Scout gives teams deep context around who buyers are, what they need, and how to align value with their goals. Teams gain verified contacts, role-specific insights, and adaptable account strategies that drive meaningful engagement.
ScaleEvery email, call, and inMail reflects the latest competitive, market, and regulatory intelligence. Scout helps reps highlight the strongest proof points, feature what motivates each prospect most, and handle objections with confidence—making every rep an expert in the moment that matters.
The new standard for GTM executionGo-to-market intelligence is no longer about collecting data; it’s about activating it. Scout gives teams a unified intelligence layer that connects what’s happening in the market to what happens next in pipeline. The result is execution that’s coordinated, contextual, and always current.
See how Scout can help your team turn market intelligence into GTM action → Book a demo
The post Introducing Scout — Turn Market Intelligence into GTM Action appeared first on Liminal.co.
The global stablecoin payment market is experiencing explosive growth, with businesses demanding infrastructure that combines seamless cross-border transactions with robust FATF Travel Rule compliance. Shyft Network, a leading blockchain trust protocol and compliance solution provider, has partnered with Endl, a stablecoin neobank and payment rail provider, to integrate Veriscope for regulatory compliance. This collaboration showcases how VASPs can enable secure, compliant digital finance for modern payment infrastructure while prioritizing user privacy and meeting global regulatory standards.
Building the Future of Compliant Payment InfrastructureAs the digital payments landscape evolves, Virtual Asset Service Providers (VASPs) need blockchain compliance tools that ensure regulatory adherence without adding friction to user experience. Veriscope leverages cryptographic proof technology to facilitate secure, privacy-preserving data exchanges, aligning with FATF Travel Rule requirements and AML compliance standards. By integrating Veriscope, Endl demonstrates how next-generation stablecoin payment platforms can achieve regulatory readiness seamlessly while maintaining operational efficiency.
Endl is a regulatory-compliant stablecoin neobank providing fiat and stablecoin payment rails, multicurrency accounts, and crypto on/off ramps designed for businesses and individuals seeking secure cross-border payment solutions. By integrating Veriscope for Travel Rule compliance, Endl strengthens its commitment to security and regulatory compliance, enabling users to seamlessly convert, manage, and transfer both fiat and cryptocurrencies while meeting global AML and KYC compliance standards.
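For context, the FATF Travel Rule requires that identifying information about the originator and beneficiary accompany qualifying virtual-asset transfers between VASPs. The sketch below shows a generic, hypothetical payload of that kind; the field names, threshold, and structure are illustrative and do not represent Veriscope’s actual message format or APIs.

```python
# Hypothetical example of the originator/beneficiary data two VASPs exchange
# under the FATF Travel Rule. Field names are illustrative only.
travel_rule_payload = {
    "originator": {
        "name": "Jane Doe",
        "account": "wallet-or-account-identifier",
        "vasp": "Originating VASP",
    },
    "beneficiary": {
        "name": "Acme Supplies Ltd",
        "account": "receiving-wallet-or-account-identifier",
        "vasp": "Beneficiary VASP",
    },
    "transaction": {
        "asset": "USDC",
        "amount": "2500.00",
        "tx_hash": None,   # filled in once the transfer settles
    },
}

def exceeds_threshold(amount: str, threshold: float = 1000.0) -> bool:
    # Many jurisdictions apply the rule above a de minimis threshold (value varies).
    return float(amount) > threshold

if exceeds_threshold(travel_rule_payload["transaction"]["amount"]):
    print("Travel Rule data exchange required before settlement.")
```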
The Power of Veriscope for Global Payment PlatformsThe Shyft Network-Endl partnership highlights Veriscope’s ability to transform crypto compliance and blockchain regulatory infrastructure for payment platforms:
Seamless FATF Travel Rule Compliance: Automated cryptographic proof exchanges ensure FATF Travel Rule compliance for VASPs without disrupting user workflows or transaction speed
Privacy-First AML Verification: User Signing technology enables secure KYC data verification and AML compliance while protecting customer privacy through blockchain encryption
Global Regulatory Readiness for VASPs: Position Endl for expansion into regulated crypto markets worldwide with built-in compliance infrastructure that meets international standards
Enhanced Trust in Digital Asset Transactions: Demonstrate commitment to security and regulatory standards, building confidence with both users and institutional partners in the stablecoin payment ecosystem
Zach Justein, co-founder of Veriscope, emphasized the integration’s impact on the crypto compliance landscape:
“The future of digital payments lies in seamless integration between fiat and digital assets with robust regulatory compliance. Veriscope’s integration with Endl reflects Shyft Network’s commitment to enabling compliant, privacy-preserving blockchain infrastructure for the next generation of payment platforms. As stablecoin adoption accelerates globally, FATF Travel Rule solutions like this will be essential for VASPs serving international markets and meeting evolving regulatory requirements.”
Endl joins a global network of Virtual Asset Service Providers adopting Veriscope to meet FATF Travel Rule and AML compliance demands seamlessly. This partnership underscores the critical need for secure, compliant crypto infrastructure as stablecoin payments become mainstream across cross-border transactions, international remittances, and B2B digital asset payments.
About VeriscopeVeriscope, built on Shyft Network, is the leading blockchain compliance infrastructure for Virtual Asset Service Providers (VASPs), offering a frictionless solution for FATF Travel Rule compliance and AML regulatory requirements. Powered by User Signing cryptographic proof technology, it enables VASPs to request verification from non-custodial wallets, simplifying secure KYC data verification while prioritizing privacy through blockchain encryption. Trusted globally by leading crypto exchanges and payment platforms, Veriscope reduces compliance complexity and empowers VASPs to operate confidently in regulated digital asset markets worldwide.
About EndlEndl is a digital asset payment infrastructure provider established in 2024. The company operates stablecoin payment rails, multicurrency account services, and fiat-to-crypto conversion infrastructure for commercial and retail clients. Services include cross-border transaction processing, linked card spending functionality, and yield generation on deposited assets. The platform is designed to meet regulatory compliance standards in jurisdictions where it operates, including FATF Travel Rule and anti-money laundering requirements.
Visit Shyft Network, subscribe to our newsletter, or follow us on X, LinkedIn, Telegram, and Medium.
Book a consultation at calendly.com/tomas-shyft or email bd @ shyft.network
Shyft Veriscope Expands VASP Compliance Network with Endl Integration was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post The Future of Identity Security: Demands of a New Era appeared first on uqudo.
“A friend sent over an interesting article by Ross Haleluik that opened with ‘Why it’s not just power grid and water, but also tools like Stripe and Twilio that should be defined as critical infrastructure.'”
The point being made is that there are some services (as demonstrated by the recent AWS outage) that cause significant harm if they become unavailable. The definition of critical infrastructure needs to go beyond power, water, or even core ICT networking.
So let’s talk about that outage. On 19 October 2025, an AWS outage (of course it was the DNS) made the Internet wobble. Payments failed. Authentication broke. Delivery systems froze. For a few hours, the digital economy looked a lot less digital and a lot more fragile.
From my perspective, the strangest things failed. I was in the process of boarding a plane to the Internet Identity Workshop. Air traffic control was fine (yay for archaic systems!), but the gate agent couldn’t run the automated bag check tools. The flight purser couldn’t see what catering had been loaded. And my seatmate completely panicked, wondering if it was even safe to fly.
So many things broke. A lot of things didn’t. Everyday people had no idea how to differentiate what mattered. That moment reminded me how fragile modern “resilience” can be.
We used to worry about power grids, water, and transportation—the visible bones of civilization. Now it is APIs, SaaS platforms, and cloud services that keep everything else alive. The outage didn’t just break a few apps; it exposed how invisible dependencies have become the modern equivalents of roads and power lines.
A Digital Identity Digest: The Infrastructure We Forgot We Built (podcast episode, 16:02). You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
The invisible backboneModern business runs on other people’s APIs (I’m looking at you, too, MCP). Stripe handles payments. Twilio delivers authentication codes and customer messages. Okta provides identity. AWS, Google Cloud, and Azure host it all.
These are not niche conveniences. They form the infrastructure of global commerce. When you tap your phone to pay for coffee, when a government portal confirms your tax filing, or when an airline gate agent scans your boarding pass, one of these services is quietly mediating that interaction.
They don’t look like infrastructure. There are no visible grids, transformers, or pipes. They exist as lines of code, data centers, and service contracts: modular, rentable, and ephemeral. Yet they behave like utilities in every meaningful way.
We have replaced public infrastructure with private platforms. The shift brought convenience and innovation, but also a new kind of risk. Infrastructure used to be something we built and maintained. Now it’s something we subscribe to and assume will stay online. We stopped building things to last and started building things to scale by leveraging someone else’s efficiencies. The assumption that the lights will always stay on has not caught up with reality.
The paradox of “resilient” designCloud architecture is often described as inherently resilient. Redundancy, failover, and microservices are meant to prevent collapse. But “resilient” in one dimension can mean “fragile” in another. I talked about this in an earlier post, The End of the Global Internet.
Designing for resilience makes sense in a world where the Internet is fragmenting. Companies build multi-region redundancies, on-prem backups, and hybrid clouds to protect themselves from geopolitical risk, supply chain issues, and simple human error. That same design logic—isolating risk, duplicating services, layering complexity—often increases fragility at the systemic level. Resilience is considered important, but efficiency is even better.
Microservices make each node stronger while the overall network becomes more brittle. Every layer of redundancy adds another point of failure and another dependency. A service might survive the loss of one data center but not the failure of a shared authentication API or DNS resolver. Local resilience frequently amplifies global fragility.
The AWS outage demonstrated this clearly. A system built for reliability failed because its dependencies were too successful. Interdependence works in both directions. When everyone relies on the same safety net, everyone falls together.
Utility or vendor?This raises a larger question: should services like AWS, Stripe, or Twilio be treated as critical infrastructure? Haleluik says yes. I’m trying to decide where I stand on this, which is why I’m writing this series of blog posts.
In the United States, the FTC and FCC have debated for decades whether the Internet itself (aka, “broadband”) qualifies. If you aren’t familiar with that quagmire, you might be interested in “FCC vs FTC: A Primer on Broadband Privacy Oversight.”
The arguments for the designation are clear. Without broadband access, the modern economy falters. The arguments against it are equally clear. Labeling something as critical infrastructure introduces regulation, and regulation remains politically unpopular when applied to the Internet.
To put it another way, declaring something critical brings oversight, compliance requirements, and coordination mandates. Avoiding that label preserves flexibility and profit margins but leaves everyone downstream exposed. The result is an uneasy middle ground. These systems operate as essential infrastructure but remain governed by private interest. Their reach exceeds their obligations.
In traditional utilities, physical constraints limited monopoly power. Another way to look at it, though, is that traditional utilities are monopolized by government agencies (ideally) to the benefit of all. The economics of software, however, reward centralization. Success creates scale, and scale discourages competition. Very few can afford to get there (big enough to mask failures) from here (small enough to feel them).
I think we’re seeing quite a bit of magical thinking in the stories companies tell themselves about resilience: when your infrastructure depends on someone else’s business continuity plan, governance becomes an act of faith.
When “public” meets “critical”While the debate over “critical infrastructure” in the United States often focuses on regulation versus innovation, the rest of the world is having a different but related conversation under the banner of digital public infrastructure (DPI).
Across the G20 and beyond, governments are grappling with whether digital public infrastructure—such as national payment systems, digital identity programs, and data exchange platforms—should be designated as critical information infrastructure (CII). A recent policy brief from a G20 engagement group argues that while both concepts overlap, they represent opposing design instincts: DPI is built for openness, interoperability, and inclusion, whereas CII emphasizes restriction, control, and national security.
That tension is already visible in India, where systems such as the Unified Payments Interface (UPI) have become de facto critical infrastructure. Although UPI has not been formally designated as CII, its scale and centrality to the nation’s payment system have raised similar questions about oversight and control. Its success has increased trust and security expectations, but also heightened concerns about market access for private and foreign participants, as well as the challenges of cross-border interoperability.
The G20 paper calls for ex-ante (early) designation of digital public systems as critical, rather than ex-post (after deployment), to avoid costly retrofits and policy confusion. But the underlying debate remains unresolved: Should public-facing digital infrastructure be treated like essential utilities, or like regulated assets of the state? The answer may depend less on technology and more on who society believes should bear responsibility for keeping the digital lights on. The answer to that won’t be the same everywhere.
Security versus availabilityThat tension over control doesn’t stop at the policy level. It runs straight through the design decisions companies make every day. When regulation is ambiguous, incentives fill the gap—and the strongest incentive of all is keeping systems online.
Availability has become the real currency of trust. It’s a strange thing, if you think about it logically, but human trust rarely is. (Cryptographic trust is another matter entirely.) Downtime brings backlash, lost revenue, and penalties, so companies do the rational thing: they optimize for uptime above all else. Security comes later. I don’t like it, but I understand why it happens.
Availability wins because it’s visible. Customers notice an outage immediately. They don’t notice an insecure configuration, a quiet policy failure, or a missing audit trail until something goes horribly wrong and the media gets a hold of the after-action report.
That visibility gap distorts priorities. When reliability is measured only by uptime, risk grows quietly in the background. You can’t meaningfully secure systems you don’t control, yet most organizations depend on the illusion that control and accountability can be outsourced while reliability remains intact.
And then there are the incentives, a word I probably use too often, but for good reason. The incentives in this landscape reward continuity, not transparency. Revenue flows as long as the service runs, even if it runs insecurely. Yes, fines exist, but they are exceptions, not deterrents.
What counts as “working” is still negotiated privately, even when the consequences are public. Until those definitions include societal resilience, we’ll continue to mistake uptime for stability.
Regulated by dependenceAll of this sounds like arguments for the critical infrastructure label, doesn’t it? But remember, formal regulation is only one kind of control. Dependence is another, because dependence acts as a form of unofficial regulation.
Society already treats many tech platforms as critical infrastructure even without saying so. Governments host essential services on AWS. Health systems use commercial clouds for patient records. Banks rely on private payment APIs to move billions each day.
We trust these companies to act in the public interest, not because they must, but because we lack alternatives. Massive failures result in conversations like this post about whether these companies need to be more thoroughly monitored. This is the logic of “too big to fail,” translated into digital infrastructure. Authentication services, data hosting, and communication gateways now carry systemic risk once reserved for banks.
We have built a layer of critical infrastructure that is privately owned but publicly relied upon. It operates by trust, not by oversight, and that is a fragile foundation for a system this essential.
The illusion of choiceDependence isn’t only a matter of trust. It’s also the result of market design. The systems we treat as infrastructure are built on platforms that appear competitive but converge around the same few providers.
Vendor neutrality looks fine on a procurement slide but falters in practice.
Ask a CIO whether their organization could migrate off a cloud provider; most will say yes. Ask whether they could do it today, and the answer shortens to silence.
APIs, SDKs, and proprietary integrations make switching possible but painful. That pain enforces dependence. It isn’t necessarily malicious, but it keeps theoretical competition safely theoretical.
Lock-in is the quiet tax on convenience.
The market appears to offer many choices, but those choices often lead back to the same infrastructure. A handful of global providers now underpin authentication, messaging, hosting, and payments.
When a platform failure can delay paychecks, ground flights, or disrupt hospital systems, we’re no longer talking about preference or pricing. We’re talking about public safety.
The same qualities that once made the Internet adaptable—modular APIs, composable services, seamless integration—have made it fragile in aggregate. We built a dependency chain and called it innovation.
That dependency chain doesn’t just reshape markets. It reshapes how societies determine what constitutes essential. When the same few providers sit beneath every major system, “critical infrastructure” stops being a policy category and starts being a description of reality.
The expanding definition of “critical”What we’re looking at is the challenge that “critical” is just too big a concept. As societies become more technically complex, the definition of critical infrastructure also keeps growing.
Power, water, and transport once defined the baseline. Then came telecommunications. Then the Internet. The stack now includes authentication, payments, communication APIs, and identity services. Each layer improves capability while expanding exposure.
Whether or not you believe that these tools should exist, their failure now extends beyond the control of any single organization. As dependencies multiply, the distinction between convenience and infrastructure fades.
An AWS outage can make it really hard to check in for your flight. A Twilio misconfiguration can interrupt millions of authentication codes. A payment API failure can halt payroll for small businesses. These systems support not only individual companies but also the systems that support those companies.
If we decide that these systems function as critical infrastructure, the next question is what to do about them. Recognition doesn’t come free. It brings oversight, obligations, and trade-offs that neither governments nor providers are fully prepared to bear.
The cost of recognitionCalling an API a utility isn’t about nationalization. It’s about acknowledging that private infrastructure now performs public functions. With that acknowledgment comes responsibility.
Critical infrastructure is what society cannot function without. That definition once focused on physical essentials; now it includes the digital plumbing that supports everything else. Expanding that list has consequences. Every new addition stretches oversight thinner and diffuses accountability.
Resources are finite. Attention is finite. When every system is declared critical, prioritization becomes impossible. The next challenge isn’t deciding whether to protect these dependencies, but how much protection each truly deserves.
I can (and will!) say a lot more on this particular subject. Stay tuned for next week’s post.
Closing thoughts
Ross Haleliuk’s observation was an interesting perspective on what utilities look like in modern life. Stripe, Twilio, AWS, and others do not just enable business; they are the business. They have become the unacknowledged utilities of a digital economy.
When I watched Delta’s systems falter during the AWS outage, it was not just an airline glitch. It was a glimpse into the depth of interdependence that defines a modern technical society. If efficiency is the goal, then labeling these systems as critical infrastructure may be the right path. But if resilience is the goal, then perhaps we have other choices to make.
The next outage will not be an exception. It will serve as another reminder that the foundations of the modern world are built on rented infrastructure, and the bill is coming due.
If you’d rather have a notification when a new blog is published rather than hoping to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
[00:00:29]
Hi and welcome back.
I’m recording this episode while dealing with the cold that everyone seems to have right now — so apologies for it being a little bit late. I had hoped the cold would pass before I picked up the microphone again.
But here we are, and today I want to talk about Critical Infrastructure.
Rethinking What Counts as Critical
A friend of mine recently sent over an article by Ross Haleliuk that began with an interesting point:
“It’s not just power grids and water systems that count as critical infrastructure, but also tools like Stripe and Twilio.”
His argument was simple yet powerful — some services have become so essential that when they fail, the impact ripples far beyond their own operations. The AWS outage in October proved that vividly.
Before diving deeper, it’s worth defining what we mean by critical infrastructure.
These are systems and assets so vital that their disruption would harm national security, the economy, or public safety.
Yet, as Haleliuk and others note, this list keeps expanding.
When AWS Went Down
On October 19, 2025, AWS experienced a major outage in one region. A database error cascaded into failures across DNS, payments, and authentication systems.
For a few hours, the digital economy looked far less digital.
I remember it clearly: I was boarding a flight to the Internet Identity Workshop. Air traffic control was fine — archaic but stable. Yet the gate agent couldn’t check bags, and the purser couldn’t confirm catering. My seatmate was visibly anxious about whether it was even safe to fly.
So many systems failed, yet many didn’t. What struck me most was how few people could tell the difference between what mattered and what didn’t.
Invisible Dependencies and Fragile Resilience
This incident made something clear — modern resilience is fragile because we’ve built it atop invisible dependencies that we rarely acknowledge.
Modern businesses run on other people’s APIs:
Stripe handles payments. Twilio delivers authentication codes. Okta manages identity. AWS, Google Cloud, Azure host nearly everything.
These aren’t niche conveniences anymore — they’re the infrastructure of global commerce. When you tap your phone to pay for coffee or file taxes online, one of these services is working silently in the background.
They may not look like traditional infrastructure — no visible grids or pipes — but they behave like utilities.
In short, we’ve replaced public infrastructure with private platforms.
Innovation and Its Risks
This shift has brought incredible innovation but also new risks.
Infrastructure used to be something we built and maintained. Now it’s something we subscribe to and assume will always work.
We’ve optimized for scale, not longevity.
But our assumptions about resilience haven’t kept pace.
There’s a paradox here:
Cloud architectures are built for redundancy and fault tolerance. Yet every layer of resilience adds another dependency — and therefore, another potential point of failure.
When a shared DNS resolver or authentication API fails, the entire ecosystem can crumble, no matter how many backups you have.
Interdependence and Oversight
Interdependence cuts both ways. When everyone relies on the same few providers, a failure for one becomes a failure for all.
So the big question arises:
Should services like AWS or Stripe be treated as critical infrastructure?
Haleliuk argues yes. I’m not entirely convinced — but I see both sides.
In the U.S., agencies like the FTC and FCC have debated for decades whether the Internet itself qualifies as critical infrastructure.
Supporters argue that broadband is essential to modern life; opponents worry that regulation could slow innovation.
Declaring something “critical” brings oversight and compliance. Avoiding the label keeps flexibility — but also leaves society exposed.
We now have systems that operate like infrastructure yet remain governed by private interests. Their influence extends far beyond their legal obligations.
Digital Public Infrastructure and Global Perspectives
Outside the U.S., this debate continues under the banner of Digital Public Infrastructure (DPI).
Governments across the G20 are exploring whether payment systems, digital identity networks, and data exchange platforms should be classified as Critical Information Infrastructure (CII).
A recent G20 policy brief captured the tension well:
DPI emphasizes openness and inclusion. CII emphasizes restriction and control.
For example, India’s Unified Payments Interface (UPI) functions as critical infrastructure in practice, even if not in name.
Its success raises key questions:
Who controls access? How should foreign participants interact? Can cross-border interoperability be trusted?
The G20’s advice: identify critical systems early, before they become too big to retrofit with proper governance. But again, recognition invites regulation, which can stifle the innovation that made those systems successful.
The Incentive Problem
When regulation lags, incentives take over — and the biggest incentive of all is uptime.
Companies prioritize continuity because:
Downtime is visible. Security failures often aren’t.
As a result, availability becomes the currency of trust.
Revenue flows as long as systems run — even if they run insecurely.
Until we include societal resilience in our definition of “working,” we’ll keep mistaking uptime for stability.
The Trust Dilemma
Dependency itself already acts as a form of regulation.
Governments host their services on AWS. Hospitals store patient records in the cloud.
We trust these platforms — not because they’re obligated to serve the public interest, but because we have no alternative.
It’s the logic of too big to fail rewritten for the digital era.
We’ve built a layer of infrastructure that’s privately owned yet publicly indispensable — and it’s running on trust, not oversight.
Dependence isn’t just about trust — it’s about design.
If you ask most CIOs whether they could migrate off a major cloud provider, they’ll say yes.
Ask if they could do it today, and the answer is no.
Proprietary integrations make switching possible but painful. That pain enforces dependence — not maliciously, but through market gravity.
Lock-in is the tax on convenience.
And when a platform failure can delay paychecks, disrupt hospitals, or ground flights, this isn’t about preference — it’s about public safety.
As technology grows more complex, the concept of critical infrastructure keeps expanding.
Power, water, and transportation were once the baseline. Then came telecommunications and the Internet. Now we’re talking about authentication, payments, messaging, and identity services.
Each layer increases capability — but also multiplies exposure.
The real question isn’t whether these systems are critical. They clearly are.
It’s how to manage the responsibilities that come with that recognition.
[00:13:10]
Calling an API a utility doesn’t mean nationalizing it. It means acknowledging that private infrastructure now performs public functions, and that recognition carries responsibility.
Yet every new addition to the “critical” list spreads oversight thinner. If everything’s a priority, nothing truly is.
We have to decide which dependencies deserve protection — and which risks we can live with.
Stripe, AWS, and similar services don’t just enable business. They are business. They’ve become the unacknowledged utilities of our digital economy.
When I saw my airline systems falter during the AWS outage, it wasn’t just a glitch — it was a glimpse into how deeply interwoven our dependencies have become.
If your goal is efficiency, labeling these systems as critical may help create stability through regulation.
But if your goal is resilience, perhaps it’s time to design for flexibility — to accept failure as part of stability, and to plan for it.
The next outage will happen. It won’t be an exception. It will simply remind us that the foundations of the modern world run on rented infrastructure, and that rent always comes due.
[00:15:26]
And that’s it for this week’s episode of The Digital Identity Digest.
If it helped make things a little clearer — or at least more interesting — share it with a friend or colleague and connect with me on LinkedIn @HLFLanagan.
If you enjoyed the show, please subscribe and leave a rating or review on Apple Podcasts or wherever you listen.
You can also find the full written post at sphericalcowconsulting.com.
Stay curious, stay engaged — and let’s keep these conversations going.
The post The Infrastructure We Forgot We Built appeared first on Spherical Cow Consulting.
I want to address the following false statements and allegations made by Sheikh, Goertzel and Burke:
Oct 9, 2025 — Sheikh on X Spaces (Dmitrov)
Oct 9, 2025 — Jamie Burke X Post
Oct 13, 2025 — Sheikh and Goertzel on X Spaces (Benali)
Oct 15, 2025 — Jamie Burke X Post
Oct 17, 2025 — Sheikh on All-in-Crypto Podcast
To put to rest any false claims or allegations of “theft” of property, let’s track the story from start to finish. This post has also been prepared with input from Ocean Expeditions.
Prior to March 2021, Ocean Protocol Foundation (‘OPF’) owned the minting rights to the $OCEAN token contract (0x967) with 5 signers assigned by OPF.
The Ocean community allocation, which would comprise 51% of eventual $OCEAN supply, (‘51% Tokens’) had not yet been minted but its allocation had been communicated to the Ocean community in the Ocean whitepaper.
To set up the oceanDAO properly, OPF engaged commercial lawyers, accountants and auditors to conceive a legal and auditable pathway to grant the oceanDAO the rights of minting the Ocean community allocation.
In June 2022 (but with documents dated March 2021), the rights of minting the 51% Tokens were irrevocably signed over to the oceanDAO. Along with this, seven Ocean community members and independent crypto OGs stepped in, in their individual capacities, to become trustees of the 51% Tokens.
March 31, 2021 — Legal and formal sign-over of assets to oceanDAO from Ocean Protocol Foundation
One year later, in May 2023, using the minting rights it had been granted in June 2022, the oceanDAO minted the 51% Token supply and irreversibly relinquished all control over the $OCEAN token contract 0x967 for eternity. The $OCEAN token lives wholly independently on the Ethereum blockchain and cannot be changed, modified or stopped.
TX ID: https://etherscan.io/tx/0x9286ab49306fd3fca4e79a1e3bdd88893042fcbd23ddb5e705e1029c6f53a068
The 51% Tokens were minted into existence on the Ethereum blockchain. The address holding the $OCEAN tokens sat in the ether, owned by no one. The address could release $OCEAN when at least 4 of 7 signers activated their key in the Gnosis Safe vault, which governs the address.
None of the signers had any claim of ownership over the address, Gnosis Safe vault or the contents (51% Tokens). They acted in the interest of the Ocean community and not anyone else, and certainly not OPF or the Ocean Founders.
During the ASI talks in Q2/2024, Fetch.ai applied significant pressure on OPF to convert the entire 51% Token supply immediately to $FET. OPF pushed back clearly stating that it had no power to do so as the 51% Tokens were not the property of, or under the control of OPF.
During those talks, OPF also repeatedly emphasized to Fetch.ai and SingularityNET that the property rights of EVERY token holder (including those of OPF and oceanDAO) must be respected, and that these rights were completely separate to the ASI Alliance. Fetch.ai and SingularityNET agreed with this fundamental principle.
March 24, 2024 — First discussion about Ocean joining ASI
April 3, 2024 — Pon to the Sheikh (Fetch.ai), Goertzel, Lake, Casiraghi (SingularityNET)
May 24, 2024 — Pon to D. Levy (Fetch.ai)
August 6, 2024 — ASI Board Call where Sheikh calls for ASI Alliance to refrain from exerting control over ASI members
August 17, 2025 — SingularityNET / Ocean Call
It must also be highlighted that oceanDAO was never a party to any agreement with Fetch.ai and SingularityNET. It had its own investment priorities and objectives as a regular token holder. oceanDAO’s existence, and the fact that it was the entity controlling the 51% Tokens with 7 signers, was acknowledged by SingularityNET at the very start of talks in March 2024.
In those discussions, OPF explained the intentions of the oceanDAO to release $OCEAN tokens to the community on an emission schedule.
In mid-2024, after the formation of the ASI Alliance, Fetch minted 611 million $FET tokens for the Ocean community. The sole purpose of minting this 611 million $FET was to accommodate a 100% swap-out of the Ocean community’s token supply of 1.41 billion $OCEAN. This swap-out would be via a $OCEAN-$FET token bridge and migration contract.
At that time, the oceanDAO did not initiate a conversion of the Gnosis Safe vault 51% Tokens from $OCEAN to $FET. The 51% of tokens sat dormant, as they had since being minted in May 2023.
However, with the continued and relentlessly deteriorating price of $FET due to the actions of Fetch.ai and SingularityNET, the Ocean community treasury had fallen in value from $1 billion in Q1/2024 to $300 million in Q2/2025.
oceanDAO therefore decided around April 2025 that it needed to take steps to avoid a further fall in the value of the 51% Tokens for the Ocean community by converting some of the $OCEAN into other cryptocurrencies or stablecoins, so that the oceanDAO would not be saddled with a large supply of a steadily depreciating token.
The immediate and obvious risk to the Ocean community would be that if and when suitable projects come about, the Ocean community rewards could very well be worthless due to the continued fall in $FET price. This was an important consideration for the oceanDAO when it eventually decided that active steps had to be taken to protect the interests of the Ocean community.
Upon the establishment of Ocean Expeditions, a Cayman trust, in late June 2025, oceanDAO transferred signing rights over the 51% Tokens to Ocean Expeditions, which then initiated a conversion of $OCEAN to $FET using the token bridge and migration contract.
TX ID: https://etherscan.io/tx/0xce91eef8788c15c445fa8bb6312e8d316088ce174454bb3c96e7caeb62da980d
Sheikh alluded to this act of conversion of $OCEAN to $FET, along with his incorrect understanding of its purpose, in a podcast.
Oct 17, 2025 — Sheikh speaking on All-in Crypto
However, unlike what Sheikh falsely claimed, Ocean Expeditions’ conversion of the 51% Tokens from $OCEAN to $FET, and the selling of some of these tokens, is in no way a “theft” of these tokens by Ocean Expeditions or by OPF. These are unequivocally not ASI community tokens, not “ASI” community reward tokens and not under any consideration of the ASI community.
Ocean Expeditions converted its $OCEAN holdings into $FET by utilising the $FET that were specifically minted for the Ocean community and earmarked by Fetch.ai for this conversion. It is important to emphasize that Ocean Expeditions did not tap into any other portion of the $FET supply. Simply put, there was no “theft” because Ocean Expeditions had claimed what it was rightfully allocated and entitled to.
Any token movements of the 51% Tokens to 30 wallets, Binance, GSR or any other recipient AND any token liquidations or disposals, are the sole right of, and at the discretion of Ocean Expeditions, and no one else.
Ocean Expeditions sought to preserve the value of the community assets, for the good of the Ocean community. Any assets, whether in $FET, other tokens or fiat, remain held by Ocean Expeditions in trust for the Ocean community. The assets have not been transferred to, or in any other way taken by OPF or the Ocean Founders.
We demand that Fetch.ai, Sheikh and all other representatives of the ASI Alliance who have promulgated any lies, incitement and misrepresentations (e.g. “stolen” “scammers” “we will get you”) immediately retract their statements, delete their social media posts where these statements were made, issue a clarification to the broader news media and issue a formal apology to Ocean Expeditions, Ocean Protocol Foundation, and the Ocean Founders.
We repeat that the 51% Tokens are owned by Ocean Expeditions, for the sole purpose of the Ocean community and no one else.
Ocean Community 51% Tokens was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post What Is eKYC? A Quick Guide appeared first on 1Kosmos.
The Central Bank of the United Arab Emirates has taken a groundbreaking step in financial security.
It is now mandating the phase-out of SMS and email one-time passwords (OTPs).
The difference between effort that multiplies and effort that disappears
In today’s digital age, knowing who’s on the other side of the screen isn’t just a security measure, it’s a necessity. From online banking to employee attendance, businesses across the globe are under pressure to verify users faster and with absolute accuracy. That’s where the biometric face scanner steps in, giving companies the confidence to know every interaction is authentic.
The rise of artificial intelligence has turned face scanning into a cornerstone of identity verification. Whether it’s an AI face scanner, a facial recognition scanner, or an advanced face scanning machine, the goal remains the same: keep things secure without slowing people down.
The Evolution of Facial Scanning Technology
Facial recognition has come a long way since the early 2000s. What once required large systems and clunky cameras now fits into a sleek face scanner device powered by deep learning. Modern facial scanning technology can detect and verify faces in milliseconds while maintaining compliance with global data standards such as GDPR.
AI-driven algorithms analyze facial landmarks, comparing them with stored biometric templates. This process ensures unmatched accuracy. A study conducted by NIST’s Face Recognition Vendor Test confirmed that advanced AI models now achieve over 99% accuracy in matching and verification, outperforming traditional biometric systems like fingerprints under certain conditions.
These results show that biometric verification isn’t just futuristic talk, it’s an essential layer of digital trust.
Why Businesses Are Switching to Face Scanner Biometric Systems
Passwords, ID cards, and manual checks are vulnerable to theft, fraud, and human error. A face scanner biometric solution eliminates these weaknesses. For many businesses, it’s not about replacing human judgment, it’s about enhancing it.
Companies are now using AI face scan systems to authenticate employees, onboard new clients, and manage visitor access seamlessly. Here’s why adoption is growing so fast:
Faster verification: A simple glance replaces lengthy manual identity checks.
Stronger security: Faces can’t be borrowed, stolen, or easily replicated.
Higher accuracy: The system adapts to lighting, angles, and even subtle changes like facial hair.
Better compliance: Aligned with data protection and global standards.
It’s the balance between convenience and control that makes facial recognition scanners invaluable in sectors such as finance, healthcare, retail, and corporate access management.
How a Face Scan Attendance Machine Improves Workforce Management
Time theft and attendance fraud cost businesses millions annually. Traditional punch cards or RFID systems can be manipulated, but a face scan attendance machine offers transparency and efficiency. Employees simply look into a face scan camera, and their attendance is logged instantly.
This system ensures that only real, verified individuals are recorded. No more buddy punching or proxy logins. Companies integrating such systems experience improved productivity and cleaner attendance data. It’s a small change that brings big operational discipline.
Solutions like the face recognition SDK make implementation simple by offering APIs that integrate directly into existing HR and access management software.
The Technology Behind AI Face Scanners
A biometric face scanner operates on the principles of artificial intelligence and computer vision. It starts by mapping key facial points such as eyes, nose, jawline, and contours to create a unique mathematical pattern.
Here’s how the process unfolds:
A face scan camera captures the user’s face in real time.
The AI model extracts biometric data points.
The AI face scanner compares the captured data with stored templates.
The result is an instant verification decision.
Unlike passwords or tokens, facial biometrics are almost impossible to replicate. Many systems also include liveness detection to distinguish between a live person and a photo or mask. Businesses can test this feature through the face liveness detection SDK, ensuring their verification process isn’t fooled by fake attempts.
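To make the capture, extract, compare flow above concrete, here is a minimal sketch in Python. It assumes a generic face-embedding model and an upstream liveness check; the function names, vector size, and 0.6 threshold are illustrative only and do not come from any particular vendor's SDK.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two facial templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(captured_embedding: np.ndarray,
                stored_template: np.ndarray,
                is_live: bool,
                threshold: float = 0.6) -> bool:
    """Return True only if the capture passed the liveness check and the
    two templates are similar enough to count as the same person.
    The threshold is illustrative; real systems tune it against target
    false-match / false-non-match rates."""
    if not is_live:
        return False  # reject photos, screen replays, masks flagged upstream
    return cosine_similarity(captured_embedding, stored_template) >= threshold

# Example with stand-in vectors (a real system would obtain these from an
# embedding model applied to the camera frame and the stored template).
capture = np.random.rand(512)
reference = capture + np.random.normal(0, 0.01, 512)
print(verify_face(capture, reference, is_live=True))
```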
Ensuring Privacy and Data Security
One major concern surrounding facial scanning technology is data privacy. Responsible companies know that collecting biometric data requires careful handling. The good news is that modern systems don’t store raw images. Instead, they use encrypted templates, mathematical representations that can’t be reverse-engineered into a real face.
Organizations adhering to GDPR and global privacy laws can confidently deploy face scanner devices without compromising user rights. Transparency, consent, and clear data retention policies are the pillars of ethical AI use.
To stay updated on compliance standards and performance benchmarks, many developers reference the NIST FRVT 1:1 reports, which highlight progress in algorithmic accuracy and fairness.
Real-World Applications of Face Scanning Machines
Facial recognition scanners have a wide range of real-world applications that continue to grow each year. Here are some key areas where they are being used:
1. Banking and Finance
Facial recognition technology helps prevent identity fraud during digital onboarding, ensuring secure access to banking services.
2. Corporate Offices
These systems provide secure and frictionless access control, allowing employees to enter restricted areas without the need for physical keys or ID cards.
3. Airports
Airports use facial recognition to streamline processes, offering faster and more secure boarding and immigration checks.
4. Education
In education, facial recognition is used for automated attendance tracking and exam proctoring, reducing administrative overhead and ensuring exam integrity.
For developers or businesses looking to explore how these systems work, the face biometric playground provides a hands-on environment to test AI-based facial recognition in action.
Challenges and Ethical Considerations
While the benefits are undeniable, biometric systems must still address several challenges. AI bias, varying lighting conditions, and evolving spoofing methods are ongoing hurdles. Continuous algorithm training using diverse datasets is key to ensuring fairness and reliability.
Ethical implementation also plays a major role. Users must always know when and why their data is being collected. Transparent policies build trust, the same trust that a biometric face scanner promises to uphold.
Open-source initiatives like Recognito Vision’s GitHub repository are helping drive responsible innovation by allowing researchers to refine and test AI-based recognition models openly and collaboratively.
The Future of Face Scanning and Business Verification
As AI becomes more sophisticated, so will biometric systems. Future scanners will combine 3D depth sensing, emotion analytics, and advanced liveness detection to improve security even further.
The evolution of AI face scan systems is not about replacing traditional verification but complementing it, building a security framework that feels effortless to users yet nearly impossible to breach.
Building Trust in the Age of Intelligent Verification
Trust isn’t built in a day, but it can be verified in a second. A well-designed biometric face scanner offers that confidence, enabling companies to know their users without a doubt. From corporate offices to fintech platforms, businesses that invest in intelligent verification today will lead tomorrow’s secure digital economy.
As one of the pioneers in ethical biometric verification, Recognito continues to empower organizations with AI-driven identity solutions that combine precision, privacy, and confidence.
Frequently Asked Questions
1. What is a biometric face scanner and how does it work?
It’s an AI-powered system that analyzes facial features to verify identity in seconds.
2. Is facial recognition technology safe for user privacy?
Yes. Modern systems use encrypted facial templates instead of storing real images.
3. What are the main benefits of using facial recognition in businesses?
It offers faster verification, stronger security, and reduced fraud risks.
4. How can companies integrate a biometric face scanner into their systems?
They can use APIs or SDKs to easily add facial verification to existing software.
Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor.
Data Farming Rounds 160 (DF160) and 161 (DF161) have completed and rewards are now available after a temporary interruption in service from Oct 13 to Oct 30.
DF162 went live on October 30th and concludes on November 6th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.
2. DF structure
The reward structure for DF162 consists solely of Predictoor DF rewards.
Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.
3. How to Earn Rewards, and Claim Them
Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs. To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.
4. Specific Parameters for DF162
Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE
Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.
Predictoor DF rewards are calculated as follows:
First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.
Expect further evolution in DF: adding new streams and budget adjustments among streams.
Updates are always announced at the beginning of a round, if not sooner.
About Ocean and DF Predictoor
Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.
In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.
DF160, DF161 Complete and DF162 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
Introducing a patent-pending framework that fuses AI, DID, and Blockchain to enable secure AI delegation
Imagine this: What if you could give your personal AI assistant a Power of Attorney — so it could act on your behalf? That sounds convenient, but only if it’s done in a fully secure, verifiable, and trustworthy way. No more sharing passwords. No more blind trust in centralized services. Just a cryptographically proven system of delegation.
A recent patent filed by CPLABS, titled “Method for Delegated AI Agent-Based Service Execution on Behalf of a User,” outlines exactly such a future. This patent proposes a new model that combines Decentralized Identity (DID), blockchain, and AI agents — so your AI can act for you, but with boundaries, verifiability, and accountability built in.
The Problem with Today’s Delegation Methods
We delegate all the time to people, apps, and APIs. But today’s delegation models are broken:
Paper-based authorizations are outdated: Physical documents are cumbersome and easily forged, and verifying who issued them is hard.
API keys & password sharing are risky: Tokens can be leaked, and once exposed, there’s no way to limit or track their use. Most systems lack built-in revocation or expiration controls.
There is no clear trace of responsibility: If your AI does something using your credentials, it is recorded as if you did it. There is no audit trail, proof of scope, or consent.
We need a more secure and user-centric model in the age of AI agents acting autonomously.
A New Solution: DID + Blockchain + AI Agent
The patent proposes an architecture built on three core technologies:
1. Decentralized Identity (DID)
Every user has a self-sovereign, blockchain-based digital ID. So does the AI agent — it operates as its own verifiable identity.
2. Blockchain Ledger
All actions and delegations are immutably recorded on-chain. Who delegated what, to whom, when, and how is traceable and tamper-proof.
3. Encrypted Delegation Credential (Digital PoA)
Instead of paper documents, users issue digitally signed credentials. These include:
The agent’s DID
The scope of authority
Expiration timestamp
Revocation endpoint
The entire process runs without centralized intermediaries and is powered by standardized DID and blockchain protocols.
How It Actually Works
The user delegates authority to the AI. The AI can only act using the digital “key” you’ve granted — and every move is auditable.
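To make the delegation model concrete, here is a minimal sketch of what issuing and checking a scoped credential could look like. The field names, scope format, and checks are simplified illustrations loosely based on the verifiable-credential idea described above, not the actual schema defined in the CPLABS patent.

```python
from datetime import datetime, timezone

# Illustrative delegation credential; the fields are hypothetical and
# simplified, not the patent's actual data structure.
delegation = {
    "issuer": "did:example:user-123",        # the human delegator
    "subject": "did:example:agent-456",      # the AI agent's DID
    "scope": {"action": "book_appointment", "domain": "hospital"},
    "expires": "2026-12-31T23:59:59Z",
    "revocation_endpoint": "https://example.org/revocations/abc",
    "proof": "user's signature over the fields above",
}

def service_accepts(credential: dict, requested_action: str,
                    signature_valid: bool, revoked: bool) -> bool:
    """Service-side check: signature, revocation status, expiry, and
    scope must all pass before the agent's request is executed."""
    not_expired = datetime.now(timezone.utc) < datetime.fromisoformat(
        credential["expires"].replace("Z", "+00:00"))
    in_scope = credential["scope"]["action"] == requested_action
    return signature_valid and not revoked and not_expired and in_scope

print(service_accepts(delegation, "book_appointment",
                      signature_valid=True, revoked=False))
```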
Potential Use Cases
Healthcare: The AI assistant retrieves records from Hospital A, forwards them to Hospital B with full user consent, and logs them securely.
Finance: Delegate your AI to automate transfers up to $1,000 per day. Every transaction is verified and capped.
Government services: AI files address changes or document requests using digital credentials — recognized legally as your proxy.
Smart home access: Courier arrives? Your AI is granted temporary “open door” access, which is revoked automatically post-delivery.
Why This Matters: User-Controlled Delegation
This system provides a secure infrastructure for AI-human collaboration, backed by blockchain. It’s like handing your AI a digitally signed key with built-in expiration and tracking — ensuring it never oversteps its bounds.
This patent envisions a simple but powerful future: Your AI can act for you, but only within the rules you define, and everything it does is traceable and accountable.
That’s not just clever tech — it’s the foundation of digital trust in an AI-driven world.
How to Safely Entrust a ‘Digital Power of Attorney’ to My AI
The Era of Giving Authority to My AI
Think about it: what if you could give your personal AI assistant a Power of Attorney so it could handle things on your behalf? What matters most is entrusting it in a way that is completely secure and trustworthy. No more sharing passwords, and no more anxiously handing an app all of your data.
A patent recently filed by CPLABS (titled “Method for Performing Services by Proxy with Authority Delegated from a User, and an AI Agent Using the Same”) points to exactly this future. The technology combines Decentralized Identity (DID), blockchain, and AI agents so that an AI can act on the user’s behalf while everything it does remains verifiable and traceable.
The Problems with Today’s Delegation Methods
We routinely hand tasks off to other people or to software, but current delegation methods have several problems.
The inconvenience of paper-based delegation: Paper powers of attorney are cumbersome to prepare and easy to forge. The relationship between delegator and delegate is hard to confirm, and trust suffers.
The risk of sharing API keys and passwords: Connecting apps or services today usually means sharing API keys or tokens. If a key leaks, an attacker can exercise unlimited authority, and expiration or revocation controls are largely absent.
No way to trace responsibility: Actions an AI performs with my account are recorded as if I did them, so accountability is unclear when disputes arise.
Today’s delegation is either too inconvenient or too risky, and in the age of AI it is simply not enough.
A New Solution: DID + Blockchain + AI Agent
The solution proposed by this patent combines three core technologies.
1. DID (Decentralized Identifier): A digital ID issued without a central authority and managed directly by the user. Both the user and the AI agent hold their own independent DIDs.
2. Blockchain: Records delegation relationships and the AI’s actions in a tamper-proof way, leaving a transparent trail of who delegated which authority to whom, when, and what the AI actually did.
3. A credential-based digital power of attorney: A delegation credential that the user digitally signs and issues, containing the agent’s DID, the scope of authority, an expiration date, and a revocation URL. Anyone can verify it.
All of this happens without a central system, on infrastructure built on DIDs and blockchain.
Here Is How the Flow Actually Works
The user issues a delegation credential to the AI (for example, “my AI may make hospital appointments on my behalf this month”).
The AI presents the credential to the service (the AI submits the credential along with its own DID signature when making a request).
The service performs verification (it checks the user’s and the AI’s DIDs against the blockchain DID registry and validates the credential’s signature).
The scope of authority is checked (requests outside the scope stated in the credential are rejected).
Action logs are recorded (success or failure is written to the blockchain so it can be audited).
Revocation is immediate (the user can cancel the delegation at any time, and the revocation is also reflected on the blockchain).
In short, the AI can act only with the digital key you issued, and every trace remains as auditable evidence.
Use Cases
Healthcare: A patient delegates access to hospital records to their AI. The AI retrieves records from Hospital A and forwards them to Hospital B (misuse is impossible).
Finance: Delegate “automatic transfers up to 1,000,000 KRW per day” to an AI financial assistant; anything over the limit is automatically refused.
Government services: A citizen entrusts the AI with civil tasks such as address changes or issuing a resident registration certificate; the AI is recognized as a lawful digital proxy.
Daily life / smart home: Issue a “door open” permission valid only during the courier’s visit window, automatically revoked afterward.
Why Does It Matter?
User-centered control: Actions I have not permitted are impossible in the first place.
Digital trust: Verifiable, blockchain-based records.
Wider AI automation: More tasks can be delegated on a foundation of trust.
Clear accountability: When something goes wrong, the responsible party is clear.
Ultimately, this system is a new trust infrastructure that lets AI and people collaborate safely. It is like handing the AI a scope-limited digital key and leaving every trace behind like a notarized record.
In Closing
This patented technology is not just an automation tool; it is an example of a digital trust framework that allows AI to act on our behalf. In the coming world of living alongside AI, systems like this will let us collaborate safely, divide responsibility clearly, and enjoy more freedom.
Now is the time to hand your AI a digital key you can truly trust.
Website | https://metadium.com Discord | https://discord.gg/ZnaCfYbXw2 Telegram(KOR) | http://t.me/metadiumofficialkor Twitter | https://twitter.com/MetadiumK Medium | https://medium.com/metadium
How to Safely Entrust a ‘Digital Power of Attorney’ to My AI was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.
A Decentralized Autonomous Organization (DAO) acts as a programmable coordination layer, recording proposals, votes, and outcomes through immutable or verifiable channels. This ensures that every decision can be audited and traced.
For blockchain systems spanning a broad spectrum of applications — from enterprise solutions and government infrastructure to consumer-facing services — this structure provides the transparency and accountability required by regulated entities while enabling decentralized control.
DAO governance delivers substantial value by providing a standardized, neutral framework for coordination that reduces operational and regulatory friction.
Third-Party and In-House DAO Infrastructures
In recent years, the infrastructure supporting DAOs has advanced significantly. A variety of third-party governance solutions now offer stable, enterprise-ready interfaces for managing proposals, conducting votes, and executing multi-signature transactions. Some noteworthy platforms include:
Snapshot: An off-chain, gasless voting platform widely used across leading protocols. It allows flexible voting strategies, quorum requirements, and verifiable results without introducing high transaction costs.
Tally: A fully on-chain governance dashboard built on Ethereum, designed for transparency and auditability of protocol votes, treasury management, and proposal lifecycle tracking.
These solutions form a growing middleware ecosystem that brings governance to the same level of technical maturity as enterprise resource planning systems.
At the same time, in-house DAO frameworks extend beyond generic governance tooling. They integrate DAO logic with the project’s native identity, treasury, and compliance layers, enabling seamless coordination between on-chain and organizational processes. This approach ensures that governance not only reflects community consensus but also aligns with operational and regulatory realities.
DAO Governance as a Mechanism for Neutrality
DAO governance reinforces network neutrality, a crucial characteristic for projects that operate across multiple jurisdictions or regulatory contexts. This structural neutrality diminishes the concentration of control that can lead to compliance issues and enables projects to remain resilient during regulatory or organizational changes.
For blockchain systems aimed at enterprises, DAO infrastructure provides three measurable benefits:
Regulatory Adaptability: Transparent proposal and voting systems create a verifiable governance record suitable for audits, disclosures, or compliance reviews.
Operational Continuity: Distributed governance logic allows decision-making to persist independently of any single corporate entity or leadership group.
Stakeholder Alignment: Token-weighted or role-based participation aligns validators, contributors, and investors under a unified, rule-based coordination framework.
Toward Structured and Resilient Governance
As blockchain networks evolve into critical data and financial infrastructure, governance must progress beyond mere symbolic decentralization. DAO systems offer a structured, compliant, and resilient approach to managing complex ecosystems.
DAOs are not merely voting or staking platforms. They serve as the operational core that defines how decentralized systems make, record, and enforce decisions. Only with a well-structured DAO model can projects establish the legal, operational, and procedural foundation required to function as sustainable organizations.
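As a rough illustration of the token-weighted, quorum-gated decision-making described above, here is a small Python sketch. The quorum value, vote weights, and counting rule are hypothetical and are not Snapshot's, Tally's, or Aergo's actual logic.

```python
# Minimal sketch of token-weighted proposal tallying with a quorum check.
def tally(votes: dict[str, tuple[str, float]], total_supply: float,
          quorum: float = 0.04) -> str:
    """votes maps voter address -> (choice, token weight)."""
    weight_for = sum(w for choice, w in votes.values() if choice == "yes")
    weight_against = sum(w for choice, w in votes.values() if choice == "no")
    participation = (weight_for + weight_against) / total_supply
    if participation < quorum:
        return "failed: quorum not reached"
    return "passed" if weight_for > weight_against else "rejected"

# Example: validators, contributors, and investors voting with their tokens.
votes = {
    "0xvalidator": ("yes", 120_000.0),
    "0xcontributor": ("yes", 15_000.0),
    "0xinvestor": ("no", 90_000.0),
}
print(tally(votes, total_supply=5_000_000.0))  # -> "passed"
```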
BC 101 #7: DAO, Standardization, and Neutrality was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.
At Bluesky, we’re building a place where people can have better conversations, not just louder ones. We’re not driven by engagement-at-all-costs metrics or ad incentives, so we’re free to do what’s good for people. One of the biggest parts of that is the replies section. We want fun, genuine, and respectful exchanges that build friendships, and we’re taking steps to make that happen.
So far, we’ve introduced several tools that give people more control over how they interact on Bluesky. The followers-only reply setting helps posters keep discussions focused on trusted connections, mod lists make it easier to share moderation preferences, and the option to detach quote posts gives people a way to limit unwanted attention or dogpiling. These features have laid the groundwork for what we’re focused on now: improving the quality of replies and making conversations feel more personal, constructive, and in your control.
In our recent post, we shared some of the new ideas we were starting to develop to encourage healthier interactions. Since then, we’ve started rolling out updates, testing new ranking models, and studying how small product decisions can change the tone of conversations across the network.
We’re testing a mix of ranking updates, design changes, and new feedback tools — all aimed at improving the quality of conversation and giving people more control over their experience.
Social proximity
We’re developing a system that maps the “social neighborhoods” that naturally form on Bluesky — the people you already interact with or would likely enjoy knowing. By prioritizing replies from people closer to your neighborhood, we can make conversations feel more relevant, familiar, and less prone to misunderstandings.
Dislikes beta
Soon, we’ll start testing a “dislike” option as a new feedback signal to improve personalization in Discover and other feeds. Dislikes help the system understand what kinds of posts you’d prefer to see less of. They may also lightly inform reply ranking, reducing the visibility of low-quality replies. Dislikes are private and the signal isn’t global — it mainly affects your own experience and, to an extent, others in your social neighborhood.
Improved toxicity detection
Our latest model aims to do a better job of detecting replies that are toxic, spammy, off-topic, or posted in bad faith. Posts that cross the line are down-ranked in reply threads, search results, and notifications, reducing their visibility while keeping conversations open for good-faith discussion.
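As a rough illustration of how signals like these could combine, here is a toy reply-scoring sketch. The field names, weights, and thresholds are hypothetical and are not Bluesky's actual ranking model.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    social_proximity: float     # 0..1, closer neighborhood = higher
    viewer_dislike_rate: float  # 0..1, derived from the viewer's own dislikes
    toxicity: float             # 0..1, from a toxicity classifier

def rank_score(r: Reply) -> float:
    score = 1.0 + 0.5 * r.social_proximity   # boost nearby voices
    score -= 0.3 * r.viewer_dislike_rate     # personal signal, not global
    if r.toxicity > 0.8:
        score -= 1.0                         # down-rank, but don't hide
    return score

replies = [
    Reply("great point, thanks!", 0.9, 0.0, 0.05),
    Reply("this is bait", 0.1, 0.4, 0.95),
]
for r in sorted(replies, key=rank_score, reverse=True):
    print(round(rank_score(r), 2), r.text)
```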
Reply context
We’re testing a small change to how the “Reply” button works on top-level posts: instead of jumping straight into the composer, it now takes you to the full thread first. We think this will encourage people to read before replying — a simple way to reduce context collapse and redundant replies.
Reply settings refresh
Bluesky’s reply settings give posters fine-grained control over who can reply, but many people don’t realize they exist. We’re rolling out a clearer design and a one-time nudge in the post composer to make them easier to find and use. Better visibility means more people can shape their own conversations and prevent unwanted replies before they happen. Conversations you start should belong to you.
We won’t get everything right on the first try. Building healthier social media will take ongoing experimentation supported by your feedback. This work matters because it tackles a root flaw in how social platforms have been built in the past — systems that optimize for attention and outrage instead of genuine conversation. Improving replies cuts to the heart of that problem.
Over the next few months, we’ll keep refining these systems and measuring their impact on how people experience Bluesky. Some experiments will stick, others will evolve, and we’ll share what we learn along the way.
By Trevor Butterworth
AI is going to be everywhere. From virtual assistants to digital twins and autonomous systems, it will reinvent how we do everything. But only if it can be trusted with high value data, only if it can access high quality data, only if there’s user consent to that data being shared, and only if it can be easily governed.
This is where decentralized identity comes in. It removes obstacles, solves problems, and does so in a way that delivers next-generation security. Here are the five ways decentralized identity and its key technology — Verifiable Credentials — puts AI agents and autonomous AI systems on the path to trust and monetization.
1. Authentication
We are going to need to authenticate AI agents. They are going to need to authenticate us. It’s an obvious trust issue when so much data is at stake.
“We” means everything that interacts with an agent — people, organizations, devices, robots, and other AI agents.
Traditional forms of identity authentication aren’t going to cut it (see this recent article by Hackernoon — “When OAuth Becomes a Weapon: AI Agents Authentication Crisis”).
And given the current volume of losses to identity fraud (the estimated global cost of digital fraud was $534 billion over the past 12 months, according to Infosecurity Magazine), the idea that we should now open up massive quantities of high-value data to the same security vulnerabilities is insane.
The first fake AI agent that scams a major financial customer will cause panic, burn trust, and trigger regulation.
Only decentralized, Verifiable Credentials can provide the seamless, secure, and AI-resistant authentication to identify both AI agents and their users. And they enable authentication to occur before any data is shared.
2. Consent
AI needs data to work — and that means a lot of personal data and user data. If you want AI solutions that require access to personal data to comply with GDPR and other data privacy regulations, the “data subject” needs to be able to consent to sharing their data. Otherwise, that data is going nowhere — or you’re headed toward compliance hell.
Verifiable Credentials are a privacy-by-design technology. Consent is built into how they work. This simplifies compliance issues and can be easily recorded for audit.
3. Delegated authority
AI agents are going to need to access multiple data sources. While Verifiable Credentials and digital wallets allow people and organizations to hold their own data, they are not necessarily going to hold all the data needed for a task.
For example, banks and financial institutions have multiple departments. An AI agent that is given permission to access an account holder’s information will need to share that information across different departments, either to access the customer’s data or connect it to other data. It might need to share the data with other agents or external organizations.
Verifiable Credentials make it easy for a person to delegate their authority to an AI agent to go where it needs to go to execute a task, radically simplifying compliance. Decentralized governance (more of which later) simplifies establishing trust between different organizations and systems.
4. Structured data
AI agents and systems need good quality data to do their job (and therefore earn their keep). Verifiable Credentials issued by trusted data sources contain information that’s tamper-proof, that can come from validated documents, and that is structured in a way that each data point can be selectively disclosed.
In other words, by putting information into a Verifiable Credential, we minimize error while structuring it to be easy to consume. In the process, we enable data and purpose minimization to meet GDPR requirements.
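A small sketch of the data-minimization idea: the holder presents only the claims a task needs. Real deployments rely on cryptographic selective-disclosure schemes (for example SD-JWT or BBS+ signatures); the claim names and the plain dictionary here are illustrative only.

```python
# Illustrative credential claims; in practice these come from a signed
# Verifiable Credential issued by a trusted data source.
credential_claims = {
    "given_name": "Ada",
    "family_name": "Lovelace",
    "date_of_birth": "1815-12-10",
    "account_tier": "premium",
    "residential_address": "redacted for this example",
}

def present(claims: dict, requested: set[str]) -> dict:
    """Disclose only the requested claims, nothing more."""
    return {k: v for k, v in claims.items() if k in requested}

# An AI agent checking age-related eligibility needs one field, not five.
print(present(credential_claims, {"date_of_birth"}))
```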
5. Decentralized governance
Finally, we come to one of the less well-known features of decentralized identity: decentralized ecosystem governance, or DEGov as we call it, which is based on the Decentralized Identity Foundation Credential Trust Establishment specification.
DEGov is a way for humans to structure interaction through trust. The governance authority for a particular use case publishes trust lists for credential issuers and credential verifiers in a machine-readable form. This is downloaded by each participant in a credential ecosystem, and it enables a credential holder’s software to automatically recognize that an AI agent issued by a given organization is trustable. These files also contain rules for data presentation workflows.
DEGov enables you to easily orchestrate data sharing: for example, a Digital Travel Credential issued by an airline for a passenger identity can be used by a hotel to automate check-in because the hotel’s verifier software has downloaded a governance file listing the airline as a trusted credential issuer (this also facilitates offline verification, as governance rules are cached).
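Here is a minimal sketch of how verifier software might consult a cached governance file. The structure is a simplified illustration of the trust-list idea, not the actual Credential Trust Establishment schema or Indicio's implementation.

```python
# A cached, machine-readable governance file (illustrative structure).
governance_file = {
    "ecosystem": "travel-demo",
    "trusted_issuers": {
        "did:example:airline": ["DigitalTravelCredential"],
        "did:example:government": ["Passport"],
    },
    "trusted_verifiers": ["did:example:hotel"],
}

def issuer_trusted(gov: dict, issuer_did: str, cred_type: str) -> bool:
    """Offline-friendly check: is this issuer allowed to issue this
    credential type within this ecosystem?"""
    return cred_type in gov["trusted_issuers"].get(issuer_did, [])

# The hotel's verifier checks the airline-issued credential against the
# cached file; no call home is required.
print(issuer_trusted(governance_file, "did:example:airline",
                     "DigitalTravelCredential"))
```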
The value of decentralized governance really comes to the fore when you start building autonomous systems with multiple AI agents. You can easily program which agent can interact with which resource and what information needs to be presented. You can orchestrate interaction and authentication across different departments, domains, sectors.
As you can also enable devices, such as sensors, to generate Verifiable Credentials containing the data they record, you can rapidly share trusted data across domains for use by pre-permissioned AI agents.
In sum, decentralized identity is more than identity or identity authentication — it’s a way to authenticate and share any kind of data across any kind of environment, seamlessly and securely. It’s a way to create digital relationships between participants, even virtual ones.
Indicio ProvenAI
We designed Indicio ProvenAI to do all of the above. It’s the counterpart of the Proven technology we’re deploying to manage borders, KYC, travel and everything in between. It’s why we are now a member of the NVIDIA Inception program.
We see decentralized identity as the key to AI unlocking the right kind of data in the right way. It’s the path to trust, and trust means value.
Contact Indicio to learn how we’re building a world filled with intelligent authentication.
The post Five reasons why AI needs decentralized identity appeared first on Indicio.
In this edition of CryptoCubed, we look at the top crypto cases worldwide. This includes Canada's record-breaking $177 million fine against Cryptomus, Dubai's ongoing enforcement sweep on virtual asset firms, and Trump's pardon.
The post The CryptoCubed Newsletter: October Edition first appeared on ComplyCube.
Defining a pricing benchmark for KYC and AML is an important step in managing compliance expenses effectively. Understanding the factors that drive the costs of KYC and AML helps organizations make more informed pricing decisions.
The post How to Use a KYC AML Pricing Benchmark Effectively first appeared on ComplyCube.
While the ability of artificial intelligence (AI) to optimize certain processes is well documented, there are still genuine concerns regarding the link between unfair and unequal data processing and discriminatory practices and social inequality.
In November 2022, IDnow, alongside 12 European partners, including academic institutions, associations and private companies, began the MAMMOth project, which set out to explore ways of addressing bias in face verification systems.
Funded by the European Research Executive Agency, the goal of the three-year long project, which wrapped on October 30, 2025, was to study existing biases and create a toolkit for AI engineers, developers and data scientists so they may better identify and mitigate biases in datasets and algorithm outputs.
Three use cases were identified:
Face verification in identity verification processes.
Evaluation of academic work. In the academic world, the reputation of a researcher is often tied to the visibility of their scientific papers, and how frequently they are cited. Studies have shown that on certain search engines, women and authors coming from less prestigious countries/universities tend to be less represented.
Assessment of loan applications.
IDnow predominantly focused on the face verification use case, with the aim of implementing methods to mitigate biases found in algorithms.
Data diversity and face verification bias.
Even the most state-of-the-art face verification models are typically trained on conventional public datasets, which feature an underrepresentation of minority demographics. A lack of diversity in data makes it difficult for models to perform well on underrepresented groups, leading to higher error rates for people with darker skin tones.
To address this issue, IDnow proposed using a ‘style transfer’ method to generate new identity card photos that mimic the natural variation and inconsistencies found in real-world data. Augmenting the training dataset with synthetic images not only improves model robustness through exposure to a wider range of variations but also further reduces bias against darker-skinned faces, significantly lowering error rates for darker-skinned users and providing a better user experience for all.
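A generic sketch of the augmentation idea follows: top up an underrepresented group with style-transferred synthetic images until it matches the size used for the majority group. The file names, counts, and simple top-up rule are illustrative assumptions, not IDnow's actual training pipeline.

```python
import random

# Stand-in records for real and style-transferred synthetic ID photos.
real_samples = [{"img": f"real_{i}.png", "group": "underrepresented"}
                for i in range(100)]
synthetic_pool = [{"img": f"styled_{i}.png", "group": "underrepresented"}
                  for i in range(1000)]

def augment(real: list, synthetic: list, target_size: int) -> list:
    """Top up the real samples with synthetic ones until the group
    reaches the target size used for the majority group."""
    needed = max(0, target_size - len(real))
    return real + random.sample(synthetic, min(needed, len(synthetic)))

balanced_group = augment(real_samples, synthetic_pool, target_size=600)
print(len(balanced_group))  # 600: 100 real images plus 500 synthetic ones
```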
The MAMMOth project has equipped us with the tools to retrain our face verification systems to ensure fairness and accuracy – regardless of a user’s skin tone or gender. Here’s how IDnow Face Verification works.
When registering for a service or onboarding, IDnow runs the Capture and Liveness step, which detects the face and assesses image quality. We also run a liveness/anti-spoofing check to ensure that photos, screen replays, or paper masks are not being used.
The image is then cross-checked against a reference source, such as a passport or ID card. During this stage, faces from the capture step and the reference face are converted into compact facial templates, capturing distinctive features for matching.
Finally, the two templates are compared to determine a “match” vs. “non-match”, i.e., whether the two faces belong to the same person.
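Bias in such a pipeline is typically measured by comparing error rates across groups at a fixed decision threshold. Below is a stand-alone sketch of that kind of measurement; the scores, group labels, and threshold are made up, and this is not the MAI-BIAS Toolkit's code.

```python
from collections import defaultdict

def false_non_match_rate(genuine_pairs, threshold=0.6):
    """genuine_pairs: list of (similarity_score, group) for pairs known
    to show the same person. Returns the false non-match rate per group,
    i.e. how often genuine pairs are wrongly rejected."""
    rejected, total = defaultdict(int), defaultdict(int)
    for score, group in genuine_pairs:
        total[group] += 1
        if score < threshold:
            rejected[group] += 1
    return {g: rejected[g] / total[g] for g in total}

pairs = [(0.82, "lighter"), (0.55, "darker"), (0.71, "darker"),
         (0.90, "lighter"), (0.48, "darker"), (0.77, "lighter")]
print(false_non_match_rate(pairs))
# Large gaps between groups indicate the model or threshold needs work.
```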
Through hard work by IDnow and its partners, we developed the MAI-BIAS Toolkit to enable developers and researchers to detect, understand, and mitigate bias in datasets and AI models.
What’s good for the user is good for the business.
We are proud to have been a part of such an important collaborative research project. We have long recognized the need for trustworthy, unbiased facial verification algorithms. This is the challenge that IDnow and MAMMOth partners set out to overcome, and we are delighted to have succeeded.
Lara Younes, Engineering Team Lead and Biometrics Expert at IDnow.
While the MAI-BIAS Toolkit has demonstrated clear technical improvements in model fairness and performance, the ultimate validation, as is often the case, will lie in the ability to deliver tangible business benefits.
IDnow has already begun to retrain its systems with learnings from the project to ensure our solutions are enhanced not only in terms of technical performance but also in terms of ethical and social responsibility.
Top 5 business benefits of IDnow’s unbiased face verification.
Fairer decisions: The MAI-BIAS Toolkit ensures all users, regardless of skin color or gender, are given equal opportunities to pass face verification checks, ensuring that no group is unfairly disadvantaged.
Reduced fraud risks: By addressing biases that may create security gaps for darker-skinned users, the MAI-BIAS Toolkit strengthens overall fraud prevention by offering a more harmonized fraud detection rate across all demographics.
Explainable AI: Knowledge is power, and the Toolkit provides actionable insights into the decision-making processes of AI-based identity verification systems. This enhances transparency and accountability by clarifying the reasons behind specific algorithmic determinations.
Bias monitoring: Continuous assessment and mitigation of biases are supported throughout all stages of AI development, ensuring that databases and models remain fair with each update to our solutions.
Reducing biases: By following the recommendations provided in the Toolkit, research methods developed within the MAMMOth project can be applied across industries and contribute to the delivery of more trustworthy AI solutions.
As the global adoption of biometric face verification systems continues to increase across industries, it’s crucial that any new technology remains accurate and fair for all individuals, regardless of skin tone, gender or age.
Montaser Awal, Director of AI & ML at IDnow.
“The legacy of the MAMMOth project will continue through its open-source tools, academic resources, and policy frameworks,” added Montaser.
For a more technical deep dive into the project from one of our research scientists, read our blog ‘A synthetic solution? Facing up to identity verification bias.’
By
Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn
Zambia, like many African nations, is on a path toward digital transformation. With growing mobile penetration, fintech adoption, and government interest in digital services, the country needs reliable, secure, and scalable technologies to support inclusive growth. One of the most promising tools is Ontology Blockchain — a high-performance, open-source blockchain specializing in digital identity, data security, and decentralized trust.
Unlike general-purpose blockchains, Ontology focuses on building trust infrastructure for individuals, businesses, and governments. By leveraging Ontology’s features, Zambia can unlock innovation in financial inclusion, supply chain transparency, e-governance, and education.
1. Digital Identity for All ZambiansA key challenge in Zambia is limited access to official identification. Without proper IDs, many citizens struggle to open bank accounts, access healthcare, or register land. Ontology’s ONT ID (a decentralized digital identity solution) could:
Provide every citizen with a secure, self-sovereign digital ID stored on the blockchain. Link identity with services such as mobile money, health records, and education certificates. Reduce fraud in financial services, voting systems, and government benefit programs.This supports Zambia’s push for universal access to identification while protecting privacy.
2. Financial Inclusion & Digital PaymentsWith a large unbanked population, Zambia’s fintech growth depends on trust and interoperability. Ontology offers:
Decentralized finance (DeFi) solutions for micro-loans, savings, and remittances without reliance on traditional banks. Cross-chain compatibility to connect Zambian fintech startups with global crypto networks. Reduced transaction fees compared to traditional remittance channels, making it cheaper for Zambians abroad to send money home. 3. Supply Chain Transparency (Agriculture & Mining)Agriculture and mining are Zambia’s economic backbones, but inefficiencies and lack of transparency hinder growth. Ontology can:
Enable farm-to-market tracking of crops, ensuring farmers get fair prices and buyers trust product origins.
Provide traceability in copper and gemstone mining, reducing smuggling and boosting global market confidence.
Help cooperatives and SMEs access financing by proving their transaction history and supply chain credibility via blockchain records.
4. E-Government & Service Delivery
The Zambian government aims to digitize public services. Ontology Blockchain could:
Power secure land registries, reducing disputes and fraud.
Create tamper-proof records for civil registration (births, deaths, marriages).
Support digital voting systems that are transparent, verifiable, and resistant to manipulation.
Improve public procurement processes by reducing corruption through transparent contract tracking.
5. Education & Skills Development
Certificates and qualifications are often hard to verify in Zambia. Ontology offers:
Blockchain-based education records: universities and colleges can issue tamper-proof digital diplomas.
A verifiable skills database that employers and training institutions can trust.
Empowerment of youth in blockchain and Web3 development, opening new economic opportunities.
6. Data Security & Trust in the Digital Economy
Zambia’s growing reliance on mobile money and e-commerce requires strong data protection. Ontology brings:
User-controlled data sharing: individuals decide who can access their personal information.
Decentralized identity verification for businesses, preventing fraud in digital transactions.
Strong compliance frameworks to align with Zambia’s Data Protection Act of 2021.
Challenges to Overcome
Digital literacy gaps: Zambian citizens need training to use blockchain-based services.
Regulatory clarity: Zambia must craft clear policies around blockchain and cryptocurrencies.
Infrastructure: reliable internet and mobile access are essential for blockchain adoption.
Conclusion
Ontology Blockchain provides Zambia with more than just a digital ledger — it offers a trust framework for identity, finance, governance, and innovation. By integrating Ontology into key sectors like agriculture, health, mining, and public administration, Zambia can accelerate its journey toward a secure, inclusive, and transparent digital economy.
This is not just about technology; it’s about empowering citizens, building investor confidence, and positioning Zambia as a leader in blockchain innovation in Africa.
How Ontology Blockchain Can Strengthen Zambia’s Digital Ecosystem was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
London, October 30, 2025 – After three years of intensive work, the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project has published key findings on reducing bias in artificial intelligence (AI) systems. Funded by the EU’s Horizon Europe program, the project brought together a consortium of leading universities, research centers, and private companies across Europe.
IDnow, a leading identity verification platform provider in Europe, was directly involved in the implementation of the project as an industry partner. Through targeted research and testing, an optimized AI model was developed to significantly reduce bias in facial recognition, which is now integrated into IDnow’s solutions.
Combating algorithmic bias in practice
Facial recognition systems that leverage AI are increasingly used for digital identity verification, for example, when opening a bank account or registering for car sharing. Users take a digital image of their face, and AI compares it with their submitted ID photo. However, such systems can exhibit bias, leading to poorer results for certain demographic groups. This is due to the underrepresentation of minorities in public data sets, which can result in higher error rates for people with darker skin tones.
A study by MIT Media Lab showed just how significant these discrepancies can be: while facial recognition systems had an error rate of only 0.8% for light-skinned men, the error rate for dark-skinned women was 34.7%. These figures clearly illustrate how unbalanced many AI systems are – and how urgent it is to rely on more diverse data.
As part of MAMMOth, IDnow worked specifically to identify and minimize such biases in facial recognition – with the aim of increasing both fairness and reliability.
Technological progress with measurable impact
Research projects like MAMMOth are crucial for closing the gap between scientific innovation and practical application. By collaborating with leading experts, we were able to further develop our technology in a targeted manner and make it more equitable.
Montaser Awal, Director of AI & ML at IDnow.
As part of the project, IDnow investigated possible biases in its facial recognition algorithm, developed its own approaches to reduce these biases, and additionally tested bias mitigation methods proposed by other project partners.
For example, as ID photos often undergo color adjustments by issuing authorities, skin tone can play a challenging role, especially if the calibration is not optimized for darker skin tones. Such miscalibration can lead to inconsistencies between a selfie image and the person’s appearance in an ID photo.
To solve this problem, IDnow used a style transfer method to expand the training data, which allowed the model to become more resilient to different conditions and significantly reduced the bias toward darker skin tones.
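The post does not publish IDnow’s training code, but the general shape of style-transfer augmentation is easy to sketch. In the hypothetical snippet below, `style_transfer`, `Sample`, and the style names are illustrative stand-ins rather than IDnow’s implementation; the point is simply that each ID photo is duplicated under several simulated colour treatments while its match/non-match label stays the same.

```python
# Illustrative sketch only: expanding a face-verification training set with
# style-transferred variants of each ID photo. The style_transfer() stub stands
# in for whatever image-to-image model a real pipeline would use; it is NOT
# IDnow's implementation.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    selfie_path: str
    id_photo_path: str
    label: int  # 1 = same person, 0 = different person

def style_transfer(image_path: str, style: str) -> str:
    """Hypothetical placeholder: return the path of a restyled copy of the image.

    A real pipeline would run an image-to-image model here, e.g. to simulate the
    colour calibration applied by different ID-issuing authorities."""
    return f"{image_path}.styled.{style}.png"

def augment_dataset(samples: List[Sample],
                    styles: List[str],
                    transfer: Callable[[str, str], str] = style_transfer) -> List[Sample]:
    """Expand the training set with restyled ID photos while keeping labels unchanged."""
    augmented = list(samples)
    for s in samples:
        for style in styles:
            augmented.append(Sample(
                selfie_path=s.selfie_path,
                id_photo_path=transfer(s.id_photo_path, style),
                label=s.label,
            ))
    return augmented

if __name__ == "__main__":
    base = [Sample("selfie_001.jpg", "id_001.jpg", 1)]
    expanded = augment_dataset(base, styles=["warm_print", "cool_print", "low_saturation"])
    print(f"{len(base)} original pair(s) -> {len(expanded)} training pair(s)")
```

The design idea is that the model sees many plausible colour treatments of the same identity, so a mismatch in print calibration alone is less likely to push a genuine pair below the verification threshold.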
Tests on public and company-owned data sets showed that the new training method achieved an 8% increase in verification accuracy – while using only 25% of the original training data volume. Even more significantly, the accuracy difference between people with lighter and darker skin tones was reduced by over 50% – an important step toward fairer identity verification without compromising security or user-friendliness.
The resulting improved AI model was integrated into IDnow’s identity verification solutions in March 2025 and has been in use ever since.
Setting the standard for responsible AI
In addition to specific product improvements, IDnow plans to use the open-source toolkit MAI-BIAS developed in the project in internal development and evaluation processes. This will allow fairness to be comprehensively tested and documented before new AI models are released in the future – an important contribution to responsible AI development.
“Addressing bias not only strengthens fairness and trust, but also makes our systems more robust and adoptable,” adds Montaser Awal. “This will raise trust in our models and show that they work equally reliably for different user groups across different markets.”
Naarden, October 30, 2025 – Over the past month, police arrested eight people in Amsterdam and Zaandam on suspicion of large-scale mortgage fraud, money laundering, and forgery.
According to the police, the case revolves around falsified employer statements and fictitious employment contracts. It underscores once again how vulnerable processes are that rely on documents supplied by consumers.
· 1. Introduction
· 2. Ocean Nodes: from Foundation to Framework
· 3. Annotators Hub: Community-driven data annotations
· 4. Lunor: Crowdsourcing Intelligence for AI
· 5. Predictoor and DeFi Trading
· 6. bci/acc: accelerate brain-computer interfaces towards human superintelligence
· 7. Conclusion
Back in June, we shared the Ocean Protocol Product Update half-year check-in for 2025 where we outlined the progress made across Ocean Nodes, Predictoor, and other Ocean ecosystem initiatives. This post is a follow-up, highlighting the major steps taken since then and what’s next as we close out 2025.
We’re heading into the final stretch of 2025, so it’s only fitting to have a look over what the core team has been working on and what is soon to be released. Ocean Protocol was built to level the playing field for AI and data. From day one, the vision has been to make data more accessible, AI more transparent, and infrastructure more open. The Ocean tech stack is built for that mission: to combine decentralized compute, smart contracts, and open data marketplaces to help developers, researchers, and companies tap into the true potential of AI.
This year has been about making that mission real. Here’s how:
2. Ocean Nodes: from Foundation to Framework
Since the launch of Ocean Nodes in August 2024, the Ocean community has shown what’s possible when decentralized infrastructure meets real-world ambition. With over 1.7 million nodes deployed across 70+ countries, the network has grown far beyond expectations.
Throughout 2025, the focus has been on reducing friction, boosting usability, and enabling practical workflows. A highlight: the release of the Ocean Nodes Visual Studio Code extension. It lets developers and data scientists run compute jobs directly from their editor — free (within defined parameters), fast, and frictionless. Whether they’re testing algorithms or prototyping dApps, it’s the quickest path to real utility. The extension is now available on the VS Code Marketplace, as well as in Cursor and other distributions, via the Open VSX registry.
We’ve also seen strong momentum from partners like NetMind and Aethir, who’ve helped push GPU-ready infrastructure into the Ocean stack. Their contribution has paved the way for Phase 2, a major upgrade that the core team is still actively working on and that’s set to move the product from PoC to real production-grade capabilities.
That means:
Compute jobs that actually pay, with a pay-as-you-go system in place
Benchmarking GPU nodes to shape a fair and scalable reward model
Real-world AI workflows: from model training to advanced evaluation
And while Phase 2 is still in active development, it’s now reached a stage where user feedback is needed. To get there, we’ve launched the Alpha GPU Testers program, for a small group of community members to help us validate performance, stability and reward mechanics across GPU nodes. Selected participants simply need to set their GPU node up and make it available for the core team to run benchmark tests. As a thank-you for their effort and uptime, each successfully tested node will receive a $100 reward.
Key information:
Node selection: Oct 24–31, 2025
Benchmark testing: Nov 3–17, 2025
Reward: $100 per successfully tested node
Total participants: up to 15, on a first-come, first-served basis. Only one node per owner is allowed.
With Phase 2 of Ocean Nodes, we will be laying the groundwork for something even bigger: the Ocean Network. Spoiler alert: it will be a peer-to-peer AI Compute-as-a-Service platform designed to make GPU infrastructure accessible, affordable, and censorship-resistant for anyone who needs it.
More details on the transition are coming soon. But if you’re running a node, building on Ocean, or following along, you’re already part of it.
What else have we launched?
3. Annotators Hub: Community-driven data annotations
Current challenge: CivicLens, ends on Oct 31, 2025
AI doesn’t work without quality data. And creating it is still a huge bottleneck. That’s why we’ve launched the Annotators Hub: a structured, community-driven initiative where contributors help evaluate and shape high-quality datasets through focused annotation challenges.
The goal is to improve AI performance by improving what it learns from: the data. High-quality annotations are the foundation for reliable, bias-aware, and domain-relevant models. And Ocean is building the tools and processes to make that easier, more consistent, and more inclusive.
Human annotations remain the single most effective way to improve AI performance, especially in complex domains like education and politics. By contributing to the Annotators Hub, Ocean community members directly help build better models that can power adaptive tutors, improve literacy tools, and even make political discourse more accessible.
For example, LiteracyForge, the first challenge, run in collaboration with Lunor.ai, focused on improving adaptive learning systems by collecting high-quality evaluations of reading comprehension material. The aim: to train AI that better understands question complexity and supports literacy tools. Here are a few highlights, as the challenge is currently being evaluated:
49,832 total annotations submitted
19,973 unique entries
147 annotators joined throughout the three weeks of the first challenge
17,581 double-reviewed annotations
The second challenge will finish in just two days, on Friday, October 31. This time we’re analyzing speeches from the European Parliament, to help researchers, civic organizations, and the general public better understand political debates, predict voting behavior, and make parliamentary discussions more transparent and accessible. There’s still time to jump in and become an annotator.
Yes, this initiative can be seen as a “launchpad” for a marketplace with ready-to-use, annotated data, designed to give everyone access to training-ready data that meets real-world quality standards. But more on that in an upcoming blog post.
As we get closer to the end of 2025, we’re doubling down on utility, usability, and adoption. The next phase is about scale and about creating tangible ways for Ocean’s community to contribute, earn, and build.
4. Lunor: Crowdsourcing Intelligence for AI
Lunor is building a crowdsourced intelligence ecosystem where anyone can co-create, co-own, and monetize Intelligent Systems. As one of the core projects within the Ocean Ecosystem, Lunor represents a new approach to AI, one where the community drives both innovation and ownership.
Lunor’s achievements so far, together with Ocean Protocol, include:
Over $350,000 in rewards distributed from the designated Ocean community wallet
More than 4,000 contributions submitted
38 structured data and AI quests completed
Assets from Lunor Quests are published on the Ocean stack, while future integration with Ocean nodes will bring private and compliant Compute-to-Data for secure model training.
Together with Ocean, Lunor has hosted quests like LiteracyForge, showcasing how open collaboration can unlock high-quality data and AI for education, sustainability, and beyond.
5. Predictoor and DeFi Trading
About Predictoor. In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. The “earn $” part is key, because it fosters usage.
Predictoor involves two groups of people:
Predictoors: data scientists who use AI models to predict what the price of ETH, BTC, etc. will be 5 (or 60) minutes into the future. They run bots that submit these predictions on-chain every 5 minutes. Predictoors earn $ based on sales of the feeds, including sales from Ocean’s Data Farming incentives program.
Traders: run bots that take predictoors’ aggregated predictions as input and use them as alpha in trading. It’s another edge for making $ while trading.
Predictoor is built using the Ocean stack, and it runs on Oasis Sapphire; we’ve partnered with the Oasis team.
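The update doesn’t include code, but the predictoor loop can be sketched conceptually. In the hypothetical snippet below, `predict_up_probability` and `submit_prediction` are illustrative stand-ins, not the real pdr-backend or Ocean API; a live bot would run an actual model and submit signed transactions to the feed contract each epoch.

```python
# Conceptual sketch of the predictoor loop described above: each epoch (e.g. every
# 5 minutes) a bot predicts whether a feed's price will go up, then submits that
# prediction with a stake. Function names are placeholders, not the real API.
import random

def predict_up_probability(pair: str) -> float:
    """Placeholder model: a real predictoor would run an ML model over recent market data."""
    return random.random()

def submit_prediction(pair: str, prob_up: float, stake: float) -> None:
    """Placeholder for the on-chain submission a real bot would make each epoch."""
    direction = "UP" if prob_up >= 0.5 else "DOWN"
    print(f"[{pair}] predict {direction} (p_up={prob_up:.2f}), stake={stake}")

def run_bot(pair: str = "ETH/USDT", stake: float = 10.0, epochs: int = 3) -> None:
    # A live bot would align each iteration to the feed's 5-minute epoch boundary.
    for _ in range(epochs):
        prob_up = predict_up_probability(pair)
        submit_prediction(pair, prob_up, stake)

if __name__ == "__main__":
    run_bot()
```

The trader side is the mirror image: a bot reads the stake-weighted aggregate of these predictions and uses it as one signal in its trading strategy.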
Predictoor traction. Since mainnet launch in October 2023, Predictoor has accumulated about $2B total volume. [Source: DappRadar]. Furthermore, in spring 2025, our collaborators at Oasis launched WT3, a decentralized, verifiable trading agent that uses Predictoor feeds for its alpha.
Predictoor past, present, future. After the Predictoor product and rewards program launched in fall 2023, the next major goal was “traders make serious $”. If that goal is met, traders will spend $ to buy feeds, which leads to serious $ for predictoors. The Predictoor team worked towards this primary goal throughout 2024, testing trading strategies with real $. Bonus side effects of this were improved analytics and tooling.
Obviously, “make $ trading” is not an easy task. It’s a grind that takes skill and perseverance. The team has kept ratcheting forward, inching ever closer to consistent profits. Starting in early 2025, the live trading algorithms started to bear fruit. The team’s 2025 plan was — and is — to keep grinding towards the goal of “make serious $ trading”. It’s going well enough that there is work towards a spinoff. We can expect trading work to be the main progress in Predictoor throughout 2025. Everything else in Predictoor and related efforts will follow.
6. bci/acc: accelerate brain-computer interfaces towards human superintelligence
Another stream in Ocean has been taking form: bci/acc. Ocean co-founder Trent McConaghy first gave a talk on bci/acc at NASA in Oct 2023, and published a seminal blog post on it a couple of months later. Since then, he’s given 10+ invited talks and podcasts, including Consensus 2025 and Web3 Summit 2025.
bci/acc thesis. AI will likely reach superintelligence in the next 2–10 years. Humanity needs a competitive substrate. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses: bci/acc. How do we make it happen? We’ll need BCI killer apps like silent messaging to create market demand, which in turn drives BCI device evolution. The net result is human superintelligence.
Ocean bci/acc team. In January 2025, Ocean assembled a small research team to pursue bci/acc, with the goal to create BCI killer apps that it can take to market. The team has been building towards this ever since: working with state-of-the-art BCI devices, constructing AI-data pipelines, and running data-gathering experiments. Ocean-style decentralized access control will play a role, as neural signals are perhaps the most private data of all: “not your keys, not your thoughts”. In line with Ocean culture and practice, we look forward to sharing more details once the project has progressed to tangible utility for target users.
7. Conclusion
2025 has been a year of turning vision into practice. From Predictoor’s trading traction to Ocean Nodes being pushed into a GPU-powered Phase 2, the launch of the Annotators Hub, and ecosystem projects like Lunor driving community-led AI forward, it feels like the pieces of the Ocean vision are falling into place.
The focus is clear for the Ocean core team in Q4: scale, usability, and adoption. Thanks for being part of it. The best is yet to come.
Ocean Protocol: Q4 2025 Update was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
Ontology’s token economy has always been designed to evolve alongside the network. This week, that evolution takes another step forward. A new governance proposal has been initiated by an Ontology Consensus Node, calling on all Triones nodes to vote on an update to ONG tokenomics. The update aims to strengthen the foundation for long-term sustainability and fairer incentives across the ecosystem.
Voting will take place on OWallet from October 28, 2025 (00:00 UTC) through October 31, 2025 (00:00 UTC).
Understanding the Current Model
Let’s start with where things stand today.
Total ONG Supply: 1 billion
Total Released: ≈ 450 million (≈ 430 million circulating)
Annual Release: ≈ 31.5 million ONG
Release Curve: All ONG unlocked over 18 years. The remaining 11 years follow a mixed release pace: 1 ONG per second for 6 years, then 2, 2, 2, 3, and 3 ONG per second in the final 5 years.
Currently, both unlocked ONG and transaction fees flow back to ONT stakers as incentives, generating an annual percentage rate of roughly 23 percent at current prices.
What the Proposal Changes
The new proposal suggests several key adjustments to rebalance distribution and align long-term incentives:
Cap the total ONG supply at 800 million.
Lock ONT and ONG equivalent to 100 million ONG in value, effectively removing them from circulation.
Strengthen staker rewards and ecosystem growth by making the release schedule steadier and liquidity more sustainable.
Implementation Plan
1. Adjust the ONG Release Curve
Total supply capped at 800 million.
Release period extended from 18 to 19 years.
Maintain a 1 ONG per second release rate for the remaining years.
2. Allocation of Released ONG
80 percent directed to ONT staking incentives.
20 percent, plus transaction fees, contributed to ecological liquidity.
3. Swap Mechanism
Use ONG to acquire ONT within a defined fluctuation range.
Pair the two tokens to create liquidity and receive LP tokens.
Burn the LP tokens to permanently lock both ONG and ONT, tightening circulating supply.
Community Q&A
Q1. How long will the ONT + ONG (worth 100 million ONG) be locked?
It’s a permanent lock.
Q2. Why does the total ONG supply decrease while the release period increases?
Under the current model, release speeds up in later years. This proposal keeps the rate fixed at 1 ONG per second, so fewer tokens are released overall but over a slightly longer span — about 19 years in total.
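To see how a lower cap can coexist with a longer, flatter schedule, here is a rough back-of-envelope check using only the approximate figures quoted above (1 ONG per second is roughly 31.5 million ONG per year). It is a sanity check under those rounded numbers, not official tokenomics math, so the totals only line up approximately.

```python
# Back-of-envelope check of the figures quoted in this proposal; all inputs are the
# post's approximations, so totals only need to line up roughly.
SECONDS_PER_YEAR = 365 * 24 * 3600            # ~31.5M seconds
ONG_PER_RATE_YEAR = SECONDS_PER_YEAR / 1e6    # million ONG released per year at 1 ONG/s

released_so_far = 450.0  # million ONG (approximate)

# Current curve for the remaining 11 years: 1 ONG/s for 6 years, then 2, 2, 2, 3, 3 ONG/s.
current_rates = [1] * 6 + [2, 2, 2, 3, 3]
current_remaining = sum(r * ONG_PER_RATE_YEAR for r in current_rates)
print(f"current model total  ≈ {released_so_far + current_remaining:.0f}M (cap: 1000M)")

# Proposed curve: flat 1 ONG/s until the new 800M cap is reached.
proposed_remaining = 800.0 - released_so_far
years_needed = proposed_remaining / ONG_PER_RATE_YEAR
print(f"proposed model: ≈ {proposed_remaining:.0f}M more over ≈ {years_needed:.1f} years")
```

Running it shows the current curve sums to roughly the 1 billion cap, while a flat 1 ONG per second reaches the 800 million cap after roughly another 11 years, broadly consistent with the proposal’s “about 19 years in total.”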
Q3. Will this affect ONT staking APY?
It may, but not necessarily negatively. While staking rewards in ONG drop 20 percent, APY depends on market prices of ONT and ONG. If ONG appreciates as expected, overall returns could remain steady or even rise.
Q4. How does this help the Ontology ecosystem?
Capping supply at 800 million and permanently locking 100 million ONG will make ONG scarcer. With part of the released ONG continuously swapped for ONT to support DEX liquidity, the effective circulating supply may fall to around 750 million. That scarcity, paired with new products consuming ONG, could strengthen price dynamics and promote sustainable network growth. More on-chain activity would also mean stronger rewards for stakers.
Q5. Who can vote, and how?
All Triones nodes have the right to vote through OWallet during the official voting window.
Why It MattersThis isn’t just a supply adjustment. It’s a structural change designed to balance reward distribution, liquidity, and governance in a way that benefits both the Ontology network and its long-term participants.
Every vote counts. By joining this governance round, Triones nodes have a direct hand in shaping how value flows through the Ontology ecosystem — not just for today’s staking cycle, but for the years of decentralized growth ahead.
A New Chapter for ONG: Governance Vote on Tokenomics Adjustment was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.
Identity is moving fast: AI agents, new fraud patterns, and tightening regulations are reshaping the identity landscape under our feet. At Ping YOUniverse 2025, thousands of identity leaders, customers, and partners came together to confront this dramatic shift.
We compared notes on what matters now:
Stopping account takeover without killing conversion, so security doesn’t tax your revenue engine.
Orchestrating trust signals across apps and partners, so decisions get smarter everywhere.
Shrinking risk and cost with just‑in‑time access, so the right access appears—and disappears—on demand.
This recap distills the most useful takeaways for you: real-world use cases, technical demos within our very own Trust Lab, and deep-dive presentations from partners like Deloitte, AWS, ProofID, Midships, Versent, and more—plus guest keynotes from former Secretary General of Interpol Dr. Jürgen Stock and cybersecurity futurist Heather Vescent. And it’s unified by a single theme: Resilient Trust isn’t a moment. It’s a mindset.
As states move toward private, interoperable, and resident-controlled digital identity systems, certification of wallets and issuers becomes a cornerstone of trust. Certification doesn’t just validate technical conformance; it enforces privacy, supports procurement flexibility, and enables multiple vendors to participate under a consistent trust framework. This blog post outlines recommendations to meet these goals, with statutory guardrails and governance practices built in from the start.
The Case for Certification
We believe that states should require certification of Digital Wallets that are capable of holding a state digital identity. Certification provides assurance that wallet providers comply with key requirements such as privacy preservation, unlinkability, minimal disclosure, and security of key material, which are enforced by design.
SpruceID believes that additional legislation should be enacted to establish a formal certification program for wallets, issuers, and potentially verifiers participating in a state digital identity ecosystem. The legislation should specify that the designated regulating entity may conduct audits and certify providers directly, or delegate certification responsibilities to qualified external organizations, provided such delegation is formally approved by the appropriate higher authority.
Enforcing Privacy and Minimization
A certification program would mandate compliance with privacy-preserving technical standards, restrict verifiers from requesting or storing more information than is legally required, and require wallets to obtain clear user consent before transmitting credential data. Wallets would also need to provide plain-language explanations of how data is used in each transaction. By creating a statutory basis for certification and oversight, states can ensure that unlinkability and data minimization are not just principles, but enforceable requirements with technological and governance safeguards.
Pilot Programs to Support Innovation
We recommend that states enact a pilot program that grants provisional, limited, and expiring operating approvals to issuers, wallets, and verifiers before the formal certification programs are established and fully operational, so that market solutions can operate in real-world environments and generate learnings. The appropriate oversight agency could then adapt those learnings when creating the formal certification programs. Best practice in the software industry is to take an iterative, “agile” approach to implementation, and we believe the same approach suits certification programs: engage industry early and often in a limited operating capacity, rather than attempting to fully specify rules a priori, which may prove irrelevant if not created with perfect knowledge.
Clarifying Responsibilities Across the Ecosystem
Clear allocation of liability and responsibility is essential for the trust and sustainability of any state digital identity program. A state's role is to establish statutory guardrails, oversee governance, and authorize a certification framework that ensures all ecosystem participants meet consistent standards. This includes creating a certification program for both digital wallet providers and credential issuers, verifying that they comply with statutory principles for privacy, unlinkability, minimal disclosure, and security.
Wallet Provider Responsibilities
Digital wallet providers bear responsibility for ensuring acceptable security mechanisms, proper user consent, presenting clear and plain-language disclosures which meet accessibility requirements, and ensuring features like personal data licenses and privacy-protective technical standards are honored in practice. Certified wallets must also support recovery mechanisms for key loss or compromise, ensuring that holders are not permanently locked out of identity credentials due to technical failures. Digital wallet providers should coordinate with issuers, designing solutions which anticipate that wallets and keys will be lost, stolen, and compromised.
Issuer Responsibilities
Issuers are responsible for creating a strong operational environment that ensures the accuracy of the attributes they release, and for maintaining correct and untampered authoritative source records. They are also responsible for ensuring that state digital identity credentials are issued to the correct holders, and to any acceptable wallets, free of unreasonable delay, burden, or fees. They must provide accessibility to holders, such as providing workable paths for holders who lose their credentials, wallets, and/or keys. Their certification ensures that state digital identity credentials are issued only under audited processes that meet required levels of identity proofing and revocation safeguards.
Legislating Technical Safeguards and Liability
In addition, states should require certification of wallets against a published state digital identity protection profile and create clear liability rules. Legislation should establish that wallet providers are responsible for implementing technical safeguards, that Holders maintain control over disclosure decisions, and that verifiers may only request attributes that are legally permitted. By legislating these aspects, states will ensure that residents can trust any certified wallet to uphold their rights, while fostering a competitive ecosystem of providers who innovate on usability and design within a consistent regulatory baseline.
Enabling Interoperability and Competition
Certification also creates a mechanism for interoperability and trust across the ecosystem. By publishing a clear “state digital identity Wallet Protection Profile” and certifying wallets against it, states can ensure that wallets from different vendors operate consistently while still allowing for competition and innovation.
Building Public Confidence Through Transparency
Finally, certification helps build public confidence. Residents will know that any wallet bearing a certification mark has been independently tested and approved to uphold privacy and prevent surveillance, while verifiers will know they can safely interact with those wallets. At the same time, states should keep certification processes lightweight and transparent to avoid excluding smaller vendors, ensuring that certification supports security and privacy without stifling innovation.
Establishing the Guardrails of a Trusted Ecosystem
Certification is more than a checkbox; it's how we turn principles like unlinkability and minimal disclosure into an enforceable reality. By embedding privacy protections in wallet and issuer certification, states can foster innovation without compromising trust. The foundation for interoperable, people-first digital identity isn’t a single app or provider; it’s a standards-aligned ecosystem, governed responsibly and built to last.
SpruceID works with governments and standards bodies to build privacy-preserving, interoperable infrastructure designed for public trust from day one. Contact us to start the conversation.
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.
We’re excited to share that Dock Labs is collaborating with GSMA, Telefónica Tech, and TMT ID on a new initiative to reinvent call centre authentication.
Here's why:
Today’s customer authentication processes often rely on knowledge-based questions or one-time passwords (OTPs). These methods are time-consuming, typically taking between 30 and 90 seconds, and can be vulnerable to SIM swap attacks, phishing, Caller Line Identification (CLI) spoofing and data breaches.
On top of that, they frequently require customers to disclose personal information to call center agents, creating privacy and compliance risks for organisations.
To address these issues, the group has initiated a Proof of Concept (PoC) to explore a new, privacy-preserving model of caller authentication that is faster, secure, and user-friendly.
“For decades, standards development has been anchored in the idea that the Internet is (and should be) one global network. If we could just get everyone in the room—vendors, governments, engineers, and civil society—we could hash out common rules that worked for all.”
That premise is a lovely ideal, but it no longer reflects reality. The Internet isn’t collapsing, but it is fragmenting: tariffs, digital sovereignty drives, export controls, and surveillance regimes all chip away at the illusion of universality. Standards bodies that still aim for global consensus risk paralysis. And yet, walking away from standards altogether isn’t an option.
The real question isn’t whether we still need standards. The question is how to rethink them for a world that is fractured by design.
This is the fourth of a four-part series on what the Internet will look like for the next generation of people on this planet.
First post: “The End of the Global Internet“
Second post: “Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet“
Third post: “The People Problem: How Demographics Decide the Future of the Internet“
Fourth post: [this one]
A Digital Identity Digest podcast episode: Can Standards Survive Trade Wars and Sovereignty Battles? (00:16:24). You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
Global internet, local rulebooks
If you look closely, today’s Internet is already less one global network and more a patchwork quilt of overlapping, sometimes incompatible regimes.
Europe pushes digital sovereignty and data protection rules, with eIDAS2 and the AI Act setting global precedents.
The U.S. leans on export controls and sanctions, using access to chips and cloud services as levers of influence.
China has doubled down on domestic control, firewalling traffic and setting its own technical specs.
Africa and Latin America are building data centers and digital ID schemes to reduce dependence on foreign providers, while still trying to keep doors open for trade and investment.
Standards development bodies now live in this reality. The old model where universality was the goal and compromise was the method is harder to sustain. If everyone insists on their priorities, consensus stalls. But splintering into incompatible systems isn’t viable either. Global supply chains, cross-border research, and the resilience of communications all require at least a shared baseline.
The challenge is to define what “interoperable enough” looks like.
The cost side is getting heavier
The incentives for participation in global standards bodies used to be relatively clear: access to markets, influence over technical direction, and reputational benefits. Today, the costs of cross-border participation have gone up dramatically.
Trade wars have re-entered the picture. The U.S. has imposed sweeping tariffs on imports from China and other countries, hitting semiconductors and electronics with rates ranging from 10% to 41%. These costs ripple across supply chains. On top of tariffs, the U.S. has restricted exports of advanced chips and AI-related hardware to China. The uncertainty of licensing adds compliance overhead and forces firms to hedge.
Meanwhile, the “China + 1” strategy—where companies diversify sourcing away from over-reliance on China—comes with a hefty price tag. Logistics get more complex, shipping delays grow, and firms often hold more inventory to buffer against shocks. A 2025 study estimated these frictions alone cut industrial output by over 7% and added nearly half a percent to inflation.
And beyond tariffs or logistics, transparency and compliance laws add their own burden. The U.S. Corporate Transparency Act requires firms to disclose beneficial ownership. Germany’s Transparency Register and Norway’s Transparency Act impose similar obligations, with Norway’s rules extending to human-rights due diligence.
The result is that companies are paying more just to maintain cross-border operations. In that climate, the calculus for standards shifts. “Do we need this standard?” becomes “Is the payoff from this standard enough to justify the added cost of playing internationally?”
When standards tip the scales
The good news is that standards can offset some of these costs when they come with the right incentives.
One audit, many markets. Standards that are recognized across borders save money. If a product tested in one region is automatically accepted in another, firms avoid duplicative testing fees and time-to-market shrinks.
Case study: the European Digital Identity Wallet (EUDI). In 2024, the EU adopted a reform of the eIDAS regulation that requires all Member States to issue a European Digital Identity Wallet and mandates cross-border recognition of wallets issued by other states. The premise here is that if you can prove your identity using a wallet in France, that same credential should be accepted in Germany, Spain, or Italy without new audits or registrations.
The incentives are potentially powerful. Citizens gain convenience by using one credential for many services. Businesses reduce onboarding friction across borders, from banking to telecoms. Governments get harmonized assurance frameworks while retaining the ability to add national extensions. Yes, the implementation costs are steep—wallet rollouts, legal alignment, security reviews—but the payoff is smoother digital trade and service delivery across a whole bloc.
Regulatory fast lanes. Governments can offer “presumption of conformity” when products follow recognized standards. That reduces legal risk and accelerates procurement cycles.
Procurement carrots. Large buyers, both public and private, increasingly bake interoperability and security standards into tenders. Compliance isn’t optional; it’s the ticket to compete.
Risk transfer. Demonstrating that you followed a recognized standard can reduce penalties after a breach or compliance failure. In practice, standards act as a form of liability insurance.
Flexibility in a fractured market. A layered approach—global minimums with regional overlays—lets companies avoid maintaining entirely separate product lines. They can ship one base product, then configure for sovereignty requirements at the edges.
When incentives aren’t enough
Of course, there are limits to how far incentives can stretch. Sometimes the costs simply outweigh the benefits.
Consider a market that imposes steep tariffs on imports while also requiring its own unique technical standards, with no recognition of external certifications. In such a case, the incentive of “one audit, many markets” collapses. Firms face a choice between duplicating compliance efforts, forking product lines, or withdrawing from the market entirely.
Similarly, rules of origin can blunt the value of global standards. Even if a product complies technically, it may still fail to qualify for preferential access if its components are sourced from disfavored regions. Political volatility adds another layer of uncertainty. The back-and-forth implementation of the U.S. Corporate Transparency Act illustrates how compliance obligations can change rapidly, leaving firms unable to plan long-term around standards incentives.
These realities underline a hard truth: incentives alone cannot overcome every cost. Standards must be paired with trade policies, recognition agreements, and regulatory stability if they are to deliver meaningful relief. Technology is not enough.
How standards bodies must adapt
It’s easy enough to say “standards still matter.” What’s harder is figuring out how the institutions that make those standards need to change. The pressures of a fractured Internet aren’t just technical. They’re geopolitical, economic, and regulatory. That means standards bodies can’t keep doing business as usual. They need to adapt on two fronts: process and scope.
Process: speed, modularity, and incentives
The traditional model of consensus-driven standards development assumes time and patience are plentiful. Groups grind away until they’ve achieved broad agreement. In today’s climate, that often translates to deadlock. Standards bodies need to recalibrate toward a “minimum viable consensus” that offers enough agreement to set a global baseline, even if some regions add overlays later.
Speed also matters. When tariffs or export controls can be announced on a Friday and reshape supply chains by Monday, five-year standards cycles are untenable. Bodies need mechanisms for lighter-weight deliverables: profiles, living documents, and updates that track closer to regulatory timelines.
And then there’s participation. Costs to attend international meetings are rising, both financially and politically. Without intervention, only the biggest vendors and wealthiest governments will show up. That’s why initiatives like the U.S. Enduring Security Framework explicitly recommend funding travel, streamlining visa access, and rotating meetings to more accessible locations. If the goal is to keep global baselines legitimate, the doors have to stay open to more than a handful of actors.
Scope: from universality to layering
Just as important as process is deciding what actually belongs in a global standard. The instinct to solve every problem universally is no longer realistic. Instead, standards bodies need to embrace layering. At the global level, focus on the minimums: secure routing, baseline cryptography, credential formats. At the regional level, let overlays handle sovereignty concerns like privacy, lawful access, or labor requirements.
This shift also means expanding scope beyond “pure technology.” Standards aren’t just about APIs and message formats anymore; they’re tied directly to procurement, liability, and compliance. If a standard can’t be mapped to how companies get through audits or how governments accept certifications, it won’t lower costs enough to be worth the trouble.
Finally, standards bodies must move closer to deployment. A glossy PDF isn’t sufficient if it doesn’t include reference implementations, test suites, and certification paths. Companies need ways to prove compliance that regulators and markets will accept. Otherwise, the promise of “interoperability” remains theoretical while costs keep mounting.
The balance
So is it process or scope? The answer is both. Process has to get faster, more modular, and more inclusive. Scope has to narrow to what can truly be global while expanding to reflect regulatory and economic realities. Miss one side of the equation, and the other can’t carry the weight. Get them both right, and standards bodies can still provide the bridges we desperately need in a fractured world.
A layered model for fractured times
So what might a sustainable approach look like? I expect the future will feature layered models rather than a single universal one.
At the bottom of this new stack are the baseline standards for secure software development, routing, and digital credential formats. These don’t attempt to satisfy every national priority, but they keep the infrastructure interoperable enough to enable trade, communication, and research.
On top of that baseline are regional overlays. These extensions allow regions to encode sovereignty priorities, such as privacy protections in Europe, lawful access in the U.S., or data localization requirements in parts of Asia. The overlays are where politics and local control find their expression.
This design isn’t neat or elegant. But it’s pragmatic. The key is ensuring that overlays don’t erode the global baseline. The European Digital Identity Wallet is a good example: the baseline is cross-border recognition across EU states, while national governments can still add extensions that reflect their specific needs. The balance isn’t perfect, but it shows how interoperability and sovereignty can coexist if the model is layered thoughtfully.
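One way to picture this layered model in configuration terms is a shared base profile that regional overlays can extend but never strip down. The sketch below is purely illustrative; the requirement names and regions are invented for the example and are not drawn from any actual standard or wallet specification.

```python
# Toy illustration of the layering idea: a shared global baseline plus regional
# overlays that add (but never remove) requirements. All names are invented.
GLOBAL_BASELINE = {
    "routing": "secure-routing-baseline",
    "crypto": "approved-cipher-suites",
    "credential_format": "common-credential-profile",
}

REGIONAL_OVERLAYS = {
    "EU": {"privacy": "data-protection-extensions"},
    "US": {"lawful_access": "disclosure-procedures"},
    "APAC-example": {"data_residency": "localization-rules"},
}

def effective_profile(region: str) -> dict:
    """Overlay regional requirements on top of the shared global floor."""
    profile = dict(GLOBAL_BASELINE)                      # start from the baseline
    profile.update(REGIONAL_OVERLAYS.get(region, {}))    # add sovereignty-specific extras
    return profile

if __name__ == "__main__":
    for region in REGIONAL_OVERLAYS:
        print(region, "->", effective_profile(region))
```

The design point is that every effective profile still contains the full baseline; overlays only add keys, which is what keeps regional sovereignty from eroding the global floor.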
What happens if standards fail
It’s tempting to imagine that if standards bodies stall, the market will simply route around them. But the reality of a fractured Internet is far messier. Without viable global baselines, companies retreat into regional silos, and the costs of compliance multiply. This section is the stick to go with the carrots of incentives.
If standards fail, cross-border trade slows as every shipment of software or hardware has to be retested for each jurisdiction. Innovation fragments as developers build for narrow markets instead of global ones, losing economies of scale. Security weakens as incompatible implementations open new cracks for attackers. And perhaps most damaging, trust erodes: governments stop believing that interoperable solutions can respect sovereignty, while enterprises stop believing that global participation is worth the cost.
The likely outcome is not resilience, but duplication and waste. Firms will maintain redundant product lines, governments will fund overlapping infrastructures, and users will pay the bill in the form of higher prices and poorer services. The Internet won’t collapse, but it will harden into a collection of barely connected islands.
That’s why standards bodies cannot afford to drift. The choice isn’t between universal consensus and nothing. The choice is between layered, adaptable standards that keep the floor intact or a slow grind into fragmentation that makes everyone poorer and less secure.
Closing thought
The incentives versus cost tradeoff is not a side issue in standards development. It is the issue. The technical community must accept that tariffs, sovereignty, and compliance aren’t temporary distractions but structural realities.
The key question to ask about any standard today is simple: Does this make it cheaper, faster, or less risky to operate across borders? If the answer is yes, the standard has a future. If not, it risks becoming another paper artifact, while fragmentation accelerates.
Now I have a question for you: in your market, do the incentives for adopting bridge standards outweigh the mounting costs of tariffs, export controls, and compliance regimes? Or are we headed for a world where regional overlays dominate and the global floor is paper-thin?
If you’d rather have a notification when a new blog is published rather than hoping to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
[00:00:29] Welcome back to A Digital Identity Digest.
Today, I’m asking a big question that’s especially relevant to those of us working in technical standards development:
Can standards survive trade wars and sovereignty battles?
For decades, the story of Internet standards seemed fairly simple — though never easy:
get the right people in the room, hammer out details, and eventually end up with rules that worked for everyone.
[00:00:58] The Internet was one global network, and standards reflected that vision.
[00:01:09] That story, however, is starting to fall apart.
We’re not watching the Internet collapse, but we are watching it fragment — and that fragmentation carries real consequences for how standards are made, adopted, and enforced.
[00:01:21] In this episode, we’ll explore:
Why the cost of participating in global standards has gone up
How incentives can still make standards development worthwhile
What happens when those incentives fall short
And how standards bodies need to adapt to stay relevant
[00:01:36] So, let’s dive in.
The Fragmenting Internet
[00:01:39] When the Internet first spread globally, it seemed like one big network — or at least, one big concept.
[00:01:55] But that’s not quite true anymore.
Let’s take a few regional examples.
Europe has leaned heavily into digital sovereignty, with rules like GDPR, the AI Act, and the updated eIDAS regulation. Their focus is clear: privacy and sovereignty come first.
The United States takes a different tack, using export controls and sanctions as tools of influence — with access to semiconductors and cloud services as leverage in its geopolitical strategy.
China has gone further, building its own technical standards and asserting domestic control over traffic and infrastructure.
Africa and Latin America are investing in local data centers and digital identity schemes, aiming to reduce dependency while keeping doors open for global trade and investment.
[00:02:46] When every region brings its own rulebook, global consensus doesn’t come easily.
Bodies like ISO, ITU, IETF, or W3C risk stalling out.
Yet splintering into incompatible systems is also costly:
It disrupts supply chains
Slows research collaborations
And fractures global communications
[00:03:31] So let’s start by looking at what all of this really costs.
The Rising Cost of Participation
[00:03:35] Historically, incentives for joining standards efforts were clear:
Influence technology direction
Ensure interoperability
Build goodwill as a responsible actor
[00:03:52] But that equation is changing.
Take tariffs, for example.
U.S. tariffs on imports from China and others now range from 10% to 41% on semiconductors and electronics.
Export controls restrict the flow of advanced chips, reshaping entire markets.
Companies face new costs: redesigning products, applying for licenses, and managing uncertainty.
[00:04:33] Add in supply chain rerouting — the so-called “China Plus One” strategy — and you get:
More complex logistics
Longer delays
Higher inventory buffers
Recent studies show these frictions cut industrial output by over 7% and add 0.5% to inflation.
[00:04:58] It’s not just the U.S. — tariffs are now a global trend.
Then there are transparency laws, like:
The U.S. Corporate Transparency Act
Germany’s Transparency Register
Norway’s Transparency Act, which even mandates human rights due diligence
[00:05:33] The result?
The baseline cost of cross-border operations is rising — forcing companies to ask if global standards participation is still worth it.
[00:05:50] So, why bother with standards at all?
Because well-designed standards can offset many of these costs.
[00:05:56] Consider the power of recognition.
If one region accepts a product tested in another, companies save on duplicate testing and reach markets faster.
[00:06:07] A clear example is the European Digital Identity Wallet (EUDI Wallet).
In 2024, the EU updated eIDAS to:
Require each member state to issue a European Digital Identity Wallet
Mandate mutual recognition between member states
This means:
A wallet verified in France also works in Germany or Spain
Citizens gain convenience
Businesses reduce onboarding friction
Governments maintain a harmonized baseline with room for local adaptation
[00:06:56] Though rollout costs are high — covering legal alignment, wallet development, and security testing — the payoff is smoother digital trade.
Beyond recognition, strong standards also offer:
Regulatory fast lanes: Reduced legal risk when products follow recognized standards
Procurement advantages: Interoperability requirements in public tenders
Risk transfer: Accepted standards can serve as a partial defense after incidents
[00:07:34] In effect, standards can act as liability insurance.
[00:07:41] But not all incentives outweigh the costs.
When countries insist on unique local standards without mutual recognition, “one audit, many markets” collapses.
[00:08:05] Companies duplicate compliance, fork product lines, or leave markets.
Rules of origin and political volatility add further uncertainty.
[00:08:44] So yes — standards can tip the scales, but they can’t overcome every barrier.
The Changing Role of Standards Bodies
[00:08:54] Saying “standards still matter” is one thing — ensuring their institutions adapt is another.
[00:09:02] The pressures shaping today’s Internet are not just technical but geopolitical, economic, and regulatory.
That means standards bodies must evolve in two key ways:
Process adaptation
Scope adaptation
[00:09:19] The old “everyone must agree” consensus model now risks deadlock.
Bodies need to move toward a minimum viable consensus — enough agreement to set a baseline, even if regional overlays come later.
[00:09:39] Increasingly, both state and corporate actors exploit the process to delay progress.
Meanwhile, when trade policies change in months, a five-year standards cycle is useless.
[00:10:16] Standards organizations must embrace:
Lighter deliverables
Living documents
Faster updates aligned with regulatory change
[00:10:32] Participation costs are another barrier.
If only the richest governments and companies can attend meetings, legitimacy suffers.
Efforts like the U.S. Enduring Security Framework, which supports broader participation, are essential.
[00:11:10] Remote participation helps — but it’s not enough.
In-person collaboration still matters because trust is built across tables, not screens.
[00:11:31] Scope matters too.
Standards bodies should embrace layering:
Global level: focus on secure routing, baseline cryptography, credential formats
Regional level: handle sovereignty overlays — privacy, lawful access, labor rules
[00:11:55] Moreover, the scope must expand beyond technology to include:
Procurement
Liability
Compliance
If standards don’t reduce costs in these areas, they won’t gain traction — no matter how elegant they look in PDF form.
[00:12:12] Standards also need to move closer to deployment:
Include reference implementations
Provide test suites
Define certification paths that regulators will accept
Without these, interoperability remains theoretical while costs keep rising.
[00:12:53] Ultimately, this is both a process problem and a scope problem.
Processes must be faster and more inclusive.
Scopes must be realistic and economically relevant.
[00:13:11] Some argue that if standards bodies stall, the market will route around them.
But a fractured Internet is messy:
[00:13:45] And perhaps worst of all, trust erodes.
Governments lose faith in interoperability; companies question the value of participation.
[00:13:55] The outcome isn’t resilience — it’s duplication, waste, and higher costs.
[00:14:07] The Internet won’t disappear, but it risks hardening into isolated digital islands.
That’s why standards bodies can’t afford drift.
[00:14:26] The real choice is between:
Layered, adaptable standards that maintain a shared baseline
Or a slow grind into fragmentation that makes everyone poorer and less secure
Wrapping Up
[00:14:38] The incentives-versus-cost trade-off is no longer a side note in standards work — it’s the core issue.
Tariffs, sovereignty, and compliance regimes aren’t temporary distractions.
They’re structural realities shaping the future of interoperability.
[00:14:52] The key question for any new standard is:
Does this make it cheaper, faster, or less risky to operate across borders?
If yes — that standard has a future.
If no — it risks becoming another PDF gathering dust while fragmentation accelerates.
[00:15:03] With that thought — thank you for listening.
I’d love to hear your perspective:
Do incentives for adopting bridge standards outweigh the rising costs of sovereignty battles? Or are we headed toward a world of purely regional overlays?
[00:15:37] Share your thoughts, and let’s keep this conversation going.
[00:15:48] That’s it for this week’s Digital Identity Digest.
If this episode helped clarify or inspire your thinking, please:
Share it with a friend or colleague
Connect with me on LinkedIn @hlflanagan
Subscribe and leave a rating on Apple Podcasts or wherever you listen
[00:16:00] You can also find the full written post at sphericalcowconsulting.com.
Stay curious, stay engaged — and let’s keep the dialogue alive.
The post Can Standards Survive Trade Wars and Sovereignty Battles? appeared first on Spherical Cow Consulting.
Sheikh has claimed that the splitting of the Ocean community treasury across 30 wallets was somehow wrongful. He said this despite knowing that the act of splitting was entirely legitimate, as I explain below.
Source: X Spaces — Oct 9, 2025
@BubbleMaps has made this very helpful diagram to identify the flows of $FET from the Ocean community wallet (give them a follow):
https://x.com/bubblemaps/status/1980601840388723064
So, what’s the truth behind the distribution of $FET out of a single wallet and into 30 wallets?
Was it, as Sheikh claims, an ill-intentioned action to obfuscate the token flows and “dump” on the ASI community? Absolutely not.
First, it was done out of prudence. Given that a significant number of tokens were held in a single wallet, it was to reduce the risk of having the community treasury tokens hacked or otherwise vulnerable to bad actors. Clearly, spreading the tokens across 30 wallets greatly reduces the risk of their being hacked or forcefully taken compared to tokens being held in a single wallet.
Second, the spreading of the community treasury tokens across many wallets was something that Fetch and Singularity had themselves requested we do, to avoid causing problems with ETF deals which they had decided to enter into using $FET.
As presented in the previous “ASI Alliance from Ocean Perspective” blogpost, on Aug 13, 2025, Casiraghi, SingularityNET’s CFO, wrote an email to Ocean Directors, cc’ing Dr. Goertzel and Lake:
In it, he references 8 ETF deals in progress that were underway with institutional investors and the concerns that “the window — is open now” to close these deals.
Immediately after this email, Casiraghi reached out to a member of the Ocean community, explaining that such a large sum of $FET in the Ocean community wallet, which is not controlled by either Fetch or SingularityNET, would raise difficult questions from ETF issuers. Recall that Ocean did not participate in these side deals promoted by Fetch, and was often kept out of the loop, e.g. the TRNR deal.
Casiraghi requested (on behalf of Fetch and SingularityNET) that, if the $FET in the Ocean community wallet could not be frozen, arrangements be made to split the $FET tokens across multiple wallets.
Casiraghi explained that if this could be done with the $FET in the Ocean community wallet, Fetch and SingularityNET could plausibly deny the existence of a very large token holder which they had no control over. They could sweep it under the rug and avoid uncomfortable due diligence questions.
On Aug 16, 2025, David Levy of Fetch called me with the same arguments, reasoning and plea: could Ocean obfuscate the tokens and split them across more wallets?
Incidentally, in this call Levy also shared with me, for the first time, the details of the TRNR deal, which alarmed me once I understood the implications (“TRNR” Section §12).
At this juncture, it should be recalled that the Ocean community wallet is under the control of Ocean Expeditions. The Ocean community member who spoke with Casiraghi, as well as myself, informed the Ocean Expeditions trustees of this request and reasoning. Thereafter a decision was made by the Ocean Expeditions’ trustees, as an act of goodwill, to distribute the $FET across 30 wallets as requested by Fetch and SingularityNET.
Turning back to the bigger picture, as a pioneer in the blockchain space, I am obviously well aware that all token movements are absolutely transparent to the world. Any transfers are recorded immutably forever and can be traced easily by anyone with a modicum of basic knowledge. I build blockchains for a living. It is ridiculous to suggest that I or anyone in Ocean could have hoped to “conceal” tokens in this public manner.
A simple act of goodwill and cooperation that was requested by both Fetch and SingularityNET has instead been deliberately blown up by Sheikh, and painted as a malicious act to harm the ASI community.
Sheikh has now used the wallet distribution to launch an all-out assault on Ocean Expeditions and start a manhunt to identify the trustees of the Ocean Expeditions wallet.
Sheikh has wantonly spread lies, libel and misinformation to muddy the waters, construct a false narrative accusing Ocean and its founders of misappropriation, and to incite community sentiment against us.
Sheikh’s accusations and his twisting of the facts to mislead the community are so absurd that they would be laughable, if they were not so dangerous and harmful to the whole community.
Claim 1: The movement of $FET to 30 Different Wallets was allegedly “not right” — Disproven was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
As organizations build more interconnected digital ecosystems, securing identity is no longer just a component of cybersecurity—it is the foundation of protecting everything from data to devices. We are now seeing an unprecedented proliferation of machine identities, which frequently outnumber human identities. Yet traditional identity systems struggle to manage these effectively.
Organizations grapple with fragmented, siloed identity data sources, and disparate IAM solutions leave blind spots and inefficiencies. The flexibility of using AI and mobile or personal devices for business operations further aggravates this issue.
These security concerns become a critical pain point for larger enterprises that want to scale. Mergers and acquisitions complicate matters by combining disparate identity systems and policies. This frequently leads to conflicting identity data and compromised access management, posing severe security risks.
What is the solution to making an organization’s identity security practices more effective and resilient, not just for the current threat landscape but also for next-gen risks?
The Need for a Data-Centric Identity Security Infrastructure
The answer is to architect a data-centric identity security infrastructure, making identity data the cornerstone of all security decisions.
Traditionally, enterprises have addressed identity security problems individually, implementing separate solutions for Identity Governance and Administration (IGA), Privileged Access Management (PAM), access management, and SaaS-native systems like Microsoft Entra and Okta. Although individually functional, these tools collectively create fragmented identity data silos. As identity and application counts grow, these silos generate significant security gaps due to inconsistent data visibility and management.
The solution lies in making identity data foundational. Identity data must be positioned at the core of every security decision, ensuring consistency, accuracy, and completeness across all processes related to authentication, authorization, and lifecycle management.
The goal is to perfectly align with Gartner’s definition of identity-first security—an approach positioning identity-based access control as the cornerstone of cybersecurity.
Implementing Gartner’s VIA Model
To achieve data-centric identity security, Gartner’s VIA model (Visibility, Intelligence, Action) provides a clear and structured roadmap:
- Visibility: Establishing unified identity data visibility
- Intelligence: Analyzing data for actionable insights
- Action: Executing real-time remediation based on intelligence
Each component is crucial for successful deployment.
Visibility: Consolidating Fragmented Identity Data
Organizations must first tackle fragmented identity data scattered across various sources—Active Directory, HR systems, PAM solutions, and cloud identity solutions. Consolidating these into an identity data lake is critical. This data lake must be data-agnostic, scalable, real-time, event-driven, and capable of handling vast volumes of data, both structured and unstructured.
Once consolidated, raw identity data needs to be transformed into actionable information via a semantic layer. A semantic layer is a structured representation or model that organizes identity data into meaningful relationships and context. It turns fragmented, raw data into unified, easily understood information.
In short, this semantic layer maps identity data into a coherent model providing unified visibility across human and non-human identities, entitlements, and actual usage. It must:
- Ensure that diverse identity data is standardized and unified
- Break data silos by treating access uniformly, regardless of its source
- Leverage a graph-based structure for intuitive, multi-dimensional navigation
- Maintain data lineage for precise traceability and remediation
Intelligence: Identifying and Observing Anomalies
The semantic layer significantly improves data coherence but often results in large volumes of information that are challenging to analyze manually. For this reason, the Intelligence layer’s role is crucial. It continuously observes identity data, focusing specifically on detecting:
- Deviations
- Discrepancies
- Unauthorized or abnormal changes
- Risky behavior
Organizations benefit less from routine events than from abnormal situations requiring immediate attention. Intelligence leverages queries, usage analysis, change detection, peer group baselining, and correlation techniques.
Observations enrich the semantic layer, enhancing decision-making in downstream systems such as PAM, IGA, and access management platforms by providing crucial context around potential risks and anomalies.
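To make these two layers concrete, here is a minimal sketch that consolidates identities from two hypothetical sources into a small graph-style model (identity-to-entitlement edges) and then flags identities whose entitlement counts sit well above their peer group’s baseline. The source names, fields, and the 1.5-standard-deviation threshold are illustrative assumptions, not any particular product’s data model.

```python
from collections import defaultdict
from statistics import mean, pstdev

# --- Visibility: consolidate fragmented sources into one graph-style view ---
hr_records = [{"id": "alice", "dept": "finance"}, {"id": "bob", "dept": "finance"},
              {"id": "carol", "dept": "finance"}, {"id": "dave", "dept": "finance"}]
ad_groups = {"alice": ["erp_user"], "bob": ["erp_user"], "carol": ["erp_user"],
             "dave": ["erp_user", "erp_admin", "db_admin", "vault_admin", "backup_admin"]}

graph = defaultdict(set)           # identity -> set of entitlements (the "edges")
peer_groups = defaultdict(list)    # department -> member identities
for rec in hr_records:
    graph[rec["id"]].update(ad_groups.get(rec["id"], []))
    peer_groups[rec["dept"]].append(rec["id"])

# --- Intelligence: flag identities far above their peer group's baseline ---
def entitlement_outliers(members: list[str], sigma: float = 1.5) -> list[str]:
    counts = {m: len(graph[m]) for m in members}
    mu, sd = mean(counts.values()), pstdev(counts.values())
    return [m for m, c in counts.items() if sd and c > mu + sigma * sd]

for dept, members in peer_groups.items():
    print(dept, "outliers:", entitlement_outliers(members))   # finance outliers: ['dave']
```

In a real deployment the graph would carry far richer context (usage, lineage, ownership) and the baselining would run continuously, but the flow is the same: unify first, then observe for deviations.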
Action: Executing Flexible Remediation
The Action layer addresses identified issues based on intelligence. This step requires a flexible approach, capable of adapting to different scenarios. Some actions may be straightforward, such as directly writing corrections back to endpoint systems. Others require interaction with existing cybersecurity tools—IGA, PAM, or ticketing systems—emphasizing the importance of well-maintained connectors and integrations.
Remediation often critically requires consensus from stakeholders beyond IT security teams. Engaging the business stakeholders—the first line of defense, such as line managers and resource owners—is essential to distinguish legitimate threats from false positives. This engagement transforms the security system into a collaborative “security cockpit,” amplifying the cybersecurity team’s capabilities.
Effective collaboration requires clear roles and responsibilities across all stakeholders, ensuring that ownership and accountability are well defined when addressing identity security risks. Additionally, seamless integration with everyday digital workplace tools like Slack or Microsoft Teams, possibly enhanced by LLM-based conversational interfaces, can significantly streamline interactions, enabling quick confirmations and decisions from non-technical stakeholders.
Strengthening Identity Security with a Data-Centric Approach
Building a data-centric identity security infrastructure using Gartner’s VIA model provides comprehensive benefits:
- Unified Visibility: Eliminates fragmented silos, creating a coherent identity view
- Actionable Intelligence: Proactively identifies risks and anomalies, enhancing threat detection
- Real-time Remediation: Ensures quick, precise actions tailored to diverse cybersecurity scenarios
- Collaborative Remediation: Actively involves non-technical stakeholders, significantly improving accuracy and response effectiveness
How RadiantOne Implements a Data-Centric Identity Security Infrastructure
Ultimately, by placing identity data at the heart of security infrastructure, organizations significantly strengthen their security posture, achieving genuine, identity-first security.
The RadiantOne platform simplifies and accelerates the transition to a data-centric identity security model. The solution consolidates identity data from legacy on-premises and cloud-based sources into a unified, standards-based, vendor-neutral identity data lake. This consolidation eliminates identity data silos and provides a global IAM data catalog with rich, attribute-enhanced user profiles.
With RadiantOne, organizations can efficiently build unlimited virtual views of identity data that are unified across various protocols (LDAP, SQL, REST, SCIM, Web Service APIs). Its low-code/no-code transformation logic enables seamless data mapping, ensuring quick adaptability to changing business and security requirements without disrupting existing systems.
RadiantOne scales to support hundreds of millions of identities, adding resilience and speed through a highly available, near real-time identity service. The solution automates identity data management, streamlines user and group management, rationalizes group memberships, and dynamically controls access rights.
Its visibility capability provides a real-time, unified view of the situation for all human and non-human identities down to a permission level. Coupled with its observability capabilities, it spots misconfiguration and detects anomalies or abnormal changes to keep the identity landscape under control.
Most notably, the platform’s AI-powered assistant, AIDA, simplifies user access reviews, swiftly identifying anomalies and providing actionable remediation suggestions. By automating tedious manual reviews, AIDA drastically reduces administrative effort and improves decision accuracy, making it easier to enforce a least-privilege approach and continuous compliance.
The post Architecting a Data-Centric Identity Security Infrastructure appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
We are launching a Liquidity Incentive Program (LIP) to reward Liquidity Providers (LPs) in the KILT:ETH Uniswap pool on Base.
The portal can be accessed here: liq.kilt.io
For the best experience, desktop/browser use is recommended.
Key Features
- The LIP offers rewards in KILT for contributing to the pool.
- Rewards are calculated according to the size of your LP and the time for which you have been part of the program.
- Your liquidity is not locked in any way; you can add or remove liquidity at any time.
- The portal does not take custody of your KILT or ETH; positions remain on Uniswap under your direct control.
- Rewards can be claimed after 24hrs, and then at any time of your choosing.
You will need
- KILT (0x5D0DD05bB095fdD6Af4865A1AdF97c39C85ad2d8) on Base
- ETH or wETH on Base
- An EVM wallet (e.g. MetaMask etc.)
Joining the LIP
Overview
There are two steps to joining the LIP:
- Add KILT and ETH/wETH to the Uniswap pool in a full-range position. The correct pool is v3 with 0.3% fees. Note that whilst part of the LIP you will continue to earn the usual Uniswap pool fees as well.
- Register this position on the Liquidity Portal. Your rewards will start automatically.
1) Adding Liquidity
Positions may be created either on Uniswap in the usual way, or directly via the portal. If you choose to create positions on Uniswap then return to the portal afterwards to register them.
To create a position via the portal:
- Go to liq.kilt.io and connect your wallet.
- Under the Overview tab, you may use the Quick Add Liquidity function.
- For more features, go to the Add Liquidity tab where you can choose how much KILT and ETH to contribute.
2) Registering Positions
Once you have created a position, either on Uniswap or via the portal, return to the Overview tab.
- Your KILT:ETH positions will be displayed under Eligible Positions.
- Select your positions and Register them to enroll in the LIP.
Monitoring your Positions and Rewards
Once registered, you can find your positions in the Positions tab. The Analytics tab provides more information, for example your time bonuses and details about each position’s contribution towards your rewards.
Claiming Rewards
Your rewards start accumulating from the moment you register, but the portal may not reflect this immediately. Go to the Rewards tab to view and claim your rewards. Rewards are locked for the first 24hrs, after which you may claim at any time.
Removing Liquidity
Your LP remains 100% under your control; there are no locks or other restrictions and you may remove liquidity at any time. This can be done in the usual way directly on Uniswap. Removing LP will not in any way affect rewards accumulated up to that time, but if you later re-join the program then any time bonuses will have been reset.
How are my Rewards Calculated?
Rewards are based on:
- The value of your KILT/ETH position(s).
- The total combined value of the pool as a whole.
- The number of days your position(s) have been registered.
Rewards are calculated from the moment you register a position, but the portal may not reflect them right away.
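For intuition only, here is a minimal sketch of how such a pro-rata, time-weighted calculation could look. The exact formula, bonus schedule, and daily emission amount are not published in this post, so every number and function name below is a hypothetical assumption, not the LIP’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Position:
    value_usd: float        # current value of the KILT/ETH position
    days_registered: int    # days since the position was registered

def time_bonus(days: int) -> float:
    """Hypothetical time multiplier: grows linearly, capped at 2x after 60 days."""
    return min(1.0 + days / 60.0, 2.0)

def daily_rewards(positions: list[Position], daily_emission_kilt: float) -> list[float]:
    """Split one day's KILT emission pro rata by (position value * time bonus)."""
    weights = [p.value_usd * time_bonus(p.days_registered) for p in positions]
    total = sum(weights)
    if total == 0:
        return [0.0 for _ in positions]
    return [daily_emission_kilt * w / total for w in weights]

# Example: three LPs sharing an assumed emission of 10,000 KILT per day
pool = [Position(5_000, 1), Position(5_000, 30), Position(20_000, 10)]
print(daily_rewards(pool, 10_000))
```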
Need Help?
Support is available in our Telegram group: https://t.me/KILTProtocolChat
-The KILT Foundation
KILT Liquidity Incentive Program was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post FedRAMP Moderate Authorization: Why It Matters for Government Security appeared first on 1Kosmos.
Remember when “seeing is believing” used to be the rule? Not anymore. The world is now facing an identity crisis, digital identity that is. As artificial intelligence advances, so do the fraudsters who use it. Deepfakes have gone from internet curiosities to boardroom threats, putting reputations, finances, and trust at risk.
Businesses worldwide are waking up to the danger of manipulated media and turning toward deepfake detection tools as a line of defense. These systems are becoming the business equivalent of a truth serum, helping companies verify authenticity before deception costs them dearly.
What Makes Deepfakes So Dangerous
A deepfake is an AI-generated video, image, or audio clip that convincingly mimics a real person. Using neural networks, these fakes can replicate facial movements, voice tones, and gestures so accurately that even experts struggle to tell them apart.
The technology itself isn’t inherently bad. In entertainment, it helps de-age actors or create realistic video games. The problem arises when it’s used for fraud, misinformation, or identity theft. A 2024 report by cybersecurity analysts revealed that over 40% of businesses had encountered at least one deepfake-related fraud attempt in the last year.
Common use cases that keep executives awake at night include:
- Fake video calls where “executives” instruct employees to transfer money
- Synthetic job interviews where fraudsters impersonate real candidates
- False political or corporate statements circulated to damage reputations
How Deepfake Detection Technology Works
The idea behind deepfake detection technology is simple: spot what looks real but isn’t. The execution, however, is complex. Detection systems use advanced machine learning and biometrics to analyze videos, images, and audio clips at a microscopic level.
Here’s a breakdown of common detection methods:
Technique | What It Detects | Purpose
Pixel Analysis | Lighting, shadows, unnatural edges | Identifies visual manipulation
Audio-Visual Sync | Lip and speech mismatches | Flags voice-over imposters
Facial Geometry Mapping | Eye movement, micro-expressions | Validates natural human patterns
Metadata Forensics | Hidden file data | Detects tampering or file regeneration
These methods form the core of most deepfake detection software. They look for details invisible to the human eye, like the way light reflects in a person’s eyes or how facial muscles move during speech. Even the slightest irregularity can trigger a red flag.
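To make the multi-signal idea concrete, here is a minimal sketch of how a detection pipeline might combine scores from several independent checks into one decision. The check names (pixel, av_sync, geometry, metadata), the weights, and the threshold are illustrative placeholders, not any vendor’s actual API; in a real system each check would wrap a trained model.

```python
from typing import Callable, Dict

# Each check returns a score in [0, 1], where higher means "more likely fake".
Checks = Dict[str, Callable[[bytes], float]]

def combined_fake_score(media: bytes, checks: Checks, weights: Dict[str, float]) -> float:
    """Weighted average of per-technique scores (pixel, sync, geometry, metadata...)."""
    total_weight = sum(weights[name] for name in checks)
    return sum(weights[name] * fn(media) for name, fn in checks.items()) / total_weight

def is_probably_deepfake(media: bytes, checks: Checks, weights: Dict[str, float],
                         threshold: float = 0.7) -> bool:
    return combined_fake_score(media, checks, weights) >= threshold

# Example with stubbed checks (a real deployment plugs in actual detectors)
checks: Checks = {
    "pixel":    lambda m: 0.4,
    "av_sync":  lambda m: 0.9,
    "geometry": lambda m: 0.8,
    "metadata": lambda m: 0.6,
}
weights = {"pixel": 1.0, "av_sync": 2.0, "geometry": 2.0, "metadata": 0.5}
print(is_probably_deepfake(b"...video bytes...", checks, weights))
```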
Deepfake Detection in Corporate Security
For organizations, adopting a deepfake detector isn’t just a security upgrade, it’s a necessity. Financial institutions, identity verification providers, and digital platforms are integrating these solutions to prevent fraud in real time.
A growing number of companies have fallen prey to AI-generated fraud, with criminals using fabricated voices or videos to trick employees into approving transactions. One European company reportedly lost 25 million dollars after a convincing fake video call with their “CFO.” That’s not a Hollywood plot, it’s a real-world case.
Businesses now use deepfake facial recognition and deepfake image detection tools to verify faces during high-risk transactions, onboarding, and identity verification. By combining biometric data with behavioral analytics, these tools make it nearly impossible for fakes to pass undetected.
Real-World Examples of Deepfake Fraud
- Finance: A multinational bank used a deepfake detection tool to validate executive communications. Within six months, it blocked three fraudulent video call attempts that mimicked senior leaders.
- Recruitment: HR departments now use deepfake detection software to confirm job candidates are who they claim to be. AI-generated interviews have become a growing issue in remote hiring.
- Social Media: Platforms like Facebook and TikTok rely on deepfake face recognition systems to automatically flag and remove fake celebrity or political videos before they go viral.
Each case reinforces a key truth: deepfakes aren’t just a cybersecurity issue, they’re a trust issue.
Challenges in Detecting Deepfakes
Even with cutting-edge tools, detecting deepfakes remains a technological tug-of-war. Every time detection systems advance, generative AI models evolve to bypass them, creating an ongoing race between innovation and deception. Businesses face several persistent challenges in this fight.
One major issue is evolving algorithms, as AI models constantly learn new tricks that make fake content appear more authentic. Another key challenge is data bias, where systems trained on limited datasets may struggle to perform accurately across different ethnicities or under varied lighting conditions.
Additionally, high processing costs remain a concern, as real-time deepfake detection requires powerful hardware and highly optimized algorithms. On top of that, privacy concerns also play a role, since collecting facial data for analysis must align with global data protection laws such as the GDPR.
To address these challenges, open-source initiatives like Recognito Vision GitHub are fostering transparency and collaboration in AI-based identity verification research, helping bridge the gap between innovation and ethical implementation.
Integrating Deepfake Detection Into Identity Verification
Deepfakes pose the greatest risk to identity verification systems. Fraudsters use synthetic faces and voice clips to bypass onboarding checks and exploit weak verification processes.
To counter this, many companies integrate deepfake detection models with liveness detection: systems that determine whether a face belongs to a live human being or a static image. By tracking subtle movements like blinking, breathing, or pupil dilation, these systems make it much harder for fake identities to pass.
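As a rough illustration of one such signal, the sketch below computes the widely used eye aspect ratio (EAR) from six eye landmark points; a sustained low ratio suggests a blink, and a long clip with no blinks at all is suspicious. Landmark extraction itself (e.g. from a face landmark model) is assumed to happen elsewhere, and the threshold is an illustrative value rather than a production setting.

```python
import math
from typing import Sequence, Tuple

Point = Tuple[float, float]

def _dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye: Sequence[Point]) -> float:
    """EAR over six landmarks ordered p1..p6 around one eye (Soukupova & Cech, 2016)."""
    p1, p2, p3, p4, p5, p6 = eye
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))

def closed_eye_frames(ear_series: Sequence[float], threshold: float = 0.21) -> int:
    """Count frames where the eye appears closed; zero over a long clip is a red flag."""
    return sum(1 for ear in ear_series if ear < threshold)

# Example: hypothetical landmark coordinates for an open vs. a nearly closed eye
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.3), (2, 0.3), (3, 0), (2, -0.3), (1, -0.3)]
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```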
If you’re interested in testing how liveness verification works, explore Recognito’s face liveness detection SDK and face recognition SDK. Both provide tools to identify fraud attempts during digital onboarding or biometric verification.
The Business Case for Deepfake Detection Tools
So why are companies investing heavily in this technology? Because it directly protects their money, reputation, and compliance status.
1. Fraud Prevention
Deepfakes enable social engineering attacks that traditional security systems can’t catch. Detection tools provide a safeguard against voice and video scams that target executives or employees.
2. Compliance with Data Regulations
Laws like GDPR and other digital identity regulations require companies to verify authenticity. Using deepfake detection technology supports compliance by ensuring every identity is legitimate.
3. Brand Integrity
One fake video can cause irreversible PR damage. Detection systems help safeguard brand image by filtering manipulated media before it spreads.
4. Consumer Confidence
Customers feel safer when they know your brand can distinguish real users from digital imposters. Trust is the new currency of business.
Popular Deepfake Detection Solutions in 2025
Tool Name | Main Feature | Ideal Use Case
Reality Defender | Multi-layer AI detection | Financial institutions
Deepware Scanner | Video and image verification | Cybersecurity firms
Sensity AI | Online content monitoring | Social platforms
Microsoft Video Authenticator | Frame-by-frame confidence scoring | Government and enterprise use
For businesses that want to experiment with AI-based face authentication, the Face biometric playground provides an interactive environment to test and understand how facial recognition and deepfake facial recognition systems perform under real-world conditions.
What’s Next for Deepfake Detection
The war between creation and detection is far from over. As generative AI improves, the line between real and fake will blur further. However, one thing remains certain: businesses that invest early in deepfake detection tools will be better prepared.
Future systems will likely combine blockchain validation, biometric encryption, and AI-powered forensics to ensure content authenticity. Collaboration between regulators, researchers, and businesses will be crucial to staying ahead of fraudsters.
Staying Real in a World of Fakes
The rise of deepfakes is rewriting the rules of digital trust. Businesses can no longer rely on human judgment alone. They need technology that looks beneath the surface, into the data itself.
Recognito is one of the pioneers helping organizations build that trust through reliable and ethical deepfake detection solutions, ensuring businesses stay one step ahead in an AI-powered world where reality itself can be rewritten.
Frequently Asked Questions
1. How can deepfake detection protect businesses from fraud?
Deepfake detection identifies fake videos or audio before they cause financial or reputational damage, protecting companies from scams and impersonation attempts.
2. What is the most accurate deepfake detection technology?
The most accurate systems combine biometric analysis, facial geometry mapping, and liveness detection to verify real human behavior.
3. Can deepfake detection software identify audio fakes too?
Yes, modern tools analyze pitch, tone, and rhythm to detect audio deepfakes along with visual ones.
4. Is deepfake detection compliant with data protection laws like GDPR?
Yes, when implemented responsibly. Businesses must process biometric data securely and follow data protection regulations.
5. How can companies start using deepfake detection tools?
Organizations can integrate off-the-shelf detection and liveness solutions into their existing identity verification systems to enhance security and prevent fraud.
On Oct 9, 2025, in an X Space, in response to the withdrawal of the Ocean Protocol Foundation from the ASI Alliance, Sheikh said:
“You don’t try and steal from the community and get away with it that quickly, because we’re not going to just let it go, right? In the sense that, if you didn’t want to be part of the community, why did you then go into the token which belonged to the community, or which belonged to the alliance?”
This statement is false, misleading, and libelous, and this blogpost will demonstrate why.
The only three parties to the ASI Alliance are Fetch.ai Foundation (Singapore), Ocean Protocol Foundation (Singapore) and SingularityNET Foundation (Switzerland).
Neither the oceanDAO, nor Ocean Expeditions, are a party to the ASI Alliance Token Merger Agreement.
This fact, that oceanDAO (now Ocean Expeditions) is a wholly independent 3rd party from Ocean, was disclosed (Section §6) to Fetch and SingularityNET in May 2024 as part of the merger discussions.
Sheikh appears to deliberately conflate the Ocean Protocol Foundation with oceanDAO, as a tactic to mislead the community. To be clear, oceanDAO is a separate organisation that was formed in 2021 and then incorporated as Ocean Expeditions in June 2025. The reasons for this incorporation have been set out in an earlier blog post here: (https://blog.oceanprotocol.com/the-asi-alliance-from-oceans-perspective-f7848b2ad61f)
The Ocean community treasury remains in the custodianship of Ocean Expeditions guardians via a duly established, wholly legal trust in the Cayman Islands.
Every $FET token holder has sovereign property rights over its own tokens and is not answerable to the ASI Alliance as to what it does with its tokens.
Ocean Expeditions has no legal obligations to the ASI Alliance. Rather, the ASI Alliance has a clear obligation towards Ocean Expeditions as a token holder.
As a reminder relating to Fetch.ai obligations under the Token Merger Agreement, Fetch.ai is under a legally binding obligation to inject the remaining 110.9 million $FET into the $OCEAN:$FET token bridge and migration contract, and keep them available for any $OCEAN token holder who wishes to exercise their right to convert to $FET. To date, this obligation remains unmet. Fetch.ai must immediately execute this legally mandated action.
Any published information regarding this matter, unless confirmed officially by Ocean Protocol Foundation, should be assumed false.
We also request that Fetch.ai, Sheikh and all other ASI Alliance spokesmen refrain from confusing the public with false, misleading and libelous allegations that any tokens have been in any way “stolen”.
The $FET tokens Sheikh refers to are safely with Ocean Expeditions, for the sole benefit of the Ocean community.
Q&A
Q: There has recently been talk of Ocean “returning” tokens to ASI Alliance, through negotiated agreement. What’s that about?
A: This is complete nonsense. There are no tokens to return because no tokens were “stolen” or “taken”. Accordingly, it would make no sense to “return” any such tokens.
Ocean Community Tokens are the Property of Ocean Expeditions was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
State IT modernization is a perpetual challenge. For new technologies like verifiable digital credentials (secure, digital versions of physical IDs), this presents a classic "chicken and egg" problem: widespread adoption by residents and businesses is necessary to justify the investment, but that adoption won't happen without a robust ecosystem of places to use them. How can states ensure the significant investments they make today will build a foundation for a resilient and trusted digital future?
State IT leaders face increasing pressure to modernize aging infrastructure, combat rising security threats, and overcome stubborn data silos. These challenges are magnified by tight budgets and the pervasive risk of vendor lock-in. With a complex landscape of competing standards, making the right strategic decision is more difficult than ever. This uncertainty stifles the growth needed for a thriving digital identity ecosystem. The drive for modernization is clear, with over 65% of state and local governments, according to industry research, on a digital transformation journey.
Here, we'll offer a clear, actionable framework for state technology decision-makers: a practical checklist to evaluate technologies on their adherence to open standards. By embracing these principles, states can make informed choices that foster sustainable innovation and avoid costly pitfalls, aligning with a broader vision for open, secure, and interoperable digital systems that empower citizens and governments alike.
The Risks of Niche Technology
Choosing proprietary or niche technologies can seem like a shortcut, but it often leads to a dead end. These systems create hidden costs that drain resources and limit a state's ability to adapt. The financial drain extends beyond initial procurement to include escalating licensing fees, expensive custom integrations, and unpredictable upgrade paths that leave little room for innovation.
Operationally, these systems create digital islands. When a new platform doesn't speak the same language as existing infrastructure, it reinforces the data silos that effective government aims to eliminate. This lack of interoperability complicates everything from inter-agency collaboration to delivering seamless services to residents. For digital identity credentials, the consequences are even more direct. If a citizen's new digital ID isn't supported across jurisdictions or by key private sector partners, its utility plummets, undermining the entire rationale for the program.
Perhaps the greatest risk is vendor lock-in. Dependence on a single provider for maintenance, upgrades, and support strips a state of its negotiating power and agility. As a key driver for government IT leaders, avoiding vendor lock-in is a strategic priority. Niche systems also lack the broad, transparent community review that strengthens security. Unsupported or obscure software can harbor unaddressed vulnerabilities, a risk highlighted by data showing organizations running end-of-life systems are three times more likely to fail a compliance audit.
Embracing the Power of Open Standards for State IT
The most effective way to mitigate these risks is to build on a foundation of open standards. In the context of IT, an open standard is a publicly accessible specification developed and maintained through a collaborative and consensus-driven process. It ensures non-discriminatory usage rights, community-driven governance, and long-term viability. For verifiable digital credentials, this includes critical specifications like the ISO mDL standard for mobile driver's licenses (ISO 18013-5 and 18013-7), W3C Verifiable Credentials, and IETF SD-JWTs. The principles of open standards, however, extend far beyond digital credentials to all critical IT infrastructure decisions.
Adopting this approach delivers many core benefits for State government. First is enhanced interoperability, which allows disparate systems to communicate seamlessly. This breaks down data silos and improves service delivery, a principle demonstrated by the U.S. Department of State's Open Data Plan, which prioritizes open formats to ensure portability. Second, open standards foster robust security. The transparent development process allows for broad community review, which leads to faster identification of vulnerabilities and more secure, vetted protocols.
Third, they provide exceptional adaptability and future-proofing. By reducing vendor lock-in, open standards enable states to easily upgrade systems and integrate new technologies without costly overhauls. This was the goal of Massachusetts' pioneering 2003 initiative to ensure long-term control over its public records. Fourth is significant cost-effectiveness. Open standards foster competitive markets, reducing reliance on expensive proprietary licenses and enabling the reuse of components. For government agencies, cost reduction is a primary driver for adoption.
Finally, this approach accelerates innovation. With 96% of organizations maintaining or increasing their use of open-source software, it is clear that shared, stable foundations create a fertile ground for a broader ecosystem of tools and expertise.
The State IT Open Standards Checklist
This actionable checklist provides clear criteria for state IT leaders, procurement officers, and policymakers to evaluate any new digital identity technology or system. Use this framework to ensure technology investments are resilient, secure, and future-proof.
- Ability to Support Privacy Controls: Does the technology inherently support all state privacy controls, or can a suitable privacy profile be readily created and enforced? Technologies that enable privacy-preserving techniques like selective disclosure and zero-knowledge proofs are critical for building public trust.
- Alignment with Use Cases: Does the standard enable real-world transactions that are critical to residents and relying parties? This includes everything from proof-of-age for controlled purchases and access to government benefits to streamlined Know Your Customer (KYC) checks that support Bank Secrecy Act modernization.
- Ecosystem Size and Maturity: Does the standard have a healthy base of adopters? Look for active participation from multiple vendors and demonstrated investment from both public and private sectors. A mature ecosystem includes support from major platforms like Apple Wallet and Google Wallet, indicating broad market acceptance.
- Number of Vendors: Are there multiple independent vendors supporting the standard? A competitive marketplace fosters innovation, drives down costs, and is a powerful defense against vendor lock-in.
- Level of Investment: Is there clear evidence of sustained investment in tools, reference implementations, and commercial deployments? This indicates long-term viability and a commitment from the community to support and evolve the standard. A strong identity governance framework depends on this long-term stability.
- Standards Body Support: Is the standard governed by a credible and recognized standards development organization? Bodies like ISO, W3C, IETF, and the OpenID Foundation ensure a neutral, globally-vetted process that builds consensus and promotes stability.
- Interoperability Implementations: Has the standard demonstrated successful cross-vendor and cross-jurisdiction implementations? Look for evidence of conformance testing or a digital ID certification program that validates wallet interoperability and ensures a consistent user experience.
- Account/Credential Compromise and Recovery: How does the technology handle worst-case scenarios like stolen private keys or lost devices? Prioritize standards that support a robust VDC lifecycle, including credential revocation. A clear process for credential revocation, such as using credential status lists, is essential for maintaining trust.
- Scalability: Has the technology been proven in scaled, production use cases? Assess whether scaling requires custom infrastructure, which increases operational risk, or if it relies on standard, well-understood techniques. Technologies that align with established standards like NIST SP 800-63A digital identity at IAL2 or IAL3, and leverage proven cloud architectures, offer a more reliable path to large-scale deployment.
Building for tomorrow, today
The strategic shift towards globally supported open standards is not just a technological choice; it is a critical imperative for states committed to modernizing responsibly and sustainably. It is the difference between building disposable applications and investing in durable digital infrastructure.
By adopting this forward-thinking mindset and leveraging the provided checklist, state IT leaders can confidently navigate the complexities of digital identity procurement. This approach empowers states to build resilient, secure, and adaptable IT infrastructure that truly future-proofs public services.
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.
The post Enhancing Risk Insights by Integrating KYC Data with Transaction Monitoring appeared first on uqudo.
For decades, pop culture painted fraudsters as solitary figures hunched over laptops in darkened rooms. That stereotype is not only wrong, it is dangerously outdated. Today’s most damaging scams are orchestrated by global crime syndicates spanning every continent. These networks build operations tens of thousands strong, traffic people, train their “staff” and basically operate like Fortune 100 companies, but their product is deception, and the victims pay the cost.
Their scale is staggering: global fraud reached over $1 trillion in 2024. The numbers, however, tell only part of the story. The fastest-growing schemes today are app-based and social engineering scams, which are also the most common types of fraud affecting banks and financial institutions, causing record losses from reimbursements and compliance costs.
These attacks not only target systems, but exploit people, undermining trust in financial institutions, regulators, and courts, while also supporting human trafficking and forced labor. Behind every fake investment ad or romance scam lies a darker reality: compounds where people are held captive and forced to defraud strangers across the world.
Global scam centres: Where to find them
When most people think of criminal gangs, they imagine shadowy figures operating from jungles or remote hideouts. But the criminals behind the world’s largest fraud rings work very differently. These aren’t small-time operations running in the dark, they’re industrial-scale enterprises operating in plain sight.
Their structure closely mirrors those of legitimate businesses with executives overseeing operations, middle managers coaching employees and tracking KPIs and frontline workers executing scams via phone, social media or messaging apps.
Their facilities are not hidden in basements. They are large, purpose-built sites, often converted from former hotels, casinos or business parks. Located primarily in Southeast Asia – in Cambodia, Myanmar, Vietnam, and the Philippines – but increasingly also in Africa and Eastern Europe, these complexes can be vast. Investigators have uncovered huge compounds where hundreds of people work in rotating shifts, day and night. Some sites are so large they have been described as “villages,” covering dozens of acres, with syndicates often running multiple locations across regions. At scale, this means a single network can control thousands of people.
However, not all people who work for syndicates on site are there voluntarily. In fact, most of the front-line workers and call centre agents are victims of human trafficking. Lured by the promise of big money and escape from poverty, they travel across borders, only to find themselves kidnapped, captured and coerced into deceiving others.
Life inside scam compounds: A prison disguised as an office
On-site structures are designed to sustain a captive workforce. They include dormitories, shops, entertainment rooms, kitchens and even small clinics. On the surface, these facilities might resemble employee perks, and for vulnerable recruits from poorer backgrounds, they can even sound appealing, but the reality is dark: rows of desks, bunkrooms stacked with beds, CCTV cameras monitoring every corner, kitchens feeding hundreds. With razor-wire fences and armed guards at the gates, these compounds look more like prisons than offices. And in many ways, that is exactly what they are.
The “masterminds” of the crime ecosystem
Behind the compounds lies a web of transnational operators and a shadow service economy. The organisers of these operations come in many forms – from criminal entrepreneurs diversifying from drugs to online scams, to networks linked with regional crime groups such as Southeast Asian gangs, Chinese or Eastern European syndicates, and illicit operators tied to South American cartels. In some places, politically connected actors or local elites profit from – and even protect – these operations, ensuring they continue with little interference.
Another layer consists of companies that appear legitimate on paper but, in reality, supply the infrastructure that keeps the fraud industry running: phone numbers, fake identity documents, shell firms and payment processors willing to handle high-risk transactions. Investigations have uncovered how underground service providers and proxy accounts help scammers move victims’ money through banks and into crypto, using fake invoices and front companies as cover.
It’s an industrial-scale business model: acquisition channels built on fake ads, call centres with scripts and a laundering pipeline powered by mules, shell companies and crypto gateways. The setup is remarkably resilient – shut down one centre or payment route, and the network simply reroutes through another provider or jurisdiction.
How fraud hurts banks and other financial companies
For banks and financial firms, the impact is severe. Direct financial losses and costs to financial institutions are significant and rising. Banks, fintechs and credit unions report substantial direct fraud losses: nearly 60% reported losing over $500k in direct fraud in a 12-month period and a large share reported losses over $1m. These trends force firms to allocate budget away from growth into loss-prevention and remediation.
Payment fraud at scale also increases operational and compliance costs. For example, in 2022, payment fraud was reported at €4.3 billion in the European Economic Area, and consumer-reported losses in other jurisdictions show multi-billion-dollar annual impacts that increase every year – all of which ripple into higher Suspicious Activity Report (SAR) volumes, Anti-Money Laundering (AML) investigations and strained dispute and reimbursement processes for banks. These costs are both direct (reimbursed losses) and indirect (investigation time, compliance staffing, fines, customer churn and reputational damage).
Banks face a daily balancing act: tighten controls and risk frustrating customers or loosen them and risk becoming a target. Either way, regulators demand ever-stronger safeguards. And even though stronger authentication and checks can increase drop-offs during onboarding or transactions, failure to comply risks exposure to legal and regulatory trouble (recent cases tied to payment rails illustrate how banks can face large remediation obligations and lawsuits if controls are perceived as inadequate).
The long-term consequences, however, go beyond operational complexity. Fraud undermines customer trust, which is the foundation of finance. It increases costs, slows innovation and forces financial institutions to redesign products with restrictions that customers feel but rarely understand. And this can lead to a long-term loss of market share.
What financial institutions must understand about the opponent
Banks are not fighting individual perpetrators. They are facing industrialized criminal organizations. To defeat them, defensive measures must also be organized accordingly.
This means moving beyond isolated controls toward systemic resilience: robust fraud checks, stronger identity verification, continuous monitoring, transaction orchestration and faster coordination with law enforcement. But technology alone is not enough. Collaboration across institutions and industries is crucial to disrupt fraud networks that operate globally.
How financial organizations can protect themselves against financial crime
Financial firms should invest in multi-layered identity checks combining document, liveness and behavioral signals (like the ones offered by IDnow); integrate real-time AML orchestration to flag mule activity early (like the soon-to-be-launched IDnow Trust Platform); and participate in intelligence-sharing networks that connect patterns across borders.
Fraud is no longer a fringe crime. It’s a billion-dollar corporate machine. To dismantle it, financial institutions must shift from investigating fraud after it happens to preventing it before it strikes, stopping both criminals and socially engineered victims before any loss occurs.
By
Nikita Rybová
Customer & Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn
After months of preparation, the HPP mainnet is live, our core technologies are stable, and we are now entering the most exciting phase of our journey — growth and adoption.
1. Technical Milestones Achieved
All major technical roadmap milestones have been met and are now production-ready. This includes the mainnet launch and multiple project integrations across the HPP ecosystem. The network is built for scale, equipped for cross-chain connectivity, and ready for full activation.
2. Migration and Market Readiness
The migration infrastructure, which includes the official bridge and migration portal, is complete and fully tested. Legacy Aergo and AQT token holders will be able to transition seamlessly into HPP through a secure, verifiable process designed to ensure accuracy and transparency across chains.
With the full network framework in place, HPP is now entering the growth and liquidity phase. We are in coordination with several major exchanges to align token listings, update technical integrations, and synchronize branding across trading platforms. These efforts aim to create a strong, sustainable market structure that supports institutional participation, community accessibility, and long-term ecosystem stability.
3. Building a Real-World Breakthrough
We are developing one of the most significant blockchain real-world use cases to date. This initiative combines a large user base, mission-critical data, and enterprise-grade requirements. It will demonstrate how our L2 infrastructure can power high-value, data-driven applications that go beyond typical blockchain use cases.
At the same time, we are working with enterprise partners, including early Aergo collaborators, to adopt HPP’s advanced features through the Noosphere layer.
4. Keeping You Updated
To ensure full transparency, we are continuously updating our HPP Living Roadmap, a real-time tracker that shows ongoing technical progress, upcoming milestones, and partner developments as they happen.
The technology is ready, the ecosystem is forming, and the next phase is set to begin. HPP is moving from readiness to execution, and the wait is almost over.
HPP Update: Technology Ready, Market Expansion Underway was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.
Read the first installment in our series on The Future of Digital Identity in America here and the second installment here.
If policy sets the rules of the road, technology lays the pavement. Without strong technical foundations, decentralized identity would remain an inspiring vision but little more. What makes it real are the advances in cryptography, open standards, and system design that let people carry credentials in their own wallets, present them securely, and protect their privacy along the way. These technologies aren’t abstract: they are already running in production, powering mobile driver’s licenses, digital immigration pilots, and cross-border banking use cases.
Why Technology Matters for Identity
Identity is the trust layer of the digital world. Every interaction - logging into a platform, applying for a loan, proving eligibility for benefits - depends on it. Yet today, that trust layer is fractured. We scatter our identity across countless accounts and passwords. We rely on federated logins controlled by Big Tech platforms. Businesses pour money into fraud prevention while governments struggle to verify citizens securely.
The costs of this fragmentation are staggering. In 2024 alone, Americans reported record losses of $16.6 billion to internet crime (FBI IC3) and $12.5 billion to consumer fraud (FTC). At the institutional level, the average cost of a U.S. data breach hit $10.22 million in 2025 (IBM). And the risks are accelerating: synthetic identity fraud drained an estimated $35 billion in 2023 (Federal Reserve), while FinCEN has warned that criminals are now using deepfakes, synthetic documents, and AI-generated audio to bypass traditional checks at scale.
Decentralized identity offers a way forward, but only if the technology can make it reliable, usable, and interoperable. That’s where verifiable credentials, decentralized identifiers, cryptography, and open standards come in.
The Standards that Make it Work
Every successful infrastructure layer in technology—whether it was TCP/IP for the internet or HTTPS for secure web traffic—has been built on standards. Decentralized identity is no different. Standards ensure that issuers, holders, and verifiers can interact without building one-off integrations or relying on proprietary systems.
Here are the key ones shaping today’s decentralized identity landscape:
- W3C Verifiable Credentials (VCs): This is the universal data model for digital credentials. A VC is essentially a cryptographically signed digital version of something like a driver’s license, diploma, or membership card. It defines how the credential is structured (with attributes, metadata, and signatures) so that anyone who receives it knows how to parse and verify it.
- Decentralized Identifiers (DIDs): DIDs are globally unique identifiers that are cryptographically verifiable and not tied to any single registry. Unlike email addresses or usernames, which depend on central providers, a DID is self-sovereign. For example, a university might issue a credential to did:example:university12345. The DID resolves to metadata (such as public keys) that allows verifiers to check signatures and authenticity.
- OID4VCI and OID4VP (OpenID for Verifiable Credential Issuance and Presentation): These protocols define how credentials move between systems. They extend OAuth2 and OpenID Connect, the same standards that handle billions of secure logins each day. With OID4VCI, you can request and receive a credential securely from an issuer. With OID4VP, you can present that credential to a verifier. This reuse of familiar login plumbing makes adoption easier for developers and enterprises.
- SD-JWT (Selective Disclosure JWTs): A new extension of JSON Web Tokens that enables selective disclosure directly within a familiar JWT format. Instead of revealing all fields in a token, SD-JWTs let the holder decide which claims to disclose, while still allowing the verifier to check the issuer’s signature. This bridges modern privacy-preserving features with the widespread JWT ecosystem already in use across industries.
- ISO/IEC 18013-5 and 18013-7: These international standards define how mobile driver’s licenses (mDLs) are presented both in person and online. For example, 18013-5 specifies the NFC and QR code mechanisms for proving your identity at a checkpoint without handing over your phone. 18013-7 expands these definitions to online use cases—critical for remote verification scenarios.
- ISO/IEC 23220-4 (mdocs): A broader framework for mobile documents (mdocs), extending beyond driver’s licenses to other government-issued credentials like passports, resident permits, or voter IDs. This standard provides a consistent way to issue and verify digital documents across multiple contexts, supporting both offline and online verification.
- NIST SP 800-63-4: The National Institute of Standards and Technology publishes the “Digital Identity Guidelines,” setting out levels of assurance (LOAs) for identity proofing and authentication. The latest revision reflects the shift toward verifiable credentials and modern assurance methods. U.S. federal agencies and financial institutions often rely on NIST guidance as their baseline for compliance.
Reading the list above, you may realize that one challenge in following this space is the sheer number of credential formats in play—W3C Verifiable Credentials, ISO mDLs, ISO 23220 mdocs, and SD-JWTs, among others. Each has its strengths: VCs offer flexibility across industries, ISO standards are backed by governments and transportation regulators, and SD-JWTs connect privacy-preserving features with the massive JWT ecosystem already used in enterprise systems. The key recommendation for anyone trying to make sense of “what’s best” is not to pick a single winner, but to look for interoperability.
Wallets, issuers, and verifiers should be designed to support multiple formats, since different industries and jurisdictions will inevitably favor different standards. In practice, the safest bet is to align with open standards bodies (W3C, ISO, IETF, OpenID Foundation) and ensure your implementation can bridge formats rather than being locked into just one.
The following sections detail (in a vastly oversimplified way, some may argue) the strengths, weaknesses, and best fit by credential format type.
W3C Verifiable Credentials (VCs)
A flexible, standards-based data model for any kind of digital credential, maintained by the World Wide Web Consortium (W3C).
- Strengths: Broadly applicable across industries, highly extensible, and supports advanced privacy techniques like selective disclosure and zero-knowledge proofs.
- Limitations: Still maturing; ecosystem flexibility can lead to fragmentation without a specific implementation profile; certification programs are less mature than ISO-based approaches; requires investment in verifier readiness.
- Best fit: Used by universities, employers, financial institutions, and governments experimenting with general-purpose digital identity.
ISO/IEC 18013-5 & 18013-7 (Mobile Driver’s Licenses, or mDLs)
International standards defining how mobile driver’s licenses are issued, stored, and verified.
- Strengths: Mature international standards already deployed in U.S. state pilots; supported by TSA TSIF testing for federal checkpoint acceptance; backed by significant TSA investment in CAT-2 readers nationwide; privacy-preserving offline verification.
- Limitations: Narrow scope (focused on driver’s licenses); complex implementation; limited support outside government and DMV contexts.
- Best fit: State DMVs, airports, traffic enforcement, and retail environments handling age-restricted sales.
ISO/IEC 23220-4 (“Mobile Documents,” or mdocs)
A broader ISO definition expanding mDL principles to other official credentials such as passports, residence permits, and social security cards.
- Strengths: Extends interoperability to a broader range of credentials; supports both offline and online presentation; aligned with existing ISO frameworks.
- Limitations: Still early in deployment; adoption and vendor support are limited compared to mDLs.
- Best fit: Immigration, cross-border travel, and civil registry systems.
SD-JWT (Selective Disclosure JSON Web Tokens)
A privacy-preserving evolution of JSON Web Tokens (JWTs), adding selective disclosure capabilities to an already widely used web and enterprise identity format.
Strengths: Easy to adopt within existing JWT ecosystems; enables selective disclosure without requiring new infrastructure or wallets. Limitations: Less flexible than VCs; focused on direct issuer-to-verifier interactions; limited for long-term portability or offline use. Best fit: Enterprise identity, healthcare, and fintech environments already built around JWT-based authentication and access systems.Together, these standards create the backbone of interoperability. They ensure that a credential issued by the California DMV can be recognized at TSA, or that a diploma issued by a European university can be trusted by a U.S. employer. Without them, decentralized identity would splinter into silos. With them, it has the potential to scale globally.
How Trust Flows Between Issuers, Holders, and VerifiersDecentralized identity works through a triangular relationship between issuers, holders, and verifiers. Issuers (such as DMVs, universities, or employers) create credentials. Holders (the individuals) store them in their wallets. Verifiers (such as banks, retailers, or government agencies) request proofs.
What makes this model revolutionary is that issuers and verifiers don’t need to know each other directly. Trust doesn’t come from an integration between the DMV and the bank, for example. It comes from the credential itself. The DMV signs a driver’s license credential. You carry it. When you present it to a bank, the bank simply checks the DMV’s digital signature.
Think about going to a bar. Today, you hand over a plastic driver’s license with far more information than the bartender needs. With decentralized identity, you would simply present a cryptographic proof that says, “I am over 21,” without revealing your name or address. The bartender’s system verifies the DMV’s signature and that’s it - proof without oversharing.
Cryptography at WorkTo make this work, at the core of decentralized identity lies one deceptively simple but immensely powerful concept: the digital signature.
A digital signature is created when an issuer (say, a DMV or a university) uses its private key to sign a credential. This cryptographic signature is attached to the credential itself. When a holder later presents the credential to a verifier, the verifier checks the signature using the issuer’s public key.
If the credential has been altered in any way—even by a single character—the signature will no longer match. If the credential is valid, the verifier has instant assurance that it really came from the claimed issuer.

This creates trust without intermediaries.
Imagine a university issues a digital diploma as a verifiable credential. Ten years later, you apply for a job. The employer asks for proof of your degree. Instead of calling the university registrar or requesting a PDF, you simply send the credential from your wallet. The employer’s system checks the digital signature against the university’s public key. Within seconds, it knows the credential is genuine.
This removes bottlenecks and central databases of verification services. It also shifts the trust anchor from phone calls or PDFs—which can be forged—to mathematics. Digital signatures are unforgeable without the private key, and the public key can be widely distributed to anyone who needs to verify.
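As a rough illustration of that flow, the sketch below uses the widely available Python cryptography library to sign a credential payload with an issuer key and verify it with the corresponding public key. It demonstrates only the bare signing-and-verification step, not any particular credential format.

```python
# Minimal sketch of issue-then-verify using Ed25519 signatures.
# Real credential formats add canonicalization, key discovery via DIDs,
# and structured proof objects on top of this basic step.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side (e.g., the university): sign the credential bytes.
issuer_key = Ed25519PrivateKey.generate()
credential_bytes = b'{"degree": "B.Sc. Computer Science", "holder": "did:example:holder67890"}'
signature = issuer_key.sign(credential_bytes)

# Verifier side (e.g., the employer): check the signature against the
# issuer's public key, typically resolved from the issuer's DID document.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, credential_bytes)
    print("Credential is authentic and unaltered.")
except InvalidSignature:
    print("Credential was tampered with or was not issued by this issuer.")
```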
Digital signatures also make revocation possible. If a credential is suspended or withdrawn, the issuer can publish a revocation list. When a verifier checks the credential, it not only validates the signature but also checks whether it’s still active.
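A revocation check can be as simple as the sketch below: the issuer publishes a list of revoked credential identifiers, and the verifier consults it after validating the signature. Production deployments typically use more scalable and privacy-friendly mechanisms such as status lists, but the logic is the same; the identifiers here are hypothetical.

```python
# Hypothetical revocation check: a valid signature alone is not enough;
# the verifier also confirms the credential has not been revoked.
revoked_credential_ids = {"urn:credential:dl-000123", "urn:credential:dl-000987"}

def is_still_active(credential_id: str) -> bool:
    """Return True if the credential is not on the issuer's revocation list."""
    return credential_id not in revoked_credential_ids

print(is_still_active("urn:credential:dl-000456"))  # True: not revoked
print(is_still_active("urn:credential:dl-000123"))  # False: revoked
```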
Without digital signatures, decentralized identity wouldn’t work. With them, credentials become tamper-proof, portable, and verifiable anywhere.
Selective Disclosure: Sharing Just EnoughOne of the major problems with physical IDs is oversharing. As we detailed in the scenario earlier, you only want to show a bartender that you are over 21, without revealing your name, home address, or exact date of birth. That information is far more than the bartender needs—and far more than you should have to give.
Selective disclosure, one of the other major features underpinning decentralized identity, fixes this. It allows a credential holder to reveal only the specific attributes needed for a transaction, while keeping everything else hidden.
Example in Practice: Proving Age
A DMV issues you a credential with multiple attributes: name, address, date of birth, license number. At a bar, a bartender verifies that you are over 21 by scanning your digital credential QR code. The verifier checks the DMV’s signature on the proof and confirms it matches the original credential. The bartender sees only a confirmation that you are over 21. They never see your name, address, or full birthdate.

Example in Practice: Proving Residency
A city issues residents a digital credential for municipal benefits. A service provider asks for proof of residency. You present your digital credential and the service provider verifies that your “zip code is within city limits” without exposing your full street address.

Selective disclosure enforces the principle of data minimization. Verifiers get what they need, nothing more. Holders retain privacy. And because the cryptography ensures the disclosed attribute is tied to the original issuer’s signature, verifiers can trust the result without seeing the full credential.
This flips the identity model from “all or nothing” to “just enough.”
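One common way to implement this, used by SD-JWT and related formats, is for the issuer to sign salted hashes of each attribute rather than the raw values; the holder later reveals only the salt-and-value pairs they choose, and the verifier recomputes the hashes to confirm they match what was signed. The sketch below illustrates that idea in simplified form and is not a spec-compliant SD-JWT implementation.

```python
# Simplified illustration of hash-based selective disclosure (the idea
# behind SD-JWT), not a spec-compliant implementation.
import hashlib
import secrets

attributes = {"name": "Alex Doe", "address": "123 Main St", "age_over_21": "true"}

# Issuer: salt and hash every attribute, then sign only the digests
# (the signing step itself is omitted here; see the signature sketch above).
disclosures = {
    key: (secrets.token_hex(16), value) for key, value in attributes.items()
}
signed_digests = {
    key: hashlib.sha256(f"{salt}:{key}:{value}".encode()).hexdigest()
    for key, (salt, value) in disclosures.items()
}

# Holder: reveal only the attribute the bartender needs.
revealed = {"age_over_21": disclosures["age_over_21"]}

# Verifier: recompute the digest for the revealed attribute and compare it
# to the digest covered by the issuer's signature.
for key, (salt, value) in revealed.items():
    recomputed = hashlib.sha256(f"{salt}:{key}:{value}".encode()).hexdigest()
    assert recomputed == signed_digests[key]
    print(f"{key} = {value} is vouched for by the issuer; nothing else is disclosed.")
```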
Example in Practice: Sanctions ComplianceUnder the Bank Secrecy Act (BSA) and OFAC requirements, financial institutions must verify that customers are not on the Specially Designated Nationals (SDN) list before opening or maintaining accounts. Today, this process often involves collecting and storing excessive personal data—full identity documents, addresses, and transaction histories—simply to prove a negative.
In our U.S. Treasury RFC response, we outlined how verifiable credentials and zero-knowledge proofs (ZKPs) can modernize this process. Instead of transmitting complete personal data, a customer could present a cryptographically signed credential from a trusted issuer attesting that they have already been screened against the SDN list. A ZKP allows the verifier (e.g., a bank) to confirm that the check was performed and that the customer is not on the list—without ever seeing or storing the underlying personal details. This approach satisfies regulatory intent, strengthens auditability, and dramatically reduces the risks of overcollection, breaches, and identity theft.
ZKPs are particularly important for compliance-heavy industries like finance, healthcare, and government services. They allow institutions to meet regulatory requirements without creating data honeypots vulnerable to breaches.
They also open the door to new forms of digital interaction. Imagine a voting system where you can prove you’re eligible to vote without revealing your identity, or a cross-border trade platform where businesses prove compliance with customs requirements without exposing their full supply chain data.
ZKPs represent the cutting edge of privacy-preserving technology. They transform the old equation, “to prove something, you must reveal everything,” into one where trust is established without unnecessary exposure.
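Production-grade ZKPs rely on sophisticated proof systems, but the underlying idea can be shown with a classic toy example: a Schnorr-style proof that you know a secret value without revealing it. The parameters below are deliberately simple and for illustration only; real deployments use standardized elliptic-curve groups and audited libraries.

```python
# Toy Schnorr-style zero-knowledge proof of knowledge of a secret x such
# that y = g^x mod p, made non-interactive with the Fiat-Shamir heuristic.
# Illustrative parameters only; this is not production cryptography.
import hashlib
import secrets

p = 2**521 - 1   # a large prime (toy choice for readability)
g = 5

def challenge(y: int, t: int) -> int:
    return int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big")

def prove(x: int) -> tuple[int, int, int]:
    """Prover: show knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(p - 1)       # one-time random nonce
    t = pow(g, r, p)                   # commitment
    s = (r + challenge(y, t) * x) % (p - 1)
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: accepts iff g^s == t * y^c (mod p); learns nothing about x."""
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = secrets.randbelow(p - 1)      # e.g., a value attesting "screened and clear"
assert verify(*prove(secret))
print("Verifier is convinced the prover knows the secret, without ever seeing it.")
```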
Challenges and the Path ForwardDecentralized identity isn’t just a lofty principle about autonomy and privacy. At its core, it is a set of technologies that make those values real.
Standards ensure interoperability across issuers, wallets, and verifiers. Digital signatures anchor credentials in cryptographic trust. Selective disclosure prevents oversharing, giving people control of what they reveal. Zero-knowledge proofs allow compliance and verification without sacrificing privacy.

These aren’t abstract concepts. They are already protecting millions of people from fraud, reducing compliance costs, and embedding privacy into everyday transactions.
However, there are still hurdles. Interoperability across borders and industries is not guaranteed. Wallets must become as easy to use as a boarding pass on your phone. Verifiers need incentives to integrate credential checks into their systems. And standards need governance frameworks that help verifiers decide which issuers to trust.
None of these challenges are insurmountable, but they require careful collaboration between policymakers, technologists, and businesses. Without alignment, decentralized identity risks becoming fragmented—ironically recreating the silos it aims to replace.
SpruceID’s RoleSpruceID works at this intersection, building the tooling and standards that make decentralized identity practical. Our SDKs help developers issue and verify credentials. Our projects with states, like California and Utah, have proven that privacy and usability can go hand in hand. And our contributions to W3C, ISO, and the OpenID Foundation help ensure that the ecosystem remains open and interoperable.
Our objective is to make identity something you own—not something you rent from a platform. The technology is here. The challenge now is scaling it responsibly, with privacy and democracy at the center.
The trajectory is clear. Decentralized identity is evolving from a promising technology into the infrastructure of trust for the digital age. Like HTTPS, it will become invisible. Unlike many systems that came before it, it is being designed with people at the center from the very start.
This article is part of SpruceID’s series on the future of digital identity in America. Read more in the series:
SpruceID Digital Identity in America Series
Foundations of Decentralized Identity
Digital Identity Policy Momentum
The Technology of Digital Identity (this article)
Privacy and User Control (coming soon)
Practical Digital Identity in America (coming soon)
Enabling U.S. Identity Issuers (coming soon)
Verifiers at the Point of Use (coming soon)
Holders and the User Experience (coming soon)
Today’s CISOs agree that identity-driven threats are a growing challenge, driven by complex environments, mounting technical debt, cloud adoption, and sprawling identity ecosystems. Third-party breach reports, such as research from Verizon and IBM, confirm this and point to identity as a primary attack vector. Gartner also recognizes it, explicitly warning:
“Organizations lacking comprehensive visibility into identity data face significant security vulnerabilities and operational inefficiencies.” — Gartner
In this second blog of our three-part series on Gartner’s 2025 Digital Identity Hype Cycle, we explore the critical category of Identity Visibility and Intelligence Platforms, where Radiant Logic is recognized for its leadership as a Sample Vendor. This recognition affirms our strategic commitment to helping organizations secure and operationalize identity through real-time observability.
The Missing Piece in IAM Maturity
Despite years of investment, many IAM programs remain stuck at the operational layer, focused on provisioning, password management, and compliance reporting. What they are missing is observability.
Why Identity Visibility and Observability“Identity Visibility and Intelligence platforms are essential in navigating complex identity environments, enabling proactive identity risk management and consistent security policy enforcement.” — Gartner
Radiant Logic addresses identity sprawl at its root by delivering a unified identity data fabric that allows for authoritative, real-time visibility across your entire identity ecosystem. This eliminates blind spots and resolves inconsistencies across fragmented systems. Unlike legacy tools, RadiantOne offers a single, trusted source of truth for both human and non-human identities and their access relationships.
But visibility alone is not enough. Radiant Logic further provides near real-time event detection through active observability of changes, controls, and processes at the identity data layer. Proactive detection and intervention are foundational to shrinking the attack surface and stopping compromise attempts before they start.
Security operations teams gain instant visibility, accelerated threat detection, and proactive risk management.
Cleaning Up the Identity FoundationIdentity observability is the connective tissue between your existing controls and the proactive, intelligent security posture demanded by today’s threat landscape. It is worth pointing out that Identity Observability is not just another feature; it is what allows organizations to mature their Identity and Access Management architecture.
Modern IAM controls are only as resilient as the data that feeds them. As Gartner underscores, effective IAM starts with visibility into every account, access relationship, and policy. RadiantOne strengthens identity hygiene at the data layer by detecting orphaned or misaligned accounts, redundant entitlements, incorrectly provisioned users, and unmanaged users and groups. This ensures that SSO, IGA, PAM, Zero Trust, and SIEM tools ingest complete, accurate, and actionable data.
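As a simplified illustration of the kind of data-layer hygiene check described here (not Radiant Logic's actual implementation; the data and field names are hypothetical), the sketch below reconciles a directory's accounts against an authoritative HR roster to surface orphaned accounts and high-risk entitlements.

```python
# Illustrative identity-hygiene check: reconcile directory accounts against
# an authoritative HR roster to flag orphaned accounts and risky entitlements.
# All data and field names are hypothetical.
hr_roster = {"adiaz", "bkumar", "cchen"}                       # active employees per HR
directory_accounts = {
    "adiaz":   {"entitlements": {"vpn", "crm"}},
    "bkumar":  {"entitlements": {"vpn"}},
    "jsmith":  {"entitlements": {"vpn", "prod-db-admin"}},     # left the company
    "svc-old": {"entitlements": {"prod-db-admin"}},            # unowned service account
}

orphaned = {acct for acct in directory_accounts if acct not in hr_roster}
risky = {
    acct: data["entitlements"]
    for acct, data in directory_accounts.items()
    if acct in orphaned and "prod-db-admin" in data["entitlements"]
}

print("Orphaned accounts:", sorted(orphaned))
print("Orphaned accounts with high-risk entitlements:", risky)
```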
With the rise of Agentic AI, the stakes are higher than ever. LLMs increasingly consume and act on enterprise identity data, making its integrity and continuous monitoring both a compliance obligation (from frameworks such as DORA) and a security imperative against data poisoning, drift, and misconfigurations. By unifying and securing identity data at the source, RadiantOne reduces technical debt, enforces consistent policy, and strengthens risk-based decisions, all actions that effectively shrink the attack surface while enabling AI-powered security operations.
The Business Impact of Identity VisibilityFor most enterprises, the identity layer is now the largest and most dynamic attack surface. Every new SaaS subscription, every contractor onboarded, and every micro-service deployed creates new accounts, credentials, and entitlements. Increasingly, this also includes AI agents. Without observability, these changes accumulate, silently introducing risk, eroding compliance, and slowing down transformation programs.
Identity Visibility and Intelligence platforms like RadiantOne directly impact three critical dimensions:
Reduced Risk – Shrink the window of exposure by surfacing dormant accounts, excessive entitlements, and anomalous activity before adversaries exploit them
Streamlined Compliance – Optimize certifications, audits, and regulatory reporting (e.g., DORA, NIS2, SOX) by automating lineage and reconciliation at the identity data layer
Increased Agility – Enable faster M&A integration, smoother cloud adoption, and more resilient Zero Trust enforcement by providing a single, unified source of truth for identity

When identity data is unified, observable, and continuously governed, organizations can accelerate digital initiatives without sacrificing security. That is the true value of being recognized in Gartner’s Hype Cycle: it validates that Identity Visibility is not only a technical enabler but also a business imperative.
The Path ForwardAs Gartner’s 2025 Digital Identity Hype Cycle confirms, Identity Visibility and Intelligence is no longer optional — it is foundational. Observability is not a standalone feature or a bolt-on product: it is the critical layer that sits atop your identity fabric, transforming fragmented data into actionable intelligence.
By adding observability to the identity fabric, organizations mature their IAM stack from reactive operations to proactive defense, equipping SSO, IGA, PAM, ZTA, and SIEM tools with the clean, real-time insights they need to act decisively.
Learn MoreExplore how Radiant Logic’s RadiantOne platform can strengthen your organization’s security and mature your IAM program. Contact us today.
Disclaimers:
Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
GARTNER is a registered trademark and service mark of Gartner and Hype Cycle is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
The post Gartner Recognizes Radiant Logic as Leader in Identity Visibility and Intelligence Platforms appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.
By Trevor Butterworth
Indicio has officially joined the NVIDIA Inception Program, a global initiative that supports startups advancing artificial intelligence and high-performance computing. Indicio will focus on applying decentralized identity and Verifiable Credential technology — in the form of Indicio ProvenAI — to AI systems.
ProvenAI enables AI agents and their users to authenticate each other using decentralized identifiers and Verifiable Credentials. This means an AI agent can cryptographically prove who it is interacting with and that entity can do the same — all before any data is shared.
Once identified, a person or organization can give permission to the AI agent to access their data and can delegate authority to the agent to act on behalf of the person or organization.
To monetize AI, agents and users need to be able to trust each otherAgentic AI and AI agents cannot fulfill their mission without accessing data. The more data they can access, the easier it is to execute a task. But this exposes the companies that use them to significant risk.
How can they be sure their agent is interacting with a real person, an authentic user or customer? And how can that user, similarly, verify the authenticity of the agent?
The simplest way is to issue each with a Verifiable Credential, a cryptographic way to authenticate not only an identity but also the data being shared. Importantly, this cryptography is AI-resistant, meaning it can’t be reengineered by people using AI to try to alter the underlying information.
The critical benefit of using Verifiable Credentials for this task is that neither party needs to phone home to crosscheck a database during authentication or authorization. Because a Verifiable Credential is digitally signed, the original credential issuer can be verified without having to contact the issuer. The information in the credential can also be cryptographically checked to see whether it has been altered. As a result, if you know the identity of the credential issuer and you trust that issuer, you can trust the contents of the credential and act on them instantly.
With Verifiable Credentials, AI’s GDPR nightmare goes awayFor AI agents to be useful, they must be able to access personal data — lots of it. For this to be compliant with data privacy regulations such as GDPR, a person must be able to consent to share their data. There’s just no way of getting around this.
Verifiable Credentials make consent easy because the person or organization holds their data in a credential, or can provide a credential that grants a party permission to access data. Once a user consents to share their data, you’ve met a critical requirement of GDPR, and that decision can be recorded for audit.
But Verifiable Credentials — or at least some credential formats — also allow for selective disclosure or zero-knowledge proofs, which means that the data and purpose for which it is being used can be minimized, thereby fulfilling other GDPR requirements.
As AI agents will also need to access large amounts of data belonging to people and organizations but held elsewhere, a Verifiable Credential can be used by a person or organization to delegate authority to access that data, with the rock-solid assurance that this permission has been given by the legitimate data subject.
Decentralized governance, the engine for autonomous systemsThese features create a seamless way for AI agents to operate. But things get even more exciting when we look at the way Verifiable Credentials are governed.
With Indicio Proven and ProvenAI, a network is governed by machine-readable files sent to the software of each participant in the network (i.e., the credential issuer(s), holders, and verifiers). These files tell each participant’s software who is a trusted issuer, who is a trusted verifier, and which information needs to be presented for which use case, and in what order.
Indicio DEGov enables the natural authority for a network or use case to orchestrate interaction by publishing a machine-readable governance file. And this orchestration can be configured to respect hierarchical levels of authority. The result is seamless interaction driven by automated trust.
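As a rough sketch of what such a machine-readable governance file might contain, consider the structure below. It is hypothetical and is not Indicio DEGov's actual schema; it simply shows trusted issuers and verifiers plus the presentation required for one workflow.

```python
# Hypothetical machine-readable governance file for a credential ecosystem,
# expressed as a Python structure for readability. The ecosystem name, DIDs,
# and workflow are invented for illustration.
governance_file = {
    "ecosystem": "example-travel-network",
    "version": "1.0",
    "trusted_issuers": ["did:example:airline-hq", "did:example:border-agency"],
    "trusted_verifiers": ["did:example:airport-kiosk"],
    "workflows": {
        "pre-boarding-check": {
            "required_presentations": ["PassportCredential", "TravelAuthorization"],
            "order": ["PassportCredential", "TravelAuthorization"],
        }
    },
}

def issuer_is_trusted(issuer_did: str) -> bool:
    """Each participant's agent consults the governance file before accepting a credential."""
    return issuer_did in governance_file["trusted_issuers"]

print(issuer_is_trusted("did:example:border-agency"))   # True
print(issuer_is_trusted("did:example:unknown-issuer"))  # False
```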
Now think about autonomous systems where each connected element has a Verifiable Identity that can be orchestrated to interact with an AI agent. You have a very powerful way to apply authentication and information sharing to very complex systems in a highly secure way. Every element of this system can be known, can authenticate another element, and can share data in complex workflows. Each interaction can be made secure and element-to-element.
Indicio is making a safe, secure, trusted AI future possibleSecure and trustworthy authentication is foundational to unlocking the market benefits of AI and enabling AI networks to interoperate and scale. This is why we were the first decentralized identity company to connect Verifiable Credentials to AI and the first to offer a Verifiable Credential AI solution — Indicio ProvenAI — recognized by Gartner in its latest report on decentralized identity.
We’re tremendously excited to be a part of the NVIDIA Inception Program. We see decentralized identity as a catalytic technology to AI, one that can quickly unlock market opportunities, and support AI agents and agentic AI.
Learn how Indicio ProvenAI can help your organization build secure, verifiable AI systems. Contact Indicio to schedule a demo or explore integration options for your enterprise.
The post Indicio joins NVIDIA Inception Program to bring Verifiable Credentials to AI systems appeared first on Indicio.
FNBC has been fined $601,139.80 for five AML violations. FINTRAC found that the firm failed to meet AML obligations, including failures to submit suspicious activity reports, update client information, and perform due diligence.
The post Canada Bank Fined $601,139.80 for Five Major AML Breaches first appeared on ComplyCube.
People are rightly angry and frustrated. No one wins in this current state of unease, lack of information and transparency, and mudslinging. Ocean sees no benefit in throwing around unfounded and false allegations, or in attempting to sully the reputations of the projects and people — it just damages both the ASI and Ocean communities unnecessarily.
Ocean has chosen to remain silent until now, out of respect for the ongoing legal processes. But given so many flagrant violations of decency, Ocean would like to take this opportunity to rebut the many publicly voiced false allegations, libels, and baseless claims being irresponsibly directed at the Ocean Protocol project. These false and misleading statements serve only to further inflame our community, inciting anger and causing needless additional harm to the ASI and Ocean communities.
There are former $OCEAN token holders who converted to $FET and who now face the dilemma of whether to stay with $FET, return to $OCEAN, or to liquidate and be completely done with the drama.
Rather than throw unsubstantiated jabs, I would like to provide a full context with supporting evidence and links, to address many of the questions around the ASI Alliance, Ocean’s participation, and the many incorrect allegations thrown out to muddy the waters and sow confusion among our community.
This blogpost will be followed up with a claim-by-claim rebuttal of all the allegations that have been directed towards Ocean since October 9, 2025 but for now, this blog gives the context and Ocean’s perspective.
I encourage you to read it all, as it reflects months of conversations that reveal the context and progression of events, so that you can best understand why Ocean took steps to chart a separate course from the ASI Alliance. We hope the ASI Alliance can continue its work and we wish them well. Meanwhile, Ocean will go its own way, as we have every right to do.
These are the core principles of decentralization — non-coercion, non-compulsion, individual agency, sovereign property ownership and the power of you, the individual, to own and control your life.
Table of Contents
∘ 1. The Builders
∘ 2. June 2014 — Audacious Goals
∘ 3. January 2024 — AI Revolution in Full Swing
∘ 4. March 2024 — ASI Alliance
∘ 5. April 2024 — A Very Short Honeymoon
∘ 6. May 2024 — Legal Dispute Delays the ASI Launch
∘ 7. June 2024 — Re-Cap Contractual Obligations of the ASI Alliance
∘ 8. August 2024 — Cudos Admittance into ASI Alliance
∘ 9. December 2024 — SingularityNET’s Spending, Declining $FET Token Price and the Ocean community treasury
∘ 10. January 2025 — oceanDAO Shifts from a Passive to an Active Token Holder
∘ 11. May 2025 — oceanDAO Establishes in Cayman
∘ 12. June 2025 — Fetch’s TRNR “ISI” Deal
∘ 13. June 2025 — oceanDAO becomes Ocean Expeditions
∘ 14. June 2025 — ASI Alliance Financials
∘ 15. July 2025 — Ocean Expeditions Sets Out to Diversify the Ocean community Treasury
∘ 16. August 2025 — Ocean Requests for a Refill of the $OCEAN/$FET Token Migration Contract
∘ 17. August 2025 — A Conspiracy To Force Ocean to Submit
∘ 18. October 2025 — Ocean Exits the ASI Alliance
∘ 19. Summary
Trent and I are dreamers with a pragmatic builder ethos. We have done multiple startups together and what unifies us is an unquenchable belief in human potential and technological progress.
To live our beliefs, we’ve started multiple companies between us. One of the most rewarding things I’ve done in my life is to join forces with, and have the honor to work with Trent.
Builders create an inordinate amount of value for society. Look at any free and open society where capital is allowed to be deployed to launch new ideas — they thrive by leveraging the imagination, brainpower and hard work needed to bring about technological progress. These builders attract an ecosystem of supporters and services, but also as is natural, those who seek to earn easy money.
Builders also start projects with good faith, forthrightness and a respect for the truth, since everyone who has played this game knows that the easiest person to lie to is yourself. So, it’s best to constantly check assumptions and stand grounded on truth, even if wildly uncomfortable. Truth is always the best policy, sometimes because it is the hardest path. It also means that one doesn’t need to live a web of lies, in a toxic environment and constantly wondering when the lie catches up with you.
Builders focus on Win-Win outcomes, seeking to maximize value for everyone in the game, and make the best of bad situations by building our way through it. No one wants to waste time, with what limited time one has on Earth, least of all, to leave the world worse off for being in it. We all want to have a positive impact, however small, so that our existence has meaning in the void of the cosmic whole.
June 2014 — Set Audacious GoalsTwelve years ago, Trent and I decided to try something audacious — to build a global, decentralized network for data and AI that serves as a viable alternative to the centralized, corrupted and captured platforms. We had been inspired by Snowden, Assange and Manning, and horrified to learn what lies we’d been told. If successful, we could impact the lives of millions of developers, who in turn, could touch the lives of everyone on earth. It could be our technology that powered the revolution.
Trent had pulled this off before. In a prior startup, Trent was a pioneer at deploying AI at scale. His knowledge and the software he’d built helped to drive Moore’s Law for two decades. Every single device you hold or use has a small piece of Trent’s intellect, embedded at the atomic level to make your device run so you can keep in touch with loved ones, scroll memes, and do business globally.
We’d learnt from the builders of the Domain Name System (DNS), Jim Rutt and David Holtzman who are legends in their own right, that the most valuable services on earth are registries — Facebook for your social graph, Amazon for purchases, and, surprisingly, governments with all the registry services they provide. We delved into the early foundations of the Internet and corresponded with Ted Nelson, one of the architects of our modern internet in the early 1960’s. Ted was convinced that the original sin of the internet was to strip away the “ownership” part of information and intellectual property.
Blockchains restored this missing connection. As knowledge and transactions were all to be ported to blockchains over the next 30 years, these blockchain registries would serve as the most powerful and valuable databases on earth. They were also public, free and open to anyone. Trent then had a magical epiphany: it wouldn’t be humans drawing the intelligence and insight, it would be AI. The logical users of the eventual thousands of blockchains are AI algorithms, bots and agents.
After 3.5 years on ascribe and then BigchainDB, we launched Ocean as the culmination of our work as pioneers in the crypto and blockchain space. Trent saw that the logical endpoint for all these L0 and L1 blockchains was a set of powerful registries for data and transactions. Ocean was our project to build this bridging technology between the existing world (which was 2017 by now) and the future world where LLMs, agents and other AI tools could scour the world and make sense of it for humans.
January 2024 — AI Revolution in Full SwingChatGPT had been released 14 months prior, in November 2022, launching the AI revolution for consumers and businesses. Internet companies committed hundreds of billions to buy server farms, AI talent was getting scooped up for seven-to-nine figure sums and the pace was accelerating fast. Ocean had been at the forefront on a lonely one-lane road and overnight the highway expanded to an eight-lane freeway with traffic zooming past us.
By that time, Trent and I had been at it for 10 years. We’d built some amazing technologies and moved the space forward with fundamental insights on blockchains, consensus algorithms, token design, and AI primitives on blockchains, with brilliant teammates along the way. We’d launched multiple initiatives with varying degrees of adoption and success. We’d seen a small, vibrant community, “The Ocean Navy,” led by Captain Donnie “BigBags”, emerge around data and AI, bound with a cryptotoken — the $OCEAN token.
We were also feeling the fatigue of managing a large community that incessantly wanted the token price to go up, with expectations of constant product updates, competitions, and future product roadmaps. I myself had been on the startup grind since 2008, having unwisely jumped into blockchain to join Trent immediately after exiting my first startup, without taking any break to reflect and recover. By the beginning of 2024, I was coming out of a deep 2-year burnout where it had been a struggle to just get out of bed and accomplish one or two things of value in a day. After 17 years of unrelenting adrenaline and stress, my body and mind shut down and the spirit demanded payment. The Ocean core team was fabulous, they stepped in and led a lot of the efforts of Ocean.
When January 2024 came around, both Trent and I were in reasonable shape. He and I had a discussion on “What’s next?” with Ocean. We wanted to reconcile the competing demands of product development and the expectations of the Ocean community for the $OCEAN token. Trent and I felt that the AI space was going to be fine with unbridled momentum kicked off with ChatGPT, and that we should consider how Ocean could adapt.
Trent wanted to build hardcore products and services that could have a high impact on the lives of hundreds of people to start — a narrow but deep product, rather than aim for the entire world — broad but shallower. The rest of the Ocean team had been working on several viable hypotheses at varying scales of impact. For me, after 12 years of relentless focus in blockchain, I wanted to explore emerging technologies and novel concepts with less day-to-day operational pressure.
Trent and I looked at supporting the launch of 2–5 person teams as their own self-contained “startups” and then carving out 20% of revenue to plow back into $OCEAN token buybacks. We also bandied about the idea of joining up with another mature project in the crypto-space, where we could merge our token into theirs or vice versa. This had the elegant outcome where both Trent and I could be relieved of the day-to-day pressures, offloading the token and community management, and growing with a larger community.
March 2024 — ASI AllianceIn mid-March, Humayun Sheikh (“Sheikh”) reached out to Trent with an offer to join forces. Fetch and SingularityNet had been in discussions for several months on merging their projects, led and driven by Sheikh.
Even though Fetch and SingularityNet were not Ocean’s first choice for a partnership, and the offer came seemingly out of the blue, I was brought in the next day. Within 5 days, all three parties announced a shotgun marriage between Fetch, SingularityNet and Ocean. To put it bluntly, we, Ocean, had short-circuited our slow-brain with our fast-brain, because we had prepped ourselves for this type of outcome when it appeared, even with candidates that we hadn’t previously considered, and we rationalized it.
24 Mar 2024 Call between Dr. Goertzel & Bruce PonThe terms for Ocean to join the ASI Alliance were the following:
· The Alliance members will be working towards building decentralized AI
· Foundations retain absolute control over their treasuries and wallets
· It is a tokenomic merger only, and all other collaborations or activities would be decided by the independent Foundations.

Sovereign ownership over property is THE core principle of crypto and it was the primary condition of Ocean joining the ASI Alliance. Given that there were two treasuries for the benefit of the Ocean community, a hot wallet managed by Ocean Protocol Foundation for operational expenses and a cold wallet owned and controlled by oceanDAO (the independent 3rd party collective charged with holding $OCEAN passively), we wanted to make sure that sovereign property and autonomy would be respected. In these very first discussions, SingularityNet also acknowledged the existence of the oceanDAO as a separate entity from Ocean. With this understanding of treasury ownership in place, Ocean felt comfortable to move forward. Ocean announced that it was joining the ASI Alliance on 27 March 2024.
April 2024 — A Very Short HoneymoonImmediately after the announcement, cracks started to appear and the commercial understandings that had induced Ocean to enter into the deal started to be violated or proven untrue.
Ben Goertzel of SingularityNet spoke of how Sheikh would change his position on various issues, and of Sheikh’s desire to “pump and dump” a lot of $FET tokens. Ben confided in us that they were very grateful that Ocean could join since their own SingularityNet community would balk at a merger solely with Fetch, citing the strong community skepticism about Sheikh and Fetch.
Immediately after the ASI Alliance was announced, SingularityNet implemented a community vote to mint $100 million worth of $AGIX with the clear intent of selling them down via the token bridge and migration contract, into our newly shared $ASI/$FET liquidity pools.
The community governance voting process was a farce. Fetch is 100% owned and controlled by Sheikh, who holds over 1.2 billion $FET, so any “community” vote was guaranteed to pass. For SingularityNet, the voting was more uncertain, so SingularityNet was forced to massage the messaging to convince major token holders to get on board. Ocean took its own pragmatic approach to community voting with the position that if $OCEAN holders didn’t want $FET, they could sell their $OCEAN and move on. Ocean wanted to keep the “voting” as thin as possible so that declared preferences matched actual preferences.
Mr. David Lake (“Lake”), Board Member of SingularityNet also disclosed that Sheikh treated network decentralization as an inconvenient detail that he didn’t particularly care about and only paid “lip service” to it.
In hindsight this should have been a major red flag.
April 3, 2024 — Lake to PonOcean discovered that the token migration contracts, which SingularityNet had represented as being finished and security audited, were nowhere near finished or security audited.
A combined technology roadmap assessment showed little overlap, and any joint initiatives for Ocean would be “for show” and expediency rather than serving a practical, useful purpose.
The vision of bringing multiple new projects on-board, the vision sold to Ocean for ASI, hit the Wall when Fetch asserted that their L1 chain would retain primacy, so they could keep their $FET token. This meant that only ERC20 tokens could be incorporated into ASI in the future. ASI would not be able to integrate any other L1 chain into the Alliance.
This presented a dilemma for Ocean. Ocean was working closely with Oasis ($ROSE) and had planned on deeper technical integrations on multiple projects. If Ocean’s token was going to become $FET but Ocean’s technology and incentives were on $ROSE, there was an obvious mismatch.
Ocean worked feverishly for three weeks to develop integration plans, migration plans and technology roadmaps that could bridge the mismatch but, in the end, the options were rejected outright.
Summary of Ocean’s Proposal and Technical Analysis that was presented to Fetch and SingularityNETOutside of technology, the Ocean core team were being dragged into meeting-hell with 4–6 meetings a day, sucking up all our capacity to focus on delivering value to the Ocean community. ASI assumed the shape of SingularityNet, which was very administratively heavy and slow.
No one had done proper due diligence. We’d all made a mistake of jumping quickly into a deal.
At the end of April 2024, 1 month after signing the ASI Token Merger Agreement, Ocean asked to be let out of the ASI Alliance. Ocean had ignored the red flags for long enough and wanted to part ways amicably with minimal damage. Ocean drafted a departure announcement that was shared in good faith with Fetch and SingularityNet.
April 25/26 — Sheikh and PonThe next day emails were exchanged, starting with one from Sheikh to myself, threatening Ocean and myself with a lawsuit that would result in “significant damages.”
Believing that Sheikh shared a commitment to the principles of non-coercion and non-compulsion, I responded to say that the escalation path of Sheikh went immediately towards a lawsuit.
Sheikh then accused Ocean of being guilty of compelling and coercing the other parties against their will, and made clear that any public statement about Ocean leaving the ASI Alliance would be met with a lawsuit.
I re-asserted Ocean’s right to join or not join ASI, and asked that the least destructive path be chosen to minimize harm on the Fetch, SingularityNet and Ocean communities.
For Ocean, it was regrettable that we’d jumped into a deal haphazardly. At the same time, Ocean had signed a contract and we valued our word and our promises. We knew that it was a mistake to join ASI, but we’d gotten ourselves into a dilemma. We decided to ask to be let out of the ASI contract.
May 2024 — Legal Dispute Delays the ASI LaunchOcean’s request to be let out of the ASI Alliance was met with fury and aggression, and legal action was initiated against Ocean immediately. Sheikh was apparently petrified of the market reaction and refused to entertain anything other than a full merger.
Over the month of May 2024, with the residual goodwill from initial March merger discussions, I negotiated with David Levy who was representing Fetch, with SingularityNet stuck in the middle trying to play referee and keep everyone happy.
May 2, 2024 — Lake and PonTrent put together an extensive set of technical analyses exploring possible options for all parties to get what they wanted. Fetch wanted a merger while keeping their $FET token. Ocean needed a pathway that wouldn’t obstruct us to integrate with Oasis. SingularityNet wanted everyone to just get along.
By mid-May sufficient progress had been made so that I could lay down a proposal for Ocean to rejoin the ASI initiative.
May 12, 2024 — Pon to SheikhBy May 24, 2024 we were coming close to an agreement.
Given our residual reluctance to continue with the ASI Alliance, Ocean argued for minority rights so that we would not be bullied with administrative resolutions at the ASI level that compelled us to do anything that did not align with our values or priorities.
May 24, 2024 — Pon to LevyDespite Fetch and SingularityNET each (separately) expressing to Ocean concerns that each of the other was liquidating too many tokens too quickly (or had the intention to do so), we strongly reiterated the sacrosanct principle of all crypto, that property in your wallet is Yours. SingularityNet agreed, wanting the option to execute airdrops on the legacy Singularity community if they deemed it useful.
In short:
· Ocean would not interfere with Fetch’s or SingularityNet’s treasury, nor should they interfere with Ocean (or any other token holder).
· Fetch’s, SingularityNET’s and Ocean’s treasuries were sole property of the Foundation entities, and the owning entities had unencumbered, unrestricted rights to do as they wish with their tokens.
oceanDAO, the Ocean community treasury DAO which had been previously acknowledged by SingularityNET in March at the commencement of merger discussions, then came up over multiple discussions with Mr. Levy.
A sticking point in the negotiations appeared when Fetch placed significant pressure to compel Ocean (and oceanDAO) to convert all $OCEAN to $FET immediately after the token bridge was opened. Ocean did not control oceanDAO, and Ocean reiterated forcefully that oceanDAO would make their own decision on the token swap. No one could compel a 3rd party to act one way or the other, but Ocean would give best efforts to socialize the token swap benefits.
In keeping with an ethos of decentralization, Ocean would support any exchange choosing to de-list $OCEAN but Ocean would not forcefully advocate it. Ocean believed that every actor — exchange, token holder, Foundation — should retain their sovereign rights to do as they wish unless contractually obligated.
May 24, 2024 — Pon to LevyAs part of this discussion, Ocean disclosed to Fetch all wallets that it was aware of for both Ocean and the oceanDAO collective. What is clearly notable is that Ocean clearly highlighted to Fetch that OceanDAO was a separate entity, independent from Ocean (i.e. Ocean Protocol Foundation) and that it is not in any way controlled by Ocean.
May 24, 2024 — Pon to Levy (Full disclosure of all Ocean Protocol Foundation and oceanDAO wallets)Fetch applied intense pressure on Ocean to convert all $OCEAN treasury tokens (including oceanDAO treasury tokens) into $FET. In fact, Fetch sought to contractually compel Ocean to do so in the terms of the ASI deal. Ocean refused to agree to this, since, as already made known to Fetch, the oceanDAO was an independent 3rd party.
Finally acknowledging the reality of the oceanDAO as a 3rd party, Fetch.ai agreed to include the following term in the ASI deal:
Ocean “endeavors on best efforts to urge the oceanDAO collective to swap tokens in their custody” into $FET/$ASI as soon as the token bridge was opened, acknowledging that Ocean could not be compelled to force a 3rd party to act.
Being close to a deal, we moved on to the Constitution of the ASI entity (Superintelligence Alliance Ltd). As was clear from the Constitution, the only role of the ASI entity was the assessment and admittance of new Members, and the follow-on instruction to Fetch to mint the necessary tokens to swap out the entire token supply of the newly admitted party.
This negotiated agreement allowed Ocean to preserve its full independence within the ASI Alliance so that it could pursue its own product roadmap based on pragmatism and market demand, rather than fake collaborations within ASI Alliance for marketing “show.” Ocean had fought, over and over again, for the core principle of crypto — each wallet holder has a sole, unencumbered right to their property and tokens to use as they saw fit.
It also allowed Ocean to reject any cost sharing on spending proposals which did not align to Ocean’s needs or priorities, to the significant dismay of Fetch and SingularityNet. They desired that Ocean would pay 1/3 of all ASI expenses that were proposed, even those that were nonsensical or absurd. Ocean’s market cap made up 20% of ASI’s total market cap, so whatever costs were commonly agreed, Ocean would still be paying “more than its fair share” relative to the other two members.
May 24, 2024 — Pon to LevyIn early-June, Ocean, Fetch and SingularityNet struck a deal and agreed to proceed. Fetch made an announcement of the ASI merger moving forward for July 15, 2024.
Ocean reasoned that a protracted legal case would not have helped anyone, that $OCEAN holders would have a good home with $FET, that there were worse outcomes than joining $FET, and that it would relieve the entire Ocean organization of the day-to-day management of community expectations, freeing the Ocean core team to focus on technology and product.
From June 2024, the Ocean team dove in to execute, in support of the ASI Alliance merger. Ocean had technical, marketing and community teams aligned across all three projects. The merger went according to plan, in spite of the earlier hiccups.
Seeing that there would potentially be technology integration amongst the parties moving forward, the oceanDAO announced through a series of blogposts that all $OCEAN rewards programs would be wound down in an orderly manner and that the use of Ocean community rewards would be re-assessed at a later date.
51% treasury for the Ocean community
It’s possible that it was at this juncture that Sheikh mistakenly assumed that the Ocean treasury would be relinquished solely for ASI Alliance purposes. This is what may have led to Sheikh’s many false allegations, libelous claims and misleading statements that Ocean somehow “stole” ASI community funds when, throughout the entire process, Ocean has made forceful, consistent assertions for treasury sovereignty.
Meanwhile, the operational delay had somewhat dampened the enthusiasm in the market for the merger. SingularityNet conveyed to Ocean that this had likely prevented Sheikh from using the originally anticipated hype and increased liquidity to exit large portions of his $FET position with a huge profit for himself. As it turned out, Ocean’s hesitation, driven by valid commercial concerns, may have inadvertently protected the entire ASI community by taking Sheikh’s planned liquidation window away.
In spite of any earlier bad blood, I sent Sheikh a private note immediately upon hearing that his father was gravely ill.
June 10, 2024 — Pon to Sheikh June 2024 — Re-Cap Contractual Obligations of the ASI AllianceTo take a quick step back, the obligations for the ASI Alliance were the following:
· Fetch would mint 610.8m $FET to swap out all Ocean holders at a rate of 0.433226 $FET / $OCEAN
· Fetch would inject 610.8m $FET into the token bridge and migration contract so every $OCEAN token holder could swap their $OCEAN for $FET.

In exchange, Ocean would:

· Swap a minimum of 4m $OCEAN to $FET (Ocean Protocol Foundation only had 25m $OCEAN, of which 20m $OCEAN were locked with GSR)
· Support exchanges in the swap from $OCEAN to $FET
· Join the to-be established ASI Alliance entity (Superintelligence Alliance Ltd).

When the merger happened in July 2024, Fetch.ai injected 500m $FET each into the migration contracts for $AGIX and $OCEAN, leaving a shortfall of 110.8m $FET which Ocean assumed would be injected later when the migration contract ran low.
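A quick sanity check of these figures, using only the numbers stated in this post (the 610.8m mint, the 0.433226 swap rate, the 500m actually injected, and the 661m $OCEAN later converted by oceanDAO / OE):

```python
# Sanity check of the figures quoted above (all numbers taken from this post).
swap_rate = 0.433226                      # $FET per $OCEAN
fet_minted = 610.8e6                      # $FET Fetch committed to mint
fet_injected = 500e6                      # $FET actually injected in July 2024

print(fet_minted - fet_injected)          # ~110.8m $FET shortfall, as stated
print(fet_minted / swap_rate / 1e9)       # ~1.41 billion $OCEAN covered at the quoted rate
print(661e6 * swap_rate / 1e6)            # ~286.4m $FET for the 661m $OCEAN converted in July 2025
```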
With the merger completed, Ocean set about to focus on product development and technology, eschewing many of the outrageous marketing and administrative initiatives proposed by Fetch and SingularityNet.
July 17, 2024 — Pon to Lake and LevyThis singular product focus continued until Ocean’s eventual departure from the ASI Alliance in October 2025.
August 2024 — Cudos Admittance into ASI AllianceIn August 2024, Fetch had prepared a dossier to admit Cudos into the ASI Alliance. The dossier was relatively sparse and missed many key pertinent technical details. Trent had many questions about Cudos’ level of decentralization, which was supposedly one of the key objectives of the ASI Alliance, and whether Cudos’ service was both a cultural and technical fit within the Alliance. During the 2h Board meeting, it got heated when Sheikh made clear that he regarded decentralization as some “rubbish, unknown concept”.
The vote on Cudos proceeded. I voted for Cudos to try to maintain good relations with the others while Trent rightfully voiced his dissatisfaction with the compromise on decentralization principles. The resolution passed 5 of 6 when Fetch and Singularity both unanimously voted “Yes” for entry of Cudos.
The Cudos community “vote” proceeded. Even before the results had been publicly announced on 27 Sep 2024, Fetch.ai had minted the Cudos token allotment, and then sent the $FET to the migration contract to swap out $CUDOS token holders.
December 2024 — SingularityNET’s Spending, Declining $FET Token Price and the Ocean community treasuryBy December 2024, many of the ASI and Ocean communities had identified large flows of $AGIX and $FET tokens from the SingularityNet treasury wallets. At the start of the ASI Alliance, Ocean ignored the red flag signals from SingularityNet on their undisciplined spending that was untethered to reality.
Dr. Goertzel was hellbent on competing with the big boys of AI, who were deploying $1 trillion in CapEx. Meanwhile Dr. Goertzel apparently thought that a $100m buy of GPUs could make a difference. As part of this desire to “keep up” with OpenAI, X and others, SingularityNet had a headcount of over 300 people. Their fixed burn rate of $6 million per month exceeded the annual burn rate of the Fetch and Ocean teams combined. This was, in Ocean’s view, unsustainable.
The results were clear as day in the $FET token price chart. From a peak of $3.22/$FET when the ASI Alliance was announced, the token price had dropped to $1.25 by end December 2024. Ocean had not sold, or caused to be sold, any $FET tokens.
Based on independent research, it appears that Fetch.ai also sold or distributed tokens to the tune of 390 million $FET worth $314 million from March 2024 until October 2025:
Further research shows a strong correlation between Fetch liquidations and injections into companies controlled by Sheikh in the UK.
All excess liquidity and buy-demand for $FET was sucked out through SingularityNet’s $6 million per month (or more) burn rate and Fetch’s liquidations with a large portion likely going into Sheikh controlled companies. As a result, the entire ASI community suffered, as $FET underperformed virtually every other AI-crypto token, save one. $PAAL had the unfortunate luck to get tangled up with the ASI Alliance, and through the failed token merger attempt, lost their community’s trust and support, earning the unenviable honour of the worst performing AI-crypto token this past year.
SingularityNet was harming all of ASI due to their out-of-control spending and Fetch’s massive sell-downs compounded the strong negative price pressure.
As painful as it was, Ocean held back from castigating SingularityNet, as one of the core principles of crypto is that a wallet holder fully controls their assets. Ocean kept to that principle, believing that it would likewise apply to any assets controlled by Ocean or oceanDAO. We kept our heads down and maintained strict fiscal discipline.
For the record, from March 2024 until July 2025, a period of 16 months, neither Ocean nor oceanDAO liquidated ANY $FET or $OCEAN into the market, other than for the issuance of community grants, operational obligations and market making to ensure liquid and functioning markets. Ocean had lived through too many bear markets to be undisciplined in spending. Ocean kept budgets tight, assessed every expense regularly and gave respect to the liquidity pools generated by organic demand from token holders and traders.
Contrast this financial discipline with the records which now seem to be coming out. Between SingularityNet and Fetch, approximately $500 million was sent to exchange accounts on the Ethereum, BSC and Cardano blockchains, with huge amounts apparently being liquidated for injection into Sheikh’s personal companies or being sent for custody as part of the TRNR deal (see below). This was money coming from the pockets of all the ASI token holders.
January 2025 — oceanDAO Shifts from a Passive to an Active Token HolderIn January 2025, questions arose from the oceanDAO, whether it would be prudent to explore options to preserve the Ocean community treasury’s value. In light of a $FET price that was clearly declining at a faster rate relative to other AI-crypto tokens, something had to be done.
Since 2021, when the custodianship of oceanDAO had been formally and legally transferred from the Ocean Protocol Foundation, the oceanDAO had held all assets passively. In June 2023, the oceanDAO minted the remaining 51% of the $OCEAN supply and kept them fully under control of a multisig without any activity until July 2025, to minimize any potential tax liabilities on the collective. I was one of seven keyholders.
To put any false allegations to bed: the $OCEAN held by oceanDAO is for the sole benefit of the Ocean community and no one else. It doesn’t matter if Sheikh makes claims based on an alternative reality hundreds of times, or if these claims are repeated by his sycophants — the truth is that the $OCEAN / $FET owned by oceanDAO is for the benefit of the Ocean community.
May 2025 — oceanDAO Establishes in CaymanThe realization that SingularityNet (and, as it now turns out, Fetch) was draining liquidity and creating a consistent negative price impact on the community spurred the oceanDAO to investigate what could be done to diversify the Ocean community treasury out of the passively held $OCEAN which was pegged to $FET.
The oceanDAO collective realized it had to actively manage the Ocean community treasury to protect Ocean community interests, especially as the DeFi landscape had matured significantly over the years and now offered attractive yields. Lawyers, accountants and auditors were engaged to survey suitable jurisdictions for this purpose — Singapore, Dubai, Switzerland, offshore Islands. In the end, the oceanDAO decided on Cayman.
Cayman offered several unique advantages for DAOs. Cayman law permits the creation of entities which could avoid giving Founders or those close to the project any legal claim on community assets, ensuring that the work of the entity would be solely deployed for the Ocean community. One quarter of all DAOs choose Cayman as their place to establish, including SingularityNet.
By June 2025, a Cayman trust was established on behalf of the oceanDAO collective for the benefit of the Ocean community. This new entity became known as Ocean Expeditions (OE). oceanDAO transferred its assets to the OE entity and the passively held $OCEAN were converted to $FET. OE could now execute an active management of the treasury. As it happened, Fetch.ai had in fact gotten what it wanted, namely, for oceanDAO to convert its entire treasury of 661 million $OCEAN into $FET tokens.
Contrary to what Sheikh has been insinuating, Ocean does not control OE. Whilst I am the sole director of OE, I remain only one of several keyholders, all of whom entered into a legally binding instrument to act for the collective benefit of the Ocean community.
June 2025 — Fetch’s TRNR “ISI” DealUnbeknownst to Ocean or oceanDAO, in parallel, Fetch.ai UK had been working on an ETF deal with Interactive Strength Inc (ISI), aka the “TRNR Deal”.
Neither Ocean nor oceanDAO (or subsequently OE) had any prior knowledge, involvement or awareness of this. In fact, “Ocean” is not mentioned even once in the SEC filings. Consistent with the original understanding that each Foundation had sole control of their treasuries, Ocean was not consulted by Fetch.
I don't have the full details, and I encourage the ASI community to inquire further, but the mid-June TRNR deal seems to have committed Fetch to supplying a $50 million cash loan for DWF and ISI, and $100 million in tokens (125m $FET) as a backstop to be custodied with BitGo.
SingularityNet told Ocean that they were strong-armed by Fetch.ai into putting in $15 million in cash for this deal, but were not named in any of the filings. The strike price for the deal was around $0.80 per $FET, and the backstop would kick in if $FET dropped to $0.45, essentially betting that $FET would never fall by 45%.
However, this ignored the fact that crypto can fall 90% in bear markets or flash crashes. The TRNR deal not only put Fetch.ai's assets at risk if the collateral was called; the 125m $FET would be liquidated as well, causing significant harm to the entire ASI community.
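To make the risk concrete, here is a rough arithmetic sketch in Python using the approximate figures quoted above (125m $FET of collateral, a roughly $0.80 reference price, a $0.45 backstop trigger). The exact deal terms are not public in full, so both the numbers and the simple trigger logic are assumptions for illustration only.

```python
# Rough sketch of the collateral arithmetic described above. The figures
# ($0.80 reference price, $0.45 trigger, 125M FET) are the approximate numbers
# quoted in this post, not confirmed deal terms.

COLLATERAL_FET = 125_000_000
REFERENCE_PRICE = 0.80          # approximate price used to size the ~$100M token backstop
BACKSTOP_TRIGGER_PRICE = 0.45   # approximate level at which the backstop kicks in

def collateral_value(fet_price: float) -> float:
    """Market value of the FET collateral at a given price."""
    return COLLATERAL_FET * fet_price

def backstop_triggered(fet_price: float) -> bool:
    """True once FET trades at or below the assumed trigger level."""
    return fet_price <= BACKSTOP_TRIGGER_PRICE

for price in (0.80, 0.55, 0.45, 0.40, 0.32):
    print(f"FET at ${price:.2f}: collateral worth ${collateral_value(price)/1e6:,.0f}M, "
          f"backstop triggered: {backstop_triggered(price)}")
```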
Well, four months later, that's exactly what happened. On the night of Oct 10, 2025, Trump announced tariffs on China, sending the crypto market into chaos. Many tokens saw a temporary drawdown of 95% before recovering to roughly two-thirds of their valuation from the day before. One week later, on Oct 17, further crypto-market aftershocks occurred with another round of smaller liquidations.
Again, I don't have all the details, but it appears that large portions of the $FET custodied with BitGo were liquidated, causing a drop in the $FET price from $0.40 down to $0.32.
Oct 12, 2025 — Artificial Superintelligence Alliance Telegram Chat
The ASI and Fetch community should be asking Fetch.ai some hard questions, such as: why would Fetch.ai sign such a reckless and disastrous deal? They should ask for full transparency on the TRNR deal, with clear numbers on the amounts loaned, the $FET used as collateral, and the risk assessment of the negative price impact on $FET if the collateral were called and liquidated by force.
June 2025 — oceanDAO becomes Ocean Expeditions
Two weeks after the TRNR deal was announced, OE received its business incorporation papers in Cayman, and assets from oceanDAO could be immediately transferred over to the OE entity.
The timing of OE's incorporation was totally unrelated to Fetch's TRNR deal and had in fact been in the works long before the TRNR deal was announced. OE's strategy to actively manage the Ocean community treasury was developed completely independently of Fetch's TRNR deal; remember, Ocean was never informed of anything except for a heads-up on the main press release a few days before publication.
OE had few options with the $OCEAN it held because (contrary to recent assertions) Fetch.ai had mandated a one-way transfer from $OCEAN to $FET in the June 2024 deal for Ocean to re-engage with the ASI Alliance. By this time, most exchanges had de-listed $OCEAN, which closed off virtually all liquidity avenues. As a result, $OCEAN lost 90% of its liquidity and exchange pairs.
OE had only one way out and that was to convert $OCEAN to $FET. This was consistent with the ASI deal. It was Fetch.ai that wanted Ocean to compel oceanDAO to convert $OCEAN to $FET as part of the ASI deal.
On July 1 2025, all 661m $OCEAN held by OE in the Ocean community wallet were converted.
Completely unbeknownst to Ocean and to OE, Sheikh viewed OE's treasury activities not as support for his $FET token but as sabotage of his TRNR plans.
But recall, OE had no idea about the details of the deal. Neither OE, nor Ocean, was a party to the deal in any way. I found out like everyone else via a press release on June 11 that the deal had closed and I promptly ignored it to focus on Ocean’s strategy, products and technology.
Sticking to the principle that each Foundation works in its own manner for the benefit of the ASI community, Ocean didn’t feel the need to demand any restrictions on Fetch.ai nor to delve into any documents. Personally, I didn’t even read the SEC filings until September, in the course of the ongoing legal proceedings to understand the allegations being made against Ocean. The TRNR deal was solely a Fetch.ai matter.
June 2025 — ASI Alliance Financials
As an aside, I had been driving the effort to keep the books of the ASI Alliance properly up to date.
Sheikh was insistent that Fetch be reimbursed by the other Members for its financial outlays, assuming that the other ASI members had spent less than Fetch.ai. When Sheikh found out that it was actually Ocean, the smallest member, that had contributed the most money to commonly agreed expenditures, and that SingularityNet and Fetch.ai would therefore owe Ocean money, the complaint was dropped.
Instead, Sheikh tried another tactic to offload expenses.
SingularityNet and Ocean had signed off on the 2024 financial statements for the ASI Alliance. However, the financials were delayed by Fetch.ai. Sheikh wanted to load up the balance sheet of the ASI Alliance with debt obligations based on the spending of the member Foundations.
June 20, 2025 — Pon to Sheikh
Fetch's insistence ran against the agreement made at the founding of ASI that each Member would spend and direct its efforts on ASI initiatives of its own choosing and volition, and that the books of the ASI Alliance would be kept clean and simple. This was especially prudent as the ASI Alliance had no income or assets.
After a 6-week delay and back and forth discussions, in mid-August we finally got Fetch.ai’s agreement to move forward by deferring the conversation on cost sharing to the following year.
This incident stuck in my mind as an enormous red flag: these accounting practices hinted at the tactics that Sheikh may consider a normal way of doing business. Ocean strongly disagrees and does not find such methods prudent.
July 2025 — Ocean Expeditions Sets Out to Diversify the Ocean community Treasury
On July 3, Ocean Expeditions (OE) sent 34 million $FET to a reputable market maker for mid-dated options with sell limits set at $0.75–0.95, so OE could earn premiums while allowing for the liquidation of $FET if the price was above those levels at option expiry.
This sort of option strategy is a standard approach to treasury management that is ethical, responsible and benefits token holders by maintaining relative price stability. The options only execute and trigger a sale if, upon maturity, the $FET price is higher than the strike price. If at maturity the $FET price is lower than the strike price, the options expire unexercised while still allowing OE to earn premiums, benefiting the Ocean community.
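For readers unfamiliar with this structure, the exercise logic can be sketched in a few lines of Python. The strike and premium numbers below are illustrative assumptions, not the actual contract terms; the point is simply that tokens are only sold, at the strike, when the market price at expiry is higher, and the premium is earned either way.

```python
# Minimal sketch of the call-option exercise logic described above.
# Strike and premium values are illustrative assumptions, not actual terms.

def option_outcome(price_at_expiry: float, strike: float,
                   tokens: float, premium_per_token: float) -> dict:
    """What the treasury receives from one covered-call-style position at expiry."""
    exercised = price_at_expiry > strike
    sale_proceeds = tokens * strike if exercised else 0.0  # tokens sold at the strike
    premium = tokens * premium_per_token                   # earned whether or not exercised
    return {"exercised": exercised, "sale_proceeds": sale_proceeds, "premium": premium}

# Illustrative: 10M FET written at a $0.80 strike for an assumed $0.02/token premium.
for expiry_price in (0.60, 0.80, 0.95):
    print(expiry_price, option_outcome(expiry_price, strike=0.80,
                                       tokens=10_000_000, premium_per_token=0.02))
```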
Insinuations that these transactions were a form of “token dumping” are nonsensical and misinformed. OE was simply managing the community treasury.
On July 14, a further 56 million $FET was sent out as part of the same treasury strategy with strikes set at $0.70-$1.05.
These option transactions did lead to a responsible liquidation of $18.2 million worth of $FET on July 21, one that accorded with market demand and did not depress the $FET price. Further, this was 6 weeks after the TRNR deal was announced. From July 21 until Ocean’s exit from the ASI Alliance on Oct 9, 2025, there were no further liquidations of $FET save for one small tranche that raised $2.2m.
In total, Ocean Expeditions raised $22.4 million for the Ocean community, a significantly smaller sum compared to the estimated $500 million of liquidations by the other ASI members.
August 2025 — Ocean Requests a Refill of the $OCEAN/$FET Token Migration Contract
Around this time, Ocean realized that the $OCEAN/$FET token migration contract was running perilously low. The migration contract was supposed to cover over 270 million $OCEAN to be converted by 37,500 token holders, but only 7 million $FET were left in the migration contract.
On July 22, Ocean requested that Fetch top up the migration contract with 50m $FET, without response. Another email was sent to Sheikh on July 29, to which he replied "will work on it." Sheikh asked for a call on Aug 1, where he agreed to top up the migration contract with the remaining tokens. On Aug 5, I wrote an email to Fetch and Sheikh with a formal request for a top-up, while confirming that all wallets were secured for the Ocean community.
I sent a final note to Sheikh on August 12 asking why the promised top-up had not yet occurred.
August 2025 — A Conspiracy To Force Ocean to Submit
Starting August 12, Fetch.ai and SingularityNet actively conspired against Ocean. Without allowing Ocean's directors to vote on the matter (on the grounds that Ocean's directors were purportedly "conflicted"), Fetch's and SingularityNet's directors on the ASI Alliance unilaterally attempted to pass a resolution to close the $OCEAN-$FET token bridge. This action clearly violated the ASI Constitution which mandated a unanimous agreement by all directors for any ASI Alliance actions.
On August 13, Mario Casiraghi, SingularityNET’s CFO, issued the following email:
The next day on August 14, I received this message from Lake:
(In this note, Lake acknowledged that Sheikh’s original plans to dump the ASI Alliance were still in place, albeit potentially at an accelerated pace).
Ocean objected forcefully, citing the need to protect the ASI and Ocean communities, and pleading to keep the matter private and contained.
August 19, 2025 — Pon, Dr. Goertzel, Mario Casiraghi
At this point, I highlighted the obvious hypocrisy of SingularityNet and Fetch.
SingularityNet and Fetch had moved $500 million worth of $FET, sucking out excess liquidity from all token holders. All the while, Ocean held its tongue and maintained fiscal discipline.
Yet the very first time oceanDAO/Ocean Expeditions actually liquidated any $FET tokens, Ocean was accused of malicious intent and of exercising control over oceanDAO/OE, and was called to task. Fetch had accused the wrong entity, Ocean, for the actions of a wholly separate third party, and jumped to completely false conclusions about the motives.
The improper ASI Alliance Directors’ actions violated the core principle of the ASI Alliance that crypto-property was to be solely directed by each Foundation. Additional clauses with demands for transparency, something neither Fetch.ai nor SingularityNet had ever offered or provided themselves, were included to further try to hamper and limit Ocean Protocol Foundation.
The only authority of the ASI Alliance and the Board, as defined in the ASI Constitution, was to vote on accepting new members and then minting the appropriate tokens for a swap-out. There was no authority, power or mandate to sanction any Member Foundation.
Any and all other actions needed a unanimous decision from the Board and Member Foundations. This illegal action was exactly what Ocean was so concerned about in the May 2024 “re-joining” discussions — the potential for the bullying and harassment of Ocean as the weakest and smallest member of the ASI Alliance.
Finally, seeing the clear intent to close the token bridge and the active measures to harm 37,600 innocent $OCEAN token holders, Ocean needed to act.
Ocean immediately initiated legal action to protect Ocean and ASI users on August 15, 2025. This remains ongoing.
Within hours of Ocean’s filing, Fetch responded with a lengthy countersuit against Ocean accompanied with witness statements and voluminous exhibits. This showed that Fetch had for weeks been planning on commencing a lawsuit against Ocean and instructing lawyers behind the scenes. On August 19, Ocean also received a DocuSign from SingularityNet’s lawyer. This contained the resolution which Fetch and Singularity attempted to pass without the Ocean-appointed directors, i.e. myself and Trent.
On August 22, by consent, parties agreed to an order to maintain confidentiality during the legal process, and out of respect for the process, Ocean refrained from communicating with any 3rd parties, including OE who was not a party to the dispute or the proceedings. It is also the reason why Ocean has, until now, refrained from litigating this dispute in public.
October 2025 — Ocean Exits the ASI Alliance
As the legal proceedings carried on, and evidence was provided from August until late September, it was clear that Ocean could no longer be a part of the ASI Alliance.
The only question was when to exit?
Ocean was confident that the evidence and facts presented to the adjudicator would prove its case and vindicate it, so Ocean wanted the adjudicator to forcefully make an assessment.
Once the adjudicator issued his findings (which Ocean has proposed to waive confidentiality over and release to the community so the community can see the truth for themselves, but which Fetch has refused to agree to), Ocean decided that it was time to leave the ASI Alliance.
The 18-month ordeal was too much to bear.
From the violation of the original agreements on the principles of decentralization, to the encroachment on both Ocean and Ocean Expedition treasuries, while watching SingularityNet and Fetch disregard and pilfer the community for their own priorities, Ocean knew that it needed out.
Ocean couldn’t save ASI, but could try to salvage something for the Ocean community.
SingularityNet and Fetch used their treasuries recklessly as they saw fit, without regard or consideration of the impacts to the ASI community.
From Fetch’s over-reaction the first time Ocean wanted to bow out amicably, Ocean knew that additional legal challenges and attempts to block Ocean from leaving could be expected.
Ocean has only tried to build decentralized AI products, exert strict fiscal discipline, collaborate in good faith and protect the ASI and Ocean communities as best as we can.
As of Oct. 9, Ocean Expeditions retained the vast majority of the $FET that were converted from $OCEAN. All tokens held by Ocean Expeditions are its property, and will be used solely for the benefit of the Ocean community. They are not controlled by Ocean, or by me.
Summary
$FET dropped from a peak of $3.22 at the time of the ASI Alliance announcement to $0.235 today, a 93% drop. Fetch and SingularityNet have tried to convince the community that this was all a result of Ocean leaving the ASI Alliance, but that is untrue.
Ocean announced its withdrawal from the ASI Alliance on Oct 9 in a fully amicable manner, without pointing fingers, to minimize any potential fallout. Even eight hours after Ocean's announcement, the price of $FET had only fallen marginally, from $0.55 to $0.53. In other words, Sheikh is blaming Ocean for a problem that has little to do with anything Ocean has done.
Price Chart: "1h-Candles" on $FET at the time of the Ocean withdrawal
The Oct 10/11 crypto flash crash triggered by Trump's China tariff announcement took the entire market down, and $FET fell to $0.11 before recovering to $0.40.
On the evening of Oct 12, a further decline in $FET came when the TRNR collateral was called and started to be liquidated. This event brought $FET down to $0.32. This was the ill-conceived deal entered into by Fetch.ai, which apparently ignored the extreme volatility of crypto markets and caused unnecessary damage to the entire ASI community.
Meanwhile, in the general crypto market, a second aftershock of liquidations happened around Oct 17.
Added to this were Fetch and Sheikh's attempts to denigrate Ocean, which damaged their own $FET token as the allegations became more and more ludicrous and the narrative attacks started to contradict themselves.
In short, the -93% drop in $FET from 27 March 2024 until 19 October 2025 was due to the broader market sentiment and volatility, SingularityNet and Fetch’s draining of liquidity from the entire community by dumping upwards of $500 million worth of $FET tokens, a reckless TRNR deal that failed to anticipate crypto dropping more than 45% and wiping out $150 million in cash and tokens, and Fetch.ai’s FUDing of its own project, bringing disrepute on itself when Ocean decided that it could not in good conscience remain a part of the ASI Alliance.
X Post: @cryptorinweb3 — https://x.com/cryptorinweb3/status/1980644944256930202
I'm not going to say whose fault I think the drop in $FET's price is, but I can with very high confidence say it has next to nothing to do with Ocean leaving the ASI Alliance.
I hope that the Fetch and SingularityNet communities ask for full financial transparency on the spending of the respective Fetch.ai and SingularityNet Companies and Foundations.
I would also like to sincerely thank the Ocean community, trustees and the public for their patience and support during Ocean’s radio silence in respect of the legal processes.
The ASI Alliance from Ocean’s Perspective was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
Luxembourg & Zug – 23 October 2025 – AMINA Bank AG (“AMINA”), a Swiss Financial Market Supervisory Authority (FINMA)-regulated crypto bank with global reach, has entered into a collaboration agreement with Tokeny, (an Apex Group company), the leading onchain finance operating system, to create a regulated banking bridge for institutional tokenisation. This strategic collaboration addresses critical institutional bottlenecks by applying Swiss banking standards to blockchain innovation.
Through this agreement, AMINA Bank will deliver regulated banking and custody for underlying assets such as government bonds, corporate securities, treasury bills, and other traditional financial instruments, while Tokeny provides the tokenisation platform. AMINA’s extensive crypto and stablecoin offering also enables clients to seamlessly move on and off chain.
"Market demand for tokenisation is coming from the open blockchain ecosystems, and institutions need a compliant and scalable way to meet it. By integrating AMINA Bank's regulated banking and custody framework with Tokeny's orchestrated tokenisation infrastructure, we provide financial institutions with a fast, seamless, and secure path to market."
Luc Falempin, CEO of Tokeny and Head of Product for Apex Digital
The tokenised assets market is experiencing explosive growth, with major institutions, including JP Morgan and BlackRock, leading adoption of blockchain-based financial products. This momentum is supported by accelerating regulatory clarity across the globe, from the US GENIUS Act to Hong Kong's ASPIRe framework.
The collaboration leverages AMINA’s regulated banking infrastructure alongside Tokeny’s proven tokenisation expertise. AMINA provides Swiss banking-standard custody and compliance, while Tokeny contributes first-mover tokenisation technology and an enterprise-grade platform that has powered over 120 use cases and billions of dollars in assets. It has recently been acquired by Apex Group, a global financial services provider with $3.5 trillion in assets under administration.
"In the past year, there's been increased demand from our institutional clients for compliant access to tokenised assets on public blockchains. Tokenised entities still face critical challenges such as setting up banking and custody solutions. There's a lack of orchestrated infrastructure that connects with legacy systems. My priority is delivering this innovation through the safest, most regulated pathway possible, and we're excited to partner with Tokeny to make this happen."
Myles Harrison, Chief Product Officer at AMINA Bank
The combined solution offers financial institutions end-to-end tokenisation capability with fast time-to-market measured in weeks. Starting with traditional financial instruments where institutional demand is focused, the collaboration agreement establishes the regulated infrastructure foundation for future expansion into asset classes where tokenisation can deliver greater utility.
Tokeny's platform leverages the ERC-3643 standard for compliant tokenisation. The standard is built on top of ERC-20 with a compliance layer, ensuring interoperability with the broader DeFi ecosystem. This means that, even within an open blockchain ecosystem, only authorised investors can hold and transfer tokenised assets, while maintaining issuer control and automated regulatory compliance.
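ERC-3643 itself is specified as Solidity smart-contract interfaces; the sketch below uses Python only to illustrate the core idea of an ERC-20-style transfer gated by an identity registry. The class and method names are illustrative assumptions, not the standard's actual interface.

```python
# Conceptual sketch (in Python, not Solidity) of the ERC-3643 idea described above:
# an ERC-20-style transfer gated by an identity/compliance check.
# Class and method names are illustrative, not the standard's actual interface.

class IdentityRegistry:
    """Tracks which addresses belong to verified, authorised investors."""
    def __init__(self):
        self._verified: set[str] = set()

    def register(self, address: str) -> None:
        self._verified.add(address)

    def is_verified(self, address: str) -> bool:
        return address in self._verified

class PermissionedToken:
    """ERC-20-like balances plus a compliance layer on every transfer."""
    def __init__(self, registry: IdentityRegistry):
        self.registry = registry
        self.balances: dict[str, int] = {}

    def mint(self, to: str, amount: int) -> None:
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        # Compliance layer: both parties must be verified investors.
        if not (self.registry.is_verified(sender) and self.registry.is_verified(recipient)):
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

registry = IdentityRegistry()
registry.register("alice")                 # verified investor
token = PermissionedToken(registry)
token.mint("alice", 100)
print(token.transfer("alice", "bob", 50))  # False: "bob" is not a verified investor
```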
“The future of finance is open, and institutions now have the tools to take full advantage, without compromising on compliance, security, or operational efficiency,” added Falempin.
About Tokeny
Tokeny is a leading onchain finance platform and part of Apex Group, a global financial services provider with $3.5 trillion in assets under administration and over 13,000 professionals across 52 countries. With seven years of proven experience, Tokeny provides financial institutions with the technical tools to represent assets on the blockchain securely and compliantly without facing complex technical hurdles. Institutions can issue, manage, and distribute securities fully onchain, benefiting from faster transfers, lower costs, and broader distribution. Investors enjoy instant settlement, peer-to-peer transferability, and access to a growing ecosystem of tokenized assets and DeFi services. From opening new distribution channels to reducing operational friction, Tokeny enables institutions to modernize how assets move and go to market faster, without needing to be blockchain experts.
Website | LinkedIn | X/Twitter
About AMINA – Crypto. Banking. Simplified.
Founded in April 2018 and established in Zug (Switzerland), AMINA Bank AG is a pioneer in the crypto banking industry. In August 2019, AMINA Bank AG received the Swiss Banking and Securities Dealer License from the Swiss Financial Market Supervisory Authority ("FINMA"). In February 2022, AMINA Bank AG, Abu Dhabi Global Markets ("ADGM") Branch received Financial Services Permission from the Financial Services Regulatory Authority ("FSRA") of ADGM. In November 2023, AMINA (Hong Kong) Limited received its Type 1, Type 4 and Type 9 licenses from the Securities and Futures Commission ("SFC").
To learn more about AMINA, visit aminagroup.com
The post Apex Group’s Tokeny & AMINA Bank combine tokenisation innovation with regulated banking appeared first on Tokeny.
In this episode of The Future of Identity Podcast, I’m joined by Chris Goh, former National Harmonisation Lead for Australia’s mobile driver’s licenses (mDLs) and the architect behind Queensland’s digital driver’s license. Chris played a pivotal role in driving national alignment across states and territories, culminating in the 2024 agreement to adopt ISO mDoc/mDL standards for mobile driver’s licenses and photo IDs across Australia and New Zealand.
Our conversation dives into Australia’s path from early blockchain experiments to a unified, standards-based approach - one that balances innovation, security, and accessibility. Chris shares lessons from real-world deployments, cultural challenges like “flash passes,” and how both Australia and New Zealand are building digital ID ecosystems ready for global interoperability.
In this episode we explore:
Why mDoc became the foundation: Offline + online verification, PKI-based trust, and modular architecture enabling scalable, interoperable credentials.
From Hyperledger to harmony: Lessons from early decentralized trials and how certification and conformance reduce fragmentation.
Balancing innovation and standardization: Why agility and stability must coexist to keep identity ecosystems moving forward.
The cultural realities of adoption: How flash passes, retail constraints, and public education shaped Australia's rollout strategy.
The road ahead: How national trust lists, privacy "contracts," and delegated authority could define the next phase of digital identity in the region.
This episode is essential listening for anyone building or implementing digital credentials, whether you're a policymaker, issuer, verifier, or technology provider. Chris offers a clear, grounded perspective on what it really takes to move from pilots to national-scale digital identity infrastructure.
Enjoy the episode, and don’t forget to share it with others who are passionate about the future of identity!
Learn more about Valid8.
Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.
Listen to the full episode on Apple Podcasts or Spotify, or find all ways to listen at trinsic.id/podcast.
“If AI agents can now behave like humans well enough to pass CAPTCHA, we’re no longer dealing with bots we’re dealing with synthetic users. That creates real risk.”
The above statement, by Marcom Strategic Advisor and Investor Katherine Kennedy-White, in response to a LinkedIn post by Veracity’s CEO Nigel Bridges, shows the real concern over how sophisticated AI agents and bots are becoming.
The post How do we deal with “synthetic users” accessing our data? appeared first on Veracity Trust Network.
By Helen Garneau
Identity fraud is rising around the world, and travelers are starting to lose confidence that airlines and hotels can keep their personal data safe. More stories are coming out about new kinds of scams that use AI-generated deepfakes, fake documents, and other digital tricks to fool identity systems. As airports and airlines depend more on facial recognition and other biometric tools, the risk of these attacks becomes a serious threat to the entire travel experience.
Think of how this plays out in real life. A thief uses a stolen credit card to buy an airline ticket and checks in with a forged passport. An impostor calls into an airline call center with a stolen password, takes over the victim’s account, and steals their miles. A criminal walks through a border checkpoint using false biometrics. Each case happens because identity cannot be verified in real time, directly from the traveler.
Digital Travel Credentials fix this problem.
A Digital Travel Credential — DTC — is a secure, digital version of a passport that aligns with specifications outlined by the International Civil Aviation Organization, the global body responsible for standardizing physical passports.
Currently, there are two types of implementable DTCs: One issued by a government along with a physical passport (DTC-2), or one issued by an organization, such as an airline or hotel, by way of data derived from a valid passport and biometrically authenticated against the rightful passport holder (DTC-1).
The data in each DTC is digitally signed, which provides a way to cryptographically prove its origin (who issued it) and that it hasn’t been tampered with. The credential is held by the passport holder in a digital wallet on their mobile device, which provides two additional layers of binding (biometric/code to first unlock the device and then the wallet).
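As a minimal sketch of that sign-and-verify property, the Python snippet below uses an Ed25519 key from the `cryptography` package. Real DTCs follow the ICAO specifications and a country signing-key PKI, so treat the data format and key handling here as illustrative assumptions rather than the actual credential structure.

```python
# Minimal sketch of the "digitally signed, tamper-evident" property described above,
# using an Ed25519 key from the `cryptography` package. Real DTCs follow ICAO
# specifications and a country signing-key PKI; this only illustrates the principle.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()   # stands in for the issuing authority's key
public_key = issuer_key.public_key()        # what verifiers hold

credential = b'{"doc_type":"DTC-1","surname":"DOE","expiry":"2030-01-01"}'
signature = issuer_key.sign(credential)

def verify(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(verify(credential, signature))                          # True: untampered data
print(verify(credential.replace(b"DOE", b"EVE"), signature))  # False: any edit breaks the signature
```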
Here's what makes the DTC a deepfake buster
First, you can't re-engineer the cryptography using AI to alter the data. Second, each person is able to carry an authenticated biometric with them for verification. It's like having a second you to prove who you are. The biometric template in the credential can be automatically compared with a liveness check, so authentication is not only instant, it doesn't require the verifying party to store biometric data.
The DTC completely transforms identity verification and fraud prevention in one go.
The upshot is that identity authentication no longer needs usernames, passwords, centralized data storage, multifactor authentication, or increasingly complex and expensive layers of security; instead, customers hold their data and present it for seamless, cryptographic authentication, which can be done anywhere using simple mobile software.
Their data is protected, you’re protected, and your operations can be automated and streamlined for better customer experiences and reduced cost.
The easy switch for implementing DTC credentials
Indicio Proven® is the most advanced market solution for issuing, holding, and verifying interoperable DTC-1 and DTC-2 aligned credentials, with options for the leading three Verifiable Credential formats, SD JWT VC, AnonCreds, and mDL.
Proven is equipped with advanced biometric and document authentication tools, and our partnership with Regula enables us to validate identity documents from 254 countries and territories for issuance as Verifiable Credentials. It has a white-label digital wallet compatible with global digital identity standards and a mobile SDK for adding Verifiable Credentials to your apps.
It’s easy and quick to add to your existing biometric infrastructure, removing the need to rip and replace identity systems. It can effortlessly scale to country level deployment, and best of all, it’s significantly less expensive than centralized identity management.
Proven also follows the latest open standards, including eIDAS 2.0 and the EUDI framework, lowering regulatory risks, preserving traveler privacy, and opening markets that would otherwise be off limits.
Shut down fraud before it starts
Fraud should never be accepted as part of doing business. With Proven DTCs, airlines can defend against ticket and loyalty fraud before they even talk to a passenger. Airports can trust the traveler and the data they receive because it matches the credential and the verified government records. Hotels can check in guests with confidence, no passport photocopying or manual lookups required — and they have a simple and powerful way to reduce chargeback fraud.
Indicio Proven removes legacy vulnerabilities to identity fraud and closes the gaps between systems so identity can be trusted from start to finish. It protects revenue, safeguards customer relationships, and restores confidence across every stage of travel.
It’s time to stop fraud, simplify identity verification, and give travelers a secure, seamless experience with Indicio Proven.
Contact Indicio today and see how you can protect your business and your customers with Indicio Proven.
The post Why Digital Travel Credentials provide the strongest digital identity assurance appeared first on Indicio.
Keywords
AWS outage, digital dependency, business continuity, FIDO, authentication, passkeys, digital certificates, threat informed defense, false positives, cyber resilience
Summary
In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the recent AWS outage and its implications on digital dependency and business continuity. They explore the importance of disaster recovery plans and the evolving landscape of authentication technologies, particularly focusing on the FIDO Authenticate Conference. The conversation delves into the lifecycle of passkeys and digital certificates, emphasizing the need for threat-informed defense strategies and the challenges of managing false positives in security. The episode concludes with a call for better integration of systems and shared intelligence across the industry.
Chapters
00:00 Introduction and Global Outage Discussion
03:01 The Impact of Digital Dependency
06:00 Business Continuity and Disaster Recovery
09:10 FIDO Authenticate Conference Overview
16:09 Evolution of Authentication Technologies
21:45 The Lifecycle of Passkeys and Digital Certificates
29:59 Threat Informed Defense and False Positives
39:55 Conclusion and Future Considerations
This summer, Amadeus, a global travel technology company that powers many airline and airport systems, and Lufthansa, Germany’s flag carrier and one of Europe’s largest airlines, successfully tested the EU Digital Identity Wallet (EUDI Wallet) in real travel scenarios.
The test showed how credential-based travel could soon replace manual document checks.
During these tests, travellers could:
Check in online by sharing verified ID credentials from their wallet with one click, instead of entering passport data manually.
The results point to a future where travel becomes smoother and more secure, thanks to verifiable credentials and privacy-preserving identity verification.
“I’ve been having an intellectually fascinating time diving into Internet fragmentation and how it is shaped by supply chains more than protocols. There’s another bottleneck ahead, though, one that’s even harder to reroute: people.”
Innovation doesn’t happen in a vacuum. It requires engineers, designers, policy thinkers, and entrepreneurs. In other words, it needs human talent to build systems and set standards. And demographics are destiny when it comes to innovation. The places where populations are shrinking face not only economic strain but also a dwindling supply of innovators. The regions with young, growing populations could take the lead, but only if they can translate those numbers into participation in building tomorrow’s Internet.
Right now, the imbalance is striking. The countries that dominated the early generations of the Internet—the U.S., Europe, Japan, and now China—are either stagnating or shrinking. Meanwhile, countries with youthful demographics, especially across Africa and parts of South Asia, aren’t yet present in large numbers in the open standards process that defines the global Internet. That absence will shape the systems they inherit in the next 10-15 years.
This is the third in a four-part series of blog posts about the future of the Internet, seen through the lens of fragmentation.
First post: "The End of the Global Internet"
Second post: "Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet"
Third post: [this one]
Fourth post: "Can standards survive trade wars and sovereignty battles?" [scheduled to publish 28 October 2025]
A Digital Identity Digest: The People Problem: How Demographics Decide the Future of the Internet. You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
The United States: a leaking talent pipeline
For decades, the U.S. thrived as the global hub of Internet development. Silicon Valley became Silicon Valley not just because of venture capital, but because talent from around the world came to build there. That was then.
Domestically, U.S. students continue to lag behind peers in international comparisons of math and science performance, as OECD’s PISA 2022 makes clear. Graduate programs in engineering and computer science still brim with energy, but overwhelmingly from international students. Those students often want to stay, yet immigration bottlenecks, capped and riotously expensive H-1B visas, and green card backlogs create real uncertainty about whether they can.
Even inside the standards world, there are warning signs. The IETF's 2024 community survey showed concerns about age distribution, with long-time participants nearing retirement and too few younger contributors entering. If the U.S. cannot fix its education and immigration systems, its long-standing leadership in setting Internet rules will decline, not through policy shifts in Washington (which are not helping), but because of demographic erosion.
China: aging before it gets rich
China has built its growth story on a huge working-age population. That dividend is spent. Fertility hovers around 1.0, far below the replacement rate of 2.1, and the working-age population has already begun shrinking. By 2040, the elderly dependency ratio will climb sharply, with more pressure on pensions, healthcare, and younger workers.
The state has made automation and AI a cornerstone of its adaptation strategy. Investments in robotics and machine learning are designed to offset the loss of youthful labor. But an older population means fewer risk-takers, fewer startups, and more fiscal resources tied up in sustaining a rapidly aging society.
Japan’s experience offers a cautionary tale. Starting in the 1990s, it faced a similar contraction. Despite strong institutions and technological sophistication, growth stagnated. China risks repeating that path on a larger scale, and with less wealth per capita to cushion the fall.
Europe & Central Asia: slow contraction, unevenly distributed
Europe's demographic transformation is a slow squeeze rather than a sudden cliff. According to the International Labour Organization's 2025 working paper, the old-age ratio in Europe and Central Asia—the number of people over 65 compared to those of working age—will rise from 28 in 2024 to 43 by 2050. The region is expected to lose roughly ten million workers over that period.
The impact will not be uniform. Southern Europe is on track for some of the steepest shifts, with old-age ratios rising to two-thirds by 2050. By contrast, Central Asia maintains a relatively youthful profile, with projections of only 17 older adults per 100 workers. Policymakers across the continent are pushing familiar levers: encouraging older workers to stay employed longer, increasing women’s participation, and opening doors to migrants. But even with those adjustments, the fiscal weight of pensions, healthcare, and social protection will grow heavier, forcing innovation to rely more on productivity than population.
South Korea: the hyper-aged pioneer
South Korea is the most dramatic example of how quickly demographics can shift. The Beyond The Demographic Cliff report describes a "demographic cliff": fertility has collapsed to just 0.7 children per woman, the lowest in the world. The working-age share, once 72 percent in 2016, will fall to just 56 percent by 2040. By 2070, nearly half the population will be over 65.
Unlike the U.S. or Germany, South Korea has little immigration to soften the decline; only about five percent of the population is foreign-born. Despite spending trillions of won since 2005 on pronatalist programs, fertility has only dropped further. The government has little choice but to adapt. With one of the world’s highest industrial robot densities, Korea is leaning heavily on automation and robotics. At the same time, the “silver economy” is becoming a growth engine, with eldercare, health technology, and age-friendly industries gaining traction.
The sheer speed of Korea’s shift is staggering. What took France nearly two centuries—from 7 percent to 20 percent of the population being over 65—took Korea less than thirty years. That compressed timeline means Korea is a test case for what happens when demographics move faster than institutions can adapt.
Africa: the continent of the future
While the industrialized world contracts, Africa surges. As a World Futures article makes clear, Tropical Africa alone will account for much of the world's population growth this century. By 2100, Africa will be the largest source of working-age population in the world.
This demographic wave could be transformative. Africa holds vast reserves of cobalt, lithium, and other rare earths critical to green technologies. Combined with a youthful workforce, that could give the continent a central role in shaping the next century’s innovation. But the risks are real: education systems remain uneven, governance is fragile in many states, and climate pressures could destabilize growth. A demographic dividend only pays out if paired with investment in education and institutions.
Still, Africa is where the people will be. Whether or not it becomes a driver of global innovation depends on choices made now by African governments, but also by those investing in the continent’s infrastructure and industries.
Who shows up in the standards process
And here is the connection to Internet fragmentation: the regions with the fastest-growing, youngest populations are not yet shaping the standards process in any significant way.
The W3C Diversity Report 2025 shows that governance seats are still dominated by North America, Europe, and a slice of Asia. Africa and South Asia barely register. ISO admits the same problem: while more than 75 percent of its members are from developing countries, many lack the resources to participate actively. That’s why ISO has launched programs such as its Action Plan for Developing Countries and capacity-building initiatives for technical committees. Membership may be global, but influence is not.
Participation isn’t just about fairness. It determines the rules that future systems will follow. If youthful regions aren’t in the room when those rules are written, they’ll inherit an Internet designed elsewhere, reflecting other priorities. In the meantime, outside players are shaping the infrastructure. China is investing heavily in African digital and industrial networks, creating regional value chains that may set defaults long before African voices appear in open standards bodies.
Cross-border interdependence
Even if the Internet fragments politically or technologically, demographics will keep it globally entangled. Aging countries will depend on migration and remote work links to tap youthful labor pools. Younger countries will increasingly provide the engineers, developers, and operators who sustain platforms. Standards bodies may eventually shift to reflect new population centers, but the lag between demographic change and institutional presence can be decades.
This interdependence means that fragmentation won’t create neatly separated Internets. Instead, we’ll see overlapping systems shaped partly by who has the people and partly by who invests in them.
Destiny is in the demographics
Demographics don't move quickly, but they do move inexorably. The U.S. risks losing its edge through education and immigration failures. China is aging before it fully secures prosperity. Europe faces a slow decline. South Korea is already living the reality of a hyper-aged society. Africa is the wild card, with the potential to become the global engine of innovation if it can turn population growth into a dividend rather than a liability.
The stage is clearly set: the regions with the people to build tomorrow’s Internet aren’t yet present in the open standards process. Others, especially China, are already investing heavily in shaping what those regions will inherit.
If you want to know what kind of Internet we’ll have in the decades to come, don’t just look at protocols or supply chains. Watch the people. Watch where they are, and who is investing in them. That’s where the future of innovation lies.
If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
[00:00:30] Welcome back to the Digital Identity Digest! I'm Heather Flanagan, and if you've been following this series, you'll remember that we've been exploring Internet fragmentation from multiple angles.
In this episode, we’re zooming out once again—because even when the protocols align perfectly and the chips get made, there’s still one more piece of the puzzle that determines the Internet’s future: people.
More precisely, demographics.
Why Demographics Matter
[00:01:17] Who shows up to build tomorrow's systems?
Who are the engineers, the designers, the startup founders?
Which regions have enough young people to sustain innovation—and which don’t?
This isn’t just about the present moment. It’s about what happens in 15 years.
[00:01:35] The countries that built and shaped the early Internet—the U.S., the EU, Japan, and more recently China—are all aging. Some are even shrinking.
Meanwhile, regions with the youngest and fastest-growing populations, such as Africa and South Asia, are not yet fully represented in the rooms where global standards are written. And that gap matters deeply for the Internet we’ll all inherit.
The United States: Talent Pipeline Challenges
[00:02:07] For decades, the U.S. has been the global hub for Internet innovation. Silicon Valley thrived not just on venture capital, but because brilliant people from around the world came to build there.
[00:02:20] Yet, the domestic talent pipeline is starting to leak:
U.S. students lag behind international peers in math and science.
Graduate programs remain strong, but most are filled with international students.
Immigration backlogs and visa caps make it harder for those graduates to stay.
[00:02:44] Even inside the standards community, demographics are aging. The IETF's own survey shows long-time contributors retiring and not enough young participants stepping in.
If the U.S. can’t fix its education and immigration systems, its leadership won’t decline due to competition—it’ll slip because there aren’t enough people to carry the work forward.
China: From Growth to Grey
[00:03:10] China's story is different—but no less stark. For decades, its explosive growth came from a huge working-age population.
[00:03:19] That demographic dividend is over. Fertility rates have fallen to barely one child per woman. The working-age population peaked in 2015 and has been shrinking since.
[00:03:33] China’s solution has been to automate—investing heavily in robotics, AI, and machine learning.
But as populations age, societies often shift resources away from risk-taking. An older economy tends to:
Produce fewer startups
Take fewer risks
Spend more on pensions and healthcare
Japan's experience offers a cautionary example—and China risks following it on a larger scale and with less wealth per person to cushion the impact.
Europe: Managing a Slow Decline
[00:04:24] Europe faces a quieter version of the same story.
[00:04:41] By 2050, the ratio of older to working-age adults in Europe and Central Asia is expected to rise from 28 to 43. That means millions fewer workers and millions more retirees.
Europe’s strategy includes:
Keeping older workers employed longer
Expanding women's participation in the workforce
Opening the door to migrants
However, the basic reality remains—fewer young people are entering the workforce. Innovation will depend more on productivity gains than on population growth.
South Korea: The Hyper-Aged Future
[00:05:12] South Korea offers a glimpse into the world's most rapidly aging society.
[00:05:14] Fertility has collapsed to 0.7 children per woman, the lowest in the world. By 2070, nearly half the population will be over 65.
Unlike the U.S. or Germany, Korea has almost no immigration to balance the decline. Despite huge government investments in pronatalist programs, fertility continues to fall.
Korea is adapting through:
High robot density and automation
Growth in the silver economy — industries around elder care, health tech, and age-friendly products
The speed of this shift is astonishing: what took France 200 years, Korea did in less than 30. It's now a laboratory for adaptation—figuring out how policy and technology respond when demographics move faster than politics.
Africa: The Continent of the Future
[00:06:28] While industrialized nations age, Africa is booming.
By the end of this century, Africa will account for the majority of the world’s working-age population.
Its advantages are immense:
Rapid population growth
Rich reserves of critical minerals (cobalt, lithium, rare earths)
Expanding urbanization and education
However, these opportunities are balanced by real challenges:
Under-resourced education systems
Fragile governance
Climate pressures
[00:07:22] If managed well, Africa could become the innovation hub of the late 21st century. But much depends on where investment originates—within Africa or from abroad—and whose values and standards shape the technologies that follow.
Who's in the Room?
[00:07:54] This is where demographics meet Internet fragmentation directly.
Regions with the youngest populations are still underrepresented in open standards bodies.
The W3C's diversity reports show most seats are still held by North America, Europe, and parts of Asia. Africa and South Asia barely register.
ISO has many developing-country members, but few can participate actively.
[00:08:36] Membership may be broad, but influence is not.
And that absence matters—because standards define power. They determine how the Internet functions, what’s prioritized, and who benefits.
If youthful regions aren’t in the room when rules are written, they’ll inherit an Internet designed elsewhere.
Looking Ahead
[00:09:02] Meanwhile, China is filling that vacuum—investing heavily in African digital infrastructure and shaping defaults long before African voices are fully present in global standards.
Even as the Internet fragments politically and technologically, demographics tie us together.
Aging nations will rely on migration and remote work.
Younger countries will provide the engineers and operators sustaining global platforms.
Standards institutions may eventually reflect new population centers—but change lags behind demographic reality.
[00:09:43] The people who build the Internet of the future will increasingly come from Africa and Southeast Asia—while the institutions writing the rules still reflect yesterday's demographics.
Wrapping Up
[00:10:00] Demographics move slowly—but they are relentless. You can't rush them.
The U.S. risks losing its edge through education and immigration challenges.
China is aging before securing long-term prosperity.
Europe faces a gradual, gentle decline.
South Korea is already living the reality of hyper-aging.
Africa remains the wild card—its youth could define the next Internet if it can translate population growth into participation and policy.
[00:10:57] So, if you want to glimpse the Internet's future, don't just look at protocols or supply chains. Look at the people—where they are, and who's investing in them. That's where innovation's future lies.
Closing Notes
[00:11:09] Thanks for listening to this week's episode of the Digital Identity Digest.
If this discussion helped make things clearer—or at least more interesting—share it with a friend or colleague. Connect with me on LinkedIn (@lflanagan), and if you enjoyed the show, please subscribe and leave a review on your favorite podcast platform.
You can always find the full written post at sphericalcowconsulting.com.
Stay curious, stay engaged, and keep the conversations going.
The post The People Problem: How Demographics Decide the Future of the Internet appeared first on Spherical Cow Consulting.
To improve the stability and performance of Metadium Explorer, we will be performing a database upgrade as outlined below.
📅 Schedule
October 23, 2025 (Thu) 10:30–11:30 (KST). Please note that the end time may vary depending on the progress of the upgrade.
🔧 Update Details
DB minor version upgrade
⚠️ Notice
During the upgrade window, Metadium Explorer will be temporarily unavailable. Users will not be able to access services such as transaction history, block information, and related features.
We appreciate your understanding as we work to improve the reliability of our services.
Thank you. The Metadium Team.
🛠 Metadium Explorer — Database Upgrade Notice was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post What Is Decentralized Identity? Complete Guide & How To Prepare appeared first on 1Kosmos.
The financial system’s integrity, and the public trust it depends on, can no longer rest on paper-era compliance. For more than fifty years, the Bank Secrecy Act (BSA) has guided how institutions detect and report illicit activity. Yet as the economy digitizes, this framework has become a drag on both effectiveness and inclusion. The cost of compliance has soared to $59 billion annually, while less than 0.2% of illicit proceeds are recovered. Community banks spend up to 9% of non-interest expenses on compliance; millions of Americans remain unbanked because the system is too manual, too fragmented, and too dependent on outdated verification models.
SpruceID’s response to the U.S. Treasury’s recent Request for Comment on Innovative Methods to Detect Illicit Activity Involving Digital Assets (TREAS-DO-2025-0070-0001) outlines a path forward. Drawing on our real-world experience building California’s mobile driver’s license (mDL) and powering state-endorsed verifiable digital credentials in Utah, we propose a model that unites lawful compliance, privacy protection, and public trust.
Our framework, called the Identity Trust model, shows how verifiable digital credentials and privacy-enhancing technologies can make compliance both more effective for enforcement and more respectful of individual rights.
Our proposal is not to expand surveillance or broaden data collection, but to make compliance more precise. The Identity Trust model is designed to be applied only where existing laws such as the BSA and AML/CFT rules require verification or reporting. Today’s compliance systems often require collecting and storing more personal information than is strictly necessary, which increases costs and risks for institutions and customers alike. By enabling verifiable digital credentials and privacy-enhancing technologies, our model ensures institutions can fulfill their obligations with higher assurance while minimizing the amount of personal data collected, stored, and exposed. This shift replaces excess data retention with cryptographic proofs, delivering better outcomes for regulators, financial institutions, and individuals alike.
This framework proposes regulation for the digital age, using the same cryptographic assurance that already secures the nation’s payments, passports, and critical systems to bring transparency, precision, and fairness to financial oversight.
A System Ready for Reform
Compliance with BSA and AML/CFT rules remains rooted in outdated workflows: identity verified by a physical ID, information stored in readable form, and centralized personal data. These methods have become liabilities. They drive up costs, create honeypots of data for breaches, and encourage "de-risking" that locks out lower-income and minority communities.
The technology to fix this exists today. Mobile driver’s licenses (mDLs) are live in more than seventeen U.S. states, accepted by the TSA at over 250 airports. Utah’s proposed State-Endorsed Digital Identity (SEDI) approach, detailed in Utah Code § 63A-16-1202, already provides a framework for trusted, privacy-preserving digital credentials. Federal pilots, such as NIST’s National Cybersecurity Center of Excellence (NCCoE) mobile driver’s license initiative, are proving these models ready for financial use.
What’s missing is regulatory recognition: the clarity that these trusted credentials, when properly verified, fulfill legal identity verification and reporting obligations under the BSA.
The Identity Trust Model
The Identity Trust model offers a blueprint for modernizing compliance without the need for new legislation. It allows regulated entities, such as banks or state- or nationally chartered trusts, to issue and rely on pseudonymous, cryptographically verifiable credentials that prove required attributes (such as sanctions screening status or citizenship) without disclosing unnecessary personal data.
The framework operates in four stages:
Identifying: A regulated entity (the Identity Trust, of which there can be many) is responsible for verifying an individual's identity using digital and physical methods, based on modern best practices such as NIST SP 800-63-4A for identity proofing. Once verified, the trust issues a pseudonymous credential to the individual and encrypts their personal information. Conceptually, the unlocking key is split into three parts: one held by the individual, one by the Trust, and one by the courts, with any two sufficient to unlock the record (roughly, a "two-of-three key threshold"; see the sketch after this list).
Transacting: When the individual conducts financial activity, they present their pseudonymous credential. Transactions are then tagged with unique one-time-use identifiers that prevent linking activity across contexts, even if collusion were attempted. Each identifier carries a cryptographically protected payload that can only be "unlocked" with the conceptual two-of-three key threshold. Entities and decentralized finance protocols processing the identifiers are able to cryptographically verify that the identifier is correctly issued by an Identity Trust and remains valid.
Investigating: If law enforcement or regulators demonstrate lawful cause, both the court and the Identity Trust operate their keys to reach the two-of-three threshold and grant authorized access to the specific, limited data justified by the circumstances. The Identity Trust must have a robust governance framework for granting access to law enforcement, one that balances privacy and due process rights with law enforcement needs through judicial orders. Once the keys from the two entities are combined, the vault containing the relevant information about the identity can be decrypted if it exists, revealing the individual's information in a controlled and auditable manner, including correlating other transactions depending on the level of access granted by the lawful request. Alternatively, the individual can combine their key with the Identity Trust's key to see their entire audit log and to create cryptographic proofs of their actions across their transactions.
Monitoring: The Identity Trust performs continuous checks against suspicious-actor and sanctions lists in a privacy-preserving manner, under approved policies for manner and intervals, with the auditable logs protected and encrypted such that only the individual or duly authorized investigators can work with the Identity Trust to access the plaintext. Individuals may also request attribute attestations from the Identity Trust, for example, that they are not on suspicious-actor or sanctions lists, or attestations for credit checks.
This structure embeds accountability and due process into the architecture itself. It enables lawful access when required and prevents unauthorized surveillance when not. Crucially, the model fits within existing AML authority, leveraging the same legal and supervisory frameworks that already govern banks, trust companies, and credential service providers.
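A minimal sketch of the kind of threshold split referenced above, using Shamir secret sharing over a prime field in Python. This illustrates the general cryptographic building block, not SpruceID's actual construction; the share labels (individual, trust, court) simply follow the description in the Identifying stage.

```python
# Sketch of a 2-of-3 threshold split (Shamir secret sharing over a prime field),
# the kind of building block a "two-of-three key threshold" could use.
# Illustrative only; not the actual Identity Trust construction.
import secrets

P = 2**127 - 1  # a Mersenne prime large enough for a 16-byte key

def split_2_of_3(secret: int) -> list[tuple[int, int]]:
    """Split `secret` into 3 shares; any 2 reconstruct it, 1 alone reveals nothing."""
    a = secrets.randbelow(P)                  # random degree-1 coefficient
    f = lambda x: (secret + a * x) % P        # f(0) = secret
    return [(x, f(x)) for x in (1, 2, 3)]     # e.g. individual, trust, court

def reconstruct(share_a: tuple[int, int], share_b: tuple[int, int]) -> int:
    """Lagrange interpolation at x = 0 from any two distinct shares."""
    (xa, ya), (xb, yb) = share_a, share_b
    la = (xb * pow((xb - xa) % P, -1, P)) % P  # Lagrange basis for point a at x=0
    lb = (xa * pow((xa - xb) % P, -1, P)) % P  # Lagrange basis for point b at x=0
    return (ya * la + yb * lb) % P

key = secrets.randbelow(P)                     # stands in for the record-encryption key
individual, trust, court = split_2_of_3(key)
assert reconstruct(trust, court) == key        # lawful-access path (court + trust)
assert reconstruct(individual, trust) == key   # individual audit path
print("any two shares recover the key:", True)
```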
Policy Recommendations for Treasury
SpruceID’s recommendations to Treasury and FinCEN focus on aligning policy with existing technology, ensuring that the U.S. remains a global leader in both compliance and digital trust.
Each of the following recommendations pairs a request for consideration with its reasoning and impact.
1. Recognize verifiable digital credentials (VDCs) issued by many acceptable sources as valid evidence under Customer Identification Program (CIP) and Customer Due Diligence (CDD) obligations, including as “documentary” verification methods when appropriate.
Treasury and FinCEN should interpret 31 CFR § 1020.220 (and corresponding CIP rules and guidance) to include verifiable digital credentials if they can meet industry standards, such as a baseline of National Institute of Standards and Technology (NIST) SP 800-63-4 Identity Assurance Level 2 (IAL2) identity verification or higher, issued directly from government authorities, or through reliance upon approved institutions or identity trusts.
These verifiable digital credentials (VDCs), such as those issued pursuant to the State-Endorsed Digital Identity (SEDI) approaches, should be treated as “documentary” evidence where appropriate. The principle of data minimization should become a pillar of financial compliance, with VDC-enabled attribute verification encouraged over requiring the sharing of unnecessary personally identifiable information (PII), such as static identity documents, where possible.
Current CIP programs largely presume physical IDs, limiting innovation and remote onboarding, even though the statute is not prescriptive about medium or security mechanism.
Verifiable digital credentials issued by trusted authorities provide cryptographically proven authenticity and higher assurance against forgery or impersonation, better fulfilling the aims of risk-based compliance management programs.
Recognizing VDCs as documentary evidence would enhance verification accuracy, reduce compliance costs, and align U.S. practice with FATF Digital ID Guidance (2023) and EU eIDAS 2.0, promoting global interoperability.
Attribute-based approaches to AML, such as “not-on-sanctions-list” or “US-person,” should be preferred whenever possible, as they can effectively manage risk without the overcollection of PII, avoiding a “checkpoint society” riddled with unnecessary ID requirements.
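As an illustration of the attribute-based approach, the sketch below shows what an attribute-only credential payload might look like, with field names loosely following the W3C Verifiable Credentials data model. The issuer, DIDs, and attribute names are hypothetical and are not drawn from the SpruceID response.

```python
# A sketch of an attribute-only credential: it asserts only the compliance-relevant
# facts, bound to a pseudonymous subject identifier, rather than carrying a full
# identity document. Field names loosely follow the W3C Verifiable Credentials
# data model; the issuer, DIDs, and values are illustrative placeholders.

attribute_credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "ComplianceAttributeCredential"],
    "issuer": "did:example:identity-trust-123",        # an accredited Identity Trust
    "validFrom": "2025-10-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:pseudonymous-subject-456",   # no name, address, or SSN
        "notOnSanctionsList": True,
        "usPerson": True,
        "identityAssuranceLevel": "IAL2",               # per NIST SP 800-63-4
    },
    # In practice the credential would carry a cryptographic proof from the issuer
    # (for example a Data Integrity proof or an SD-JWT signature).
    "proof": {"type": "DataIntegrityProof", "proofValue": "..."},
}
```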
2. Permit financial institutions to rely on VDCs issued by other regulated entities, identity trusts, or accredited sources via verified real-time APIs for AML/CFT compliance.
Treasury and FinCEN should authorize institutions to accept credentials and attestations from peer financial institutions or identity trust networks when those issuers meet assurance and audit standards.
Congress should further consider the addition of a new § 201(d) to the Digital Asset Market Structure Discussion Draft (Sept. 2025) clarifying Treasury’s authority to recognize and accredit digital-identity and privacy-enhancing compliance frameworks.
While current CIP programs still assume physical ID presentation, the underlying statute is technology neutral and does not mandate any specific medium or security mechanism. Recognizing VDCs can modernize onboarding by reducing costs and friction, improving AML data quality and transparency, and enabling faster, more collaborative investigations across institutions and borders—all while minimizing data-collection risk.
Statutory clarity ensures that Treasury’s modernization efforts rest on a durable, technology-neutral foundation. This amendment would future-proof the U.S. AML/CFT regime, align it with G7 digital-identity roadmaps, and strengthen U.S. leadership in global digital-asset regulation.
3. Permit privacy-enhancing technologies (PETs) to meet verification and monitoring obligations.
Treasury should issue interpretive guidance or rulemaking confirming that zero-knowledge proofs, pseudonymous identifiers, and multi-party computation may be used for CIP, CDD, and Travel-Rule compliance if equivalent assurance and auditability are maintained.
PETs enable institutions to prove AML/CFT compliance without exposing underlying PII, minimizing data breach and insider risk exposure while maintaining verifiable oversight.
Recognizing PETs would modernize compliance architecture, lower data-handling costs, and encourage innovation consistent with global privacy and financial-integrity standards.
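As one concrete, deliberately simplified example of the kind of zero-knowledge technique referenced here, the sketch below implements a non-interactive Schnorr proof of knowledge of a secret exponent: a verifier learns that the holder knows the secret without learning its value. The toy group parameters are assumptions for readability; real deployments would use standardized curves and audited proof systems.

```python
# Illustrative sketch only: a non-interactive (Fiat-Shamir) Schnorr proof of
# knowledge of a discrete log, standing in for the zero-knowledge techniques the
# recommendation refers to. Toy parameters, NOT production-grade cryptography.

import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus (illustrative only)
G = 5            # toy generator
Q = P - 1        # modulus for exponent arithmetic


def fiat_shamir(*parts: int) -> int:
    """Derive the challenge by hashing the public transcript."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q


def prove(x: int):
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q)
    t = pow(G, r, P)             # commitment
    c = fiat_shamir(G, y, t)     # challenge
    s = (r + c * x) % Q          # response
    return y, t, s


def verify(y: int, t: int, s: int) -> bool:
    c = fiat_shamir(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P


# x could be a credential-bound secret; the verifier learns only that the holder
# knows it, never the value itself.
assert verify(*prove(secrets.randbelow(Q)))
```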
4. Modernize the Travel Rule to enable verifiable digital credential-based information transfer.
Treasury should amend 31 CFR § 1010.410(f) or issue guidance allowing originator/beneficiary data to be transmitted via cryptographically verifiable credentials or proofs instead of plaintext PII.
The current Travel Rule framework was built for wire transfers, not blockchain systems. Verifiable digital credentials can carry or attest to required information with integrity, selective disclosure, and traceability.
This approach preserves law-enforcement visibility while protecting privacy, ensuring interoperability with FATF Recommendation 16 and global Virtual Asset Service Providers (VASPs).
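One way to picture this, purely as a sketch and not a proposed standard, is to transmit salted hash commitments to the required originator and beneficiary fields in place of plaintext, with individual fields disclosable later to an authorized party. The field names and the simple commitment scheme below are assumptions for illustration.

```python
# Sketch: instead of transmitting plaintext Travel Rule PII, the sender transmits
# salted commitments to each required field. A specific field can later be
# disclosed (value + salt) to an authorized party, who re-hashes it to confirm it
# matches the original transmission. Illustrative only; not a standard.

import hashlib
import secrets


def commit(value: str):
    """Return (salt, commitment) for a single Travel Rule field."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
    return salt, digest


def verify_disclosure(value: str, salt: str, commitment: str) -> bool:
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest() == commitment


# Originating institution commits to the required fields and retains the salts.
fields = {"originator_name": "Jane Doe", "originator_account": "ACCT-0001"}  # placeholders
salts, payload = {}, {}
for name, value in fields.items():
    salts[name], payload[name] = commit(name + "=" + value)

# Later, a lawful request for one field: disclose value + salt, verifier checks it.
disclosed = "originator_name"
assert verify_disclosure(disclosed + "=" + fields[disclosed], salts[disclosed], payload[disclosed])
```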
5. Establish exceptive relief for good-faith reliance on accredited identity trust, VDC, and Privacy-Enhancing Technology (PET) systems.
Treasury should use its § 1020.220(b) rulemaking authority to provide exceptive relief deeming institutions compliant when they rely on Treasury-accredited credentials or PET frameworks meeting defined assurance standards.
Institutions adopting accredited compliance tools should not face enforcement liability for third-party system errors beyond their control. Exceptive relief would provide regulatory certainty and clear boundaries of accountability.
Exceptive relief incentivizes the adoption of privacy-preserving identity systems such as identity trusts, reducing costs while strengthening overall compliance integrity.
6. Leverage NIST NCCoE collaboration for technical pilots and standards.
Treasury and FinCEN should partner with NIST’s National Cybersecurity Center of Excellence (NCCoE) Digital Identities project to pilot mDLs, VDCs, and interoperable trust registries for CIP and CDD testing.
The NCCoE provides standards-based prototypes (e.g., NIST SP 800-63-4 and ISO/IEC 18013-5/-7 mDL) that validate real-world feasibility and assurance equivalence.
Collaboration ensures technical soundness, interagency alignment, and rapid deployment of privacy-preserving digital-identity frameworks.
7. Direct FinCEN to engage proactively with industry on the adoption of advanced technologies that enhance AML compliance, investigations, and privacy protection.
Treasury should issue formal direction or guidance requiring FinCEN to establish an ongoing public-private technical working group with industry, academia, states, and standards bodies to pilot and evaluate advanced compliance technologies.
Continuous engagement with the private sector ensures that FinCEN’s rules keep pace with innovation and that compliance tools remain effective, privacy-preserving, and economically efficient.
This collaboration would strengthen AML/CFT investigations, reduce false positives, and alleviate the compliance burden on financial institutions while upholding privacy and data-protection standards.
The Path Forward
Time and again, regulatory compliance challenges have sparked the next generation of financial infrastructure. EMV chips transformed fraud detection; tokenization improved payment security; now, verifiable identity can redefine AML/CFT compliance.
By replacing static data collection with cryptographic proofs of compliance, regulators gain better visibility, institutions reduce cost, and individuals retain control over their personal information. The transformation is not solely technological—it’s institutional: from data collection to trust verification.
SpruceID’s aim is to build open digital identity frameworks that empower trust—not just between users and apps, but between citizens and institutions. Our experience powering government-issued credentials demonstrates that strong identity assurance and privacy can coexist. In our response to the Treasury, we’ve shown how those same principles can reshape AML/CFT for the digital age. But the work is far from finished.
Over the coming months, SpruceID will release additional thought pieces on how public agencies and private institutions can collaborate to advance trustworthy digital identity, from privacy-preserving regulatory reporting to unified standards for trustworthy digital identity.
We invite policymakers, regulators, technologists, and financial leaders to join us in dialogue and in action. Together, we can build a compliance framework that is lawful, auditable, and worthy of public trust.
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.
In episode 14 of the Data Sharing Podcast, host Caressa Kuk talks with Gert-Jan van Dijke and Jeroen van Winden (Ockto) about customer lifecycle management in the financial sector. Because once a customer is on board, the real work has only just begun.
Hey folks 👋
Big one today.
Like we hinted in the last edition, Kin 0.6 is rolling out. This is Kin’s biggest update ever, and it’s packed.
Full rollout begins tomorrow (Tuesday, October 21, 2025), but we’ve got some sneak peeks for you.
We also have a super prompt based around making the most out of the new opportunities this update provides - so make sure you read to the end.
What’s New With Kin
🚀 Meet your advisory board 🧠
You’ve probably seen them drifting into chat recently - little hints of Harmony, Sage, and the rest. Now the full board of five arrives, each with expertise in advising on a particular topic.
Sage: Career & Work
Aura: Values & Meaning
Harmony (Premium only): Relationships
Pulse (Premium only): Sleep & Energy
Ember (Premium only): Social
Each one brings a different lens on your life, but all pull insight from your Journal entries, conversations, and memories.
Conversation Starters 💬
Every advisor’s chat screen now includes personalized, context-aware starters - not just to make starting a conversation easier, but to make remembering the things you wanted to talk about as effortless as possible.
Memory, re-engineered (finally) 📂
It feels like we’re always alluding to this - but now it’s here.
Kin’s Memory is now 5× more accurate when recognizing and extracting memories from conversations.
Advisors can also now search across all of your memories, Journal entries, and conversations, so they can build an understanding of context quickly.
All of this means that no matter which advisor you speak with, Kin is much more able to pull the relevant information from its improved memory structure - so you get better, smarter, more relevant advice from every advisor.
We’ve also beefed up the Memory UI. On top of the classic memory graph, you can now see what Kin knows about you - as well as the organizations and people you’re connected to.
And each of these people/organizations/places now have their own Entity pages, where you can see, edit, and add to what Kin has collected about them from your conversations.
You can even finally search memories for key words and associations!
See your progress 📊
There’s a brand new Stats page that visualizes your growth with Kin.
You can see a breakdown of usage stats and Memory types, so you can see what you’re talking about a lot, and where you and your Kin might have some blind spots.
Journaling, cleaned up 📝
Based on all your feedback, we’ve finished rebuilding the Journal from the ground up.
There’s a brand-new, simplified UI to make daily journaling easier than ever.
Premium users also unlock custom journal templates, perfect for capturing anything from gratitude logs to tough feedback moments.
New onboarding (for everyone) 🔐
Next time you open Kin, you’ll be prompted to sign in with Apple, Google, or Email.
This makes onboarding smoother and syncing easier (rumours of a desktop version abound), and lays the groundwork for future features.
But don’t worry: your data hasn’t moved an inch.
It still lives securely with you, on your device.
We’ll share a detailed write-up soon (as promised), but the short version is: simpler sign-in, same privacy-first design.
Premium (by request!) ⭐
You asked. We built it.
Premium unlocks the full Kin experience, and extends existing Free features so you can make the most of your Kin.
If you join Premium, you’ll get:
All 5 advisors (Harmony, Pulse, Ember + the two free advisors, Sage and Aura)
Unlimited text messages
1 hour of voice per day
Custom journal templates
Premium is currently $20/month - and there’s a discount if you go for 3 months.
If you don’t want to upgrade though, don’t fret. The Free tier is going nowhere: Premium is for power users who want the full advisor board and voice time.
When? 🗓️
Rollout starts Tuesday, October 21, 2025. That’s tomorrow, if you’re reading this as it goes out!
Expect updates over the following week as we make sure everything runs smoothly. Speaking of…
This is the biggest change Kin has ever gone through. It’s our largest step toward a 1.0 release yet - and we want to make sure we’re heading in the right direction before we get too far.
The KIN team can be reached at hello@mykin.ai for anything, from feedback on the app to a bit of tech talk (though support@mykin.ai is better placed to help with any issues).
You can also share your feedback in-app. Just screenshot to trigger the feedback form.
But if you really want to get involved, the official Kin Discord is the best place to talk to the Kin development team (as well as other users) about anything AI.
We have dedicated channels for Kin’s tech, networking users, sharing support tips, and for hanging out.
We also regularly run three casual calls every week - and we’d love for you to join them:
Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.
Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.
Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.
We updated Kin so that it can better help you - help us make sure that’s what it does!
Our current reads 📚
No new Slack screenshot this edition - we’ve been too busy to share new articles recently!
Article: Google announced DeepSomatic, an open-source cancer research AI
READ - blog.google
Article: Meta AI glasses fuel Ray-Ban maker’s best quarterly performance ever
READ - reuters.com
Article: Google launches Gemini Enterprise to make AI accessible to employees
READ - cloud.google.com
Article: How are MIT entrepreneurs using AI?
READ - MIT News
This time, your chosen advisor can help you answer the question:
“How do I make the most of new opportunities?”
Once the update comes out tomorrow, try hitting the link with different advisors selected, and get a few different viewpoints!
You are Kin 0.6 (and beyond) 👥
Without you, Kin wouldn’t be anything. We want to make sure you don’t just know that, but feel it.
So, please: get involved. Chat in our Discord, email us, or even just shake the app to get in contact with anything and everything you have to say about Kin.
Most importantly: enjoy the update!
With love,
The KIN Team
In recent years, banks have made significant strides in digitizing onboarding. Bringing new customers on board keeps getting easier. But when it comes to customer lifecycle management (keeping customer data up to date over the life of the relationship), the sector lags behind, even though this is exactly where the pressure is mounting, both from regulators and from the duty of care.
Dear Community,
Metadium made meaningful strides in the third quarter of 2025. We deeply appreciate your continued support and are pleased to share the key achievements and updates from Q3.
Summary
Metadium’s DID and NFT technologies were applied to support Korea’s first ITMO (Internationally Transferred Mitigation Outcomes) certified carbon reduction project.
The AI conversation partner service “Daepa” officially launched, using Metadium DID for identity management in the backend.
The Metadium Node Partnership Program (2025–2027) officially began, with a total of 9 partners operating nodes across the network.
Technology and Ecosystem Updates
Q3 Monthly Transactions
From July to September 2025, a total of 586,608 transactions were processed, and 42,979 DID wallets were created.
ITMO-Certified Carbon Reduction Project
The Verrywords project, officially recognized by the Cambodian government for reducing 680,000 tons of greenhouse gas emissions, utilized Metadium’s DID and NFT technologies as core infrastructure.
Metadium DIDs were issued to electric motorcycle recipients, enabling participant identification and tracking. The reduction records were issued through a Metadium-based point system and uniquely verified via NFTs. This marks Korea’s first officially approved ITMO case and a significant milestone demonstrating Metadium’s potential in global environmental cooperation.
For more details, please click here.
Metadium DID Integrated into “Daepa” AI Service
The AI relationship training service “Daepa” has been officially launched with Metadium DID integrated into its backend identity management system.
Users do not directly interact with the DID system, but all interactions are managed through unique DIDs. The DID system will be leveraged for future expansion into trust-based services, including points, rewards, and user-to-user connections. This represents a case of DID and AI technology convergence, showcasing the diverse applicability of Metadium’s DID framework.
For more details, please click here.
Transition to the 2025–2027 Metadium Node Partnership
As of September 2025, Metadium’s Node Partnership Program has transitioned into a new operational cycle (2025–2027).
Alongside existing partners like Metadium, Berrywords, and Superschool, new global partners have joined, totaling nine participants. These partners play a vital role as co-operators of the Metadium ecosystem, participating in block generation, validation, and governance. The new structure enhances the technical diversity and global scalability of the network.
For more details, please click here.
Metadium remains dedicated to advancing real-world applications of blockchain technology and decentralized identity.
We will continue to innovate and build a trusted digital infrastructure for our users and partners.
Thank you,
The Metadium Team
Metadium 2025 Q3 Activity report was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.
Blockchain-Based Identity Infrastructure Tested for Public Sector Use
CSAP Security Certification Underway — Demonstrating Real-World Viability of Metadium’s DID Technology
Metadium’s decentralized identity (DID) technology has been adopted for the 2025 pilot project of the Korea Blockchain Trust Framework (K-BTF), led by the Korea Internet & Security Agency (KISA). The K-BTF program aims to validate the use of blockchain-powered trust infrastructure within public services.
CPLABS is responsible for building the DID platform for this year’s pilot, with Metadium as the underlying blockchain protocol. The project covers the entire lifecycle of DID infrastructure — from system design to cloud-based deployment and security certification — offering a robust opportunity to demonstrate the scalability and reliability of Metadium’s technology in real-world public cloud environments.
CSAP Certification in Progress for Metadium-Based DID Platform
CPLABS has successfully implemented a Metadium-based DID platform, including DID issuance, authentication, and verification functions. The platform has formally entered the CSAP (Cloud Security Assurance Program) evaluation process, which is a mandatory certification for any cloud service provider offering solutions to public institutions in Korea.
The CSAP review will assess technical performance and verify the platform’s compliance with administrative and policy standards.
This marks a meaningful step forward for Metadium DID as it transitions from technical showcase to deployable infrastructure for government-grade digital identity systems.
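For readers unfamiliar with how DID authentication typically works, the sketch below shows a generic challenge-response flow: the verifier issues a nonce, the holder signs it with the key referenced from their DID document, and the verifier checks the signature against that public key. This is a generic illustration using the Python `cryptography` package and a placeholder did:example identifier, not CPLABS’s or Metadium’s actual implementation.

```python
# Generic sketch of DID authentication via challenge-response signing
# (illustrative; not the Metadium/CPLABS implementation). Requires `cryptography`.

import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Holder side: key pair whose public half would be referenced from the DID document
# (the identifier below is a placeholder, not a real did:meta DID).
holder_key = Ed25519PrivateKey.generate()
did_document = {
    "id": "did:example:holder-123",
    "verificationMethod": [
        {"id": "did:example:holder-123#key-1", "publicKey": holder_key.public_key()}
    ],
}

# Verifier side: send a fresh challenge, then check the returned signature.
challenge = secrets.token_bytes(32)
signature = holder_key.sign(challenge)  # holder proves control of the DID's key

try:
    did_document["verificationMethod"][0]["publicKey"].verify(signature, challenge)
    print("DID authentication succeeded")
except InvalidSignature:
    print("DID authentication failed")
```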
Expanding from Private Sector to Public Sector Applications
Having proven itself in multiple commercial use cases, Metadium DID is now being tested for public sector readiness and policy alignment. Metadium’s blockchain is purpose-built for identity issuance and verification and features a native DID method optimized for privacy, traceability, and regulatory compliance — qualities that align well with government requirements.
The newly built platform is expected to serve as the foundation for a range of future public services, including cloud-based identity issuance, digital authentication, and point-integrated administrative programs.
👉 The results of the CSAP certification and future use cases will be shared via Metadium’s official channels.
The Metadium Team
Metadium DID Applied in KISA’s “K-BTF” National Pilot Project was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.
At Metadium, we build infrastructure for trust. We believe trust isn’t just about how a technology operates, but about how that technology is responsibly operated together. Trust isn’t built in isolation — it’s co-created with committed partners.
Today, we’re proud to welcome a new partner to our ecosystem: SuperSchool, an AI-powered education platform, has officially joined Metadium as a Node Partner.
Technology that works in the classroom — The SuperSchool mission
SuperSchool is an EdTech company innovating the school system through AI and cloud-based technology.
From attendance and activity logs to counseling, performance analytics, college guidance, and student records — SuperSchool digitizes the full spectrum of school operations, offering automated, AI-driven solutions for teachers and students.
Their work is grounded in real classrooms:
Co-designed with over 300 teachers nationwide
Patent-registered for digital attendance & administration systems
Privacy partnership with Korea University’s Graduate School of Cybersecurity
Co-developed an IB education platform with Jeju PyoSeon High School
Signed international contracts with schools abroad, including in China
This is not just about making school digital — it’s about managing sensitive educational data responsibly and meaningfully.
From technology user to ecosystem operator — What the node partnership means
Metadium has already been providing blockchain technology for parts of SuperSchool’s platform.
Now, that collaboration deepens: SuperSchool joins as a Node Partner, becoming an active participant in the operation of the Metadium network.
A node is more than just infrastructure. It’s a trust operator responsible for helping maintain the integrity, transparency, and continuity of a decentralized ecosystem. SuperSchool now shares in that responsibility.
Blockchain-Powered verification for educational records
SuperSchool manages data that reflects students’ lives — data that informs academic growth, career decisions, and institutional trust.
AI-generated activity record analysis
Auto-drafted student reports for evaluation
Personalized career & college guidance
Parts of this sensitive data are cryptographically verified through Metadium’s blockchain, ensuring they are tamper-proof and authentically sourced.
This represents a real-world intersection of public-sector education and blockchain-level trust.
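As a rough illustration of how blockchain anchoring makes records tamper-evident (the specific SuperSchool/Metadium integration is not detailed in this post), the sketch below hashes a canonicalized record, treats that hash as the value anchored on-chain, and detects any later modification. The record fields are hypothetical.

```python
# Minimal sketch of tamper-evidence via hash anchoring (illustrative only).
# The platform stores a record off-chain, anchors its hash on-chain, and any
# later verifier recomputes the hash to detect modification.

import hashlib
import json


def record_digest(record: dict) -> str:
    """Canonicalize the record deterministically, then hash it."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


student_record = {"student": "pseudonym-42", "term": "2025-1", "activity": "Robotics club lead"}
anchored_hash = record_digest(student_record)   # this value would be written on-chain

# Later verification: recompute and compare against the on-chain anchor.
assert record_digest(student_record) == anchored_hash
tampered = dict(student_record, activity="Student council president")
assert record_digest(tampered) != anchored_hash
```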
Metadium x SuperSchool
We now stand on the same node where technology meets education and trust supports students’ future.
📎 Learn more about SuperSchool
SuperSchool Joins the Metadium Node Partner Network was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.
Metadium is more than just a blockchain project. We design trust infrastructure. And trust is not built by technology alone — it’s built through responsible operation and meaningful participation by those who share that responsibility.
Today, we’re pleased to announce the latest addition to the Metadium ecosystem: VERYWORDS, a company developing and operating an e-Mobility-based carbon reduction platform, has officially joined as a Metadium Node Partner.
VERYWORDS — Building sustainable technology through real-world action
VERYWORDS is not a company that merely expresses interest in climate issues. It’s a team that has spent years in the field, building practical, technology-based models to reduce carbon emissions.
2017–2019: Carbon neutrality consulting for government and enterprise
2020–2022: e-Mobility pilot programs across ASEAN countries
2023: Established electric motorcycle assembly plant in Cambodia
2024: Signed carbon credit pre-purchase agreement with Korea’s Ministry of Trade, Industry, and Energy
2025: Secured Korea’s first ITMO (Internationally Transferred Mitigation Outcome) project approval
VERYWORDS operates a multidimensional climate tech ecosystem, integrating carbon reduction, EV infrastructure, international policy cooperation, and cutting-edge technology. They’ve built a blockchain-based MRV (Monitoring, Reporting, Verification) system to ensure transparency and reliability of carbon data. They are already running IoT-based data security infrastructure and carbon reward mechanisms.
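As a hedged sketch of how an MRV system might anchor many reduction records cheaply (the actual VERYWORDS design is not described here), the example below batches per-period records under a single Merkle root, so only one hash per reporting period needs to be written on-chain while every underlying record remains tamper-evident. Record contents are placeholders.

```python
# Sketch: batching MRV records under one Merkle root per reporting period
# (illustrative; not the VERYWORDS system's actual design).

import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list) -> bytes:
    """Compute a Merkle root over a non-empty list of record byte-strings."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


period_records = [b"moto-0001:412kg", b"moto-0002:388kg", b"moto-0003:405kg"]  # placeholders
root = merkle_root(period_records)              # this root would be anchored on-chain
```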
Through partnerships with the Cambodian government and various regional institutions, VERYWORDS has secured key footholds in international climate mitigation — not as experiments, but as operational models already in motion.
From technology user to ecosystem operator — a partnership grounded in shared trust
Metadium has already provided blockchain technology for VERYWORDS’ platform, helping ensure integrity in carbon verification and data transparency. This new node partnership marks an evolution in that relationship — a step toward greater autonomy, shared responsibility, and deeper integration.
A node is not just a server; it is a critical part of the blockchain’s backbone, helping maintain trust, security, and decentralization. Becoming a node partner means becoming an active steward of the network itself.
VERYWORDS will now participate directly in the data creation and validation process within the Metadium blockchain, assuming the role of trust operator in a broader ecosystem of transparency and accountability.
At the intersection of climate action and Web3
Climate change is one of the most complex, far-reaching challenges humanity faces. When applied thoughtfully, blockchain offers one of the most powerful tools to address it: a tamper-proof, transparent, verifiable system of record.
VERYWORDS is a leading example of applying this technology beyond borders — creating tangible momentum in the global fight against climate change.
Their role as a Metadium Node Partner further strengthens the trust architecture behind that work.
To those building this ecosystem with us
Metadium is an open-source blockchain project focused on decentralized identity (DID) and trust infrastructure. But ecosystems are not built on code alone. They are built by those who share the responsibility to run them, those who apply them meaningfully, and those who have the vision — and grit — to bring it into the real world.
VERYWORDS is one of those partners.
Metadium x VERYWORDS
Where sustainability meets technology, and trust supports real-world climate action — we now stand on the same node.
VERYWORDS Joins the Metadium Node Partner Network was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.
TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE
Artificial inflation of real estate prices for decades caused the global financial crisis.
We propose converting the global system from a debt-backed to an equity-backed model to solve it.
We propose using AI to manage the diligence work, and using the blockchain to handle the share registers and other obligations.
By Vinay Gupta (CEO Mattereum)
with economics and policy support from Matthew Latham of Bradshaw Advisory
and art and capitalism analysis from A.P. Clarke
Back when I was in primary school in the 1970s, they suddenly started teaching us binary arithmetic; why?
Well, because they could see that computers were a coming thing and, of course, we’d all have to know binary to program them. So, The Digital has been a rolling wave for pretty much all my life — that was an early ripple, but it continued to build relentlessly, and when it truly started surging forwards in the 1990s, it began to transform everything in its path. Some things it washed away entirely, some things floated on it, but everywhere it went, digital technology transformed things. Sometimes the result is amazing, sometimes it is disastrous.
The wave was sometimes visible, but sometimes invisible. You could see barcodes arrive, and replace price stickers.
I’m just a bit too young to remember when UK money was pounds, shillings and pence. In 1970 there were two hundred and forty pence to the pound.
Through the 1970s and 1980s the introduction of barcodes on goods was a fundamental change in retail, not just because it changed how prices were communicated in stores, but because it enabled a flow of real-time information about the sale of goods to the central computer systems managing logistics and money in the big stores’ supply chains.
Before the bar code, every store used to put the price on every object with a little sticker gun. Changing prices meant redoing all the stickers. Pricing was analogue.
In many ways decimalization and barcoding marked the end of the British medieval period. We still buy and sell real estate pretty much the same way we did in 1970.
Monty Python and the Holy Grail, 1975
The sword has two edges
When you get digitization wrong, the downsides tend to be much larger than the upsides. It’s all very well to “move fast and break things” but the hard work is in replacing the broken thing with something that works better. It’s not a given that better systems will emerge by smashing up the old order, but the digital pioneers were young, and it seems obvious to young people that literally anything would be better than the old people’s systems. This is particularly true in America, which, being founded by revolutionaries, lacks a truly conservative tradition: in America, what is conserved, what people have nostalgia for, is revolution itself.
That makes change a constant, and this is both America’s greatest strength, and weakness. The only thing you can get people interested in is a revolution. Nobody cares about incrementally improving business-as-usual. Everybody acts like they have nothing to lose at all times.
This winner-takes-all, nothing-held-back attitude exemplified by “move fast and break things” has become the house style of digitization.
But as a result a lot of things are broken these days.
Jonathan Taplin - Move Fast and Break Things
Wikipedia turned out pretty well
You can get a decent enough answer for most purposes from Wikipedia: it’s free, it’s community generated, there are no ads, it doesn’t enshittify (yet), and you do not need to spend a fortune on a shelf full of instantly out-of-date paper encyclopedias. Most people would agree this is “digitization done right.”
Spotify, not so much; it wrecked musicians’ livelihoods, turned music listening into a coercive hellscape of ‘curated’ playlists, and is on course to overwhelm actual human-created music with AI-produced digital soundalike slop that will do its best to kill imaginative new leaps in music — no AI without centuries of history and culture built up in its digital soul could come up with anything like the extraordinary stuff Uganda’s Nyege Nyege Tapes label finds, for example — and you pay for it or get hosed with ads. It was, after all, always intended to be an advertising platform that tempted audiences in with streamed music.
Nobody ever stopped to ask how the musicians were doing.
Of course for the listeners — the consumers of music-content-product — the experience was initially utopian. People used to talk about the celestial jukebox, the everything-machine, and for a while $9.99 a month got you that. The first phase was utopian, for everybody except the musicians, the creators of music: they had a better financial deal when they got their own CDs pressed and sold them at shows. Seriously, musicians look back at that time as the good old days. Digitization went badly wrong for music.
How Spotify is stealing from small indie artists, why it matters, and what to do about it
It’s not just the software that can be disastrous. Take massive data centres: not only do they cover vast areas and consume ludicrous amounts of energy, they are extremely vulnerable to disaster. The Korean government just lost the best part of a petabyte of data when one went up in smoke — the backup was in the same place, it seems.
Then there’s a contagious Bluetooth hack of humanoid robots that has just come to light. You can infect one of the robots, and then over Bluetooth, it can infect the other robots, until you have a swarm of compromised humanoid robots, and Elon Musk says he’s going to produce something like 500,000 of these things.
We always thought Skynet would be some sinister defence company AI, but it turns out that basically it’s just going to be 4Chan’s version of ChatGPT — and it’s not like there isn’t plenty of dodgy, abrasive internet culture in the training data already!
This is the digital crisis: it inevitably hits field after field, but whether what emerges at the end is a winner or a disaster is completely unpredictable.
Will it lead to a Wikipedia, or a Spotify, something that’s just sort of OK, like Netflix, or something deeply weird and sinister like those hacked robots? Did Linux save the world? Will it?
Why is there such a range in outcomes for a process whose arrival is so predictable? Because the Powers-That-Be that might steer the transition and come up with an adequate response (the nation states) are really poor at digital. Nation states move too slowly, they fundamentally fail to understand the digital, and their mechanisms just haven’t caught up; they suck at digital at a very primordial level, so the result of any digital crisis requiring state intervention is a worse crisis.
That’s not to say that any of the possible alternatives to nation states show any sign of doing this better — that’s part of the problem.
Whoever is doing the digitizing during the critical period for each industry has outsized freedom to shape how the digitization process plays out.
4chan faces UK ban after refusing to pay 'stupid' fine
Move fast and break democracy
Eventually “move fast and break things” took over in California and beyond, and crypto (the political faction, the industry, the ideology!) identified the fastest moving object in the political space (Trump-style MAGA Republicanism) and backed it to the hilt.
The American Libertarian branch of the crypto world is now trying to build out the rest of their new political model without a real grasp of how politics worked before they got interested in it. The crypto SuperPACs and associated movements threw money at electing a team who would accommodate them, in the process destroying the old American political mode without, perhaps, much concern about what else they might do once in power.
There’s a whole bunch of “break things phase” activity emerging from this right now.
Unprecedented Big Money Surge for Super PAC Tied to Trump
The “break things” part of “move fast and break things” has a very constrained downside in a corporation. Governments are a lot more dangerous to tinker with.
The Silicon Valley venture capital ecosystem is itself a relic, a legacy system. Dating back to the 1950s American boom times, Silicon Valley is having an increasingly hard time generating revenue, and today its insularity and short-sightedness are legendary. There is a lot of need for innovation, and there’s no good way to fund most of it. Keeping innovation running needs a new generation of financial instruments (remember the 2018 ICO craze?) but instead we’re stuck with Series funding models.
Funding the future is now a legacy industry.
Series A, B, C, D, and E Funding: How It Works
It still isn’t fully appreciated that today’s political crisis is, to a significant extent, the result of Silicon Valley failing to integrate into the old American political mode. For decades Silicon Valley struggled to find a voice in Washington, or to figure out whether the right wing or the left wing was its natural home. Meanwhile life got worse and worse in California because of a set of frozen political conflicts and bad compromises nobody seemed to be able to fix. The situation slowly escalated, but the problem in Silicon Valley was always real estate.
How Proposition 13 Broke California Housing Politics
Digital real estate is a huge global gamble
The digital crisis is just about to collide with one of society’s other major crises — the housing crisis.
We have problems, globally, with real estate. We don’t seem to be able to build enough of it, and nobody seems to be able to afford it, largely because it’s being used as an asset class by finance instead of being treated as a basic human need. Real estate availability and real estate bubbles are horrendous problems.
Now the hedge funds are moving in to further financialize the sector, at the same time as people can’t seem to afford enough housing to raise kids in.
This has been steadily getting worse since Thatcher and Reagan in the late 70s/early 80s. Once, one person in work could comfortably buy a house and support a family, then it became necessary for two people to work to do that, now it’s slipping beyond the grasp of even two people, and renting is no cheaper; renters are just people who haven’t got a deposit together for a mortgage, so are paying someone else’s and coming out the end with nothing to show for it. It’s a mess, and then we’re going to come along and we’re going to digitize real estate. What could possibly go wrong?
Well, if we don’t deal with this as being an aspect of a much larger crisis, we will be rolling the dice on whether we like the outcome we get from digitization of real estate. Things are really bad already, and bad digitization could make them so much worse. But, as is the nature of the digital crisis, it could also make them better, and it is up to us, while things are still up in the air, to make sure that this is what happens.
The initial skirmishes around digitization of real estate have mostly been messy: the poster children are Airbnb and Booking, both of which enjoy near-monopoly status and externalize a range of costs onto the general public, while usually offering seamless convenience to renters and guests. But when things go wrong and an apartment gets trashed or a hotel is severely substandard, people are often left out in the cold, dealing with a corporation so large it might as well be a government. And this is, indeed, usually how the Nation State as an institution has handled digital.
Corporations the size of governments negotiate using mechanisms that look more like international treaties than contracts, and they increasingly wield powers previously reserved to the State itself. It’s not a great way to handle a customer service dispute on an apartment.
Neoreaction (NRx) and all the rest of it simply want to ratify this arrangement and create a permanent digital aristocracy as a layer above the democracy of the (post-)industrial nation states: the directors and owners of corporations treated as above the law.
Inside the New Right, Where Peter Thiel Is Placing His Biggest Bets
Economic stratification and political complexity
One reason we aren’t dealing adequately with these crises is that the very existence of many of them is buried by an increase in the variance of outcomes. It used to be that people operated within a fairly narrow bandwidth. The standard deviation of your life expectations was relatively narrow, barring things like wars. Now what we have is an incredibly broad bimodal, even trimodal, distribution. A chunk of people manage to stay in the average, a tiny number of people wind up as billionaires, and then maybe 20% of the society gets shoved into various kinds of gutters. In America, it’s medical bankruptcy, it’s homelessness, it’s the opioid epidemic, it’s being abducted by ICE, those kinds of things.
What we’ve done is create a much wider range of possible outcomes, and a lot of those outcomes are bad, but the average still looks kind of acceptable — the people at the top end of that spectrum are throwing off the averages for the entire rest of the distribution.
Ten facts about wealth inequality in the USA - LSE Inequalities
In fact, generally speaking, on the streets things repeatedly approach the point of revolution as various groups boil over. If they all boil over at the same time, that’s it, game over, new regime.
We’re in a position where we’ve managed to create a much more free society with a much wider range of possible outcomes, however, the bad outcomes are very severe and often masked by the glitzy media circus around the people enjoying the good outcomes. Good outcomes are being disproportionately controlled by a tiny politically dangerous minority at the top, but as these are the ones making the rules, trying to correct the balance is super difficult.
Democracy as we knew it was rooted in economic democracy, and nothing is further from economic democracy than robots, AI, and massive underemployment. Political democracy without economic democracy is unstable and only gives the lucky rich short term benefits; they are gambling on being able to constantly surf the instabilities to keep ahead of the game, continuing to reap those benefits while palming the externalities off on everyone else. But that can’t be done; eventually someone gets something wrong and the whole lot hits the wall in financial crashes, riot, revolution, and no one gets a good outcome. It all ends up like Brazil if you’re lucky and Haiti if you’re not.
The combination of extreme wealth gaps and democracy cannot be stabilized, and increasingly the rich are looking at democracy as a problem to be solved, rather than the solution it once was. I cannot tell you how bad this is.
Yet the benefits of technology are all around us, increasingly so. Democracy tends towards the constant redistribution of those benefits through taxation-and-subsidy. To fight against being redistributed, the billionaires are rapidly moving towards a post-democratic model of political power. The general human need for access to a safe and stable future seems to be less and less a stated goal for any political faction. This is getting messy.
Today, middle of the road democratic redistribution sounds like communism, but it’s not; it just sounds like that because the current version of capitalism is so distorted and out of whack. American capitalism used to function much more like Scandinavian capitalism, a version of capitalism that gives everyone a reasonable bit of the pie, with a strong focus on social cohesion. Within that model, the slice may vary considerably in size, but it should allow even those at the lower end safe and dignified lives. Weirdly enough the only large country running a successful 1950s/1960s “rapid economic growth with reasonable redistribution of wealth” model of capitalism today is China.
Fractocrises and magic bullets
In 2016 there was a little dog with a cup of coffee who reflected back the feeling that the world had gone out of control and nobody cared.
In 2016. Sixteen. No covid. Not much AI. Little war. But still the pressure.
https://www.nytimes.com/2016/08/06/arts/this-is-fine-meme-dog-fire.html
Understandably, some very smart people are pursuing the concept of polycrisis as a response to the many arms of chaos.
https://x.com/70sBachchan/status/1723103050116763804
Deal with the crises in silos and this mess is the result.
The impulse towards polycrisis as a model is understandable, but it’s a path we know leads to a very particular kind of nowhere. It leads to Powerpoint.
https://www.nytimes.com/2010/04/27/world/27powerpoint.html
In truth, crises are fractal. They are self-similar across levels. The chains of cause-and-effect which spider across the policy landscape in impenetrable webs are produced by a relatively small number of repeating patterns.
“Follow the money”, for example, almost always cuts through polycrisis and replaces the complexity of the situation with a small number of actors who are above the law.
To use a medical analogy, a patient can present with a devastating array of systemic failures driven by a single root cause. Consider someone suffering from dehydration — blood pressure is way down, kidneys are failing, maybe 40 different systems are going seriously wrong. Treat them individually and the patient will just die.
Step back and realise “Oh, this patient is dehydrated!”, give them water and rehydration salts and appropriate care and all the problems are solved at once.
Or maybe it’s reintroducing wolves to Yellowstone Park; suddenly the rivers work better, there are more trees, insect pests decline, because one big key change ramifies through the system and brings about a whole load of unanticipated benefits downstream. Systems have systemic health. Systems also have systemic decline. But the complex systems / “polycrisis” analysts focus entirely on how failing systems interact to produce faster failure in other failing systems, effectively documenting decline, and carry around phrases like “there is no magic bullet.”
There is. The magic bullet for dehydration is water.
Finding the magic bullets is medicine; documenting systemic collapses is merely biology.
REHYDRATING THE AMERICAN DREAM
The dollar is a dead man walking — there is no way to stabilise the dollar in the current climate. It is holed below the waterline but the general public has only the very earliest awareness of this problem today. By the time they all know there will be no more dollar. Perhaps the entire fiat system is in terminal decline as a result: if the dollar hyperinflates, or dies in some other way, will it take the Pound and the Euro and the Yen with it? Who could have foreseen this?
Frankly, in 2008, following the Great Financial Crisis, everybody knew.
https://theconversation.com/as-uk-inflation-falls-to-2-3-heres-what-it-could-mean-for-wages-230563
There is a long term macro trend of fiat devaluation. There is also the acute fallout of the 2008 catastrophe. We have a fundamental problem: the 1971 adoption of the fiat currency system (over the gold standard) is not working. The dysfunction of the fiat system detonated in 2008. We have now had 17 years of negotiations with the facts of the matter, but so far, no solutions.
Well, other than this one…
All the fiat economies are carrying tons of debt, crazy unsustainable amounts of debt, both personal and national. It could well be that a lot of smart people are thinking that a “great reset” of some kind would solve a lot of problems simultaneously.
The nature of that “great reset” is going to determine whether your children live as slaves, or live at all.
So the global approach to currency needs overhauling as part of a more general effort to make the political economy stabilize in an age of exponential change.
It is not the first time that it has been done even within living memory.
Bowie released “The Man Who Sold The World” while the dollar was still backed by physical gold. This is not ancient history. It’s not at all irrational to think that 6000 year old human norms about handing over shiny bits of metal for food might need to be updated for the world we are in today. But it’s also not too late to adjust our models and fine-tune the experiment.
Globally issued non-state fiat, like Bitcoin, is just not going to get you the society that you want, unless the society you want is an aristocratic oligarchy. Bitcoin is just a different kind of fiat — money that only exists because someone says it’s money and enough people go along with it, rather than money based on something that has intrinsic value itself. It has the same problem as fiat currency has: there is no way to accurately vary the amount of money to meet the demand for money to keep the price of money stable. Purchasing power is always going to be unpredictable and that makes long term economic forecasting difficult for workers and governments alike.
Governments print too much. Bitcoin prints too little, particularly this late in the Halving Cycle.
Understanding the Bitcoin Halving Cycle and Its Impact on 2025 Market Trends
The problem that purchasing power fluctuations pose for estimating long term infrastructure project economics has huge impacts too: if you can’t accurately predict the future, you can’t finance infrastructure. You can’t plan for pensions. The great wheel of civilization grinds to a halt as short-termism eats the seed corn of society. Nobody wants to make a 30 year bet because of robots and AI and all the rest, and so we wind up ruled quarter by quarter with occasional 4 year elections.
Not dollars, not Bitcoin
The debates about what money should be are not new.
Broadly, there are three models for currency:
(1) government fiat/national fiat — fine in principle but, in practice, in nearly all highly democratic societies the governments wind up inflating away their own currencies over time
(2) global fiat issued on blockchains — Ethereum, Bitcoin, all the rest of those things
(3) resource-backed currencies — conventionally that means gold but it can also apply to things like Timebanking and various mutual credit systems
Gold is already massively liquid. You cannot solve a global crisis by making gold 40 times more valuable than it currently is because it becomes the backing for all currencies again. Gold is also very unequally distributed: Asian women famously collect the stuff in the form of jewellery and a shift to a new gold standard could easily make India one of the wealthiest countries in the world again, women first. Much as this sounds like a delightful outcome, it’s hard to imagine a new economic order ruled by now-very-wealthy-indeed middle class Indian housewives who had a couple of generations to build up a solid pile of bangles.
This, by the way, is the same argument against hyperbitcoinization — being on the Silk Road in 2011 and buying illegal substances using bitcoin is not the same thing as being good at productive business or being a skilled capital allocator: windfalls based on a social choice about currency systems are not a sensible way to allocate wealth, although it does often happen.
Hyperbitcoinization Explained - Bitcoin Magazine
You can argue that bitcoin mining requires a ton of expertise and technological capacity, and that this is worthy of economic reward, but there is a fundamental limit to how many kilograms of gold you can rationally expect to pull out of a data center running 15 year old open source software.
Similarly, the areas which were geographically blessed (or is that cursed?) by gold would wind up with a huge economic uplift. It becomes a question of geological roulette whether you have gold or not, and unlike the coal and oil and iron and uranium lotteries, nobody can build anything using gold as an energy source or a tool. Gold is just money. It’s like an inheritance.
So what’s the alternative? Bitcoin scarcity, gold scarcity, these are all models in which early owners of the asset do very well when the asset class is selected for the backing of the new system. Needless to say those asset owners are locked in a very significant geostrategic power struggle for the right to define the next system of the world. They are all bastards.
Strange women lying in ponds distributing swords is no basis for a system of government...
But what if we move to something that is genuinely, fundamentally useful? Well, what about land? You’re much more likely to get a world that works if you rebase the global currencies on real estate in a way that causes homes to get built, than if you rebase the world’s currencies on non-state fiat.
Both sides of this equation must balance. If we simply lock the amount of real estate in the game, then (figure out how to) use it as currency we wind up with another inflexible monetary supply problem. Might as well use Bitcoin or Gold. We’ve been down this track: we did not like it, and in 1971 we changed course permanently.
Real estate could be “the new gold” but real estate has flexible supply because you can always build more housing.
If the law permits.
And if we can solve that problem, the incentives align in a new way: building housing increases the money supply. If house prices are rising too fast, build more housing.
Bryan's Cross of Gold and the Partisan Battle over Economic Policy | Miller Center
Artificially scarce real estate is the gold of today
We’ve been manipulating real estate prices for a few generations.
The data is screamingly clear, and it is evidence of pervasive market manipulation: housing is not hard to physically build, but there has been a massive, concerted effort to keep it expensive through bureaucratic limits on supply. There are entire nation states dedicated to this cause.
The exceptions to this rule look like revolutionary actions.
Consider Austin, Texas which saw its real economic growth and potential status as The Next Great Californian City threatened by a San Francisco style house price explosion. Austin responded with a massive building wave, and managed to rapidly stabilize house prices at a more sustainable level.
Some reports say that >50% of Silicon Valley investor money eventually winds up in the pockets of landlords.
Peter Thiel: Majority of capital poured into SV startups goes to 'urban slumlords'
The way out is to build housing, and a lot of it.
But not like this.
Digital finance has to build more real estate to win
At the root of everything: the digitization of real estate has to result in more real estate being built. If the next system does not give average people a better outcome than the current system, there is going to be real trouble: state failures or the violent end of capitalism.
First and above all, this means we need to build more real estate.
Building has been artificially restricted because to make it work as an investment that increases in value, there needs to be scarcity; if you build more its investment value goes down, but its utility value increases.
One way to digitize real estate is to create currencies backed by real estate, but the logical outcome of this is to make real estate as scarce as possible to protect the value of the currency, which is a disaster for the people who actually need to live somewhere. It would be like a society where mining gold is illegal because the value of the gold supply has to be protected, except we are doing this for homes. We are here now, and we could make this disaster worse.
In truth, if we take that path, we are fucked beyond all human belief. We will have literally immanentized the eschaton. You basically wind up with the economic siege of the young by the old, and that is a powder keg waiting to blow. State failures and violent revolutions.
The 2008 crisis was triggered by over-valuing real estate (underpricing the risk, to be precise) on gigantic over-simplified financial instruments like mortgage-backed securities, literally gigantic bundles of mortgages with a fake estimate about how many of the people taking out those mortgages could afford them in the long run. The global economic slowdown triggered by the US-led invasion of Afghanistan and Iraq (don’t even get me started) hit the mortgage payers, and the risk concentrated in markets like “subprime mortgages” and the credit default swaps which were being used to hedge those (and other) risks.
Credit default swap - Wikipedia
The digital crisis, when it hits real estate, could make 2008 look like the boom of the early 90s. However we choose to tokenize real estate, it has to result in more homes getting built.
You cannot use real estate as the backend for stablecoins, then limit the supply of real estate in a way that causes prices to continually go up. That paradigm is what has caused the current real estate crisis. It’s been destroying our societies in America and Europe for decades, so it’s not going to solve the crisis it has caused.
This is largely downwind of Thatcher and Reagan and financial deregulation on one hand, paired with promises to control inflation over the long run (we’re talking decades). This was the core promise made by the Conservatives: inflation will stay low forever. We will not print money.
Once that promise was in place it was possible to have low interest rates and long mortgages, meaning the working class could afford to buy housing. They called this model the Ownership Society.
The ownership society (and associated models) was an attempt to change the incentives for poor voters so they would not use democracy to take control of the government and vote money from the rich into their own pockets.
What we’ve done is we’ve basically bribed an entire generation (the boomers) with that model, and now we’re at the point where they have no grandchildren and the entire thing is collapsing because housing is a much worse kind of bitcoin than bitcoin. Expensive bitcoin makes bitcoin hard to buy. Expensive housing devastates entire societies. And that’s where we are today.
The solution to all of these ills is to solve these crises at a fundamental level. The patient is dehydrated. The patient needs water. Affordable housing.
This is why you’ve got to focus on outcomes for average people: in any crisis you can find a minority of people who are thriving. Those people are useless for diagnosing the cause of the crisis. You have to look at the losers to understand why the system is broken.
The rent is too damn high.
The patient needs water not antibiotics
If we fix the housing part of this digitization crisis correctly, the results are going to be amazing. That could be the one big change that propagates through the entire financial system and brings back the balance.
Essentially, what works is not backing a currency with real estate, then manipulating the real estate supply to prop it up. What works, we believe, is being able to use land directly as a kind of currency. This is not in the current sense of taking out a loan with the land as collateral, but instead using it directly as money without ever having to dip into any kind of fiat; no need to turn anything into dollars to be able to trade things. Why would I pay interest on a loan against my collateral if I can simply pay for something using my collateral directly?
If we digitize real estate properly, the reward is that we could potentially use tokenized real estate to stabilize the financial system. Regulatory friction is keeping real estate, by far the world’s largest asset class, illiquid in a world which desperately needs liquidity. But there is also a very hard problem in automating the valuation of real estate, and that is going to need AI.
When something is digitized it is inevitably an approximation, and the consequences of that approximation are much larger in some areas than others. With real estate, when we buy and sell we’re constantly in a position where we are dealing with the gap between the written documentation of the real estate and the actual value of the asset. As a result, you wind up with another kind of digitization crisis, one caused by the gap between the digital representation of the object and the object itself.
Using current systems, the liability pathways attached to misleading information in a data set being used to value assets would normally be revealed during legal discovery. If the problem is worth less than tens of millions it’s never going to be found out. If the problem is worth tens or hundreds of billions, it’s now too late. A lot slips through the gaps, historically speaking. And this is only going to get worse now that sellers have started to fake listings using AI.
Realtors Are Using AI Images of Homes They're Selling. Comparing Them to the Real Thing Will Make You Mad as Hell
Real estate listing gaffe exposes widespread use of AI in Australian industry - and potential risks

This information-valuation-risk nexus creates friction; to get real estate digitization to work we need to eradicate that friction, and keep fake listings out of the system. This challenge is only going to get harder.
Real estate is a safer fix for the currency crisis
“Without revolution” is a feature, not a bug.
Vitally, unlike gold or bitcoin, the distribution of land and real estate ownership is close to the current distribution of people’s wealth: a shift to a real estate based economic model would not have the same gigantic and disruptive impacts as moving to either gold or bitcoin or both. There is enough value there too: $400 trillion of real estate, versus $30 trillion of gold or only $38 trillion of US national debt. Global GDP is a bit over $100 trillion.
There is enough real estate, correctly deployed, to create a stable global medium of exchange.
The valuation problem has meant that previously the transactional costs of pricing real estate as collateral were insane. Instead of doing the hard work, pricing the real estate, financial institutions priced the mortgages on the real estate using simplistic models. The 2008-era financial system simply treated the mortgage as a promise to pay, without evaluating whether the person who was supposed to pay had a job, or if anybody was willing to buy the underlying asset which was meant to be backing the mortgage. A thing is worth what you can sell it for!
Shocking Headlines of the 2008 Financial Crisis - CLIPPING CHAINS
You would think somebody was minding the store, but you only need to look at the post-2008 shambles to realize not only is there nobody minding the store, the store itself burned down some time ago. In fact the global financial system is a system in name only: it’s more like a hybrid of a memecoin economy, a Schelling point, a set of real economic flows of oil and machine tools and microprocessors, and big old books of nuclear strategy and doctrine. The global “system” is a bunch of bolted-together game boards with innumerable weird pieces, held together by massive external pressures and gradually collapsing because the stress points between different complex systems are beyond human comprehension. Environmental economics, for example. Or the energy policy / national security interface. AI and everything. The complexity overwhelms understanding and the system degrades.
It does not have to be this way.
When you have AI to price complex collateral like real estate (or running businesses), you can do things with that collateral that you couldn’t do previously. Of course that AI system needs trustworthy inputs. If the information coming into the system is factual, and the AI is an objective analyst, various parties can use their own AI systems to do the pricing without human intervention, so the trade friction plummets. Remember too these are competitive systems: players with better AI pricing models will beat out players with less effective price estimation, and that continuous competition will keep the markets honest, at least for a while.
Mattereum Asset Passports can provide the trustworthy inputs, again based on an extremely competitive model to price the risk of bad information getting into the Mattereum Asset Passport which is being used by the AI system to price the asset. The economic model we use was built from the ground up to price every substantial asset in the world even in an environment with the pervasive use of AI systems to manufacture fake documents and perpetrate fraud. We literally built it for these times, but we started in 2017. That’s futurism for you!
The economic mechanism of the Mattereum Asset Passport is a thing of beauty. The way that it works is that data about an asset is broken up into a series of claims. For example, for a gold bar, weight, purity, provenance, vaulting and delivery details are likely enough to price the bar. For an apartment there might be 70 claims, including video walk-throughs of the space and third-party insurance covering issues like title or flood. Every piece of information in the portfolio is tied to a competitively priced warranty: buyers will rationally select the least expensive adequate warranty on each piece of data. This keeps warranty prices down. This process is a strain with humans in the loop for every single decision, but in an agentic AI economy this competitive “Product Information Market” model is by far the best way of arriving at a stable on-chain truth about matters of objective fact.
It’s not that the system drives out error: it does, but the point is that it accurately prices the risk of error which is a much more fundamental economic process. This is a subtle point.
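As a rough illustration of that mechanism (the claim and warranty names, the "adequacy" rule, and the figures here are hypothetical, not Mattereum's actual data model or API), selecting the least expensive adequate warranty on each claim and summing the premiums is essentially how the risk of error gets a price:

```python
# Illustrative sketch only: hypothetical names and values, not Mattereum's implementation.
from dataclasses import dataclass

@dataclass
class Warranty:
    provider: str
    coverage: float        # payout cap if the warranted claim turns out to be false
    annual_premium: float

@dataclass
class Claim:
    statement: str                 # e.g. "weight: 1000g", "provenance: refinery X"
    warranties: list[Warranty]     # competing warranties offered on this claim

def cheapest_adequate(claim: Claim, required_coverage: float) -> Warranty:
    """Pick the least expensive warranty whose coverage meets the buyer's bar."""
    adequate = [w for w in claim.warranties if w.coverage >= required_coverage]
    if not adequate:
        raise ValueError(f"No adequate warranty for claim: {claim.statement}")
    return min(adequate, key=lambda w: w.annual_premium)

# A toy "passport" for a gold bar: two claims, each with competing warranties.
passport = [
    Claim("weight: 1000g", [Warranty("A", 60_000, 120.0), Warranty("B", 80_000, 95.0)]),
    Claim("provenance: refinery X", [Warranty("A", 60_000, 210.0), Warranty("C", 70_000, 180.0)]),
]

selection = {c.statement: cheapest_adequate(c, required_coverage=50_000) for c in passport}
risk_cost = sum(w.annual_premium for w in selection.values())
print(f"Annual cost of pricing the risk of error: {risk_cost:.2f}")
```

The competitive pressure lives in the `min` call: any warranty provider who overprices an adequate warranty simply never gets selected.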
Bringing Truth to Market with Trust Communities & Product Information Markets
The combination of AI to commit real estate fraud and Zcash and similar technologies to launder the money stolen in those frauds is going to be unstoppable without really good, competitive models for pricing and then eliminating risk on transactions. The alternatives are pretty unthinkable.
In this new model, I come to you with a token which says, based on the Mattereum Asset Passport data, that this property is worth $380,000. I can then say: I will pay you 20% of this property in return for a car, and there’s the transaction. You take 20% of a piece of real estate, I take an SUV. Maybe you can require me to buy back a chunk of that equity every month (a put option). Maybe the equity is pulled into an enormous sovereign wealth fund type apparatus which uses the pool to back standard stable tokens backed by fractions of all the real estate in the country. The story may begin with correctly priced collateral, but it does not end with correctly priced collateral. This is the anchor, but it is only one part of a system.
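A minimal sketch of the exchange just described, using the figures from the example above (the structure and names are illustrative only, not Mattereum's implementation): a fractional equity transfer pays for the car, and an optional monthly buy-back stands in for the put.

```python
# Illustrative only: figures from the example above, structure hypothetical.
from dataclasses import dataclass, field

@dataclass
class PropertyEquity:
    valuation: float                                         # AI-assisted valuation from the asset's data
    holders: dict[str, float] = field(default_factory=dict)  # holder -> fraction, sums to 1.0

    def transfer(self, seller: str, buyer: str, fraction: float) -> float:
        """Move a fraction of the equity; return the fiat-equivalent value moved."""
        assert self.holders.get(seller, 0.0) >= fraction, "seller lacks that much equity"
        self.holders[seller] -= fraction
        self.holders[buyer] = self.holders.get(buyer, 0.0) + fraction
        return fraction * self.valuation

home = PropertyEquity(valuation=380_000, holders={"owner": 1.0})

# Pay for an SUV with 20% of the property instead of cash.
paid = home.transfer("owner", "dealer", 0.20)        # about $76,000 of equity changes hands

# Optional put: owner commits to buy back 1% per month at the prevailing valuation.
monthly_buyback_fraction = 0.01
monthly_cost = monthly_buyback_fraction * home.valuation
print(paid, monthly_cost)                            # 76000.0 3800.0
```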
If we get it right — and it’s a lot of moving parts — we could get out of the awful shadow of not only 2008’s financial crisis, but the calamitous changes to the global system which emerged from 1971.
The pragmatics of making real estate liquidAs long as you’ve got the ability to do relative pricing based on AI analysis, you don’t need to convert everything into currency to use it in trade. If you have an AI that can do the relative valuations, including risk and uncertainty, you can reach a position where you don’t have to use fiat money to make a fair exchange between different items, like land and cars, or apartments and antique furniture, or factories and farms; there are a whole set of AI-based value-estimation mechanisms that can be used for doing that and produce a fair outcome.
This cuts down or eliminates the valuation problems which can be caused by any kind of fiat — be it government fiat like the dollar, or private fiat like bitcoin — making it possible to operate on tokenized land — tokens based on an asset that is inherently dramatically more stable and inherently non-volatile. Solid assets back transactions. Closer to gold, but more widely distributed.
It’s a big story. But at its simplest what if we just said… “look, this is a global currency crisis. And the reason we’re in that crisis is artificial inflation. Real estate prices. Take the inflated real estate and the debt associated with the real estate transform it into equity, you know, debt to equity transformation…” and we restart the game on a sounder basis.
Who can follow along with that tune?
If you tokenize half, or even a third, of the real estate, what that provides is a staggeringly enormous pool of assets which move from being illiquid to liquid and that liquidity — widely distributed in the hands of ordinary people by virtue of them already owning these properties — then bails out the rest of the system. The conversion of mortgage debt into shared ownership arrangements, as mortgage lenders take equity rather than facing huge waves of defaults (again), balances the books without requiring huge government bailouts and money printing as in 2008. Homeowners do not hit the sheer logistical nightmares of moving house (particularly in old age) nor do they have to borrow money from lenders by remortgaging, creating more debt.
Rather than attaching debt to the real estate, we simply add a cap table to the real estate as if it was a tiny little company, and then let the owners sell or exchange some of that equity for whatever they want.
It’s a relatively small change to established norms, with massive, outsized benefits.
The key benefit of this approach is precisely that it is non-revolutionary. Compare the social stresses between this approach and doing that rescue process by massively pumping the price of (say) Bitcoin. In the hyperbitcoinization model you wind up with massive, massive, massive class war because you have people that were cryptocurrency nerds who are now worth half a trillion. You can’t have that kind of transfer of power without the system trying to engineer around it. Same thing happens with gold at $38,000 an ounce. The shift in wealth distribution is too violent for society to survive the transitional processes.
But making real estate truly liquid gives the economy the flexibility it desperately needs, probably without wrecking the world in the process.
Turning real estate debt into real estate equity and then making the equity tradable is not a new trick in finance: large scale real estate finance projects do things like this all the time. We’re just using established techniques from corporate finance at a much smaller scale, on a house-by-house basis, to safely manage the otherwise unmanageable real estate bubble. If every piece of real estate in America had the ability to do tokenized equity release built into the title deeds, America would not have solvency problems.
Pricing debt which does not default is relatively easy, and prior to 2008 the global system sought stability by pricing debt as if it would not default. This looks like a joke now. But pricing defaults on debt is very, very hard because the global economy is just a part of a much larger unitary interlinked system, and factors from beyond the view of spreadsheets can cause the world to move: covid, most recently. Such correlated risks change everything and are inherently unpredictable. Debt-based economies carry such risks poorly. Equity is a much better instrument for handling risk, but we have over-restricted its use, and are paying the price (literally) for this societal-scale error of judgement.
Debt cannot do what equity can, and we have too much debt and not enough equity.
Pricing complex and diverse assets like real estate is orders of magnitude harder than pricing good debt. Fortunately we now have the computer.
Flipping us from a debt world to an equity world needs a competitive AI environment to value the assets, and the blockchain to make issuing and transferring equity in those assets manageable.
That’s what’s needed to start clearing up the gridlocked debt obligation nightmare.
It’s not that hard to imagine, if you could tokenize one house, you could tokenize all of them. If you think of it as the debt to equity transformation for all of the mortgage debt, and then you pull the mortgage debt back out of the American system because you turn it into equity and then you allocate it to the banks, you could actually make America liquid again much faster.
It is an extreme manoeuvre, but the question is, as always, “compared to what?”
At the end of that we’d be left with a very different real estate ownership model, more like the Australian strata title or English Commonhold model. In both of these instances, aspects of a real estate title deed are split between multiple owners (the “freehold” is fractional) forming what amounts to an implicit corporate structure within every real estate title deed.
Imagine that, but scaled.
Strata title - Wikipedia
Commonhold - Wikipedia

So practical government fought dirty for years
Business is pretty good at change once government gets out of the way.
Once tokenized equity is clearly regulated in America, business will figure out real estate tokenization very fast. We could see 5,000 companies in America that are capable of doing real estate tokenization five years after the SEC says it’s okay to do it.
Business will create competing industrial machines that will effect the transformation, and get huge numbers of people out of the debt. Shared equity arrangements for housing could rebalance the economy without crashing society. The speed at which society can get the assets on chain is equal to how quickly finance can satisfactorily document them and fractionalize them.
What is a plausible documentation standard for a real world asset on chain that you could use an AI system to create? That’s a Mattereum Asset Passport.
Mattereum aims to get real estate through the digitization crisis in a healthy and productive way. Specifically, a decentralized, networked way which is kept honest by ruthless competition to honestly price risk in fair and free (AI powered) markets.
A business model which is the best of capitalism.
The alternatives are not attractive.
But there is reason for hope.
Once you tell the Americans what the rules are, the Americans will go there and do it. The only way that the SEC could hold back mass adoption of crypto was by refusing to tell people the rules. It doesn’t matter how onerous the regulatory burden was, if the SEC had told people the rules, they would have crawled up that regulatory tree a branch at a time, and we would have had mass tokenization six months after the rules were set, whatever the rules were.
The long delay was only possible because of an aggressive use of ambiguity, I’m going to say charitably, to protect Wall Street. Maybe it was to keep Silicon Valley out of the banking business, but however you want to think about it, the SEC had a very strong commitment under previous administrations that there was not going to be mass tokenization.
We can take this further — as the digitization wave washes inevitably over everything, if we continue to use this model we can finally be done with the age of the Digital Crisis and all its chaoses, replaced with far more stable, and predictably advantageous, outcomes. For example, if everybody is using an AI to put a price tag on anything they look at, all I have to do is hold an asset up in the air and ask: does anybody want this? What you get is effectively a spot market in everything, because the AIs do the pricing. In that environment, is anybody going to get a destructive permanent lock-in? What makes most of the big digitization disasters into disasters is the formation of wicked monopolies, after all.
Spot markets today are for things like gold, oil or foreign exchange, anything where there’s so much volume in the marketplace that the prices are basically set by magic. With a vast number of participants in a global marketplace, all you need to do is hold up an asset, then everybody uses their AI to price the asset, resulting in a market that has a spot price for everything. Add the tokens to effect the title transfer. When you have a market with a spot price for everything, all assets are in some way equivalent to gold — the thing that makes gold, gold, is that you can get a spot price on it. So if we have spot pricing for basically everything, based on AI agents, what you wind up with is being able to use almost any asset in the world as if it was gold. Everything is capital, completing the de Soto vision of the future.
In this future, all assets are equivalent to gold because you can price them accurately and cheaply, and can verify the data about them. It changes the entire nature of global finance, because that finally removes the friction from transacting assets. Then, if you’ve got near-zero friction transactions in assets, why use money? No need for dollars, no need for bitcoin; instead, a new financial system creating itself out of the whole cloth on the fly, and one that is stable and shows every sign of being rational because it is diverse and not tied to any single asset that can distort the market through exuberance and crashes. Diversification is the only stability.
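A toy sketch of that aggregation step, with stand-in "pricing agents" in place of real AI valuation models and made-up numbers throughout, might look like this:

```python
# Hypothetical sketch: lambdas stand in for competing AI valuation models.
import statistics
from typing import Callable

PricingAgent = Callable[[dict], float]   # takes an asset's verified data, returns a quote

def spot_price(asset_data: dict, agents: list[PricingAgent]) -> float:
    """Ask every participant's agent for a price and take a robust aggregate."""
    quotes = [agent(asset_data) for agent in agents]
    return statistics.median(quotes)     # median resists a few wild or dishonest quotes

# Toy agents with slightly different models of the same apartment.
asset = {"type": "apartment", "sqm": 72, "claims_warranted": 68, "city": "Austin"}
agents = [
    lambda a: a["sqm"] * 5200,
    lambda a: a["sqm"] * 5350 - 4000,
    lambda a: a["sqm"] * 5100 + 9000,
]
print(f"Spot price: {spot_price(asset, agents):,.0f}")
```

The design choice worth noting is the aggregation rule: a median (or trimmed mean) means no single mispriced or malicious quote moves the spot price much, which is what keeps the market honest while the agents compete.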
Now that would be a paradigm worthy of the name “stablecoins”!
Anyone got a better plan for saving the world?
In a world that has blockchains, artificial intelligence, and a global currency crisis, we need big ideas and big reach to get to a preferable future. It’s an alignment problem, not just AI alignment but capital alignment. We are not just striving against 2008’s AAA bonds backed by mouldy sheds, but against future, Nick Landian worries about AI alignment.
Through the lens of AI, we can start looking at all the world’s real estate as an anchor for the rest of the economy. When we put the diligence package for a piece of real estate on chain in the form of Mattereum Asset Passport, then over time 50 or 70 or 90 or 95 or 99.9% of the diligence could be done by competing networks of AIs, striving to correctly value property and price risk in competitive markets which reliably punish corruption with (for example) shorts. With those tools, we could rapidly tokenize the world and use the resulting liquidity to keep the wheels from falling off the global economy.
This is, at least in potential, a positive way of solving the next financial crisis before it really starts and ensuring that the digitization of real estate does not create another digital disaster.
CONCLUSION
Artificial inflation of real estate prices for decades caused the global financial crisis.
We propose converting the global system from a debt-backed to an equity-backed model to solve it.
We propose using AI to manage the diligence work, and using the blockchain to handle the share registers and other obligations.
THE DIGITAL CRISIS — TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.
The Financial Stability Board (FSB) — the G20’s global risk watchdog — released a sobering statement: there remain “significant gaps” in global crypto regulation.
It wasn’t the typical bureaucratic warning. It was a clear signal that the world’s financial governance structures are lagging behind the speed and fluidity of decentralized systems. For an industry built on cross-border code and borderless capital, national rulebooks no longer suffice.
But the FSB’s concern reaches beyond oversight. It exposes an unresolved paradox at the heart of digital finance: how to regulate what was designed to resist regulation.
Fragmented Governance, Unified Risk
The FSB’s assessment underscores a growing structural mismatch. The world’s regulatory responses to crypto have been disparate, reactive, and jurisdictionally fragmented.
- The United States continues to rely on enforcement-driven oversight, led by the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC), each defining “crypto assets” through its own lens.
- The European Union is pursuing harmonization through the Markets in Crypto-Assets Regulation (MiCA), creating the first comprehensive regional rulebook for digital assets.
- Asia remains diverse: Japan and Singapore operate under established licensing regimes, while India and China take more restrictive, state-centric approaches.

To the FSB, this regulatory pluralism is not innovation — it’s exposure. The lack of standardized frameworks for risk management, consumer protection, and cross-border enforcement creates vulnerabilities that can spill over into the traditional financial system.
In a market where blockchain transactions flow without borders, inconsistent regulation becomes the new systemic risk.
Regulatory Arbitrage: The Silent Threat
This fragmented environment fuels what the FSB calls “regulatory arbitrage” — the quiet migration of capital, operations, and data to jurisdictions with the weakest oversight.
Stablecoin issuers, decentralized finance (DeFi) platforms, and digital asset exchanges can relocate at the speed of software. For regulators, national boundaries have become lines on a digital map that capital simply ignores.
The result is a patchwork of supervision. Entities can appear compliant in one jurisdiction while operating opaque structures in another. Risk becomes mobile, and accountability becomes ambiguous.
Ironically, this dynamic mirrors the early years of global banking — before coordinated frameworks like Basel III sought to standardize capital rules. Crypto now faces the same evolution: a system outgrowing its regulatory perimeter.
Privacy as a Barrier and a Battleground
One of the FSB’s most striking observations concerns privacy laws. Regulations originally designed to protect individual data are now obstructing global financial oversight.
Cross-border supervision depends on data sharing — but privacy regimes like the EU’s General Data Protection Regulation (GDPR) and similar frameworks in Asia restrict what can be exchanged between authorities.
This creates a paradox:
To monitor crypto markets effectively, regulators need visibility. To protect users’ rights, privacy laws impose opacity.

The collision of these principles reveals a deeper tension between financial transparency and digital sovereignty.
For blockchain advocates, this friction isn’t a flaw — it’s the point. Privacy, pseudonymity, and autonomy were not accidental features of decentralized systems; they were foundational responses to surveillance-based finance.
Now, as regulators push for traceability “from wallet to wallet,” the original ethos of blockchain — self-sovereignty over data and identity — faces its greatest institutional test.
The Expanding Regulatory Perimeter
The FSB’s report marks a turning point: the global regulatory community no longer debates whether crypto needs rules, but how far those rules should reach.
Stablecoins have become the front line. The Bank of England (BoE) recently stated it will not lift planned caps on individual stablecoin holdings until it is confident such assets pose no systemic threat. Meanwhile, the U.S. Federal Reserve has warned that the growth of privately backed digital currencies could undermine monetary policy if left unchecked.
These positions signal that regulators see crypto not as a niche market, but as a parallel financial infrastructure that must be integrated or contained.
Yet, as oversight expands, so does the distance from decentralization’s original promise. The drive to institutionalize crypto — through licensing, capital controls, and compliance standards — risks turning decentralized finance into regulated middleware for the existing system.
The innovation remains, but the autonomy fades.
From Innovation to Integration
What the FSB implicitly acknowledges is that crypto’s mainstreaming is no longer hypothetical. Tokenized assets, on-chain settlement, and programmable money are being adopted by major banks and financial institutions.
However, this adoption often comes with a trade-off: decentralized architecture operated under centralized control.
The example of AMINA Bank — which recently conducted regulated staking of Polygon (POL) under the Swiss Financial Market Supervisory Authority (FINMA) — illustrates this trajectory. The blockchain may remain decentralized in code, but its operation is now filtered through institutional risk, compliance, and prudential oversight.
Crypto is entering a phase of institutional assimilation, where its tools survive but its principles are moderated.
The Ethical Undercurrent: Control vs. Autonomy
At its core, the FSB’s warning is not only about risk but about control. Global regulators see the same infrastructure that enables open, peer-to-peer exchange also enabling opaque, borderless financial activity that escapes accountability.
Their response — standardization and supervision — is rational from a stability standpoint. But it introduces a new ethical question: who governs digital value?
If every decentralized protocol must operate through regulated entities, if every wallet must be traceable, and if every transaction must comply with jurisdictional mandates, then blockchain’s promise of financial self-determination becomes conditional — granted by regulators, not coded by design.
This doesn’t make regulation wrong. It makes it philosophically consequential.
A Call for Coordination, Not Convergence
The FSB’s call for tighter global alignment does not mean a single, monolithic framework. True coordination will require mutual recognition, data interoperability, and respect for jurisdictional privacy laws, not their erosion.
Without this nuance, global harmonization risks turning into regulatory homogenization, where innovation bends entirely to institutional comfort.
A sustainable balance will depend on how regulators treat decentralization:
- As a risk to be mitigated, or
- As an architecture to be understood and integrated responsibly.

The distinction is subtle but defining.
The Architecture of Financial Sovereignty
The G20’s warning marks a pivotal moment. It is a reminder that the future of digital finance will not be decided by code alone, but by the alignment — or collision — of regulatory philosophies.
Crypto began as a rejection of centralized financial power. It now faces regulation not as an external force, but as an inevitable layer of the system it helped create.
The question ahead is not whether crypto will be regulated. It already is.
The real question is whose definition of sovereignty will prevail — that of the individual, or that of the institution.
Shyft Network powers trust on the blockchain and economies of trust. It is a public protocol designed to drive data discoverability and compliance into blockchain while preserving privacy and sovereignty. SHFT is its native token and fuel of the network.
Shyft Network facilitates the transfer of verifiable data between centralized and decentralized ecosystems. It sets the highest crypto compliance standard and provides the only frictionless Crypto Travel Rule compliance solution while protecting user data.
Visit our website to read more, and follow us on X (formerly Twitter), GitHub, LinkedIn, Telegram, Medium, and YouTube. Sign up for our newsletter to keep up-to-date on all things privacy and compliance.
Book your consultation: https://calendly.com/tomas-shyft or email: bd@shyft.network
G20’s Crypto Dilemma: Regulation Without Coordination was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post What Is Digital Identity Management & How to Do It Right appeared first on 1Kosmos.
In the move toward more inclusive and privacy-respecting digital government services, guardianship (when one person is legally authorized to act on behalf of another) is a core, but often overlooked, component.
Today, guardianship processes are fragmented across probate court, family court, and agency-level determinations, with no clear mechanism for digital verifications. Without clarity, agencies risk legal challenges if they inadvertently allow the wrong person to act on behalf of a dependent.
Rather than treating guardianship as an abstract capability, we believe states should identify a non-exhaustive list of key use cases they want to enable. For example, a parent accessing school records on behalf of a minor, a guardian applying for healthcare or social services on behalf of a dependent senior adult, or a foster parent temporarily authorized to pick a child up. Each of these may require a different level of assurance, auditability, and inter-agency coordination.
Why Legal Infrastructure Falls Short
Several legal and regulatory barriers may affect the implementation of a state digital identity. At the state level, existing statutes were drafted for physical credentials and may not clearly authorize digital equivalents in all contexts. Without explicit recognition of state digital identity as a legally valid proof of identity, agencies may be constrained in adopting digital credentials for remote service delivery.
This legal ambiguity creates friction for both agencies and residents, limiting the full potential of digital identity solutions.
Mapping Authority: Who Can Issue What, and When
Guardianship in digital identity is a complex and, as yet, unsolved problem. A guardianship solution should accept decisions from the entities legally empowered to make them, represent those decisions in credentials rather than recreating them, and keep endorsements current as circumstances change.
The first step is to enumerate today’s pathways to establishing guardianship and to identify which entities are authorized to issue evidence. This mapping enables cohesive implementation and prevents confusion about who can issue what.
In parallel, a program should also clarify which agencies authorize which actions and what evidence each verifier needs. Where authorities differ, the state can allow agencies to issue guardianship credentials that reflect their scope while still unifying common steps to reduce friction.
A Taxonomy for Real-World Guardianship Scenarios
We believe that states should define a clear guardianship credential taxonomy.
There are multiple ways to define guardianship depending on legal and operational context, such as parental authority, foster care, medical consent, or financial guardianship. This will naturally lead to multiple guardianship credential types, tailored to definitions, use cases, and issuing agencies.
Design for Flexibility and Change
Digital delivery introduces several challenges that the program should address up front. Endorsements need to change cleanly at the age of majority or when a court modifies an order, including a clear transfer of control to the individual. Reissuance and backstops should be specified for lost devices or keys and calibrated to the chosen technical models.
The design should remain flexible enough to accommodate emerging topics, including AI agent-based interactions, without locking in assumptions that are likely to shift.
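To make those moving parts concrete, here is a minimal, purely illustrative sketch (not SpruceID's or any state's actual schema) of a scoped guardianship credential that names its issuing authority, expires at a set date such as the age of majority or a court review, and can be revoked when circumstances change:

```python
# Illustrative data model only, not SpruceID's or any state's actual schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class GuardianshipCredential:
    guardian_id: str
    dependent_id: str
    scope: str             # e.g. "school-records", "medical-consent", "benefits-application"
    issuer: str            # the authority legally empowered to make the determination
    expires: date          # e.g. the dependent's 18th birthday or a court review date
    revoked: bool = False  # flipped when a court modifies or ends the order

    def is_valid(self, today: date, requested_scope: str) -> bool:
        """A verifier accepts the credential only if it is live and in scope."""
        return (not self.revoked) and today < self.expires and requested_scope == self.scope

cred = GuardianshipCredential(
    guardian_id="guardian:123", dependent_id="minor:456",
    scope="school-records", issuer="family-court:county-9",
    expires=date(2031, 6, 1),
)
print(cred.is_valid(date.today(), "school-records"))   # True until expiry or revocation
```

Scoping each credential to one use case is what lets different agencies issue within their own authority while verifiers still apply one simple acceptance rule.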
Support Human Judgment and Prevent Abuse
The overall system for guardianship should maximize the ability for appropriate and contextualized exercise of human judgement by responsible individuals. All of these systems, even protected with cryptography, security measures, and fraud detection, will still be faulty. They should be designed to prioritize humans and their wellbeing, even with failures and fraud present.
A state digital identity framework should require that as much credential validity information as is appropriate and necessary be made available to the relying party, and that clear indicators of the credential’s current status are available to holders.
It is equally important to prevent abuse of the system. A state must ensure that guardianship credentials cannot be issued or accumulated in ways that could enable fraud, such as one person holding dozens of guardian endorsements to unlawfully access benefits or facilitate trafficking.
The Future of Digital Guardianship
Guardianship in digital identity is not a future problem, it’s a present-day requirement. A successful state digital identity framework must support these relationships with clarity, flexibility, and privacy at its core.
SpruceID helps states design systems that reduce the risk of fraud without sacrificing individual autonomy. Contact us to learn more.
Contact Us
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.
The financial services landscape is defined by a relentless drive for frictionless commerce. Yet, the industry remains trapped in a payments paradox: increasing convenience often comes at the expense of security and reliability. The current generation of low-friction solutions, primarily QR codes, are highly susceptible to spoofing and fraud. Conversely, secure methods like NFC are costly, hardware-dependent, and struggle with mass deployment.
This trade-off is untenable.
LISNR has introduced the definitive answer: Radius. By utilizing ultrasonic data-over-sound, Radius provides the industry with the missing link—a secure, hardware-agnostic, and offline-reliable method for token exchange and proximity verification. This technology is not an iteration; it is the strategic shift required to future-proof mobile payments.
The Current Vulnerability and Reliability Gaps
For financial institutions and payment processors, the challenge lies in securing high-value transactions across a fractured ecosystem:
- QR Code Spoofing: QR code payments are vulnerable to “quishing” (QR code phishing). A fraudster can easily overlay a malicious code onto a legitimate one, hijacking payments or stealing credentials. This simplicity is its greatest security flaw.
- Offline Transaction Liability: In environments with poor connectivity (e.g., transit, emerging markets), most digital wallets revert to a hybrid system where transactions are batched. This exposes merchants to greater fraud liability and introduces a dangerous delay in payment certainty.
- Deployment Bottlenecks: Scaling tap-to-pay solutions quickly requires high capital expenditure. The mandatory, dedicated hardware required for NFC makes global deployment slow and expensive, hindering financial inclusion.

Radius: The Strategic Imperative for Payment Modernization
LISNR’s Radius SDK addresses these strategic deficiencies by decoupling transactional security from reliance on hardware and the network. It transforms every device with a speaker and microphone into a secure payment endpoint.
Here are the four non-negotiable benefits of adopting Radius for your payments platform:
1. Absolute Security
LISNR eliminates the core vulnerability of open-source payment modalities by building security directly into the data transfer protocol.
- Spoofing Elimination: ToneLock® uses a proprietary security precaution to obfuscate the payload before transmission. Only receivers with the correct, authorized key can demodulate the tone, making it impossible for unauthorized apps to read or spoof the payment data.
- End-to-End Encryption: For the highest security standards, the SDK offers optional, built-in AES 256 Encryption for all payloads, ensuring data remains unreadable.

2. Unrivaled Offline Transaction Certainty
Radius is engineered for mission-critical reliability, ensuring transactions are secure and auditable even when the network fails.
- Network Agnostic Reliability: The entire ToneLock and AES 256 Encryption/Decryption process can occur offline. This enables the secure exchange and validation of payment tokens without requiring an active internet connection (a conceptual sketch of this offline flow follows this list). Radius ensures instant transaction certainty and lowers merchant liability in disconnected environments.
- Bi-Directional Exchange: The SDK supports bidirectional transactions, allowing two devices (e.g., customer wallet and merchant terminal) to simultaneously transmit and receive tones on separate channels. This two-way handshake initiates payment instantly while simultaneously delivering a merchant record to the consumer device.

3. High-Velocity, Zero-Friction Commerce
The speed of a transaction directly correlates with consumer satisfaction and throughput in high-volume settings. Radius accelerates the process with specialized tone profiles.
- Rapid High-Throughput: For point-of-sale environments, LISNR offers Point 1000 and Point 2000 tone profiles. These are optimized for sub-1 meter range and engineered for high throughput, enabling near-instantaneous credential exchange for rapid checkout and self-service kiosks.
- Seamless User Experience: The process can be nearly entirely automated: the user simply opens the app, and the transaction is initiated and verified by proximity, eliminating manual input, scanning, or tapping.

4. Low-Cost, Universal Deployment
Radius is a software-only solution that democratizes access to secure, contactless payment infrastructure.
- Hardware-Agnostic: The SDK is integrated into existing applications and requires only a device’s standard speaker and microphone. This removes the need for costly upgrades to POS hardware, dramatically reducing the capital expenditure barrier for global payment modernization.
- Scalability: As a software solution, upgrading the entire payment infrastructure is as easy as updating the app. Because there is no new hardware to manage, payment providers can achieve unparalleled scale and speed in deploying secure payment functionality across millions of endpoints instantly.

LISNR is the worldwide leader in proximity verification because our software-first approach delivers the security and reliability the payments industry demands, without sacrificing the frictionless experience consumers expect.
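The conceptual sketch referenced above: this is not the Radius SDK or its API, only the general idea that a payment token is sealed with AES-256 before being modulated into sound, so the payload exchanged between devices is unreadable without the shared key and can be validated entirely offline. It uses the general-purpose Python `cryptography` package; the token format and key provisioning are assumptions for illustration.

```python
# Conceptual sketch only: NOT the LISNR Radius SDK or its API.
# Requires the third-party `cryptography` package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumed to be provisioned to both endpoints ahead of time (out of scope here).
shared_key = AESGCM.generate_key(bit_length=256)

def seal(payment_token: bytes, key: bytes) -> bytes:
    """Encrypt the payload before it would be modulated into an ultrasonic tone."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, payment_token, None)

def open_sealed(blob: bytes, key: bytes) -> bytes:
    """Receiver side: demodulated bytes are decrypted with the same key, no network needed."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

token = b"TOKEN:9f31c2;AMOUNT:12.50;MERCHANT:4411"   # hypothetical token format
blob = seal(token, shared_key)
assert open_sealed(blob, shared_key) == token        # verified with no connectivity at all
```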
Want to Learn more?
We’d love to learn more about your payment solution and discuss how data-over-sound can help improve your consumer experience. Learn more about our solutions in finance on our website or contact us to set up a meeting.
The post 4 Ways Ultrasonic Proximity Solves the Security-Friction Trade-Off appeared first on LISNR.
The days when you needed stacks of documents to properly assess a customer are coming to an end. In a world where speed, compliance, and customer satisfaction matter more and more, working with PDFs, attachments, and manual checks is no longer sustainable. Especially in the credit management sector, the old process leads to delays, errors, and frustration, for the customer and the organization alike.
Nineteen Virtual Asset firms in Dubai have been hit with penalties amounting to $163,000. These firms were fined for operating without a Virtual Assets Regulatory Authority (VARA) license and breaching Dubai's marketing rules.
The post 19 Virtual Asset Providers Fined up to $163,000 by Dubai Regulators first appeared on ComplyCube.
You know that moment when a new app asks for your ID and selfie before letting you in? You sigh, snap the photo, and in seconds it says “You’re verified!” It feels simple, but behind that small step sits an advanced system called ID verification services that keeps businesses safe and fraudsters out.
In today’s digital world, identity verification isn’t a luxury. It’s a necessity. Without it, online platforms would be a playground for scammers. That’s why more companies are turning to digital ID verification to secure their platforms while keeping user experiences smooth and fast.
How ID Verification Evolved into a Digital Superpower
Not too long ago, verifying someone’s identity meant visiting a bank, filling out forms, and waiting days for approval. It was slow and painful. Today, online identity verification has turned that ordeal into a 10-second selfie check.
Feature | Traditional ID Checks | Digital ID Verification
Time | Days or weeks | Seconds or minutes
Accuracy | Prone to human error | AI-powered precision
Accessibility | In-person only | Anywhere, anytime
Security | Paper-based | Encrypted and biometric

According to a Juniper Research 2024 report, businesses using digital identity checks have reduced onboarding times by 55% and cut fraud by nearly 40%. That’s not an upgrade, that’s a revolution.
How ID Verification Services Actually Work
It looks easy on your screen, but behind the scenes, it’s like a full orchestra performing perfectly in sync. When you upload your ID, OCR technology instantly extracts your details. Then, facial recognition compares your selfie to the photo on your document, while an ID verification check cross-references the data with secure global databases.
All this happens faster than your coffee order at Starbucks. And yes, it’s fully encrypted from start to finish.
If you want to see how global accuracy standards are tested, visit the NIST Face Recognition Vendor Test (FRVT). This benchmark helps developers measure the precision of their facial recognition algorithms.
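As a rough sketch of the pipeline described above (the function names are placeholders for whatever OCR, face-matching, and registry services a provider uses, not any particular vendor's API), the orchestration amounts to three checks combined into one decision:

```python
# Placeholder pipeline: extract_text_fields, face_similarity, and lookup_document
# stand in for a provider's actual OCR, face-matching, and registry services.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    ocr_fields: dict
    face_match: bool
    document_known: bool

    @property
    def verified(self) -> bool:
        return self.face_match and self.document_known

def verify_identity(id_image: bytes, selfie: bytes,
                    extract_text_fields, face_similarity, lookup_document,
                    match_threshold: float = 0.8) -> VerificationResult:
    fields = extract_text_fields(id_image)                  # OCR step
    score = face_similarity(id_image, selfie)               # selfie vs. document photo
    known = lookup_document(fields.get("document_number"))  # cross-reference step
    return VerificationResult(fields, score >= match_threshold, known)

# Stub engines, for demonstration only.
result = verify_identity(
    b"<id image bytes>", b"<selfie bytes>",
    extract_text_fields=lambda img: {"document_number": "X123", "name": "A. Sample"},
    face_similarity=lambda a, b: 0.91,
    lookup_document=lambda num: num is not None,
)
print(result.verified)   # True
```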
Why Businesses Are Making the Shift
Let’s be honest, no one likes waiting days to get verified. Businesses know that, and users expect speed. So, they’re shifting from manual checks to identity verification solutions that deliver results in real time.
ID verification software gives businesses an edge by:
- Cutting down on manual reviews
- Reducing fraud risks through AI analysis
- Staying compliant with rules like GDPR
- Enhancing global accessibility

A McKinsey & Company study found that businesses using automated ID verification checks experienced up to 70% fewer fraudulent sign-ups. Another Gartner analysis (2023) reported that automation in verification reduces onboarding costs by over 50%.
So, businesses aren’t just going digital for fun; they’re doing it to stay alive in a market where users expect instant trust.
The Technology Making It All Possible
Every smooth verification hides some serious tech genius. Artificial intelligence detects tampered IDs or fake lighting, while machine learning improves detection accuracy over time. Facial recognition compares live selfies to document photos, even if your hair color or background lighting changes.
The FRVT 1:1 results show that today’s best facial recognition models are over 20 times more accurate than they were a decade ago, according to NIST.
Optical Character Recognition (OCR) handles the text on IDs, and encryption ensures data privacy. It’s these small but powerful innovations that make modern ID document verification fast, secure, and scalable.
Want to explore real-world tech examples? Visit the Recognito Vision GitHub, where you can see how advanced verification systems are built from the ground up.
Why It’s a Smart Investment
Investing in reliable ID verification solutions isn’t just about compliance, it’s about building customer trust. When users feel safe, they’re more likely to finish sign-ups and come back.
According to Statista’s 2024 Digital Trust Report, companies using digital identity verification saw conversion rates increase by 30–35%. That’s because users today value both speed and security.
So, when you invest in this technology, you’re not just protecting your business. You’re giving users the confidence to engage without hesitation.
Where ID Verification Shines
The beauty of user ID verification is that it works across every industry. It’s not just for banks or fintech startups.
- In finance, it prevents money laundering and fraud.
- In healthcare, it confirms patient identities for telemedicine.
- In eCommerce, it helps fight fake orders and stolen cards.
- In gaming, it enforces age restrictions.
- In ridesharing and rentals, it keeps both parties safe.

According to a 2022 IBM Security Study, 82% of users say they trust companies more when those companies use digital identity checks. That’s how powerful this technology is; it builds credibility while keeping everyone safe.
Recognito Vision’s Role in Modern Verification
For businesses ready to step into the future, Recognito Vision makes it simple. Their ID document recognition SDK helps developers integrate verification directly into apps, while the ID document verification playground lets anyone test the process firsthand.
Recognito’s platform blends AI accuracy, fast processing, and user-friendly design. The result? Businesses verify customers securely while users hardly notice it’s happening. That’s efficiency at its best.
Challenges to Consider
Of course, nothing’s perfect. Some users hesitate to share IDs online, and global documents come in thousands of formats. Integrating verification tools into older systems can also feel tricky.
However, choosing a trustworthy ID verification provider can solve most of these issues. As Gartner’s 2024 Cybersecurity Trends Report points out, companies that adopt verified digital identity frameworks see significantly fewer data breaches than those using manual checks.
So while there are challenges, the benefits easily outweigh them.
The Road Ahead
The next phase of digital identity verification is all about control and privacy. Imagine verifying yourself without even sharing your ID. That’s what decentralized identity systems and zero-knowledge proofs are bringing to life.
According to the PwC Global Economic Crime Report 2024, widespread digital ID verification could save over $1 trillion in fraud losses by 2030. That’s not science fiction, it’s happening right now.
The world is heading toward frictionless, instant trust. And businesses that adopt early will lead the pack.
Final Thoughts
At its core, ID verification services aren’t just about checking who someone is. They’re about creating confidence for users, for businesses, and for the digital world as a whole.
If you’re a company ready to modernize and protect your platform, explore Recognito Vision’s identity verification solutions. Because in an era of deepfakes, scams, and cyber tricks, the smartest move is simply knowing who you’re dealing with safely, quickly, and confidently.
Frequently Asked Questions
1. What are ID verification services and how do they work?
ID verification services confirm a person’s identity by analyzing official ID documents and matching them with facial or biometric data using AI technology.
2. Why are ID verification services important for businesses?
They help businesses prevent fraud, comply with KYC regulations, and build customer trust through secure and fast verification processes.
3. Is digital ID verification secure for users?
Yes, digital ID verification is highly secure because it uses encryption, biometric checks, and data protection standards to keep user information safe.
4. How do ID verification services help reduce fraud?
They detect fake or stolen IDs, verify real users instantly, and prevent unauthorized access, reducing fraud risk significantly.
5. What should businesses look for in an ID verification provider?
Businesses should look for providers that offer fast results, global document support, strong data security, and full regulatory compliance.
October is National Domestic Violence Awareness Month (DVAM), an annual event dedicated to shedding light on the devastating impact of domestic violence and advocating for those affected.
The theme for DVAM 2025 is With Survivors, Always, which explores what it means to be in partnership with survivors towards safety, support, and solidarity.
Anonyome Labs stands #WithSurvivors this National Domestic Violence Awareness Month and every day—and is proud to help empower safety through privacy for survivors of domestic violence via our Sudo Safe Initiative.
What is the Sudo Safe Initiative?
The Sudo Safe Initiative is a program developed to bring privacy to those at higher risk of verbal harassment or physical violence.
Sudo Safe offers introductory discounts on the MySudo privacy app, to help people to keep their personally identifiable information private.
You can get a special introductory discount to try MySudo by becoming a Sudo Safe Advocate.
Here’s how it works:
1. Visit our website at anonyome.com.
2. Sign up to be a Sudo Safe Advocate — it’s quick and easy.
3. Once you’re signed up, you’ll receive details on how to access your exclusive discount and start using MySudo.

In addition to survivors of domestic violence, the Sudo Safe Initiative also empowers safety through privacy for:
- Healthcare professionals
- Teachers
- Foster care workers
- Volunteers
- Survivors of violence, bullying, or stalking.

How can MySudo help survivors of domestic violence?
MySudo allows people to communicate with others without using their own phone number and email address, to reduce the risk of that information being used for tracking or stalking.
With MySudo, a user creates secure digital profiles called Sudos. Each Sudo has a unique phone number, handle, and email address for communicating privately and securely.
The user can avoid making calls and sending texts and emails from their personal phone line and email inbox by using the secure alternative contact details in their Sudos.
No personal information is required to create an account with MySudo through the app stores.
Four other ways to help survivors of domestic violence
Educate yourself and others
Learn and share the different types of abuse (physical, emotional, sexual, financial, and technology-facilitated) and how to find local resources and support services.
Listen without judgment
One of the most powerful things you can offer a domestic violence survivor is support, by doing things like:
- Creating a safe space for them to share their experiences without fear of judgment or blame
- Letting them express their feelings while validating their emotions
- Being willing to listen
- Helping them create a safety plan.

Encourage professional support
Encourage your friend or family member experiencing domestic violence to seek help from counselors, therapists, or support groups that specialize in trauma and abuse. You can assist by researching local resources, offering to accompany them to appointments, or helping them find online support communities. Professional guidance can provide survivors with the tools they need to rebuild their lives.
Raise awareness and advocate for change
Support survivors not just during DVAM, but year-round. Find ideas here and learn about the National Domestic Violence Awareness Project.
Become a Sudo Safe Advocate
If your organization can help us spread the word about how MySudo allows at-risk people to interact with others without giving away their phone number, email address, and other personal details, we invite you to become a Sudo Safe Advocate.
As an advocate, you’ll receive:
- A toolkit of shareable privacy resources
- A guide to safer communication
- Special MySudo promotions
- Your own digital badge.

Become a Sudo Safe Advocate today.
More information
Contact the National Domestic Violence Hotline.
Learn about the National Domestic Violence Awareness Project.
Learn more about the Sudo Safe Initiative and Anonyome Labs.
Anonyome Labs is also a proud partner of the Coalition Against Stalkerware.
The post DVAM 2025: MySudo discount for survivors of domestic violence appeared first on Anonyome Labs.
For years, the promise of a truly passwordless enterprise has felt just out of reach. We’ve had passwordless for web apps, but the desktop remained a stubborn holdout. We’ve seen the consumer world embrace passkeys, but the solutions were built for convenience, not the rigorous security and compliance demands of the enterprise. This created a dangerous gap, a world where employees could access a sensitive cloud application with a phishing-resistant passkey, only to log in to their workstation with a phishable password.
That gap closes today.
HYPR is proud to announce our partnership with Microsoft to deliver the industry's first true enterprise-grade passkey solution. By integrating HYPR’s non-syncable, FIDO2 passkeys directly with Microsoft Entra ID, we are finally eliminating the last password and providing a unified, phishing-resistant authentication experience from the desktop to the cloud.
What is the Difference Between Enterprise and Other Passkeys?
The term "passkey" has become a buzzword, but not all passkeys are created equal. The synced, consumer-grade passkeys offered by large tech providers are a fantastic step forward for the public, but they present significant challenges for the enterprise:
- Loss of Control: Synced passkeys are stored in third-party consumer cloud accounts, outside of enterprise control and visibility.
- Security Gaps: They are designed to be shared and synced by users, which can break the chain of trust required for corporate assets.
- The Workstation Problem: They do not natively support passwordless login for enterprise workstations (Windows/macOS), leaving the most critical entry point vulnerable.

For the enterprise, you need more than convenience. You need control, visibility, and end-to-end security. You need an enterprise passkey.
Introducing HYPR Enterprise Passkeys for Microsoft Entra ID
HYPR’s partnership with Microsoft directly addresses the enterprise passkey gap. Our solution is purpose-built for the demands of large-scale, complex IT environments that rely on Microsoft for their identity infrastructure.
This isn't a retrofitted consumer product. It's a FIDO2-based, non-syncable passkey that is stored on the user's device, not in a third-party cloud. This ensures that your organization retains full ownership and control over the credential lifecycle.
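To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of WebAuthn registration call a relying party makes when creating a FIDO2 credential. The field names come from the standard WebAuthn browser API; the policy choices and example names shown are illustrative assumptions, not HYPR's actual implementation, which handles this flow through its own client.

```typescript
// Hypothetical sketch: standard WebAuthn registration options a relying party
// might use for a device-bound, phishing-resistant credential. Names and policy
// choices are illustrative, not HYPR's product code.
const publicKeyOptions: PublicKeyCredentialCreationOptions = {
  challenge: crypto.getRandomValues(new Uint8Array(32)), // issued by the server in practice
  rp: { id: "corp.example.com", name: "Example Corp" },
  user: {
    id: new TextEncoder().encode("employee-1234"),
    name: "alice@corp.example.com",
    displayName: "Alice Example",
  },
  pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
  authenticatorSelection: {
    residentKey: "required",      // discoverable credential, needed for passwordless sign-in
    userVerification: "required", // biometric or PIN on the authenticator
  },
  attestation: "direct",          // lets the relying party verify the authenticator model
};

// The private key never leaves the authenticator; the server stores only the
// public key. An enterprise relying party can additionally inspect the
// backup-eligibility flags in the authenticator data to confirm the credential
// is device-bound rather than synced to a consumer cloud.
const credential = await navigator.credentials.create({ publicKey: publicKeyOptions });
```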
With a single, fast registration, your employees can use one phishing-resistant credential to unlock everything they need:
- Passwordless Desktop Login: Users log in to their Entra ID-joined Windows workstations using the HYPR Enterprise Passkey on their phone. No password, no phishing, no push-bombing.

This partnership isn't just about adding another MFA option; it's about fundamentally upgrading the security posture of your entire Microsoft ecosystem.
Effortless Deployment: Go Passwordless in Days, Not Quarters
You’ve invested heavily in the Microsoft ecosystem. Now, you can finally maximize that investment by eliminating the #1 cause of breaches: the password. The HYPR and Microsoft partnership makes true, end-to-end passwordless authentication a reality.
There are no complex federation requirements, no painful certificate management, and no AD dependencies. It's a simple, lightweight deployment that allows you to roll out phishing-resistant MFA across your entire workforce in days, not quarters.
Empower your employees with fast, frictionless access that works everywhere they do. And empower your security team with the control and assurance that only a true enterprise passkey can provide.
Ready to bring enterprise-grade passkeys to your Microsoft environment? Schedule your personalized demo today.
Enterprise Passkey FAQ
Q: What is a "non-syncable" passkey?
A: A non-syncable passkey is a FIDO2 credential that is bound to the user's physical device and cannot be copied, shared, or backed up to a third-party cloud. This provides a higher level of security and assurance because the enterprise maintains control over where the credential resides.
Q: How is this different from using an authenticator app for MFA?
A: Authenticator apps that use OTPs or push notifications are still susceptible to phishing and push-bombing attacks. HYPR Enterprise Passkeys are based on the FIDO2 standard, which is cryptographically resistant to phishing, man-in-the-middle, and other credential theft attacks.
Q: What does the deployment process look like?
A: Deployment is designed to be fast and lightweight. It involves deploying the HYPR client to workstations and configuring the integration within your Microsoft Entra ID tenant. Because there are no federation servers or complex certificate requirements, many organizations can go from proof-of-concept to production rollout in a matter of days.
Q: Does this support Bring-Your-Own-Device (BYOD) scenarios?
A: Yes. The solution is vendor-agnostic and supports both corporate-managed and employee-owned (BYOD) devices, providing a simple, IT-approved self-service recovery flow that keeps users productive without compromising security.
A new Annotators Hub challenge
The European Parliament generates thousands of speeches, covering everything from local affairs to international diplomacy. These speeches shape policies that impact millions across Europe and beyond. Yet, much of this discourse remains unstructured, hard to track, and difficult to analyze at scale.
CivicLens, the second and latest task in the Annotators Hub, invites contributors to help change that. Together with Lunor, Ocean is building a structured, research-grade dataset based on real EU plenary speeches. Your annotations will support civic tech, media explainers, and political AI, and will give you the chance to earn a share of the $10,000 USDC prize pool.
What you’ll do
You’ll read short excerpts from speeches and answer a small set of targeted questions:
- Vote Intent: Does the speaker explicitly state how they will vote (yes/no/abstain/unclear)?
- Tone: Is the rhetoric cooperative, neutral, or confrontational?
- Scope of Focus: Is the emphasis on the EU, the speaker’s country, or both?
- Verifiable Claims: Does the excerpt contain a factual, checkable claim (flag and highlight the span)?
- Topics (multi-label): e.g., economy, fairness/rights, security/defense, environment/energy, governance/procedure, health/education, technology/industry.
- Ideological Signal (if any): Is there an inferable stance or framing (e.g., pro-integration, national interest first, market-oriented, social welfare-oriented), or no clear signal?

Each task follows a consistent schema with clear tooltips and examples. Quality is ensured through overlap assignments, consensus checks, and spot audits.
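For illustration, a single annotation record could be represented roughly as in the sketch below. The field names and enum values are assumptions inferred from the question list above, not the actual Lunor/Ocean submission schema.

```typescript
// Hypothetical shape of one CivicLens annotation record; names are illustrative.
interface SpeechAnnotation {
  excerptId: string;
  voteIntent: "yes" | "no" | "abstain" | "unclear";
  tone: "cooperative" | "neutral" | "confrontational";
  scope: "eu" | "national" | "both";
  verifiableClaim: { present: boolean; span?: [start: number, end: number] }; // character offsets
  topics: Array<
    | "economy"
    | "fairness_rights"
    | "security_defense"
    | "environment_energy"
    | "governance_procedure"
    | "health_education"
    | "technology_industry"
  >;
  ideologicalSignal?: "pro_integration" | "national_interest" | "market_oriented" | "social_welfare" | "none";
}

// Example record for a single excerpt (values are made up):
const example: SpeechAnnotation = {
  excerptId: "ep-2025-03-14-speech-042",
  voteIntent: "yes",
  tone: "cooperative",
  scope: "both",
  verifiableClaim: { present: true, span: [118, 196] },
  topics: ["economy", "environment_energy"],
  ideologicalSignal: "pro_integration",
};
```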
Requirements
- Good command of written English (reading comprehension and vocabulary)
- Ability to recognize when political or ideological arguments are being made
- Basic understanding of common political dimensions (e.g., left vs. right, authoritarian vs. libertarian)
- Minimum knowledge of international organizations and relations (e.g., what the EU is, roles of member states)
- Awareness of what parliamentary speeches are and their general purpose in the context of EU roll call votes

Why it matters
Your contributions will help researchers and civic organizations better understand political debates, predict voting behavior, and make parliamentary discussions more transparent and accessible.
The resulting dataset isn’t just for political analysis; it has broad, real-world applications:
- Fact-checking automation: AI models trained on this data can learn to distinguish checkable assertions from opinions or vague claims, helping organizations like PolitiFact, Snopes, or Full Fact prioritize their verification workload
- Compliance and policy tracking: Financial compliance platforms, watchdog groups, and regtech firms can detect and monitor predictive or market-moving statements in political and economic discourse
- Content understanding and education: News aggregators, summarization tools, and AI assistants (like Feedly or Artifact) can better tag and summarize political content. The same methods can also power educational apps that teach critical thinking and media literacy

Rewards
A total prize pool of $10,000 USDC is available for contributors.
Rewards are distributed linearly based on validated submissions, using the formula:
Your Reward = (Your Score ÷ Sum of All Scores) × Total Prize Pool
The higher the quality and volume of your accepted annotations, the higher your share.
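In code, the split described above is a simple proportional share. The numbers in the example are purely illustrative.

```typescript
// Linear reward split: your validated score relative to everyone's combined score.
function rewardShare(yourScore: number, allScores: number[], prizePool: number): number {
  const totalScore = allScores.reduce((sum, score) => sum + score, 0);
  return totalScore > 0 ? (yourScore / totalScore) * prizePool : 0;
}

// Illustrative example: a score of 120 out of a combined 4,800 earns
// (120 / 4800) * 10,000 = 250 USDC.
console.log(rewardShare(120, [120, 4680], 10_000)); // 250
```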
For full participation details, submission rules, and instructions, visit the quest details page on Lunor Quest.
CivicLens: Building the First Structured Dataset of EU Parliamentary Speeches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
Are you a lawyer who often works away from the office? An accountant who works remotely? Do you provide consulting services at clients’ premises? If so, read this article to learn why you should use a Business VPN to connect to a network other than your own. A VPN for business is a valuable professional ally because it helps protect highly sensitive information while guaranteeing secure remote access to professional content wherever you are, even abroad.
So, what exactly can a VPN – a virtual private network – do for you when you work remotely? Here are five practical ways in which using a VPN for remote work can make a difference to professionals and small businesses.
Work from home security
You are working from home and, as always, you have to access business management systems, dashboards, customer and supplier databases. You may also need to consult or send confidential documents like balance sheets, contracts and court procedures. You even have crucial calls and meetings on your agenda to finalise agreements or submit reports. To do all this, you rely on your home router and perhaps use your own laptop or smartphone. Without a VPN to protect your connection, your home network can become a point of vulnerability – a potential entry point for eavesdropping and data breaches. Have you ever thought what would happen if all the information you work with were to fall into the wrong hands? Your clients’ confidentiality, the security of your work and your own professional reputation would be severely compromised.
A VPN creates an encrypted and therefore secure tunnel between your device and company servers, ensuring cybersecurity and protecting resources and internal communications. In this way, even remotely sharing files with co-workers or customers is absolutely secure. Many premium VPNs also offer additional security tools that protect you from malware, intrusive advertisements, dangerous sites and trackers, and warn you in case of data leaks.
Public Wi-Fi security
On a business trip, you are highly likely to use hotel or airport lounge Wi-Fi to complete a presentation or access your corporate cloud. What could happen without a VPN? Imagine you are waiting for your flight and want to check your email. The moment you connect to the public network and access your mail server, a hacker intercepts your traffic, reads your email and steals your login credentials. You don’t know it, but you have just suffered what is called a man-in-the-middle attack. With a virtual private network, no hacker can see what you do online, even on open Wi-Fi networks.
Accessing national services and portals, even abroad
If you are abroad and need to access essential national websites, portals and services like National Insurance, Inland Revenue, or corporate intranets, you may encounter access limitations and geo-blocking. This is because, for security reasons, some public portals and corporate networks choose to restrict access from foreign IPs. In some cases, the site may not function properly or may not show certain sections.
In these cases, a VPN is absolutely indispensable. Irrespective of where you are physically located, all you need to do is connect to a server in another country to simulate a presence there, bypass geo-blocking and gain access to the content you want, while still enjoying an encrypted and protected connection.
Privacy and data security
This aspect is often overlooked. Surfing online without adequate protection endangers the security not only of your own information but also that of your employees, collaborators, suppliers and customers, risking potentially enormous economic and reputational damage.
If you think data breaches only concern big tech companies like Meta, Amazon and Google, you are wrong. Very often hackers and cybercriminals choose to target professional firms or small businesses that fail to pay attention to IT security, underestimating the need for proper tools and protective infrastructures to prevent data breaches.
When you deal with sensitive health, legal or financial information on a daily basis, keeping it secure is not just common sense in today’s fully digitalised world; it is a legal duty.
Data privacy is as crucial for individuals as it is for companies, because it represents a key element of protection, trust and accountability. It means maintaining control over your personal information and protecting yourself against abuse or misuse that may damage brand reputation or personal security.
Using a VPN for business travel is one of the tools that cybersecurity experts recommend to protect privacy and client data, since, as we have seen, VPNs change your IP address and encrypt your Internet connection, preventing potential intrusions.
Access to international websites and content
If you work with international customers or suppliers, a virtual private network is indispensable. As we have seen, for security reasons, some institutional and professional sites and portals restrict access based on your geographical location. With a VPN, you can simulate your presence in a country other than the one in which you are physically located.
For instance, do you ever need to consult public registers or legal databases in non-EU countries, access tax or customs portals, use SaaS software for foreign markets or monitor the pricing strategies of foreign competitors by accessing local versions of their sites? With a VPN you only need to connect to a server in the country or geographical area you are interested in to bypass geo-blocking and access the resources you need.
Whatever your profession, whatever the size of your company, and wherever you are, a VPN is indispensable to the security and privacy of your work.
The post VPN for lawyers, labour consultants, accountants appeared first on Tinexta Infocert international website.
We have been living in a vast digital workplace for some time now, a permanently connected environment that transcends the boundaries of the traditional office to include the sofa at home, airport lounges, hotel rooms, coffee shops and train carriages. In this fluid and constantly evolving digital space, you read the news, shop online, download apps, participate in calls and meetings, answer emails, access sensitive data, perform banking transactions, and more besides, on a daily basis. But do you ever wonder what happens to your data while you are online? Are you really in control of the information you share, the sites you visit, and the actions you take? Spoiler: a large number of others can see what you do during your daily visits to the Internet. Unless, of course, you use a VPN – a Virtual Private Network to protect your Internet connection and online privacy. So, how does a VPN work? A VPN acts as a vigilant and attentive guardian to protect you from prying eyes and malicious attacks.
Who can see what you do online?
Though it might seem so, surfing online is by no means private. Every click you make leaves a trace. These traces form what is called a “digital shadow” or fingerprint. Every time you “touch” something online, many actors monitor, collect or intercept what you do. Who are these people?
1. Your Internet Service Provider (ISP): your provider can track all the sites you visit, when you visit them, and for how long. Not only that, but your provider may store and share certain information with third parties (not only the police and judicial authorities, but even advertisers) for a variable period of time, depending on the type of content, the consent you have given, internal policies and legislation (national and European). In Italy, for example, Internet service providers may retain certain data for up to 10 years.
2. Network administrators: if you connect to corporate or public Wi-Fi, e.g. a hotel network, the network administrator can monitor traffic on that network and thus gain access to information about your online activities.
3. Websites and online platforms: many sites collect browsing data, including through cookies (just think of all those pop-ups that constantly interrupt your browsing), pixels and trackers. This allows them to profile you in order to show you personalised advertisements or sell your data to third parties.
4. Search engines: if you use a traditional search engine like Google, Bing or Yahoo, everything you do is traceable – even if you use “Incognito mode”. If you want to keep your searches private, we suggest using non-traceable search engines such as DuckDuckGo, Qwant, Startpage or Swisscows.
5. Hackers and criminals: surfing online exposes you to daily risks, especially when you choose to connect to unprotected public Wi-Fi networks or surf without the use of security tools like antivirus software, VPNs or anti-malware tools. Credentials, emails, bank details, even your identity, are valuable commodities.
The Internet is not a private house; it is a public square.
Every time you connect to the Internet, your device uses an Internet Protocol (IP) address, which can reveal not only your online identity, but also the location from which you connect. Technically, an IP address is a numerical label assigned by the Internet service provider. Because it is used to identify individual devices among billions of others, it can be regarded as a postal address in the digital world.
When you enter the name of a website (example.com) in your browser’s address bar, your computer has to perform certain operations because it cannot actually read words, only numbers. First of all, the browser looks up the IP address corresponding to the site you want (for instance, example.com maps to a public address such as 93.184.216.34), then, once the location is found, it loads the site onto the screen. An IP address functions like a home address, ensuring that data sent over the Internet always reaches the correct destination.
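As a small illustration of that lookup step, here is a minimal sketch using Node.js's built-in DNS resolver; the resolved address will vary depending on where and when you run it.

```typescript
// Minimal sketch of the name-to-address lookup a browser performs before
// loading a site. Run with Node.js (as an ES module); output varies by network.
import { promises as dns } from "node:dns";

const { address, family } = await dns.lookup("example.com");
console.log(`example.com resolves to ${address} (IPv${family})`);
```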
This identifier is visible to all the subjects listed above.
Not only that, but the information you routinely exchange online – passwords, emails, documents and sensitive data – often travels in “plaintext”, i.e. without being encrypted. This means that anyone who manages to intercept it on its way through the network can read or copy it. Think of sending a postcard: anyone intercepting it on the way can read its contents, your name, the recipient’s address and so on. The same happens with your online data. Not using adequate protection systems, like a VPN, is like leaving your front door open. Would you ever do that?
How does a VPN work?
Typically, when you attempt to access a website, your Internet provider receives the request and directs it straight to the desired destination. A VPN, however, directs your Internet traffic through a remote server before sending it on to its destination, creating an encrypted tunnel between your device and the Internet. This tunnel not only secures the data you send and receive, but also hides it from outside eyes, providing you with greater privacy and online security. A VPN also changes your real IP address (i.e. your digital location), e.g. Milan, and replaces it with that of the remote server you have chosen to connect to, e.g. Tokyo. In this way, no one – neither your Internet provider, nor the sites you visit, nor any malicious attackers – can know where you are really connecting from.
It is as if the virtual public square, where everyone sees and listens, turns into a closed room, invisible to those outside, at the click of a button.
This, in brief, is how a virtual private network works:
1. First, the VPN server identifies you by authenticating your client.
2. The VPN server applies an encryption protocol to all the data you send and receive, making it unreadable to anyone trying to intercept it.
3. The VPN creates a virtual, secure “tunnel” through which your data travels to its destination, so that no one can access it without authorisation.
4. The VPN wraps each data packet inside an external packet (an “envelope”) through a process called encapsulation, and encrypts it. The envelope is the essential element of the VPN tunnel that keeps your data safe during transfer.
5. When the data reaches the server, the external packet is removed through a decryption process.
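The encapsulation idea in steps 4 and 5 can be illustrated with a toy example. This is only a conceptual sketch: real VPN protocols such as WireGuard, OpenVPN, or IPsec do this at the network layer, with key exchange and far more machinery, so nothing here should be read as a working VPN.

```typescript
// Conceptual sketch of "wrap, send, unwrap": the inner packet is encrypted and
// placed inside an outer envelope before it travels through the tunnel.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // in a real VPN, negotiated during authentication
const iv = randomBytes(12);

// "Encapsulate": encrypt the inner packet and wrap it with routing metadata.
const cipher = createCipheriv("aes-256-gcm", key, iv);
const innerPacket = Buffer.from("GET /private-report HTTP/1.1 ...");
const envelope = {
  to: "vpn-server.example.net", // only the tunnel endpoint is visible on the wire
  payload: Buffer.concat([cipher.update(innerPacket), cipher.final()]),
  tag: cipher.getAuthTag(),
};

// At the VPN server, the outer wrapper is removed, the payload decrypted,
// and the original packet forwarded to its real destination.
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(envelope.tag);
const restored = Buffer.concat([decipher.update(envelope.payload), decipher.final()]);
console.log(restored.toString()); // the original request
```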
Using a VPN should be part of your digital hygiene
Every professional should use a VPN, not only when working remotely or using public Wi-Fi, but as an essential tool to surf more securely, privately and responsibly, day after day. You can think of a VPN as a habit of digital hygiene that provides greater privacy and an additional layer of protection against potential online threats.
A VPN:
● encrypts your data, protecting you from prying eyes
● changes your real IP, protecting your identity
● routes your data through remote servers, creating a secure and private tunnel
● stops your Internet provider and other third parties tracking your data.
To sum up, a VPN is not just a tool for special situations, like using public Wi-Fi or accessing restricted content. Neither is it only for experienced users and cybersecurity enthusiasts. On the contrary, it is an essential tool – a “must-have” – for all professionals and individuals who want to inhabit the digital space that surrounds us with greater awareness and less fear.
The post VPN: a non-technical guide for professionals appeared first on Tinexta Infocert international website.
Read the first installment in our series on The Future of Digital Identity in America here.
Technology alone doesn’t change societies; policy does. Every leap forward in digital infrastructure (whether electrification, the internet, or mobile payments) has been accelerated or slowed by policy. The same is true for verifiable digital identity. The question today isn’t whether the technology works; it does. The question is whether policy frameworks will make it accessible, trusted, and interoperable across industries and borders.
Momentum is building quickly. State legislatures, federal agencies, and international bodies are beginning to treat verifiable digital identity not as a niche experiment, but as critical public infrastructure. In this post, we’ll explore how policy is shaping digital identity, from U.S. state laws to European regulations, and why governments care now more than ever.
States Leading the Way: Laboratories of Democracy
In the U.S., states have become the proving ground for verifiable digital identity. Seventeen states, including California, New York, and Georgia, already issue mobile driver’s licenses (mDLs) that are accepted at more than 250 TSA checkpoints. By 2026, that number is expected to double, with projections of 143 million mDL holders by 2030, according to ABI Research forecasts.
Seventeen states now issue mobile driver’s licenses accepted at more than 250 TSA checkpoints - digital ID is already real, growing faster than many expected.
California’s DMV Wallet offers one of the most comprehensive examples. In less than two years, over two million Californians have provisioned mobile driver’s licenses, which can be used at TSA checkpoints, in convenience stores for age-restricted purchases, and even online to access government services—real, everyday transactions that people recognize. In addition to the digital licenses, more than thirty million vehicle titles have been digitized using blockchain, making it easier for people to transfer ownership, register cars, or prove title history without mountains of paperwork. Businesses can verify credentials directly, residents can present them online or in person, and the system is designed to work across states and industries. In other words, this program demonstrates proof that digital identity can scale to millions of people and millions of records while solving real problems.
California’s DMV Wallet has issued over two million mDLs and has digitized over 42 million vehicle titles using blockchain - demonstrating trustworthiness at scale.
Utah took a different approach by legislating principles before widespread deployment. SB 260, passed in 2025, lays down a bill of rights for digital identity. Citizens cannot be forced to unlock their phones to present a digital ID. Verifiers cannot track or build profiles from ID use. Selective disclosure must be supported, allowing people to prove an attribute, like being over 21, without revealing unnecessary details. Digital IDs remain optional, and physical IDs must continue to be accepted. Utah’s framework shows how policy can proactively protect civil liberties while enabling innovation.
Utah’s SB 260 doesn’t just pilot identity tech - it builds in privacy and choice from day one, naming those values as rights.
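To give a sense of what selective disclosure looks like in practice, here is a hypothetical request a verifier might send for a mobile driver's license, loosely modeled on ISO/IEC 18013-5 element names. Only the over-21 attribute is requested; name, address, and date of birth are never asked for. This is an illustration under assumed naming, not Utah's or any vendor's actual implementation.

```typescript
// Hypothetical selective-disclosure request for an mDL, loosely following
// ISO/IEC 18013-5 naming. The boolean is the verifier's "intent to retain"
// flag: false means the value is checked but not stored.
const mdlRequest = {
  docType: "org.iso.18013.5.1.mDL",
  nameSpaces: {
    "org.iso.18013.5.1": {
      age_over_21: false, // request only the over-21 assertion, do not retain it
    },
  },
};

// The wallet returns a signed response containing just `age_over_21: true`,
// so the verifier learns nothing else about the holder.
```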
Together, California and Utah illustrate a spectrum of policymaking. One demonstrates what’s possible with rapid deployment at scale - how quickly millions of people can adopt new credentials when the technology is made practical and widely available. The other shows how legislation can proactively embed privacy and choice into the foundations of digital identity, creating durable protections that guard against misuse as adoption grows. Both approaches are valuable: California proves the model can work in practice, while Utah ensures it works on terms that respect civil liberties. Taken together, they show that speed and safeguards are not opposing forces, but complementary strategies that, if aligned, can accelerate trust and adoption nationwide.
Federal Engagement: Trust, Security, and Compliance
Federal agencies are also stepping in, linking digital identity to national security and resilience. The Department of Homeland Security (DHS) is piloting verifiable digital credentials for immigration—a use case where both accuracy and accessibility are essential.
Meanwhile, the National Institute of Standards and Technology (NIST), through its National Cybersecurity Center of Excellence (NCCoE), has launched a hands-on mDL initiative. In collaboration with banks, state agencies, and technology vendors (including 1Password, Capital One, Microsoft, and SpruceID, among others), the project is building a reference architecture demonstrating how mobile driver’s licenses and verifiable credentials can be applied in real-world use cases: CIP/KYC onboarding, federated credential service providers, and healthcare/e-prescribing workflows. The NCCoE has already published draft CIP/KYC use-case criteria, wireframe flows, and a sample bank mDL information page to show how a financial institution might integrate and present mDLs to customers—bringing theory into usable models for regulation and deployment.
Why the urgency? Centralized identity systems are prime targets for adversaries. Breach one large database, and millions of people’s information is compromised. Decentralized approaches change that risk equation by sharding and encrypting user data, reducing the value of any single “crown jewel” target.
Decentralized identity reshapes the risk equation—no single crown jewel database for adversaries to breach.
Policy is also catching up to compliance challenges in financial services. In July 2025, Congress passed the Guiding and Establishing National Innovation for U.S. Stablecoins (GENIUS) Act, which, among other provisions, directs the U.S. Treasury to treat stablecoin issuers as financial institutions under the Bank Secrecy Act (BSA). Section 9 of the Act requires Treasury to solicit public comment on innovative methods to detect illicit finance in digital assets, including APIs, artificial intelligence, blockchain monitoring, and (critically) digital identity verification.
Treasury’s August 2025 Request for Comment (RFC) builds directly on this mandate. It seeks input on how portable, privacy-preserving digital identity credentials can support AML/CFT and sanctions compliance, reduce fraud, and lower compliance costs for financial institutions. Importantly, the RFC recognizes privacy as a design factor, asking specifically about risks from over-collection of personal data, the sensitivity of information reviewed, and how to implement safeguards alongside compliance.
This is a significant shift: digital identity is not only being framed as a user-rights issue or a convenience feature, but also as a national security and financial stability priority. By embedding identity into the GENIUS Act’s framework for stablecoins and BSA modernization, policymakers are effectively saying that modernized, cryptographically anchored identity is essential for the resilience of U.S. markets.
The European Example: eIDAS 2.0
While the U.S. pursues a patchwork of state pilots and federal engagement, Europe has opted for a coordinated regulatory approach. In May 2024, eIDAS 2.0 came into force, requiring every EU Member State to issue a European Digital Identity Wallet by 2026.
The regulation mandates acceptance across public services and major private sectors like banks, telecoms, and large online platforms. Privacy is baked into the requirements: wallets must be voluntary and free for citizens, support selective disclosure, and avoid central databases. Offline QR options are also mandated, ensuring usability even without connectivity.
Europe is treating digital identity as a right: free, voluntary, private, and accepted across borders.
Why does this matter? For citizens, it means one-click onboarding across borders. For businesses, it means lower compliance costs and reduced fraud. For the EU, it’s a step toward digital sovereignty, reducing dependency on foreign platforms and asserting leadership in global standards.
Identity as Infrastructure
Look closely, and a pattern emerges: policymakers are treating identity as infrastructure. Like roads, grids, or communications networks, identity is a shared resource that underpins everything else. Without it, markets stumble, governments waste resources, and citizens lose trust. With it, economies run smoother, fraud drops, and individuals gain autonomy.
Identity is infrastructure—like roads or grids, it underpins every modern economy and democracy.
This framing (identity as infrastructure) helps explain why governments care now. Fraud losses are staggering, trust in institutions is fragile, and AI is amplifying risks at unprecedented speed. Policy is not just reacting to technology; it’s shaping the conditions for decentralized identity to succeed.
Risks of Policy Done Wrong
Of course, not all policy is good policy. Poorly designed frameworks could centralize power, entrench surveillance, or create vendor lock-in. Imagine if a single state-issued wallet were mandatory for all services, or if verifiers were allowed to log every credential presentation. The result would be digital identity as a tool of control, not freedom.
That’s why principles matter. Utah’s SB 260 is instructive: user consent, no tracking, no profiling, open standards, and continued availability of physical IDs. These are not just policy features; they are guardrails to keep digital identity aligned with democratic values.
Privacy as Policy: Guardrails Before Growth
Alongside momentum in statehouses and federal pilots, civil liberties organizations have raised a critical warning: digital identity cannot scale without strong privacy guardrails. Groups like the ACLU, EFF, and EPIC have cautioned that mobile driver’s licenses (mDLs) and other digital ID systems risk entrenching surveillance if designed poorly.
The ACLU’s Digital ID State Legislative Recommendations outline twelve essential protections: from banning “phone-home” tracking and requiring selective disclosure, to preserving the right to paper credentials and ensuring a private right of action for violations. EFF warns that without these safeguards, digital IDs could “normalize ID checks” and make identity presentation more frequent in American life.
The message is clear: technology alone isn’t enough. Policy must enshrine privacy-preserving features as requirements, not optional extras. Utah’s SB 260 points in this direction by mandating selective disclosure and prohibiting tracking. But the broader U.S. landscape will need consistent frameworks if decentralized identity is to earn public trust.
We'll explore these principles in greater depth in a later post in this series, where we examine how civil liberties critiques shape the design of decentralized identity and why policy and technology must work together to prevent surveillance creep.
SpruceID’s Perspective
At SpruceID, we sit at the intersection of policy and technology. We’ve helped launch California’s DMV Wallet, partnered on Utah’s statewide verifiable digital credentialing framework, and collaborated with DHS on verifiable digital immigration credentials. We also contribute to global standards bodies, such as the W3C and the OpenID Foundation, ensuring interoperability across jurisdictions.
Our perspective is simple: decentralized identity must remain interoperable, privacy-preserving, and aligned with democratic principles. Policy can either accelerate this vision or derail it. The frameworks being shaped today will determine whether decentralized identity becomes a tool for empowerment or for surveillance.
Why Governments Care Now
The urgency comes down to four forces converging at once:
- Fraud costs are exploding. In 2024, Americans reported record losses - $16.6 billion to internet crime (FBI IC3) and $12.5 billion to consumer fraud (FTC). On the institutional side, the average U.S. data breach cost hit $10.22 million in 2025, the highest ever recorded (IBM).
- AI is raising the stakes. Synthetic identity fraud alone accounted for $35 billion in losses in 2023 (Federal Reserve). FinCEN has warned that criminals are now using generative AI to create deepfake videos, synthetic documents, and realistic audio to bypass identity checks and exploit financial systems at scale.
- Global trade requires interoperability. Cross-border commerce depends on reliable, shared frameworks for verifying identity. Without them, compliance costs balloon and innovation slows.
- Citizens expect both privacy and convenience. People want frictionless, consumer-grade experiences from digital services, but they will not tolerate surveillance or being forced into a single system.

Policymakers increasingly see decentralized identity as a way to respond to all four at once. By reducing fraud, strengthening democratic resilience, supporting global trade, and protecting privacy, decentralized identity offers governments both defensive and offensive advantages.
The Policy Frontier
We are standing at the frontier of decentralized identity. States are pioneering real deployments. Federal agencies are tying identity to national security and compliance. The EU is mandating wallets as infrastructure. Around the world, policymakers are realizing that identity is not just a product; it’s the scaffolding for digital trust.
The decisions made in statehouses, federal agencies, and international bodies over the next few years will shape how identity works for decades. Done right, verifiable digital identity can become the invisible infrastructure of freedom, convenience, and security. Done wrong, it risks becoming another layer of surveillance and control.
That’s why SpruceID is working to align policy with technology, ensuring that verifiable digital identity is built on open standards, privacy-first principles, and user control. Governments care now because the stakes have never been higher. And the time to act is now.
This article is part of SpruceID’s series on the future of digital identity in America.
Subscribe to be notified when we publish the next installment.
Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor.
Data Farming Round 159 (DF159) has completed.
DF160 is live, October 16th. It concludes on October 23rd. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.
2. DF structure
The reward structure for DF160 consists solely of Predictoor DF rewards.
Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.
3. How to Earn Rewards, and Claim Them
Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs. To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.
4. Specific Parameters for DF160
Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE
Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.
Predictoor DF rewards are calculated as follows:
- First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards.
- Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.
Updates are always announced at the beginning of a round, if not sooner.
About Ocean and DF Predictoor
Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.
In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.
DF159 Completes and DF160 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
“I had one of those chance airplane conversations recently—the kind that sticks in your mind longer than the flight itself.”
My seatmate was reading a book about artificial intelligence, and at one point, they described the idea of an “infinitely growing AI.” I couldn’t help but giggle a bit. Not at them, but at the premise.
An AI cannot be infinite. Computers are not infinite. We don’t live in a world where matter and energy are limitless. There aren’t enough chips, fabs, minerals, power plants, or trained engineers to sustain an infinite anything.
This isn’t just a nitpicky detail about science fiction. It gets at something I’ve written about before:
- In Who Really Pays When AI Agents Run Wild?, I noted that scaling AI systems isn’t just about clever protocols or smarter algorithms. Every prompt, every model run, every inference carries a cost in water, energy, and hardware cycles.
- In The End of the Global Internet, I argued that we are already moving toward a fractured network where national and regional policies shape what’s possible online.

The “infinite AI” conversation is an example that ties both threads together. We may dream about global systems that grow without end, but the reality is that technology is built on finite supply chains. It’s those supply chains that are turning out to be the real bottleneck for the future of the Internet.
A Digital Identity Digest: Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet (podcast episode, 00:15:19). You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
The real limits aren’t protocols
When people in the identity and Internet standards space talk about limits, we often point to protocols. Can the protocol scale? Will a new protocol successfully replace cookies? Can we use existing protocols to manage delegation across ecosystems?
These are important questions, but they are not the limiting factor. Protocols, after all, are words in documents and lines of code. They can be revised, extended, and reinvented. The hard limits come from the physical world.
- Chips and fabs. Advanced semiconductors require fabrication plants that cost tens of billions of dollars and take years to build. Extreme ultraviolet lithography machines (say that five times, fast) are produced (as of 2023) by exactly one company in the Netherlands—ASML—and delivery schedules are measured in years.
- Minerals and materials. Every computer depends on a handful of rare inputs: lithium for batteries, cobalt for electrodes, rare earth elements for magnets, neon for chipmaking lasers, high-purity quartz for wafers. These are not evenly distributed across the globe. China dominates rare earth refining, while Ukraine has been a critical source of neon. And there is no substitute for water in semiconductor production.
- Power and cooling. Training a frontier AI model consumes gigawatt-hours of electricity. Running hyperscale data centers requires water for cooling that rivals the consumption of entire towns. When power grids are strained, there’s no protocol that can fix it.
- People. None of this runs itself. Chip designers, process engineers, cleanroom technicians, miners, metallurgists—these are highly specialized roles. Many countries face demographic headwinds: aging workforces and immigration restrictions in the current tech powerhouses, and uneven access to education where populations are booming.

You can’t standardize your way out of these shortages. You can only manage, redistribute, or adapt to them.
Geopolitics and demographics
The Internet was often described as “borderless,” but the hardware that makes it run is anything but. Supply chains for semiconductors, network equipment, and the minerals that feed them are deeply entangled with geopolitics and demographics.
No region has a fully independent pipeline:
- The US leads in chip design but depends on the Indo-Pacific region for chip manufacturing.
- China dominates rare earth refining but relies on imports of high-end chipmaking tools it cannot yet build domestically.
- Europe has niche strengths in lithography and specialty equipment but lacks the scale for end-to-end independence.
- Countries like Japan, India, and Australia supply critical inputs—from silicon wafers to rare earth ores—but not the whole stack.

This interdependence is not an accident. Globalization optimized supply chains for efficiency, not resilience. Each region specialized in the step where it had a comparative advantage, creating a finely tuned but fragile web.
Demographics add another layer. Many of the most skilled engineers in chip design and manufacturing are reaching retirement age. The same is true for technical standards architects; they are an aging group. Training replacements takes years, not months. Immigration restrictions in key economies further shrink the talent pool. Even if we had the minerals and the fabs, we might not have the people to keep the pipelines running.
The illusion of global resilience
For decades, efficiency reigned supreme. Tech companies embraced just-in-time supply chains. Manufacturers outsourced to the cheapest reliable suppliers. Investors punished redundancy as waste.
That efficiency gave us cheap smartphones, affordable cloud services, and rapid AI innovation. But it also created a brittle system. When one link in the chain breaks, the effects cascade:
- A tsunami in Japan or a drought in Taiwan can disrupt global chip supply.
- A geopolitical dispute can halt exports of critical minerals overnight.
- A labor strike at a port can ripple through shipping networks for months.

We saw this during the 2020–2023 global chip shortage. A pandemic-driven demand spike collided with supply chain shocks: a fire at a Japanese chip plant, drought in Taiwan, and war in Ukraine cutting off neon supplies. Automakers idled plants. Consumer electronics prices rose. Lead times stretched into years.
AI at scale only magnifies the problem. Training one large model requires thousands of specialized GPUs. If one upstream material is constrained—say, the gallium used in semiconductors—it doesn’t matter how advanced your algorithms are. The model doesn’t get trained.
Cross-border dependencies never vanish
This is where the conversation loops back to the idea of a “global Internet.” Even if the Internet fragments into national or regional spheres—the “splinternet” scenario—supply chains remain irreducibly cross-border.
You can build your own national identity system. You can wall off your data flows. But you cannot build advanced technology entirely within your own borders without enormous tradeoffs.
- A U.S. data center may run on American-designed chips, but those chips likely contain rare earths refined in China.
- A Chinese smartphone may use domestically assembled components, but the photolithography machine that patterned its chips came from Europe.
- An EU-based AI startup may host its models on European servers, but the GPUs were packaged and tested in Southeast Asia.

Fragmentation at the protocol and governance level doesn’t erase these dependencies. It only adds new layers of complexity as governments try to manage who trades with whom, under what terms, and with what safeguards.
The myth of “digital sovereignty” often ignores the material foundations of technology. Sovereignty over protocols does not equal sovereignty over minerals, fabs, or skilled labor.
Opportunities in regional diversity
If infinite AI is impossible and total independence is unrealistic, what’s left? One answer is regional diversity.
Instead of assuming we can build one perfectly resilient global supply chain, we can design multiple overlapping regional ones. Each may not be fully independent, but together they reduce the risk of “one failure breaks all.”
Examples already in motion:
- United States. The CHIPS and Science Act is pouring billions into domestic semiconductor manufacturing (though how long that act will be in place is in question). The U.S. is also investing in rare earth mining and processing, though environmental and permitting challenges remain.
- European Union. The EU Raw Materials Alliance is working to secure critical mineral supply and recycling. European firms already lead in certain high-end equipment niches.
- Japan and South Korea. Both countries are investing in duplicating supply chain segments currently dominated by China, such as battery materials.
- India. This country has ambitious plans to build local chip fabs and become a global assembly hub.
- Australia and Canada. Positioned as suppliers of critical minerals, Australia and Canada are working to move beyond extraction to refining.

Regional chains come with tradeoffs: higher costs, slower rollout, and sometimes redundant investments. But they create buffers. If one region falters, others can pick up slack.
They also open the door to more design diversity. Different regions may approach problems in distinct ways, leading to innovation not just in technology but in governance, regulation, and labor practices.
Reframing the narrative
So let’s come back to that airplane conversation. The myth of infinite AI (or infinite cloud computing, for that matter) isn’t just bad science fiction. It’s a misunderstanding of how technology actually grows.
AI, like the Internet itself, is bounded by the real world. Protocols matter, but they are only the top layer. Beneath them are the chips, the minerals, the power, and the people. Those are the constraints that will shape the next decade.
Which leads us to the current irony in all of this: even as the Internet fragments along political and regulatory lines, the supply chains that support it remain irreducibly global. We can argue about governance models and sovereignty all we like and target tariffs at a whim, but a smartphone or a GPU is still a planetary collaboration.
The challenge, then, isn’t to pretend we can achieve total independence. It’s to design supply chains—local, regional, and global—that acknowledge these limits and build resilience into them.
Looking ahead
When I wrote about The End of the Global Internet, I wanted to show that fragmentation is not just possible, but already happening. But fragmentation doesn’t erase interdependence. It just makes it messier.
When I wrote about Who Pays When AI Agents Run Wild? I wanted to point out that scaling computation is not a free lunch. It comes with bills measured in electricity, water, and silicon.
This post ties both threads together: the real bottlenecks in technology are not the protocols we argue about in standards meetings. They are the supply chains that determine whether the chips, power, minerals, and people exist in the first place.
AI is a vivid example because its appetite is so enormous. But the lesson applies more broadly. The Internet is fracturing into spheres of influence, but those spheres will remain bound by the physical pipelines that crisscross borders.
So the next time someone suggests an infinite AI, or a fully sovereign domestic Internet, remember: computers aren’t infinite. Supply chains aren’t sovereign. The real question isn’t whether we can break free of those facts, it’s how we design systems that can thrive within them.
If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
[00:00:29] Welcome back to The Digital Identity Digest. I’m Heather Flanagan, and today, we’re going to dig into one of those invisible but very real limits on our digital future — supply chains.
[00:00:42] Now, I know supply chains don’t sound nearly as exciting as AI agents or new Internet protocols. But stay with me — because without the physical stuff (chips, minerals, power, and people), all of those clever protocols and powerful algorithms don’t amount to much.
[00:01:00] This episode builds on two earlier posts:
- Who Really Pays for AI? — exploring how AI comes with a bill in water, electricity, and silicon.
- The End of the Global Internet — examining how fragmentation is reshaping the network itself.

Both lead us here: the supply chain is one of the biggest constraints on how far both AI and the Internet can actually go.
[00:01:27] So, if you really want to understand the future of technology, you can’t just look at the code or the protocols.
[00:01:35] You have to look at the supply chains.
The Reality Check: Technology Needs Stuff
[00:01:38] Let’s start with a story. On a recent flight, my seatmate was reading a book about artificial intelligence. Go him.
[00:01:49] At one point, he leaned over and described an idea of an infinitely growing AI.
[00:01:56] I couldn’t help but laugh a little — because computers are not infinite.
[00:02:04] There just aren’t enough chips, fabs, minerals, power plants, or trained people on the planet to sustain infinite anything. It’s not imagination — it’s physics, chemistry, and labor.
[00:02:20] That exchange captured something I keep seeing in conversations about AI, identity, and the Internet. We treat protocols as if they’re the bottleneck. But ultimately, it’s the supply chains underneath that constrain everything.
Chips, Fabs, and the Fragility of Progress
[00:02:38] Let’s break that down — starting with chips and fabricators, also known as fabs.
[00:02:44] The most advanced semiconductors come from fabrication plants that cost tens of billions of dollars to build — and take years, even a decade, to come online.
[00:02:56] And the entire process hinges on one company — ASML in the Netherlands.
[00:03:03] They’re the only supplier of extreme ultraviolet lithography machines. Without those, you simply can’t make the latest generation of chips. The backlog? Measured in years.
[00:03:21] Then there’s the issue of minerals and materials:
- Lithium for batteries
- Cobalt for electrodes
- Rare earth elements for magnets
- Neon for chipmaking lasers
- High-purity quartz for wafers

[00:03:44] These resources aren’t evenly distributed. China refines most rare earths. Ukraine supplies much of the world’s neon. And water — another critical input — is also unevenly available.
Power, People, and Production
[00:04:05] A frontier AI model doesn’t just use a lot of electricity — it uses gigawatt-hours of power.
[00:04:26] Running a hyperscale data center can consume as much water as a small city. And when power grids are strained, no clever standard can conjure new electrons out of thin air.
[00:04:26] Then there’s the people. None of this runs itself:
- Chip designers
- Process engineers
- Clean room technicians
- Miners and metallurgists

[00:04:57] These are highly specialized roles — and many experts are nearing retirement. Replacing them takes years, not months. Immigration limits compound the challenge.
[00:05:05] So yes, protocols matter — but the real limits come from the physical world.
Geopolitics and the Global Supply Web
[00:05:16] The Internet may feel borderless, but the hardware that makes it work is not.
[00:05:26] Every link in the supply chain is tangled in geopolitics:
- The U.S. leads in chip design but depends on Taiwan and South Korea for manufacturing.
- China dominates rare earth refining but still relies on imported chipmaking tools.
- Europe has niche strengths in lithography but lacks materials for full independence.
- Japan, India, and Australia provide key raw inputs but not the entire production stack.

[00:06:16] This global interdependence made systems efficient — but also fragile.
Demographics: The Aging Workforce[00:06:21] There’s also a demographic angle. Skilled engineers and technicians are aging out.
[00:06:35] In about 15 years, we’ll see significant skill gaps. Even if minerals and fabs are available, we might not have the people to keep things running.
[00:06:58] The story isn’t just about where resources are — it’s about who can use them.
The Illusion of Resilience
[00:07:06] For decades, efficiency ruled. Tech companies built “just-in-time” supply chains, outsourcing to low-cost, reliable suppliers.
[00:07:21] That gave us cheap smartphones and rapid innovation — but also brittle systems.
[00:07:38] A few reminders of fragility:
2011: Tsunami in Japan disrupts semiconductor production.
2021: Drought in Taiwan forces fabs to truck in water.
2022: War in Ukraine cuts off neon supplies.
2020–2023: Global chip shortage reveals how fragile everything truly is.
[00:08:18] AI at scale only magnifies this fragility. Even one constrained resource, like gallium, can halt model training — regardless of how advanced the algorithms are.
The Splinternet Still Needs a Global Supply Chain
[00:08:48] Even as the Internet fragments into regional “Splinternets,” supply chains remain global.
[00:09:18] You can wall off your data, but you can’t build advanced tech entirely within one nation’s borders.
Examples include:
A U.S. data center using chips refined with Chinese minerals.
A Chinese smartphone using European lithography tools.
An EU startup running on GPUs packaged in Southeast Asia.
[00:09:46] Fragmentation adds complexity, not independence.
The Myth of Digital Sovereignty
[00:09:46] The idea of total “digital sovereignty” sounds empowering — but it’s misleading.
[00:10:07] You can control protocols, standards, and regulations.
But you can’t control:
[00:10:14] So, what’s the alternative? Regional diversity.
Instead of one global, fragile chain, we can build multiple overlapping regional systems:
U.S.: The CHIPS and Science Act investing in domestic semiconductor manufacturing.
EU: The Raw Materials Alliance strengthening mineral supply and recycling.
Japan & South Korea: Building redundancy in battery and material supply.
India: Launching its “Semiconductor Mission.”
Australia & Canada: Expanding refining capacity for critical minerals.
[00:11:38] Yes, these efforts are costlier and slower — but they build buffers. If one region falters, another can pick up the slack.
The Takeaway: Infinite AI is a Myth
[00:12:06] That airplane conversation sums it up. The myth of infinite AI isn’t just science fiction — it’s a misunderstanding of how technology works.
[00:12:17] AI, like the Internet, is bounded by the real world — by chips, minerals, power, and people.
[00:12:45] Even as the Internet fragments, its supply chains remain irreducibly global.
[00:13:02] The challenge isn’t escaping these limits — it’s designing systems that thrive within them.
Closing Thoughts
[00:13:27] The real bottleneck in technology isn’t protocols — it’s supply chains.
[00:13:48] AI is just the most visible example of how finite our digital ambitions are.
[00:14:13] So, the next time you hear someone talk about “infinite AI” or a “sovereign Internet,” remember:
Computers are not infinite. Supply chains cannot be sovereign.
[00:14:19] The real question isn’t how to escape those facts — it’s how to build systems that can thrive within them.
Outro
[00:14:19] Thanks for listening to The Digital Identity Digest.
If you enjoyed the episode:
Share it with a colleague or friend.
Connect with me on LinkedIn @hlflanagan.
Subscribe and leave a rating wherever you listen to podcasts.
[00:15:02] You can also find the full written post at sphericalcowconsulting.com.
Stay curious, stay engaged — and let’s keep the conversation going.
The post Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet appeared first on Spherical Cow Consulting.
You’ve probably seen that pop-up asking you to verify your identity when signing up for a new banking app or wallet. That’s KYC, short for Know Your Customer. It helps businesses confirm that users are real, not digital impostors trying to pull a fast one.
In the old days, this meant long queues, forms, and signatures. Today, KYC verification online makes that process digital, instant, and painless.
Here’s how the two compare.
Time Taken: Traditional KYC takes days or weeks; online KYC verification takes a few minutes.
Method: Traditional KYC relies on manual paperwork; online KYC uses automated verification.
Accuracy: Traditional KYC is prone to error; online KYC offers AI-based precision.
Accessibility: Traditional KYC requires branch visits; online KYC works anywhere, anytime.
Security: Traditional KYC is paper-based; online KYC is encrypted and biometric.
According to Deloitte’s “Revolutionising Due Diligence for the Digital Age”, digital verification and automation can drastically improve compliance efficiency and customer experience, both of which are central to modern financial services.
That’s why KYC verification online has become the backbone of secure onboarding for fintechs, banks, and even government platforms.
How KYC Verification Online Actually Works
When you perform a KYC check online, it feels quick and effortless, but behind that simple process, powerful AI is doing the hard work. It matches your selfie with your ID, reads your details using OCR, and cross-checks everything with trusted databases, all in seconds.
Here’s what’s really happening:
You upload your ID (passport, driver’s license, or national ID).
You take a quick selfie using your phone camera.
The system compares your selfie to the photo on your ID using advanced facial recognition.
OCR (Optical Character Recognition) extracts the text from your ID to verify your name, address, and date of birth.
Data is validated against government or regulatory databases.
You get approved, often in under two minutes.
That’s KYC authentication in action: fast, secure, and contact-free.
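To make that flow concrete, here is a minimal Python sketch of the pipeline described above. It is an illustration only, not any vendor's API: the helper functions (ocr_extract, face_similarity, registry_lookup) and the match threshold are placeholders for whatever OCR, face-matching, and registry services a real deployment would plug in.

```python
# A minimal sketch of the online KYC flow described above. The helpers are
# placeholders, not a specific vendor's SDK.
from dataclasses import dataclass

FACE_MATCH_THRESHOLD = 0.80  # illustrative threshold, tuned per deployment


@dataclass
class KycResult:
    approved: bool
    reason: str


def ocr_extract(id_image: bytes) -> dict:
    """Placeholder: run OCR over the ID image and return structured fields."""
    raise NotImplementedError


def face_similarity(selfie: bytes, id_portrait: bytes) -> float:
    """Placeholder: return a similarity score between 0.0 and 1.0."""
    raise NotImplementedError


def registry_lookup(fields: dict) -> bool:
    """Placeholder: validate extracted fields against an authoritative source."""
    raise NotImplementedError


def run_kyc_check(id_image: bytes, selfie: bytes) -> KycResult:
    # 1. Read name, date of birth, and document number from the ID.
    fields = ocr_extract(id_image)

    # 2. Compare the live selfie against the portrait on the document.
    score = face_similarity(selfie, id_image)
    if score < FACE_MATCH_THRESHOLD:
        return KycResult(False, f"face match too low ({score:.2f})")

    # 3. Cross-check the extracted fields with a trusted registry.
    if not registry_lookup(fields):
        return KycResult(False, "document data failed registry validation")

    return KycResult(True, "identity verified")
```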
According to the NIST Face Recognition Vendor Test (FRVT), today’s leading algorithms are over 20 times more accurate than those used just a decade ago. That leap in precision is one reason why eKYC verification is now trusted by global banks and fintech companies.
Why Businesses Are Switching to KYC Verification Online
No one enjoys filling out endless forms or waiting days for approvals. That’s why businesses everywhere are turning to online KYC verification systems; they make onboarding smoother for customers while cutting costs for organizations.
Some of the biggest reasons behind this shift include:
Faster onboarding times that enhance customer experience.
Greater accuracy from AI-powered checks.
Enhanced fraud detection through biometric validation.
Regulatory compliance with frameworks like GDPR.
Global accessibility for users to verify KYC online anytime, anywhere.
Research by Deloitte Insights notes that organizations automating due diligence and verification processes reduce manual costs while increasing compliance accuracy, a huge win for financial institutions managing high user volumes.
Simply put, online KYC check systems help companies onboard customers faster while minimizing human error and fraud.
Technology Behind Modern KYC Verification Solutions
Every smooth verification process is powered by some serious tech muscle.
Artificial Intelligence (AI) helps detect fraudulent IDs and spot manipulation patterns in photos. Machine learning continuously improves accuracy by learning from new data. Facial recognition verifies your selfie against your ID photo with pinpoint precision, tested under the NIST FRVT benchmark.
Meanwhile, Optical Character Recognition (OCR) pulls data from your documents instantly, and encryption technologies protect that data as it moves across systems.
For developers and organizations wanting to implement their own KYC verification solutions, Recognito’s face recognition SDK and ID document recognition SDK are reliable tools that simplify integration.
You can also explore Recognito’s GitHub repository to see how real-time AI verification systems evolve in practice.
How to Verify Your KYC Online Without the Hassle
If you haven’t tried KYC verification online yet, it’s simpler than you think. Just open the app, upload your ID, take a selfie, and let the system handle the rest.
Most platforms now allow you to check online KYC status in real time. You’ll see exactly when your verification moves from “in review” to “approved.”
Curious about how it all works behind the scenes? Try the ID Document Verification Playground. It’s an interactive way to see how modern KYC systems scan, process, and authenticate IDs, with no real data required.
According to Allied Market Research, the global eKYC verification market is expected to reach nearly $2.4 billion by 2030, growing at over 22% CAGR. That surge shows just how essential digital KYC has become to the future of online services.
The Future of KYC Authentication
The next generation of KYC authentication is going to feel almost invisible. Biometric technology and AI are merging to make verification instant; imagine unlocking your account just by looking at your camera.
In India, systems like UIDAI’s Aadhaar e-KYC have already transformed how millions of users open bank accounts and access government services. It’s fast, paperless, and secure.
Global research by PwC on Digital Identity predicts that the world is moving toward a unified digital identity model, one verified profile for all services, from banking to healthcare.
This is the future of KYC identity verification: a seamless, secure, and user-friendly process that builds trust without slowing you down.
Final Thoughts
In the end, KYC verification online is about more than compliance; it’s about confidence. It ensures that businesses and customers can interact safely in an increasingly digital world.
It eliminates paperwork, reduces fraud, and makes onboarding faster and smarter. That’s progress everyone can appreciate.
If you’re a business exploring modern KYC verification solutions, check out Recognito. Their AI-powered technology helps companies verify identities accurately, comply with regulations, and create frictionless user experiences.
Frequently Asked Questions
1. How does KYC verification online work?
You upload your ID, take a selfie, and the system checks both using AI. KYC verification online confirms your identity in just a few minutes.
2. Is eKYC verification safe to use?
Yes, eKYC verification is secure since it uses encryption and biometric checks. Your personal data stays protected throughout the process.
3. What do I need to verify my KYC online?
To verify KYC online, you only need a valid government ID and a selfie. The rest is handled automatically by the system.
4. Why are companies using online KYC checks now?
Businesses use online KYC check systems because they’re faster and help prevent fraud. It also makes onboarding easier for users.
5. What makes a good KYC verification solution?
A great KYC verification solution should be fast, accurate, and compliant with privacy laws. It should make KYC identity verification simple for both users and companies.
The post FedRAMP Levels Explained & Compared (with Recommendations) appeared first on 1Kosmos.
If you're in Revenue Operations, Marketing Ops, or Sales Ops, your core mandate is velocity. Every week, someone needs to integrate a new tool: "Can we connect Drift to Salesforce?" "Can we push this data into HubSpot?" "Can you just give marketing API access?" You approve the OAuth tokens, you connect the "trusted" apps, and you enable the business to move fast. You assume the security team has your back.
But the ShinyHunters extortion spree that surfaced this year, targeting Salesforce customer data, exposed the deadly vulnerability built into that convenience-first trust model. This wasn't just a "cyber event" for the security team; it was a devastating wake-up call for every operator who relies on that data. Suddenly, every connected app looks like a ticking time bomb, filled with sensitive PII, contact records, and pipeline data.
Anatomy of the Attack: Hacking Authorization, Not Authentication
The success of the ShinyHunters campaign wasn’t about a software bug or a cracked password. It was about trusting the wrong thing. The attackers strategically bypassed traditional MFA by exploiting two key vectors: OAuth consent and API token reuse.
Path 1: The Fake "Data Loader" That Wasn't (OAuth Phishing)
The most insidious vector involved manipulating human behavior through advanced vishing (voice phishing).
Attackers impersonated internal IT support, creating urgency to trick an administrator. Under the pretext of fixing an urgent issue, the victim was directed to approve a malicious Connected App—often disguised as a legitimate tool like a Data Loader.
The result was the same as a physical breach: the employee, under false pretenses, granted the attacker’s malicious app a valid, persistent OAuth access token. This token is the backstage pass—it gave the attacker free rein to pull vast amounts of CRM data via legitimate APIs, quietly and without triggering MFA or login-based alerts.
The parallel vector (Path 2) targeted tokens from already integrated third-party applications, such as Drift or Salesloft.
Attackers compromised these services to steal their existing OAuth tokens or API keys used for the Salesforce integration. These stolen tokens act like session cookies: they are valid, silent, and allow persistent access to Salesforce data without ever touching a login page. Crucially, once stolen, these tokens can be reused until revoked, representing an open back door into your most valuable data.
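To see why this is so dangerous, consider how a bearer token is actually used. The sketch below is illustrative only and assumes the common `requests` package; the endpoint path is a hypothetical placeholder, not a real Salesforce API route. The point is that the token is the only credential presented, so no password, MFA prompt, or login page is ever involved, and the access keeps working until the token is revoked.

```python
# Illustrative only: why a stolen bearer token means "valid, silent" access.
# The endpoint below is a hypothetical placeholder, not a real vendor API path.
import requests


def fetch_records(api_base: str, access_token: str) -> list:
    # The only credential presented is the token itself -- no password,
    # no MFA prompt, no interactive login page is ever involved.
    response = requests.get(
        f"{api_base}/records/contacts",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```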
Both paths point to a single conclusion: your digital ecosystem is built on convenience-first trust, and in the hands of sophisticated attackers, trust is the ultimate exploitable vulnerability.
For years, security focused on enforcing strong MFA and password rotation. But the ShinyHunters campaign proved that this focus is too narrow.
You can enforce the best MFA, rotate passwords monthly, and check all your compliance boxes. But if an attacker can:
Convince an employee to approve a fake OAuth app, or
Steal a token that never expires from an integration
...then everything else is just window dressing.
The uncomfortable truth for RevOps is that attackers are not exploiting a zero-day; they are hacking how you work. The industry-wide shift now, led by NIST and CISA, is toward phishing-resistant authentication. Why? Because the weak spots exploited in this breach - reusable passwords and phishable MFA - are eliminated when you replace them with cryptographic, device-bound credentials.
Where HYPR Fits In: Making Identity Deterministic, Not Trust-Based
HYPR was built for moments like this—when the mantra "never trust, always verify" must transition from a slogan into an operational necessity. Our Identity Assurance platform delivers the deterministic certainty needed to stop both forms of token theft cold.
Here’s how HYPR's approach prevents these breach vectors:
Eliminating Shared Secrets: HYPR Authenticate uses FIDO2-certified passwordless authentication. There is no password or shared secret for attackers to steal, replay, or trick a user into approving. This automatically eliminates the phishable vector used in Path 1.
Domain Binding Stops OAuth Phishing: FIDO Passkeys are cryptographically bound to the specific URL of the service. If an attacker tries to trick a user into authenticating on a malicious domain (OAuth phishing), the key will not match the registered domain, and the authentication will fail instantly and silently.
Deterministic Identity Proofing for High-Risk Actions (HYPR Affirm): Granting new app privileges is a high-risk action. HYPR Affirm brings deterministic identity proofing—using live liveness checks, biometric verification, and document validation—before any credential or app authorization is granted. This stops social engineering attacks aimed at the help desk or an administrator because you ensure the person making the request is the rightful account owner.
No Unchecked Trust (HYPR Adapt): Every high-risk action - whether it’s a new device enrollment, a token reset, or a highly-privileged connected app approval - can trigger identity re-verification. If your HYPR Adapt risk engine detects anomalous API activity (Path 2), it can dynamically challenge the user to re-authenticate with a phishing-resistant passkey, immediately revoking the session/token until certainty is established.
This platform isn't about simply locking things down; it's about building secure, efficient systems that can verify who is on the other end with cryptographic certainty.
Next Steps for RevOps: Championing the Identity Perimeter
The Salesforce breach was about trust at scale. As RevOps leaders, you need to protect not just the data, but how that data is accessed and shared.
Here is what you must prioritize now:
Revisit Your Integrations: Know which connected apps have offline access and broad permissions (e.g., refresh_token, full) to your Salesforce data - and ruthlessly trim the list to only essential tools (a simple triage sketch follows below).
Automate Least Privilege: Implement a policy for temporary tokens and expiring scopes. Move away from permanent credentials where possible, forcing periodic re-consent.
Champion Phishing-Resistant MFA: Make FIDO2 Passkeys the minimum baseline for every high-value user and administrator. Anything less is a calculated risk you can’t afford.
The uncomfortable truth is: Attackers did not utilize brute force - they strategically weaponized OAuth consent and token theft. The good news is that passwordless, phishing-resistant authentication would have stopped both paths cold.
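As a starting point for that integration review, here is a minimal, vendor-neutral sketch. It assumes you have already exported your connected-app grants into a simple list of dicts (name, scopes, days since last use) from whatever admin tooling you use; the risky-scope set and staleness window are assumptions to tune for your environment.

```python
# A minimal triage sketch for connected-app grants exported from admin tooling.
# Input format, scope names, and thresholds are assumptions, not a vendor API.

RISKY_SCOPES = {"refresh_token", "full", "offline_access"}
STALE_AFTER_DAYS = 90


def triage_connected_apps(apps: list[dict]) -> list[dict]:
    """Return apps worth reviewing: broad scopes or long-unused grants."""
    flagged = []
    for app in apps:
        risky = RISKY_SCOPES & set(app.get("scopes", []))
        stale = app.get("last_used_days", 0) > STALE_AFTER_DAYS
        if risky or stale:
            flagged.append({
                "name": app["name"],
                "risky_scopes": sorted(risky),
                "stale": stale,
            })
    return flagged


if __name__ == "__main__":
    example = [
        {"name": "Data Loader", "scopes": ["api", "refresh_token"], "last_used_days": 12},
        {"name": "Old chat integration", "scopes": ["full"], "last_used_days": 200},
    ]
    for item in triage_connected_apps(example):
        print(item)
```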
Unlock the pipeline velocity you need with the deterministic security you can trust.
👉 Request a Demo of the HYPR Identity Assurance Platform Today.
All the hard work put into Wind Tunnel, our scale testing suite, is starting to become visible! We’re now collecting metrics from both the host OS and Holochain, in addition to the scenario metrics we’d already been collecting (where zome call time and arbitrary scenario-defined metrics could be measured). We’re also running scenarios on an automated schedule and generating reports from them. Our ultimate goals are to be able to:
monitor releases for performance improvements and regressions,
identify bottlenecks for improvement, and
turn report data into release-specific information you can use and act upon in your app development process.
Finally, Wind Tunnel is getting the ability to select a specific version of Holochain right from the test scenario, which will be useful for running tests on a network with a mix of different conductors. It also saves us some dev ops headaches, because the right version for a test can be downloaded automatically as needed.
Holochain 0.6: roughly two (ideal) weeks remaining
Our current estimates predict that Holochain 0.6’s first release will take about two team-weeks to complete. Some of the dev team is focused on Wind Tunnel and other tasks, so this may not mean two calendar weeks, but it’s getting closer. To recap what we’ve shared in past Dev Pulses, 0.6 will focus on:
Warrants — reporting validation failures to agent activity authorities, who collect and supply these warrants to anyone who asks for them. As soon as an agent sees and validates a warrant, they retain it and block the bad agent, even if they aren’t responsible for validating the agent’s data. If the warrant itself is invalid (that is, the warranted data is valid), the authority issuing the warrant will be blocked. Currently warrants are only sent in response to a get_agent_activity query; in the future, they’ll be sent in response to other DHT queries too.
Blocking — the kitsune2 networking layer will allow communication with remote agents to be blocked, and the Holochain layer will use this to block agents after a warrant against them is discovered.
Performance improvements — working with Unyt, we’ve discovered some performance issues with must_get_agent_activity and get_agent_activity which we’re working on improving.
You have probably already seen the recent announcements from Holochain and Holo (or the livestream), but if not, here’s the news from the org: Holo is open-sourcing its always-on node software in an OCI-compliant container called Edge Node.
This is going to do a couple things for hApp developers:
make it easier to spin up always-on nodes to provide data availability and redundancy for your hApp networks,
provide a base dockerfile for devs to add other services to — maybe an SMS, email, payment, or HTTP gateway for your hApp, and
allow more hosts to set up nodes, because Docker is a familiar distribution format.
I think this new release connects Holo back to its roots — the decentralised, open-source values that gave birth to it — and we hope that’ll mean more innovation in the software that powers the Holo network. HoloPort owners will need to be handy with the command line, but a recent survey found that almost four fifths of them already are.
So if you want to get involved, either to bootstrap your own infrastructure or support other hApp creators and users, here’s what you can do:
Download the latest HolOS ISO for HoloPorts, other hardware, VMs, and cloud instances.
Download the Edge Node container for Docker, Kubernetes, etc.
Get in touch with Rob from Holo on the Holo Forum, the Holo Edge Node Support Telegram, Calendly, or the DEV.HC Discord (you’ll need to self-select the Access to: Projects role in the #select-a-role channel, then go to the #always-on-nodes channel).
Join the regular online Holo Huddle calls for support (get access to these calls by getting in touch with Rob above).
Soon, there’ll be a series of Holo Forge calls for people who want to focus on building the ecosystem (testing, modifying the Edge Node container, etc).
Join us on the DEV.HC Discord at 16:00 UTC for the next Dev Office Hours call — bring your ideas, questions, projects, bugs, and hApp development challenges to the dev team, where we’ll do our best to respond to them. See you there!
Decentralized identity is becoming the backbone of how organizations, governments, and individuals exchange trusted information.
In this live workshop, Agne Caunt (Product Owner, Dock Labs) and Richard Esplin (Head of Product, Dock Labs) guided learners through the foundations of decentralized identity: how digital identity models have evolved, the Trust Triangle that powers verifiable data exchange, and the technologies behind it: from verifiable credentials to DIDs, wallets, and biometric-bound credentials.
Below are the core takeaways from the session.
Hey folks 👋
Following the rapid-fire releases in the last few newsletters, we have a quieter one for you this edition.
Everyone’s busy working on some bigger features and additions to Kin, meaning not much has gone out in the last two weeks.
So instead, this’ll be a sneak peek into what’s coming really soon - with the usual super prompt at the end for you.
What (will be) new with Kin 🕑
Your Kin, expanded 🌱
The biggest change coming up is our rollout of Kin Accounts. Don’t worry: these accounts won’t store any of your conversation data - just some minimal basics that we’ll keep secure.
We’ll be introducing Kin Accounts to lay the groundwork for multi-device sync (which inches closer!), more integrations into Kin, and eventually Kin memberships.
More information on Kin Accounts, and what we mean by “minimal basics”, will come out soon too, so you stay fully informed.
More personal advisors and notifications 🧩
Off the back of the positive feedback for the advisor updates covered in the last edition, we’re continuing to expand their personalities and push notification abilities.
Very soon, you’ll notice that each advisor feels even more unique, more understanding of you, and more suited to their role - both in chat and in push notifications.
And in case you missed it, you have full control over the push notification frequency. If you want to hear more from an advisor while outside Kin, you can turn it up in each advisor’s edit tab from the home screen - and if you want to hear less from them, you can turn it down.
Memory appears in these updates almost every time - and that’s because we really are working on it almost every week.
The imminent update will continue to work toward our long-standing goal of making Kin the best personal AI at understanding time in conversations - something we’ve explained in more depth in previous articles.
More on this when the next stage of the update rolls out!
Similarly, Journaling also makes another appearance as we continue to re-work it according to your feedback. Guided daily and weekly Journals will help you track your progress, more visible streak counts will help keep you involved, and a new prompting system will help entries feel more insightful. You’ll hear more about exactly what’s changing once we’ve released some of it.
Start a conversation 💬
I know this reminder is in every newsletter - but that’s because it’s integral to Kin.
Kin is built for you, with your ideas. So, your feedback is essential to helping us know whether we’re making things the way you like them.
The KIN team is always around at hello@mykin.ai for anything, from feature feedback to a bit of AI discussion (though support queries will be better helped over at support@mykin.ai).
To get more stuck in, the official Kin Discord is still the best place to interact with the Kin development team (as well as other users) about anything AI.
We have dedicated channels for Kin’s tech, networking users, sharing support tips, and for hanging out.
We also regularly run three casual calls every week - you’re welcome to join:
Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.
Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.
Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.
Kin is yours, not ours. Help us build something you love!
Finally, you can also share your feedback in-app. Just screenshot to trigger the feedback form.
Our current reads 📚
Article: OpenAI admits to forcibly switching subscribers away from GPT 4 and 5 models in some situations
READ - techradar.com
Article: San Diego State University launches first AI responsibility degree in California
READ - San Diego State University
Article: Australia’s healthcare system adopting AI tools
READ - The Guardian
Article: California’s AI laws could balance innovation and regulation
READ - techcrunch.com
This week, your Kin will help you answer the question:
“How can I better prepare for change?”
If you have Kin installed and up to date, you can tap the link below (on mobile!) to explore how you think about pressure, and how you can keep cool under it.
As a reminder, you can do this on both iOS and Android.
We build Kin together 🤝
If you only ever take one thing away from these emails, it should be that you have as much say in Kin as we do (if not more).
So, please chat in our Discord, email us, or even just shake the app to get in contact with anything and everything you have to say about Kin.
With love,
The KIN Team
The 6-month plan to go from zero to traction (with weekly tasks you can start today)
In our recent live podcast, Richard Esplin (Dock Labs) sat down with Andrew Hughes (VP of Global Standards, FaceTec) and Ryan Williams (Program Manager of Digital Credentialing, AAMVA) to unpack the new ISO standards for mobile driver’s licenses (mDLs).
One topic dominated the discussion: server retrieval.
The journey from a signed contract to a fully deployed security solution is one of the most challenging in enterprise technology. For a mission-critical function like identity, the stakes are even higher. It requires more than just great technology; it demands a true partnership to drive change across massive, complex organizations.
I sat down with HYPR’s SVP of Worldwide Sales, Doug McLaughlin, to discuss what it really takes to get from the initial sale to the finish line, and how HYPR works with customers to manage the complexities of procurement, organizational buy-in, and full-scale deployment for millions of users.
Let’s talk about the initial hurdles – procurement and legal. These processes can stall even the most enthusiastic projects. How do you get across that initial finish line?
Doug: By the time you get to procurement and legal, the business and security champions should be convinced of the solution's value. These teams aren't there to re-evaluate whether the solution is needed; they're there to vet who is providing it and under what terms. The biggest mistake you can make is treating them like a final sales gate.
Our approach is to be radically transparent and prepared. We have our security certifications, compliance documentation, and legal frameworks ready to go well in advance. We’ve already proven the business value and ROI to our champions, who then become our advocates in those internal procurement meetings. It’s about making their job as easy as possible. When you’ve built a strong, trust-based relationship across the organization, procurement becomes a process to manage efficiently, not an obstacle to overcome. The contract signature is less the "end" and more the "official beginning" of the real work.
You’ve navigated some of the largest passwordless deployments in history. Many people think the deal is done when the contract is signed. What’s the biggest misconception about that moment?
Doug: The biggest misconception is that the signature is the finish line. In reality, it’s the starting gun. For us, that contract isn’t an endpoint; it’s a formal commitment to a partnership. You've just earned the right to help the customer begin the real work of transformation.
In these large-scale projects, especially at global financial institutions or manufacturing giants, you’re not just installing software. You’re fundamentally changing a core business process that can touch every single employee, partner, and sometimes even their customers. If you view that as a simple handoff to a deployment team, you're setting yourself up for failure. The trust you built during the sales cycle is the foundation you need for the change management journey ahead.
When you’re dealing with a global corporation, you have IT, security, legal, procurement, and business units all with their own priorities. How do you start building the consensus needed for a successful rollout?
Doug: You have to build a coalition, and you do that by speaking the language of each stakeholder. I remember working with a major global bank. Their security team was our initial champion; they immediately saw how passkeys would eliminate phishing risk and secure their high-value transactions. But one of the key stakeholders was wary. Their primary concern was a potential surge in help desk calls during the transition, which would blow up their budget.
Instead of just talking about security with them, we shifted the conversation entirely and early. We presented the case study from another financial services deployment showing a 70-80% reduction in password-related help desk tickets within six months of rollout. We framed the project not as a security mandate, but as an operational efficiency initiative that would free up the team's time.
We connected the dots for them. Security got their risk reduction. IT saw a path to lower operational costs. The business leaders saw a faster, more productive login experience for their bankers. When each department saw its specific problem being solved, they became a unified force pushing the project forward. That's how you turn individual stakeholders into a powerful coalition.
That leads to the user. How do you get hundreds of thousands of employees at a global company to embrace a new way of signing in?
Doug: You can’t force change on people; you have to make them want it. A great example is a Fortune 500 manufacturing company we worked with. They had an incredibly diverse workforce, from corporate executives on laptops to factory floor workers using shared kiosks and tablets. Compounding this further, employees spanned the globe, from the US to China to LatAm and beyond. Let’s face it, a single, top-down email mandate was never going to work.
We partnered with them to create a phased rollout that respected these different user groups. For the factory floor, we focused on speed. The message was simple: "Clock in faster, start your shift faster." We trained the shift supervisors to be the local experts and put up simple, visual posters near the kiosks.
For the corporate employees, we focused on convenience and security, highlighting the ability to log in from anywhere without typing a password. We identified influential employees in different departments to be part of a pilot program. Within weeks, these "champions" were talking about how much easier their sign-in experience was. That word-of-mouth was more powerful than any corporate memo. The goal is to make the new way so demonstrably better that people are actively asking when it's their turn. That’s when adoption pulls itself forward.
Looking back at these massive, multi-year deployments, what defines a truly "successful" partnership for you?
Doug: Success isn’t the go-live announcement. It's six months later when the CISO tells you their help desk calls are down 70%. It's when an employee from a branch in Singapore sends unsolicited feedback about how much they love the new login experience. It’s when the customer’s security team stops seeing you as a vendor and starts calling you for advice on their entire identity strategy.
That's the real finish line. It's when the change has stuck, the value is being realized every day, and you’ve built a foundation of trust that you can continue to build on for years to come.
What's the biggest topic that keeps coming up in your customer conversations these days?
Doug: I'm having a lot of fun clarifying the difference between simply checking a document and actually verifying a person's identity. Many companies believe that if they scan a driver's license, they're secure. But I always ask, "Okay, that tells you the document is probably real, but how do you really know who's holding it?" That question changes everything. Between the rise of AI-generated fakes, or the simple reality that people lose their wallets, relying on a single document is incredibly fragile. The last thing you want is your top employee stranded and locked out of their accounts because their license is missing.
I move the conversation to a multi-factor approach. We check the document, yes, but then we use biometrics to bind it to the live person in front of the camera, and then we cross-reference that against another trusted signal, like the phone they already use to sign in. It gives you true assurance that the right person is there. More importantly, it provides multiple paths so your employees are never left helpless. It’s about building a resilient system that’s both more secure and more practical for your people.
Bonus question! What’s one piece of advice you’d give to someone just starting to manage these complex sales and deployment cycles?
Doug: Get obsessed with your customer's business, not your product. Understand what keeps their executives up at night, what their biggest operational headaches are, and what their long-term goals are. If you can authentically map your solution to solving those core problems, you stop being a salesperson and start being a strategic partner. Everything else follows from that.
Thanks for the insights, Doug. It’s clear that partnership is the key ingredient to success!
Keywords
PAM, IGA, CyberArk, Palo Alto, identity security, AI, machine identity, cybersecurity, information flows, behavioral analysis
Summary
In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the significant changes in the cybersecurity landscape, particularly focusing on Privileged Access Management (PAM) and Identity Governance and Administration (IGA). They explore the recent acquisition of CyberArk by Palo Alto, the evolution of identity security, and the convergence of various identity management solutions.
The conversation highlights the importance of information flows, and the need for a mindset shift in the industry to effectively address identity security challenges.
Takeaways
Chapters
00:00 Welcome Back and Industry Changes
02:01 The Evolution of Privileged Access Management (PAM)
10:41 The Convergence of Cybersecurity and Identity
16:13 The Future of Identity Management Platforms
24:23 Understanding Information Flows in Cybersecurity
28:12 The Role of AI in Identity Management
33:42 Navigating Mergers and Acquisitions in Tech
39:50 The Future of Identity Security and AI Integration
Nasdaq has filed with the SEC to tokenise every listed stock by 2026. If approved, this would be the first time tokenised securities trade on a major U.S. exchange, a milestone that could transform global capital markets. Under the proposal, investors will be able to choose whether to settle their trades in traditional digital form or in tokenised blockchain form.
More and more firms are tokenising stocks. The implications are potentially huge:
24/7 trading of tokenised equities
Instant settlement
Programmable ownership
Full shareholder rights, identical to traditional shares
This is a large overhaul of market infrastructure. Sounds great, but the reality is much more complex.
How to tokenise stocks?
Tokenised stocks today can be structured in several ways, including:
Indirect tokenisation: The issuer raises money via the issuance of a financial instrument different from the stocks, typically a debt instrument (e.g. a bond or note), and buys the underlying stocks with the raised funds. The tokens may either be the financial instrument itself or represent a claim on that financial instrument. The token does not grant investors direct ownership of the underlying stock, but it is simple to launch.
Direct tokenisation: Stocks are tokenised directly at the stock company level, preserving voting, dividends, and reporting rights. This method tends to be more difficult to implement due to legal and infrastructure requirements.
Both structures have their benefits and drawbacks. The real issue, however, is how the tokens are managed post-issuance.
Permissionless vs permissioned tokens
While choosing a structure for tokenised stocks is important, the true success of tokenisation depends on whether the tokens are controlled or free to move, because this determines compliance, investor protection, and ultimately whether the market can scale safely.
Permissionless: Tokens can move freely on-chain after issuance. Token holders gain economic exposure, but not shareholder rights. Secondary market trading is not controlled, creating compliance risks. The legitimate owner of the security is not always clear.
Permissioned: Compliance and eligibility are enforced at every stage, embedding rules directly into the token. Crucially, permissioned tokens also guarantee investor safety by making ownership legally visible in the issuer’s register. For issuers, this model also fulfils their legal obligation to know who their investors are at all times. Transfers to non-eligible wallets are blocked, maintaining regulatory safety while preserving trust.
While permissionless tokens may be quicker to launch, they carry significant legal risks, weaken investor trust, and fragment growth. By contrast, permissioned tokens should be considered as the only sustainable approach to tokenising stocks, because they combine compliance, investor protection, and long-term scalability.
The right way forward – compliance at the token level
Nasdaq’s SEC filing shows the path to do this right. Tokenised stocks will only succeed if eligibility and compliance are enforced in both issuance and secondary trading.
That’s where open-source standards like ERC-3643 come in:
Automated compliance baked in: Rules are enforced automatically at the protocol level, not manually after the fact.
Eligibility checks: Only approved investors can hold the asset, enabling efficient ownership tracking.
Controlled transfers: Tokens cannot be sent to non-eligible investors, even in the secondary market.
Auditability: Every transaction can be monitored in real time, ensuring trust with regulators.
This is how tokenised stocks can operate safely at scale, with compliance embedded directly into the digital infrastructure, whether through direct or indirect tokenisation. The result is safety at scale, unlocked liquidity, efficiency, and regulatory alignment.
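To make the idea tangible, here is a simplified Python model of a permissioned transfer check in the spirit of ERC-3643. It is a conceptual sketch, not the Solidity standard itself: the registry, the eligibility rule, and the class names are illustrative stand-ins for the on-chain identity registry and compliance modules a real deployment would use.

```python
# A simplified, non-Solidity model of the permissioned-transfer idea behind
# standards like ERC-3643: every issuance and transfer is checked against an
# identity registry before balances move. Names and rules are illustrative.


class IdentityRegistry:
    """Maps wallet addresses to verified-investor status (illustrative)."""

    def __init__(self):
        self._eligible: set[str] = set()

    def register(self, wallet: str) -> None:
        self._eligible.add(wallet)

    def is_eligible(self, wallet: str) -> bool:
        return wallet in self._eligible


class PermissionedToken:
    def __init__(self, registry: IdentityRegistry):
        self.registry = registry
        self.balances: dict[str, int] = {}

    def mint(self, wallet: str, amount: int) -> None:
        if not self.registry.is_eligible(wallet):
            raise PermissionError("issuance to unverified wallet blocked")
        self.balances[wallet] = self.balances.get(wallet, 0) + amount

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # Compliance is enforced at transfer time, including secondary trades.
        if not self.registry.is_eligible(recipient):
            raise PermissionError("transfer to non-eligible wallet blocked")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```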
Why this matters now
Investor demand for tokenised assets is surging. Global banks are exploring issuance, Coinbase has sought approval, and now Nasdaq is moving ahead under the SEC’s umbrella. Tokenisation will be at the core of financial markets.
But shortcuts built on permissionless, freely transferable tokens will only invite regulatory backlash, slowing innovation and preventing the market from scaling.
The future of tokenised shares will be built on:
Carrying full shareholder rights and guaranteeing ownership
Automatic, enforced compliance on every trade
Integrating directly into existing market infrastructure
That is what true tokenisation means, not synthetic exposure, but embedding the rules of finance into the share itself.
We believe this is the turning point. Nasdaq’s move validates what we’ve been building toward: a global financial system where tokenisation unlocks liquidity, efficiency, and access, not at the expense of compliance, but because of it.
The race is on. The winners won’t be those who move fastest, but those who build markets that are trusted, compliant, and scalable from day one.
Tokeny Spotlight
Annual team building: We head to Valencia for our annual offsite team building. A fantastic time filled with great memories.
Token2049: Our CEO and Head of Product for Apex Digital Assets, and CBO, head to Singapore for Token2049.
New eBook: Global payments reimagined. Download to learn what’s driving the rapid rise of digital currencies.
RWA tokenisation report: We are proud to have contributed to the newly released RWA Report published by Venturebloxx.
SALT Wyoming: Our CCO and Global Head of Digital Assets at Apex Group, Daniel Coheur, discusses Blockchain Onramps at SALT.
We test SilentData’s privacy: Their technology explores how programmable privacy allows for secure and compliant RWA tokenisation.
Tokeny Events
Token2049 Singapore
October 1st-2nd, 2025 | 🇸🇬 Singapore
Digital Assets Week London
October 8th-10th, 2025 | 🇬🇧 United Kingdom
ALFI London Conference
October 15th, 2025 | 🇬🇧 United Kingdom
RWA Singapore Summit
October 2nd, 2025 | 🇸🇬 Singapore
Hedgeweek Funds of the Future US 2025
October 9th, 2025 | 🇺🇸 United States of America
ERC-3643 is recognized in Animoca Brands Research’s latest report on tokenised real-world assets (RWAs).
The report highlights ERC-3643 as a positive step for permissioned token standards, built to solve the exact compliance and interoperability challenges holding the market back.
Read the story here
Subscribe to the Newsletter
A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.
The post Are markets ready for tokenised stocks’ global impact? appeared first on Tokeny.
Most people think of digital identity as a mobile driver’s license or app on their phone. But identity isn’t just a credential, it’s infrastructure. Like roads, broadband, or electricity, digital identity frameworks must be built, governed, and funded as public goods.
Today, the lack of a unified identity system fuels fraud, inefficiency, and distrust. In 2023, the U.S. recorded 3,205 data breaches affecting 353 million people, and the Federal Trade Commission reported $12.5 billion in fraud losses, much of it rooted in identity theft and benefit scams.
These aren’t isolated incidents but symptoms of fragmentation: every agency and organization maintaining its own version of identity, duplicating effort, increasing breach risk, and eroding public trust.
We argue that identity should serve as public infrastructure: a government-backed framework that lets residents prove who they are securely and privately, across contexts, without unnecessary data collection or centralization. Rather than a single product or app, this framework can represent a durable set of technical and statutory controls built to foster long-term trust, protect privacy, and ensure interoperability and individual control.
From Projects to Public Infrastructure
Governments often launch identity initiatives as short-term projects: a credential pilot, a custom-built app, or a single-agency deployment. While these efforts may deliver immediate results, they rarely provide the interoperability, security, or adoption needed for a sustainable identity ecosystem. Treating digital identity as infrastructure avoids these pitfalls by establishing common rails that multiple programs, agencies, and providers can build upon.
A better approach is to adopt a framework model, where digital identity isn’t defined by a single product or format but by adherence to a shared set of technical and policy requirements. These requirements, such as selective disclosure, minimal data retention, and individual control, can apply across many credential types, from driver’s licenses and professional certifications to benefit eligibility and guardianship documentation.
This enables credentials to be iterated and expanded on thoughtfully: credentials can be introduced one at a time, upgraded as standards evolve, and tailored to specific use cases while maintaining consistency in protections and interoperability.
Enforcing Privacy Through Law and Code
Foundational privacy principles such as consent, data minimization, and unlinkability must be enforced by technology, not just policy documents. Digital identity systems should make privacy the default posture, using features (depending on the type of credential) such as:
Selective disclosure (such as proving “over 21” without showing a birthdate)
Hardware-based device binding
Cryptographically verifiable digital credentials with offline presentation
Verification architectures that avoid exposing user metadata
By embedding security, privacy, and interoperability directly into the architecture, identity systems move beyond compliance and toward real-world protection for residents. These are not optional features; they are statutory expectations brought to life through secure protocols.
Open Standards, Broad Interoperability
Public infrastructure should allow for vendor choice and competitive markets that foster innovation. That’s why modern identity systems should be built on open, freely implementable standards, such as ISO/IEC 18013-5/7, OpenID for Verifiable Presentations (OID4VP), W3C Verifiable Credentials, and IETF SD-JWTs.
These standards allow credentials to be portable across wallet providers and verifiable in both public and private sector contexts, from airports and financial institutions to universities and healthcare. Multi-format issuance ensures credentials are accepted in the widest range of transactions, without compromising on core privacy requirements.
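For readers unfamiliar with how selective disclosure works mechanically, here is a deliberately tiny Python illustration of the salted-hash idea behind IETF SD-JWT. It is a teaching sketch, not a conformant implementation: real SD-JWTs add issuer signatures, base64url encoding, and a defined JSON structure, all omitted here.

```python
# Toy illustration of SD-JWT-style selective disclosure: the credential holds
# only salted hashes of claims, and the holder reveals individual disclosures.
import hashlib
import json
import secrets


def make_disclosure(claim: str, value) -> tuple[str, str]:
    """Return (disclosure, digest). Only the digest goes into the credential."""
    disclosure = json.dumps([secrets.token_hex(16), claim, value])
    digest = hashlib.sha256(disclosure.encode()).hexdigest()
    return disclosure, digest


# Issuer: hash each claim separately; the credential contains only digests
# (which, in a real SD-JWT, the issuer would sign).
claims = {"age_over_21": True, "name": "Alex Doe", "address": "10 Main St"}
disclosures = {c: make_disclosure(c, v) for c, v in claims.items()}
credential_digests = {digest for _, digest in disclosures.values()}

# Holder: reveal only the "age_over_21" disclosure to a verifier.
revealed = disclosures["age_over_21"][0]

# Verifier: recompute the hash and check it appears in the signed credential.
assert hashlib.sha256(revealed.encode()).hexdigest() in credential_digests
print("claim verified:", json.loads(revealed)[1:])  # ['age_over_21', True]
```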
A clear certification framework covering wallets, issuers, and verifiers can ensure compliance with these standards through independent testing, while maintaining flexibility for providers to innovate. Transparent certification also builds trust and ensures accountability at every layer of the ecosystem.
Governance Leads, Industry Builds
Treating digital identity as infrastructure doesn’t mean the public sector has to (or even should) build everything. It means the public sector must set the rules, defining minimum standards, overseeing compliance, and ensuring vendor neutrality.
Wallet providers, credential issuers, and verifiers can all operate within a certified framework if they meet established criteria for security, privacy, interoperability, and user control. Governments can maintain legal authority and oversight while encouraging healthy private-sector competition and innovation.
This governance-first approach creates a marketplace that respects rights, lowers risk, and is solvent. Agencies retain procurement flexibility, while residents benefit from tools that align with their expectations for usability and safety.
Why This Matters
Digital identity is the entry point to essential services: healthcare, education, housing, employment, and more. If it’s designed poorly, it can become fragmented, invasive, or exclusionary. But if it’s designed as infrastructure with strong governance and enforceable protections, it becomes a foundation for inclusion, trust, and public value.
Well-governed digital identity infrastructure enables systems that are:
Interoperable across jurisdictions and sectors
Private by design, not retrofitted later
Transparent, with open standards and auditability
Resilient, avoiding lock-in and enabling long-term evolution
Most importantly, it is trustworthy for residents, not just functional.
A Foundation for the Future
Public infrastructure requires alignment between law, technology, and market design. With identity, that means enforcing privacy in code, using open standards to drive adoption, and establishing certification programs that ensure accountability through independent validation without stifling innovation.
This is more than a modernization effort. It’s a transformation that ensures digital identity systems can grow, adapt, and serve the public for decades to come.
Ready to Build Trustworthy Digital ID Infrastructure?
SpruceID partners with governments to design and implement privacy-preserving digital identity systems that scale. Contact us to explore how we can help you build standards-aligned, future-ready identity infrastructure grounded in law, enforced by code, and trusted by residents.
Contact Us
About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.
In a recent client call, we were asked whether our platform could help a local government issue digital IDs.
To answer that, Richard Esplin (Head of Product) put together a live demo.
Instead of complex architectures or long timelines, he showed how a city could issue a digital residency credential and use it instantly across departments. From getting a library card to scheduling trash pickup.
The front end for the proof-of-concept was spun up in an afternoon with an AI code generator.
Behind the scenes, we handled verifiable credential issuance, verification, selective disclosure, revocation, and ecosystem governance, proving that governments can move from paper processes to reusable, privacy-preserving digital IDs in days, not months.
The UK's Online Safety Act triggered a staggering 1,800% surge in VPN signups within days of implementation.
The UK’s Online Safety Act was introduced to make the internet “safer,” especially for children. It forces websites and platforms to implement strict age verification measures for adult and “harmful” content, often requiring users to upload government IDs, credit cards, or even biometric scans.
While the goal is protection, the method feels intrusive.
Suddenly, every UK citizen is being asked to share sensitive identity data with third-party verification companies just to access certain sites.
The public response was immediate.
Within days of implementation, the UK saw a staggering 1,800% surge in VPN signups.
ProtonVPN jumped to the #1 app in the UK App Store. NordVPN reported a 1,000% surge. In fact, four of the top five free iOS apps in the UK were VPNs.
Millions of people literally paid to preserve their privacy rather than comply.
This backlash reveals a fundamental flaw in how age verification was implemented.
People are rejecting what they perceive to be privacy-invasive ID uploads. They don’t want to hand over passports, driver’s licenses, or facial scans just to browse.
Can we blame them?
The problem isn’t age verification itself. The problem is the method, which pushes people to circumvent the rules with VPNs or even fake data.
But here’s the thing: we already have better options.
Government-issued digital IDs already exist.
Zero-knowledge proofs let you prove you’re 18+ without revealing who you are.
Verifiable credentials combine reliability (government-backed trust) with privacy by design.
With this model, the website never sees your personal data.
The check is still secure, government-backed, and reliable, without creating surveillance or new honeypots of sensitive data.
The VPN surge is proof that people value their digital privacy so much that they’ll pay for it.
If governments want compliance and safety, they need to meet people where they are: with solutions that respect privacy as much as protection.
The UK’s privacy backlash demonstrates exactly why verifiable ID credentials are the way forward.
They can resolve public resistance while maintaining both effective age checks and digital rights.
In our recent live podcast, Richard Esplin (Dock Labs) spoke with Andrew Hughes (VP of Global Standards, FaceTec) and Ryan Williams (Program Manager of Digital Credentialing, AAMVA) about the rollout of mobile driver’s licenses (mDLs) and what comes next.
One idea stood out: derived credentials.
mDLs are powerful because they bring government-issued identity into a digital format.
But in practice, most verifiers don’t need everything on your driver’s license.
A student bookstore doesn’t need your address, it only needs to know that you’re enrolled.
That’s where derived credentials come in.
They allow you to take verified data from a root credential like an mDL and create purpose-specific credentials:
A student ID for campus services.
An employee badge for workplace access.
A travel pass or loyalty credential.
Andrew put it simply: if you don’t need to use the original credential with everything loaded into it, don’t.
Ryan added that the real benefit is eliminating unnecessary personal data entirely, only passing on what’s relevant for the transaction.
Derived credentials also make it possible to combine data from multiple credentials into one, enabling new use cases.
For example, a travel credential could draw on both a government-issued ID and a loyalty program credential, giving the verifier exactly what they need in a single, streamlined interaction.
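To illustrate the shape of this, here is a minimal Python sketch of deriving a purpose-specific credential from already-verified root data. Everything in it is an assumption for illustration: the claim names, the HMAC key, and the signature scheme stand in for whatever credential format and issuer signature a real wallet or issuer would use.

```python
# A minimal sketch of deriving a purpose-specific credential from verified root
# claims. The HMAC signature is a stand-in for a real issuer signature scheme.
import hashlib
import hmac
import json

ISSUER_KEY = b"derived-credential-issuer-demo-key"  # placeholder secret


def derive_credential(root_claims: dict, needed: list[str], purpose: str) -> dict:
    """Copy only the claims a verifier actually needs from the root credential(s)."""
    payload = {
        "purpose": purpose,
        "claims": {k: root_claims[k] for k in needed},
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return payload


# Root data already verified from an mDL plus a loyalty credential (illustrative).
root = {"given_name": "Alex", "age_over_18": True, "loyalty_tier": "gold", "address": "10 Main St"}

# A travel credential that needs age and loyalty status -- and nothing else.
travel_pass = derive_credential(root, ["age_over_18", "loyalty_tier"], purpose="travel")
print(travel_pass["claims"])  # the name and address never leave the wallet
```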
This approach flips the model of identity sharing.
Instead of over-exposing sensitive details, derived credentials enable “less is more” identity verification: stronger assurance for the verifier, greater privacy for the user.
Looking ahead, Andrew revealed that the ISO 18013 Edition 2 will introduce support for revocation and zero-knowledge proofs, enhancements that will make derived credentials even more practical and privacy-preserving.
Bottom line: mDLs are an important foundation, but the everyday future of digital ID lies in derived credentials.
The post Building Trust in Agentic Commerce appeared first on Liminal.co.
Would you let an AI agent spend your company’s quarterly budget, no questions asked? Most leaders I talk to aren’t there yet. Our research shows that only 8% of organizations are using AI agents in the long term, and the gap isn’t due to a lack of awareness. It’s trust.
If agentic AI is going to matter in e-commerce, we need guardrails that make it safe, compliant, and worth the operational risk. That is where authentication, authorization, and verification come in. Think identity, boundaries, and proof. Until teams can check those boxes with confidence, adoption will stall.
What is an AI agent, and why does it matter in e-commerce
At its simplest, an AI agent is software that can act on instructions without waiting for every step of human input. Instead of a static chatbot or recommendation engine, an agent can take context, make a decision, and carry out an action.
In e-commerce, that could mean:
- Verifying a buyer's identity before an agent executes a purchase on their behalf
- Allowing an agent to issue refunds up to a set limit, but requiring human approval beyond that threshold
- Confirming that an AI-driven order or promotion matches both customer intent and compliance rules before it goes live

The upside is clear: faster processes, lower manual overhead, and customer experiences that feel effortless. But the risk is just as clear. If an agent acts under the wrong identity, oversteps its boundaries, or produces outcomes that don't match user intent, the impact is immediately evident in increased fraud losses, compliance failures, or customer churn.
That’s why the industry is focusing on three pillars: authentication, authorization, and verification. Without them, agentic commerce cannot scale.
The adoption gap
Analysts project autonomous agents will grow to a $70B+ market by 2030. Buyers want speed, automation, and scale, but customers are not fully on board. In fact, only 24% of consumers say they are comfortable letting AI complete a purchase on their own.
That consumer hesitation is the critical signal. Ship agentic commerce without shipping trust, and you don’t just risk adoption, you risk chargebacks, brand erosion, and an internal rollback before your pilot even scales.
What's broken today
Three realities keep coming up in my conversations with product, fraud, and risk leaders:
- Attack surface expansion. Synthetic identity and deepfakes raise the baseline risk. 71% of organizations say they lack the AI/ML depth to defend against these tactics.
- Confidence is slipping. Trust in fully autonomous agents dropped from 43% to 27% in one year, even among tech-forward orgs.
- Hype hurts. A meaningful share of agent projects will get scrapped by 2027 because teams cannot tie them to real value or reliable controls.

The regulatory lens makes this sharper. Under the new EU AI Act, autonomous systems are often treated as high-risk, requiring transparency, human oversight, and auditability. In the U.S., proposals like the Algorithmic Accountability Act and state laws such as the Colorado AI Act point in the same direction—demanding explainability, bias testing, and risk assessments. For buyers, that means security measures are not only best practice but a growing compliance requirement.
When I see this pattern, I look for the missing scaffolding. It is almost always the same three blanks: who is the agent, what can it do, and did it do the right thing.
The guardrails that matter
If you are evaluating solutions, anchor on these three categories. This is the difference between a flashy demo and something you can put in production.
Authentication
Prove the agent's identity before you let it act. That means credentials for agents, not just users. It means attestation, issuance, rotation, and revocation. It means non-repudiation, so you can tie a transaction to a specific agent and key.
What to look for:
- strong, verifiable agent identities and credentials
- support for attestation, key management, rotation, and kill switches
- logs that let you prove who initiated what, and when

Authorization
Set boundaries that are understood by both machines and auditors. Map policies to budgets, scopes, merchants, SKUs, and risk thresholds. Keep it explainable so a human can reason about the blast radius.
What to look for:
- policy engines that accommodate granular scopes and spend limits
- runtime constraints, approvals, and step-up controls
- simulation and sandboxes to test policies before they go live

Verification
Trust but verify. Confirm that outcomes align to user intent, compliance, and business rules. You need evidence that holds up in a post-incident review.
Verification isn’t just operational hygiene. Under privacy rules like GDPR Article 22, individuals have a right to safeguards when automated systems make decisions about them. That means the ability to explain, evidence, and roll back agent actions is not optional.
What to look for:
- transparent audit trails and readable explanations
- outcome verification against explicit user directives
- real-time anomaly detection and rollback paths

If a vendor cannot demonstrate these three pillars working together, you are buying a future incident.
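To make the three pillars concrete, here is a minimal Python sketch of an agent purchase request passing through authentication (a signature check), authorization (a scope and spend-limit policy), and verification (an audit record tying the outcome to the stated intent). The agent names, policy fields, and audit structure are illustrative assumptions, not any vendor's control plane, and the sketch assumes the pyca/cryptography package for Ed25519 keys.

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Authentication: each agent holds a key pair; requests are signed (non-repudiation).
agent_key = Ed25519PrivateKey.generate()
AGENT_REGISTRY = {"procurement-agent-7": agent_key.public_key()}

# --- Authorization: explicit, auditable policy per agent (budgets, scopes, merchants).
POLICY = {
    "procurement-agent-7": {
        "scopes": {"purchase"},
        "merchants": {"acme-supplies"},
        "spend_limit": 500.00,
    }
}

# --- Verification: evidence that holds up in a post-incident review.
AUDIT_LOG = []

def handle(agent_id: str, request: dict, signature: bytes) -> str:
    payload = json.dumps(request, sort_keys=True).encode()

    # 1. Authenticate: is this really the registered agent?
    try:
        AGENT_REGISTRY[agent_id].verify(signature, payload)
    except (KeyError, InvalidSignature):
        return "rejected: unauthenticated agent"

    # 2. Authorize: is the action inside the agent's scope, merchant list, and budget?
    policy = POLICY.get(agent_id, {})
    if (request["action"] not in policy.get("scopes", set())
            or request["merchant"] not in policy.get("merchants", set())
            or request["amount"] > policy.get("spend_limit", 0)):
        return "rejected: outside authorized boundaries"

    # 3. Verify: record who did what, when, and against which stated intent.
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "request": request,
        "intent": request.get("intent"),
    })
    return "approved"

req = {"action": "purchase", "merchant": "acme-supplies",
       "amount": 129.99, "intent": "restock printer paper"}
sig = agent_key.sign(json.dumps(req, sort_keys=True).encode())
print(handle("procurement-agent-7", req, sig))  # approved, and logged for later review
```

The useful property is that each rejection or approval is explainable: a failed signature, a policy clause, or an audit entry, which is exactly the evidence a post-incident review needs.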
Real-world examples today
Real deployments are still early, but they show what's possible when trust is built in.
- ChatGPT Instant Checkout marks one of the first large-scale examples of agentic commerce in production. Powered by the open-source Agentic Commerce Protocol, co-developed with Stripe, it enables users in the U.S. to buy directly from Etsy sellers in chat, with Shopify merchants like Glossier, SKIMS, and Vuori coming next. The article affirms that each purchase is authenticated, authorized, and verified through secure payment tokens and explicit user confirmation, demonstrating how agentic AI can act safely within clear trust boundaries.
- Konvo AI automates ~65% of customer queries for European retailers and converts ~8% of those into purchases, using agents that can both interact with customers and resolve logistics issues.
- Visa Intelligent Commerce for Agents is building APIs that let AI agents make purchases using tokenized credentials and strong authentication, showing how payment-grade security can extend to autonomous actions.
- Amazon Bedrock AgentCore Identity provides identity, access control, and credential vaulting for AI agents, giving enterprises the tools to authenticate and authorize agent actions at scale.
- Agent Commerce Kit (ACK-ID) demonstrates how one agent can verify the identity and ownership of another before sensitive interactions, laying the groundwork for peer-to-peer trust in agentic commerce.

These aren't fully autonomous across all commerce workflows, but they demonstrate that agentic AI can deliver value when authentication, authorization, and verification are in place.
What good looks like in practice
Buyers ask for a checklist. I prefer evaluation cues you can test in a live environment:
- Accuracy and drift. Does the system maintain performance as the catalog, promotions, and fraud patterns shift?
- Latency and UX. Do the controls keep decisions fast enough for checkout and service flows?
- Integration reality. Can this plug into your identity, payments, and risk stack without six months of glue code?
- Explainability. When an agent takes an action, can a product manager and a compliance lead both understand why?
- Recourse. If something goes wrong, what can you unwind, how quickly can you roll it back, and what evidence exists to explain the decision to auditors, customers, or regulators?

The strongest teams will treat agent actions like high-risk API calls. Every action is authenticated, every scope is authorized, and every outcome is verified. The tooling makes that visible.
Why this matters right now
It is tempting to wait. The reality is that agentic workflows are already creeping into back-office operations, customer onboarding, support, and payments. Early movers who get trust right will bank the upside: lower manual effort, faster cycle time, and a margin story that survives scrutiny.
The inverse is also true. Ship without safeguards, and you’ll spend the next quarter explaining rollback plans and chargeback spikes. Customers won’t give you the benefit of the doubt. Neither will your CFO.
A buyer's short list
If you are mapping pilots for Q4 and Q1 2026, here's a simple way to keep the process grounded:
- define the jobs to be done
- write the rules first
- simulate and stage
- measure what matters
- keep humans in the loop
- regulatory readiness: confirm vendors can meet requirements for explainability, audit logs, and human oversight under privacy rules

The road ahead
Agentic commerce is not a future bet. It is a present decision about trust. The winners will separate signal from noise, invest in authentication, authorization, and verification, and scale only when those pillars are real.
At Liminal, we track the vendors and patterns shaping this shift. If you want a deeper dive into how teams are solving these challenges today, we’re bringing together nine providers for a live look at the authentication, authorization, and verification layers behind agentic AI. No pitches, just real solutions built to scale safely.
▶️ Want to know more about it? Watch our Liminal Demo Day: Agentic AI in E-Commerce recording, and explore how leading vendors are tackling this challenge.
My take: The winners won’t be the first to launch AI agents. They’ll be the first to prove their agents can be trusted at scale.
The post Building Trust in Agentic Commerce appeared first on Liminal.co.
Identity theft is one of the fastest-growing financial crimes worldwide, and consumers are more aware of the risks than ever before. But in an increasingly competitive market, offering “basic” identity theft insurance is no longer enough. To stand out, insurers need to think beyond the minimum by focusing on product innovation, customer experience, and trust.
Below, we explore six powerful ways insurers can differentiate their identity theft insurance offerings.
1. Innovate with product features & coverage
Most identity theft insurance policies cover financial losses and restoration costs, but few go beyond reactive measures to prevent identity theft from occurring. To gain a competitive edge, insurers can expand coverage to offer proactive identity protection solutions, such as:
- Alternative phone numbers and emails to keep customer communications private and reduce phishing risks.
- A password manager to help policyholders secure accounts and prevent credential-based account takeovers.
- VPN for private browsing to protect sensitive activity on public Wi-Fi and stop data interception.
- Virtual cards that protect payment details and shield credit card numbers from fraudsters.
- Real-time breach alerts so customers can take immediate action when their data is compromised.
- Personal data removal tools to wipe sensitive information from people-search sites and reduce exposure.
- A privacy-first browser with ad and tracker blocking to prevent data harvesting and malicious tracking.

By proactively covering these risks and offering early detection, insurers not only reduce claims costs but also create meaningful value for customers.
2. Provide strong restoration & case management
Customers are often overwhelmed and unsure what to do next when their identity is stolen. Insurers can become their most trusted ally by offering:
- A dedicated case manager who works with them from incident to resolution.
- A restoration kit with step-by-step instructions, pre-filled forms, and key contacts.
- 24/7 access to a helpline for guidance and reassurance.

A study from the University of Edinburgh shows that case management can reduce the cost burden of an incident by up to 90%. It also boosts customer satisfaction and loyalty, which is a critical differentiator in a market where switching providers is easy.
3. Build proactive prevention & education programs
Most consumers only think about identity protection after an incident occurs. Insurers can flip this dynamic by helping customers stay ahead of threats.
Ideas include:
- Regular scam alerts and phishing education campaigns.
- Tools for identity monitoring, breach notifications, and credit report access.
- Dashboards that visualize a customer's digital exposure, allowing them to see their risk level.
- Ongoing educational content such as webinars, how-to guides, and FAQs.

Short, targeted online fraud education lowers the risk of falling for scams by roughly 42–44% immediately after training. This finding is based on a study that used a 3-minute video or short text intervention with 2,000 U.S. adults.
4. Offer flexible pricing & bundling options
Flexibility is key to reaching a broader customer base. Instead of a one-size-fits-all product, insurers can:
- Offer tiered plans (basic, mid, premium) with incremental features.
- Bundle identity theft insurance with homeowners, renters, or other policies.
- Provide family plans that protect multiple household members.

This strategy serves both budget-conscious and premium segments.
5. Double down on customer experience
Trust is one of the most important factors consumers consider when buying identity theft insurance. Insurers can build confidence by:
- Using clear, jargon-free language in policy documents.
- Responding quickly and resolving cases smoothly.
- Displaying trust signals, such as third-party audits, security certifications, and privacy commitments.
- Publishing reviews, testimonials, and case studies that show real results.

A better experience leads to higher Net Promoter Scores (NPS), lower churn rates, and a long-term competitive advantage.
6. Leverage partnerships
Working with technology partners can enhance insurers' offerings without straining internal resources. Here are some examples of what partners can do:
- Custom-branded dashboards and mobile apps that seamlessly integrate into your existing customer experience, keeping your brand front and center.
- Privacy status at a glance, indicating to customers whether their information has been found in data breaches.
- Management of alternative phone numbers and emails, allowing customers to create, update, or retire these directly in the portal.

By offering these features through a white-labeled experience, insurers provide customers with daily, visible value while partners like Anonyome Labs handle the privacy technology behind the scenes.
Outside of white-label opportunities, strategic partnerships and endorsements also strengthen offerings. Collaborations with credit bureaus, cybersecurity firms, and privacy organizations expand capabilities and build credibility.
Powering the next generation of identity theft insurance
The future of identity theft insurance is proactive, not reactive. Insurers who move beyond basic reimbursement to offer daily-use privacy and security tools will lead the industry in trust, engagement, and profitability. Anonyome Labs makes this shift seamless with a fully white-labeled Digital Identity Protection suite that includes alternative phone numbers and emails, password managers, VPNs, virtual cards, breach alerts, and tools for removing personal data.
By offering these proactive protections, you provide customers with peace of mind, prevent costly fraud incidents before they occur, and unlock new revenue opportunities through subscription-based services.
By partnering with Anonyome Labs, you can transform identity theft insurance into a daily value driver, positioning your company as a market leader in proactive protection.
Learn more by getting a demo of our Digital Identity Protection suite today!
The post 6 Ways Insurers Can Differentiate Identity Theft Insurance appeared first on Anonyome Labs.
Most of us never think about identity online. We type in a username, reuse a password, or click “Log in with Google” without a second thought. Identity, in the digital world, has been designed for convenience. But behind that convenience lies a hidden cost: surveillance, lock-in, and a system where we don’t really own the data that defines us.
Digital identity today is built for convenience, not for people.
Decentralized identity is a way of proving who you are without relying on a single company or government database to hold all the power. Instead of logging in with Google or handing over a photocopy of your driver’s license, you receive digital verifiable credentials, digital versions of IDs, diplomas, or licenses, directly from trusted issuers like DMVs, universities, or employers. You store these credentials securely in your own digital wallet and decide when, where, and how to share them. Each credential is cryptographically signed, so a verifier can instantly confirm its authenticity without needing to contact the issuer. The result is an identity model that’s portable, privacy-preserving, and designed to give control back to the individual rather than intermediaries.
Decentralized identity means you own and control your credentials, like IDs or diplomas, stored in your wallet, not in someone else’s database.
In this series, we’ll explore why decentralized identity matters, how policymakers are responding, and the technology making it possible. But before diving into policy debates or technical standards, it’s worth starting with the foundations: why identity matters at all, and what it means to build a freer digital world around it.
From Borrowed Logins to Borrowed Autonomy
The internet we know today was built on borrowed identity. Early online gaming systems issued usernames, turning every move into a logged action inside a closed sandbox. Social media platforms went further, normalizing surveillance as the price of connection and building entire economies on behavioral data. Even in industries like healthcare or financial services, "identity" was usually just whatever proprietary account a platform would let you open, and then hold hostage.
Each step offered convenience, but at the cost of autonomy. Accounts could be suspended. Data could be resold. Trust was intermediated by companies whose incentives rarely aligned with their users. The result was an internet where identity was an asset to be monetized, not a right to be owned.
On today’s internet, identity is something you rent, not something you own.
Decentralized identity represents a chance to reverse that arc. Instead of treating identity as something you rent, it becomes something you carry. Instead of asking permission from platforms, platforms must ask permission from you.
Why Identity Is a Pillar of Free Societies
This isn't just a technical argument - it's a philosophical and economic one. Identity is at the center of how societies function.
Economists have long warned of the dangers of concentrated power. Adam Smith argued that monopolies distort markets. Milton Friedman cautioned against regulatory capture. Friedrich Hayek showed that dispersed knowledge, not central planning, leads to better decisions. Ronald Coase explained how lowering transaction costs opens new forms of cooperation.
Philosophers, too, placed identity at the heart of freedom. John Locke’s principle of self-ownership and John Stuart Mill’s defense of liberty both emphasize that individuals must control what they disclose, limited only by the harm it might cause others.
Decentralized identity operationalizes these ideas for the digital era. By distributing trust, it reduces dependency on monopolistic platforms. By lowering the cost of verification, it unlocks new forms of commerce. By centering autonomy, it ensures liberty is preserved even as interactions move online.
The Costs of Getting It Wrong
American consumers and institutions are losing more money than ever to fraud and cybercrime. In 2024 alone, the FBI's Internet Crime Complaint Center (IC3) reported that scammers stole a record $16.6 billion, a stark 33% increase from the previous year. Meanwhile, the FTC reports that consumers lost over $12.5 billion to fraud in 2024, a 25% rise compared to 2023.
On the organizational side, data breach costs are soaring. IBM's 2025 Cost of a Data Breach Report shows that the average cost of a breach in the U.S. has reached a record $10.22 million, driven by higher remediation expenses, regulatory penalties, and deepening complexity of attacks.
Identity theft has become one of the fastest-growing crimes worldwide. Fake accounts drain social programs. Fraudulent applications weigh down financial institutions. Businesses lose customers, governments lose trust, and people lose confidence that digital systems are designed with their interests in mind.
The Role of AI: Threat and Catalyst
As artificial intelligence advances, it is giving fraudsters tools that make identity scams faster, more automated, and more believable. According to a Federal Reserve–affiliated analysis, synthetic identity fraud, where criminals stitch together real and fake information to fabricate identities, reached a staggering $35 billion in losses in 2023. These figures highlight the increasing risk posed by deepfakes and AI-generated personas in undermining financial systems and consumer trust.
And at the frontline of consumer protection, the Financial Crimes Enforcement Network (FinCEN) has warned that criminals are increasingly using generative AI to create deepfake videos, synthetic documents, and realistic audio to bypass identity checks, evade fraud detection systems, and exploit financial institutions at scale.
AI doesn’t just make fraud easier—it makes strong identity more urgent.
As a result, AI looms over every digital identity conversation. On one side, it makes fraud easier: synthetic faces, forged documents, and bots capable of impersonating humans at scale. On the other, it makes strong identity more urgent and more possible.
Digital Credentials: The Building Blocks of Trust
That's why the solution isn't more passwords, scans, or one-off fixes - it's a new foundation built on verifiable digital credentials. These are cryptographically signed attestations of fact - your age, your license status, your professional certification - that can be presented and verified digitally.
Unlike static PDFs or scans, digital credentials are tamper-proof: they can't be forged or altered without detection. They're also user-controlled: you decide when, where, and how to share them. And they support selective disclosure: you can prove you're over 21 without sharing your exact birthdate, or prove your address is in a certain state without exposing your full street address.
Verifiable digital credentials are tamper-proof, portable, and under the user’s control—an identity model built for trust.
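For readers who want to see the tamper-detection property in code, here is a small sketch using an Ed25519 signature (via the pyca/cryptography package) over a credential payload: any change to a claim breaks verification. The claim names, the DID-style identifier, and the bare-bones JSON encoding are assumptions for illustration; real verifiable credentials use standardized formats (the W3C VC data model, SD-JWT, ISO mDL) rather than this hand-rolled structure.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer (e.g. a university) signs the credential once, at issuance.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

credential = {
    "type": "DegreeCredential",
    "holder": "did:example:alex",
    "degree": "BSc Computer Science",
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

def verify(claims: dict, sig: bytes) -> bool:
    """Any verifier with the issuer's public key can check authenticity, offline."""
    try:
        issuer_public.verify(sig, json.dumps(claims, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

print(verify(credential, signature))                      # True
tampered = {**credential, "degree": "PhD Astrophysics"}
print(verify(tampered, signature))                        # False -- alteration is detected
```

Because verification needs only the issuer's public key, the verifier never has to phone home to the issuer, which is what makes the model portable across borders and sectors.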
Decentralized identity acts like an “immune system” for AI. By binding credentials to real people and organizations, it distinguishes between synthetic actors and verified entities. It also makes possible a future where AI agents can act on your behalf - booking travel, filling out forms, negotiating contracts - while remaining revocable and accountable to you.
Built on open standards, digital credentials are globally interoperable. Whether issued by a state DMV, a university, or an employer, they can be combined in a wallet and presented across contexts. For the first time, people can carry their identity across borders and sectors without relying on a single gatekeeper.
From Pilots to Infrastructure
Decentralized identity isn't just theory - it's already being deployed.
- In California, the DMV Wallet has issued more than two million mobile driver's licenses in under 18 months, alongside blockchain-backed vehicle titles for over 30 million cars.
- Utah has created a statewide framework for verifiable credentials, with privacy-first principles written directly into law. SB 260 prohibits forced phone handovers, bans tracking and profiling, and mandates that physical IDs remain an option.
- At the federal level, the U.S. Department of Homeland Security is piloting verifiable digital credentials for immigration, while NIST's NCCoE has convened banks, state agencies, and technology providers, including SpruceID, to define standards.
- Over 250 TSA checkpoints already accept mobile IDs from seventeen states, and adoption is expected to double by 2026.

These examples show that decentralized identity is moving from pilot projects to infrastructure, just as HTTPS went from niche to invisible plumbing for the web.
Why It Matters Now
We are at a crossroads. On one side, centralized systems continue to create single points of failure - massive databases waiting to be breached, platforms incentivized to surveil, and users with no say in the process. On the other, decentralized identity offers resilience, interoperability, and empowerment.
For governments, it reduces fraud and strengthens democratic resilience. For businesses, it lowers compliance costs and builds trust. For individuals, it restores autonomy and privacy.
This isn’t just a new login model. It’s the foundation for digital trust in the 21st century - the bedrock upon which free societies and vibrant economies can thrive.
This article is part of SpruceID’s series on the future of digital identity in America.
Subscribe to be notified when we publish the next installment.
The public transit sector is undergoing a significant digital transformation, consolidating operations under the vision of Mobility-as-a-Service (MaaS). This shift promises passenger convenience through integrated mobile ticketing and Account-Based Ticketing (ABT) systems, but it simultaneously introduces a critical vulnerability: the rising threat of mobile fraud and revenue leakage.
For transit operators, the stakes are substantial. Revenue losses from fare evasion and ticket forgery, ranging from simple misuse of paper tickets to sophisticated man-in-the-middle attacks, can significantly impact the sustainability of MaaS and the ability to reinvest in services.
Traditional authentication methods are proving insufficient for the complexity of modern, multimodal transit:
- NFC: Requires significant, capital-intensive infrastructure replacement, which creates a high barrier to entry and slows deployment.
- QR Codes: Prone to fraud, easily duplicated, and a source of friction that slows passenger throughput at peak hours.
- BLE: Relies on robust cellular connectivity, which is often unavailable in critical transit environments, such as underground tunnels or moving vehicles.

The strategic imperative for any transit authority or MaaS provider is to adopt a hardware-agnostic, software-defined proximity verification solution that is secure, fast, and works reliably regardless of network availability.
The Strategic Imperative: Securing the Transaction at the Point of Presence
The sophistication of mobile fraud is escalating, posing a threat to the integrity of digital payment systems. Fraudsters exploit vulnerabilities, such as deferred payment authorization, to use compromised credentials repeatedly.
The solution requires a layer of security that instantly validates both the physical proximity and digital identity of the passenger. LISNR, as a worldwide leader in proximity verification, delivers this capability by transforming everyday audio components into secure transactional endpoints.
Technical Solution: Proximity Authentication with Radius® and ToneLock
LISNR's technology provides a secure, reliable, and cost-effective foundation for next-generation transit ticketing and ticket validation. This is achieved through the Radius® SDK, which facilitates ultrasonic data-over-sound communication, and the proprietary ToneLock security protocol.
Proximity Validation with Radius
The Radius SDK is integrated directly into the transit agency's mobile application and installed as a lightweight software component onto existing transit hardware equipped with a speaker or microphone (e.g., fare gates, information screens, on-bus systems).
- Offline Capability: The MaaS application embeds the user's ticket data in an ultrasonic tone for fast data exchange. Crucially, the tone generation and verification process can occur entirely offline, ensuring that ticketing and payment validation remain functional and sub-second fast, even in areas with zero network coverage.
- Hardware-Agnostic Deployment: Since Radius only requires a standard speaker and microphone, it eliminates the high cost and complexity of deploying proprietary NFC hardware, allowing for rapid and scalable deployment across an entire fleet or network.

Security for Fraud Prevention
To combat the growing threat of mobile fraud, LISNR enables ecosystem leaders to deploy multiple advanced measures directly into the ultrasonic transaction:
- ToneLock Security: Every Radius transaction can be protected by ToneLock, a proprietary tone security protocol. Only the intended receiver, with the correct, pre-shared key, can demodulate and authenticate the tone.
- AES256 Encryption: LISNR also offers the ability for developers to add the security protocol trusted by governments worldwide, AES-256 encryption, to all emitted tones.

By folding these features into mobility ecosystems, transit providers can ensure a secure and scalable solution for their ticketing infrastructure.
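To give a rough sense of what encrypting a ticket payload before it is modulated into a tone could look like, here is a generic AES-256-GCM sketch using the pyca/cryptography package. This is not the Radius or ToneLock API; the payload fields, key handling, and nonce scheme are illustrative assumptions only. The point is that a validator with a pre-provisioned key can decrypt, authenticate, and check the ticket with no network call.

```python
import json
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key provisioned to both the ticketing backend and the on-vehicle validator ahead of time,
# so validation can happen fully offline.
TICKET_KEY = AESGCM.generate_key(bit_length=256)

def encode_ticket(ticket: dict) -> bytes:
    """Encrypt the ticket payload; the result is what would be embedded in the ultrasonic tone."""
    nonce = os.urandom(12)  # unique per ticket presentation
    sealed = AESGCM(TICKET_KEY).encrypt(nonce, json.dumps(ticket).encode(), b"transit-ticket-v1")
    return nonce + sealed

def validate_ticket(blob: bytes) -> dict:
    """Runs on the fare gate or on-bus validator, entirely offline."""
    nonce, sealed = blob[:12], blob[12:]
    ticket = json.loads(AESGCM(TICKET_KEY).decrypt(nonce, sealed, b"transit-ticket-v1"))
    if ticket["expires"] < time.time():
        raise ValueError("ticket expired")
    return ticket

blob = encode_ticket({"rider_id": "abc123", "product": "day-pass", "expires": time.time() + 3600})
print(validate_ticket(blob))  # decrypts, authenticates, and checks expiry with zero network coverage
```

Because AES-GCM authenticates as well as encrypts, a forged or altered payload fails to decrypt at the gate, which is the fraud-prevention property the section above describes.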
The Top Business Values of Ultrasonic Proximity in Transit
For forward-thinking transit agencies and MaaS providers, adopting LISNR's technology offers tangible operational and financial advantages:
- Reduced Capital and Operational Expenditure. Business Value: Eliminates the need for expensive, proprietary NFC reader hardware replacement and maintenance. Impact on ROI: Lowered infrastructure cost and faster time-to-market for new ticketing solutions.
- Enhanced Security and Revenue Protection. Business Value: ToneLock and encryption provide an advanced, off-network security layer for ticket and payment authentication. Impact on ROI: Significant reduction in fare evasion, fraud, and revenue leakage, directly increasing financial stability.
- Superior Passenger Throughput and Experience. Business Value: Sub-second authentication regardless of connectivity or weather conditions. Impact on ROI: Increased rider throughput and satisfaction, encouraging greater adoption of digital ticketing and MaaS.
- Future-Proof and Scalable Platform. Business Value: Provides a flexible, software-defined foundation that easily integrates with new Account-Based Ticketing (ABT) and payment models. Impact on ROI: Ensures longevity of infrastructure and adaptability to future urban mobility standards.

By integrating the Radius SDK into their existing platform, transit operators secure their revenue, eliminate infrastructure debt, and deliver the seamless, high-security experience modern passengers demand.
Are you interested in how Radius can provide an additional revenue stream while riders are onboard (e.g., proximity marketing)? Are you using a loyalty system to capture and reward your most loyal riders? Want to learn more about how Radius works in your ecosystem? Fill out the contact form below to get in touch with an ultrasonic expert.
The post The New Transit Security Mandate appeared first on LISNR.
“The Internet is too big to fail, but it may be becoming too big to hold together as one.”
Many of the people reading this post grew up believing in, and expecting, a single, borderless Internet: a vast network of networks that let us talk, share, learn, and build without arbitrary walls. I like that model, probably because I am a globalist, but I don't think that's where the world is heading. In recent years, laws, norms, infrastructure, and power have been pulling in different directions, driving us increasingly towards a fragmented Internet. This is a reality that is shaping how we connect, what tools we use, and who controls what.
In this post, I talk about what fragmentation is, how it is happening, why it matters, and what cracks in the system may also open up room for new kinds of opportunity. It’s a longer post than usual; there’s a lot to think about here.
A Digital Identity Digest The End of the Global Internet Play Episode Pause Episode Mute/Unmute Episode Rewind 10 Seconds 1x Fast Forward 30 seconds 00:00 / 00:16:34 Subscribe Share Amazon Apple Podcasts CastBox Listen Notes Overcast Pandora Player.fm PocketCasts Podbean RSS Spotify TuneIn YouTube iHeartRadio RSS Feed Share Link EmbedYou can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.
And be sure to leave me a Rating and Review!
What is "fragmentation"?
Fragmentation isn't a single event with a single definition; it's a multi-dimensional process. Research has identified at least three overlapping types:
- Technical fragmentation: differences in protocols, infrastructure, censorship, filtering; sometimes entire national "gateways" or shutdowns.
- Regulatory / governmental fragmentation: national laws around data flows, privacy, platform regulation, online safety, and content moderation diverge sharply.
- Commercial fragmentation: companies facing divergent rules in different markets (privacy, liability, content) so they adapt differently; global products become "local versions."

A primer from the United Nations Institute for Disarmament Research (UNIDIR) published in 2023 lays this out in detail. The authors of that paper argue that Internet fragmentation is increasingly something that influences cybersecurity, trade, national security, and civil liberties. Another study published not that long ago in SciencesPo suggests that fragmentation is shifting from inward-looking national control toward being used as a tool of power projection; i.e. countries not only fence their own access, but use fragmented rules or control of infrastructure to impose influence beyond their borders.
Evidence: How fragmentation is happening
Sounds like a conspiracy theory, doesn't it? Here are some concrete examples and trends.
Divergent regulatory frameworks
- The European Union, China, and the U.S. are increasingly adopting very different regulatory models for digital platforms, data privacy, and online content. The "prudent regulation" approach in the EU (which tends toward pre-emptive checks, heavy regulation) contrasts with the more laissez-faire (or "permissionless") philosophy in parts of the U.S. or other jurisdictions. I really like how that's covered in the Fondation Robert Schuman's paper, "Digital legislation: convergence or divergence of models? A comparative look at the European Union, China and the United States."
- Countries around the world have passed or are passing online safety laws, content moderation mandates, or rules that give governments broad powers over what gets seen, what stays hidden, and what content is restricted. Check out the paper published in Tech Policy Press, "Amid Flurry of Online Safety Laws, the Global Online Safety Regulators Network is Growing," for a lot more on that topic.
- Regulatory divergence shows up not only in content but in infrastructure: for example, laws about mandatory data localization, national gateways, and network sovereignty. These increase the cost and complexity for cross-border services. Few organizations know more about that than the Internet Society, which has an explainer entirely dedicated to Internet fragmentation.

While this divergence creates friction for global platforms, it also produces positive spillovers. The "Brussels Effect" has pushed companies to adopt GDPR-level privacy protections worldwide rather than maintain separate compliance regimes, raising the baseline of consumer trust in digital services. At the same time, the OECD's latest Economic Outlook stresses that avoiding excessive fragmentation will require countries to cooperate in making trade policy more transparent and predictable, while also diversifying supply chains and aligning regulatory standards on key production inputs.
Taken together, these trends suggest that even in a fragmented environment, stronger rules in one region can ripple outward, whether by shaping global business practices or by encouraging cooperation to build resilience. Of course, this can work both positively and negatively, but let’s focus on the positive for the moment. “Model the change you want to see in the world” is a really good philosophy.
Technical / infrastructural separation
- National shutdowns or partial shutdowns are still used by governments during conflict, elections, or periods of dissent. The Internet Society's explainer catalogues many examples, but even better is their Pulse table that shows where there have been Internet shutdowns in various countries since 2018.
- Some countries are building or mandating their own national DNS, national gateways, or other chokepoints—either to control content, enforce digital sovereignty, or "protect" their citizens. These create friction with global addressing, with trust, and with how routing and redundancy work. More information on that is, again, in that Internet Society fragmentation explainer.

That said, fragmentation at the infrastructure level can also accelerate experimentation with alternatives. In regions that experience shutdowns or censorship, communities have adopted mesh networks and peer-to-peer tools as resilient stopgaps. Research from the Internet Society's Open Standards Everywhere project, no longer a standalone project but still offering interesting observations, shows that these architectures, once fringe, are being refined for broader deployment, pushing the Internet to become more fault-tolerant.
Commercial & trade-driven fragmentation
- Platforms serving global audiences must adapt to local laws (e.g., privacy laws, content moderation laws), so they build variants. The result is that features, policies, even user experience diverge by country. I'm not even going to try to link to a single source for that. It's kind of obvious.
- Restrictive trade policies (export controls, sanctions) affect what hardware/software can move across borders. Fragmentation in what devices can be used, which cloud services, etc., often comes from supply-chain / trade policy rather than purely from regulation. The UNIDIR primer notes how fragmentation, when applied to cybersecurity or export controls, ripples through global supply chains.

Yet duplication of supply chains can also help build redundancy. The CSIS reports on semiconductor supply chains note (see this one as an example) that efforts to diversify chip fabrication beyond Taiwan and Korea, while expensive, reduce systemic risks. Similarly, McKinsey's "Redefining Success: A New Playbook for African Fintech Leaders" highlights how African fintechs are thriving by tailoring products to fragmented regulatory and infrastructural environments, turning local constraints into opportunities for growth in areas like cross-border payments, SME lending, and embedded finance. There's a lot to study there in terms of what opportunity might look like.
I’d also like to point to the opportunities described in the AMRO article “Stronger Together: The Rising Relevance of Regional Economic Cooperation” which describes how ASEAN+3 member states are using frameworks like the Regional Comprehensive Economic Partnership (RCEP), Economic Partnership Agreements, and institutions such as the Chiang Mai Initiative to deepen trade, investment, financial ties, and regulatory cooperation. These are not just formal treaties but mechanisms for cross-border resilience, helping supply chains, capital flows, and finance networks absorb external shocks. This blog post is already crazy long, so I won’t continue, but there is definitely more to explore with how to meet this type of fragmentation with a more positive mindset.
Why does it matter?
Why should we care that the Internet is fragmenting? If there are all sorts of opportunities, do we even have to worry at all? Well, yes. As much as I'm looking for the opportunities to balance the breakages, we still have to keep in mind a variety of consequences, some immediate, some longer-term.
Loss of universality & increased friction
The Internet's power comes from reach and interoperability: you could publish a website or send an email in Boston and someone in Nairobi could see it without special treatment. But as more rules, filters, and walls are inserted, that becomes harder. Services may be blocked, slowed, or restricted. Different regulatory compliance regimes will force more localization of infrastructure and data. Users may need to use different tools depending on where they are. Work that used to scale globally becomes more expensive.
However, constraints often fuel creativity. The World Bank has documented how Africa’s fintech ecosystem thrived under patchy infrastructure, leapfrogging legacy systems with mobile-first solutions. India’s Aadhaar program is another case where local requirements drove innovation that now informs digital identity debates globally. Fragmentation can, paradoxically, widen the palette of local solutions while reducing the palette of global solutions.
Security, surveillance, and trust challenges
Fragmentation creates new attack surfaces and risk vectors. For example:
- If traffic must go through national gateways, those are chokepoints for surveillance, censorship, or abuse.
- If companies cannot use global infrastructure (CDNs, DNS, encryption tools) freely, fragmentation may force weaker substitutes or non-uniform security practices.
- Divergent laws about encryption or liability may reduce trust in cross-border services or require large overheads. The UNIDIR primer emphasizes these concerns.

Economic costs and innovation drag
Fragmentation means duplicate infrastructure: separate data centres, duplicated content moderation teams, local legal teams. That's inefficient. Products and platforms may need multiple variants, reducing scale economies. Cross-border collaboration, which has been a source of innovation (in open source, research, startups), becomes more legally, technically, and culturally constrained.

Unequal access and power imbalances
Countries or regions with weaker regulatory capacity, limited infrastructure, or less technical expertise may be less able to negotiate or enforce their interests. They could be "locked out" of parts of the Internet, or forced to use inferior services. Big tech companies based in powerful jurisdictions may be able to shape global norms (via export, legal reach, or market power) in ways that reflect their values, often without much input from places with less power. This may further amplify inequalities.

What counters or moderating factors exist?
Fragmentation is neither unilateral nor total. There are forces, capacities, and policies that push in the opposite direction, or at least slow things down.
- Standardization bodies / global protocols. The Internet Engineering Task Force (IETF), the W3C, ICANN, etc., continue to undergird a lot of the technical plumbing (DNS, HTTP, TCP/IP, SSL/TLS, etc.). These are not trivial to replace, though it seems like some regional standards organizations are trying.
- Commercial incentives for compatibility. Many platforms serving global markets prefer to maintain a common codebase, or to comply with the most restrictive regulation so it applies everywhere (bringing us back to the Brussels Effect). If a regulation (e.g., privacy law) in one place is strong, firms may just adopt it globally rather than maintain separate versions.
- User demand and expectation. Users expect services to "just work" across borders—social media, video conferencing, cloud tools. If fragmentation hurts usability, there is political/popular pushback.
- Cross-border political/institutional cooperation. Trade agreements, multi-stakeholder governance efforts, and international bodies sometimes negotiate common frameworks or minimum standards (e.g., data flow provisions, privacy protections, cybersecurity norms).

These moderating factors mean that fragmentation is not an all-or-nothing state; it will be uneven, partial, and contested.
What we (you, we, society) can do to navigate & shape the outcome
Fragmentation is already happening; how we respond matters. Here are some ways to think about shaping the future so that it is not simply divided, but more resilient and fair.
- Advocate for interoperable baselines. Even as parts diverge, there can be minimum standards—on encryption, addressing, data portability, etc.—that maintain some baseline interoperability. This ensures users don't fall off the map just because their country has different laws.
- Design for variation. Product and service designers need to think early about how their tools will work under different regulatory, infrastructural, and socio-political regimes. That means thinking about offline/online tradeoffs, degraded connectivity, local content, privacy expectations, etc.
- Invest in local capability. Regions with weaker infrastructure, less regulatory capacity, or a smaller technical workforce should invest (or have investment from partners) in building up their tech ecosystems, including data centers, networking, local content, and developer education. This mitigates the risk of being passive recipients rather than active shapers.
- Cross-bloc cooperation & treaties. Trade agreements or regional alliances for digital policies could harmonize rules where possible (e.g., privacy, data flows, cybersecurity), reduce compliance burden, and keep doors open across regions.
- New infrastructural experiments. Thinking creatively: mesh networks, decentralized Internet architecture, peer-to-peer content distribution, alternative routing, redundancy in undersea cables, etc. In the context of fragmentation, some of these may move from research curiosities to vital infrastructure.
- Policy awareness & public engagement. People often take the openness of the Internet for granted. Public debates and awareness of policy changes (online safety, surveillance, digital sovereignty) matter. A more informed citizenry can push for policies that preserve openness and resist overly restrictive fragmentation.
- Anchor in human rights and global goals. Fragmentation debates can't just be about pipes and protocols. They must also reflect the fundamentals of an ethical society: protecting human rights, ensuring equitable access, and aligning with global commitments like the United Nations Sustainable Development Goals (SDGs) and the Global Digital Compact. These frameworks remind us that digital infrastructure isn't an end in itself. It's a means to advance dignity, inclusion, and sustainable development. Even as the Internet fragments, grounding decisions in these principles can help keep diverse systems interoperable not just technically, but socially.

Recalibration
The "global Internet" is fragmenting, if it ever really existed at all. That's a statement I'm not comfortable with, but which I'm also not going to approach as the ultimate technical tragedy. Fragmentation brings friction, risks, and challenges, sure. It threatens universality, raises security concerns, and could amplify inequalities. But it also forces us to imagine new architectures, new modes of cooperation, new ways to build more resilient and locally grounded technologies. It means innovation might look different: less about global scale, more about boundary-crossing craftsmanship, local resilience, hybrid systems.
In the end, fragmentation isn’t simply an ending. It may be a recalibration. The question is: will we let it just fragment into chaos, or guide it into a future where multiple, overlapping digital worlds still connect, where people everywhere are participants and not just objects of regulation?
Question for you, the reader: If the Internet becomes more of a patchwork than a tapestry, what kind of bridges do you think are essential? What minimum interoperability, trust, and rights should be preserved across borders?
If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
Hi everyone, and welcome back to the Digital Identity Digest. Today's episode is called The End of the Global Internet.
This episode is longer than usual because there’s a lot to unpack. The global Internet, as we once imagined it, is changing rapidly. While it isn’t collapsing overnight, it is fragmenting. That fragmentation brings real risks — but also some surprising opportunities.
Throughout this month, I’ll be publishing slightly longer episodes, alongside detailed blog posts with links to research and source material. I encourage you to check those out as well.
What Fragmentation Really Means
[00:01:15] Many of us grew up hoping for a single, borderless Internet: a vast network of networks without arbitrary firewalls. I've always loved that model, perhaps because I'm a globalist at heart. But that's not where we're heading.
In recent years, laws, cultures, infrastructure, and politics have pulled the Internet in different directions. The result? An increasingly fragmented landscape.
Researchers describe three key dimensions of fragmentation:
- Technical fragmentation – national firewalls, alternative DNS systems, and content filtering that alter the "plumbing" of the Internet.
- Regulatory fragmentation – divergent laws on privacy, content, and data, such as the GDPR compared with lighter-touch U.S. approaches.
- Commercial fragmentation – companies restricting services by geography, whether for compliance, cost, or strategy.

Together, these layers create friction in what once felt like a seamless system.
Evidence of Fragmentation in Practice
[00:04:18] Let's look at how fragmentation is showing up.
- Regulatory divergence – The EU, China, and the U.S. are moving in very different directions. The EU emphasizes heavy regulation and precaution. The U.S. takes a lighter (but shifting) approach. China uses regulation to centralize control. Interestingly, strict laws often set global baselines. The Brussels Effect demonstrates how GDPR effectively raised global privacy standards, since it's easier for companies to comply everywhere.
- Technical fragmentation – Governments are experimenting with independent DNS systems, national gateways, and even Internet shutdowns during protests or elections. On the flip side, this has fueled mesh networks and decentralized DNS, once fringe ideas that now serve as resilience tools.
- Commercial fragmentation – Supply chains and trade policy drive uneven access to hardware and cloud services. For example: semiconductor fabs are being built outside Taiwan and Korea; new data centers are emerging in Africa and Latin America; and African fintech thrives precisely because local firms adapt to fragmented conditions.

McKinsey projects African fintech revenues will grow nearly 10% per year through 2028, showing how local innovation can thrive in fragmented markets.
Why Fragmentation Matters
[00:06:45] Fragmentation has profound consequences.
- Universality weakens – The original power of the Internet was its global reach. Fragmentation erodes that universality.
- Security and trust challenges – Choke points and divergent encryption weaken cross-border trust.
- Economic costs – Companies must duplicate infrastructure and compliance, slowing innovation.
- Inequality deepens – Weaker regions risk being left behind, forced to adopt systems imposed by stronger players.

Moderating Factors
[00:08:30] Fragmentation isn't absolute. Several forces hold the Internet together:
- Standards bodies like IETF and W3C keep core protocols aligned.
- Companies often adopt the strictest regimes globally, simplifying compliance.
- Users expect services to work everywhere — and complain when they don't.
- Regional cooperation (e.g., EU, ASEAN, African Union) helps maintain partial cohesion.

These factors form the connective tissue that prevents a total collapse.
Possible Future Scenarios
[00:09:45] Looking ahead, I see four plausible scenarios:
1. Soft fragmentation – The Internet stays global, but friction rises. Platforms launch regional versions and compliance costs increase. Opportunity: stronger local ecosystems and regional innovation.
2. Regulatory blocks – Countries form digital provinces with internal harmony but divergence elsewhere. Opportunity: specialization (EU in privacy tech, Africa in mobile-first innovation, Asia in super apps).
3. Technical fragmentation – Shutdowns, divergent standards, and outages become common. Opportunity: mainstream adoption of decentralized and peer-to-peer networks.
4. Pure isolationism – Countries build proprietary platforms, national ID systems, and local chip fabs. Opportunity: preservation of local values, region-specific innovation.

What Can We Do?
[00:12:28] In the face of fragmentation, individuals, companies, and policymakers can take action:
- Advocate for interoperable baselines (encryption, addressing, data portability).
- Design for variation so systems degrade gracefully under different regimes.
- Invest in local capacity — infrastructure, skills, developer ecosystems.
- Encourage regional cooperation through treaties and data agreements.
- Experiment with alternative architectures like mesh networks and decentralized identity.
- Anchor change in human rights — align with UN SDGs, protect freedoms, and center people, not just states or corporations.

Closing Thoughts
[00:15:50] The global Internet as we knew it may be ending — but that isn't necessarily a tragedy.
Yes, fragmentation creates friction, risks, and inequality. But it also sparks resilience, innovation, and adaptation. In Africa, fintech thrives under fragmented conditions. In Europe, strong privacy laws raise global standards. In Asia, regional trade frameworks offer cooperation despite divergence.
The real question isn’t whether fragmentation is coming — it’s already here. The question is:
- What kind of fragmented Internet do we want to build?
- Which bridges are worth preserving?
- Which minimum standards — technical, ethical, social — should always cross borders?

These questions shape not only the Internet's future, but our own.
[00:18:45] Thank you for listening to the Digital Identity Digest. If you found this episode useful, please subscribe to the blog or podcast, share it with others, and connect with me on LinkedIn @hlflanagan.
Stay curious, stay engaged, and let’s keep these conversations going.
The post The End of the Global Internet appeared first on Spherical Cow Consulting.
Customer identity verification is critical for fraud prevention, compliance, and building trust in digital business.
Businesses can use layered methods (document verification, biometrics, MFA, and risk scoring) to ensure security without sacrificing user experience.
The biggest challenges include synthetic identity fraud, cross-border verification, and balancing compliance with customer convenience.
Adopting best practices like multi-layered verification, advanced AI, and risk-based frameworks ensures security while streamlining onboarding.
What Is Customer Identity Verification?
Customer identity verification confirms that customers are who they claim to be, using digital tools and data checks. It involves validating personal details and credentials against official records, documents, or biometric identifiers.
The purpose is simple: stop fraudsters at the gate while giving legitimate customers a seamless, trusted onboarding experience. Verification is no longer optional in a world where synthetic identities can be spun up with a stolen Social Security number and a fake address.
Modern verification systems use artificial intelligence, machine learning, and biometrics to increase accuracy and speed dramatically. Instead of forcing customers to wait days while documents are manually reviewed, businesses can now verify identities in minutes—or even seconds—with confidence levels above 99%.
The main types are document-based, biometric, knowledge-based, database verification, and multi-factor authentication (MFA).
- Document-based verification checks the authenticity of passports, driver's licenses, and other government IDs. Modern systems analyze holograms, fonts, and machine-readable zones (MRZs) to detect forgery attempts.
- Biometric verification leverages fingerprints, facial recognition, or iris scans. When paired with liveness detection, biometrics are far harder to spoof than traditional credentials.
- Knowledge-based authentication (KBA) relies on security questions, but with social media oversharing and widespread data breaches, attackers can easily guess or steal these answers. This method is rapidly losing relevance.
- Database verification cross-checks a customer's details against government, financial, and sanctions databases to validate legitimacy.
- MFA strengthens defenses by requiring two or more identity factors: something you know (password), something you have (token), and something you are (biometric).

Each method has strengths and weaknesses, but the most secure strategies don't pick one; they combine them into a layered, adaptive verification framework.
How Does Customer Identity Verification Work?
Verification breaks down into four stages: data collection, document assessment, identity validation, and risk assessment.
- Everything starts with data collection, where customers provide personal details, government-issued IDs, biometrics, and contact information.
- Once collected, the data moves to document assessment, where AI tools check submitted IDs for authenticity and signs of tampering. This step catches expired, altered, or synthetic documents before they go any further.
- Next is identity validation, where the information gets cross-referenced against trusted government and financial databases. Biometrics are compared to ID photos, while watchlist screenings flag individuals who could pose regulatory or fraud risks.
- Last comes risk assessment, which generates a trust score based on behavioral anomalies, device intelligence, geolocation data, and known fraud indicators.

What once stretched across days now happens in seconds, allowing organizations to seamlessly onboard good customers while quietly blocking bad actors.
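To sketch how those four stages might fit together, here is a simplified Python pipeline that stands in for real document, biometric, and database checks with placeholder functions and blends their outputs into a single trust score. The weights, thresholds, and signal names are invented for illustration; production systems rely on vendor models, liveness detection, and watchlist integrations rather than anything this simple.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    document_image: bytes
    selfie: bytes
    device_reputation: float  # 0.0 (bad) .. 1.0 (good), from device intelligence
    geo_mismatch: bool        # e.g. IP country differs from document country

# Placeholder checks; real systems call document-forensics, biometric, and database services.
def document_assessment(applicant: Applicant) -> float:
    return 0.95 if applicant.document_image else 0.0

def identity_validation(applicant: Applicant) -> float:
    face_match = 0.9 if applicant.selfie else 0.0  # stand-in for selfie-to-ID comparison
    watchlist_clear = 1.0                          # stand-in for sanctions/watchlist screening
    return min(face_match, watchlist_clear)

def risk_assessment(applicant: Applicant) -> float:
    score = applicant.device_reputation
    if applicant.geo_mismatch:
        score -= 0.3
    return max(score, 0.0)

def trust_score(applicant: Applicant) -> float:
    """Weighted blend of the pipeline stages; the weights here are arbitrary examples."""
    return round(0.4 * document_assessment(applicant)
                 + 0.4 * identity_validation(applicant)
                 + 0.2 * risk_assessment(applicant), 3)

applicant = Applicant("Alex Doe", b"<id scan>", b"<selfie>",
                      device_reputation=0.8, geo_mismatch=False)
score = trust_score(applicant)
print(score, "approve" if score >= 0.85 else "review")  # e.g. 0.9 approve
```

The structure mirrors the prose above: each stage contributes a signal, and the final decision is a threshold on the combined score rather than a single pass/fail check.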
What Are The Challenges To Customer Identity Verification?
Challenges include synthetic fraud, cross-border complexity, balancing user experience with security, advanced attack vectors, and compliance.
Synthetic identity fraud is the fastest-growing financial crime, estimated to reach $23 billion annually by 2030. Attackers stitch together real and fake data to create new “people” that slip past legacy checks.
Cross-border verification struggles with inconsistent ID standards, languages, and regulatory frameworks. A passport in Germany won’t have the same features as a driver’s license in Mexico.
User experience vs. security is a constant balancing act. Too much friction leads to legitimate users abandoning onboarding, while too little leads to attackers walking right in.
Advanced attacks like deepfakes, AI-generated voice phishing, and synthetic biometrics make fraud detection harder than ever.
Compliance obligations vary dramatically across sectors. Between the General Data Protection Regulation (GDPR) in Europe, Anti-Money Laundering (AML) rules for banks, and the Health Insurance Portability and Accountability Act (HIPAA) for healthcare, standards run the gamut, and businesses must navigate a minefield of global requirements.
The reality is that fraudsters innovate faster than regulators. That means businesses need adaptive, technology-driven defenses that evolve continuously.
What Are The Best Practices For Customer Identity Verification?
The best practices boil down to multi-layered checks, AI-driven analysis, risk-based frameworks, data security, and compliance alignment.
Multi-layered verification: Mix documents, biometrics, and databases for solid defense in depth.
Advanced AI: Use machine learning models to catch spoofing, deepfakes, and behavioral red flags in real time.
Risk-based approaches: Match verification intensity to transaction risk, with tougher checks for wire transfers and a lighter touch for low-value transactions (see the sketch after this list).
Data protection: Encrypt sensitive data, store it securely, and run regular audits to stay compliant. Or, with blockchain solutions like 1Kosmos, skip centralized data storage entirely and eliminate that major attack vector.
Regulatory alignment: Keep up with changing KYC/AML requirements and privacy laws around the world.
Get these right, and you’ll block fraud while making onboarding so quick and smooth that customers actually choose businesses with stronger verification over the competition.
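The risk-based idea above can be sketched very simply: pick the set of checks from the transaction’s value and the customer’s risk tier. The tiers, thresholds, and check names below are assumptions for illustration only.

```python
def required_checks(amount: float, customer_risk: str) -> list[str]:
    """Illustrative risk-based policy: heavier verification for riskier actions."""
    if customer_risk == "high" or amount >= 10_000:       # e.g. large wire transfers
        return ["document", "biometric_liveness", "database", "manual_review"]
    if amount >= 1_000:
        return ["document", "biometric_liveness"]
    return ["device_check"]                                # low value, low friction

print(required_checks(25_000, "low"))
# -> ['document', 'biometric_liveness', 'database', 'manual_review']
```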
Why Is Customer Identity Verification Important To Businesses?
It prevents fraud, ensures compliance, builds trust, and drives operational efficiency. By verifying users before granting access, businesses can stop account takeovers, impersonation scams, and synthetic identities. But the benefits go beyond just security. Regulatory compliance, from KYC and AML requirements in financial services to HIPAA rules in healthcare, makes verification a must-have for operations.
In an environment where breaches dominate headlines, demonstrating rigorous verification builds confidence with partners and customers alike.
How Should My Business Verify Customer Identities Step By Step?
Businesses should follow a structured six-step implementation framework.
Assess requirements: Figure out your fraud risks, compliance mandates, and customer demographics.
Choose methods: Based on your specific risk profile, select verification tools such as customer document verification or biometrics.
Implement technology: Set up APIs, document scanning, and biometric integrations that scale without disrupting your existing systems.
Design journeys: Create user-friendly flows that reduce friction without compromising security.
Train staff: Make sure employees can escalate suspicious cases, conduct manual reviews, and help customers when needed.
Monitor and optimize: Continuously tune based on fraud detection outcomes, customer drop-off rates, and regulatory changes.
Following this framework ensures verification is both secure and customer-centric.
What Are The Common Customer Identity Verification Methods?
Standard methods include document scanning, facial recognition, fingerprint scans, SMS OTPs, database checks, and MFA.
Some legacy methods are fading. KBA and SMS one-time passcodes, for example, are easily compromised. Attackers can scrape answers from social media or intercept text messages.
By contrast, modern approaches like AI-powered biometrics and blockchain-backed credentials are gaining traction. They’re faster, harder to spoof, and more transparent for users. Forward-looking businesses are already adopting reusable digital identity wallets, allowing customers to authenticate seamlessly across multiple services without re-verifying.
Trust 1Kosmos Verify for Identity Verification
Passwords and outdated MFA create friction for customers, leaving the door open to fraud, account takeovers, and synthetic identities. These obsolete methods slow onboarding, frustrate legitimate users, and fail to deliver the trust today’s digital economy demands.
1Kosmos Customer solves this by replacing weak credentials with a mighty, privacy-first digital identity wallet backed by deterministic identity proofing and public-private key cryptography. In just one quick, customizable registration, legitimate customers are verified with 99%+ accuracy and given secure, frictionless access to services, while fraudsters are stopped at the first attempt. From instant KYC compliance to zero-knowledge proofs that protect sensitive data, the result is a seamless authentication experience that customers love and businesses can rely on.
Ready to eliminate fraud, streamline onboarding, and delight your customers? Discover how 1Kosmos Customer can transform your digital identity strategy today.
The post Customer Identity Verification: Overview & How to Do It Right appeared first on 1Kosmos.
Artificial Intelligence is no longer science fiction. From unlocking your phone to passing through airport security, AI face recognition has become part of daily life. It is powerful, practical, and sometimes a little controversial. But how does it actually work, and where is it headed? Let’s break it down in simple terms.
What is AI Face Recognition
At its core, AI face recognition is a technology that identifies or verifies a person using their facial features. Think of it as a digital detective: it examines your face much as an examiner studies a fingerprint, comparing unique details like the distance between your eyes or the curve of your jaw.
This isn’t just about matching a selfie to your phone. The technology is also applied in banking apps, airports, healthcare, and even retail stores. It is driven by facial AI models trained on massive datasets, allowing systems to quickly learn the differences and similarities between millions of faces.
How AI Face Recognition Works
The process might sound complex, but let’s simplify it. The system works in three big steps:
Face detection: the system first locates any faces in an image or video frame.
Feature extraction: each detected face is converted into a numerical template based on distinctive facial landmarks.
Face matching: that template is compared against stored templates to verify or identify the person.
These steps are powered by artificial intelligence face recognition algorithms that become more accurate over time.
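For readers who want to see those three steps in code, here is a small sketch using the open-source face_recognition Python library (not Recognito’s own SDK); the image file names are placeholders, and the 0.6 tolerance is simply the library’s common default rather than a recommendation.

```python
# Illustrative three-step pipeline using the open-source `face_recognition`
# library (pip install face_recognition); image paths are placeholders.
import face_recognition

# Step 1: detection - find faces in each image
enrolled_img = face_recognition.load_image_file("enrolled_id_photo.jpg")
probe_img = face_recognition.load_image_file("live_selfie.jpg")
enrolled_boxes = face_recognition.face_locations(enrolled_img)
probe_boxes = face_recognition.face_locations(probe_img)

# Step 2: feature extraction - convert each detected face into a 128-d template
enrolled_encoding = face_recognition.face_encodings(enrolled_img, enrolled_boxes)[0]
probe_encoding = face_recognition.face_encodings(probe_img, probe_boxes)[0]

# Step 3: matching - compare templates; lower distance means more similar
distance = face_recognition.face_distance([enrolled_encoding], probe_encoding)[0]
match = face_recognition.compare_faces([enrolled_encoding], probe_encoding,
                                       tolerance=0.6)[0]
print(f"distance={distance:.3f}, match={match}")
```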
Accuracy and Global Benchmarks
Not all systems are created equal. Some are lightning fast with near-perfect accuracy, while others struggle in low light or with diverse facial features. The NIST Face Recognition Vendor Test (FRVT) has become the gold standard for measuring how well different systems perform.
Visit NIST FRVT for performance data, and explore detailed evaluation results on FRVT 1:1 tests. These benchmarks give businesses and governments confidence before deploying large-scale projects.
Everyday Uses of Facial AI
You may not notice it, but facial AI is everywhere. Here are some real-world applications:
Smartphones: Unlocking devices without passwords.
Airports: Quicker boarding with automated gates.
Healthcare: Patient verification for secure records.
E-commerce: AI face search for trying products virtually.
Banking: Identity checks for fraud prevention.
Fun fact: Some retailers even use AI facial systems to analyze customer demographics and improve shopping experiences.
Privacy Concerns and Regulations
With great power comes great responsibility. While the technology is convenient, it also raises concerns about surveillance and misuse. Governments are stepping in with data protection laws like the GDPR to ensure individuals have control over their biometric data.
Companies using AI face recognition must follow strict compliance rules such as:
Informing users how their data will be used.
Allowing opt-outs where possible.
Storing encrypted biometric data securely.
Failure to follow these rules can lead to massive fines and public backlash.
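As one hedged illustration of the last point, storing encrypted biometric data, the snippet below uses the widely available cryptography package’s Fernet recipe to encrypt a template at rest; a real deployment would add key management (KMS/HSM, rotation) and access controls, which are out of scope here.

```python
# Minimal sketch of encrypting a biometric template at rest using the
# `cryptography` package's Fernet recipe (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: fetched from a KMS, never hard-coded
fernet = Fernet(key)

template = b"\x12\x34..."          # placeholder bytes standing in for a face/fingerprint template
ciphertext = fernet.encrypt(template)   # authenticated encryption (AES-CBC + HMAC)
# ... persist `ciphertext`; only the ciphertext ever reaches storage ...
restored = fernet.decrypt(ciphertext)
assert restored == template
```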
Challenges Facing Face Detection AI
Even with rapid progress, the technology isn’t flawless. Common challenges include:
Bias in datasets: Some systems perform better on certain skin tones.
Spoofing attempts: Photos or videos tricking the system.
Environmental issues: Poor lighting or extreme angles can reduce accuracy.
To tackle spoofing, researchers are exploring liveness detection techniques, making sure the system knows the difference between a real human face and a photo.
The Future of AI and Face Recognition
Looking ahead, experts believe AI face recognition will only get smarter. Here are a few trends shaping the future:
Edge computing: Processing done on local devices for speed and privacy.
Cross-industry adoption: From gaming to education, new uses are emerging.
Open-source innovation: Platforms like Recognito GitHub encourage collaboration and transparency.
As systems improve, the balance between convenience and privacy will continue to dominate the conversation.
Final Thoughts
AI face recognition is changing the way the world verifies identity. It simplifies daily tasks, strengthens security, and opens doors to new possibilities. Yet, it also comes with challenges like privacy risks and the need for unbiased data. With organizations such as NIST setting global benchmarks and strict regulations like GDPR shaping policy, the future looks promising but carefully monitored.
And as innovation keeps moving forward, one name that continues to contribute in this space is Recognito.
Frequently Asked Questions
1. What is AI face recognition used for?
AI face recognition is used for unlocking smartphones, airport security checks, banking identity verification, and even retail experiences like virtual try-ons.
2. How accurate is face detection AI?
Accuracy depends on the system. Some advanced tools tested by NIST FRVT report accuracy rates above 99 percent, especially in controlled environments.
3. Can AI face search find someone online?
AI face search can match faces within specific databases or platforms, but it cannot scan the entire internet. Accuracy depends on the size and quality of the database.
4. Is AI facial recognition safe to use?
Yes, when regulated properly. Systems that follow privacy rules like GDPR and use encryption keep user data protected.
5. What is the difference between face match AI and face detection AI?
Face detection AI only spots if a face is present. Face match AI goes further by verifying if the detected face matches an existing one in the database.
The post How AI is Enhancing Sanctions Screening and Adverse Media Monitoring appeared first on uqudo.
I want to share the Holochain Foundation’s evolving strategic approach to our subsidiary organizations, Holo, and Unyt.
Strategic work always involves paying attention to the match between your efforts and where the world is ready to receive them. Since our inception there has always been a small group of supporters who have understood the potential and need for the kind of deep p2p infrastructure that we are building, which allows for un-intermediated, direct interactions and transactions of all kinds. But at this moment in time we are seeing a new convergence.
As Holochain is maturing significantly, the mainstream world is also maturing into understanding the need for p2p networks and processes. As my colleague Madelynn Martiniere says: “we are meeting the moment and the moment is meeting us.”
And there’s a key domain in which this is happening: the domain of value transfer.
The Unyt Opportunity
As you know, the foundation created a subsidiary, Unyt, to focus on building HoloFuel, the accounting system for Holo’s hosting platform. But it turns out that the tooling Unyt built has a far broader application than we had initially realized. This is part of the convergence, and also a huge opportunity.
Unyt’s tooling turns out to be what people are calling “payment rails”: generalized tooling for value tracking, and because it’s built on Holochain, it’s already fully p2p. There is a huge opportunity for this technology to bring the deep qualitative value that p2p provides (increased transparency, agency, reduced cost, and privacy) at enormous volume: digital payments and transactions are counted in the trillions.
The implications are huge, and they need and deserve the focus of the Foundation and our resources so we can fully develop the opportunity ahead of us.
Interactions with Integrity: Powered by Holochain
Our original mission was to provide the underlying cryptographic fabric to make decentralized computing both easy and real - and ultimately, at a scale that could have a global impact.
That mission remains intact. The evolution we’re sharing today is not only directly connected to our original strategy and a logical extension of it; we also believe it will, over time, substantially increase the scale of and opportunities for anyone and everyone within the Holochain ecosystem.
When we introduced the idea of Holochain and Holo to the world in December of 2017, our goal was to provide a technology infrastructure that allowed people to own their own data, control their identities, choose how they connect themselves and their applications to the online world, and intrinsic to all of the above, interact and transact directly with each other as producers and consumers of value.
The foundation of the Holochain ecosystem has thus always required establishing a payment system where every transaction is an interaction with integrity: value is supported by productive capacity, validated on local chains (vs. global ledgers) by a unit of accounting - in our case, HoloFuel - and value and supply are grounded in a real-world service with practical value.
The Holochain Foundation entity charged with developing and delivering the technology infrastructure for this payment system is Unyt Accounting.
For almost a year now, the team at Unyt has been quietly working hard to develop the payment rails software that will permit users to build and deploy unique currencies (including HoloFuel), allow those currencies to circulate and interact, and ensure the integrity of every transaction. As it turns out, we got more than we bargained for, in the best possible way.
Meaning: in Unyt, we have software that not only enables HoloFuel but also gives us a brilliant way to link into both the blockchain and crypto world and the non-crypto world. As Holochain matures, with the application of Unyt technology, we see a major opportunity in the peer-to-peer payments space, and a chance to lead the non-financial transaction space.
These are, objectively, huge markets, as Unyt products and tools are not only aimed squarely at solving real-world crypto accounting and payment challenges, but will combine to create the infrastructure needed to launch HoloFuel, and additionally address multiple real-world use cases for anyone interested in high-integrity, decentralized, digital interactions.
Given Unyt’s progress, we arrived at a point where it became clear to everyone on our leadership team that it was time to make an important strategic decision about where to best devote our focus, time, and resources.
Strategic Decisions and Our Path ForwardHere’s where we landed:
When we reorganized Holo Ltd. last year, it was because we wanted to spur growth, and felt having a focus on a commercial application could expand the number of end users. But, it also put us into competition with some of the largest and best-capitalized tech companies on the planet.
We haven’t gotten enough traction yet for this to be our sole strategy. As part of our ongoing evaluation over the last months, the Holo dev team pursued an exploration of a very different approach - both technical and structural - to deploying Holochain always-on nodes.
Holo is calling it Edge Node, an open-source, OCI-compliant container image that can be run on any device, physical or virtual, to serve as an always-on node for any hApp.
Today, Edge Node is available on GitHub for technically savvy folks to use. You can run the Docker container image or install via the ISO onto HoloPorts or any other hardware.
What’s different about this experiment is that it appeals to a much wider audience - those familiar with running Docker containers, rather than the smaller audience who know Nix. And we’re releasing it now, as open-source, and actively seeking immediate feedback from the community on how this might evolve and contribute to Holo’s goals.
Second, it is equally clear we need to accelerate the timeline for Unyt. Unyt’s software underpins the accounting infrastructure necessary to create and launch HoloFuel, and subsequently allow owners of HOT to swap into it. More broadly, the multiple types of connectivity Unyt can foster have enormous potential to influence the size, growth, and overall value of Holochain - it is the substrate of peer-to-peer local currencies, and the foundation for future DePIN alliances.
This acceleration is already under way - in fact, Unyt has released its first community currency app, Circulo, which is meant for real-world use but also acts as proof-of-concept for the broader Unyt ecosystem.
Third, and finally, the Holochain Foundation will continue to focus on the stability and resilience of the Holochain codebase, prioritize key technical features required for the currency swap execution, and remain at the center of all our entities to ensure cohesion and coordination.
Leadership Transition
As part of the next stage of Holo’s evolution, I want to share an important leadership update.
Mary Camacho, who has served as Executive Director of Holo since 2018, will be stepping down from that role, and I will be stepping in. Mary will continue to support Holo during this transition, particularly in guiding financial and strategic planning. We are deeply grateful for her years of leadership, steady guidance, and dedication to Holo’s vision.
At the same time, we also thank Alastair Ong, who has served as a Director of Holo, for his contributions on the board. We wish him the very best in his next endeavors.
These transitions mark a natural shift in leadership that allows Holo to move forward with renewed focus, alongside ongoing collaboration with Unyt and the wider Holochain ecosystem.
Looking Ahead
From the outset, we knew we were undertaking an extraordinary challenge. In conceiving of and developing Holochain, we set out to compete with some of the largest, best-resourced, and most powerful companies in the world. No part of what we have done, or intend to do, has been easy.
In many ways Holochain has always been a future-looking technology that users had difficulty fully appreciating and adopting at scale. Now, the world seems to have caught up to us, and is interested in implementing peer-to-peer networks and processes away from centralized structures.
A Major Opportunity Emerges
When we formed Unyt to build the software infrastructure to permit the creation of and accounting for HoloFuel, we also caught up to the world: the volume of digital payments and transactions last year alone is measurable in the trillions.
We’ve spent a long time working to deliver on our commitments to our community, and there is much still to do.
As challenging as it is not to have crossed the finish line yet, it’s exciting to see it appearing on the horizon. We continue to experiment with how to best expand the potential for Holo hosting. And with Unyt, what we’re proposing to do here - if we are successful - is significantly grow the scale, potential, optionality, and value of every aspect of the Holochain ecosystem.
For those interested, please take the time to watch our most recent livestream, where we talk about this evolution and the opportunities it represents for all of us.
We have a lot to look forward to, and we look forward to continuing to work closely with our most valuable, and reliable, resource: you, the members of the Holochain community.
Last week marked our sixth Demo Day, this one focused on Fighting Third-Party Fraud. Ten vendors stepped up to show how their solutions tackle account takeover (ATO), business email compromise (BEC), and synthetic identity fraud. Each had 15 minutes to prove their case, followed by a live Q&A with an audience of fraud, risk, and security leaders.
Across the sessions, a consistent theme emerged: the biggest shift in the fraud prevention market isn’t in the tactics fraudsters use, but how enterprises are buying solutions. Detection is expected; what matters now is whether a tool can keep the business running without stalling growth or turning away good customers. Buyers want assurance that fraud prevention supports stability by keeping customers moving, revenue intact, and trust unbroken when fraud inevitably spikes.
What is third-party fraud?
For readers outside the space, third-party fraud happens when criminals exploit someone else’s identity to gain access. Unlike first-party fraud, where the individual misrepresents themselves, third-party fraud relies on stolen or fabricated credentials to impersonate a trusted user.
Classic examples include:
Account takeover (ATO): hijacking legitimate accounts, often through phishing or stolen credentials.
Business email compromise (BEC): impersonating executives or vendors to redirect payments.
Synthetic identity fraud: blending real and fake data to create convincing personas.
In 2024, consumers reported losing $12.5 billion to fraud, a 25% jump year-over-year and the highest on record. Account takeover attacks alone rose nearly 67% in the past two years as fraudsters leaned on phishing, social engineering, and increasingly AI-driven methods.
As Miguel Navarro, Head of Applied Emerging Technologies at KeyBank, put it: “Think about deepfakes like carbon monoxide — you may think you can detect it, but honestly, it’s untraceable without tools.” That risk is no longer theoretical; it’s already showing up in contact centers and HR pipelines.
Walking the friction tightrope
Every fraud solution has to walk a tightrope: protect the business without slowing customers down. In this Demo Day, that balance was explored in the Q&A, with audience questions focusing on onboarding delays, false positives, and manual review trade-offs. What happens when onboarding drags? How are false positives handled? Where do manual reviews fit?
Miguel also added: “…a tool might be a thousand times more effective, but if it’s too complex for teams to adopt, it’s effectively useless.”
Providers responded with different approaches. Several leaned on behavioral and device-based analytics to make authentication seamless, layering signals like keystroke patterns and device intelligence so genuine users pass in the background. Others showed risk-based orchestration, combining machine learning models and workflows so only high-risk activity triggers extra checks.
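A toy version of that risk-based orchestration pattern might look like the following; the signals, weights, and thresholds are invented for illustration and are not drawn from any of the vendors on stage.

```python
def orchestrate(session: dict) -> str:
    """Illustrative risk-based orchestration: most genuine users pass silently,
    and only high-risk sessions are asked for extra verification."""
    risk = 0
    if session.get("new_device"):
        risk += 30
    if session.get("keystroke_anomaly"):      # behavioral signal
        risk += 40
    if session.get("ip_reputation") == "bad":
        risk += 50

    if risk >= 70:
        return "block"
    if risk >= 30:
        return "step_up"       # e.g. biometric re-authentication
    return "allow"             # frictionless path for genuine users

print(orchestrate({"new_device": True, "keystroke_anomaly": False,
                   "ip_reputation": "good"}))   # -> "step_up"
```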
Protecting customers from themselves
One theme that stood out was how solutions are evolving to address social engineering. As Mzu Rusi, VP of Product Development at Entersekt, explained: “It’s not enough to protect customers from outsiders — sometimes we have to protect them from themselves when they’re being socially engineered to approve fraud.”
That means fraud platforms are no longer judged only on blocking malicious logins. They’re also expected to intervene in context, analyzing signals like whether the user is on a call while approving a transfer, or whether a new recipient account shows signs of mule activity.
Human touch as a deterrent
Technology was the backbone of every demo, but Proof emphasized how human interaction remains a powerful fraud defense. Lauren Furey, Principal Product Manager, shared how stepping up to a live identity verification can shut down takeover attempts while preserving trust: “The deterrence of getting a fraudster in front of a human with these tools is enormous. Strong proofing doesn’t have to feel heavy, and customers leave reassured rather than abandoned.”
This balance — minimal friction for real customers, targeted intervention for fraudsters — ran through the day.
From fraud loss to balance sheet risk
Fraud was reframed as a balance sheet problem, not just a technology one. As Sunil Madhu, CEO & Founder of Instnt, put it: “Fraud is inevitable. Fraud loss is not. For the first time, businesses can transfer that risk off their balance sheet through fraud loss insurance.”
That comment landed because it spoke to CFO and board-level concerns. Fraud is no longer just an operational hit; it’s a financial exposure that can be shifted, managed, and priced. But shifting fraud into financial terms doesn’t reduce the pressure on prevention teams — it only raises the bar for the technology that keeps fraud within acceptable limits.
How detection is evolving
On stage, several demos highlighted identity and device scoring as the new baseline, layering biometrics, transaction history, and tokenization to judge risk in milliseconds. Others pushed detection even earlier in the journey, using pre-submit screening to catch bad actors before they hit submit.
Machine learning also played a central role in the demos. Several providers showed how adaptive models can cut down false positives while continuously improving through feedback loops. Phil Gordon, Head of Solution Consulting at Callsign, described it as creating a kind of “digital DNA”: “Every customer develops a digital DNA — how they type, swipe, or move through a session. That lets us tell genuine users apart from bots, malware, or account takeover attempts in milliseconds.”
That theme carried into the fight against synthetic identities. Alex Tonello, SVP Global Partnerships at Trustfull, explained how fraudsters engineer personas to slip through traditional checks: “Synthetic fraudsters build identities with new emails, new phone numbers, no history. By checking hundreds of open-source signals at scale, we see right through that façade.”
Others extended the conversation to fraud at the network level. Artem Popov, Solutions Engineer at Sumsub, noted: “Fraudsters reuse documents, devices, and identities across hundreds of attempts. By linking those together, you expose entire networks — not just single bad actors.”
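Linking applications that reuse the same device or document is essentially a graph-clustering problem. The sketch below is a generic union-find illustration of that idea, not Sumsub’s implementation; all identifiers are made up.

```python
from collections import defaultdict

# Toy fraud-ring linking: group applications that share a device or document.
applications = [
    {"id": "app-1", "device": "dev-A", "doc": "passport-X"},
    {"id": "app-2", "device": "dev-A", "doc": "passport-Y"},   # same device as app-1
    {"id": "app-3", "device": "dev-B", "doc": "passport-Y"},   # same document as app-2
    {"id": "app-4", "device": "dev-C", "doc": "passport-Z"},   # unrelated
]

parent = {a["id"]: a["id"] for a in applications}   # union-find forest

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

seen = defaultdict(list)                # attribute value -> application ids
for app in applications:
    for attr in ("device", "doc"):
        for other in seen[app[attr]]:
            union(app["id"], other)     # shared device/doc links the applications
        seen[app[attr]].append(app["id"])

clusters = defaultdict(list)
for app in applications:
    clusters[find(app["id"])].append(app["id"])
print([c for c in clusters.values() if len(c) > 1])   # -> [['app-1', 'app-2', 'app-3']]
```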
The boardroom shift
Fraud used to be a line item in operations, managed quietly by fraud prevention teams and written off as the cost of doing business. That’s no longer the case. The scale of losses, reputational damage, and operational disruption means fraud has moved up the agenda and into the boardroom.
Executives now face a harder challenge: choosing tools that don’t just stop fraud, but that protect business continuity. They want proof that investments in prevention will keep revenue flowing when attacks spike, not just reduce fraud losses on a spreadsheet. Boards are asking whether controls are strong enough to protect customer trust, whether onboarding processes can scale without breaking, and whether the business can keep moving if a wave of account takeovers hits overnight.
They are right to pay attention. Fraud and continuity now rank among the top five enterprise risks. Technology shifts like Apple and Google restricting access to device data are making established defenses less reliable, reframing fraud not only as a security issue but as a continuity problem.
Watch the Recording
Did you miss our Third-Party Fraud Demo Day? You can still catch the full replay of vendor demos and expert insights:
Watch the Third-Party Fraud Demo Day recording here
The post Third-Party Fraud: The Hidden Threat to Business Continuity appeared first on Liminal.co.
The call comes in at 4:55 PM on a Friday. It’s the CFO, and she’s frantic. She’s locked out of her account, needs to approve payroll, and her flight is boarding in ten minutes. She can’t remember the name of her first pet, and the code sent to her phone isn’t working. The pressure is immense. What does your help desk agent do? Do they bypass security to help the executive, or do they hold the line, potentially disrupting a critical business function?
This isn’t a hypothetical scenario; it's a daily, high-stakes gamble for support teams everywhere. And it’s a gamble that attackers are counting on. They know your help desk is staffed by humans who are measured on their ability to resolve problems quickly. They exploit this pressure, turning your most helpful employees into unwitting accomplices in major security breaches. It's time to stop gambling.
Why Is Your Help Desk a Prime Target for Social Engineering?
The modern IT help desk is the enterprise's nerve center. It’s also its most vulnerable entry point. According to industry research, over 40% of all help desk tickets are for password resets and account lockouts (Gartner), each costing up to $70 to resolve (Forrester). This makes the help desk an incredibly attractive and cost-effective target for attackers.
Why? Because social engineers don't hack systems; they hack people. They thrive in environments where security relies on outdated, easily compromised data points:
Knowledge-Based Questions (KBA): The name of your first pet or the street you grew up on isn't a secret. It's public information, easily found on social media or purchased for pennies on the dark web.
SMS & Email OTPs: Once considered secure, one-time passcodes are now routinely intercepted via SIM swapping attacks and sophisticated phishing campaigns.
Employee ID Numbers & Manager Names: This information is often exposed in data breaches and is useless for proving real-time identity.
Relying on this phishable data forces your agents to become human lie detectors, a role they were never trained for and a battle they are destined to lose. The result is a massive, unmitigated risk of help desk-driven account takeover.
Shifting from Guesswork to Certainty with HYPR's Help Desk App
Today, we're fundamentally changing this dynamic. To secure the help desk, you must move beyond verifying what someone knows and instead verify who someone is. That's why we're proud to introduce the HYPR Affirm Help Desk Application.
This purpose-built application empowers agents by integrating phishing-resistant, multi-factor identity verification directly into their workflow. Instead of asking agents to make high-pressure judgment calls, we give them the tools to verify identity with NIST IAL 2 assurance, fast. This transforms your help desk from a primary target into a powerful line of defense against fraud.
How Can You Unify Identity Verification for Every Help Desk Scenario?
The core of the solution is the HYPR Affirm Help Desk App, a command center for agents that integrates seamlessly with your existing support portals (like ServiceNow or Zendesk) and ticketing systems. This provides multiple, flexible paths to resolution, ensuring security and speed no matter how an interaction begins.
Initiate Verification from Anywhere:
Self-Service: Empower users to resolve their own issues by launching a secure verification flow directly from your company's support portal.
Agent-Assisted: For live calls or chats, an agent can use the HYPR Help Desk App to instantly send a secure, one-time verification link via email or SMS.
User-Initiated (with PIN): A user can start the verification process on their own and receive a unique PIN. They provide this PIN to a support agent, who uses it to look up the verified session, ensuring a fast and secure handoff without sharing any PII (see the sketch after this list).
Verify with Biometric Certainty:
The gap between traditional methods and modern identity assurance is staggering. One relies on luck, the other on proof.
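To illustrate the user-initiated PIN handoff described above, here is a hypothetical sketch of the pattern: the user completes verification and receives a short-lived, single-use PIN, and the agent exchanges that PIN for a verification status without any PII crossing the call. This is not HYPR’s API; every name and parameter here is an assumption.

```python
import secrets, time

# Hypothetical in-memory session store; a real deployment would use the
# vendor's API and a durable, access-controlled backend.
SESSIONS: dict[str, dict] = {}
PIN_TTL_SECONDS = 300   # short-lived by design

def start_user_verification() -> str:
    """User side: begin verification and receive a one-time PIN to read to the agent."""
    pin = f"{secrets.randbelow(10**6):06d}"           # 6 digits is enough with a short TTL
    SESSIONS[pin] = {"created": time.time(), "verified": True}  # assume ID + selfie checks passed
    return pin

def agent_lookup(pin: str) -> str:
    """Agent side: exchange the PIN for a verification status; no PII changes hands."""
    session = SESSIONS.pop(pin, None)                 # single use
    if session is None or time.time() - session["created"] > PIN_TTL_SECONDS:
        return "unknown_or_expired"
    return "verified" if session["verified"] else "failed"

pin = start_user_verification()
print(agent_lookup(pin))    # -> "verified"
print(agent_lookup(pin))    # -> "unknown_or_expired" (PIN is single use)
```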
End the Gamble: Stop Account Takeover at the Help Desk
Your organization can't afford to keep rolling the dice. Every interaction at your help desk is a potential entry point for a catastrophic breach. The pressure on your agents is immense, the methods they've been given are broken, and the attackers are relentless.
But there is a different path. A path where certainty replaces guesswork. Where your support team is empowered, not exposed. Where your help desk transforms from a cost center and a risk vector into a secure, efficient enabler of the business. By removing the impossible burden of being human lie detectors, you free your agents to do what they do best: help people. Securely.
Ready to secure your biggest point of contact? Schedule your personalized HYPR Affirm demo today.
Frequently Asked Questions about HYPR Affirm’s Help Desk App (FAQ)
Q. What is NIST IAL 2 and why is it important for help desk verification?
A: NIST Identity Assurance Level 2 (IAL 2) is a standard from the U.S. National Institute of Standards and Technology. It requires high-confidence identity proofing, including the verification of a government-issued photo ID. For help desk scenarios, meeting this standard ensures you are protected against sophisticated attacks, including deepfakes and social engineering, and is crucial for preventing fraud.
Q. How long does the verification process actually take for the user?
A: The entire user-facing process, from receiving the link to scanning an ID and taking a selfie, is designed for speed and simplicity. A typical full verification is completed in under 2 minutes, and the process is completely configurable.
Q. What happens if a user doesn't have their physical ID available?
A: HYPR Affirm's policy engine is fully configurable. While ID-based verification is the most secure method, organizations can define alternative escalation paths and workflows to securely handle exceptions based on their specific risk tolerance and needs.
Q. Is this solution just for large enterprises?
A: HYPR Affirm for Help Desk is for any organization that needs to eliminate the significant risk of account takeover fraud originating from support interactions. It scales from mid-sized companies to the world's largest enterprises, securing sensitive tasks like password resets, MFA recovery, and access escalations.
Welcome back to our ongoing reflections on the Many-to-Many project. In our last three posts, we’ve taken you through the journey of building our digital platform — from initial concepts and wrestling with complexity to creating our first tangible outputs like the Field Guide and Website. We’ve shared how the project’s tools have emerged from a living, iterative process.
Today, we’re taking a step back to look at the foundational methodology behind this entire initiative. How do you go about creating new models for collaboration when no blueprint exists? Our approach has been a “proof of possibility” — a live experiment where we, along with our ecosystem of partners, served as the primary test subjects.
In this post, the initiative’s co-stewards, Michelle and Annette, discuss the profound challenges and unique learnings that come from trying to build the plane while flying it.
How the Proof of Possibility fits within a wider context of predecessor work, and flows into other initiatives and partial testing in live contexts
Michelle: We wanted to reflect on the “proof of possibility” we ran, where we essentially decided to live prototype on ourselves with a small group of partners in a Learning Network. While it sounds simple, we learned it’s incredibly complex. You’re making decisions and sense-making within a specific prototype, but you’re also constantly trying to translate those learnings into something more generalised and applicable for others. In many ways, it’s a cool, experimental way of working, but it was also a bit of a nightmare.
The prototype, test, learn loop that we started to develop in the Proof of Possibility
Annette: It was very meta. In this proof of possibility, one of the things we were testing was a learning infrastructure for the ecosystem itself. So you’re testing learning within the experiment, while also prototyping the experiment, and then you have to step back and ask: what did we learn from this specific context versus what is context-agnostic and applicable elsewhere? Then there’s another layer: what did we learn about the wider external landscape and its readiness for this work? And finally, what did we learn about the process of learning about all of that? There’s this feeling of learning about learning about learning.
It’s representative of the fractal nature of this work. For instance, we were a core team working on our own governance while simultaneously orchestrating and supporting the ecosystem’s governance. The ecosystem itself was then focused on building capabilities of the system for many-to-many governance. It was navigating so many layers. On one hand, this has immense value because you’re looking at one question from multiple angles at once. On the other hand, it has been incredibly cognitively challenging.
Michelle: It’s that old adage of trying to build the plane whilst flying it — except there are no blueprints for the plane. I think the complexity we bumped into is probably present for anyone trying to do this kind of work, because everyone has to work at fractals all the time. So I was thinking, what are some things we bumped into, and how did we overcome them? The first breakthrough that comes to my mind was when we started to explicitly ask, “Are we talking about this specific prototype right now, or are we talking about the generalised model?” Just having that clear distinction, a shared vocabulary that the whole learning network could use, was a huge moment of alignment for us. It gave people a way to see we were working on at least two layers at the same time.
The draft “Layers of the Project”, which was created during the project as a visual representation and description of the different spaces we were trying to hold and build all at once. We note that the thinking has evolved and this image has been superseded, but share it here as a point-in-time image.
Annette: Yes, and we found that the difference in thinking required for each of those layers was huge. Thinking through the specifics of what we did in one context versus pulling out principles applicable across all/any contexts was such a massive gear shift. Turning a specific example — “here’s something we tried” — into a generalised tool — “here’s something useful for others” — was probably a five-fold increase in workload, if not more. The amount of planning and thinking required was significantly different.
Michelle: What else comes up for you from this experience of prototyping on ourselves?
If nothing comes to mind, I can jump in. For me it was the dynamic of being the initiators. We were the ones who convened the group and set the mission. In these complex collaborations, the initiator tends to hold a lot of relational capital, power, and responsibility. This was exacerbated because we were managing all these different layers of learning. It centralised the knowledge and the relational dynamics back to us. If one of us was missing from a budget conversation, for example, it was difficult for others to proceed. For me, the bigger point is that to do good demonstration work, it has to be experimental and emergent. But that doesn’t come for free; it has downsides. This re-centralisation was one of them, and it was a lot for us to hold.
Annette: That makes me wonder if a certain degree of that centralisation is inevitable in organising for these kind of ‘proof of possibilities’. When something is this complex and emergent, you can only distribute so much, so early. To meet the real-time needs of the collaboration, you need an agile core team. This is where it gets interesting — we were operating in the thin space between a sandbox environment and a live context. It had to be a genuine live context for people to want to participate, but it was also a sandbox for testing the general model. You have to meet the timelines of the live context; you can’t just pause for six months to work out team dynamics, or the collaboration collapses. So you almost need a team providing strong leadership to hold both realities at once.
Michelle: So, would you do it the same way again?
Annette: I think if we did it again, the things we’ve learned would make it smoother. We’d be more explicit from the start about which layer we’re discussing. We’d have a better sense of how to capture live learning and translate it into a model as we go. When we started, most of our attention was on hosting the live context, and a lot of the synthesis happened afterwards. Having done it once, I’d be more conscious of doing that synthesis in real-time — though the cognitive lift to switch between those modes is still immense.
Michelle: I agree, I would do it again with those additions. The other thing is that when we started, we didn’t even really have the process that we wanted to go through. Now we do. We’ve learned more about what works. Starting fresh, we would have a decent sketch of a process to begin with. Not perfect, and you still have to wing it, but it’s a good start. I’d be interested to do it again and see what happens.
This meta-reflective process — learning about learning while doing — has been a central part of the Many-to-Many initiative creating a ‘Proof of Possibility’ as a way to learn about what’s possible at a system level. While navigating these fractal layers is cognitively demanding, it’s what allows for true emergence, distinguishing this deep, systemic work from simple chaos. It is a messy, challenging, and ultimately fruitful way to discover what’s possible.
In the Many-to-Many website [coming soon] you will find some resources based on what we did in the Proof of Possibility (Experimenter’s Logs and example methods and artefacts like the Contract) and some based on what might be applicable across contexts (a Field Guide, some tools and an overview of System Blockers we’ve encountered) along with case studies and top tips from other contexts in the learning network.
Thanks for following our journey. You can find our previous posts [here], [here] and [here] and stay updated by joining the Beyond the Rules newsletter [here].
Visual concept by Arianna Smaron & Anahat Kaur.
Many-to-Many: The Messy, Meta-Process of Prototyping on Ourselves was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.
Bluesky develops open protocols, and we want everybody to feel confident building on them. We have released our software SDKs and reference implementations under Open Source licenses, but those licenses don’t cover everything. To provide additional assurance around patent rights, we are making a non-aggression pledge.
This commitment builds on our recent announcement that we’re taking parts of AT to the IETF in an effort to establish long-term governance for the protocol.
Specifically, we are adopting the short and simple Protocol Labs Patent Non-Aggression Pledge:
Bluesky Social will not enforce any of the patents on any software invention Bluesky Social owns now or in the future, except against a party who files, threatens, or voluntarily participates in a claim for patent infringement against (i) Bluesky Social or (ii) any third party based on that party's use or distribution of technologies created by Bluesky Social.
This pledge is intended to be a legally binding statement. However, we may still enter into license agreements under individually negotiated terms for those who wish to use Bluesky Social technology but cannot or do not wish to rely on this pledge alone.
We are grateful to Protocol Labs for the research and legal review they undertook when developing this pledge text, as part of their permissive intellectual property strategy.
The post Mythics' Strategic Acquisitions Amplify Cloud-Powered, AI-Driven Transformation at Oracle AI World appeared first on Mythics.