Last Update 1:22 PM November 11, 2025 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Tuesday, 11. November 2025

Spherical Cow Consulting

The Paradox of Protection

Last month’s AWS outage did more than interrupt chats and scramble payment systems. It reignited a political argument that has been simmering for years: whether cloud platforms have become too essential to be left in private hands. In the U.K., calls for digital sovereignty resurfaced almost immediately.

“Last month’s AWS outage did more than interrupt chats and scramble payment systems. It reignited a political argument that has been simmering for years: whether cloud platforms have become too essential to be left in private hands.”

In the U.K., calls for digital sovereignty resurfaced almost immediately. Across Europe, people again questioned their dependence on U.S. providers. Even for companies that weren’t directly affected, the incident felt uncomfortably close.

In The Infrastructure We Forgot We Built, I pointed out that private infrastructure now performs public functions. The question isn’t whether these systems are critical—demonstrably, they are—it’s what happens when everything is critical. Governments continue to expand their definitions of “critical infrastructure,” extending the term to encompass finance, cloud, data, and communications. Each new addition feels justified, but the result is an ever-growing list that no one can fully protect.

Declaring something “critical” once meant ensuring its safety. Now it often means claiming jurisdiction. It creates an uncomfortable paradox: the more we classify, the more we appear to protect, and the less effective we become at coordinating a response when the next outage arrives.

Let’s poke at some interesting ramifications of classifying a service as critical.

A Digital Identity Digest: The Paradox of Protection (podcast episode, 14:09).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The American model: expanding scope, dispersing responsibility

Nowhere is this inflation more visible than in the United States, where “critical infrastructure” has evolved from a short list of sixteen sectors, including energy, water, transportation, and communications, to a sprawling catalog of national functions. The Cybersecurity and Infrastructure Security Agency (CISA) calls them National Critical Functions: over fifty interconnected capabilities that “enable the nation to function.” It’s an attempt to capture the web of dependencies that tie one system to another, but the list is so long that prioritization becomes impossible.

At the same time, National Security Memorandum 22 (NSM-22) shifted much of the responsibility for protecting those functions away from federal oversight. Under NSM-22, agencies and private operators were expected to manage their own resilience planning. In theory, decentralization builds flexibility; in practice, it creates a policy map with thousands of overlapping boundaries. The government defines criticality broadly, but control over what that means in practice is increasingly diffuse.

As of 2025, the current U.S. administration is reviewing NSM-22 and several other cybersecurity and infrastructure policies in an effort to clarify lines of responsibility and modernize federal strategy. According to Inside Government Contracts, this review could lead to significant revisions in how critical infrastructure is defined and governed, though the direction remains uncertain.

What’s unlikely to change is the underlying trend: expansion without coordination. The more functions labeled critical, the thinner the resources spread to defend them. If everyone is responsible, no one really is.

The European model: bureaucracy as resilience

Europe has taken almost the opposite approach. Where the U.S. delegates, the European Union codifies. The NIS2 Directive and the Critical Entities Resilience (CER) Directive bring a remarkable range of organizations, such as cloud providers, postal services, and wastewater plants, under the umbrella of “essential” or “important” entities. Each must demonstrate compliance with a thick stack of risk-management, incident-reporting, and supply-chain-security obligations.

It’s tempting to see this as overreach, but there’s a strange effectiveness to it. A friend recently observed that bureaucracy can be a form of resilience: it forces repeatable, auditable behavior, even when it slows everything down. Under NIS2, an outage may still occur, but the process for recovery is at least predictable. Europe’s system may be cumbersome, but it institutionalizes the habit of preparedness.

If the U.S. model risks diffusion, the European one risks inertia. Both confuse activity with assurance. To put it another way, expanding oversight doesn’t guarantee protection; it guarantees paperwork. Protection might just be a happy accident.

Interdependence cuts both ways

Underlying both approaches is the same dilemma: interdependence magnifies both stability and fragility. The OECD warns about “systemic risk” in its 2025 Government at a Glance report. Similarly, the WEF describes this characteristic as “interconnected risk” in its Global Risks Report 2024. In both cases, they are talking about how a disturbance in one sector can ripple instantly into others, turning what should be a local failure into a global one.

But interdependence also enables the efficiencies that modern economies depend on. The same cloud architectures that expose organizations to shared risk also deliver shared recovery. If an AWS region goes down, another can often pick up the load within minutes. That doesn’t make the system invulnerable; it makes it tightly coupled, which is both a feature and a flaw.
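To make the failover mechanics concrete, here is a minimal sketch of active-passive regional failover at the DNS layer, using Route 53 failover record sets through boto3. The hosted zone, domain, IP addresses, and health-check ID are hypothetical placeholders; real deployments also need cross-region data replication and application-level retries on top of this.

```python
# Minimal sketch of DNS-level regional failover. The zone ID, domain,
# addresses, and health check below are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

def create_failover_pair(zone_id: str, domain: str,
                         primary_ip: str, secondary_ip: str,
                         health_check_id: str) -> None:
    """Route traffic to the primary region while it passes health checks;
    Route 53 shifts queries to the secondary when the check fails."""
    changes = []
    for role, ip in (("PRIMARY", primary_ip), ("SECONDARY", secondary_ip)):
        record = {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": f"{domain}-{role.lower()}",
            "Failover": role,
            "TTL": 60,  # short TTL so clients re-resolve quickly after failover
            "ResourceRecords": [{"Value": ip}],
        }
        if role == "PRIMARY":
            record["HealthCheckId"] = health_check_id
        changes.append({"Action": "UPSERT", "ResourceRecordSet": record})

    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": changes},
    )

# Hypothetical values, for illustration only:
# create_failover_pair("Z123EXAMPLE", "api.example.com",
#                      "203.0.113.10", "198.51.100.20", "hc-abc123")
```

The design choice this illustrates is exactly the tight coupling described above: recovery is automatic and fast, but every consumer of the domain now depends on the same shared control plane.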

That is the paradox of microservice design: locally resilient, globally fragile. The further we distribute responsibility, the more brittle the whole becomes. Managing that trade-off is less about eliminating interdependence than about deciding which dependencies are worth keeping.

Coordination in a fragmented world

The Carnegie Endowment’s recent report on global cybersecurity clearly frames the problem: the challenge is no longer whether to protect critical systems, but how to coordinate that protection across borders. The Internet made infrastructure transnational; regulation still stops at the border.

That tension was at the center of my earlier series, The End of the Global Internet. Fragmentation, through data-localization mandates, competing technical standards, and geopolitical distrust, is shrinking the space for cooperation. The systems that most need collective protection are emerging at the moment when collective action is least feasible.

That was made more than clear during the October 2025 AWS outage.

In the U.K., it reignited arguments about tech sovereignty, with commentators and MPs warning that reliance on U.S. providers left the country strategically exposed. In Brussels, the outage reinforced calls to accelerate the European Cloud Federation and “limit reliance on American hyperscalers.”

Tech.eu put it bluntly: “A global AWS outage exposes fragile digital foundations.” They are not wrong.

A technical event at this scale offers impressive political ammunition. The debate becomes about more than just uptime. It’s also about who controls the tools a society can’t seem to function without.

Labeling platforms as critical infrastructure amplifies that instinct. Once something is “critical,” every government wants jurisdiction. Every region seeks its own version. The intent is to strengthen sovereignty, but the result is a more fragmented Internet. Protection turns into partition.

Openness vs. control: lessons from digital public infrastructure

This tension between openness and control shows up again in global discussions around Digital Public Infrastructure (DPI). A recent G20 policy brief argues that while DPI and Critical Information Infrastructure (CII) both serve public purposes, they arise from opposite design instincts. DPI emphasizes inclusion, interoperability, and openness; CII emphasizes security, restriction, and control.

Some systems are designated critical only after they become indispensable. India’s Aadhaar identity platform is a great example. The Central Identities Data Repository (CIDR) was declared a Protected System under the country’s CII rules in 2015—five years after Aadhaar’s rollout—adding compliance obligations to what began as open, widely used public infrastructure. Those regulations were and are necessary, but it’s reasonable to ask whether a system managing such sensitive data should ever have operated without that protection in the first place.

The challenge isn’t simply timing. Too early can stifle innovation; too late can amplify harm. The real question is how societies decide when openness must yield to oversight, and whether that transition preserves the trust that made the system valuable in the first place.

The politics of protection

Critical infrastructure has always been political. As the Brookings Institution observed more than a decade ago, infrastructure investment—and, by extension, classification—has always reflected political will as much as technical necessity. The same logic applies online. Designating something “critical” can attract funding, exemptions, or strategic leverage. In a digital economy where perception drives policy, criticality itself becomes a form of currency.

The temptation to leverage the classification of “critical” is understandable: declaring something critical signals seriousness. But it also invites lobbying, nationalization, and regulatory capture. In the analog era, the line between public good and private gain was already blurry; the digital era simply made it blur faster and more broadly.

Criticality has become a negotiation, and as with all negotiations, outcomes depend less on evidence than on who has the microphone.

The discipline of selective resilience

If the first post in this series leaned toward recognizing new kinds of critical infrastructure, this one argues for restraint in doing so. Declaring everything critical doesn’t make societies safer; it makes prioritization impossible. Resilience requires hierarchy, specifically knowing what must endure, what can fail safely, and how systems recover in between.

That’s an uncomfortable truth for both policymakers and providers. (I would say I’m glad I don’t have that job, but I kind of do as a voting member of society.) Safety sounds equitable; prioritization sounds elitist. But in practice, resilience demands choice. It asks us to acknowledge that some dependencies matter more than others, and to build systems that tolerate loss rather than pretending loss is preventable.

The more we classify, the more we appear to protect, and the less effective we become at coordinating when the next outage arrives. The task ahead isn’t expanding the list. It’s learning to live with a smaller one.

If you’d rather have a notification when a new blog is published rather than hoping to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:30]
Welcome back. Last month’s AWS outage did more than just interrupt chats and scramble payment systems — it ignited a long-simmering argument about whether cloud platforms have become too essential to be left entirely in private hands.

In the UK, calls for digital sovereignty resurfaced almost immediately. Across Europe, governments and enterprises once again questioned their dependence on U.S. providers. And even for organizations that weren’t directly affected, the outage felt uncomfortably close. The internet wobbled — and everybody noticed.

Defining What’s “Critical”

In my post last week, The Infrastructure We Forgot We Built, I argued that private infrastructure now performs public functions.
That’s the heart of the question here — not whether these systems are critical infrastructure (they are), but what happens when everything becomes critical?

When every failure becomes a matter of national concern, the language of protection starts collapsing under its own weight.

So, what do we actually mean when we say critical infrastructure? The phrase sounds straightforward, but it isn’t. Every jurisdiction defines it differently. Broadly speaking, critical infrastructure refers to assets, systems, and services essential for society and the economy — things whose disruption would cause harm to public safety, economic stability, or national security.

That definition works for power grids and water systems, but it gets complicated when we start talking about DNS, payments, or authentication services — the digital glue holding everything together.

Today, critical is no longer just about physical survival. It’s about functional continuity and keeping society running.

When Everything Is Critical, Nothing Is

Each country’s list of what’s critical keeps getting longer — and fuzzier. Declaring something critical once meant ensuring its safety. Now, it feels more like staking a claim to control.

That’s the paradox. The more we classify, the more we appear to protect — but the less effective we become when the next outage hits.

This tension is especially visible in the United States. Critical infrastructure once referred to 16 sectors — energy, water, transportation, communications — things you could point to in the real world.

Today, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recognizes more than 50 “national critical functions.” These include both government and private-sector operations so vital that their disruption could debilitate the nation.

It’s a noble definition — but a recipe for paralysis. Because if everything is critical, then nothing truly is.

Expansion Without Coherence

National Security Memorandum 22 (NSM-22) was intended to modernize how those functions are managed. In theory, it decentralizes responsibility, allowing agencies and private operators to tailor protections to their own risk environments.

In practice, it’s become a policy map full of overlapping boundaries — blurry accountability, scattered resources, and fragmented oversight.

It’s a patchwork: agencies, regulators, and corporate partners each hold a piece of the responsibility, but no one has the full picture.

While the U.S. administration is reviewing these policies, the underlying trend remains: we keep expanding the definition of “critical” without improving coordination.

The result?

Expansion without coherence.
Protection without prioritization.
A system too diffused to defend.

It’s the digital version of the bystander effect: if everyone is responsible, no one truly is.

Bureaucracy as Resilience

Let’s shift to the European model, which takes almost the opposite approach. Where the U.S. delegates, the EU codifies — through the NIS 2 Directive and the Critical Entities Resilience Directive.

These cover a wide range of organizations — from cloud providers to waste-water plants — all classified as “essential” or “important.” Each must prove compliance with risk management, incident reporting, and supply-chain security requirements.

It’s easy to dismiss that as bureaucratic overreach — and in part, it is.
But it’s also effective in its own way. Bureaucracy, for all its flaws, enforces repeatable, auditable behavior even as it slows things down.

Under NIS 2, an outage may still occur, but the recovery process is predictable. You may not like the paperwork, but you’ll have it — and sometimes, that’s half the battle.

Still, the EU’s model has limits. If the U.S. risks diffusion, the EU risks inertia. Both can be mistaken for resilience, but neither guarantees protection. What bureaucracy guarantees is documentation, not defense.

Interdependence and Fragility

Both systems face the same dilemma: interdependence.
It magnifies both stability and fragility. A local failure can ripple across sectors and become a global event — yet shared infrastructure also provides recovery pathways.

When an AWS region fails, another often takes over. That’s designed resilience, but it isn’t limitless. As we’ve seen, microservice architecture provides local stability but global fragility. The more distributed a system becomes, the harder it is to understand its failure points.

When everything depends on everything else, “critical infrastructure” starts to lose meaning.

The goal isn’t to eliminate dependencies — that’s impossible — but to decide which ones we can live with.

The Coordination Gap

Coordination, or the lack of it, is the real challenge.
A recent Carnegie Endowment report put it plainly: the issue isn’t whether to protect critical systems, but how to coordinate that protection across borders.

The internet made infrastructure transnational.
Regulation, however, still stops at the border. The wider that gap grows, the more fragile the entire system becomes.

We’re trying to protect a global network at a time when global cooperation is at a low point.

During the October AWS outage, responses were swift — and revealing:

In the UK, debates about tech sovereignty resurfaced. In Brussels, attention turned to reducing dependence on U.S. hyperscalers. Across tech journalism, the consensus was clear: a global AWS outage exposes fragile digital foundations.

And they’re right. But this technical failure has become political ammunition. The debate has shifted from uptime to control — who controls the tools we can’t function without?

From Protection to Fragmentation

Once something is labeled critical, every government wants jurisdiction.
Every region wants its own version. The intent is protection; the result is fragmentation.

This same tension shows up in debates about Digital Public Infrastructure (DPI) versus Critical Information Infrastructure (CII).

DPI emphasizes inclusion, interoperability, and openness. CII emphasizes security, restriction, and control.

Both serve public goals — they just stem from different design instincts.

For example, India’s Aadhaar identity system began as an open platform for inclusion. Five years later, it was reclassified as protected critical infrastructure. That shift was probably necessary, but it raises an uncomfortable question:

Should systems managing that level of personal data ever have operated without such protections?

Move too early, and you stifle innovation.
Move too late, and you amplify harm.

Timing, Trust, and Trade-Offs

The challenge is timing — and trust.
How do we decide when openness must yield to oversight, and how do we maintain public confidence when that shift happens?

Declaring something critical is never neutral. It’s a political act.
In the digital economy, criticality itself becomes a kind of currency — attracting investment, lobbying, and influence.

If a nation declares a platform critical, is it for resilience or for leverage?
Realistically, it’s both.

Selective Resilience

If The Infrastructure We Forgot We Built was about recognizing new kinds of critical systems, this reflection argues for restraint.

Declaring everything critical doesn’t make us safer — it makes prioritization impossible.
Resilience requires hierarchy: knowing what must endure, what can fail safely, and how recovery happens in between.

That’s uncomfortable for policymakers. Safety sounds equitable; prioritization sounds elitist.
But resilience demands choice. It asks us to build systems that tolerate failure rather than pretending it won’t happen.

The more we classify, the more we appear to protect — and the less control we have when it matters most.

Maybe the real task isn’t expanding the list of critical infrastructure, but learning to live with a smaller one.
Because protection is ultimately about trade-offs:

Between autonomy and interdependence.
Between openness and control.
Between trust and necessity.

The harder we try to protect everything, the more fragile we make the whole.

[00:13:33]
That’s it for this week’s episode of The Digital Identity Digest.

[00:13:38]
If this helped make things clearer — or at least more interesting — share it with a friend or colleague.
Connect with me on LinkedIn @hlflanagan, and if you enjoyed the show, please subscribe and leave a rating on your favorite podcast platform.

You can also read the full post at sphericalcowconsulting.com.
Stay curious, stay engaged, and let’s keep these conversations going.



IDnow

The true face of fraud #2: The industrialization of fraud – How crime syndicates run $1 trillion scam empires.

The world’s most dangerous criminal organizations don’t look like what you’d expect – they resemble Fortune 500 companies. They are sophisticated, disciplined and scaled to the point of industrialization. In this part of our fraud series, we examine the inner workings of the world’s most pervasive crime: social engineering fraud. We go inside the compounds and their corporate-style departments to reveal the organized machinery that makes them so hard to dismantle. 

Romance scams. Spear phishing. Authorized Push Payment fraud (APP fraud). These social engineering attacks are no longer marginal threats. For banks and financial institutions, they represent one of the fastest-growing forms of fraud – costing billions each year and eroding customer trust and institutional reputation.

In the first article of our fraud series, we revealed who is behind this global enterprise worth over $1 trillion and looked inside their vast complexes around the world, housing hundreds to thousands of trafficked workers. Now, we turn our focus to how scam compounds operate: how they replicate corporate structure, scale with technology, deploy Fraud-as-a-Service (FaaS), and drive threats that risk not just money, but reputation and trust.

Fraud Inc.: Departments like real companies 

Step inside a scam compound and what you’ll find looks less like a criminal hideout and more like a corporate headquarters. Inside, these operations function as fully fledged business ecosystems.  

It all begins with procurement, the recruitment process that fuels the enterprise. Recruiters post fake job ads on social media and employment platforms, offering high salaries and promising conditions. Many who apply are students, retirees, or people in vulnerable economic situations. Few realize they’re being drawn into a human trafficking network. Once they arrive at what they believe is their new workplace, they find themselves trapped within guarded compounds and forced into labour – trained and deployed to defraud victims around the world. 

From there, new arrivals enter structured training academies that mirror legitimate corporate onboarding. They are given scripts, coached on tone and persuasion, and taught to impersonate trusted individuals or institutions. They learn how to overcome objections, create urgency, and craft convincing messages and emails – all the hallmarks of professional sales training, repurposed for deception. 

Once trained, recruits join the call centres, the heart of the operation. Floor after floor of desks are filled with “sales teams” executing scams around the clock. Performance is tracked obsessively: conversion rates, value per victim, number of successful interactions, and response times to leads. High performers are rewarded. Those who fall behind face severe punishment.

Underpinning it all are the operations and IT teams, ensuring the smooth running of the criminal enterprise. Infrastructure is maintained, systems monitored, and data managed. Meanwhile, payroll and accounting functions handle the proceeds, laundering the fraudulently obtained funds and reinvesting them to expand and sustain the operation further. 

But perhaps most sophisticated is the R&D unit: its sole purpose is to stay one step ahead of banks’ fraud prevention measures. These teams constantly evolve and fine-tune new attack methods to bypass the latest defenses. They test social engineering workflows, refine bypasses for two-factor authentication and explore how to exploit gaps in identity verification. Increasingly, they use AI tools to deepen deception with deepfake voice impersonations, synthetic IDs or AI-generated phishing platforms. 

On paper, you would not be able to distinguish the internal structure from that of a legitimate company. 

Scaling fraud with AI & FaaS 

No single compound has to reinvent the wheel – and increasingly, these large criminal enterprises are even franchising out their operations. Through Fraud-as-a-Service (FaaS) models, they sell or lease “pluggable fraud kits” on the dark web. These kits contain identity-spoofing services, exploit packages, and script libraries, all available with a few clicks, making it easy for individuals with no prior technical or scam background to deploy sophisticated APP scams or impersonation attacks. It’s a franchise model for cybercrime.

Using software and AI to streamline scams 

Scammers must hit the high call-volume KPIs required of them every day, and to do so they rely on Voice-over-IP (VoIP) services. VoIP allows them to make international calls cheaply via the internet while spoofing caller IDs with UK or EU country codes to appear more credible. These tools also provide a steady supply of fresh phone numbers when agents’ numbers get blacklisted as spam.

Scammers also use software stacks that mirror legitimate corporate tools. CRM-style dashboards track leads and capture victim information like investment experience, call history and personal details. Stolen identity databases enable highly personalised attacks, and increasingly, AI chatbots automate message personalisation and generate deepfakes. Tools like ChatGPT are actively deployed inside compounds to craft convincing investor narratives and sustain prolonged, trust-building conversations with victims. 

Why banks must look beyond the transaction 

Fraud losses are exploding. In 2024, consumers in the U.S. lost over $12.5 billion to scams, with investment and imposter scams alone making up most of it. In Norway, losses from social engineering rose 21% between 2021 and 2022, reaching NOK 290.3 million ($25-30 million USD) as more users were manipulated into authorizing payments. European banks have noted the same trend: digital payment fraud rose by 43% in 2024 compared to 2023, with social engineering tactics increasing by 156% and phishing by 77%.

These operations hurt banks in far more ways than immediate financial loss. Each successful scam erodes trust – from customers, regulators and the public. When customers believe their bank can’t protect them, they may flee to competitors or lose faith. Regulatory scrutiny and fines also increase, especially as social engineering becomes the fraud vector regulators are watching most closely. 

The human toll and what can be done 

Fraud is clearly shifting from purely technical compromise to the manipulation of human trust, and not only the trust of those deceived into sending money. Many scammers are themselves recruited under false pretenses, trafficked, or working under duress – a grim reality upon which these industrial fraud machines are built.

Tools to fight (social engineering) fraud 

Social engineering scams are among the most challenging threats banks face today. Unlike traditional fraud such as forged documents, these scams manipulate genuine customers into authorizing payments or sharing sensitive information – often without realizing they’re being deceived. This is especially true of APP fraud, where the victim is tricked into sending the money themselves. Because the transaction appears legitimate and is initiated by the account holder, detecting these scams demands a new level of vigilance and smarter technology.

To combat this, banks need tools that go beyond standard identity checks. Solutions must be able to spot subtle signs of coercion and manipulation in real time. Video-based verification solutions are purpose-built for this and are the only verification method capable of detecting social engineering attempts through dynamic, human-led interactions. Social-engineering-style questioning can reveal behavioral inconsistencies or signs of distress that indicate a customer is being manipulated by a scammer.

With social engineering, the focus shifts from verifying identity to understanding intent. That’s where platforms like the IDnow Trust Platform come in. By analyzing behavioral signals such as erratic transaction histories, geographical inconsistencies, and device switching, it flags suspicious patterns and enables real-time risk assessment throughout the entire customer lifecycle, not just at onboarding.
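IDnow does not publish the internals of its scoring, but the general pattern, combining weak behavioral signals into a session-level risk score that triggers step-up verification, can be sketched in a few lines. Everything below (signal names, weights, and threshold) is a hypothetical illustration, not the Trust Platform’s actual model.

```python
# Hypothetical sketch of combining behavioral signals into a session risk
# score. Signal names, weights, and the threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    new_device: bool             # login from a device not seen before
    geo_mismatch_km: float       # distance from the account's usual locations
    txn_amount_zscore: float     # how unusual this amount is for the account
    rapid_beneficiary_add: bool  # new payee added minutes before a transfer

def risk_score(s: SessionSignals) -> float:
    """Return a 0-1 score; higher means more likely social engineering."""
    score = 0.0
    if s.new_device:
        score += 0.25
    if s.geo_mismatch_km > 500:
        score += 0.20
    if s.txn_amount_zscore > 3.0:  # amount ~3 std devs above normal
        score += 0.30
    if s.rapid_beneficiary_add:
        score += 0.25
    return min(score, 1.0)

signals = SessionSignals(new_device=True, geo_mismatch_km=1200.0,
                         txn_amount_zscore=4.1, rapid_beneficiary_add=True)
if risk_score(signals) >= 0.7:  # illustrative step-up threshold
    print("Step-up: route to video verification before releasing payment")
```

The point of the sketch is the lifecycle aspect: the score is computed per session and per transaction, not once at onboarding, so a long-standing customer can still be routed to video verification when their behavior suddenly looks coerced.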

In addition, end-user education is a critical pillar. In the UK, where APP fraud losses have been especially severe, banks are now required to reimburse victims up to £85,000, and with prevention efforts now in place, case volumes have fallen by 15%.

Together, these capabilities transform fraud prevention from reactive patching to proactive defense. 

Social engineering has always existed but in today’s digital, hyperconnected world, it has evolved into a global trade. What once were isolated scams have become industrialized operations running 24/7, powered by automation and scale. Fraud factories exploit the weakest link in the chain – human vulnerability – making them harder to detect and the biggest threat to banks today. For financial institutions, the challenge is no longer about patching single points of failure, it’s about dismantling entire production lines of deception. Understanding what happens inside these operations is now the first line of defense in a war that criminals are currently winning. 

Interested in more stories like this one? Check out:

The true face of fraud #1: The masterminds behind the $1 trillion crime industry, which explores who is behind the fastest-growing schemes today, where the hubs of so-called scam compounds are located, and what financial organizations must understand about their opponent.

The rise of social media fraud: How one man almost lost it all, which covers everything from romance fraud to investment scams, and the multitude of ways that fraudsters use social media to target victims.

By

Nikita Rybová
Customer & Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


Herond Browser

Herond Browser – The Complete Web3 Integrated Browser

Are you tired of relying on multiple apps and complex extensions to interact with Web3? Herond Browser is the groundbreaking solution - the first All-in-One Web3 gateway.

Are you tired of relying on multiple apps and complex extensions to interact with Web3? Herond Browser is the groundbreaking solution – the first All-in-One Web3 gateway. Herond unifies everything you need, from a secure Keyless Wallet to advanced tracker blocking, all within a single interface. We eliminate fragmentation, helping you browse quickly and manage decentralized assets seamlessly, securely, and easily.

Latest Updates to Herond Browser

Herond Browser‘s new design features focus on clarity, speed, and personal organization:

New Tab Page: Clean layout with a bold, distinctive background image. Your most-visited sites sit front and center, easy to see, with a focused search bar to get you moving fast.

Vertical Tabs by Default (Desktop Only): See more titles, scan faster, and stay organized. Perfect for power users and tab collectors alike.

Instant and Clear Bookmarking: One-tap save with a sharper, more intuitive folder picker. Less guesswork, more “done.”

Persistent Pinned & Grouped Tabs: Pinning and grouping are maintained, even after restart. Your workspace stays exactly how you left it.

Reliable Synchronization: Smoother, more trustworthy cross-device sync. Active sessions persist; data stays fresh and updated.

Refreshed Onboarding Flow: Redesigned for both desktop and mobile. Faster, simpler, and aligned with the new vision from the first tap to the first win.

Conclusion: Embrace the Future with Herond

Herond Browser has proven that owning a complete Web3 integrated browser is no longer a pipe dream. We have eliminated fragmentation, unifying Security, Speed, and Simplicity into a single experience.

The power of the Keyless Wallet and seamless dApp integration puts control of your assets and data back in your hands, making Digital Freedom not a distant goal, but a reality within reach.

Start your journey to master the Internet for yourself. Download Herond Browser today and step into the fullest, safest, and easiest Web3 era!

About Herond

Herond is a browser that blocks advertisements and tracking cookies. This browser features fast web loading speed, allowing you to browse the web comfortably without interruption. Currently, Herond Browser has two core products:

Herond Shield: Software for ad-blocking and securing user privacy.
Herond Wallet: A multi-chain, non-custodial social crypto wallet.

Herond Browser aims to bring Web 3.0 closer to global users. We hope that in the future, everyone will have control over their own data. The browser application is currently available on CH Play (Google Play) and the App Store, offering users a convenient experience.

Follow our next posts for more beneficial information on safe and effective web usage. If you have any feedback or questions, please contact us on the following platforms:

Telegram: https://t.me/herond_browser
Social Media X: @HerondBrowser



Herond Browser Integrates Uniswap Trading API

Herond Browser now has the Uniswap Trading API integrated directly. This collaboration brings seamless crypto trading to your browser, offering a secure and fast way to swap tokens without leaving the Herond ecosystem.

The world of decentralized finance (DeFi) is evolving, and Herond Browser is at the forefront. We’re excited to announce a major step in our mission to create a secure and integrated Web3 experience: the integration of the Uniswap Trading API.

This allows Herond users to directly trade cryptocurrencies on Uniswap, the world’s leading decentralized exchange, without ever leaving our browser.

What is Uniswap?

Uniswap is the largest and most well-known decentralized exchange (DEX). Unlike centralized exchanges (CEXs) like Coinbase or Binance, Uniswap is a protocol built on the Ethereum blockchain that allows users to swap cryptocurrencies directly with each other without the need for a middleman.

How Does Uniswap Work?

Instead of using a traditional order book that matches buyers and sellers, Uniswap uses an Automated Market Maker (AMM) model. This system relies on liquidity pools, which are smart contracts containing reserves of two different tokens.

Liquidity Providers (LPs)

Users who own crypto can deposit a pair of tokens (e.g., ETH and DAI) into a liquidity pool. In return, they receive a portion of the trading fees generated from that pool. This provides the necessary assets for traders to use.

Traders

When a user wants to trade, for example, ETH for DAI, they don’t trade with another person. Instead, they interact with the smart contract of the liquidity pool. The protocol automatically executes the trade and adjusts the price of the tokens based on the ratio of assets in the pool.
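For readers who want the mechanics, here is a worked example of the constant-product pricing rule (x * y = k) as implemented in Uniswap v2; newer protocol versions add concentrated liquidity, but the core idea is the same. The pool reserves below are made-up numbers for illustration, stated in whole tokens rather than the raw integer units the contracts use.

```python
# Worked example of Uniswap v2's constant-product pricing rule.
def get_amount_out(amount_in: int, reserve_in: int, reserve_out: int) -> int:
    """A 0.3% fee is taken from the input, then the pool's product of
    reserves is held (approximately) constant to price the output."""
    amount_in_with_fee = amount_in * 997          # 0.3% fee: keep 99.7%
    numerator = amount_in_with_fee * reserve_out
    denominator = reserve_in * 1000 + amount_in_with_fee
    return numerator // denominator

# Hypothetical pool: 1,000 ETH and 3,000,000 DAI (spot price ~3,000 DAI/ETH).
eth_reserve, dai_reserve = 1_000, 3_000_000
dai_out = get_amount_out(10, eth_reserve, dai_reserve)
print(dai_out)  # 29614: less than 30,000 because of the fee and price impact
```

Notice that the trader receives slightly less than the spot price implies: the gap is the fee paid to liquidity providers plus the price impact of shifting the pool’s token ratio.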

Key Features of Uniswap

Decentralized and Trustless

Uniswap operates on smart contracts, meaning there is no central authority controlling your funds. You always maintain full custody of your assets in your personal wallet.

Permissionless

Anyone can use Uniswap to trade tokens or provide liquidity. There are no sign-ups, KYC (Know Your Customer) requirements, or geographic restrictions.

Massive Liquidity

As the largest DEX, Uniswap offers deep liquidity for a wide variety of tokens, making it easy to swap even less common or newly launched cryptocurrencies.

UNI Governance Token

Uniswap has its own native token, UNI. Holders of UNI can participate in the governance of the protocol, voting on key decisions that shape its future development.

Why Choose Herond Browser for Integration?

Choosing the right browser is crucial for a smooth and secure Web3 experience. And Herond’s integration with the Uniswap API offers several key advantages.

Seamless and Secure Web3 Experience

Herond Browser isn’t just a standard browser; it’s a “Web 2.5” solution that bridges the gap between the traditional web and the decentralized future. The built-in Herond Wallet is a multi-chain, non-custodial wallet that allows you to manage digital assets directly within the browser, without the need for a separate extension. This native integration reduces the risks associated with third-party software and creates a unified, user-friendly environment. By integrating the Uniswap API directly, Herond makes swapping tokens a core function of the browser itself, streamlining the entire process.

Enhanced Privacy and Security

Herond prioritizes user safety and privacy. With the included Herond Shield, the browser automatically blocks ads, trackers, and malicious sites, which helps protect you from phishing attempts and malware. This is especially important in the DeFi space, where scams are common. The browser also employs advanced security technologies like Multi-Party Computation (MPC) and a Social Login feature to help protect your assets and simplify the login process without compromising security.

Speed and Performance

Herond’s focus on privacy also contributes to its speed. By blocking intrusive ads and trackers that run in the background, the browser uses less RAM and bandwidth. This results in faster page load times, which can be a critical advantage when interacting with decentralized applications and executing time-sensitive transactions on platforms like Uniswap.

Why are we collaborating?

The collaboration between Herond Browser and Uniswap is a strategic move that benefits both platforms and, most importantly, the end user. This partnership leverages the strengths of each to create a more integrated and user-friendly experience for interacting with decentralized finance (DeFi).

Integrating the Uniswap Trading API is a crucial milestone for Herond, strengthening its position as a secure Web3 browser while opening direct access to the fast-growing world of decentralized finance (DeFi). With this powerful integration, Herond users can trade cryptocurrencies directly on Uniswap, one of the leading decentralized exchanges, without needing to leave the browser or rely on third-party tools.

This seamless trading experience makes Herond more than just a browser. It becomes a trusted gateway to DeFi, offering users a smooth, native interface that combines security, privacy, and efficiency. By eliminating the need for external wallet extensions or risky third-party websites, Herond not only simplifies the DeFi onboarding process but also reduces common threats such as phishing attacks and malicious links.

For users, this means faster, safer, and more reliable management of digital assets. For the broader ecosystem, it demonstrates how Herond Browser’s integration of the Uniswap Trading API lowers barriers to DeFi adoption and empowers more people to participate confidently in the blockchain economy. With this integration, Herond takes another step toward building a comprehensive, user-first Web3 platform that connects browsing, trading, and digital asset management in one secure environment.

About Uniswap Labs

Uniswap Labs builds products for users to safely and securely access DeFi, including an API, https://app.uniswap.org/, and https://wallet.uniswap.org/, which collectively serve millions of users. Uniswap Labs also contributes to the development of the Uniswap Protocol, a peer-to-peer system for swapping digital assets that has processed over $3 trillion in all-time volume. To stay up to date on all things Uniswap Labs, follow us on https://x.com/Uniswap or https://www.linkedin.com/company/uniswaporg/.

About Herond Browser

Herond Browser is a Web3-focused browser designed to prioritize user privacy, security, and a seamless experience for interacting with decentralized applications (dApps). It positions itself as a next-generation browser that bridges the gap between the traditional internet (Web2) and the decentralized web (Web3).

The Herond x Uniswap integration unlocks on-chain reward campaigns that directly benefit users. With transparent, real-time incentives like tokens or NFTs for actions such as trading or providing liquidity, we empower the community to engage, earn, and grow with us.

Download Herond, trade with Herond Wallet & Uniswap, then grab huge rewards at our upcoming event!


Monday, 10. November 2025

Indicio

Hot 25 Travel Startups for 2026: Indicio

Phocuswire

Herond Browser

Herond’s new vision for a user-centric Internet

This defines what "user-centric" truly means: you are the owner, the controller, and the priority. Stop being the product, and start enjoying the internet you deserve.

Are you tired of feeling like the product? The current state of the web is broken: it’s overwhelmed by intrusive ads, plagued by data exploitation, and crippled by unnecessary complexity. This pervasive privacy invasion proves the internet needs to be fundamentally reimagined. Introducing Herond’s revolutionary vision. We’ve built a browser on a foundation of respect, believing every user deserves an online experience that is safer, faster, and entirely their own. This defines what “user-centric” truly means: you are the owner, the controller, and the priority. Stop being the product, and start enjoying the internet you deserve.

User-centric Problem: Today’s Broken Internet

Privacy Crisis

Data harvesting by big tech: Major corporations track virtually every click, search, and purchase you make, building comprehensive digital profiles used not just for targeted advertising but also for behavioral prediction and manipulation.

Third-party tracking epidemic: Hidden trackers, cookies, and fingerprinting scripts follow users across independent websites, creating a massive surveillance network that operates constantly without explicit, informed consent.

Lack of user control over personal information: Users have zero true ownership. Once data is shared, it’s treated as a free resource for companies to monetize, making it virtually impossible to truly delete or restrict its future use.

Fragmented Experience

Multiple apps for basic tasks: Interacting with crypto and Web3 requires constant switching between browser extensions, separate external wallet apps, and various security tools, leading to friction, security risks, and frustrating errors.

Web2 vs Web3 divide: The current environment lacks native integration, making decentralized applications (dApps), DeFi, and NFTs feel like complex, separate add-ons, which severely hinders mainstream adoption and ease of use.

Complex tools that require technical expertise: Core security necessities, such as managing traditional seed phrases or configuring advanced decentralized applications, often require specialized knowledge, alienating the average, everyday user.

Hidden Costs

“Free” services that sell your data: Services that appear free often operate on a data-for-access model, where your personal information becomes the real currency and the primary profit engine for large corporations.

Subscriptions for basic privacy features: Users are increasingly forced to pay premium fees (for VPNs, ad-free versions, or advanced blockers) just to regain basic privacy and browsing performance that should be standard and freely available.

Performance sacrificed for advertising: Heavy ad scripts and intrusive trackers significantly slow down page loading times, consume excessive mobile data, and drain device battery life, costing the user valuable time and money directly.

Herond’s Vision: Putting Users First

Herond is not just a browser; it’s a declaration of independence for the user. We are building a Web 3.0 experience defined by two non-negotiable standards: User Ownership and Seamless Efficiency.

Core Principle: You Own Your Internet

Your data belongs to you: Herond fundamentally shifts the data ownership model, providing tools like Herond Shield to prevent unauthorized tracking and ensuring that every piece of personal information remains under the user’s explicit control.

Your choices matter: The browser prioritizes transparency and gives users real-time control over their digital environment, allowing them to decide what content loads and what data is shared, rather than being dictated to by complex default settings.

Your privacy is non-negotiable: Privacy is treated as a default setting and an inherent right. Herond minimizes the data collected and stored by the browser itself, setting a new standard where user anonymity is guaranteed, not offered as an expensive upgrade.

User-centric – The Three Pillars

Privacy by design, not by choice: Security features, such as integrated ad-blocking and tracker prevention, are built into the browser’s core architecture from day one, meaning users are protected automatically without having to install external tools.

Seamless integration across Web2 and Web3: By incorporating tools like the Herond Wallet directly into the browser, Herond eliminates the fragmented experience, making it easy to jump from a traditional news site to a DeFi application without switching context or extensions.

Simplicity without compromise: Herond delivers the power of advanced Web3 functionality, such as keyless asset management and optimized token swapping, in an intuitive, user-friendly interface, ensuring even beginners can access complex decentralized services securely and effortlessly.

How Herond Delivers on the Vision

Privacy First Architecture

Built-in Ad and Tracker Blocking (Herond Shield): Unlike extensions that can be bypassed, protection is hard-coded into the browser’s foundation, actively blocking invasive ads, malicious scripts, and third-party trackers before they even load.

Zero Data Collection Policy: Herond operates with minimal data logging. We do not store, track, or analyze your browsing history, search queries, or personal usage patterns on our servers.

No Surveillance, No Profiling, No Selling Data: We guarantee that your information is never used to build a commercial profile or sold to advertisers, ensuring your digital presence remains private and free from algorithmic manipulation.

All-in-One Gateway

Traditional Browsing + Web3 Access in One Place: Herond seamlessly bridges Web2 and Web3. You can read the news, watch videos, and manage your decentralized finances (DeFi) all within the same application window.

Native dApp Integration: Decentralized applications (dApps) function flawlessly without the need for cumbersome browser extensions, vastly improving speed and reducing the security risks associated with third-party add-ons.

Crypto Wallet Management Without Complexity (Herond Wallet): The integrated Keyless Wallet removes the complexity of traditional seed phrases, allowing users to send, receive, and swap tokens easily, directly within the browser, with industry-leading security.

Speed & Performance

Up to 2.3x Faster than Competitors: By blocking resource-heavy ads and trackers from loading, Herond dramatically reduces bandwidth consumption and processing load, leading to significantly faster page loading speeds.

Ad-Free by Default: The default setting eliminates visual clutter and performance overhead caused by advertising scripts, providing a cleaner, more focused reading and browsing experience.

Optimized for Modern Web Demands: The browser is engineered to handle today’s complex, high-data web environments efficiently, ensuring smooth performance even when running intensive decentralized applications.

Intuitive Design

Vertical Workspace Organization: Herond utilizes a streamlined vertical layout that prioritizes content visibility and simplifies navigation, making full use of modern screen space.

One-Tap Features, Zero Configuration: Essential security and crypto functions, such as the Hide Balance feature or token swapping, are accessible instantly, eliminating the need to dig through deep settings menus.

Made for Humans, Not Tech Experts: The interface is designed with clarity and simplicity in mind, ensuring that the power of Web3 is accessible to everyone, regardless of their technical background.

Conclusion: The Future Is User-Centric

The leap from a fragmented, surveillance-heavy Web2 to the decentralized future requires a new browser built on respect, control, and efficiency. Herond fulfills the original promise of the internet: a space where the user is sovereign.

We’ve solved the fundamental flaws of the old web: eliminating the Privacy Crisis with zero-data architecture, ending the Fragmented Experience by unifying Web2 and Web3 access, and removing Hidden Costs with built-in speed and security. With the integrated Herond Wallet and Shield, we’ve proven that advanced power can be simple.

The future of browsing is user-centric, and Herond is your definitive gateway. Claim your digital freedom and experience the web the way it was meant to be.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org



Ironclad Security, Superior Speed, and Default Privacy – The Power Trio of Herond Browser

Herond Browser offers the complete solution: redefining the online experience by unifying an inseparable triple threat: Absolute Security, Superior Speed, and Default Privacy.

Are you forced to trade web speed for personal privacy? Stop accepting a sluggish, ad-ridden browser that silently exploits your data. Herond Browser offers the complete solution: redefining the online experience by unifying an inseparable triple threat: Absolute Security, Superior Speed, and Default Privacy. Herond is more than just a Web 3.0 browser; it is your ultimate shield. Discover how Herond combines these three pillars to create the safe, fast, and user-centric browsing environment you deserve.

Absolute Security – The Shield Against Web3 Risks

A. Keyless Architecture and Asset Security

Herond Keyless Wallet: Integrated wallet technology that eliminates the need for traditional seed phrases, removing the single biggest security vulnerability for most users.

Multi-Factor Security: Applies modern authentication methods to ensure only the owner can access the wallet, providing protection even if the device is stolen.

Safe & Easy Recovery: Replaces complex seed phrases with a simple Backup Code, allowing for quick recovery while maintaining maximum encrypted security.

B. Defending Against Online Threats

Integrated Anti-Phishing: Automatically alerts and blocks access to fraudulent (scam/phishing) websites designed to steal wallet information, protecting assets at the browser level.

Malware Protection: Prevents malicious scripts and files from downloading or running within the browser environment.

Eliminating Extension Vulnerabilities: Since Herond Wallet is a native feature, it bypasses the security risks and vulnerabilities commonly found in third-party wallet extensions.

Default Privacy – The End of the Surveillance Crisis

A. Herond Shield – The Ultimate Blocking Tool

Deeply Integrated Ad-Blocking and Tracker Prevention: Ad-blocking and tracker-blocking features are hard-coded into the core, not added as a separate extension.

Profile Tracking Elimination: Actively strips away profiling cookies and sophisticated fingerprinting scripts that track you across the web.

B. Zero Data Collection Policy

Commitment to Privacy: Guaranteed no surveillance, no data selling, and no user profiling. Your data remains your own.

Privacy by Design: Privacy is the default setting, ensuring protection is always on and is not a premium feature you have to pay for.

Superior Speed – Frictionless Browsing

A. Accelerating Page Load Times

Core Blocking Mechanism: Speed is dramatically improved because Herond Shield directly eliminates resource-heavy ads and tracking scripts before they are processed.

Reduced Data Load: Decreases the amount of data needed to load a page by 50% or more, resulting in near-instant rendering, which is crucial for mobile users.

Performance Metrics: Tests show Herond can load pages 2-3 times faster than competitors without integrated blocking features.

B. Performance Optimization

Conserving Device Resources: By reducing unnecessary script loading, Herond helps lower CPU and RAM consumption, significantly extending battery life and reducing device heat.

Web3 Optimization: The browser infrastructure is fine-tuned to smoothly run demanding dApps, DeFi applications, and blockchain games without lag.

Seamless Experience: Eliminates latency and waiting times, ensuring a continuous, uninterrupted browsing flow.

The Synergy – Herond Browser is the All-in-One Gateway

A. All-in-One Experience

The Web2 and Web3 Bridge: Herond is the first browser to fully integrate the traditional web browsing environment with the decentralized world. No external extensions or applications are needed.

Instant Crypto Transactions: Thanks to the integrated Keyless Wallet, users can perform all crypto operations (swap, send, receive tokens) directly within the browser with just a few clicks.

Minimized Friction: Eliminates the need to switch applications or manage multiple interfaces, creating a unified and efficient digital workspace.

B. Simplicity without Compromise

Intuitive Design: The interface is engineered to be “Made for Humans,” ensuring that complex Web3 features are easy to understand and use.

Easy Access: Critical security and transaction features are readily accessible, requiring only one tap for activation and zero deep configuration.

Universal Suitability: Advanced tools are simplified, allowing both crypto newcomers and seasoned Web3 experts to use Herond safely and confidently.

Conclusion: The Future of the Internet is in Your Hands

We have explored the triple threat that sets Herond Browser apart: Absolute Security with its Keyless architecture, Superior Speed thanks to its core blocking mechanism, and Default Privacy delivered via Herond Shield. Herond is more than just a browser; it is a declaration of individual ownership in the Web3 era.

The seamless unification of Web2 and Web3 alongside the intuitive design proves that you don’t have to sacrifice simplicity for maximum security.

Don’t settle for a subpar online experience. Download Herond Browser today to proactively seize control of your data, your assets, and your browsing speed. The safe, fast, and user-centric Internet is ready.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite. Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser On Discord https://discord.gg/Herond-Browser DM our official X @HerondBrowser

The post Ironclad Security, Superior Speed, and Default Privacy – The Power Trio of Herond Browser appeared first on Herond Blog.



Elliptic

Hong Kong and Wolfsberg point to new reality for stablecoin issuers: ongoing monitoring

Stablecoin issuers have traditionally concentrated their compliance efforts on two critical touchpoints: issuance and redemption. When tokens are minted, issuers verify the customer's identity and source of funds. When tokens are burned, they verify who is redeeming.



Dock

Centralized ID, federated ID, decentralized ID: what’s the difference?


In our recent live workshop, Introduction to Decentralized Identity, Richard Esplin (Dock Labs' Head of Product) and Agne Caunt (Dock Labs' Product Owner) explained how digital identity has evolved over the years and why decentralized identity represents such a fundamental shift.

If you couldn’t attend, here’s a quick summary of the three main identity models they covered:


HYPR

HYPR and Yubico Deepen Partnership to Secure and Scale Passkey Deployment Through Automated Identity Verification

For years, HYPR and Yubico have stood shoulder to shoulder in the mission to eliminate passwords and improve identity security. Yubico’s early and sustained push for FIDO-certified hardware authenticators and HYPR’s leadership as part of the FIDO Alliance mission to reduce the world’s reliance on passwords have brought employees and customers alike into the era of modern authentication.


Today, that partnership continues to expand. As enterprise adoption of YubiKeys accelerates worldwide, HYPR and Yubico are proud to announce innovations that help enterprises further validate that the employees receiving or using their YubiKeys are verified to the highest levels of identity assurance.

HYPR Affirm, a leading identity verification orchestration product, now integrates directly with Yubico’s provisioning capabilities, enabling organizations to securely verify, provision, and deploy YubiKeys to their distributed workforce with full confidence that each key is used by the right, verified individual.

Secure YubiKey Provisioning for Hybrid Teams

Security leaders routinely purchase YubiKeys by the hundreds or thousands, only to confront a stubborn challenge: securely provisioning those keys to a remote or hybrid workforce quickly and verifiably.

Manual processes, from shipment tracking to recipient activation, are no longer adequate for modern security. The current setup, while seemingly robust, lacks the critical identity assurance needed to withstand today's threats. Even the most advanced hardware security key is compromised if it's issued or activated by an unverified individual. What’s needed is not just faster fulfillment, but a secure, automated bridge that links verified identity directly with hardware credentialing.

What YubiKey Provisioning with HYPR Affirm Delivers

Enterprises can now link a verified human identity to a hardware-backed, phishing-resistant credential before a device is shipped. Yubico provisions a pre-registered FIDO credential to the YubiKey, binds it to the organization’s identity provider (IdP), and ships the key directly to the end user - no IT or security team intermediation required. The user receives a key that’s ready to activate in minutes - no shared secrets over insecure communications, no guesswork, zero gaps of trust. This joint approach streamlines operations while preserving Yubico’s gold-standard hardware security and user experience.

How It Works: Pre-Register → Verify → Activate

The flow is seamless. To activate a YubiKey, HYPR Affirm first verifies that the intended user is, in fact, the right individual through high-assurance identity verification. Its orchestration options include government ID scanning, facial biometrics with liveness detection, location data, and even live video verification with peer-based attestation. Policy settings can easily be grouped by role and responsibility.
Once verified, the user is issued a PIN to activate the pre-registered, phishing-resistant credential on the YubiKey, linked to the organization’s identity provider. When the user receives their key, activation is simple, secure, and immediate.
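For readers who think in code, here is a rough sketch of that ordering: identity verification gates the release of the activation PIN. Every endpoint, URL, and type below is hypothetical; this is not HYPR's or Yubico's actual API, only an illustration of the verify-then-activate sequence.

```typescript
// Hypothetical sketch of the verify-then-activate flow described above.
// None of these endpoints or types are real HYPR/Yubico APIs; they only
// illustrate the ordering: no PIN is released until verification succeeds.

type VerificationResult = { verified: boolean; userId: string; evidence: string[] };

async function verifyIdentity(userId: string): Promise<VerificationResult> {
  // Placeholder for HYPR Affirm-style orchestration (ID scan, liveness, etc.).
  const res = await fetch(`https://idv.example.com/verify/${userId}`, { method: "POST" });
  return res.json();
}

async function activateYubiKey(userId: string): Promise<string> {
  const result = await verifyIdentity(userId);
  if (!result.verified) {
    throw new Error("Identity verification failed; activation PIN withheld");
  }
  // Only after verification is the PIN for the pre-registered FIDO
  // credential released to the user (hypothetical endpoint).
  const res = await fetch(`https://provisioning.example.com/pin/${userId}`, { method: "POST" });
  const { pin } = await res.json();
  return pin; // The user enters this PIN to activate the credential on the key.
}
```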

The result is an end-to-end, verifiable trust chain that gives IT, security, and compliance teams the assurance that:

The YubiKey was issued to a verified user.
The credential was provisioned securely and cannot be intercepted.
An auditable record ties the verified identity to the hardware-backed credential.

Scalable Remote Distribution and Faster Rollouts

This is built for the real world: companies that buy 100, 1,000, or 10,000 keys and need to deploy them across regions, time zones, and employment types. By anchoring every key to a verified user before it ships, organizations reduce failed enrollments, eliminate back-and-forth helpdesk tickets, and accelerate time-to-protection for global teams. 

Beyond Day One: Resets, Re-issues, and Role Changes

Implementing automated identity verification checks into the YubiKey provisioning process streamlines initial deployment, but the same model applies after initial rollout. When a new employee is being onboarded, or a key is lost, damaged, or reassigned, HYPR Affirm can re-verify identity at the moment of risk, and Yubico can provision a replacement credential with the same tight linkage between proofing and issuance. This reduces social-engineering exposure during high-risk helpdesk moments and keeps lifecycle events as deterministic as day one.

Building a Future of Trusted, Effortless Authentication

Yubico set the global benchmark for hardware-backed, phishing-resistant authentication. HYPR is extending that foundation to unlock identity assurance at scale - ensuring every YubiKey is ready to protect access from day one.

Together, we’re transforming what has traditionally been a manual, trust-based process into a verifiable, automated, and user-friendly standard for enterprise security.

From my perspective, this partnership represents something bigger than integration. It’s a proof point that security and simplicity can coexist at scale - and that’s what excites me most. We’re helping organizations move faster toward a passwordless future where verified identity and hardware-backed trust work seamlessly, everywhere.

Learn more about how HYPR and Yubico are redefining workforce identity and authentication for the modern era: Explore the Integration.

HYPR and Yubico FAQ

Q: What changes with this new HYPR and Yubico partnership?

A: Identity verification and YubiKey provisioning are now tightly connected, so each key is pre-registered to a user before shipment and is activated through identity verification upon arrival.

Q: How does this improve remote rollouts?

A: Enterprises can ship keys globally with proof that intended recipients are the ones who activate the device, reducing logistics friction and failed enrollments.

Q: What compliance benefits does this provide?

A: The verified identity event is linked to the cryptographic credential, producing a clear audit trail and aligning with NIST 800-63-3’s assurance model (IAL for proofing, AAL for authentication) while enabling AAL3 from first use.

Q: Does this help with loss, replacement, or re-enrollment?

A: Yes. HYPR Affirm can trigger re-verification for high-risk events (like replacement or role change) before provisioning, reducing social-engineering risk and maintaining assurance over time. Yubico Enterprise Delivery allows organizations to seamlessly replace lost authenticators in a secure and simple workflow.

Q: What is the end-user experience like?

A: Users receive a pre-registered YubiKey and activate it with a simple identity verification. They then log in with phishing-resistant passkeys - no passwords or complex setup.

 


Herond Browser

How to Use Uniswap Aggregator on Herond Wallet: Secure & Optimized Multi-Chain Token Swaps


Tired of manually hunting across a dozen different decentralized exchanges (DEXs) just to find the best token swap rate? In the volatile DeFi landscape, finding optimal liquidity and pricing can be time-consuming and often causes you to miss out on better deals. Herond Wallet delivers a game-changing solution by integrating the Uniswap Aggregator directly into its interface. This powerful tool automatically scans and combines liquidity from multiple DEXs across various chains, ensuring you consistently get the best possible price and lowest gas fees for every trade. This comprehensive guide will walk you through exactly how to harness the power of the Uniswap Aggregator on Herond Wallet for secure and optimized multi-chain token swaps.

What is the Uniswap Aggregator?

Uniswap Aggregator is a liquidity routing system that automatically finds the best exchange rate and lowest gas fee for your token swaps. Instead of swapping directly on a single DEX, the aggregator compares prices across multiple decentralized exchanges (DEXs) to ensure you always get the most optimal transaction.
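As a rough illustration of what any aggregator does under the hood, the sketch below gathers quotes from several venues and picks the best net outcome. The venues, numbers, and Quote shape are invented for the example; this is not Herond's or Uniswap's actual routing code.

```typescript
// Illustrative sketch of liquidity aggregation: collect quotes from
// several venues and rank them by output net of estimated gas.

interface Quote { venue: string; amountOut: bigint; gasCostInOut: bigint }

function bestQuote(quotes: Quote[]): Quote {
  // Compare net output (amount out minus gas, expressed in the output token).
  return quotes.reduce((best, q) =>
    q.amountOut - q.gasCostInOut > best.amountOut - best.gasCostInOut ? q : best
  );
}

const quotes: Quote[] = [
  { venue: "DEX A", amountOut: 1_000_000n, gasCostInOut: 5_000n },
  { venue: "DEX B", amountOut: 1_002_000n, gasCostInOut: 9_000n },
];
// DEX A wins: 1,000,000 - 5,000 = 995,000 beats 1,002,000 - 9,000 = 993,000.
console.log(bestQuote(quotes).venue); // "DEX A"
```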

When integrated into Herond Wallet, it allows users to:

Send or swap tokens across chains instantly.
Reduce slippage and transaction fees.
Complete swaps securely without leaving the browser.

Why Send via Uniswap Aggregator in Herond Wallet?

Using Uniswap Aggregator inside Herond Wallet gives you the advantages of both worlds, the convenience of an embedded crypto wallet and the intelligence of an optimized DeFi router.

Key Benefits:

Best rates automatically: The aggregator scans multiple DEXs like Uniswap, SushiSwap, and 1inch for the best price.
Lower gas costs: Transactions are routed for maximum efficiency.
Non-custodial security: You own your keys; Herond never stores your private data.
Instant transactions: All swaps are done directly inside your browser, no extensions required.

How to Send Tokens via Uniswap Aggregator in Herond Wallet

Follow these simple steps to send or swap your tokens efficiently:

Step 1: Connect Herond Wallet and Open "Swap"

Launch Herond Wallet inside Herond Browser. Go to Swap, choose your desired tokens, and enter the amount to exchange.

Step 2: Review Route & Confirm Transaction

View the optimized route generated by the Uniswap Aggregator. Confirm the transaction in your Herond Wallet and wait for on-chain confirmation. Track your transaction progress directly within Herond.

Conclusion

Utilizing the Uniswap Aggregator through Herond Wallet is more than just a convenience—it’s a fundamental shift toward smarter, more secure trading.

You’ve now eliminated the need for complex manual research and the worry of executing a poorly priced trade. By dynamically routing your swaps through the deepest available liquidity pools across the multi-chain ecosystem, Herond ensures that every transaction is executed at the absolute best rate with minimized slippage. Take back control of your DeFi experience. Open Herond Wallet today and start trading smarter, where security, efficiency, and optimal pricing are always guaranteed.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite. Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser On Discord https://discord.gg/Herond-Browser DM our official X @HerondBrowser

The post How to Use Uniswap Aggregator on Herond Wallet: Secure & Optimized Multi-Chain Token Swaps appeared first on Herond Blog.



How to Create a Backup Code for Herond Wallet – Secure Your Crypto Wallet Easily


In the crypto world, losing access to your wallet can mean losing all your assets forever. While Herond Wallet stands out with its innovative Keyless technology – meaning you don’t have to manage complex seed phrases – having a foolproof recovery system is still essential. What happens if you lose your device or forget your login password? That’s where the Backup Code comes in. It’s Herond’s unique, simple, and secure recovery mechanism designed to ensure you can always access your wallet, regardless of what happens to your device. This isn’t just a guide; it’s your essential path to security. Follow these easy steps on how to create a Backup Code for Herond Wallet, guaranteeing you have a safe and stress-free way to secure your crypto wallet easily.

What Is a Backup Code?

This is an emergency recovery code created by Herond Wallet that helps you restore your crypto wallet in case of:

Lost or changed devices.
Forgotten login password.
Migration to a new browser or computer.

The Code works similarly to a seed phrase, but it’s simpler and more secure. It’s encrypted directly inside your browser, ensuring only you have access to your wallet recovery data.
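To make "encrypted directly inside your browser" concrete, here is a minimal sketch of the general pattern using the standard WebCrypto API: derive a key from a user secret and encrypt the recovery data locally. This is an illustration under stated assumptions, not Herond's actual scheme.

```typescript
// Minimal sketch of browser-local encryption of recovery data. NOT Herond's
// actual implementation; it only shows the client-side pattern the article
// describes, using the standard WebCrypto API.

async function encryptRecoverySecret(secret: string, password: string) {
  const enc = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));
  // Derive an AES key from the user's password so only the user can decrypt.
  const baseKey = await crypto.subtle.importKey(
    "raw", enc.encode(password), "PBKDF2", false, ["deriveKey"]
  );
  const key = await crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 310_000, hash: "SHA-256" },
    baseKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt"]
  );
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, enc.encode(secret)
  );
  return { salt, iv, ciphertext }; // Stored locally; nothing readable leaves the device.
}
```

Decryption reverses the steps with the same salt and IV; without the user's secret, the stored blob is unreadable.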

Why You Should Create a Backup Code in Herond Wallet

Creating a Backup Code is not optional – it’s the only way to guarantee full control and access to your Web3 assets.

Key Benefits:

Instant Wallet Recovery: Regain access to your wallet quickly with just one secure code.
Enhanced Security: Protect your crypto assets against loss or accidental lockout.
No Central Server Dependency: Your recovery data is stored locally; you own your wallet, not the server.
Multi-device Support: Easily move your wallet across new laptops, browsers, or mobile devices.

How to Create a Backup Code in Herond Wallet

Follow these simple steps to create and activate your backup code:

Step 1: Open Herond Browser

Go to Settings -> Login & Security -> Recovery Code, then click Set up to start.

Step 2: Enter Password and Verify

Re-enter your Herond Wallet password to confirm your identity before generating the code.

Step 3: Generate and Confirm Backup Code

Herond will display your Recovery Code. Store it safely and privately – avoid saving it online or sharing screenshots. Click Confirm to finish setup.

Security Tips for Managing Your Backup Code

Never share your Backup Code with anyone.
Store it in 2–3 secure offline locations (USB drive, notebook, or encrypted file).
Regenerate your code if you change devices or suspect exposure.
Immediately reset your Backup Code if you think it's been compromised.

Conclusion

Creating your Backup Code is the final, non-negotiable step to fully securing your crypto future on Herond Wallet.

This simple, encrypted code guarantees that, even without a traditional seed phrase, you maintain absolute control over your funds, regardless of device loss or forgotten passwords. By successfully combining the simplicity of the Keyless system with the security of the Backup Code, Herond provides true peace of mind. Don’t wait until an incident occurs; take a minute now to create and store your Backup Code safely. Your decentralized future deserves this layer of total security!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite. Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser On Discord https://discord.gg/Herond-Browser DM our official X @HerondBrowser

The post How to Create a Backup Code for Herond Wallet – Secure Your Crypto Wallet Easily appeared first on Herond Blog.



Master Sending and Receiving Crypto with Herond Keyless Wallet


Experience effortless sending and receiving in Herond Keyless: no seed phrases, no hassle. Deposit funds instantly with a secure address or QR code, or send tokens with one-tap confirmation. Whether moving assets from another wallet or transferring out, every step is intuitive and secure. Follow this complete guide and take full control of your crypto today!

Receive

Step 1: Once logged in and your Herond Keyless wallet is created, simply click Deposit Now to kickstart funding your account instantly.

Step 2: If you don’t yet own any crypto, start by buying with our guide here: 7-Step Guide: Buy Crypto Directly into your Herond Keyless Wallet. Already holding crypto in another wallet or exchange? Simply click Receive to transfer it into your Herond Keyless wallet instantly.

Step 3: Select the network you want to receive crypto on. Once chosen, your Herond Keyless wallet offers two easy deposit options: copy the wallet address or scan the QR code. You can also save the QR code for faster future transfers. Important: Solana addresses differ from EVM-network addresses, so double-check the network to avoid irreversible mistakes! A simplified format check is sketched below.
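Here is that simplified format check. It is a heuristic only: EVM addresses are 20-byte hex strings prefixed with 0x, while Solana addresses are base58-encoded keys, so the two are easy to tell apart by shape. A passing check does not confirm the address exists on the chosen network.

```typescript
// Rough heuristic for the network warning above. Illustrative only; it
// cannot verify that an address actually exists on the selected chain.

const isEvmAddress = (a: string) => /^0x[0-9a-fA-F]{40}$/.test(a);

// Solana addresses are base58-encoded 32-byte keys, typically 32-44 chars.
const looksLikeSolanaAddress = (a: string) =>
  /^[1-9A-HJ-NP-Za-km-z]{32,44}$/.test(a);

console.log(isEvmAddress("0x0000000000000000000000000000000000000000")); // true
console.log(looksLikeSolanaAddress("0x00"));                             // false
```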

Step 4: Once the transfer is successful, your Herond Keyless wallet instantly displays your total balance in USD value. You’ll also see a detailed breakdown of each coin’s quantity and real-time value, keeping you in full control at a glance.

Send

Step 1: To send crypto, simply click the Send button to get started instantly.

Step 2: Your Herond Keyless wallet will display all the coins you currently hold; choose the one you want to send and proceed to the next step effortlessly.

Step 3: Enter the amount you want to send; your Herond Keyless wallet offers quick presets like 25%, 50%, or Max of your balance. Then, in the To field, paste the recipient’s wallet address or select from your saved contacts for fast, error-free transfers.

Once the amount and address are set, click Send to move to the next step.

Step 4: Double-check the send amount and recipient address one final time to ensure accuracy. Next, review the gas fee: Herond Wallet offers three smart presets (Low, Optimal, and High), or you can customize your own for full control; a sketch of how such presets typically work follows below. Once everything looks good, hit Confirm to execute your transaction securely and instantly.
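Presets like these usually boil down to scaling a current fee estimate by a factor. The multipliers and fee source below are invented for illustration and are not Herond's actual values.

```typescript
// Hedged sketch of Low/Optimal/High gas presets: scale an estimated max
// fee (in wei) by a preset percentage. Multipliers are illustrative only.

type Preset = "low" | "optimal" | "high";
const MULTIPLIERS: Record<Preset, bigint> = { low: 90n, optimal: 100n, high: 120n };

function applyPreset(estimatedMaxFeeWei: bigint, preset: Preset): bigint {
  return (estimatedMaxFeeWei * MULTIPLIERS[preset]) / 100n; // percent scaling
}

console.log(applyPreset(30_000_000_000n, "high")); // 36 gwei from a 30 gwei estimate
```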

Step 5: Once the transaction is successfully completed, instantly verify the details and transaction hash with a single click on View Transaction, keeping you in full control, every step of the way.

Transfers Complete – Full Control in Your Hands

Your Herond Keyless wallet now handles sends and receives with effortless precision, assets updated in real time, fees optimized, and every transaction verified instantly. Stay secure, stay fast, and stay in charge across all networks. Download Herond Browser now at https://herond.org to track, transfer, or grow your portfolio. Your Web3 command center is live, start moving crypto with confidence!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite. Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser On Discord https://discord.gg/Herond-Browser DM our official X @HerondBrowser Technical support topic on https://community.herond.org

The post Master Sending and Receiving Crypto with Herond Keyless Wallet appeared first on Herond Blog.



Instant Token Swaps with Herond Keyless Wallet


Complete token swaps in seconds inside Herond Keyless: no seed phrases, no hidden fees. Pick your pair, set the amount, choose the best route, and confirm with one tap. Enjoy zero wallet fees (limited time), customizable slippage, and full transaction transparency. Jump into this seamless swap guide and start trading smarter today!

Step 1: To start a swap, simply click the Swap button on the main screen or tap the swap icon in the bottom toolbar – quick access, instant trading.

Step 2: First, select the tokens you want to swap by clicking Select. Herond Keyless wallet lists the tokens you already hold at the top, and you can search for any other token by its ticker or contract address.

Step 3: Once your token pair is selected, enter the amount or use smart presets like 25%, 50%, or Max. Next, choose your swap route. Herond Wallet currently powers trades through UniswapX for optimal liquidity.

Then, review the slippage tolerance (default 0.5%); you can adjust it anytime via Customize for full control. The sketch below shows the arithmetic.
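Slippage tolerance translates directly into a minimum acceptable output. A quick, purely illustrative sketch of the arithmetic, using basis points (0.5% = 50 bps):

```typescript
// Minimum acceptable output for a given slippage tolerance in basis points.
function minAmountOut(quotedOut: bigint, slippageBps: bigint): bigint {
  return (quotedOut * (10_000n - slippageBps)) / 10_000n;
}

// With the default 0.5%, a quote of 200,000,000 units floors at 199,000,000.
console.log(minAmountOut(200_000_000n, 50n)); // 199000000n
```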

Step 4: Once all selections are set, click Swap to instantly move to the transaction confirmation screen – your trade is just one tap away!

Pro Tip: All wallet fees are currently waived on Herond Wallet – zero extra costs for swaps! This limited time perk won’t last forever, so swap smarter and save more while it’s active. Don’t miss out!

Step 5: Final check: review token amounts, swap routes, and gas fee – everything at a glance. Once verified, tap Confirm to execute your Herond Keyless swap instantly and securely!

Step 6: Once the transaction is successfully completed, instantly verify the details and transaction hash with a single click on View Transaction, keeping you in full control, every step of the way.

Swap Finished: Your Portfolio Is Updated

Your Herond Keyless swap is done – tokens updated instantly, fees saved, and every detail verified with a single click. Keep full control with real-time tracking and zero-cost trades while the waiver lasts. Open Herond Browser anytime to swap again, send, or grow your portfolio. Web3 trading just got faster, cheaper, and fully yours!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite. Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser On Discord https://discord.gg/Herond-Browser DM our official X @HerondBrowser Technical support topic on https://community.herond.org

The post Instant Token Swaps with Herond Keyless Wallet appeared first on Herond Blog.



7-Step Guide: Buy Crypto Directly into your Herond Keyless Wallet


Buy crypto and fund your Herond Keyless wallet with fiat in minutes – no seed phrases, no external transfers required. Just a few simple clicks to select currency, enter amount, complete secure KYC, and confirm payment. Enjoy automatic conversion, instant crediting, and full network control from day one. Follow this beginner-friendly on-ramp guide and start building your crypto portfolio today!

Step 1: Once logged in and your Herond Keyless wallet is created, simply click Deposit Now to kickstart funding your account instantly.

Step 2: If you already have crypto in another wallet, select Receive to deposit it instantly. Otherwise, click Buy to purchase crypto directly and fund your Herond Keyless wallet with ease.

Step 3: Choose your payment currency and the crypto you want to buy. Be sure to select the correct network to receive your assets instantly and securely.

Step 4: After making your selections, enter the fiat amount you wish to spend; the system will automatically convert it to the equivalent crypto (the sketch below shows the idea) and guide you to the select purchase method screen. Currently, Herond Wallet supports purchases via TransFi, with more trusted providers coming soon.
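The conversion itself is simple arithmetic: subtract the provider's fee, then divide by the current rate. The rate and fee below are made-up examples, not TransFi's or Herond's actual pricing.

```typescript
// Illustrative fiat-to-crypto conversion. Rate and fee are hypothetical.
function convertFiat(fiatAmount: number, rate: number, feePct: number): number {
  const afterFee = fiatAmount * (1 - feePct / 100); // deduct provider fee
  return afterFee / rate;                            // crypto credited to the wallet
}

// e.g. $500 at $2,500/ETH with a hypothetical 1% provider fee:
console.log(convertFiat(500, 2500, 1).toFixed(6)); // "0.198000" ETH
```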

Step 5: Double-check the currency type and conversion rate, then click Buy to proceed. At this stage, the provider will collect essential details: full name, email, phone number, address, postal code, and more to complete your secure crypto purchase.

Step 6: After entering all required details, select your preferred payment method. The provider will instantly display international bank details – including international bank account number (IBAN), bank identifier code (BIC), recipient name, and reference text to complete the transfer. Once done, simply click Payment Completed to finalize your crypto deposit.

Step 7: Everything is ready! Your crypto is now in your Herond Keyless wallet – funded, secure, and ready to use across the blockchain! Buy crypto now!

Your Wallet Is Funded – Start Exploring Web3 Now

Your crypto is now live and secure in Herond Keyless wallet, buy crypto and get credited in minutes with zero hassle. Enjoy instant access, seamless cross chain support, and one tap management. Log in anytime, track balances, and explore Web3 with confidence. Download Herond Browser now at https://herond.org and start transacting, staking, or swapping. Your decentralized future begins here!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite. Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser On Discord https://discord.gg/Herond-Browser DM our official X @HerondBrowser Technical support topic on https://community.herond.org

The post 7-Step Guide: Buy Crypto Directly into your Herond Keyless Wallet appeared first on Herond Blog.



auth0

.NET 10: What’s New for Authentication and Authorization

Dive into the latest .NET 10 updates for authentication and authorization, and important breaking changes for .NET developers.

FastID

The New 2025 OWASP Top 10 List: What Changed, and What You Need to Know

The 2025 OWASP Top 10 list is here! Discover what changed, the two new categories, and how to secure your applications against emerging threats.

Sunday, 09. November 2025

Herond Browser

Quick Personalization: Add Name & Avatar to Your Herond Keyless Wallet


Make your Herond Keyless wallet truly yours in under a minute. With just three clicks, add a unique wallet name and custom avatar to stand out in Web3. No technical skills needed, just simple, instant personalization right inside Herond Browser. Follow this quick guide and give your wallet a personal touch today!

Step 1: Once logged in and your Herond Keyless wallet is created, to customize your avatar and set a wallet name, start by clicking the Account section.

Step 2: Click the three-dot icon and select Customize to personalize your wallet.

Step 3: Choose a custom name, pick your favorite avatar from the collection, and hit Save. Done, your Herond Keyless wallet is now fully personalized!

Personalization Complete – Your Wallet Stands Out!

Your Herond Keyless wallet now has a custom name and avatar – recognizable and fully yours. Enjoy a personalized experience every time you log in, send, or swap. Update anytime with ease and keep your identity secure. Open Herond Browser now at https://herond.org and make Web3 feel like home!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite. Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram https://t.me/herond_browser On Discord https://discord.gg/Herond-Browser DM our official X @HerondBrowser Technical support topic on https://community.herond.org

The post Quick Personalization: Add Name & Avatar to Your Herond Keyless Wallet appeared first on Herond Blog.


Tuesday, 26. August 2025

Radiant Logic

Rethinking Enterprise IAM Deployments with Radiant Logic's Cloud-Native SaaS Innovation

What are the challenges enterprises face when deploying IAM systems in cloud-native environments?


In today’s cloud-first enterprise landscape, organizations face unprecedented challenges in managing identity and access across distributed, hybrid environments. Traditional on-premises IAM systems have become operational bottlenecks, with deployment cycles measured in weeks rather than hours, security vulnerabilities emerging from static configurations, and scaling limitations that can’t keep pace with business growth. As enterprises accelerate their digital transformation and embrace cloud-native architectures, these legacy constraints threaten competitive advantage and operational resilience. 

Key Takeaway: Traditional IAM systems can’t keep pace with cloud-native speed, scale, and security demands.

At Radiant Logic, we recognized these industry-wide pain points weren’t just technical challenges—they represented a fundamental shift in how IAM must be delivered and managed in the cloud era.  

Addressing the Cloud-Native IAM Gap 

The enterprise IAM landscape has been stuck in a legacy mindset while the infrastructure beneath it has transformed completely. Organizations are migrating critical workloads to Kubernetes clusters, embracing microservices architectures, and demanding the same agility from their IAM infrastructure that they have achieved in their application delivery pipelines. Yet most IAM solutions still operate with monolithic deployment models, manual configuration processes, and reactive monitoring approaches that belong to the pre-cloud era. Setting up new environments can take weeks, and keeping everything secure and compliant is a constant battle with the rollout of version patches and updates. 

The Three Critical Gaps in Traditional IAM Delivery

Through our extensive work with enterprise customers, we identified the following critical gaps in traditional IAM delivery: 

Deployment velocity: enterprises need IAM environments provisioned in hours, not weeks, to match the pace of modern DevOps practices.
Operational resilience: IAM systems must be designed for failure, with automatic healing capabilities and zero-downtime updates.
Real-time observability: security teams need continuous visibility into IAM performance, usage patterns, and potential threats as they emerge.

Radiant Logic’s cloud-native IAM approach addresses these gaps by fundamentally reimagining how IAM infrastructure is delivered, managed, and operated in cloud-native environments. 

Re-Imagining Your IAM Operations with a Strategic Cloud-Native Architecture 

Our Environment Operations Center (EOC) is exclusively available as part of our SaaS offering, representing our commitment to cloud-native IAM delivery. This isn’t simply hosting traditional software in the cloud—it is a ground-up reimagining of IAM operations leveraging Kubernetes orchestration, microservices architecture, and cloud-native design principles. 

Why EOC Is Different from Traditional Cloud Hosting

Every EOC deployment provides customers with their own private, isolated cloud environment built on Kubernetes foundations. This cloud-native, container-based approach delivers four strategic advantages that traditional IAM deployments simply cannot match. 

Agility through microservices architecture

Each component of the IAM stack operates as an independent service that can be updated, scaled, or modified without affecting other system elements. This eliminates the risk of monolithic upgrades that have historically plagued enterprise IAM deployments and enables continuous delivery of new features and security patches.

Resilience through Kubernetes orchestration

The EOC leverages Kubernetes’ self-healing capabilities, automatically detecting and recovering from failures at the container, pod, and node levels. This means your IAM infrastructure maintains availability even when individual components experience issues, providing the operational resilience that modern enterprises demand.

Automation through cloud-native tooling

Manual configuration and deployment processes are replaced by automated workflows that provision, configure, and maintain IAM environments according to defined policies. This reduces human error, accelerates deployment cycles, and ensures consistent security posture across all environments.

Real-time observability through integrated monitoring

The EOC provides comprehensive visibility into system health, performance metrics, and security events through cloud-native observability tools that integrate seamlessly with existing enterprise monitoring infrastructure.

Key Takeaway: Cloud-native IAM replaces static deployments with flexible, self-healing, continuously observable environments.
Real-time Insights: AI-Powered Operations Management 

The EOC’s cloud-native architecture enables sophisticated AI-driven operations management that goes far beyond traditional monitoring approaches. Our platform continuously analyzes metrics including CPU utilization, memory consumption, network traffic patterns, and application response times across your Kubernetes-based IAM infrastructure. 

How AI Can Detect and Resolve Issues Automatically

When our AI detects anomalous patterns—such as unexpected spikes in authentication requests, unusual network traffic flows, or resource consumption trends that indicate potential security threats—it doesn’t just alert operators. The system automatically triggers remediation actions, such as scaling pod replicas to handle increased load, reallocating resources to maintain performance, or isolating potentially compromised components while maintaining overall system availability. 
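Conceptually, that remediation logic reduces to a check that scales capacity instead of merely paging an operator. The sketch below is a generic illustration with hypothetical metric sources and thresholds, not the EOC's internals; in practice the triggers would come from learned baselines rather than fixed numbers.

```typescript
// Generic sketch of threshold-based auto-remediation. The metrics client
// and scaler are hypothetical stand-ins, not Radiant Logic's EOC.

interface Metrics { authRequestsPerSec: number; cpuPct: number }

async function remediate(
  getMetrics: () => Promise<Metrics>,
  scaleReplicas: (delta: number) => Promise<void>
) {
  const m = await getMetrics();
  // Scale out on load spikes instead of merely alerting an operator.
  if (m.authRequestsPerSec > 5_000 || m.cpuPct > 80) {
    await scaleReplicas(+2);
  } else if (m.authRequestsPerSec < 500 && m.cpuPct < 20) {
    await scaleReplicas(-1); // reclaim resources once the spike subsides
  }
}
```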

This proactive approach to operations management represents a fundamental shift from reactive problem-solving to predictive optimization. Instead of waiting for issues to impact users, the EOC identifies and addresses potential problems before they affect service delivery. 

Unified Management: Purpose-Built for Enterprise Operations 

The EOC consolidates all aspects of IAM operations management into a single, intuitive interface designed specifically for enterprise security and IT teams. Our dashboards provide real-time visibility into system health, performance trends, and security posture across your entire IAM infrastructure. 

Streamlining Everyday IAM Operations Through One Interface

Critical operations such as application version management, automated backup orchestration, and security policy enforcement are streamlined through purpose-built workflows that integrate naturally with existing enterprise tools. The platform’s responsive design ensures full functionality whether accessed from desktop workstations or mobile devices, enabling operations teams to maintain visibility and control regardless of location. 

Because the EOC is built specifically for our SaaS offering, it includes deep integration with Radiant Logic’s IAM capabilities while maintaining compatibility with your existing identity, monitoring, logging, and security infrastructure. This ensures seamless operations without requiring wholesale replacement of existing tooling. 

Future-Ready: Adaptive Security and Compliance 

The EOC’s cloud-native foundation enables adaptive security capabilities that automatically adjust protection levels based on real-time risk assessment. Our compliance management tools leverage automation to maintain regulatory adherence across dynamic, distributed environments, reducing the manual overhead traditionally associated with compliance reporting and audit preparation. 

As enterprises continue their cloud transformation journey, the EOC evolves alongside changing requirements, leveraging Kubernetes’ extensibility and our continuous delivery capabilities to introduce new features and capabilities without disrupting ongoing operations. 

Transform Your IAM Operations 

By delivering cloud-native IAM infrastructure through our SaaS platform, we are helping enterprises achieve the agility, resilience, and security required to compete in the cloud era. 

Ready to see how to transform your identity and access management operations? Contact Radiant Logic for a demo and discover how our cloud-native SaaS innovation can accelerate your organization’s digital transformation journey. 

The post Rethinking Enterprise IAM Deployments with Radiant Logic's Cloud-Native SaaS Innovation appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


liminal (was OWI)

Returns Abuse in E-Commerce | Link Index Report

The post Returns Abuse in E-Commerce | Link Index Report appeared first on Liminal.co.

Recognito Vision

How to Protect Yourself from Identity Theft Using Trusted Biometric Solutions


In today’s connected world, your identity is more than your name or password. It’s your access key to everything from your bank to your email to your online shopping carts. But while technology has made life easier, it has also opened new doors for identity theft.

Fortunately, trusted biometric solutions are here to close those doors. These systems don’t just protect your data. They protect you using your unique traits to make identity theft nearly impossible.

 

The Growing Problem of Identity Theft

Identity theft isn’t a problem for tomorrow. It’s happening right now. According to cybersecurity analysts, global cases of online identity theft have jumped by more than 35% in just a year. Hackers now use deepfakes, AI-generated profiles, and synthetic data to impersonate real people.

Once your information is stolen, trying to recover it can be like chasing a ghost.

The most common forms of identity fraud include:

Financial theft: Stealing banking or credit details for unauthorized use

Medical identity theft: Using stolen identities for treatment or prescriptions

Synthetic identities: Creating fake people from pieces of real data

Social or digital impersonation: Cloning accounts to scam others

It’s not just about losing money. Victims spend months repairing their reputation, accounts, and credit. The best way to win this fight is to stop it before it starts, and biometric identity theft protection does exactly that.

 

Why Biometrics Are the Future of Identity Theft Protection

Biometrics use your unique physical and behavioral features, like your face, fingerprint, or voice, to verify your identity. Unlike passwords or PINs, they can’t be stolen, guessed, or shared.

Modern systems powered by AI are incredibly accurate. The NIST Face Recognition Vendor Test reports that advanced facial recognition models reach over 99% accuracy. That means they can verify you faster and more securely than traditional login methods.

Biometric security isn’t just the future of identity theft protection services. It’s becoming the standard for how we protect everything we value online.

 

How Biometric Identity Monitoring Services Work

Traditional identity theft monitoring only tells you something went wrong after it happens. But biometric protection acts before any damage occurs. It’s active, precise, and nearly foolproof.

Here’s how it works step by step:

1. Capture

The system starts by securely capturing your biometric data, such as a face scan. It’s quick, natural, and effortless. This becomes your digital signature, a personal identity key that no one else can copy.

2. Encryption

Your biometric data is instantly encrypted. Instead of storing your actual face or fingerprint, it’s turned into coded data that even a hacker couldn’t understand. This is where real identity theft prevention begins.

3. Matching

Whenever you try to log in or verify your identity, the system compares your live scan with your stored encrypted data. If it matches, access is granted. If it doesn’t, the system blocks entry and triggers identity fraud detection to check for suspicious behavior.

4. Alert

If the system spots something unusual, it alerts you immediately or locks down access. This rapid response stops identity fraud before any damage is done.
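In embedding terms, the matching step above typically compares feature vectors with a similarity score against a threshold. The sketch below shows the common cosine-similarity approach; the threshold and vectors are illustrative, not Recognito's actual parameters.

```typescript
// Common pattern for biometric matching: compare a live embedding against
// the enrolled one with cosine similarity. Threshold is illustrative.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const THRESHOLD = 0.6; // tuned per deployment in practice
const match = (live: number[], enrolled: number[]) =>
  cosineSimilarity(live, enrolled) >= THRESHOLD;

console.log(match([0.1, 0.9, 0.4], [0.12, 0.88, 0.41])); // true: near-parallel vectors
```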

You can see how this works by trying Recognito’s Face Biometric Playground. It’s a fun, interactive way to see how biometric verification distinguishes real people from imposters in real time.

 

The Best Identity Theft Protection Uses Biometrics

The best identity theft protection doesn’t wait for alerts. It stops fraud before it starts. That’s what biometrics do so well: they make your physical presence part of the security process.

Modern systems use:

Facial recognition to instantly confirm identity

Liveness detection to ensure it’s a real person, not a photo or deepfake

Behavioral biometrics to monitor how users type or move

Voice recognition for call-based verification

Businesses can integrate these tools using Recognito’s Face Recognition SDK and Face Liveness Detection SDK. Together, they form the core of intelligent identity monitoring services that protect users from digital fraud without adding friction.

 

Real-Life Examples of Biometric Identity Fraud Prevention

 

1. Banking and Fintech

A global bank implemented facial verification to confirm customer logins. Within months, they prevented hundreds of fraudulent account openings. Fraudsters tried using edited selfies, but the liveness detection caught every fake.

2. E-commerce

Online retailers now use face recognition at checkout to confirm identity. Even if a hacker has your card details, they can’t mimic your live face or expressions.

3. Healthcare

Hospitals are starting to use biometrics for patient verification. This prevents criminals from using stolen identities for prescriptions or insurance fraud.

These are real examples of identity fraud protection at work. It’s fast, accurate, and much harder for scammers to outsmart.

 

Compliance and Data Security Come First

The rise of biometrics comes with responsibility. Ethical systems never store your photo or raw data. Instead, they keep encrypted templates that can’t be reverse-engineered.

This approach complies with the GDPR and other global privacy standards. It also promotes transparency, with open programs like FRVT 1:1 and community-driven research such as Recognito Vision’s GitHub. These efforts ensure fairness, security, and accountability across the biometric industry.

 

How Biometrics Stop Online Identity Theft

Online identity theft has become one of the fastest-growing cybercrimes in the world. Phishing scams, deepfakes, and password breaches make it easy for hackers to impersonate you online.

Biometric technology makes that nearly impossible. Even if criminals get your password, they can’t fake your face, voice, or live presence. AI-powered identity theft prevention systems recognize you using micro-expressions, natural movement, and behavioral patterns.

It’s no wonder that industries like banking, insurance, and remote onboarding are rapidly adopting these systems. They offer the perfect blend of convenience and unbeatable security.

 

Traditional vs Biometric Identity Protection

 

Feature | Traditional Protection | Biometric Protection
Verification | Based on what you know (passwords, PINs) | Based on who you are
Speed | Slower, manual authentication | Instant, automated
Accuracy | Prone to errors or guessing | Over 99% accurate
Fraud Prevention | Reactive, after breaches | Proactive, before breaches
User Experience | Complex and time-consuming | Seamless and secure

If traditional methods are locks, biometrics are smart vaults that open only for their rightful owner.

 

The Future of Identity Theft Protection Services

The next generation of identity theft protection services will utilize a combination of AI, blockchain, and multi-biometric authentication for comprehensive digital security. Imagine verifying yourself anywhere in seconds, without sharing sensitive personal data.

Future systems will likely combine:

Face recognition for instant authentication

Voice and gesture biometrics for multi-layered security

Blockchain-backed identity to make personal data tamper-resistant

Regulators and innovators are already working together to ensure these systems stay ethical, inclusive, and bias-free. The goal is simple: a safer, more personal internet for everyone.

 

Staying Ahead with Recognito

Ultimately, identity theft protection is about trust. Biometrics provides that trust by using something only you have.

If you want to explore how biometric security can protect you or your business, learn how Recognito helps organizations secure users through advanced facial recognition and liveness technology, keeping identities safe while making the user experience simple.

Because in the digital world, there’s only one you, and Recognito makes sure it stays that way.

 

Frequently Asked Questions

 

1. How does biometric technology prevent identity theft?

Biometric technology uses your unique traits, like your face or voice, to verify your identity. It stops criminals from using stolen passwords or fake profiles, providing stronger identity theft protection than traditional methods.

 

2. Are biometric identity monitoring services secure?

Yes. Identity monitoring services that use biometrics encrypt your data, so your face or fingerprint is never stored as an image. This makes them safe, private, and nearly impossible for hackers to exploit.

 

3. What is the best way to protect yourself from online identity theft?

The best identity theft protection combines biometric verification with secure passwords and regular monitoring. Using facial recognition and liveness detection makes it much harder for cybercriminals to impersonate you online.

 

4. Can biometrics detect identity fraud in real time?

Yes. Modern identity fraud detection systems can instantly recognize fake attempts using AI and liveness checks. They verify real human presence and block fraud before any damage occurs.


Radiant Logic

Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity


The 2025 Gartner Hype Cycle for Digital Identity talks about the growing need for standardization in identity management—especially as organizations navigate fragmented directories, cloud sprawl, and increasingly complex hybrid environments. Among the mentioned technologies, SCIM (System for Cross-domain Identity Management) stands out as a foundational protocol for modern, scalable identity lifecycle management. 

Radiant Logic is proud to be recognized in this report. Our platform’s robust SCIMv2 support positions RadiantOne as a key enabler of identity automation, built on open standards and enterprise-proven architecture. 

Why Standardized Identity Management Matters 

SCIM was introduced to replace earlier models like SPML, offering a RESTful, schema-driven protocol to streamline identity resource management across systems. It defines a consistent structure and a set of operations for creating, reading, updating, and deleting (CRUD) identity resources such as User and Group. 

Today, SCIM is broadly adopted by SaaS and IAM platforms alike. It reduces manual effort, eliminates brittle custom integrations, and strengthens governance and compliance through standardized lifecycle operations. 

Without SCIM—or a consistent identity abstraction layer behind it—organizations are forced to manage identities with ad hoc connectors, divergent schemas, and fragile provisioning scripts. Gartner rightly identifies SCIM as essential to achieving identity governance at scale, enabling consistent policy enforcement and lowering operational risk. 
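For the curious, a minimal SCIM v2 create-user call looks like the sketch below, per RFC 7643/7644. The endpoint and token are placeholders; the payload shape is the standard SCIM User resource, the same resource type RadiantOne exposes or consumes when acting as a SCIM server or client.

```typescript
// Minimal SCIM v2 create-user call (RFC 7643/7644). The base URL and token
// are placeholders; the body is the standard SCIM core User schema.

const SCIM_BASE = "https://scim.example.com/v2"; // placeholder server

async function createUser(token: string) {
  const res = await fetch(`${SCIM_BASE}/Users`, {
    method: "POST",
    headers: {
      "Content-Type": "application/scim+json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      schemas: ["urn:ietf:params:scim:schemas:core:2.0:User"],
      userName: "jdoe@example.com",
      name: { givenName: "Jane", familyName: "Doe" },
      emails: [{ value: "jdoe@example.com", primary: true }],
      active: true,
    }),
  });
  return res.json(); // server returns the created resource with its "id"
}
```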

Radiant Logic’s SCIM Implementation 

RadiantOne delivers full SCIMv2 support, allowing organizations to extend standardized provisioning across their entire environment—cloud, on-prem, and hybrid—without rearchitecting existing infrastructure. 

As both a SCIM client and server, RadiantOne can expose enriched identity views to downstream applications or ingest SCIM-based data from external sources for correlation and normalization. This bidirectional flexibility eliminates the need for custom connectors and hardcoded integrations. 

At the core is RadiantOne’s semantic identity layer, which unifies identity data across sources, ensures consistency, and drives intelligent automation. This data foundation supports not only SCIM-based lifecycle management, but also Zero Trust access control, governance workflows, and AI-driven analytics. 

Where RadiantOne and SCIM Deliver Real Value 

Here are six practical use cases where Radiant Logic’s SCIM support drives immediate impact: 

1. Accelerated Onboarding with Trusted Identity Data: RadiantOne consolidates authoritative sources—HR, AD, ERP, SaaS—into a single, richly structured identity record. That record is exposed over SCIM v2 (or any preferred connector) to the customer’s existing join-move-leave engines—IGA, ITSM, or custom workflows—so they grant birthright access through the tools already built for approvals and fulfillment. Offering complete and accurate provisioning with minimal integration effort, RadiantOne stays focused on delivering clean, governed identity data rather than duplicating workflow logic.
2. From SSO to Lifecycle Management: SSO controls access, but SCIM controls who gets access. RadiantOne aggregates and enriches identity data from sources like Active Directory, LDAP, and HR systems, making it available to SCIM-enabled applications. Provisioning decisions are based on accurate, policy-aligned identity, ensuring access is granted appropriately from the start. This closes the gap between authentication and authorization, reducing overhead and aligning with Zero Trust principles.
3. Simplifying Application Migrations: RadiantOne delivers a clean, normalized identity record and, through its enriched SCIM v2 interface, maps every attribute name and value to the exact schema and format the target expects. This built-in translation removes custom scripts, connector rewrites, and brittle middleware, so admins can load thousands of users into new SaaS platforms quickly during M&A, re-platforming, or app consolidation.
4. Real-Time Updates as Identity Changes: RadiantOne keeps identity data current as roles change or users depart. Apps simply ask RadiantOne via SCIM v2 for the latest record—no custom sync jobs or code—so they can enforce least privilege and de-provision on time while their own workflows remain untouched.
5. Precision Access for Governance and PAM: Provisioning isn’t just account creation—it’s about controlled access. RadiantOne adds business context to identity data, such as org structure, clearance, and location, so SCIM can support fine-grained entitlements. This aligns with PAM policies, improves audit readiness, and enhances IGA and analytics accuracy.
6. Keeping Workflows and Business Logic in Sync: SCIM also supports operational workflows. RadiantOne keeps identity attributes—like manager relationships, email, or job status—accurate across systems. This ensures approval chains, directories, and collaboration tools function correctly without manual updates.

Conclusion

Radiant Logic’s SCIM implementation is already powering identity automation in some of the world’s most complex IT environments, proving its value in delivering standards-based, high-integrity identity infrastructure. Book a demo to explore how Radiant Logic’s SCIM-enabled identity platform can transform your organization’s identity management practices, drive operational excellence, and secure your digital identity future. 

  

Disclaimers: 

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

GARTNER is a registered trademark and service mark of Gartner and Hype Cycle is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. 

 

 

The post Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


When IAM Technical Debt Becomes a Security Crisis — And How to Reverse It


There is a growing problem lurking in your identity infrastructure—one that doesn’t trigger alerts, isn’t flagged by vulnerability scanners, and yet quietly compounds security vulnerabilities: technical debt in IAM. 

It is not just a side effect of legacy systems anymore. It is a direct result of the growing gap between rapid digital transformation and the brittle, aging identity plumbing beneath it. According to the 2025 Verizon Data Breach Investigations Report, stolen credentials were involved in 88% of web application attacks—reinforcing identity as the top threat vector. But now, Gartner adds another critical lens. 

In their June 2025 Gartner for Technical Professionals (GTP) research report, Reduce IAM Technical Debt [1], Gartner® analysts Nat Krishnan and Erik Wahlstrom warn that “technical debt weakens the agility of an IAM team and the effectiveness of organizational security controls.” In our opinion, their findings highlight the same five culprits we see in the field every day: siloed tools, outdated integrations, incomplete identity discovery, poor IAM hygiene and inconsistent application onboarding. 

When identity becomes fragmented, so does control—and without control, IAM loses its very purpose. 

What Is IAM Technical Debt, Really? 

To explain what technical debt is, think of it as the accumulated cost of shortcuts: ad-hoc integrations and workarounds, siloed tools, rushed deployments and postponed cleanup. It forms slowly, but the result is predictable. When left unchecked, it creates operational drag, governance blind spots, increased threat surface and catastrophic risk exposure.  

Here’s what drives it: 

- Custom and siloed IAM tools that don’t communicate
- Legacy and nonstandard apps still critical to operations but incompatible with modern identity governance
- Incomplete discovery of identities and entitlements
- Weak hygiene around least-privilege, access reviews and MFA
- Fragmented onboarding of apps and services into IAM systems

When identity becomes fragmented, so does control. And in today’s cloud-first, hybrid-everything reality, that is both inefficient and dangerous. 

From Sprawl to Strategy: Reclaiming Identity Control 

Fixing IAM technical debt isn’t about ripping and replacing—it’s about rethinking identity as a data problem and solving it with the right architecture.

Based on both industry research and hands-on field experience, the path forward includes four critical steps: 

1. Identify your silos: Map out identity sources—across AD forests, cloud apps, legacy tools, shadow IT—and expose where the cracks begin
2. Consolidate and virtualize: Aggregate fragmented data into a unified identity data lake. Use abstraction to simplify integration and reduce your connector footprint
3. Control identity sprawl: Build bridges, not walls—stitch together disparate identity records without replacing systems and bring order to the chaos
4. Orchestrate across the mess: Govern consistently across central and distributed environments, enabling context-rich enforcement no matter where access decisions happen

Why Radiant Logic?

Radiant Logic’s RadiantOne platform was built to solve this problem by unifying, enriching and activating identity data. 

RadiantOne virtualizes all identity sources into a single semantic layer—whether they come from AD, LDAP, Azure AD, Okta, SaaS applications or custom databases. It then brings real-time observability to the identity layer, enabling you to spot risky access patterns, automate entitlement cleanup and surface context-rich insights to stakeholders before an auditor or attacker finds the gap to exploit. 

With RadiantOne: 

- You turn fragmented identity data into a governable, observable asset
- You gain line-of-sight across humans, machines, and APIs
- You eliminate the root causes of IAM project failures and identity-related incidents

Final Thought: Identity Debt is Not Just IT’s Problem 

IAM technical debt isn’t just a nuisance—it’s a strategic liability. It stalls digital transformation and cloud projects, burdens compliance and weakens your security posture. But with the right foundation, it can be reversed. 

Ready to act? Schedule a demo of RadiantOne and start reducing your identity debt today. 

 

 

1: Gartner, Reduce IAM Technical Debt, ID G00798396, June 23, 2025, by Nat Krishnan and Erik Wahlstrom. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. 

 

The post When IAM Technical Debt Becomes a Security Crisis — And How to Reverse It appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


auth0

The Three Laws of AI Security

What principles guide AI security? We adapt Asimov's Three Laws for modern AI agents to solve core LLM security challenges, from data control to tool access.

Thursday, 06. November 2025

Indicio

Indicio secures investment from NEC X, accelerating a new era of user-controlled digital identity

The post Indicio secures investment from NEC X, accelerating a new era of user-controlled digital identity appeared first on Indicio.

Spruce Systems

Translating Privacy Law into Digital Architecture

Explore how statutory privacy protections become real through code and technical standards.

As states modernize public services, privacy must move from principle to practice. Laws define what to protect, but it’s technology that enforces how. Embedding statutory safeguards, such as unlinkability, data minimization, and selective disclosure, directly into the architecture is critical to making privacy protection real and reliable. In this blog post, we explore recommendations around translating privacy law into digital architecture.

Compliance by Design

Compliance with a state’s privacy statutes should be embedded directly into the design and governance of state digital identity, ensuring that protections are enforced through both technology and policy.

One approach is to use personal data licenses, where every credential presentation carries machine-readable terms that specify how the data may be used, for how long it may be retained, and whether it may be shared. Wallets can enforce these licenses automatically, creating automated privacy compliance that is consistent with statutory requirements and reducing reliance on after-the-fact enforcement.

Establishing Reasonable Disclosure Norms

States could also establish the principle of reasonable disclosure, defining contextual norms for when certain attributes may be shared. For example, in a bar setting, presenting “over 21” is a reasonable disclosure, but if the bar requests an email address, that exceeds the scope of the transaction and must be flagged or presented differently. 

An insurance company might ask for someone’s basic history, but additionally requesting genetic indicators of future disease may be considered unlawful or predatory. Embedding these rules into wallet UX and verifier obligations ensures that disclosures remain consistent with a state’s privacy laws while still supporting legitimate use cases.

Governance and Decentralized Enforcement

It is a very difficult but important task to determine the proper governance around agreeing upon “reasonable disclosure” across many different industry use cases. We believe that one entity would not be able to make good judgements across all industry verticals, and so industry engagement is critical for this to be successful. 

Further, it remains unclear if a government agency is the best entity to coordinate these efforts, versus non-profits, cooperatives, or even private companies specializing in digital reputation management. This is a hard and open problem in decentralized identity, but necessary to create the benefits while managing the risks of increased user control.

Balancing User Autonomy and System Safety

It’s our opinion that this should operate in a decentralized manner, with wallets mediating requests and issuers not serving as intermediaries for every transaction. We believe that enforcement of these reasonable disclosure frameworks should be composable across many different sources and list maintainers, and ultimately configured at the wallet level. 

We should “push decision-making towards the edges” as much as possible, while ensuring reasonable defaults which provide an acceptable trade-off between user choice and safety.

Incentivizing Privacy by Design

To further protect residents, states could consider imposing an insurance requirement on verifiers or entities that retain personally identifiable information (PII). This creates a financial incentive to minimize data collection and retention, while ensuring that residents are protected if breaches occur. 

States could also consider strictly limiting the request criteria under which PII may be transmitted, since such requests result in full identification. Finally, it would also be possible for wallet providers to align on a privacy-preserving fraud signal mechanism, where relying parties overcollecting data are detected via anonymized aggregated reporting so that investigations and enforcement can take suitable action.

Putting Privacy Law Into Action

Translating privacy law into digital architecture is both a technical and civic responsibility. It demonstrates how statutory principles, such as unlinkability, minimal disclosure, and individual control, can be implemented in real systems. When wallets enforce policy through personal data licenses and reasonable disclosure frameworks, compliance becomes built-in and verifiable.

By embedding privacy into the core architecture, governments and institutions can establish a new standard for privacy-by-design governance that protects individuals and fosters confidence in digital services. SpruceID enables governments and organizations to turn privacy principles into secure, trusted digital systems. To learn more, contact our team.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.


Elliptic

Data rich, but insight poor: How government agencies can turn information overload into actionable intelligence


Government agencies have more digital asset data than ever before. The rapid growth in crypto adoption over recent years means that agencies now encounter digital assets across a much broader range of their work, from traditional fraud investigations that involve blockchain transactions to large-scale seizures containing vast amounts of crypto intelligence.


IDnow

Digital trust is the new differentiator: How congstar balances security and seamless onboarding.

In a competitive telecom market where customers expect instant onboarding, congstar must balance speed with security and compliance.

We sat down with Christopher Krause, Senior Manager Customer Finance at congstar, to discuss how the company’s partnership with IDnow enables seamless eSIM activations and postpaid sign-ups in seconds through AI-powered identity verification, why digital trust has become as important as price or network speed, and why telcos are uniquely positioned to become anchors of digital identity in the era of EU Digital Wallets and decentralized credentials.

congstar is known for fair, transparent tariffs and a simple, digital customer experience. What role does identity verification play in keeping the onboarding process both secure and smooth? 

We put convenience and security on an equal pedestal. When a customer signs up for a postpaid plan, we offer a range of identification options, especially automated identity verifications so that only real, eligible customers get through – this protects against fraud and satisfies legal requirements in Germany.  

When customers sign up for postpaid plans or activate eSIMs, how do you ensure the process feels instant and seamless, while still meeting all regulatory and fraud-prevention requirements? 

For postpaid or eSIM activation, speed is king, but we never cut corners on compliance. A perfect example is our sub-brand fraenk, our fully digital mobile phone app tariff. Here we rely on fully digital KYC tools, in our case IDnow’s automated identity verification solution, that scan a photo ID and run a quick selfie/liveness check. These AI-powered checks turn around in seconds, replacing old-school video calls or lengthy paperwork. As a result, signing up feels almost instant to the customer – yet it still meets all legal requirements.  

Another good example is our congstar website, where we have incorporated the verification step into the checkout process. By doing the identity check with a fast, in-browser process (no extra app needed) and clearly explaining each step, customers hardly feel any friction.  

In short, we ensure our customers are safe, while keeping the process simple and transparent – a key part of our “fair and worry-free” brand promise. 

You’ve been working with IDnow for identity verification. How does this collaboration support congstar’s goals for digital efficiency, compliance and customer satisfaction? 

Our partnership with IDnow is a cornerstone of this approach. IDnow’s automated identity verification solution is AI-powered and designed exactly for telco onboarding. It lets us verify identities fully automatically using just a smartphone. The benefit is twofold: it accelerates the process for users, and it guarantees compliance with strict regulations. 

Thanks to that, we can scale up our digital sales without bottlenecks – maintaining our light, digital touch while staying on the right side of the law. In practice, this means high customer satisfaction because sign-ups are almost instant, and our operations save time – all contributing to our goal of a smooth, yet secure and digital experience. 

IDnow’s automated solution is AI-powered and designed exactly for telco onboarding. It lets us verify identities fully automatically using just a smartphone. The benefit is twofold: it accelerates the process for users, and it guarantees compliance with strict regulations.

Christopher Krause, Senior Manager Customer Finance at congstar
With growing eSIM adoption and automated onboarding, where do you see the biggest opportunities or challenges for congstar in the next few years? 

Looking ahead, the rise of eSIMs and automated onboarding is a big opportunity for us. Analysts expect eSIMs to boom soon. For us, this means we can offer even more flexible, instant activations. It also cuts costs – no more plastic SIM cards or waiting for mail delivery. The flip side: as onboarding goes 100% digital, we need to stay vigilant against evolving fraud, like SIM swap attacks or deepfakes. We’re preparing by continuously improving our automated checks and monitoring tools. Overall, the shift is positive – it lets us focus on the best customer experience and leaves us more bandwidth to innovate on products and services. The main challenge is simply staying one step ahead of bad actors as we grow digitally. 

Do you see digital trust as a new differentiator in the telecom market, similar to how speed or price once defined competition? 

Absolutely – we see digital trust becoming a real differentiator. In a mature market like ours, price and speed are table stakes. What sets a brand apart now is how much customers trust it with their heart, their data and security. Trust wins loyalty: research shows that trusted telcos gain more market share, foster long-term customers and are recommended more than others. Our brand is built on transparency and fairness, so emphasizing trust feels natural for us.  

When a customer goes through an identity check, what do you want them to feel – safety, simplicity, control?  

In the identity check itself, we want customers to feel a sense of calm and confidence. They should feel that the process is simple, as we guide them clearly through each step, and respectful, as they decide what information to share. Altogether, we want people to walk away thinking: “That was easy, and I know my account is protected.” 

How does your verification journey contribute to that emotional experience? 

Our verification flow is designed to build those positive feelings. For example, we use the in-app browser for IDnow’s automated identity verification solution in our fraenk app, which keeps the process friendly and fast. The user sees clear instructions and immediate feedback, so they never feel lost. Every step is optimized for transparency: we show progress bars, explain why we need each check, and never ask for data twice. The result is a consistent, reassuring experience that strengthens the feeling of security and control. 

What sets a brand apart now is how much customers trust it with their heart, their data and security.

Christopher Krause, Senior Manager Customer Finance at congstar
How do you prepare for upcoming regulations like eIDAS 2.0 and the EUDI Wallet, and what opportunities do these create for telcos?  

We’re monitoring the developments around eIDAS 2.0 and the EU Digital Wallet, and we see them as enablers rather than headaches. As the regulations come into force, we review them with our identification partners and examine how we can further improve identification with the new options available. For telcos, the opportunity could be big: according to experts, eIDAS-compliant credentials mean we can verify any EU customer’s identity seamlessly and with reduced risk of fraud.  

In a world of digital wallets and decentralised identity, how do you see the telco’s role in verifying and protecting digital identity?  

Telcos have a vital role to play. We already have something others don’t have: a verified link between a real person and a SIM card. That makes us natural authorities for certain credentials – for instance, confirming that a person is a current mobile subscriber, or verifying age to enable services. Industry analysts note that telcos are well-positioned as “mobile-SIM-anchored” issuers of digital credentials.  

Telcos have something others don’t have: a verified link between a real person and a SIM card. That makes us natural authorities for verifying credentials.

Christopher Krause, Senior Manager Customer Finance at congstar
How important is orchestration, i.e. connecting verification, fraud and user experience, to achieving a scalable, future-proof onboarding process?  

Orchestration is absolutely critical for scaling securely. We can’t treat identity checks, fraud detection and user experience as separate silos. Instead, we tie them together. For example, if our system flags an order as high-risk, it immediately triggers additional steps. Conversely, if everything looks legitimate, the user sails through. This end-to-end coordination (identification, device risk profiling and behavior analytics) is what lets us grow quickly without ballooning costs.  

How do data-sharing initiatives or consortium-based approaches help strengthen fraud prevention across the telecom sector?  

Industry-wide collaboration is a force-multiplier against fraud because fraudsters don’t respect company boundaries. For instance, telecoms worldwide have started exchanging fraud intelligence through platforms like the Deutsche Fraud Forum. In addition, regular and transparent communication with government authorities such as the BNetzA and LKA is essential to set uniform industry standards and combat potential fraud equally. 

How do you see AI helping to detect fraud in real time without adding friction for genuine users?  

Finally, AI is becoming essential to catch clever fraud without inconveniencing users. We use AI and machine learning models that watch behind the scenes. The smart part is that genuine customers hardly notice: the system learns normal behavior and only steps in (with an extra check or block) when something truly stands out. This adaptive learning means false alarms drop over time, reducing friction for legitimate users. We also benefit by deploying solutions like IDnow’s automated identity verification solution, which already uses AI trained on millions of data points to verify identities. In network operations, we complement that with risk scores on each transaction. The net effect is real-time fraud defense that locks out attackers but lets loyal customers pass through hassle-free.  

About congstar: 

Founded in 2007 as a subsidiary of Telekom Deutschland GmbH, congstar offers flexible, transparent, and fair mobile and DSL products tailored for digital-savvy customers. Known for its customer-first approach and innovative app-based brand fraenk, congstar continues to redefine simplicity and security in Germany’s telecom market. 

Interested in more from our customer interviews? Check out: Docusign’s Managing Director DACH, Kai Stuebane, sat down with us to discuss how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape. DGGS’s CEO, Florian Werner, talked to us about how strict regulatory requirements are shaping online gaming in Germany and what it’s like to be the first brand to receive a national slot licence.

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


ComplyCube

What to Look for in a Biometrics Identity Verification System

This guide explains how biometrics identity verification systems use facial recognition, liveness detection, and AI to stop fraud, meet global compliance standards, and streamline secure digital onboarding at scale. The post What to Look for in a Biometrics Identity Verification System first appeared on ComplyCube.



auth0

Trusting AI Output? Why Improper Output Handling is the New XSS

We know not to trust user input, but what about AI output? Learn how improper output handling leads to XSS, SQL injection, and RCE, and how to prevent it.

BlueSky

The World Series Was Electric — So Was Bluesky

“How can you not be romantic about baseball?” — Moneyball 2011

As blue confetti settles in Los Angeles after an historic World Series win, we close the chapter on another electric sports event on Bluesky. It’s during these cultural flashpoints when Bluesky is at its best – when stadium crowds are going wild, you can feel that same excitement flooding into posts.

Numbers can help describe the scale of that feeling. There were approximately 600,000 baseball posts made during the World Series, based on certain key terms. (note: we’re pretty sure that this number is an undercount, as it’s hard for us to accurately attribute to baseball the flood of “oh shit” posts that came in during Game 7)

At least 3% of all posts made on November 1 (Game 7) were about baseball. The final game also resulted in a +30% bump in traffic to Bluesky, with engagement spikes up to +66% from previous days.

We loved seeing World Series weekend in action, but it wasn’t a total surprise. In the last three months, sports generated the third-highest number of unique posts of any topic. Sports posters are also seeing greater engagement rates from posting on Bluesky than on legacy social media apps - up to ten times better.

But in a world of analytics, it’s easy to lose the love of the game. In that regard, we’re fortunate to have a roster of posters who bring the intangibles. They genuinely care about sports. Less hate, more substance and celebration.

yep, this is the baseball place now

[image or embed]

— Keith Law (@keithlaw.bsky.social) November 1, 2025 at 10:28 PM

That was the greatest baseball game I’ve ever seen.

— Molly Knight (@mollyknight.bsky.social) November 1, 2025 at 9:19 PM

If this World Series proved anything, it’s that big moments are more enjoyable when they unfold in real time, with real people. Sports has the juice on Bluesky — and every high-stakes game is bringing more fans into the conversation.


FastID

VCL Support for Parameters in Custom Subs

Learn about Fastly's VCL syntax updates, including return values and parameters for custom subroutines, enabling better code reuse and abstraction.

Wednesday, 05. November 2025

Mythics

Mythics, LLC Appoints Sundar Padmanaban as Executive Vice President, Consulting Sales & Solution Engineering, to Drive Transformative Growth

The post Mythics, LLC Appoints Sundar Padmanaban as Executive Vice President, Consulting Sales & Solution Engineering, to Drive Transformative Growth appeared first on Mythics.

FastID

Building Scalable Waiting Rooms with Fastly Compute

Control website traffic and prevent server overload with Fastly Compute waiting rooms. Learn how to build scalable, customizable queues for high-demand events.

Tuesday, 04. November 2025

Thales Group

Copernicus Sentinel-1D Earth observation satellite successfully launched

04 Nov 2025

The Sentinel-1 family is now complete.

Sentinel-1D is the last satellite in the first constellation of the Copernicus program, supplying vital radar imagery for understanding climate change and protecting our planet.

Kourou, November 4, 2025 – The Copernicus Sentinel-1D Earth observation satellite, built by prime contractor Thales Alenia Space, the joint company between Thales (67%) and Leonardo (33%), was successfully launched by Arianespace with an Ariane 6 rocket from Europe’s spaceport, in French Guiana.

Sentinel-1D is part of Copernicus, the Earth Observation component of the European Union’s Space Programme. This program is managed by the European Commission and co-funded by the European Union and the European Space Agency.

With Sentinel-1D now successfully launched, the Sentinel-1 family is complete. As the final satellite in the first Copernicus constellation, Sentinel-1D will ensure continuity and enhance the missions in orbit, extending system operations for at least the next seven years and beyond.

Sentinel-1 © ESA

The satellite captures images of Earth’s surface — day and night, in all weathers — for a wide range of applications to help protect our planet. This crucial data will be used to monitor landslides, earthquake zones, volcanic activity and variations in polar ice cover. It will also provide valuable insights for monitoring deforestation and the use of water resources, and for supporting emergency responders and search and rescue teams in the event of natural disasters.

Sentinel-1D, like Sentinel-1C, carries an Automatic Identification System (AIS) payload to enhance maritime safety — improving traffic management, preventing collisions and monitoring vessels in sensitive areas. It also introduces a world-first: a patented mechanism that separates the radar antenna from the spacecraft bus during end-of-life reentry, helping reduce orbital debris.

“I’m particularly proud of this successful launch — now the Sentinel-1 family is complete,” said Giampiero di Paolo, Deputy CEO and Senior Vice President Observation, Exploration and Navigation at Thales Alenia Space. “Our long-standing recognized expertise in developing radar-based Earth observation satellites is once again on orbit”.

“Over the years, our company has proven it has the capabilities required to meet this program’s technological challenges, fully aligned with Europe’s environmental policy goals and marking a new phase in our collaboration with the European Commission and ESA,” said Thales Alenia Space CEO Hervé Derrey.

Thales Alenia Space’s role

©Thales Alenia Space/Alban Pichon

As prime contractor for the Sentinel-1 mission on behalf of ESA, Thales Alenia Space is responsible for satellite design, development, integration and testing. Each Sentinel-1 satellite is built on the PRIMA spacecraft bus developed by Thales Alenia Space for the Italian Space Agency (ASI) and carries a C-band synthetic aperture radar (SAR) instrument developed by Airbus Defence & Space. This SAR instrument enables precise mapping at resolutions as fine as 5 meters and swath coverage out to 400 kilometers.

More about Sentinel 1

©Thales Alenia Space/Alban Pichon

The Sentinel-1 mission comprises two satellites in Sun-synchronous orbit operating in tandem to provide optimal global coverage with a 12-day repeat cycle. Their pre-tasking capability means data can be acquired consistently over long periods, which is essential for analyzing environmental trends. This data is made available to public authorities, businesses and citizens around the world on a free, full and open basis.

With a launch mass of 2,184 kilograms, Sentinel-1D will operate in low-Earth orbit at an altitude of 700 kilometers and has a design life of seven years.

Leonardo contributed to the development of the Sentinel-1D and 1C satellites by supplying the attitude sensors (Autonomous Star Tracker) and power units for the radar, ensuring continuous availability of images.

Telespazio Germany supported the launch of Sentinel-1D and is providing assistance during the launch and early operations phase (LEOP) with regard to mission operations, ground segment services and software coordination.

Data from the Sentinel-1D satellite will be collected by several European centres, including the Matera Space Centre operated by e-GEOS, a joint venture between Telespazio (80%) and the Italian Space Agency (20%). The Centre is part of the ESA Core Ground Segment within the Copernicus Programme.

About Copernicus

©Thales Alenia Space

Copernicus is the Earth observation component of the European Union Space Programme, monitoring our planet and its environment for the benefit of all Europeans. It delivers accurate, timely and accessible information to improve environmental management, address climate change and support civil security. As the world’s most advanced Earth observation system, Copernicus provides continuous, free and reliable data and services to public authorities, businesses and citizens worldwide.

Copernicus comprises 12 satellite families and a suite of monitoring networks — such as ground weather stations, ocean buoys and air-quality networks — to deliver robust integrated information and calibrate and validate satellite data.

The satellites are built for ESA by European prime contractors. A program of this scale helps Europe anticipate the impacts of climate change and take action to protect our planet.

The program is managed by the European Commission and funded by the EU, with additional contributions from ESA.

Thales Alenia Space, a key Copernicus partner

Thales Alenia Space is a major contributor to 11 of the Copernicus program’s 12 missions. Sentinel-1 monitors land and sea in all weathers, day and night, thanks to its radar capabilities. Sentinel-2 and -3 acquire high-resolution optical imagery over land and coastal waters. Sentinel-4 and -5 are dedicated to meteorology and climatology missions. Sentinel-6 monitors the planet’s oceans. As well as being prime contractor for the Sentinel-1 and -3 satellite families, Thales Alenia Space also supplied the Sentinel-2 image ground segment and helped build the imaging spectrometer on Sentinel-5P and the Poseidon-4 radar altimeter on Sentinel-6. In 2020, Thales Alenia Space was awarded five contracts for the six new Copernicus Expansion missions, as prime contractor for the CIMR, ROSE-L and CHIME satellites and supplier of the CRISTAL and CO2M mission payloads. These new satellites will measure human-induced atmospheric carbon dioxide, survey sea ice and snow cover, support new optimized services for sustainable farming and biodiversity, observe sea-surface temperature and salinity as well as sea ice density and strengthen land monitoring and emergency management services.

About Thales Alenia Space

Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental monitoring, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the Space Alliance, which offers a complete range of solutions including services. Thales Alenia Space posted consolidated revenues of €2.23 billion in 2024 and has more than 8,100 employees in 7 countries with 14 sites in Europe.

View PDF market_segment : Space copernicus-sentinel-1d-earth-observation-satellite-successfully-launched On

Elliptic

OFAC lists 53 crypto addresses of sanctioned North Korean Cheil Credit Bank


On November 4, 2025, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) listed over fifty crypto addresses belonging to the sanctioned North Korean bank Cheil Credit Bank for facilitating financial activity of North Korean cybercrime and espionage. Additionally, OFAC also sanctioned another North Korean financial institution and several bankers. Cheil Credit Bank was originally sanctioned by OFAC in 2017.


liminal (was OWI)

Introducing Scout — Turn Market Intelligence into GTM Action


Go-to-market teams today face a frustrating challenge: they know more about their market than ever before, but struggle to act on that intelligence fast enough. Teams have the data, but not the coordination. They know who’s buying, but not when or why. And they often watch opportunities slip simply because their systems and workflows can’t keep pace with how fast buyer behavior changes.

That’s why we built Scout, a new product module inside our Link platform designed to help teams move from insight to execution in real time. Scout connects live market, competitor, and buyer signals directly to the tools teams already use—sales, marketing, and planning—turning static intelligence into coordinated GTM action.

Announced during Money20/20 USA 2025, Scout helps organizations in fraud prevention, cybersecurity, risk, and trust-critical sectors close the gap between strategy and execution. Rather than relying on static reports or siloed data sources, teams can now build programs that evolve in lockstep with the market itself.

From signals to outcomes

Most GTM teams are drowning in signals but starving for clarity. Buyer demand shifts fast, contacts decay, and generic messaging rarely resonates. Scout addresses this directly by surfacing verified buyer signals, mapping decision-makers, and activating guided plays across the platforms teams already rely on—from outbound tools and CRMs to marketing automation and enablement systems.

“Organizations are drowning in signals but starving for clarity,” said Travis Jarae, CEO of Liminal. “With Scout, the intelligence our customers already trust inside Link becomes immediately actionable. We connect what teams know about the market to what they can actually do about it, right where they already work.”

Why Scout matters for go-to-market teams

Scout builds on the same proprietary intelligence graph that powers Link, continuously mapping relationships between vendors, buyers, and technologies to maintain a live, contextual view of the market. Predictive signals trigger guided playbooks that tell teams who to target, what to say, and which proof points will land. The result is faster, more precise go-to-market programs that stay aligned with real buyer intent and market movement.

“Liminal’s approach to data quality and context is unlike anything else in the industry,” said Matthew Thompson, CRO of Socure. “They help teams cut through the noise and make decisions based on what actually matters.”

Early users are already seeing measurable impact: 70–90% account coverage within target segments, roughly 97% automation of repetitive workflows, and 2× faster pipeline conversion compared with traditional outbound approaches. When Liminal deployed Scout across its own campaigns, the platform identified 570 target accounts and delivered personalized outreach to more than 2,400 verified contacts within minutes—demonstrating its ability to turn intelligence into coordinated execution.

How Scout works

Scout moves teams from intelligence to impact through four connected layers: Build, Align, Engage, and Scale.

Build

Scout gives teams the flexibility to start from scratch or plug into existing account strategies and data models. Whether you’re building new target lists or optimizing current ABM programs, Scout automatically unifies your market intelligence, contact data, and buyer signals into a single activation layer. With deep vertical context, teams can capture market movements as they happen, navigate complex GTM motions, and deploy enterprise-scale campaigns without manual lift.

Align

Scout compresses months of planning into automated Account Strategy Plans—dynamic activation blueprints that clarify market context, use-case alignment, key purchase criteria, and the campaign-aligned path to close. Each plan includes live buyer insights, prioritized talking points, and step-by-step execution guidance for every account.

Engage

No person is just one person. Scout gives teams deep context around who buyers are, what they need, and how to align value with their goals. Teams gain verified contacts, role-specific insights, and adaptable account strategies that drive meaningful engagement.

Scale

Every email, call, and inMail reflects the latest competitive, market, and regulatory intelligence. Scout helps reps highlight the strongest proof points, feature what motivates each prospect most, and handle objections with confidence—making every rep an expert in the moment that matters.

The new standard for GTM execution

Go-to-market intelligence is no longer about collecting data; it’s about activating it. Scout gives teams a unified intelligence layer that connects what’s happening in the market to what happens next in pipeline. The result is execution that’s coordinated, contextual, and always current.

See how Scout can help your team turn market intelligence into GTM action → Book a demo

The post Introducing Scout — Turn Market Intelligence into GTM Action appeared first on Liminal.co.


Elliptic

The two faces of AI in crypto: Threats, opportunities and what Elliptic is doing about it


As one dark web user selling a jailbroken AI tool aptly put it, “AI has two faces, just like humans.” At Elliptic, we are committed to helping the cryptoasset ecosystem detect both: Not only the crypto crime threats that AI may be exacerbating, but also the opportunities AI offers to upscale their detection and prevention.


Shyft Network

Shyft Veriscope Expands VASP Compliance Network with Endl Integration


The global stablecoin payment market is experiencing explosive growth, with businesses demanding infrastructure that combines seamless cross-border transactions with robust FATF Travel Rule compliance. Shyft Network, a leading blockchain trust protocol and compliance solution provider, has partnered with Endl, a stablecoin neobank and payment rail provider, to integrate Veriscope for regulatory compliance. This collaboration showcases how VASPs can enable secure, compliant digital finance for modern payment infrastructure while prioritizing user privacy and meeting global regulatory standards.

Building the Future of Compliant Payment Infrastructure

As the digital payments landscape evolves, Virtual Asset Service Providers (VASPs) need blockchain compliance tools that ensure regulatory adherence without adding friction to user experience. Veriscope leverages cryptographic proof technology to facilitate secure, privacy-preserving data exchanges, aligning with FATF Travel Rule requirements and AML compliance standards. By integrating Veriscope, Endl demonstrates how next-generation stablecoin payment platforms can achieve regulatory readiness seamlessly while maintaining operational efficiency.

Endl is a regulatory-compliant stablecoin neobank providing fiat and stablecoin payment rails, multicurrency accounts, and crypto on/off ramps designed for businesses and individuals seeking secure cross-border payment solutions. By integrating Veriscope for Travel Rule compliance, Endl strengthens its commitment to security and regulatory compliance, enabling users to seamlessly convert, manage, and transfer both fiat and cryptocurrencies while meeting global AML and KYC compliance standards.

The Power of Veriscope for Global Payment Platforms

The Shyft Network-Endl partnership highlights Veriscope’s ability to transform crypto compliance and blockchain regulatory infrastructure for payment platforms:

- Seamless FATF Travel Rule Compliance: Automated cryptographic proof exchanges ensure FATF Travel Rule compliance for VASPs without disrupting user workflows or transaction speed
- Privacy-First AML Verification: User Signing technology enables secure KYC data verification and AML compliance while protecting customer privacy through blockchain encryption
- Global Regulatory Readiness for VASPs: Position Endl for expansion into regulated crypto markets worldwide with built-in compliance infrastructure that meets international standards
- Enhanced Trust in Digital Asset Transactions: Demonstrate commitment to security and regulatory standards, building confidence with both users and institutional partners in the stablecoin payment ecosystem

Zach Justein, co-founder of Veriscope, emphasized the integration’s impact on the crypto compliance landscape:

“The future of digital payments lies in seamless integration between fiat and digital assets with robust regulatory compliance. Veriscope’s integration with Endl reflects Shyft Network’s commitment to enabling compliant, privacy-preserving blockchain infrastructure for the next generation of payment platforms. As stablecoin adoption accelerates globally, FATF Travel Rule solutions like this will be essential for VASPs serving international markets and meeting evolving regulatory requirements.”

Endl joins a global network of Virtual Asset Service Providers adopting Veriscope to meet FATF Travel Rule and AML compliance demands seamlessly. This partnership underscores the critical need for secure, compliant crypto infrastructure as stablecoin payments become mainstream across cross-border transactions, international remittances, and B2B digital asset payments.

About Veriscope

Veriscope, built on Shyft Network, is the leading blockchain compliance infrastructure for Virtual Asset Service Providers (VASPs), offering a frictionless solution for FATF Travel Rule compliance and AML regulatory requirements. Powered by User Signing cryptographic proof technology, it enables VASPs to request verification from non-custodial wallets, simplifying secure KYC data verification while prioritizing privacy through blockchain encryption. Trusted globally by leading crypto exchanges and payment platforms, Veriscope reduces compliance complexity and empowers VASPs to operate confidently in regulated digital asset markets worldwide.

About Endl

Endl is a digital asset payment infrastructure provider established in 2024. The company operates stablecoin payment rails, multicurrency account services, and fiat-to-crypto conversion infrastructure for commercial and retail clients. Services include cross-border transaction processing, linked card spending functionality, and yield generation on deposited assets. The platform is designed to meet regulatory compliance standards in jurisdictions where it operates, including FATF Travel Rule and anti-money laundering requirements.

Visit Shyft Network, subscribe to our newsletter, or follow us on X, LinkedIn, Telegram, and Medium.

Book a consultation at calendly.com/tomas-shyft or email bd @ shyft.network

Shyft Veriscope Expands VASP Compliance Network with Endl Integration was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


uquodo

The Future of Identity Security: Demands of a New Era

The post The Future of Identity Security: Demands of a New Era appeared first on uqudo.

Spherical Cow Consulting

The Infrastructure We Forgot We Built


“A friend sent over an interesting article by Ross Haleliuk that opened with ‘Why it’s not just power grid and water, but also tools like Stripe and Twilio that should be defined as critical infrastructure.'”

The point being made is that there are some services (as demonstrated by the recent AWS outage) that cause significant harm if they become unavailable. The definition of critical infrastructure needs to go beyond power, water, or even core ICT networking.

So let’s talk about that outage. On 19 October 2025, an AWS outage (of course it was the DNS) made the Internet wobble. Payments failed. Authentication broke. Delivery systems froze. For a few hours, the digital economy looked a lot less digital and a lot more fragile.

From my perspective, the strangest things failed. I was in the process of boarding a plane to the Internet Identity Workshop. Air traffic control was fine (yay for archaic systems!), but the gate agent couldn’t run the automated bag check tools. The flight purser couldn’t see what catering had been loaded. And my seatmate completely panicked, wondering if it was even safe to fly.

So many things broke. A lot of things didn’t. Everyday people had no idea how to differentiate what mattered. That moment reminded me how fragile modern “resilience” can be.

We used to worry about power grids, water, and transportation—the visible bones of civilization. Now it is APIs, SaaS platforms, and cloud services that keep everything else alive. The outage didn’t just break a few apps; it exposed how invisible dependencies have become the modern equivalents of roads and power lines.

A Digital Identity Digest The Infrastructure We Forgot We Built Play Episode Pause Episode Mute/Unmute Episode Rewind 10 Seconds 1x Fast Forward 30 seconds 00:00 / 00:16:02 Subscribe Share Amazon Apple Podcasts CastBox Listen Notes Overcast Pandora Player.fm PocketCasts Podbean RSS Spotify TuneIn YouTube iHeartRadio RSS Feed Share Link Embed

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The invisible backbone

Modern business runs on other people’s APIs (I’m looking at you, too, MCP). Stripe handles payments. Twilio delivers authentication codes and customer messages. Okta provides identity. AWS, Google Cloud, and Azure host it all.

These are not niche conveniences. They form the infrastructure of global commerce. When you tap your phone to pay for coffee, when a government portal confirms your tax filing, or when an airline gate agent scans your boarding pass, one of these services is quietly mediating that interaction.

They don’t look like infrastructure. There are no visible grids, transformers, or pipes. They exist as lines of code, data centers, and service contracts: modular, rentable, and ephemeral. Yet they behave like utilities in every meaningful way.

We have replaced public infrastructure with private platforms. The shift brought convenience and innovation, but also a new kind of risk. Infrastructure used to be something we built and maintained. Now it’s something we subscribe to and assume will stay online. We stopped building things to last and started building things to scale by leveraging someone else’s efficiencies. The assumption that the lights will always stay on has not caught up with reality.

The paradox of “resilient” design

Cloud architecture is often described as inherently resilient. Redundancy, failover, and microservices are meant to prevent collapse. But “resilient” in one dimension can mean “fragile” in another. I talked about this in an earlier post, The End of the Global Internet.

Designing for resilience makes sense in a world where the Internet is fragmenting. Companies build multi-region redundancies, on-prem backups, and hybrid clouds to protect themselves from geopolitical risk, supply chain issues, and simple human error. That same design logic—isolating risk, duplicating services, layering complexity—often increases fragility at the systemic level. Resilience is considered important, but efficiency is valued even more.

Microservices make each node stronger while the overall network becomes more brittle. Every layer of redundancy adds another point of failure and another dependency. A service might survive the loss of one data center but not the failure of a shared authentication API or DNS resolver. Local resilience frequently amplifies global fragility.

The AWS outage demonstrated this clearly. A system built for reliability failed because its dependencies were too successful. Interdependence works in both directions. When everyone relies on the same safety net, everyone falls together.
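To make the paradox concrete, here is a minimal TypeScript sketch of a service that survives any single regional failure yet still goes down with the one authentication dependency every region shares. All names are illustrative, not real AWS or vendor APIs.

```typescript
// Hypothetical sketch of the paradox above: a service with full regional
// redundancy that still fails with its one shared dependency.

type Region = { name: string; healthy: boolean };

// Each region can fail independently; the design survives any single loss.
const regions: Region[] = [
  { name: "us-east-1", healthy: false }, // one region down: fine in theory
  { name: "eu-west-1", healthy: true },
  { name: "ap-southeast-2", healthy: true },
];

// ...but every region calls the same shared authentication service.
const sharedAuthHealthy = false; // the dependency everyone forgot about

function canServeTraffic(): boolean {
  const anyRegionUp = regions.some((r) => r.healthy);
  // Regional redundancy is irrelevant if the shared dependency is down.
  return anyRegionUp && sharedAuthHealthy;
}

console.log(canServeTraffic()); // false: redundant everywhere, down anyway
```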

Utility or vendor?

This raises a larger question: should services like AWS, Stripe, or Twilio be treated as critical infrastructure? Haleluik says yes. I’m trying to decide where I stand on this, which is why I’m writing this series of blog posts.

In the United States, the FTC and FCC have debated for decades whether the Internet itself (aka, “broadband”) qualifies. If you aren’t familiar with that quagmire, you might be interested in “FCC vs FTC: A Primer on Broadband Privacy Oversight.”

The arguments for the designation are clear. Without broadband access, the modern economy falters. The arguments against it are equally clear. Labeling something as critical infrastructure introduces regulation, and regulation remains politically unpopular when applied to the Internet.

To put it another way, declaring something critical brings oversight, compliance requirements, and coordination mandates. Avoiding that label preserves flexibility and profit margins but leaves everyone downstream exposed. The result is an uneasy middle ground. These systems operate as essential infrastructure but remain governed by private interest. Their reach exceeds their obligations.

In traditional utilities, physical constraints limited monopoly power. Another way to look at it, though, is that traditional utilities are monopolized by government agencies (ideally) to the benefit of all. The economics of software, however, reward centralization. Success creates scale, and scale discourages competition. Very few can afford to get there (big enough to mask failures) from here (small enough to feel them).

I think we’re seeing quite a bit of magical thinking in the stories companies tell themselves about resilience: when your infrastructure depends on someone else’s business continuity plan, governance becomes an act of faith.

When “public” meets “critical”

While the debate over “critical infrastructure” in the United States often focuses on regulation versus innovation, the rest of the world is having a different but related conversation under the banner of digital public infrastructure (DPI).

Across the G20 and beyond, governments are grappling with whether digital public infrastructure—such as national payment systems, digital identity programs, and data exchange platforms—should be designated as critical information infrastructure (CII). A recent policy brief from a G20 engagement group argues that while both concepts overlap, they represent opposing design instincts: DPI is built for openness, interoperability, and inclusion, whereas CII emphasizes restriction, control, and national security.

That tension is already visible in India, where systems such as the Unified Payments Interface (UPI) have become de facto critical infrastructure. Although UPI has not been formally designated as CII, its scale and centrality to the nation’s payment system have raised similar questions about oversight and control. Its success has increased trust and security expectations, but also heightened concerns about market access for private and foreign participants, as well as the challenges of cross-border interoperability.

The G20 paper calls for ex-ante (early) designation of digital public systems as critical, rather than ex-post (after deployment), to avoid costly retrofits and policy confusion. But the underlying debate remains unresolved: Should public-facing digital infrastructure be treated like essential utilities, or like regulated assets of the state? The answer may depend less on technology and more on who society believes should bear responsibility for keeping the digital lights on. The answer to that won’t be the same everywhere.

Security versus availability

That tension over control doesn’t stop at the policy level. It runs straight through the design decisions companies make every day. When regulation is ambiguous, incentives fill the gap—and the strongest incentive of all is keeping systems online.

Availability has become the real currency of trust. It’s a strange thing, if you think about it logically, but human trust rarely is. (Cryptographic trust is another matter entirely.) Downtime brings backlash, lost revenue, and penalties, so companies do the rational thing: they optimize for uptime above all else. Security comes later. I don’t like it, but I understand why it happens.

Availability wins because it’s visible. Customers notice an outage immediately. They don’t notice an insecure configuration, a quiet policy failure, or a missing audit trail until something goes horribly wrong and the media gets hold of the after-action report.

That visibility gap distorts priorities. When reliability is measured only by uptime, risk grows quietly in the background. You can’t meaningfully secure systems you don’t control, yet most organizations depend on the illusion that control and accountability can be outsourced while reliability remains intact.

And then there are the incentives, a word I probably use too often, but for good reason. The incentives in this landscape reward continuity, not transparency. Revenue flows as long as the service runs, even if it runs insecurely. Yes, fines exist, but they are exceptions, not deterrents.

What counts as “working” is still negotiated privately, even when the consequences are public. Until those definitions include societal resilience, we’ll continue to mistake uptime for stability.

Regulated by dependence

All of this sounds like arguments for the critical infrastructure label, doesn’t it? But remember, formal regulation is only one kind of control. Dependence is another, because dependence acts as a form of unofficial regulation.

Society already treats many tech platforms as critical infrastructure even without saying so. Governments host essential services on AWS. Health systems use commercial clouds for patient records. Banks rely on private payment APIs to move billions each day.

We trust these companies to act in the public interest, not because they must, but because we lack alternatives. Massive failures prompt conversations, like this post, about whether these companies need closer monitoring. This is the logic of “too big to fail,” translated into digital infrastructure. Authentication services, data hosting, and communication gateways now carry systemic risk once reserved for banks.

We have built a layer of critical infrastructure that is privately owned but publicly relied upon. It operates by trust, not by oversight, and that is a fragile foundation for a system this essential.

The illusion of choice

Dependence isn’t only a matter of trust. It’s also the result of market design. The systems we treat as infrastructure are built on platforms that appear competitive but converge around the same few providers.

Vendor neutrality looks fine on a procurement slide but falters in practice.

Ask a CIO whether their organization could migrate off a cloud provider; most will say yes. Ask whether they could do it today, and the answer shortens to silence.

APIs, SDKs, and proprietary integrations make switching possible but painful. That pain enforces dependence. It isn’t necessarily malicious, but it keeps theoretical competition safely theoretical.

Lock-in is the quiet tax on convenience.

The market appears to offer many choices, but those choices often lead back to the same infrastructure. A handful of global providers now underpin authentication, messaging, hosting, and payments.

When a platform failure can delay paychecks, ground flights, or disrupt hospital systems, we’re no longer talking about preference or pricing. We’re talking about public safety.

The same qualities that once made the Internet adaptable—modular APIs, composable services, seamless integration—have made it fragile in aggregate. We built a dependency chain and called it innovation.

That dependency chain doesn’t just reshape markets. It reshapes how societies determine what constitutes essential. When the same few providers sit beneath every major system, “critical infrastructure” stops being a policy category and starts being a description of reality.

The expanding definition of “critical”

The challenge is that “critical” has become too big a concept. As societies grow more technically complex, the definition of critical infrastructure keeps growing with them.

Power, water, and transport once defined the baseline. Then came telecommunications. Then the Internet. The stack now includes authentication, payments, communication APIs, and identity services. Each layer improves capability while expanding exposure.

Whether or not you believe that these tools should exist, their failure now extends beyond the control of any single organization. As dependencies multiply, the distinction between convenience and infrastructure fades.

An AWS outage can make it really hard to check in for your flight. A Twilio misconfiguration can interrupt millions of authentication codes. A payment API failure can halt payroll for small businesses. These systems support not only individual companies but also the systems that support those companies.

If we decide that these systems function as critical infrastructure, the next question is what to do about them. Recognition doesn’t come free. It brings oversight, obligations, and trade-offs that neither governments nor providers are fully prepared to bear.

The cost of recognition

Calling an API a utility isn’t about nationalization. It’s about acknowledging that private infrastructure now performs public functions. With that acknowledgment comes responsibility.

Critical infrastructure is what society cannot function without. That definition once focused on physical essentials; now it includes the digital plumbing that supports everything else. Expanding that list has consequences. Every new addition stretches oversight thinner and diffuses accountability.

Resources are finite. Attention is finite. When every system is declared critical, prioritization becomes impossible. The next challenge isn’t deciding whether to protect these dependencies, but how much protection each truly deserves.

I can (and will!) say a lot more on this particular subject. Stay tuned for next week’s post.

Closing thoughts

Ross Haleluik’s observation was an interesting perspective on what utilities look like in modern life. Stripe, Twilio, AWS, and others do not just enable business; they are the business. They have become the unacknowledged utilities of a digital economy.

When I watched Delta’s systems falter during the AWS outage, it was not just an airline glitch. It was a glimpse into the depth of interdependence that defines a modern technical society. If efficiency is the goal, then labeling these systems as critical infrastructure may be the right path. But if resilience is the goal, then perhaps we have other choices to make.

The next outage will not be an exception. It will serve as another reminder that the foundations of the modern world are built on rented infrastructure, and the bill is coming due.

If you’d rather get a notification when a new blog post is published than hope to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:29]
Hi and welcome back.

I’m recording this episode while dealing with the cold that everyone seems to have right now — so apologies for it being a little bit late. I had hoped the cold would pass before I picked up the microphone again.

But here we are, and today I want to talk about Critical Infrastructure.

Rethinking What Counts as Critical

A friend of mine recently sent over an article by Ross Haleluik that began with an interesting point:

“It’s not just power grids and water systems that count as critical infrastructure, but also tools like Stripe and Twilio.”

His argument was simple yet powerful — some services have become so essential that when they fail, the impact ripples far beyond their own operations. The AWS outage in October proved that vividly.

Before diving deeper, it’s worth defining what we mean by critical infrastructure.
These are systems and assets so vital that their disruption would harm national security, the economy, or public safety.

In the United States, the Cybersecurity and Infrastructure Security Agency (CISA) identifies 16 sectors, including energy, water, transportation, and communications. Other countries use similar frameworks, but all share the same idea: protect what society cannot function without — ideally with some level of government oversight.

Yet, as Haleluik and others note, this list keeps expanding.

When AWS Went Down

On October 19, 2025, AWS experienced a major outage in one region. A database error cascaded into failures across DNS, payments, and authentication systems.

For a few hours, the digital economy looked far less digital.

I remember it clearly: I was boarding a flight to the Internet Identity Workshop. Air traffic control was fine — archaic but stable. Yet the gate agent couldn’t check bags, and the purser couldn’t confirm catering. My seatmate was visibly anxious about whether it was even safe to fly.

So many systems failed, yet many didn’t. What struck me most was how few people could tell the difference between what mattered and what didn’t.

Invisible Dependencies and Fragile Resilience

This incident made something clear — modern resilience is fragile because we’ve built it atop invisible dependencies that we rarely acknowledge.

Modern businesses run on other people’s APIs:

Stripe handles payments. Twilio delivers authentication codes. Okta manages identity. AWS, Google Cloud, Azure host nearly everything.

These aren’t niche conveniences anymore — they’re the infrastructure of global commerce. When you tap your phone to pay for coffee or file taxes online, one of these services is working silently in the background.

They may not look like traditional infrastructure — no visible grids or pipes — but they behave like utilities.

In short, we’ve replaced public infrastructure with private platforms.

Innovation and Its Risks

This shift has brought incredible innovation but also new risks.
Infrastructure used to be something we built and maintained. Now it’s something we subscribe to and assume will always work.

We’ve optimized for scale, not longevity.
But our assumptions about resilience haven’t kept pace.

There’s a paradox here:

Cloud architectures are built for redundancy and fault tolerance. Yet every layer of resilience adds another dependency — and therefore, another potential point of failure.

When a shared DNS resolver or authentication API fails, the entire ecosystem can crumble, no matter how many backups you have.

Interdependence and Oversight

Interdependence cuts both ways. When everyone relies on the same few providers, a failure for one becomes a failure for all.

So the big question arises:
Should services like AWS or Stripe be treated as critical infrastructure?

Haleluik argues yes. I’m not entirely convinced — but I see both sides.

In the U.S., agencies like the FTC and FCC have debated for decades whether the Internet itself qualifies as critical infrastructure.
Supporters argue that broadband is essential to modern life; opponents worry that regulation could slow innovation.

Declaring something “critical” brings oversight and compliance. Avoiding the label keeps flexibility — but also leaves society exposed.

We now have systems that operate like infrastructure yet remain governed by private interests. Their influence extends far beyond their legal obligations.

Digital Public Infrastructure and Global Perspectives

Outside the U.S., this debate continues under the banner of Digital Public Infrastructure (DPI).
Governments across the G20 are exploring whether payment systems, digital identity networks, and data exchange platforms should be classified as Critical Information Infrastructure (CII).

A recent G20 policy brief captured the tension well:

DPI emphasizes openness and inclusion. CII emphasizes restriction and control.

For example, India’s Unified Payments Interface (UPI) functions as critical infrastructure in practice, even if not in name.

Its success raises key questions:

Who controls access? How should foreign participants interact? Can cross-border interoperability be trusted?

The G20’s advice: identify critical systems early, before they become too big to retrofit with proper governance. But again, recognition invites regulation, which can stifle the innovation that made those systems successful.

The Incentive Problem

When regulation lags, incentives take over — and the biggest incentive of all is uptime.

Companies prioritize continuity because:

Downtime is visible. Security failures often aren’t.

As a result, availability becomes the currency of trust.
Revenue flows as long as systems run — even if they run insecurely.

Until we include societal resilience in our definition of “working,” we’ll keep mistaking uptime for stability.

The Trust Dilemma

Dependency itself already acts as a form of regulation.
Governments host their services on AWS. Hospitals store patient records in the cloud.

We trust these platforms — not because they’re obligated to serve the public interest, but because we have no alternative.

It’s the logic of too big to fail rewritten for the digital era.
We’ve built a layer of infrastructure that’s privately owned yet publicly indispensable — and it’s running on trust, not oversight.

Lock-In and Market Gravity

Dependence isn’t just about trust — it’s about design.

If you ask most CIOs whether they could migrate off a major cloud provider, they’ll say yes.
Ask if they could do it today, and the answer is no.

Proprietary integrations make switching possible but painful. That pain enforces dependence — not maliciously, but through market gravity.

Lock-in is the tax on convenience.
And when a platform failure can delay paychecks, disrupt hospitals, or ground flights, this isn’t about preference — it’s about public safety.

Expanding the Definition of Critical

As technology grows more complex, the concept of critical infrastructure keeps expanding.

Power, water, and transportation were once the baseline. Then came telecommunications and the Internet. Now we’re talking about authentication, payments, messaging, and identity services.

Each layer increases capability — but also multiplies exposure.

The real question isn’t whether these systems are critical. They clearly are.
It’s how to manage the responsibilities that come with that recognition.

Responsibility and Resilience

[00:13:10]
Calling an API a utility doesn’t mean nationalizing it. It means acknowledging that private infrastructure now performs public functions, and that recognition carries responsibility.

Yet every new addition to the “critical” list spreads oversight thinner. If everything’s a priority, nothing truly is.

We have to decide which dependencies deserve protection — and which risks we can live with.

Stripe, AWS, and similar services don’t just enable business. They are business. They’ve become the unacknowledged utilities of our digital economy.

When I saw my airline systems falter during the AWS outage, it wasn’t just a glitch — it was a glimpse into how deeply interwoven our dependencies have become.

If your goal is efficiency, labeling these systems as critical may help create stability through regulation.

But if your goal is resilience, perhaps it’s time to design for flexibility — to accept failure as part of stability, and to plan for it.

The next outage will happen. It won’t be an exception. It will simply remind us that the foundations of the modern world run on rented infrastructure, and that rent always comes due.

[00:15:26]
And that’s it for this week’s episode of The Digital Identity Digest.

If it helped make things a little clearer — or at least more interesting — share it with a friend or colleague and connect with me on LinkedIn @HLFLanagan.

If you enjoyed the show, please subscribe and leave a rating or review on Apple Podcasts or wherever you listen.

You can also find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged — and let’s keep these conversations going.

The post The Infrastructure We Forgot We Built appeared first on Spherical Cow Consulting.


Ocean Protocol

Ocean Community 51% Tokens

By: Bruce Pon

I want to address the following false statements and allegations made by Sheikh, Goertzel and Burke:

Oct 9, 2025 — Sheikh on X Spaces (Dmitrov)
Oct 9, 2025 — Jamie Burke X Post
Oct 13, 2025 — Sheikh and Goertzel on X Spaces (Benali)
Oct 15, 2025 — Jamie Burke X Post
Oct 17, 2025 — Sheikh on All-in-Crypto Podcast

To put to rest any false claims or allegations of “theft” of property, let’s track the story from start to finish. This post has also been prepared with input from Ocean Expeditions.

Prior to March 2021, Ocean Protocol Foundation (‘OPF’) owned the minting rights to the $OCEAN token contract (0x967) with 5 signers assigned by OPF.

The Ocean community allocation, which would comprise 51% of eventual $OCEAN supply, (‘51% Tokens’) had not yet been minted but its allocation had been communicated to the Ocean community in the Ocean whitepaper.

To set up the oceanDAO properly, OPF engaged commercial lawyers, accountants and auditors to conceive a legal and auditable pathway to grant the oceanDAO the rights of minting the Ocean community allocation.

In June 2022 (but with documents dated March 2021), the rights to mint the 51% Tokens were irrevocably signed over to the oceanDAO. Along with this, seven Ocean community members and independent crypto OGs stepped in, in their individual capacities, to become trustees of the 51% Tokens.

March 31, 2021 — Legal and formal sign-over of assets to oceanDAO from Ocean Protocol Foundation

Almost a year later, in May 2023, using the minting rights it had been granted in June 2022, the oceanDAO minted the 51% Token supply and irreversibly relinquished all control over the $OCEAN token contract 0x967 for eternity. The $OCEAN token lives wholly independently on the Ethereum blockchain and cannot be changed, modified, or stopped.

TX ID: https://etherscan.io/tx/0x9286ab49306fd3fca4e79a1e3bdd88893042fcbd23ddb5e705e1029c6f53a068

The 51% Tokens were minted into existence on the Ethereum blockchain. The address holding the $OCEAN tokens sat in the ether, owned by no one. The address could release $OCEAN when at least 4 of 7 signers activated their key in the Gnosis Safe vault, which governs the address.

None of the signers had any claim of ownership over the address, Gnosis Safe vault or the contents (51% Tokens). They acted in the interest of the Ocean community and not anyone else, and certainly not OPF or the Ocean Founders.
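For readers unfamiliar with how such a vault gates releases, here is a minimal TypeScript sketch of the 4-of-7 threshold rule described above. This is illustrative only, not Gnosis Safe’s actual implementation, and the signer addresses are placeholders.

```typescript
// Illustrative sketch of the 4-of-7 release rule described above.

interface SignerApproval {
  signer: string;   // one of the seven trustees
  approved: boolean;
}

const THRESHOLD = 4; // at least 4 of 7 signers must approve a release

function canExecute(approvals: SignerApproval[]): boolean {
  const yes = approvals.filter((a) => a.approved).length;
  return yes >= THRESHOLD;
}

// Example: three approvals are not enough to move tokens.
console.log(
  canExecute([
    { signer: "0xA…", approved: true },
    { signer: "0xB…", approved: true },
    { signer: "0xC…", approved: true },
  ])
); // false until a fourth signer approves
```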

During the ASI talks in Q2/2024, Fetch.ai applied significant pressure on OPF to convert the entire 51% Token supply immediately to $FET. OPF pushed back clearly stating that it had no power to do so as the 51% Tokens were not the property of, or under the control of OPF.

During those talks, OPF also repeatedly emphasized to Fetch.ai and SingularityNET that the property rights of EVERY token holder (including those of OPF and oceanDAO) must be respected, and that these rights were completely separate to the ASI Alliance. Fetch.ai and SingularityNET agreed with this fundamental principle.

March 24, 2024 — First discussion about Ocean joining ASI
April 3, 2024 — Pon to the Sheikh (Fetch.ai), Goertzel, Lake, Casiraghi (SingularityNET)
May 24, 2024 — Pon to D. Levy (Fetch.ai)
August 6, 2024 — ASI Board Call where Sheikh calls for ASI Alliance to refrain from exerting control over ASI members
August 17, 2025 — SingularityNET / Ocean Call

It must also be highlighted that oceanDAO was never a party to any agreement with Fetch.ai and SingularityNET. It had its own investment priorities and objectives as a regular token holder. oceanDAO’s existence, and the fact that it was the entity controlling the 51% Tokens with 7 signers, was acknowledged by SingularityNET at the very start of talks in March 2024.

In those discussions, OPF explained the intentions of the oceanDAO to release $OCEAN tokens to the community on an emission schedule.

In mid-2024, after the formation of the ASI Alliance, Fetch minted 611 million $FET tokens for the Ocean community. The sole purpose of minting this 611 million $FET was to accommodate a 100% swap-out of the Ocean community’s token supply of 1.41 billion $OCEAN. This swap-out would be via a $OCEAN-$FET token bridge and migration contract.

At that time, the oceanDAO did not initiate a conversion of the Gnosis Safe vault 51% Tokens from $OCEAN to $FET. The 51% of tokens sat dormant, as they had since being minted in May 2023.

However, with the continued and relentlessly deteriorating price of $FET due to the actions of Fetch.ai and SingularityNET, the Ocean community treasury had fallen in value from $1 billion in Q1/2024 to $300 million in Q2/2025.

oceanDAO therefore decided around April 2025 that it needed to take steps to avoid a further fall in the value of the 51% Tokens for the Ocean community by converting some of the $OCEAN into other cryptocurrencies or stablecoins, so that the oceanDAO would not be saddled with a large supply of steadily depreciating tokens.

The immediate and obvious risk to the Ocean community would be that if and when suitable projects come about, the Ocean community rewards could very well be worthless due to the continued fall in $FET price. This was an important consideration for the oceanDAO when it eventually decided that active steps had to be taken to protect the interests of the Ocean community.

Upon the establishment of Ocean Expeditions, a Cayman trust, in late June 2025, oceanDAO transferred signing rights over the 51% Tokens to Ocean Expeditions, which then initiated a conversion of $OCEAN to $FET using the token bridge and migration contract.

TX ID: https://etherscan.io/tx/0xce91eef8788c15c445fa8bb6312e8d316088ce174454bb3c96e7caeb62da980d

Sheikh alluded to this act of conversion of $OCEAN to $FET, along with his incorrect understanding of their purpose in a podcast.

Oct 17, 2025 — Sheikh speaking on All-in Crypto

However, unlike what Sheikh falsely claimed, Ocean Expeditions’ conversion of the 51% Tokens from $OCEAN to $FET, and the selling of some of these tokens, is in no way a “theft” of these tokens by Ocean Expeditions or by OPF. These are unequivocally not ASI community tokens, not “ASI” community reward tokens, and not under any consideration of the ASI community.

Ocean Expeditions converted its $OCEAN holdings into $FET by utilising the $FET that were specifically minted for the Ocean community and earmarked by Fetch.ai for this conversion. It is important to emphasize that Ocean Expeditions did not tap into any other portion of the $FET supply. Simply put, there was no “theft” because Ocean Expeditions had claimed what it was rightfully allocated and entitled to.

Any token movements of the 51% Tokens to 30 wallets, Binance, GSR or any other recipient AND any token liquidations or disposals, are the sole right of, and at the discretion of Ocean Expeditions, and no one else.

Ocean Expeditions sought to preserve the value of the community assets, for the good of the Ocean community. Any assets, whether in $FET, other tokens or fiat, remain held by Ocean Expeditions in trust for the Ocean community. The assets have not been transferred to, or in any other way taken by OPF or the Ocean Founders.

We demand that Fetch.ai, Sheikh and all other representatives of the ASI Alliance who have promulgated any lies, incitement and misrepresentations (e.g. “stolen” “scammers” “we will get you”) immediately retract their statements, delete their social media posts where these statements were made, issue a clarification to the broader news media and issue a formal apology to Ocean Expeditions, Ocean Protocol Foundation, and the Ocean Founders.

We repeat that the 51% Tokens are owned by Ocean Expeditions, for the sole purpose of the Ocean community and no one else.

Ocean Community 51% Tokens was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

Handling Third-Party Access Tokens Securely in AI Agents

AI Agents access sensitive data and are responsible to protect this data against attack vectors. Learn how a secure-by-design approach helps you build AI Agents that interact safely with applications, APIs, and services.

Monday, 03. November 2025

Dock

The UAE Becomes the First Country to Phase Out SMS and Email OTPs


The Central Bank of the United Arab Emirates has taken a groundbreaking step in financial security. 

It is now mandating the phase-out of SMS and email one-time passwords (OTPs).


FastID

Optimizing Web Performance: Unpacking Fastly’s Intelligent Compression Defaults

Optimize web performance with Fastly's intelligent compression defaults. Learn how Gzip and Brotli shrink payload sizes, reduce costs, and speed up your site.

Sunday, 02. November 2025

Ockam

The 7-Day Compounding Sprint


The difference between effort that multiplies and effort that disappears

Continue reading on Medium »

Saturday, 01. November 2025

Recognito Vision

How a Biometric Face Scanner Helps Businesses Verify Users with Confidence


In today’s digital age, knowing who’s on the other side of the screen isn’t just a security measure, it’s a necessity. From online banking to employee attendance, businesses across the globe are under pressure to verify users faster and with absolute accuracy. That’s where the biometric face scanner steps in, giving companies the confidence to know every interaction is authentic.

The rise of artificial intelligence has turned face scanning into a cornerstone of identity verification. Whether it’s an AI face scanner, a facial recognition scanner, or an advanced face scanning machine, the goal remains the same: keep things secure without slowing people down.

 

The Evolution of Facial Scanning Technology

Facial recognition has come a long way since the early 2000s. What once required large systems and clunky cameras now fits into a sleek face scanner device powered by deep learning. Modern facial scanning technology can detect and verify faces in milliseconds while maintaining compliance with global data standards such as GDPR.

AI-driven algorithms analyze facial landmarks and compare them with stored biometric templates. This process delivers high accuracy: results from NIST’s Face Recognition Vendor Test (FRVT) show that advanced AI models now achieve over 99% accuracy in matching and verification, outperforming traditional biometric systems like fingerprints under certain conditions.

These results show that biometric verification isn’t just futuristic talk, it’s an essential layer of digital trust.

 

Why Businesses Are Switching to Face Scanner Biometric Systems

Passwords, ID cards, and manual checks are vulnerable to theft, fraud, and human error. A face scanner biometric solution eliminates these weaknesses. For many businesses, it’s not about replacing human judgment, it’s about enhancing it.

Companies are now using AI face scan systems to authenticate employees, onboard new clients, and manage visitor access seamlessly. Here’s why adoption is growing so fast:

Faster verification: A simple glance replaces lengthy manual identity checks.

Stronger security: Faces can’t be borrowed, stolen, or easily replicated.

Higher accuracy: The system adapts to lighting, angles, and even subtle changes like facial hair.

Better compliance: Aligned with data protection and global standards.

It’s the balance between convenience and control that makes facial recognition scanners invaluable in sectors such as finance, healthcare, retail, and corporate access management.

 

How a Face Scan Attendance Machine Improves Workforce Management

Time theft and attendance fraud cost businesses millions annually. Traditional punch cards or RFID systems can be manipulated, but a face scan attendance machine offers transparency and efficiency. Employees simply look into a face scan camera, and their attendance is logged instantly.

This system ensures that only real, verified individuals are recorded. No more buddy punching or proxy logins. Companies integrating such systems experience improved productivity and cleaner attendance data. It’s a small change that brings big operational discipline.

Solutions like the face recognition SDK make implementation simple by offering APIs that integrate directly into existing HR and access management software.

 

The Technology Behind AI Face Scanners

A biometric face scanner operates on the principles of artificial intelligence and computer vision. It starts by mapping key facial points such as eyes, nose, jawline, and contours to create a unique mathematical pattern.

Here’s how the process unfolds:

A face scan camera captures the user’s face in real-time.

The AI model extracts biometric data points.

The AI face scanner compares the captured data with stored templates.

The result is an instant verification decision.

Unlike passwords or tokens, facial biometrics are almost impossible to replicate. Many systems also include liveness detection to distinguish between a live person and a photo or mask. Businesses can test this feature through the face liveness detection SDK, ensuring their verification process isn’t fooled by fake attempts.
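As a rough sketch of the comparison step, the following TypeScript illustrates template matching with a similarity threshold and a liveness gate. The embedding values and the 0.8 threshold are invented for the example, not taken from any vendor’s SDK.

```typescript
// Rough sketch of the matching step: compare a live capture's embedding with
// a stored template via cosine similarity, gated by a liveness check.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function verifyFace(
  capturedEmbedding: number[],
  storedTemplate: number[],
  isLive: boolean,   // result of a liveness-detection check
  threshold = 0.8    // tuned per deployment; 0.8 is an assumed value
): boolean {
  if (!isLive) return false; // reject photos, masks, and replays outright
  return cosineSimilarity(capturedEmbedding, storedTemplate) >= threshold;
}

// Example: a close match from a live capture passes; a spoof never reaches
// the similarity comparison.
console.log(verifyFace([0.1, 0.9, 0.4], [0.12, 0.88, 0.41], true));  // true
console.log(verifyFace([0.1, 0.9, 0.4], [0.12, 0.88, 0.41], false)); // false
```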

 

Ensuring Privacy and Data Security

One major concern surrounding facial scanning technology is data privacy. Responsible companies know that collecting biometric data requires careful handling. The good news is that modern systems don’t store raw images. Instead, they use encrypted templates, mathematical representations that can’t be reverse-engineered into a real face.

Organizations adhering to GDPR and global privacy laws can confidently deploy face scanner devices without compromising user rights. Transparency, consent, and clear data retention policies are the pillars of ethical AI use.

To stay updated on compliance standards and performance benchmarks, many developers reference the NIST FRVT 1:1 reports, which highlight progress in algorithmic accuracy and fairness.

 

Real-World Applications of Face Scanning Machines

Facial recognition scanners have a wide range of real-world applications that continue to grow each year. Here are some key areas where they are being used:

1. Banking and Finance

Facial recognition technology helps prevent identity fraud during digital onboarding, ensuring secure access to banking services.

2. Corporate Offices

These systems provide secure and frictionless access control, allowing employees to enter restricted areas without the need for physical keys or ID cards.

3. Airports

Airports use facial recognition to streamline processes, offering faster and more secure boarding and immigration checks.

4. Education

In education, facial recognition is used for automated attendance tracking and exam proctoring, reducing administrative overhead and ensuring exam integrity.

For developers or businesses looking to explore how these systems work, the face biometric playground provides a hands-on environment to test AI-based facial recognition in action.

 

Challenges and Ethical Considerations

While the benefits are undeniable, biometric systems must still address several challenges. AI bias, varying lighting conditions, and evolving spoofing methods are ongoing hurdles. Continuous algorithm training using diverse datasets is key to ensuring fairness and reliability.

Ethical implementation also plays a major role. Users must always know when and why their data is being collected. Transparent policies build trust, the same trust that a biometric face scanner promises to uphold.

Open-source initiatives like Recognito Vision’s GitHub repository are helping drive responsible innovation by allowing researchers to refine and test AI-based recognition models openly and collaboratively.

 

The Future of Face Scanning and Business Verification

As AI becomes more sophisticated, so will biometric systems. Future scanners will combine 3D depth sensing, emotion analytics, and advanced liveness detection to improve security even further.

The evolution of AI face scan systems is not about replacing traditional verification but complementing it, building a security framework that feels effortless to users yet nearly impossible to breach.

 

Building Trust in the Age of Intelligent Verification

Trust isn’t built in a day, but it can be verified in a second. A well-designed biometric face scanner offers that confidence, enabling companies to know their users without a doubt. From corporate offices to fintech platforms, businesses that invest in intelligent verification today will lead tomorrow’s secure digital economy.

As one of the pioneers in ethical biometric verification, Recognito continues to empower organizations with AI-driven identity solutions that combine precision, privacy, and confidence.

 

Frequently Asked Questions

1. What is a biometric face scanner and how does it work?

It’s an AI-powered system that analyzes facial features to verify identity in seconds.

2. Is facial recognition technology safe for user privacy?

Yes. Modern systems use encrypted facial templates instead of storing real images.

3. What are the main benefits of using facial recognition in businesses?

It offers faster verification, stronger security, and reduced fraud risks.

4. How can companies integrate a biometric face scanner into their systems?

They can use APIs or SDKs to easily add facial verification to existing software.

Friday, 31. October 2025

Ocean Protocol

DF160, DF161 Complete and DF162 Launches

Predictoor DF160, DF161 rewards available. DF162 runs October 30th — November 6th, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor.

Data Farming Rounds 160 (DF160) and 161 (DF161) have completed and rewards are now available after a temporary interruption in service from Oct 13 — Oct 30.

DF162 is live, October 30th. It concludes on November 6th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF162 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF162

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean and DF Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF160, DF161 Complete and DF162 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium

How to Safely Entrust a ‘Digital Power of Attorney’ to My AI


Introducing a patent-pending framework that fuses AI, DID, and Blockchain to enable secure AI delegation

Imagine this:

What if you could give your personal AI assistant a Power of Attorney — so it could act on your behalf? That sounds convenient, but only if it’s done in a fully secure, verifiable, and trustworthy way. No more sharing passwords. No more blind trust in centralized services. Just a cryptographically proven system of delegation.

A recent patent filed by CPLABS, titled “Method for Delegated AI Agent-Based Service Execution on Behalf of a User,” outlines exactly such a future. This patent proposes a new model that combines Decentralized Identity (DID), blockchain, and AI agents — so your AI can act for you, but with boundaries, verifiability, and accountability built in.

The Problem with Today’s Delegation Methods

We delegate all the time to people, apps, and APIs. But today’s delegation models are broken:

Paper-based authorizations are outdated: Physical documents are cumbersome and easily forged, and verifying who issued them is hard.
API keys and password sharing are risky: Tokens can be leaked, and once exposed, there’s no way to limit or track their use. Most systems lack built-in revocation or expiration controls.
No clear trace of responsibility: If your AI does something using your credentials, it is recorded as if you did it. There is no audit trail, proof of scope, or consent.

We need a more secure and user-centric model in the age of AI agents acting autonomously.

A New Solution: DID + Blockchain + AI Agent

The patent proposes an architecture built on three core technologies:

1. Decentralized Identity (DID)

Every user has a self-sovereign, blockchain-based digital ID. So does the AI agent — it operates as its own verifiable identity.

2. Blockchain Ledger

All actions and delegations are immutably recorded on-chain. Who delegated what, to whom, when, and how is traceable and tamper-proof.

3. Encrypted Delegation Credential (Digital PoA)

Instead of paper documents, users issue digitally signed credentials. These include:

The agent’s DID
The scope of authority
Expiration timestamp
Revocation endpoint

This creates a “digital power of attorney” that can be cryptographically verified by any service.

The entire process runs without centralized intermediaries and is powered by standardized DID and blockchain protocols.

How It Actually Works

1. User delegates authority to AI: e.g., “My AI may book doctor appointments this month.”
2. AI submits delegation proof when acting: the AI presents both its own DID and the user-signed credential.
3. The service provider verifies: the service checks the signatures and revocation status via the DID registry and the blockchain.
4. Authorization scope is enforced: if the action goes beyond the delegated scope, it’s rejected.
5. Every action is logged on-chain: whether successful or failed, all attempts are transparently recorded.
6. The user can revoke at any time: revocation is immediate and recorded immutably on-chain.

The AI can only act using the digital “key” you’ve granted — and every move is auditable.
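To make that concrete, here is a speculative TypeScript sketch of what such a delegation credential might carry and how a service could enforce it. Every field name is an illustrative assumption, since this post does not publish the patent’s actual data model.

```typescript
// Speculative sketch of a "digital power of attorney" credential and the
// checks a service might run. All field names are illustrative assumptions.

interface DelegationCredential {
  agentDid: string;           // the AI agent's DID
  delegatorDid: string;       // the user who granted authority
  scope: string[];            // e.g., ["book-appointment"]
  expiresAt: string;          // ISO 8601 expiration timestamp
  revocationEndpoint: string; // where revocation status can be checked
  signature: string;          // the user's signature over the fields above
}

function isAuthorized(
  cred: DelegationCredential,
  requestedAction: string,
  signatureValid: boolean, // verified against the delegator's DID document
  revoked: boolean         // looked up via the revocation endpoint
): boolean {
  if (!signatureValid || revoked) return false;            // untrusted or withdrawn
  if (new Date(cred.expiresAt) < new Date()) return false; // expired
  return cred.scope.includes(requestedAction);             // scope enforced
}

// Example: a valid, unexpired credential still cannot authorize an action
// outside its delegated scope.
const cred: DelegationCredential = {
  agentDid: "did:example:agent",
  delegatorDid: "did:example:user",
  scope: ["book-appointment"],
  expiresAt: "2031-01-01T00:00:00Z",
  revocationEndpoint: "https://example.com/revocations/123",
  signature: "…",
};
console.log(isAuthorized(cred, "transfer-funds", true, false)); // false
```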

Potential Use Cases

Healthcare: The AI assistant retrieves records from Hospital A, forwards them to Hospital B with full user consent, and logs them securely.

Finance: Delegate your AI to automate transfers up to $1,000 per day. Every transaction is verified and capped.

Government services: AI files address changes or document requests using digital credentials — recognized legally as your proxy.

Smart home access: Courier arrives? Your AI is granted temporary “open door” access, which is revoked automatically post-delivery.

Why This Matters

User-Controlled Delegation: You define the rules. Nothing happens without your explicit, cryptographically backed consent.

Verifiable Trust: Anyone can audit the record on-chain. Services don’t need to “trust” blindly.

Scalable Automation: Enables safe, rule-bound AI delegation across sectors.

Clear Responsibility: Transparent logs help determine who’s accountable if anything goes wrong.

In Summary

This system provides a secure infrastructure for AI-human collaboration, backed by blockchain. It’s like handing your AI a digitally signed key with built-in expiration and tracking — ensuring it never oversteps its bounds.

This patent envisions a simple but powerful future: Your AI can act for you, but only within the rules you define, and everything it does is traceable and accountable.

That’s not just clever tech — it’s the foundation of digital trust in an AI-driven world.


Website | https://metadium.com Discord | https://discord.gg/ZnaCfYbXw2 Telegram(KOR) | http://t.me/metadiumofficialkor Twitter | https://twitter.com/MetadiumK Medium | https://medium.com/metadium

How to Safely Entrust a ‘Digital Power of Attorney’ to My AI was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aergo

BC 101 #7: DAO, Standardization, and Neutrality


A Decentralized Autonomous Organization (DAO) acts as a programmable coordination layer, recording proposals, votes, and outcomes through immutable or verifiable channels. This ensures that every decision can be audited and traced.

For blockchain systems spanning a broad spectrum of applications — from enterprise solutions and government infrastructure to consumer-facing services — this structure provides the transparency and accountability required by regulated entities while enabling decentralized control.

DAO governance delivers substantial value by providing a standardized, neutral framework for coordination that reduces operational and regulatory friction.

Third-Party and In-House DAO Infrastructures

In recent years, the infrastructure supporting DAOs has advanced significantly. A variety of third-party governance solutions now offer stable, enterprise-ready interfaces for managing proposals, conducting votes, and executing multi-signature transactions. Some noteworthy platforms include:

Snapshot: An off-chain, gasless voting platform widely used across leading protocols. It allows flexible voting strategies, quorum requirements, and verifiable results without introducing high transaction costs.

Tally: A fully on-chain governance dashboard built on Ethereum, designed for transparency and auditability of protocol votes, treasury management, and proposal lifecycle tracking.

These solutions form a growing middleware ecosystem that brings governance to the same level of technical maturity as enterprise resource planning systems.

At the same time, in-house DAO frameworks extend beyond generic governance tooling. They integrate DAO logic with the project’s native identity, treasury, and compliance layers, enabling seamless coordination between on-chain and organizational processes. This approach ensures that governance not only reflects community consensus but also aligns with operational and regulatory realities.
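As a simple illustration of that coordination layer, here is a hedged TypeScript sketch of a token-weighted tally. The quorum and simple-majority rule are invented for this example and are not taken from Snapshot, Tally, or any specific DAO.

```typescript
// Minimal sketch of a governance tally: record weighted votes and decide a
// proposal. Quorum and majority rule are assumptions for illustration.

interface Vote {
  voter: string;    // voter address
  weight: number;   // token-weighted or role-based voting power
  support: boolean; // for or against the proposal
}

function tally(votes: Vote[], quorum: number): "passed" | "failed" | "no quorum" {
  const total = votes.reduce((sum, v) => sum + v.weight, 0);
  if (total < quorum) return "no quorum"; // not enough participation
  const inFavor = votes
    .filter((v) => v.support)
    .reduce((sum, v) => sum + v.weight, 0);
  return inFavor * 2 > total ? "passed" : "failed"; // simple majority
}

// Example: 700 of 1,000 weighted votes in favor, with quorum met.
console.log(
  tally(
    [
      { voter: "0xAAA", weight: 700, support: true },
      { voter: "0xBBB", weight: 300, support: false },
    ],
    500
  )
); // "passed"
```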

DAO Governance as a Mechanism for Neutrality

DAO governance reinforces network neutrality, a crucial characteristic for projects that operate across multiple jurisdictions or regulatory contexts. This structural neutrality diminishes the concentration of control that can lead to compliance issues and enables projects to remain resilient during regulatory or organizational changes.

For blockchain systems aimed at enterprises, DAO infrastructure provides three measurable benefits:

Regulatory Adaptability: Transparent proposal and voting systems create a verifiable governance record suitable for audits, disclosures, or compliance reviews.

Operational Continuity: Distributed governance logic allows decision-making to persist independently of any single corporate entity or leadership group.

Stakeholder Alignment: Token-weighted or role-based participation aligns validators, contributors, and investors under a unified, rule-based coordination framework.

Toward Structured and Resilient Governance

As blockchain networks evolve into critical data and financial infrastructure, governance must progress beyond mere symbolic decentralization. DAO systems offer a structured, compliant, and resilient approach to managing complex ecosystems.

DAOs are not merely voting or staking platforms. They serve as the operational core that defines how decentralized systems make, record, and enforce decisions. Only with a well-structured DAO model can projects establish the legal, operational, and procedural foundation required to function as sustainable organizations.

BC 101 #7: DAO, Standardization, and Neutrality was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

8 Log Detections for Credential Stuffing and MFA Exploit Prevention

Boost your Auth0 security monitoring with eight essential log detections for your SIEM available in the Auth0 Detection Catalog. Protect against credential stuffing, MFA exploits, and authorization request abuse.

BlueSky

Progress Update: Building Healthier Social Media

Over the next few months, we’ll be iterating on the systems that make Bluesky a better place for healthy conversations. Some experiments will stick, others will evolve, and we’ll share what we learn along the way.

At Bluesky, we’re building a place where people can have better conversations, not just louder ones. We’re not driven by engagement-at-all-costs metrics or ad incentives, so we’re free to do what’s good for people. One of the biggest parts of that is the replies section. We want fun, genuine, and respectful exchanges that build friendships, and we’re taking steps to make that happen.

So far, we’ve introduced several tools that give people more control over how they interact on Bluesky. The followers-only reply setting helps posters keep discussions focused on trusted connections, mod lists make it easier to share moderation preferences, and the option to detach quote posts gives people a way to limit unwanted attention or dogpiling. These features have laid the groundwork for what we’re focused on now: improving the quality of replies and making conversations feel more personal, constructive, and in your control.

In our recent post, we shared some of the new ideas we were starting to develop to encourage healthier interactions. Since then, we’ve started rolling out updates, testing new ranking models, and studying how small product decisions can change the tone of conversations across the network.

We’re testing a mix of ranking updates, design changes, and new feedback tools — all aimed at improving the quality of conversation and giving people more control over their experience.

Social proximity

We’re developing a system that maps the “social neighborhoods” that naturally form on Bluesky — the people you already interact with or would likely enjoy knowing. By prioritizing replies from people closer to your neighborhood, we can make conversations feel more relevant, familiar, and less prone to misunderstandings.

Dislikes beta

Soon, we’ll start testing a “dislike” option as a new feedback signal to improve personalization in Discover and other feeds. Dislikes help the system understand what kinds of posts you’d prefer to see less of. They may also lightly inform reply ranking, reducing the visibility of low-quality replies. Dislikes are private and the signal isn’t global — it mainly affects your own experience and, to an extent, others in your social neighborhood.

Improved toxicity detection

Our latest model aims to do a better job of detecting replies that are toxic, spammy, off-topic, or posted in bad faith. Posts that cross the line are down-ranked in reply threads, search results, and notifications, reducing their visibility while keeping conversations open for good-faith discussion.
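Bluesky hasn't published its scoring formula, but the three signals above — social proximity, dislikes, and toxicity — suggest the general shape of a reply ranker. Here is a minimal sketch with invented weights and hypothetical signal names; it is an illustration, not Bluesky's implementation:

```python
# Hypothetical reply-ranking sketch. The weights, thresholds, and signal
# names are invented for illustration; Bluesky's actual model is not public.
from dataclasses import dataclass

@dataclass
class Reply:
    author_id: str
    toxicity: float      # 0..1, from a (hypothetical) toxicity classifier
    dislike_rate: float  # 0..1, dislikes among viewers in your neighborhood

def proximity(viewer: str, author: str, follows: dict[str, set[str]]) -> float:
    """Crude social-proximity signal: people you follow score highest,
    friends-of-friends next, everyone else lowest."""
    mine = follows.get(viewer, set())
    if author in mine:
        return 1.0
    if any(author in follows.get(f, set()) for f in mine):
        return 0.5
    return 0.1

def score(viewer: str, r: Reply, follows: dict[str, set[str]]) -> float:
    # Down-rank toxic or widely disliked replies; up-rank nearby authors.
    return proximity(viewer, r.author_id, follows) * (1 - r.toxicity) * (1 - 0.5 * r.dislike_rate)

replies = [Reply("alice", 0.05, 0.0), Reply("stranger", 0.85, 0.6)]
follows = {"me": {"alice"}}
print(sorted(replies, key=lambda r: score("me", r, follows), reverse=True))
```

Multiplying the signals (rather than summing them) means any single strong negative signal, such as high toxicity, can suppress a reply on its own — one plausible design choice among many.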

Reply context

We’re testing a small change to how the “Reply” button works on top-level posts: instead of jumping straight into the composer, it now takes you to the full thread first. We think this will encourage people to read before replying — a simple way to reduce context collapse and redundant replies.

Reply settings refresh

Bluesky’s reply settings give posters fine-grained control over who can reply, but many people don’t realize they exist. We’re rolling out a clearer design and a one-time nudge in the post composer to make them easier to find and use. Better visibility means more people can shape their own conversations and prevent unwanted replies before they happen. Conversations you start should belong to you.

We won’t get everything right on the first try. Building healthier social media will take ongoing experimentation supported by your feedback. This work matters because it tackles a root flaw in how social platforms have been built in the past — systems that optimize for attention and outrage instead of genuine conversation. Improving replies cuts to the heart of that problem.

Over the next few months, we’ll keep refining these systems and measuring their impact on how people experience Bluesky. Some experiments will stick, others will evolve, and we’ll share what we learn along the way.

Thursday, 30. October 2025

Indicio

Five reasons why AI needs decentralized identity

AI systems need decentralized identity. Why? Only decentralized identity provides the authentication, consent, delegated authority, structure, and governance needed for AI to deliver value.

By Trevor Butterworth

AI is going to be everywhere. From virtual assistants to digital twins and autonomous systems, it will reinvent how we do everything. But only if it can be trusted with high value data, only if it can access high quality data, only if there’s user consent to that data being shared, and only if it can be easily governed.

This is where decentralized identity comes in. It removes obstacles, solves problems, and does so in a way that delivers next-generation security. Here are the five ways decentralized identity and its key technology — Verifiable Credentials — put AI agents and autonomous AI systems on the path to trust and monetization.

1. Authentication

We are going to need to authenticate AI agents. They are going to need to authenticate us. It’s an obvious trust issue when so much data is at stake.

“We” means everything that interacts with an agent — people, organizations, devices, robots, and other AI agents.

Traditional forms of identity authentication aren’t going to cut it (see this recent article by Hackernoon — “When OAuth Becomes a Weapon: AI Agents Authentication Crisis”).

And given the current volume of losses to identity fraud (the estimated global cost of digital fraud was $534 billion over the past 12 months, according to Infosecurity Magazine), the idea that we should now open up massive quantities of high-value data to the same security vulnerabilities is insane.

The first fake AI agent that scams a major financial customer will cause panic, burn trust, and trigger regulation.

Only decentralized Verifiable Credentials can provide the seamless, secure, and AI-resistant authentication needed to identify both AI agents and their users. And they enable authentication to occur before any data is shared.

2. Consent

AI needs data to work — and that means a lot of personal data and user data. If you want AI solutions that require access to personal data to comply with GDPR and other data privacy regulations, the “data subject” needs to be able to consent to sharing their data. Otherwise, that data is going nowhere — or you’re headed toward compliance hell.

Verifiable Credentials are a privacy-by-design technology. Consent is built into how they work. This simplifies compliance issues and can be easily recorded for audit.

3. Delegated authority

AI agents are going to need to access multiple data sources. While Verifiable Credentials and digital wallets allow people and organizations to hold their own data, they are not necessarily going to hold all the data needed for a task.

For example, banks and financial institutions have multiple departments. An AI agent that is given permission to access an account holder’s information will need to work across different departments, either to access the customer’s data or to connect it to other data. It might also need to share the data with other agents or external organizations.

Verifiable Credentials make it easy for a person to delegate their authority to an AI agent to go where it needs to go to execute a task, radically simplifying compliance. Decentralized governance (more on which later) simplifies establishing trust between different organizations and systems.

4. Structured data

AI agents and systems need good quality data to do their job (and therefore earn their keep). Verifiable Credentials issued by trusted data sources contain information that’s tamper-proof, that can come from validated documents, and that is structured in a way that each data point can be selectively disclosed.

In other words, by putting information into a Verifiable Credential, we minimize error while structuring it to be easy to consume. In the process, we enable data and purpose minimization to meet GDPR requirements.
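As a rough illustration of that structure, here is a toy model of selective disclosure. Real Verifiable Credentials follow standardized formats (the W3C VC data model, with mechanisms such as SD-JWT for selective disclosure), not this simplified dictionary; the issuer DID and claim names below are invented:

```python
# Toy selective-disclosure sketch: disclose only the claims a verifier
# requests. A real VC carries cryptographic proofs; this shows only the
# data-minimization idea.
credential = {
    "issuer": "did:example:dmv",  # hypothetical issuer identifier
    "claims": {
        "name": "A. Holder",
        "date_of_birth": "1990-01-01",
        "address": "123 Main St",
        "license_class": "B",
    },
}

def present(credential: dict, requested: set[str]) -> dict:
    """Return a presentation containing only the requested claims."""
    return {
        "issuer": credential["issuer"],
        "claims": {k: v for k, v in credential["claims"].items() if k in requested},
    }

# A verifier checking driving eligibility needs the license class and age,
# not the holder's address:
print(present(credential, {"license_class", "date_of_birth"}))
```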

5. Decentralized governance

Finally, we come to one of the less well-known features of decentralized identity: decentralized ecosystem governance — or, as we call it, DEGov — which is based on the Decentralized Identity Foundation’s Credential Trust Establishment specification.

DEGov is a way for humans to structure interaction through trust. The governance authority for a particular use case publishes trust lists for credential issuers and credential verifiers in a machine-readable form. This is downloaded by each participant in a credential ecosystem, and it enables a credential holder’s software to automatically recognize that an AI agent issued by a given organization is trustable. These files also contain rules for data presentation workflows.

DEGov enables you to easily orchestrate data sharing: for example, a Digital Travel Credential issued by an airline for a passenger identity can be used by a hotel to automate check-in, because the hotel’s verifier software has downloaded a governance file listing the airline as a trusted credential issuer (this also facilitates offline verification, as governance rules are cached).
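A sketch of what that verifier-side check might look like, assuming a simplified governance-file layout (the Credential Trust Establishment specification defines the real schema; the DIDs and cache policy here are assumptions):

```python
# Hedged sketch: a verifier checks an issuer against a cached governance
# file. The JSON layout below is illustrative, not the actual spec schema.
import json, time

governance = json.loads("""
{
  "roles": {
    "credential_issuer": ["did:example:airline"],
    "verifier": ["did:example:hotel"]
  }
}
""")
cached_at = time.time()      # stamped when the file was downloaded
MAX_CACHE_AGE_S = 24 * 3600  # assumption: refresh the trust list daily

def issuer_is_trusted(issuer_did: str) -> bool:
    """Offline-friendly check: rely on the cached trust list while fresh."""
    fresh = (time.time() - cached_at) < MAX_CACHE_AGE_S
    return fresh and issuer_did in governance["roles"]["credential_issuer"]

# The hotel can auto-accept a travel credential issued by the airline:
print(issuer_is_trusted("did:example:airline"))  # True
```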

The value of decentralized governance really comes to the fore when you start building autonomous systems with multiple AI agents. You can easily program which agent can interact with which resource and what information needs to be presented. You can orchestrate interaction and authentication across different departments, domains, and sectors.

As you can also enable devices, such as sensors, to generate Verifiable Credentials containing the data they record, you can rapidly share trusted data across domains for use by pre-permissioned AI agents.

In sum, decentralized identity is more than identity or identity authentication — it’s a way to authenticate and share any kind of data across any kind of environment, seamlessly and securely. It’s a way to create digital relationships between participants, even virtual ones.

Indicio ProvenAI

We designed Indicio ProvenAI to do all of the above. It’s the counterpart of the Proven technology we’re deploying to manage borders, KYC, travel and everything in between. It’s why we are now a member of the NVIDIA Inception program.

We see decentralized identity as the key to AI unlocking the right kind of data in the right way. It’s the path to trust, and trust means value.

Contact Indicio to learn how we’re building a world filled with intelligent authentication.

The post Five reasons why AI needs decentralized identity appeared first on Indicio.


ComplyCube

The CryptoCubed Newsletter: October Edition


In this edition of CryptoCubed, we look at the top crypto cases worldwide. This includes Canada's record-breaking $177 million fine against Cryptomus, Dubai's ongoing enforcement sweep on virtual asset firms, and Trump's pardon.

The post The CryptoCubed Newsletter: October Edition first appeared on ComplyCube.


How to Use a KYC AML Pricing Benchmark Effectively


Defining a pricing benchmark for KYC and AML is an important step in managing compliance expenses effectively. Understanding the factors that drive the costs of KYC and AML helps organizations make more informed pricing decisions.

The post How to Use a KYC AML Pricing Benchmark Effectively first appeared on ComplyCube.


Elliptic

Hosted vs unhosted wallets: Compliance risks and practical solutions

Any institution engaging with digital assets faces a persistent compliance challenge: How should you handle transactions involving unhosted wallets when regulators have not yet provided clear guidance on specific obligations? As customer demand for crypto services intensifies, the question of hosted vs unhosted wallets has moved from theoretical to operationally urgent.



Thales Group

AT&T and Thales collaborate to revolutionize IoT deployments with new eSIM solution

30 Oct 2025

AT&T and Thales introduce a next-generation eSIM solution, powered by the latest GSMA IoT specification (SGP.32), giving enterprises a consolidated platform to remotely and securely manage IoT subscriptions while preserving device integrity on a highly secure and reliable network. Backed by Thales’ “secure by design” approach, this solution targets the highest level of cybersecurity for IoT devices and supports compliance with evolving global cybersecurity regulations. Optimized for large-scale IoT deployments, the new eSIM management platform simplifies operations, reduces costs, and delivers advanced automation beyond SGP.32 standards to support diverse industries and device types.

With over 5.8 billion IoT cellular connections expected globally by 2030 (GSMA Intelligence) — powering everything from smart meters to wearable health trackers — the need for secure, scalable, and easy-to-manage connectivity is greater than ever. AT&T, a leader in connectivity and IoT solutions, and Thales, a global leader in advanced Cyber & Digital technologies, announce the launch of a new eSIM solution designed to help businesses remotely activate and manage IoT devices. This eSIM solution, powered by Thales Adaptive Connect (TAC), becomes a key part of AT&T’s global IoT solution, AT&T Virtual Profile Management for IoT, and can support many industries worldwide including automotive, smart cities, healthcare and utilities.

Compliant with the GSMA SGP.32 standard¹, the new solution enables customers to ship connected devices anywhere in the world with one single, pre-integrated eSIM from Thales, then seamlessly activate the correct local connectivity profile remotely, eliminating the need for any physical access to the device. This results in faster launches and simpler logistics for global IoT deployments. It also enables AT&T and its customers to easily manage connectivity policies, diagnostics, and subscription changes entirely over the air, through a single unified industry-certified interface.

This solution also adds advanced automation to simplify the remote eSIM management of large numbers of devices. It automates complex tasks, such as switching subscriptions or updating fleet rules, so enterprises can spend less time on logistics and operations while bringing new products and services to market faster.

Thanks to these advanced features, Thales’ eSIM solution (TAC) gives companies the flexibility to localize connectivity with AT&T’s network partners or adjust device subscriptions across large fleets without hardware changes, helping optimize costs, supply chains, coverage, and performance.

The service is now available for commercial use and supports customers worldwide.

“At AT&T, we deliver intelligent IoT solutions you can trust — highly secure, end-to-end, and built to scale,” said Cameron Coursey, VP of AT&T Connected Solutions. “Our state-of-the-art approach, paired with Thales’ solution, will help customers reduce friction and gain control of managing their own devices with reliable connectivity.”
“We are entering a new era for remote eSIM Provisioning, ready to power billions of IoT devices, and we are proud to collaborate with AT&T in delivering smarter and safer IoT connectivity around the world,” said Eva Rudin, EVP Mobile Connectivity Solutions at Thales. “With Thales Adaptive Connect, we’re ensuring that every connected device benefits from strong security, reliable service, and simplified management — from the first connection and throughout its lifetime.”

¹ The GSMA SGP.32 standard is the latest specification from the GSM Association for eSIM (embedded SIM) Remote SIM Provisioning (RSP), covering the remote eSIM management of Internet of Things (IoT) devices and other types of mobile device deployments.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About AT&T

We help more than 100 million U.S. families, friends and neighbors, plus nearly 2.5 million businesses, connect to greater possibility. From the first phone call 140+ years ago to our 5G wireless and multi-gig internet offerings today, we @ATT innovate to improve lives. For more information about AT&T Inc. (NYSE:T), please visit us at about.att.com. Investors can learn more at investors.att.com.


IDnow

Breaking down biases in AI-powered facial verification.

How IDnow’s latest collaborative research project, MAMMOth, will make the connected world fairer for all – regardless of skin tone.

While the ability of artificial intelligence (AI) to optimize certain processes is well documented, there are still genuine concerns that unfair and unequal data processing can lead to discriminatory practices and social inequality.

In November 2022, IDnow, alongside 12 European partners, including academic institutions, associations and private companies, began the MAMMOth project, which set out to explore ways of addressing bias in face verification systems. 

Funded by the European Research Executive Agency, the goal of the three-year project, which wrapped on October 30, 2025, was to study existing biases and create a toolkit for AI engineers, developers, and data scientists so they may better identify and mitigate biases in datasets and algorithm outputs.

Three use cases were identified:    

Face verification in identity verification processes.

Evaluation of academic work. In the academic world, the reputation of a researcher is often tied to the visibility of their scientific papers and how frequently they are cited. Studies have shown that on certain search engines, women and authors from less prestigious countries or universities tend to be less represented.

Assessment of loan applications.

IDnow predominantly focused on the face verification use case, with the aim of implementing methods to mitigate biases found in algorithms.   

Data diversity and face verification bias.

Even the most state-of-the-art face verification models are typically trained on conventional public datasets in which minority demographics are underrepresented. A lack of diversity in data makes it difficult for models to perform well on underrepresented groups, leading to higher error rates for people with darker skin tones.

To address this issue, IDnow proposed using a ‘style transfer’ method to generate new identity card photos that mimic the natural variation and inconsistencies found in real-world data. Augmenting the training dataset with these synthetic images improves model robustness by exposing the model to a wider range of variations and further reduces bias against darker-skinned faces, significantly lowering error rates for darker-skinned users and providing a better user experience for all.
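In code terms, the augmentation step might look something like the sketch below. The style-transfer function is a placeholder for whatever model IDnow actually used, and the augmentation ratio is an assumption; this is an illustration of the technique, not the published pipeline:

```python
# Illustrative style-transfer data augmentation for face verification
# training. Function names and the 50% ratio are assumptions.
import random

def apply_id_photo_style(selfie_image, style_reference):
    """Placeholder for the style-transfer model: re-render a selfie with the
    color/print characteristics of an ID-card photo."""
    return selfie_image  # a real implementation would be a neural network

def augment_dataset(samples, style_references, ratio=0.5):
    """Add ID-style synthetic variants for a fraction of training samples so
    the model sees realistic document-photo variation."""
    augmented = list(samples)
    for image, identity in random.sample(samples, int(len(samples) * ratio)):
        style = random.choice(style_references)
        augmented.append((apply_id_photo_style(image, style), identity))
    return augmented

# e.g. augment_dataset([("selfie1.png", "id-001")], ["id_card_style_A"], ratio=1.0)
```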

The MAMMOth project has equipped us with the tools to retrain our face verification systems to ensure fairness and accuracy – regardless of a user’s skin tone or gender. Here’s how IDnow Face Verification works.

When registering for a service or onboarding, IDnow runs the Capture and Liveness step, which detects the face and assesses image quality. We also run a liveness/anti-spoofing check to ensure that photos, screen replays, or paper masks are not being used.

The image is then cross-checked against a reference source, such as a passport or ID card. During this stage, faces from the capture step and the reference face are converted into compact facial templates, capturing distinctive features for matching. 

Finally, the two templates are compared to determine a “match” vs. “non‑match”, i.e. do the two faces belong to the same person or not? 
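A minimal sketch of that final matching step, assuming each face has already been converted into a fixed-length embedding (the “facial template”). The cosine-similarity comparison and the 0.6 threshold are illustrative; production systems tune the metric and threshold to hit specific error-rate targets:

```python
# Toy template-matching step: compare two face embeddings and decide
# "match" vs "non-match". Threshold and similarity metric are assumptions.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_match(selfie_template: list[float], id_template: list[float],
             threshold: float = 0.6) -> bool:
    """Do the two faces belong to the same person?"""
    return cosine_similarity(selfie_template, id_template) >= threshold

print(is_match([0.1, 0.9, 0.3], [0.12, 0.88, 0.33]))  # similar templates -> True
```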

Through hard work by IDnow and its partners, we developed the MAI-BIAS Toolkit to enable developers and researchers to detect, understand, and mitigate bias in datasets and AI models.

We are proud to have been a part of such an important collaborative research project. We have long recognized the need for trustworthy, unbiased facial verification algorithms. This is the challenge that IDnow and MAMMOth partners set out to overcome, and we are delighted to have succeeded.

Lara Younes, Engineering Team Lead and Biometrics Expert at IDnow.
What’s good for the user is good for the business.

While the MAI-BIAS Toolkit has demonstrated clear technical improvements in model fairness and performance, the ultimate validation, as is often the case, will lie in the ability to deliver tangible business benefits.  

IDnow has already begun to retrain its systems with learnings from the project to ensure our solutions are enhanced not only in terms of technical performance but also in terms of ethical and social responsibility.

Top 5 business benefits of IDnow’s unbiased face verification.

Fairer decisions: The MAI-BIAS Toolkit ensures all users, regardless of skin color or gender, are given equal opportunities to pass face verification checks, ensuring that no group is unfairly disadvantaged.

Reduced fraud risks: By addressing biases that may create security gaps for darker-skinned users, the MAI-BIAS Toolkit strengthens overall fraud prevention by offering a more harmonized fraud detection rate across all demographics.

Explainable AI: Knowledge is power, and the Toolkit provides actionable insights into the decision-making processes of AI-based identity verification systems. This enhances transparency and accountability by clarifying the reasons behind specific algorithmic determinations.

Bias monitoring: Continuous assessment and mitigation of biases are supported throughout all stages of AI development, ensuring that databases and models remain fair with each update to our solutions.

Reducing biases: By following the recommendations provided in the Toolkit, research methods developed within the MAMMOth project can be applied across industries and contribute to the delivery of more trustworthy AI solutions.

As the global adoption of biometric face verification systems continues to increase across industries, it’s crucial that any new technology remains accurate and fair for all individuals, regardless of skin tone, gender or age.

Montaser Awal, Director of AI & ML at IDnow.

“The legacy of the MAMMOth project will continue through its open-source tools, academic resources, and policy frameworks,” added Montaser. 

For a more technical deep dive into the project from one of our research scientists, read our blog ‘A synthetic solution? Facing up to identity verification bias.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Ontology

How Ontology Blockchain Can Strengthen Zambia’s Digital Ecosystem

Introduction

Zambia, like many African nations, is on a path toward digital transformation. With growing mobile penetration, fintech adoption, and government interest in digital services, the country needs reliable, secure, and scalable technologies to support inclusive growth. One of the most promising tools is Ontology Blockchain — a high-performance, open-source blockchain specializing in digital identity, data security, and decentralized trust.

Unlike general-purpose blockchains, Ontology focuses on building trust infrastructure for individuals, businesses, and governments. By leveraging Ontology’s features, Zambia can unlock innovation in financial inclusion, supply chain transparency, e-governance, and education.

1. Digital Identity for All Zambians

A key challenge in Zambia is limited access to official identification. Without proper IDs, many citizens struggle to open bank accounts, access healthcare, or register land. Ontology’s ONT ID (a decentralized digital identity solution) could:

Provide every citizen with a secure, self-sovereign digital ID stored on the blockchain.
Link identity with services such as mobile money, health records, and education certificates.
Reduce fraud in financial services, voting systems, and government benefit programs.

This supports Zambia’s push for universal access to identification while protecting privacy.

2. Financial Inclusion & Digital Payments

With a large unbanked population, Zambia’s fintech growth depends on trust and interoperability. Ontology offers:

Decentralized finance (DeFi) solutions for micro-loans, savings, and remittances without reliance on traditional banks.
Cross-chain compatibility to connect Zambian fintech startups with global crypto networks.
Reduced transaction fees compared to traditional remittance channels, making it cheaper for Zambians abroad to send money home.

3. Supply Chain Transparency (Agriculture & Mining)

Agriculture and mining are Zambia’s economic backbones, but inefficiencies and lack of transparency hinder growth. Ontology can:

Enable farm-to-market tracking of crops, ensuring farmers get fair prices and buyers trust product origins.
Provide traceability in copper and gemstone mining, reducing smuggling and boosting global market confidence.
Help cooperatives and SMEs access financing by proving their transaction history and supply chain credibility via blockchain records.

4. E-Government & Service Delivery

The Zambian government aims to digitize public services. Ontology Blockchain could:

Power secure land registries, reducing disputes and fraud.
Create tamper-proof records for civil registration (births, deaths, marriages).
Support digital voting systems that are transparent, verifiable, and resistant to manipulation.
Improve public procurement processes by reducing corruption through transparent contract tracking.

5. Education & Skills Development

Certificates and qualifications are often hard to verify in Zambia. Ontology offers:

Blockchain-based education records: universities and colleges can issue tamper-proof digital diplomas.
A verifiable skills database that employers and training institutions can trust.
Empowerment of youth in blockchain and Web3 development, opening new economic opportunities.

6. Data Security & Trust in the Digital Economy

Zambia’s growing reliance on mobile money and e-commerce requires strong data protection. Ontology brings:

User-controlled data sharing: individuals decide who can access their personal information.
Decentralized identity verification for businesses, preventing fraud in digital transactions.
Strong compliance frameworks to align with Zambia’s Data Protection Act of 2021.

Challenges to Overcome

Digital literacy gaps: Zambian citizens need training to use blockchain-based services.

Regulatory clarity: Zambia must craft clear policies around blockchain and cryptocurrencies.

Infrastructure: reliable internet and mobile access are essential for blockchain adoption.

Conclusion

Ontology Blockchain provides Zambia with more than just a digital ledger — it offers a trust framework for identity, finance, governance, and innovation. By integrating Ontology into key sectors like agriculture, health, mining, and public administration, Zambia can accelerate its journey toward a secure, inclusive, and transparent digital economy.

This is not just about technology; it’s about empowering citizens, building investor confidence, and positioning Zambia as a leader in blockchain innovation in Africa.

How Ontology Blockchain Can Strengthen Zambia’s Digital Ecosystem was originally published in OntologyNetwork on Medium.


IDnow

Putting responsible AI into practice: IDnow’s work on bias mitigation

As part of the EU-funded MAMMOth project, IDnow shows how bias in AI systems can be detected and reduced – an important step toward trustworthy digital identity verification.

London, October 30, 2025 – After three years of intensive work, the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project has published key findings on reducing bias in artificial intelligence (AI) systems. Funded by the EU’s Horizon Europe program, the project brought together a consortium of leading universities, research centers, and private companies across Europe.

IDnow, a leading identity verification platform provider in Europe, was directly involved in the implementation of the project as an industry partner. Through targeted research and testing, an optimized AI model was developed to significantly reduce bias in facial recognition, which is now integrated into IDnow’s solutions.

Combating algorithmic bias in practice

Facial recognition systems that leverage AI are increasingly used for digital identity verification, for example, when opening a bank account or registering for car sharing. Users take a digital image of their face, and AI compares it with their submitted ID photo. However, such systems can exhibit bias, leading to poorer results for certain demographic groups. This is due to the underrepresentation of minorities in public data sets, which can result in higher error rates for people with darker skin tones. 

A study by MIT Media Lab showed just how significant these discrepancies can be: while facial recognition systems had an error rate of only 0.8% for light-skinned men, the error rate for dark-skinned women was 34.7%. These figures clearly illustrate how unbalanced many AI systems are – and how urgent it is to rely on more diverse data. 

As part of MAMMOth, IDnow worked specifically to identify and minimize such biases in facial recognition – with the aim of increasing both fairness and reliability.

Research projects like MAMMOth are crucial for closing the gap between scientific innovation and practical application. By collaborating with leading experts, we were able to further develop our technology in a targeted manner and make it more equitable.

Montaser Awal, Director of AI & ML at IDnow.
Technological progress with measurable impact

As part of the project, IDnow investigated possible biases in its facial recognition algorithm, developed its own approaches to reduce these biases, and additionally tested bias mitigation methods proposed by other project partners.

For example, as ID photos often undergo color adjustments by issuing authorities, skin tone can pose a challenge, especially if the calibration is not optimized for darker skin tones. Such miscalibration can lead to inconsistencies between a selfie image and the person’s appearance in an ID photo.

To solve this problem, IDnow used a style transfer method to expand the training data, which allowed the model to become more resilient to different conditions and significantly reduced the bias toward darker skin tones.

Tests on public and company-owned data sets showed that the new training method achieved an 8% increase in verification accuracy – while using only 25% of the original training data volume. Even more significantly, the accuracy difference between people with lighter and darker skin tones was reduced by over 50% – an important step toward fairer identity verification without compromising security or user-friendliness. 
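To make the “accuracy difference reduced by over 50%” claim concrete, here is the gap metric computed on invented numbers. IDnow reports only the relative improvements, so the group accuracies below are hypothetical:

```python
# Worked sketch of the fairness metric implied above: the accuracy gap
# between demographic groups, before and after retraining. All accuracy
# values are invented for illustration.
def accuracy_gap(acc_by_group: dict[str, float]) -> float:
    return max(acc_by_group.values()) - min(acc_by_group.values())

before = {"lighter_skin": 0.97, "darker_skin": 0.91}   # hypothetical
after  = {"lighter_skin": 0.98, "darker_skin": 0.955}  # hypothetical

gap_before, gap_after = accuracy_gap(before), accuracy_gap(after)
print(f"gap before: {gap_before:.3f}, after: {gap_after:.3f}, "
      f"reduction: {1 - gap_after / gap_before:.0%}")  # -> reduction: 58%
```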

The resulting improved AI model was integrated into IDnow’s identity verification solutions in March 2025 and has been in use ever since.

Setting the standard for responsible AI

In addition to specific product improvements, IDnow plans to use the open-source toolkit MAI-BIAS developed in the project in internal development and evaluation processes. This will allow fairness to be comprehensively tested and documented before new AI models are released in the future – an important contribution to responsible AI development. 

“Addressing bias not only strengthens fairness and trust, but also makes our systems more robust and adoptable,” adds Montaser Awal. “This will raise trust in our models and show that they work equally reliably for different user groups across different markets.”


Ockto

Fraud with forged documents is on the rise: source verification offers a solution


Naarden, 30 October 2025 – Last month, police arrested eight people in Amsterdam and Zaandam on suspicion of large-scale mortgage fraud, money laundering, and forgery.
According to the police, the case revolves around false employer statements and fictitious employment contracts. It underscores once again how vulnerable processes are when they rely on documents supplied by consumers.


auth0

Auth0 for Scaling Apps: Advanced Security and Authentication

Discover the three key signs that your app is outgrowing its user authentication setup. Learn to solve these challenges and scale with Auth0's advanced features.

FastID

Rewriting HTML with the Fastly JavaScript SDK

Boost web performance with Fastly’s JS SDK v3.35.0. Use the new streaming HTML rewriter to customize, cache, and transform pages faster and more efficiently.

Resilience by Design: Lessons in Multi-Cloud Readiness

Stay online when it matters most. Learn how Fastly's multi-cloud and edge strategies protect against outages, keeping your systems fast and reliable.

Wednesday, 29. October 2025

liminal (was OWI)

Redefining Age Assurance

The post Redefining Age Assurance appeared first on Liminal.co.



Elliptic

Crypto regulatory affairs: EU sanctions target A7A5 Ruble-backed stablecoin

In its latest round of sanctions on Russia, the European Union has taken aim at the A7A5 stablecoin - part of efforts to choke off Russia’s sanctions circumvention schemes. 



Ocean Protocol

Ocean Protocol: Q4 2025 Update

A look at what the Ocean core team has built, and what’s to come

· 1. Introduction
· 2. Ocean Nodes: from Foundation to Framework
· 3. Annotators Hub: Community-driven data annotations
· 4. Lunor: Crowdsourcing Intelligence for AI
· 5. Predictoor and DeFi Trading
· 6. bci/acc: accelerate brain-computer interfaces towards human superintelligence
· 7. Conclusion

1. Introduction

Back in June, we shared the Ocean Protocol Product Update half-year check-in for 2025 where we outlined the progress made across Ocean Nodes, Predictoor, and other Ocean ecosystem initiatives. This post is a follow-up, highlighting the major steps taken since then and what’s next as we close out 2025.

We’re heading into the final stretch of 2025, so it’s only fitting to have a look over what the core team has been working on and what is soon to be released. Ocean Protocol was built to level the playing field for AI and data. From day one, the vision has been to make data more accessible, AI more transparent, and infrastructure more open. The Ocean tech stack is built for that mission: to combine decentralized compute, smart contracts, and open data marketplaces to help developers, researchers, and companies tap into the true potential of AI.

This year has been about making that mission real. Here’s how:

2. Ocean Nodes: from Foundation to Framework

Since the launch of Ocean Nodes in August 2024, the Ocean community has shown what’s possible when decentralized infrastructure meets real-world ambition. With over 1.7 million nodes deployed across 70+ countries, the network has grown far beyond expectations.

Throughout 2025, the focus has been on reducing friction, boosting usability, and enabling practical workflows. A highlight: the release of the Ocean Nodes Visual Studio Code extension. It lets developers and data scientists run compute jobs directly from their editor — free (within defined parameters), fast, and frictionless. Whether they’re testing algorithms or prototyping dApps, it’s the quickest path to real utility. The extension is now available on the VS Code Marketplace, as well as in Cursor and other distributions, via the Open VSX registry.

We’ve also seen strong momentum from partners like NetMind and Aethir, who’ve helped push GPU-ready infrastructure into the Ocean stack. Their contribution has paved the way for Phase 2, a major upgrade that the core team is still actively working on and that’s set to move the product from PoC to real production-grade capabilities.

That means:

Compute jobs that actually pay, with a pay-as-you-go system in place
Benchmarking GPU nodes to shape a fair and scalable reward model
Real-world AI workflows: from model training to advanced evaluation

And while Phase 2 is still in active development, it’s now reached a stage where user feedback is needed. To get there, we’ve launched the Alpha GPU Testers program, for a small group of community members to help us validate performance, stability and reward mechanics across GPU nodes. Selected participants simply need to set their GPU node up and make it available for the core team to run benchmark tests. As a thank-you for their effort and uptime, each successfully tested node will receive a $100 reward.

Key information:

Node selection: Oct 24–31, 2025
Benchmark testing: Nov 3–17, 2025
Reward: $100 per successfully tested node
Total participants: up to 15, on a first-come, first-served basis; only one node per owner is allowed

With Phase 2 of Ocean Nodes, we will be laying the groundwork for something even bigger: the Ocean Network. Spoiler alert: it will be a peer-to-peer AI Compute-as-a-Service platform designed to make GPU infrastructure accessible, affordable, and censorship-resistant for anyone who needs it.

More details on the transition are coming soon. But if you’re running a node, building on Ocean, or following along, you’re already part of it.

What else have we launched?

3. Annotators Hub: Community-driven data annotations

Current challenge: CivicLens, ends on Oct 31, 2025

AI doesn’t work without quality data. And creating it is still a huge bottleneck. That’s why we’ve launched the Annotators Hub: a structured, community-driven initiative where contributors help evaluate and shape high-quality datasets through focused annotation challenges.

The goal is to improve AI performance by improving what it learns from: the data. High-quality annotations are the foundation for reliable, bias-aware, and domain-relevant models. And Ocean is building the tools and processes to make that easier, more consistent, and more inclusive.

Human annotations remain the single most effective way to improve AI performance, especially in complex domains like education and politics. By contributing to the Annotators Hub, Ocean community members directly help build better models that can power adaptive tutors, improve literacy tools, and even make political discourse more accessible.

For example, LiteracyForge, the first challenge, run in collaboration with Lunor.ai, focused on improving adaptive learning systems by collecting high-quality evaluations of reading comprehension material. The aim: to train AI that better understands question complexity and supports literacy tools. Here are a few highlights, as the challenge is currently being evaluated:

49,832 total annotations submitted
19,973 unique entries
147 annotators joined throughout the three weeks of the first challenge
17,581 double-reviewed annotations

The second challenge will finish in just 2 days, on Friday, October 31. This time we’re analyzing speeches from the European Parliament to help researchers, civic organizations, and the general public better understand political debates, predict voting behavior, and make parliamentary discussions more transparent and accessible. There’s still time to jump in and become an annotator.

Yes, this initiative can be seen as a “launchpad” for a marketplace of ready-to-use, annotated data, designed to give everyone access to training-ready data that meets real-world quality standards. But more on that in an upcoming blog post.

As we get closer to the end of 2025, we’re doubling down on utility, usability, and adoption. The next phase is about scale and about creating tangible ways for Ocean’s community to contribute, earn, and build.

4. Lunor: Crowdsourcing Intelligence for AI

Lunor is building a crowdsourced intelligence ecosystem where anyone can co-create, co-own, and monetize Intelligent Systems. As one of the core projects within the Ocean Ecosystem, Lunor represents a new approach to AI, one where the community drives both innovation and ownership.

Lunor’s achievements so far, together with Ocean Protocol, include:

Over $350,000 in rewards distributed from the designated Ocean community wallet
More than 4,000 contributions submitted
38 structured data and AI quests completed

Assets from Lunor Quests are published on the Ocean stack, while future integration with Ocean nodes will bring private and compliant Compute-to-Data for secure model training.

Together with Ocean, Lunor has hosted quests like LiteracyForge, showcasing how open collaboration can unlock high-quality data and AI for education, sustainability, and beyond.

5. Predictoor and DeFi Trading

About Predictoor. In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. The “earn $” part is key, because it fosters usage.

Predictoor involves two groups of people:

Predictoors: data scientists who use AI models to predict what the price of ETH, BTC, etc. will be 5 (or 60) minutes into the future. The scientists run bots that submit these predictions onto the chain every 5 minutes. Predictoors earn $ based on sales of the feeds, including sales from Ocean’s Data Farming incentives program.

Traders: run bots that input predictoors’ aggregated predictions, to use as alpha in trading. It’s another edge for making $ while trading.

Predictoor is built using the Ocean stack. And, it runs on Oasis Sapphire; we’ve partnered with the Oasis team.
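For intuition, a Predictoor-style prediction bot reduces to a simple loop: model the feed’s direction each epoch and submit the prediction on-chain. The sketch below uses placeholder functions injected by the caller rather than Ocean’s actual SDK, and the toy moving-average model is purely illustrative:

```python
# Hypothetical shape of a Predictoor-style bot loop. The data source and
# on-chain submission are placeholders supplied by the caller; this is not
# Ocean's SDK.
import time

EPOCH_S = 300  # Predictoor epochs: 5 minutes

def predict_up(history: list[float]) -> bool:
    """Toy model: predict 'up' if the short-term mean beats the long-term mean."""
    short, long = history[-6:], history[-36:]
    return sum(short) / len(short) > sum(long) / len(long)

def run_bot(get_prices, submit_prediction, stake: float):
    while True:
        prices = get_prices("ETH/USDT")                  # placeholder data source
        direction = predict_up(prices)
        submit_prediction("ETH/USDT", direction, stake)  # placeholder on-chain call
        time.sleep(EPOCH_S)
```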

Predictoor traction. Since mainnet launch in October 2023, Predictoor has accumulated about $2B total volume. [Source: DappRadar]. Furthermore, in spring 2025, our collaborators at Oasis launched WT3, a decentralized, verifiable trading agent that uses Predictoor feeds for its alpha.

Predictoor past, present, future. After the Predictoor product and rewards program launched in fall 2023, the next major goal was “traders to make serious $”. If that is met, then traders will spend $ to buy feeds, which leads to serious $ for predictoors. The Predictoor team worked toward this primary goal throughout 2024, testing trading strategies with real $. A bonus side effect was improved analytics and tooling.

Obviously, “make $ trading” is not an easy task. It’s a grind that takes skill and perseverance. The team has kept ratcheting forward, inching ever closer to making money. Starting in early 2025, the live trading algorithms started to bear fruit. The team’s 2025 plan was — and is — to keep grinding toward the goal of “making serious $ trading”. It’s going well enough that there is work toward a spinoff. We can expect trading work to be the main progress in Predictoor throughout 2025. Everything else in Predictoor and related will follow.

6. bci/acc: accelerate brain-computer interfaces towards human superintelligence

Another stream in Ocean has been taking form: bci/acc. Ocean co-founder Trent McConaghy first gave a talk on bci/acc at NASA in Oct 2023, and published a seminal blog post on it a couple months later. Since then, he’s given 10+ invited talks and podcasts, including Consensus 2025 and Web3 Summit 2025.

bci/acc thesis. AI will likely reach superintelligence in the next 2–10 years. Humanity needs a competitive substrate. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses: bci/acc. How do we make it happen? We’ll need BCI killer apps like silent messaging to create market demand, which in turn drive BCI device evolution. The net result is human superintelligence.

Ocean bci/acc team. In January 2025, Ocean assembled a small research team to pursue bci/acc, with the goal to create BCI killer apps that it can take to market. The team has been building towards this ever since: working with state-of-the-art BCI devices, constructing AI-data pipelines, and running data-gathering experiments. Ocean-style decentralized access control will play a role, as neural signals are perhaps the most private data of all: “not your keys, not your thoughts”. In line with Ocean culture and practice, we look forward to sharing more details once the project has progressed to tangible utility for target users.

7. Conclusion

2025 has been a year of turning vision into practice. From Predictoor’s trading traction and Ocean Nodes’ push into a GPU-powered Phase 2 to the launch of the Annotators Hub and ecosystem projects like Lunor driving community-led AI forward, the pieces of the Ocean vision are falling into place.

The focus is clear for the Ocean core team in Q4: scale, usability, and adoption. Thanks for being part of it. The best is yet to come.

Ocean Protocol: Q4 2025 Update was originally published in Ocean Protocol on Medium.


Ontology

A New Chapter for ONG: Governance Vote on Tokenomics Adjustment

Ontology’s token economy has always been designed to evolve alongside the network. This week, that evolution takes another step forward. A new governance proposal has been initiated by an Ontology Consensus Node, calling on all Triones nodes to vote on an update to ONG tokenomics. The update aims to strengthen the foundation for long-term sustainability and fairer incentives across the ecosystem.


Voting will take place on OWallet from October 28, 2025 (00:00 UTC) through October 31, 2025 (00:00 UTC).

Understanding the Current Model

Let’s start with where things stand today.

Total ONG Supply: 1 billion
Total Released: ≈ 450 million (≈ 430 million circulating)
Annual Release: ≈ 31.5 million ONG
Release Curve: All ONG unlocked over 18 years. The remaining 11 years follow a mixed release pace: 1 ONG per second for 6 years, then 2, 2, 2, 3, and 3 ONG per second in the final 5 years.

Currently, both unlocked ONG and transaction fees flow back to ONT stakers as incentives, generating an annual percentage rate of roughly 23 percent at current prices.
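As a back-of-the-envelope check on those figures (a sketch only; the on-chain schedule is authoritative), 1 ONG per second works out to roughly the stated 31.5 million ONG per year, and the quoted curves roughly reconcile with the supply caps:

```python
# Sanity-checking the release-curve arithmetic quoted above. The "~" figures
# in the post are approximate, so these totals only need to roughly agree.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

# 1 ONG/second matches the stated ~31.5 million ONG annual release:
print(f"{SECONDS_PER_YEAR / 1e6:.1f}M ONG/year")  # 31.5M

# Remaining 11 years of the current curve: 6y @ 1/s, then 2, 2, 2, 3, 3 /s
remaining_current = sum(rate * SECONDS_PER_YEAR for rate in [1] * 6 + [2, 2, 2, 3, 3])
print(f"{remaining_current / 1e6:.0f}M")  # ~568M; with ~450M released, near the 1B cap

# Proposed model: 800M cap with ~450M released, at a flat 1 ONG/s
remaining_capped = 800e6 - 450e6
years_left = remaining_capped / SECONDS_PER_YEAR
print(f"{remaining_capped / 1e6:.0f}M over ~{years_left:.0f} more years")  # 350M, ~11 years
```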

What the Proposal Changes

The new proposal suggests several key adjustments to rebalance distribution and align long-term incentives:

Cap the total ONG supply at 800 million.
Lock ONT and ONG equivalent to 100 million ONG in value, effectively removing them from circulation.
Strengthen staker rewards and ecosystem growth by making the release schedule steadier and liquidity more sustainable.

Implementation Plan

1. Adjust the ONG Release Curve

Total supply capped at 800 million.
Release period extended from 18 to 19 years.
Maintain a 1 ONG per second release rate for the remaining years.

2. Allocation of Released ONG

80 percent directed to ONT staking incentives.
20 percent, plus transaction fees, contributed to ecosystem liquidity.

3. Swap Mechanism

Use ONG to acquire ONT within a defined fluctuation range.
Pair the two tokens to create liquidity and receive LP tokens.
Burn the LP tokens to permanently lock both ONG and ONT, tightening circulating supply.

Community Q & A

Q1. How long will the ONT + ONG (worth 100 million ONG) be locked?

It’s a permanent lock.

Q2. Why does the total ONG supply decrease while the release period increases?

Under the current model, release speeds up in later years. This proposal keeps the rate fixed at 1 ONG per second, so fewer tokens are released overall but over a slightly longer span — about 19 years in total.

Q3. Will this affect ONT staking APY?

It may, but not necessarily negatively. While staking rewards in ONG drop 20 percent, APY depends on market prices of ONT and ONG. If ONG appreciates as expected, overall returns could remain steady or even rise.

Q4. How does this help the Ontology ecosystem?

Capping supply at 800 million and permanently locking 100 million ONG will make ONG scarcer. With part of the released ONG continuously swapped for ONT to support DEX liquidity, the effective circulating supply may fall to around 750 million. That scarcity, paired with new products consuming ONG, could strengthen price dynamics and promote sustainable network growth. More on-chain activity would also mean stronger rewards for stakers.

Q5. Who can vote, and how?

All Triones nodes have the right to vote through OWallet during the official voting window.

Why It Matters

This isn’t just a supply adjustment. It’s a structural change designed to balance reward distribution, liquidity, and governance in a way that benefits both the Ontology network and its long-term participants.

Every vote counts. By joining this governance round, Triones nodes have a direct hand in shaping how value flows through the Ontology ecosystem — not just for today’s staking cycle, but for the years of decentralized growth ahead.

A New Chapter for ONG: Governance Vote on Tokenomics Adjustment was originally published in OntologyNetwork on Medium.


Elliptic

Elliptic submits recommendations to US Treasury on ways to fight crypto crime

In August 2025, the US Department of the Treasury issued a request for comment on innovative methods to detect illicit activity involving digital assets. Treasury specifically sought input on four key technologies:



PingTalk

Ping YOUniverse 2025: Resilient Trust in Motion

Ping YOUniverse 2025 traveled to Sydney, Melbourne, Singapore, Jakarta, Austin, London, and Amsterdam. Read the highlights of our global conference, and see how identity, AI, and Resilient Trust took center stage.

Identity is moving fast: AI agents, new fraud patterns, and tightening regulations are reshaping the identity landscape under our feet. At Ping YOUniverse 2025, thousands of identity leaders, customers, and partners came together to confront this dramatic shift.

We compared notes on what matters now:

Stopping account takeover without killing conversion, so security doesn’t tax your revenue engine.

Orchestrating trust signals across apps and partners, so decisions get smarter everywhere.

Shrinking risk and cost with just‑in‑time access, so the right access appears—and disappears—on demand.

This recap distills the most useful takeaways for you: real-world use cases, technical demos within our very own Trust Lab, and deep-dive presentations from partners like Deloitte, AWS, ProofID, Midships, Versent, and more—plus guest keynotes from Former Secretary General of Interpol, Dr. Jürgen Stock and cybersecurity futurist, Heather Vescent. And it’s unified by a single theme: Resilient Trust isn’t a moment. It’s a mindset.


FastID

A Smarter ACME Challenge for a Multi-CDN World

Optimize your multi-CDN setup with Fastly's new dns-account-01 ACME challenge. Eliminate label collisions and enhance certificate management.

Tuesday, 28. October 2025

Spruce Systems

Digital Wallet Certification: The Foundation for Interoperable State Identity Systems

To build trust, protect privacy, and enable true interoperability, states must establish a certification program for digital wallets and issuers that enforces technical safeguards, statutory principles, and vendor accountability from the start.

As states move toward private, interoperable, and resident-controlled digital identity systems, certification of wallets and issuers becomes a cornerstone of trust. Certification doesn’t just validate technical conformance; it enforces privacy, supports procurement flexibility, and enables multiple vendors to participate under a consistent trust framework. This blog post outlines recommendations to meet these goals, with statutory guardrails and governance practices built in from the start.

The Case for Certification

We believe that states should require certification of Digital Wallets that are capable of holding a state digital identity. Certification provides assurance that wallet providers comply with key requirements such as privacy preservation, unlinkability, minimal disclosure, and security of key material, which are enforced by design.

SpruceID believes that additional legislation should be enacted to establish a formal certification program for wallets, issuers, and potentially verifiers participating in a state digital identity ecosystem. The legislation should specify that the designated regulating entity may conduct audits and certify providers directly, or delegate certification responsibilities to qualified external organizations, provided such delegation is formally approved by the appropriate higher authority.

Enforcing Privacy and Minimization

A certification program would mandate compliance with privacy-preserving technical standards, restrict verifiers from requesting or storing more information than is legally required, and require wallets to obtain clear user consent before transmitting credential data. Wallets would also need to provide plain-language explanations of how data is used in each transaction. By creating a statutory basis for certification and oversight, states can ensure that unlinkability and data minimization are not just principles, but enforceable requirements with technological and governance safeguards.

Pilot Programs to Support Innovation

We recommend that states enact a pilot program allowing provisional, limited, and expiring operating approvals for issuers, wallets, and verifiers before their formal certification programs are established and fully operational, so that market solutions can operate in real-world environments and generate learnings. The appropriate oversight agency would then be able to adapt those learnings when creating the formal certification programs. Best practice in the software industry is to take an iterative, “agile” approach to implementation, and we believe the same applies to creating certification programs: engage industry early and often in a limited operating capacity, rather than attempting to fully specify the rules a priori, which risks producing rules that become irrelevant unless written with perfect knowledge.

Clarifying Responsibilities Across the Ecosystem

Clear allocation of liability and responsibility is essential for the trust and sustainability of any state digital identity program. A state's role is to establish statutory guardrails, oversee governance, and authorize a certification framework that ensures all ecosystem participants meet consistent standards. This includes creating a certification program for both digital wallet providers and credential issuers, verifying that they comply with statutory principles for privacy, unlinkability, minimal disclosure, and security.

Wallet Provider Responsibilities

Digital wallet providers bear responsibility for ensuring acceptable security mechanisms and proper user consent, presenting clear, plain-language disclosures that meet accessibility requirements, and ensuring that features like personal data licenses and privacy-protective technical standards are honored in practice. Certified wallets must also support recovery mechanisms for key loss or compromise, ensuring that holders are not permanently locked out of identity credentials due to technical failures. Digital wallet providers should coordinate with issuers, designing solutions which anticipate that wallets and keys will be lost, stolen, and compromised.

Issuer Responsibilities

Issuers are responsible for creating a strong operational environment that ensures the accuracy of the attributes they release, and for maintaining correct and untampered authoritative source records. They are also responsible for ensuring that state digital identity credentials are issued to the correct holders, and to any acceptable wallet, free of unreasonable delay, burden, or fees. They must provide accessible support to holders, such as workable recovery paths for holders who lose their credentials, wallets, and/or keys. Their certification ensures that state digital identity credentials are issued only under audited processes that meet required levels of identity proofing and revocation safeguards.

Legislating Technical Safeguards and Liability

In addition, states should require certification of wallets against a published state digital identity protection profile and create clear liability rules. Legislation should establish that wallet providers are responsible for implementing technical safeguards, that Holders maintain control over disclosure decisions, and that verifiers may only request attributes that are legally permitted. By legislating these aspects, states will ensure that residents can trust any certified wallet to uphold their rights, while fostering a competitive ecosystem of providers who innovate on usability and design within a consistent regulatory baseline.

Enabling Interoperability and Competition

Certification also creates a mechanism for interoperability and trust across the ecosystem. By publishing a clear “state digital identity Wallet Protection Profile” and certifying wallets against it, states can ensure that wallets from different vendors operate consistently while still allowing for competition and innovation.

Building Public Confidence Through Transparency

Finally, certification helps build public confidence. Residents will know that any wallet bearing a certification mark has been independently tested and approved to uphold privacy and prevent surveillance, while verifiers will know they can safely interact with those wallets. At the same time, states should keep certification processes lightweight and transparent to avoid excluding smaller vendors, ensuring that certification supports security and privacy without stifling innovation.

Establishing the Guardrails of a Trusted Ecosystem

Certification is more than a checkbox; it’s how we turn principles like unlinkability and minimal disclosure into enforceable reality. By embedding privacy protections in wallet and issuer certification, states can foster innovation without compromising trust. The foundation for interoperable, people-first digital identity isn’t a single app or provider; it’s a standards-aligned ecosystem, governed responsibly and built to last.

SpruceID works with governments and standards bodies to build privacy-preserving, interoperable infrastructure designed for public trust from day one. Contact us to start the conversation.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.


Ontology

Identity in the Age of AI

What does this mean?

Identity, privacy, and AI are colliding fast. In this community conversation, builders and advocates examined who should own identity online, how to protect privacy, and how AI agents change the trust model for everything we do on the internet.

Read the full post

Featured speakers

Humpty — long-time contributor and advocate of decentralized identity and privacy
Geoff — veteran ecosystem builder and Head of Community at Ontology
Barnabas — grassroots organizer driving Web3 education and adoption across Africa

Five core takeaways

Ownership and agency come first
Web3 should let people own their identity and control what they share. Identity is not a wallet address. It is a richer record that reflects consent and context.

“You are in control of your data, and you get to choose what you want people to see.” — Barnabas

Privacy with portability
Identity must work across apps and chains while preserving privacy. Single-chain IDs limit users.

“Portable identity should not work only on one chain.” — Humpty

Design for everyone
Education and simple UX are essential so new users can participate without feeling overwhelmed.

“Removing barriers is essential to building community.” — Geoff

AI needs attribution and reputation
As AI agents multiply, we must evaluate outputs and the credibility of agents and their builders.

“We need attribution to know if a result is good, outdated, or hallucinated.” — Humpty

A builder’s opening
There is real opportunity to launch AI apps and agents with verifiable identity and reputation that users can trust.

“Start thinking about how you can develop those AI apps to launch in the marketplace.” — Geoff
Bigger picture

Identity is becoming shared infrastructure. It underpins privacy, enables reputation, and helps us decide which people or agents to trust. As AI agents begin to outnumber humans online, transparent identity and reputation will guide safe participation for everyone.

TL;DR

User-owned identity must be private and portable. Education and simple UX bring people in. AI raises the stakes for attribution and reputation, which is a clear opportunity for builders to ship trustworthy agents tied to real user intent.

Read the full post

Related reading

Explore ONT ID and decentralized reputation: ont.id
Who Really Owns Web3’s Data? 7 Questions for the Community: ont.io

Identity in the Age of AI was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thales Group

Thales unveils space surveillance radar AURORE - unique in Europe


28 Oct 2025

As part of the ARES program (Action and Space Resilience), Thales has been notified of a contract from the French Defence Procurement Agency (DGA) to develop, deliver and deploy a new ground-based low-orbit space surveillance radar system. Called AURORE, it will watch satellites and debris in low orbit from the ground. This ground-breaking radar system provides continuous monitoring and tracking of multiple space objects simultaneously and will strengthen French capabilities for assessing the space situation. AURORE will be the largest surveillance radar deployed in Europe. In the context of the increasing militarization of space, this contract marks an important milestone for European and French sovereignty, providing unprecedented detection capabilities to enhance military space surveillance of activities in low orbit. Designed and manufactured at Thales’ Limours site, the AURORE radar also benefits from expertise gained through partnerships with several French SMEs.

AURORE, a new solution for space monitoring and situation assessment, chosen by France. © Thales

As space operations are challenged by a substantial increase in threats, from military to space debris, the ability to identify and track multiple small objects in space in real-time makes all the difference when it comes to space sovereignty and protecting the skies.

AURORE is a software-defined radar operating in the Ultra High Frequency (UHF) band. It will provide continuous surveillance and simultaneous multi-tracking of numerous space objects, respond quickly across low Earth orbit, and generate a high-resolution picture of the space environment in real time.

The modularity of its architecture will form the backbone of a comprehensive roadmap aimed at expanding the product portfolio with a new family of UHF radars capable of meeting the needs of multiple critical missions, enabling protection against emerging ballistic and hypersonic threats.

"With AURORE, the only radar of its kind in Europe, Thales is contributing to French sovereignty by strengthening its capabilities for monitoring the space environment at low orbits. AURORE demonstrates, once again, Thales' leadership in the field of air and space surveillance systems.” said Patrice Caine, Chairman and Chief Executive Officer, Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.


Dock

GSMA, Telefónica Tech, TMT ID and Dock Labs collaborate to reinvent call centre authentication


We’re excited to share that Dock Labs is collaborating with GSMA, Telefónica Tech, and TMT ID on a new initiative to reinvent call centre authentication.

Here's why:

Today’s customer authentication processes often rely on knowledge-based questions or one-time passwords (OTPs). These methods are time-consuming, typically taking between 30 and 90 seconds, and can be vulnerable to SIM swap attacks, phishing, Caller Line Identification (CLI) spoofing and data breaches. 

On top of that, they frequently require customers to disclose personal information to call centre agents, creating privacy and compliance risks for organisations.

To address these issues, the group has initiated a Proof of Concept (PoC) to explore a new, privacy-preserving model of caller authentication that is faster, secure, and user-friendly.
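
For readers who want a feel for the alternative, here is a conceptual sketch of challenge-response caller authentication with a device-held key — the general pattern that credential-based approaches build on. This is an assumption-laden illustration, not the actual GSMA/Telefónica Tech/TMT ID/Dock Labs PoC design:

```python
# Conceptual sketch of challenge-response caller authentication with a
# device-held key, replacing OTPs and knowledge-based questions. This
# illustrates the general pattern only, not the PoC's actual design.
# Requires the 'cryptography' package.
import os
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the customer's wallet holds a private key; the operator
# stores the matching public key bound to the account (in a credential-
# based design, this binding would come from a verifiable credential).
wallet_key = Ed25519PrivateKey.generate()
registered_public_key = wallet_key.public_key()

# During the call: the call centre issues a fresh, time-bound nonce...
challenge = os.urandom(32) + int(time.time()).to_bytes(8, "big")

# ...the wallet signs it after local user approval (no PII spoken aloud)...
signature = wallet_key.sign(challenge)

# ...and the verifier checks the signature. The agent learns only
# "caller authenticated: yes/no", never the underlying personal data.
registered_public_key.verify(signature, challenge)  # raises InvalidSignature on failure
print("caller authenticated")
```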


Spherical Cow Consulting

Can Standards Survive Trade Wars and Sovereignty Battles?

For decades, standards development has been anchored in the idea that the Internet is (and should be) one global network. If we could just get everyone in the room—vendors, governments, engineers, and civil society—we could hash out common rules that worked for all. The post Can Standards Survive Trade Wars and Sovereignty Battles? appeared first on Spherical Cow Consulting.

“For decades, standards development has been anchored in the idea that the Internet is (and should be) one global network. If we could just get everyone in the room—vendors, governments, engineers, and civil society—we could hash out common rules that worked for all.”

That premise is a lovely ideal, but it no longer reflects reality. The Internet isn’t collapsing, but it is fragmenting: tariffs, digital sovereignty drives, export controls, and surveillance regimes all chip away at the illusion of universality. Standards bodies that still aim for global consensus risk paralysis. And yet, walking away from standards altogether isn’t an option.

The real question isn’t whether we still need standards. The question is how to rethink them for a world that is fractured by design.

This is the fourth of a four-part series on what the Internet will look like for the next generation of people on this planet.

First post: “The End of the Global Internet”
Second post: “Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet”
Third post: “The People Problem: How Demographics Decide the Future of the Internet”
Fourth post: this one

A Digital Identity Digest: Can Standards Survive Trade Wars and Sovereignty Battles? (podcast episode, 16:24)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Global internet, local rulebooks

If you look closely, today’s Internet is already less a single global network and more a patchwork quilt of overlapping, sometimes incompatible regimes.

Europe pushes digital sovereignty and data protection rules, with eIDAS2 and the AI Act setting global precedents. The U.S. leans on export controls and sanctions, using access to chips and cloud services as levers of influence. China has doubled down on domestic control, firewalling traffic and setting its own technical specs. Africa and Latin America are building data centers and digital ID schemes to reduce dependence on foreign providers, while still trying to keep doors open for trade and investment.

Standards development bodies now live in this reality. The old model where universality was the goal and compromise was the method is harder to sustain. If everyone insists on their priorities, consensus stalls. But splintering into incompatible systems isn’t viable either. Global supply chains, cross-border research, and the resilience of communications all require at least a shared baseline.

The challenge is to define what “interoperable enough” looks like.

The cost side is getting heavier

The incentives for participation in global standards bodies used to be relatively clear: access to markets, influence over technical direction, and reputational benefits. Today, the costs of cross-border participation have gone up dramatically.

Trade wars have re-entered the picture. The U.S. has imposed sweeping tariffs on imports from China and other countries, hitting semiconductors and electronics with rates ranging from 10% to 41%. These costs ripple across supply chains. On top of tariffs, the U.S. has restricted exports of advanced chips and AI-related hardware to China. The uncertainty of licensing adds compliance overhead and forces firms to hedge.

Meanwhile, the “China + 1” strategy—where companies diversify sourcing away from over-reliance on China—comes with a hefty price tag. Logistics get more complex, shipping delays grow, and firms often hold more inventory to buffer against shocks. A 2025 study estimated these frictions alone cut industrial output by over 7% and added nearly half a percent to inflation.

And beyond tariffs or logistics, transparency and compliance laws add their own burden. The U.S. Corporate Transparency Act requires firms to disclose beneficial ownership. Germany’s Transparency Register and Norway’s Transparency Act impose similar obligations, with Norway’s rules extending to human-rights due diligence.

The result is that companies are paying more just to maintain cross-border operations. In that climate, the calculus for standards shifts. “Do we need this standard?” becomes “Is the payoff from this standard enough to justify the added cost of playing internationally?”

When standards tip the scales

The good news is that standards can offset some of these costs when they come with the right incentives.

One audit, many markets. Standards that are recognized across borders save money. If a product tested in one region is automatically accepted in another, firms avoid duplicative testing fees and time-to-market shrinks.

Case study: the European Digital Identity Wallet (EUDI). In 2024, the EU adopted a reform of the eIDAS regulation that requires all Member States to issue a European Digital Identity Wallet and mandates cross-border recognition of wallets issued by other states. The premise here is that if you can prove your identity using a wallet in France, that same credential should be accepted in Germany, Spain, or Italy without new audits or registrations.

The incentives are potentially powerful. Citizens gain convenience by using one credential for many services. Businesses reduce onboarding friction across borders, from banking to telecoms. Governments get harmonized assurance frameworks while retaining the ability to add national extensions. Yes, the implementation costs are steep—wallet rollouts, legal alignment, security reviews—but the payoff is smoother digital trade and service delivery across a whole bloc.

Regulatory fast lanes. Governments can offer “presumption of conformity” when products follow recognized standards. That reduces legal risk and accelerates procurement cycles.

Procurement carrots. Large buyers, both public and private, increasingly bake interoperability and security standards into tenders. Compliance isn’t optional; it’s the ticket to compete.

Risk transfer. Demonstrating that you followed a recognized standard can reduce penalties after a breach or compliance failure. In practice, standards act as a form of liability insurance.

Flexibility in a fractured market. A layered approach—global minimums with regional overlays—lets companies avoid maintaining entirely separate product lines. They can ship one base product, then configure for sovereignty requirements at the edges.
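
Here is a minimal sketch of what that looks like in practice, using hypothetical policy keys: one shared baseline, with regional overlays merged in at deployment time rather than maintained as separate product lines.

```python
# Minimal sketch of "global baseline + regional overlay" configuration.
# All policy keys and values are hypothetical illustrations.
from copy import deepcopy

GLOBAL_BASELINE = {
    "tls_min_version": "1.2",
    "credential_format": "w3c-vc",
    "audit_logging": True,
}

REGIONAL_OVERLAYS = {
    "eu": {"data_residency": "eu-only", "consent_required": True},
    "us": {"lawful_access_contact": "compliance@example.com"},
}

def effective_policy(region: str) -> dict:
    """Overlay regional rules on the shared baseline without mutating it."""
    policy = deepcopy(GLOBAL_BASELINE)
    policy.update(REGIONAL_OVERLAYS.get(region, {}))
    return policy

# One codebase, two deployments: the baseline stays interoperable while
# each region expresses its own sovereignty requirements at the edges.
print(effective_policy("eu"))
print(effective_policy("us"))
```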

When incentives aren’t enough

Of course, there are limits to how far incentives can stretch. Sometimes the costs simply outweigh the benefits.

Consider a market that imposes steep tariffs on imports while also requiring its own unique technical standards, with no recognition of external certifications. In such a case, the incentive of “one audit, many markets” collapses. Firms face a choice between duplicating compliance efforts, forking product lines, or withdrawing from the market entirely.

Similarly, rules of origin can blunt the value of global standards. Even if a product complies technically, it may still fail to qualify for preferential access if its components are sourced from disfavored regions. Political volatility adds another layer of uncertainty. The back-and-forth implementation of the U.S. Corporate Transparency Act illustrates how compliance obligations can change rapidly, leaving firms unable to plan long-term around standards incentives.

These realities underline a hard truth: incentives alone cannot overcome every cost. Standards must be paired with trade policies, recognition agreements, and regulatory stability if they are to deliver meaningful relief. Technology is not enough.

How standards bodies must adapt

It’s easy enough to say “standards still matter.” What’s harder is figuring out how the institutions that make those standards need to change. The pressures of a fractured Internet aren’t just technical. They’re geopolitical, economic, and regulatory. That means standards bodies can’t keep doing business as usual. They need to adapt on two fronts: process and scope.

Process: speed, modularity, and incentives

The traditional model of consensus-driven standards development assumes time and patience are plentiful. Groups grind away until they’ve achieved broad agreement. In today’s climate, that often translates to deadlock. Standards bodies need to recalibrate toward a “minimum viable consensus” that offers enough agreement to set a global baseline, even if some regions add overlays later.

Speed also matters. When tariffs or export controls can be announced on a Friday and reshape supply chains by Monday, five-year standards cycles are untenable. Bodies need mechanisms for lighter-weight deliverables: profiles, living documents, and updates that track closer to regulatory timelines.

And then there’s participation. Costs to attend international meetings are rising, both financially and politically. Without intervention, only the biggest vendors and wealthiest governments will show up. That’s why initiatives like the U.S. Enduring Security Framework explicitly recommend funding travel, streamlining visa access, and rotating meetings to more accessible locations. If the goal is to keep global baselines legitimate, the doors have to stay open to more than a handful of actors.

Scope: from universality to layering

Just as important as process is deciding what actually belongs in a global standard. The instinct to solve every problem universally is no longer realistic. Instead, standards bodies need to embrace layering. At the global level, focus on the minimums: secure routing, baseline cryptography, credential formats. At the regional level, let overlays handle sovereignty concerns like privacy, lawful access, or labor requirements.

This shift also means expanding scope beyond “pure technology.” Standards aren’t just about APIs and message formats anymore; they’re tied directly to procurement, liability, and compliance. If a standard can’t be mapped to how companies get through audits or how governments accept certifications, it won’t lower costs enough to be worth the trouble.

Finally, standards bodies must move closer to deployment. A glossy PDF isn’t sufficient if it doesn’t include reference implementations, test suites, and certification paths. Companies need ways to prove compliance that regulators and markets will accept. Otherwise, the promise of “interoperability” remains theoretical while costs keep mounting.

The balance

So is it process or scope? The answer is both. Process has to get faster, more modular, and more inclusive. Scope has to narrow to what can truly be global while expanding to reflect regulatory and economic realities. Miss one side of the equation, and the other can’t carry the weight. Get them both right, and standards bodies can still provide the bridges we desperately need in a fractured world.

A layered model for fractured times

So what might a sustainable approach look like? I expect the future will feature layered models rather than a single universal one.

At the bottom of this new stack are the baseline standards for secure software development, routing, and digital credential formats. These don’t attempt to satisfy every national priority, but they keep the infrastructure interoperable enough to enable trade, communication, and research.

On top of that baseline are regional overlays. These extensions allow regions to encode sovereignty priorities, such as privacy protections in Europe, lawful access in the U.S., or data localization requirements in parts of Asia. The overlays are where politics and local control find their expression.

This design isn’t neat or elegant. But it’s pragmatic. The key is ensuring that overlays don’t erode the global baseline. The European Digital Identity Wallet is a good example: the baseline is cross-border recognition across EU states, while national governments can still add extensions that reflect their specific needs. The balance isn’t perfect, but it shows how interoperability and sovereignty can coexist if the model is layered thoughtfully.

What happens if standards fail

It’s tempting to imagine that if standards bodies stall, the market will simply route around them. But the reality of a fractured Internet is far messier. Without viable global baselines, companies retreat into regional silos, and the costs of compliance multiply. This section is the stick to go with the carrots of incentives.

If standards fail, cross-border trade slows as every shipment of software or hardware has to be retested for each jurisdiction. Innovation fragments as developers build for narrow markets instead of global ones, losing economies of scale. Security weakens as incompatible implementations open new cracks for attackers. And perhaps most damaging, trust erodes: governments stop believing that interoperable solutions can respect sovereignty, while enterprises stop believing that global participation is worth the cost.

The likely outcome is not resilience, but duplication and waste. Firms will maintain redundant product lines, governments will fund overlapping infrastructures, and users will pay the bill in the form of higher prices and poorer services. The Internet won’t collapse, but it will harden into a collection of barely connected islands.

That’s why standards bodies cannot afford to drift. The choice isn’t between universal consensus and nothing. The choice is between layered, adaptable standards that keep the floor intact or a slow grind into fragmentation that makes everyone poorer and less secure.

Closing thought

The incentives versus cost tradeoff is not a side issue in standards development. It is the issue. The technical community must accept that tariffs, sovereignty, and compliance aren’t temporary distractions but structural realities.

The key question to ask about any standard today is simple: Does this make it cheaper, faster, or less risky to operate across borders? If the answer is yes, the standard has a future. If not, it risks becoming another paper artifact, while fragmentation accelerates.

Now I have a question for you: in your market, do the incentives for adopting bridge standards outweigh the mounting costs of tariffs, export controls, and compliance regimes? Or are we headed for a world where regional overlays dominate and the global floor is paper-thin?

If you’d rather get a notification when a new blog post is published than hope to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:29] Welcome back to A Digital Identity Digest.

Today, I’m asking a big question that’s especially relevant to those of us working in technical standards development:

Can standards survive trade wars and sovereignty battles?

For decades, the story of Internet standards seemed fairly simple — though never easy:
get the right people in the room, hammer out details, and eventually end up with rules that worked for everyone.

[00:00:58] The Internet was one global network, and standards reflected that vision.

[00:01:09] That story, however, is starting to fall apart.
We’re not watching the Internet collapse, but we are watching it fragment — and that fragmentation carries real consequences for how standards are made, adopted, and enforced.

[00:01:21] In this episode, we’ll explore:

Why the cost of participating in global standards has gone up
How incentives can still make standards development worthwhile
What happens when those incentives fall short
And how standards bodies need to adapt to stay relevant

[00:01:36] So, let’s dive in.

The Fragmenting Internet

[00:01:39] When the Internet first spread globally, it seemed like one big network — or at least, one big concept.

[00:01:55] But that’s not quite true anymore.

Let’s take a few regional examples.

Europe has leaned heavily into digital sovereignty, with rules like GDPR, the AI Act, and the updated eIDAS regulation. Their focus is clear: privacy and sovereignty come first.
The United States takes a different tack, using export controls and sanctions as tools of influence — with access to semiconductors and cloud services as leverage in its geopolitical strategy.
China has gone further, building its own technical standards and asserting domestic control over traffic and infrastructure.
Africa and Latin America are investing in local data centers and digital identity schemes, aiming to reduce dependency while keeping doors open for global trade and investment.

[00:02:46] When every region brings its own rulebook, global consensus doesn’t come easily.
Bodies like ISO, ITU, IETF, or W3C risk stalling out.

Yet splintering into incompatible systems is also costly:

It disrupts supply chains
Slows research collaborations
And fractures global communications

[00:03:31] So let’s start by looking at what all of this really costs.

The Rising Cost of Participation

[00:03:35] Historically, incentives for joining standards efforts were clear:

Influence technology direction
Ensure interoperability
Build goodwill as a responsible actor

[00:03:52] But that equation is changing.

Take tariffs, for example.

U.S. tariffs on imports from China and others now range from 10% to 41% on semiconductors and electronics.
Export controls restrict the flow of advanced chips, reshaping entire markets.
Companies face new costs: redesigning products, applying for licenses, and managing uncertainty.

[00:04:33] Add in supply chain rerouting — the so-called “China Plus One” strategy — and you get:

More complex logistics
Longer delays
Higher inventory buffers

Recent studies show these frictions cut industrial output by over 7% and add 0.5% to inflation.

[00:04:58] It’s not just the U.S. — tariffs are now a global trend.

Then there are transparency laws, like:

The U.S. Corporate Transparency Act
Germany’s Transparency Register
Norway’s Transparency Act, which even mandates human rights due diligence

[00:05:33] The result?
The baseline cost of cross-border operations is rising — forcing companies to ask if global standards participation is still worth it.

Why Standards Still Matter

[00:05:50] So, why bother with standards at all?

Because well-designed standards can offset many of these costs.

[00:05:56] Consider the power of recognition.
If one region accepts a product tested in another, companies save on duplicate testing and reach markets faster.

[00:06:07] A clear example is the European Digital Identity Wallet (EUDI Wallet).

In 2024, the EU updated eIDAS to:

Require each member state to issue a European Digital Identity Wallet
Mandate mutual recognition between member states

This means:

A wallet verified in France also works in Germany or Spain
Citizens gain convenience
Businesses reduce onboarding friction
Governments maintain a harmonized baseline with room for local adaptation

[00:06:56] Though rollout costs are high — covering legal alignment, wallet development, and security testing — the payoff is smoother digital trade.

Beyond recognition, strong standards also offer:

Regulatory fast lanes: Reduced legal risk when products follow recognized standards
Procurement advantages: Interoperability requirements in public tenders
Risk transfer: Accepted standards can serve as a partial defense after incidents

[00:07:34] In effect, standards can act as liability insurance.

[00:07:41] But not all incentives outweigh the costs.
When countries insist on unique local standards without mutual recognition, “one audit, many markets” collapses.

[00:08:05] Companies duplicate compliance, fork product lines, or leave markets.
Rules of origin and political volatility add further uncertainty.

[00:08:44] So yes — standards can tip the scales, but they can’t overcome every barrier.

The Changing Role of Standards Bodies

[00:08:54] Saying “standards still matter” is one thing — ensuring their institutions adapt is another.

[00:09:02] The pressures shaping today’s Internet are not just technical but geopolitical, economic, and regulatory.

That means standards bodies must evolve in two key ways:

Process adaptation
Scope adaptation

[00:09:19] The old “everyone must agree” consensus model now risks deadlock.
Bodies need to move toward a minimum viable consensus — enough agreement to set a baseline, even if regional overlays come later.

[00:09:39] Increasingly, both state and corporate actors exploit the process to delay progress.
Meanwhile, when trade policies change in months, a five-year standards cycle is useless.

[00:10:16] Standards organizations must embrace:

Lighter deliverables
Living documents
Faster updates aligned with regulatory change

[00:10:32] Participation costs are another barrier.
If only the richest governments and companies can attend meetings, legitimacy suffers.

Efforts like the U.S. Enduring Security Framework, which supports broader participation, are essential.

[00:11:10] Remote participation helps — but it’s not enough.
In-person collaboration still matters because trust is built across tables, not screens.

Rethinking Scope and Relevance

[00:11:31] Scope matters too.

Standards bodies should embrace layering:

Global level: focus on secure routing, baseline cryptography, credential formats
Regional level: handle sovereignty overlays — privacy, lawful access, labor rules

[00:11:55] Moreover, the scope must expand beyond technology to include:

Procurement
Liability
Compliance

If standards don’t reduce costs in these areas, they won’t gain traction — no matter how elegant they look in PDF form.

[00:12:12] Standards also need to move closer to deployment:

Include reference implementations
Provide test suites
Define certification paths that regulators will accept

Without these, interoperability remains theoretical while costs keep rising.

[00:12:53] Ultimately, this is both a process problem and a scope problem.
Processes must be faster and more inclusive.
Scopes must be realistic and economically relevant.

The Risk of Fragmentation

[00:13:11] Some argue that if standards bodies stall, the market will route around them.
But a fractured Internet is messy:

Cross-border trade slows under multiple testing regimes
Innovation fragments into narrow regional silos
Security weakens as incompatible implementations open new vulnerabilities

[00:13:45] And perhaps worst of all, trust erodes.
Governments lose faith in interoperability; companies question the value of participation.

[00:13:55] The outcome isn’t resilience — it’s duplication, waste, and higher costs.

[00:14:07] The Internet won’t disappear, but it risks hardening into isolated digital islands.
That’s why standards bodies can’t afford drift.

[00:14:26] The real choice is between:

Layered, adaptable standards that maintain a shared baseline
Or a slow grind into fragmentation that makes everyone poorer and less secure

Wrapping Up

[00:14:38] The incentives-versus-cost trade-off is no longer a side note in standards work — it’s the core issue.

Tariffs, sovereignty, and compliance regimes aren’t temporary distractions.
They’re structural realities shaping the future of interoperability.

[00:14:52] The key question for any new standard is:

Does this make it cheaper, faster, or less risky to operate across borders?

If yes — that standard has a future.
If no — it risks becoming another PDF gathering dust while fragmentation accelerates.

[00:15:03] With that thought — thank you for listening.

I’d love to hear your perspective:

Do incentives for adopting bridge standards outweigh the rising costs of sovereignty battles?
Or are we headed toward a world of purely regional overlays?

[00:15:37] Share your thoughts, and let’s keep this conversation going.

[00:15:48] That’s it for this week’s Digital Identity Digest.

If this episode helped clarify or inspire your thinking, please:

Share it with a friend or colleague
Connect with me on LinkedIn @hlflanagan
Subscribe and leave a rating on Apple Podcasts or wherever you listen

[00:16:00] You can also find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged — and let’s keep the dialogue alive.

The post Can Standards Survive Trade Wars and Sovereignty Battles? appeared first on Spherical Cow Consulting.


Ocean Protocol

Claim 1: The movement of $FET to 30 Different Wallets was allegedly “not right” — Disproven

Part of the series: Dismantling False Allegations, One Claim at a Time
By: Bruce Pon

Sheikh has claimed that the splitting of the Ocean community treasury across 30 wallets was somehow wrongful. He said this despite knowing that the act of splitting was entirely legitimate, as I explain below.

Source: X Spaces — Oct 9, 2025

@BubbleMaps has made this very helpful diagram to identify the flows of $FET from the Ocean community wallet (give them a follow):

https://x.com/bubblemaps/status/1980601840388723064

So, what’s the truth behind the distribution of $FET out of a single wallet and into 30 wallets?

Was it, as Sheikh claims, an ill-intentioned action to obfuscate the token flows and “dump” on the ASI community? Absolutely not.

First, it was done out of prudence. Given that a significant number of tokens were held in a single wallet, it was to reduce the risk of having the community treasury tokens hacked or otherwise vulnerable to bad actors. Clearly, spreading the tokens across 30 wallets greatly reduces the risk of their being hacked or forcefully taken compared to tokens being held in a single wallet.

Second, the spreading of the community treasury tokens across many wallets was something that Fetch and Singularity had themselves requested we do, to avoid causing problems with ETF deals which they had decided to enter into using $FET.

As presented in the previous “ASI Alliance from Ocean Perspective” blogpost, on Aug 13, 2025, Casiraghi, SingularityNET’s CFO, wrote an email to Ocean Directors, cc’ing Dr. Goertzel and Lake:

In it, he references 8 ETF deals in progress that were underway with institutional investors and the concerns that “the window — is open now” to close these deals.

Immediately after this email, Casiraghi reached out to a member of the Ocean community, explaining that such a large sum of $FET in the Ocean community wallet, which is not controlled by either Fetch or SingularityNET, would raise difficult questions from ETF issuers. Recall that Ocean did not participate in these side deals promoted by Fetch, and was often kept out of the loop, e.g. the TRNR deal.

Casiraghi requested (on behalf of Fetch and SingularityNET) that, if the $FET in the Ocean community wallet could not be frozen, arrangements be made to split the $FET tokens across multiple wallets.

Casiraghi explained that if this could be done with the $FET in the Ocean community wallet, Fetch and SingularityNET could plausibly deny the existence of a very large token holder which they had no control over. They could sweep it under the rug and avoid uncomfortable due diligence questions.

On Aug 16, 2025, David Levy of Fetch called me with the same arguments, reasoning, and plea: could Ocean obfuscate the tokens and split them across more wallets?

Incidentally, in this call Levy also shared with me, for the first time, the details of the TRNR deal, which alarmed me once I understood the implications (“TRNR” Section §12).

At this juncture, it should be recalled that the Ocean community wallet is under the control of Ocean Expeditions. The Ocean community member who spoke with Casiraghi, as well as myself, informed the Ocean Expeditions trustees of this request and reasoning. Thereafter, a decision was made by the Ocean Expeditions’ trustees, as an act of goodwill, to distribute the $FET across 30 wallets as requested by Fetch and SingularityNET.

Turning back to the bigger picture, as a pioneer in the blockchain space, I am obviously well aware that all token movements are absolutely transparent to the world. Any transfers are recorded immutably forever and can be traced easily by anyone with a modicum of basic knowledge. I build blockchains for a living. It is ridiculous to suggest that I or anyone in Ocean could have hoped to “conceal” tokens in this public manner.

A simple act of goodwill and cooperation that was requested by both Fetch and SingularityNET has instead been deliberately blown up by Sheikh, and painted as a malicious act to harm the ASI community.

Sheikh has now used the wallet distribution to launch an all-out assault on Ocean Expeditions and start a manhunt to identify the trustees of the Ocean Expeditions wallet.

Sheikh has wantonly spread lies, libel and misinformation to muddy the waters, construct a false narrative accusing Ocean and its founders of misappropriation, and to incite community sentiment against us.

Sheikh’s accusations and his twisting of the facts to mislead the community are so absurd that they would be laughable, if they were not so dangerous and harmful to the whole community.

Claim 1: The movement of $FET to 30 Different Wallets was allegedly “not right” — Disproven was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Radiant Logic

Architecting a Data-Centric Identity Security Infrastructure

Architecting a Data-Centric Identity Security Infrastructure 

As organizations build more interconnected digital ecosystems, securing identity is no longer just a component of cybersecurity—it is the foundation of protecting everything from data to devices. We are now seeing an unprecedented proliferation of machine identities, which frequently outnumber human identities. Yet traditional identity systems struggle to manage these effectively.  

Organizations grapple with fragmented, siloed identity data sources, while disconnected IAM solutions leave blind spots and inefficiencies. The flexibility of using AI and mobile or personal devices for business operations further aggravates this issue.

These security concerns become a critical pain point for larger enterprises that want to scale. Mergers and acquisitions complicate matters by combining disparate identity systems and policies, frequently leading to conflicting identity data and compromised access management that pose severe security risks.

What is the solution to making an organization’s identity security practices more effective and resilient, not just for the current threat landscape but also for next-gen risks?

The answer is to architect a data-centric identity security infrastructure, making identity data the cornerstone of all security decisions.

The Need for a Data-Centric Identity Security Infrastructure 

Traditionally, enterprises have addressed identity security problems individually, implementing separate solutions for Identity Governance and Administration (IGA), Privileged Access Management (PAM), access management, and SaaS-native systems like Microsoft Entra and Okta. Although individually functional, these tools collectively create fragmented identity data silos. As identity and application counts grow, these silos generate significant security gaps due to inconsistent data visibility and management. 

The solution lies in making identity data foundational. Identity data must be positioned at the core of every security decision, ensuring consistency, accuracy, and completeness across all processes related to authentication, authorization, and lifecycle management. 

The goal is to perfectly align with Gartner’s definition of identity-first security—an approach positioning identity-based access control as the cornerstone of cybersecurity. 

Implementing Gartner’s VIA Model 

To achieve data-centric identity security, Gartner’s VIA model (Visibility, Intelligence, Action) provides a clear and structured roadmap: 

Visibility: Establishing unified identity data visibility
Intelligence: Analyzing data for actionable insights
Action: Executing real-time remediation based on intelligence

Each component is crucial for successful deployment. 

Visibility: Consolidating Fragmented Identity Data 

Organizations must first tackle fragmented identity data scattered across various sources—Active Directory, HR systems, PAM solutions, and cloud identity solutions. Consolidating these into an identity data lake is critical. This data lake must be data-agnostic, scalable, real-time, event-driven, and capable of handling vast volumes of data, both structured and unstructured. 

Once consolidated, raw identity data needs to be transformed into actionable information via a semantic layer. A semantic layer is a structured representation or model that organizes identity data into meaningful relationships and context. It turns fragmented, raw data into unified, easily understood information. 

In short, this semantic layer maps identity data into a coherent model providing unified visibility across human and non-human identities, entitlements, and actual usage. It must: 

Ensure that diverse identity data is standardized and unified
Break data silos by treating access uniformly, regardless of its source
Leverage a graph-based structure for intuitive, multi-dimensional navigation
Maintain data lineage for precise traceability and remediation
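
To make the graph idea concrete, here is a minimal sketch of a graph-shaped semantic layer, with hypothetical records standing in for Active Directory, PAM, and HR sources. Edges carry lineage so every derived entitlement can be traced back to the silos that asserted it; this illustrates the pattern, not RadiantOne’s implementation:

```python
# Sketch of a graph-shaped semantic layer over fragmented identity data.
# All records and source names are hypothetical illustrations.
from collections import defaultdict

edges = defaultdict(list)  # (subject, relation) -> [(object, source)]

def assert_edge(subject, relation, obj, source):
    """Record a relationship along with the source that asserted it (lineage)."""
    edges[(subject, relation)].append((obj, source))

# Ingest from three silos into one unified model.
assert_edge("user:alice", "member_of", "group:db-admins", source="active_directory")
assert_edge("group:db-admins", "grants", "entitlement:prod-db-write", source="pam")
assert_edge("user:alice", "employed_as", "title:analyst", source="hr")

def effective_entitlements(user):
    """Walk user -> groups -> entitlements, keeping lineage for remediation."""
    results = []
    for group, src1 in edges[(user, "member_of")]:
        for ent, src2 in edges[(group, "grants")]:
            results.append((ent, f"via {group} ({src1} + {src2})"))
    return results

print(effective_entitlements("user:alice"))
# [('entitlement:prod-db-write', 'via group:db-admins (active_directory + pam)')]
```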

Intelligence: Identifying and Observing Anomalies

The semantic layer significantly improves data coherence but often results in large volumes of information that are challenging to analyze manually. For this reason, the Intelligence layer’s role is crucial. It continuously observes identity data, focusing specifically on detecting:

Deviations
Discrepancies
Unauthorized or abnormal changes
Risky behavior

Organizations benefit less from routine events than from abnormal situations requiring immediate attention. Intelligence leverages queries, usage analysis, change detection, peer group baselining, and correlation techniques.  
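
As a concrete illustration of one of these techniques, here is a minimal sketch of peer-group baselining over hypothetical data: flag any member of a role whose entitlements are rare within that role. Production systems use far richer signals; this only shows the shape of the idea.

```python
# Sketch of peer-group baselining: flag a user whose entitlement set
# deviates sharply from peers in the same role. Data is hypothetical.
peer_group = {
    "alice": {"crm-read", "crm-write"},
    "bob":   {"crm-read", "crm-write"},
    "carol": {"crm-read", "crm-write", "prod-db-admin"},  # outlier
}

def outliers(group, threshold=0.5):
    """Flag members holding entitlements rare within their peer group."""
    n = len(group)
    # How common is each entitlement across the peer group?
    freq = {}
    for ents in group.values():
        for e in ents:
            freq[e] = freq.get(e, 0) + 1
    flagged = {}
    for user, ents in group.items():
        rare = {e for e in ents if freq[e] / n < threshold}
        if rare:
            flagged[user] = rare
    return flagged

print(outliers(peer_group))  # {'carol': {'prod-db-admin'}}
```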

Observations enrich the semantic layer, enhancing decision-making in downstream systems such as PAM, IGA, and access management platforms by providing crucial context around potential risks and anomalies. 

Action: Executing Flexible Remediation 

The Action layer addresses identified issues based on intelligence. This step requires a flexible approach, capable of adapting to different scenarios. Some actions may be straightforward, such as directly writing corrections back to endpoint systems. Others require interaction with existing cybersecurity tools—IGA, PAM, or ticketing systems—emphasizing the importance of well-maintained connectors and integrations. 

Remediation often critically requires consensus from stakeholders beyond IT security teams. Engaging the business stakeholders—the first line of defense, such as line managers and resource owners—is essential to distinguish legitimate threats from false positives. This engagement transforms the security system into a collaborative “security cockpit,” amplifying the cybersecurity team’s capabilities. 

Effective collaboration requires clear roles and responsibilities across all stakeholders, ensuring that ownership and accountability are well defined when addressing identity security risks. Additionally, seamless integration with everyday digital workplace tools like Slack or Microsoft Teams, possibly enhanced by LLM-based conversational interfaces, can significantly streamline interactions, enabling quick confirmations and decisions from non-technical stakeholders. 

Strengthening Identity Security with a Data-Centric Approach 

Building a data-centric identity security infrastructure using Gartner’s VIA model provides comprehensive benefits: 

Unified Visibility: Eliminates fragmented silos, creating a coherent identity view
Actionable Intelligence: Proactively identifies risks and anomalies, enhancing threat detection
Real-time Remediation: Ensures quick, precise actions tailored to diverse cybersecurity scenarios
Collaborative Remediation: Actively involves non-technical stakeholders, significantly improving accuracy and response effectiveness

Ultimately, by placing identity data at the heart of security infrastructure, organizations significantly strengthen their security posture, achieving genuine, identity-first security. 

How RadiantOne Implements a Data-Centric Identity Security Infrastructure 

The RadiantOne platform simplifies and accelerates the transition to a data-centric identity security model. The solution consolidates identity data from legacy on-premises and cloud-based sources into a unified, standards-based, vendor-neutral identity data lake. This consolidation eliminates identity data silos and provides a global IAM data catalog with rich, attribute-enhanced user profiles. 

With RadiantOne, organizations can efficiently build unlimited virtual views of identity data that are unified across various protocols (LDAP, SQL, REST, SCIM, Web Service APIs). Its low-code/no-code transformation logic enables seamless data mapping, ensuring quick adaptability to changing business and security requirements without disrupting existing systems. 

RadiantOne scales to support hundreds of millions of identities, adding resilience and speed through a highly available, near real-time identity service. The solution automates identity data management, streamlines user and group management, rationalizes group memberships, and dynamically controls access rights. 

Its visibility capability provides a real-time, unified view of the situation for all human and non-human identities down to a permission level. Coupled with its observability capabilities, it spots misconfiguration and detects anomalies or abnormal changes to keep the identity landscape under control. 

Most notably, the platform’s AI-powered assistant, AIDA, simplifies user access reviews, swiftly identifying anomalies and providing actionable remediation suggestions. By automating tedious manual reviews, AIDA drastically reduces administrative effort and improves decision accuracy, making it easier to enforce a least-privilege approach and continuous compliance. 

The post Architecting a Data-Centric Identity Security Infrastructure appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


FastID

Increasing the accessibility of managed security services

Make world-class protection accessible. Fastly’s new Managed Security Professional delivers 24/7 expert defense for your most critical apps and APIs.

Monday, 27. October 2025

KILT

KILT Liquidity Incentive Program


We are launching a Liquidity Incentive Program (LIP) to reward Liquidity Providers (LPs) in the KILT:ETH Uniswap pool on Base.

The portal can be accessed here: liq.kilt.io

For the best experience, desktop/browser use is recommended.

Key Features

The LIP offers rewards in KILT for contributing to the pool.
Rewards are calculated according to the size of your LP and the time for which you have been part of the program.
Your liquidity is not locked in any way; you can add or remove liquidity at any time.
The portal does not take custody of your KILT or ETH; positions remain on Uniswap under your direct control.
Rewards can be claimed after 24hrs, and then at any time of your choosing.

You will need:

KILT (0x5D0DD05bB095fdD6Af4865A1AdF97c39C85ad2d8) on Base
ETH or wETH on Base
An EVM wallet (e.g. MetaMask)

Joining the LIP

Overview

There are two steps to joining the LIP:

1. Add KILT and ETH/wETH to the Uniswap pool in a full-range position. The correct pool is v3 with 0.3% fees. Note that whilst part of the LIP you will continue to earn the usual Uniswap pool fees as well.
2. Register this position on the Liquidity Portal. Your rewards will start automatically.

1) Adding Liquidity

Positions may be created either on Uniswap in the usual way, or directly via the portal. If you choose to create positions on Uniswap then return to the portal afterwards to register them.

To create a position via the portal:

Go to liq.kilt.io and connect your wallet.
Under the Overview tab, you may use the Quick Add Liquidity function.
For more features, go to the Add Liquidity tab, where you can choose how much KILT and ETH to contribute.

2) Registering Positions

Once you have created a position, either on Uniswap or via the portal, return to the Overview tab.

Your KILT:ETH positions will be displayed under Eligible Positions. Select your positions and Register them to enroll in the LIP.

Monitoring your Positions and Rewards

Once registered, you can find your positions in the Positions tab. The Analytics tab provides more information, for example your time bonuses and details about each position’s contribution towards your rewards.

Claiming Rewards

Your rewards start accumulating from the moment you register, but the portal may not reflect this immediately. Go to the Rewards tab to view and claim your rewards. Rewards are locked for the first 24hrs, after which you may claim at any time.

Removing Liquidity

Your LP remains 100% under your control; there are no locks or other restrictions and you may remove liquidity at any time. This can be done in the usual way directly on Uniswap. Removing LP will not in any way affect rewards accumulated up to that time, but if you later re-join the program then any time bonuses will have been reset.

How are my Rewards Calculated?

Rewards are based on:

The value of your KILT/ETH position(s)
The total combined value of the pool as a whole
The number of days your position(s) have been registered

Rewards are calculated from the moment you register a position, but the portal may not reflect them right away.
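
As an illustration only, the reward logic described above might be modeled like this. The post does not publish the exact formula, so the emission rate, bonus cap, and ramp period below are all assumptions:

```python
# Illustrative model of the reward logic described above: share of pool
# value times a time bonus. The actual LIP formula and parameters are
# not specified in this post; every number here is an assumption.
def daily_reward(position_value: float,
                 pool_value: float,
                 days_registered: int,
                 daily_emission: float = 1000.0,   # assumed KILT emitted per day
                 max_time_bonus: float = 0.5,      # assumed +50% bonus cap
                 bonus_ramp_days: int = 30) -> float:
    share = position_value / pool_value
    # Time bonus ramps linearly to the cap, then holds steady.
    time_bonus = 1.0 + max_time_bonus * min(days_registered / bonus_ramp_days, 1.0)
    return daily_emission * share * time_bonus

# A 2% pool share held for 15 days under these assumed parameters:
print(round(daily_reward(2_000, 100_000, 15), 2))  # 25.0 KILT/day
```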

Need Help?

Support is available in our Telegram group: https://t.me/KILTProtocolChat

-The KILT Foundation

KILT Liquidity Incentive Program was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

What government agencies need to fight fraud

 

 


auth0

MS Agent Framework and Python: Use the Auth0 Token Vault to Call Third-Party APIs

Build a secure Python AI Agent with Microsoft Agent Framework and FastAPI and learn to use Auth0 Token Vault to securely connect to the Gmail API.

Recognito Vision

Why Businesses Are Investing in Deepfake Detection Tools to Stop AI-Generated Fraud

Remember when “seeing is believing” used to be the rule? Not anymore. The world is now facing an identity crisis, digital identity that is. As artificial intelligence advances, so do the fraudsters who use it. Deepfakes have gone from internet curiosities to boardroom threats, putting reputations, finances, and trust at risk.

Businesses worldwide are waking up to the danger of manipulated media and turning toward deepfake detection tools as a line of defense. These systems are becoming the business equivalent of a truth serum, helping companies verify authenticity before deception costs them dearly.

 

What Makes Deepfakes So Dangerous

A deepfake is an AI-generated video, image, or audio clip that convincingly mimics a real person. Using neural networks, these fakes can replicate facial movements, voice tones, and gestures so accurately that even experts struggle to tell them apart.

The technology itself isn’t inherently bad. In entertainment, it helps de-age actors or create realistic video games. The problem arises when it’s used for fraud, misinformation, or identity theft. A 2024 report by cybersecurity analysts revealed that over 40% of businesses had encountered at least one deepfake-related fraud attempt in the last year.

Common use cases that keep executives awake at night include:

- Fake video calls where “executives” instruct employees to transfer money
- Synthetic job interviews where fraudsters impersonate real candidates
- False political or corporate statements circulated to damage reputations

 

How Deepfake Detection Technology Works

The idea behind deepfake detection technology is simple: spot what looks real but isn’t. The execution, however, is complex. Detection systems use advanced machine learning and biometrics to analyze videos, images, and audio clips at a microscopic level.

Here’s a breakdown of common detection methods:

- Pixel Analysis: examines lighting, shadows, and unnatural edges to identify visual manipulation.
- Audio-Visual Sync: checks for lip and speech mismatches to flag voice-over imposters.
- Facial Geometry Mapping: tracks eye movement and micro-expressions to validate natural human patterns.
- Metadata Forensics: inspects hidden file data to detect tampering or file regeneration.

These methods form the core of most deepfake detection software. They look for details invisible to the human eye, like the way light reflects in a person’s eyes or how facial muscles move during speech. Even the slightest irregularity can trigger a red flag.
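
To illustrate how such systems combine these signals, here is a minimal, hedged sketch in Python. The per-technique scores and fusion weights are placeholders standing in for trained machine-learning models; no vendor's actual pipeline is implied.

```python
# A minimal sketch of signal fusion across the four techniques above.
# The scoring inputs and weights are illustrative; real systems derive
# them from trained models, not hand-set constants.

from dataclasses import dataclass

@dataclass
class MediaSignals:
    pixel_anomaly: float          # 0..1, from lighting/edge analysis
    av_sync_error: float          # 0..1, lip/speech mismatch
    face_geometry_anomaly: float  # 0..1, unnatural micro-expressions
    metadata_suspicion: float     # 0..1, signs of file regeneration

def deepfake_score(s: MediaSignals) -> float:
    """Weighted fusion of per-technique anomaly scores (weights are illustrative)."""
    return (0.30 * s.pixel_anomaly
            + 0.25 * s.av_sync_error
            + 0.30 * s.face_geometry_anomaly
            + 0.15 * s.metadata_suspicion)

signals = MediaSignals(0.7, 0.4, 0.6, 0.2)
flagged = deepfake_score(signals) > 0.5  # threshold would be tuned on real data
print(round(deepfake_score(signals), 3), flagged)
```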

 

Deepfake Detection in Corporate Security

For organizations, adopting a deepfake detector isn’t just a security upgrade, it’s a necessity. Financial institutions, identity verification providers, and digital platforms are integrating these solutions to prevent fraud in real time.

A growing number of companies have fallen prey to AI-generated fraud, with criminals using fabricated voices or videos to trick employees into approving transactions. One European company reportedly lost $25 million after a convincing fake video call with their “CFO.” That’s not a Hollywood plot; it’s a real-world case.

Businesses now use deepfake facial recognition and deepfake image detection tools to verify faces during high-risk transactions, onboarding, and identity verification. By combining biometric data with behavioral analytics, these tools make it nearly impossible for fakes to pass undetected.

 

Real-World Examples of Deepfake Fraud

 

- Finance: A multinational bank used a deepfake detection tool to validate executive communications. Within six months, it blocked three fraudulent video call attempts that mimicked senior leaders.
- Recruitment: HR departments now use deepfake detection software to confirm job candidates are who they claim to be. AI-generated interviews have become a growing issue in remote hiring.
- Social Media: Platforms like Facebook and TikTok rely on deepfake face recognition systems to automatically flag and remove fake celebrity or political videos before they go viral.

Each case reinforces a key truth: deepfakes aren’t just a cybersecurity issue, they’re a trust issue.

 

 

Challenges in Detecting Deepfakes

Even with cutting-edge tools, detecting deepfakes remains a technological tug-of-war. Every time detection systems advance, generative AI models evolve to bypass them, creating an ongoing race between innovation and deception. Businesses face several persistent challenges in this fight.

One major issue is evolving algorithms, as AI models constantly learn new tricks that make fake content appear more authentic. Another key challenge is data bias, where systems trained on limited datasets may struggle to perform accurately across different ethnicities or under varied lighting conditions.

Additionally, high processing costs remain a concern, as real-time deepfake detection requires powerful hardware and highly optimized algorithms. On top of that, privacy concerns also play a role, since collecting facial data for analysis must align with global data protection laws such as the GDPR.

To address these challenges, open-source initiatives like Recognito Vision GitHub are fostering transparency and collaboration in AI-based identity verification research, helping bridge the gap between innovation and ethical implementation.

 

 

Integrating Deepfake Detection Into Identity Verification

Deepfakes pose the greatest risk to identity verification systems. Fraudsters use synthetic faces and voice clips to bypass onboarding checks and exploit weak verification processes.

To counter this, many companies integrate deepfake detection models with liveness detection: systems that determine whether a face belongs to a live human being or a static image. By tracking subtle movements like blinking, breathing, or pupil dilation, these systems make it much harder for fake identities to pass.

If you’re interested in testing how liveness verification works, explore Recognito’s face liveness detection SDK and face recognition SDK. Both provide tools to identify fraud attempts during digital onboarding or biometric verification.
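
As a concrete illustration of the blink-tracking idea, the sketch below implements the eye-aspect-ratio (EAR) heuristic commonly used for blink detection. It assumes face landmarks are extracted upstream by some detector; the threshold and frame counts are illustrative and are not taken from any SDK mentioned above.

```python
# A minimal sketch of one liveness signal: blink detection via the eye
# aspect ratio (EAR). Landmark extraction is assumed to happen upstream;
# this code only shows the EAR computation and a simple blink counter.

import math

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """EAR over 6 eye landmarks p1..p6; the value drops sharply when the eye closes."""
    v1 = math.dist(eye[1], eye[5])  # vertical distance p2-p6
    v2 = math.dist(eye[2], eye[4])  # vertical distance p3-p5
    h = math.dist(eye[0], eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series: list[float], threshold: float = 0.21,
                 min_frames: int = 2) -> int:
    """Count blink events: EAR below threshold for at least min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# A static photo produces a flat EAR series and zero blinks; a live face does not.
print(count_blinks([0.30, 0.29, 0.18, 0.17, 0.28, 0.31]))  # -> 1
```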

 

The Business Case for Deepfake Detection Tools

So why are companies investing heavily in this technology? Because it directly protects their money, reputation, and compliance status.

 

1. Fraud Prevention

Deepfakes enable social engineering attacks that traditional security systems can’t catch. Detection tools provide a safeguard against voice and video scams that target executives or employees.

2. Compliance with Data Regulations

Laws like GDPR and other digital identity regulations require companies to verify authenticity. Using deepfake detection technology supports compliance by ensuring every identity is legitimate.

3. Brand Integrity

One fake video can cause irreversible PR damage. Detection systems help safeguard brand image by filtering manipulated media before it spreads.

4. Consumer Confidence

Customers feel safer when they know your brand can identify real users from digital imposters. Trust is the new currency of business.

 

 

Popular Deepfake Detection Solutions in 2025

- Reality Defender: multi-layer AI detection, ideal for financial institutions.
- Deepware Scanner: video and image verification, ideal for cybersecurity firms.
- Sensity AI: online content monitoring, ideal for social platforms.
- Microsoft Video Authenticator: frame-by-frame confidence scoring, ideal for government and enterprise use.

For businesses that want to experiment with AI-based face authentication, the Face biometric playground provides an interactive environment to test and understand how facial recognition and deepfake facial recognition systems perform under real-world conditions.

 

What’s Next for Deepfake Detection

The war between creation and detection is far from over. As generative AI improves, the line between real and fake will blur further. However, one thing remains certain: businesses that invest early in deepfake detection tools will be better prepared.

Future systems will likely combine blockchain validation, biometric encryption, and AI-powered forensics to ensure content authenticity. Collaboration between regulators, researchers, and businesses will be crucial to staying ahead of fraudsters.

 

Staying Real in a World of Fakes

The rise of deepfakes is rewriting the rules of digital trust. Businesses can no longer rely on human judgment alone. They need technology that looks beneath the surface, into the data itself.

Recognito is one of the pioneers helping organizations build that trust through reliable and ethical deepfake detection solutions, ensuring businesses stay one step ahead in an AI-powered world where reality itself can be rewritten.

 

Frequently Asked Questions

 

1. How can deepfake detection protect businesses from fraud?

Deepfake detection identifies fake videos or audio before they cause financial or reputational damage, protecting companies from scams and impersonation attempts.

 

2. What is the most accurate deepfake detection technology?

The most accurate systems combine biometric analysis, facial geometry mapping, and liveness detection to verify real human behavior.

 

3. Can deepfake detection software identify audio fakes too?

Yes, modern tools analyze pitch, tone, and rhythm to detect audio deepfakes along with visual ones.

 

4. Is deepfake detection compliant with data protection laws like GDPR?

Yes, when implemented responsibly. Businesses must process biometric data securely and follow data protection regulations.

 

5. How can companies start using deepfake detection tools?

Organizations can integrate off-the-shelf detection and liveness solutions into their existing identity verification systems to enhance security and prevent fraud.


Ocean Protocol

Ocean Community Tokens are the Property of Ocean Expeditions

oceanDAO (now Ocean Expeditions) is not a party to the ASI Alliance Token Merger Agreement, and Ocean community tokens are not the property of the ASI Alliance.

By: Ocean Protocol Foundation

On Oct 9, 2025, in an X Space, responding to the withdrawal of the Ocean Protocol Foundation from the ASI Alliance, Sheikh said:

“You don’t try and steal from the community and get away with it that quickly, because we’re not going to just let it go, right? In the sense that, if you didn’t want to be part of the community, why did you then go into the token which belonged to the community, or which belonged to the alliance?”

This statement is false, misleading, and libelous, and this blogpost will demonstrate why.

The only three parties to the ASI Alliance are Fetch.ai Foundation (Singapore), Ocean Protocol Foundation (Singapore) and SingularityNET Foundation (Switzerland).

Neither oceanDAO nor Ocean Expeditions is a party to the ASI Alliance Token Merger Agreement.

This fact, that oceanDAO (now Ocean Expeditions) is a wholly independent 3rd party from Ocean, was disclosed (Section §6) to Fetch and SingularityNET in May 2024 as part of the merger discussions.

Sheikh appears to deliberately conflate the Ocean Protocol Foundation with oceanDAO, as a tactic to mislead the community. To be clear, oceanDAO is a separate organisation that was formed in 2021 and then incorporated as Ocean Expeditions in June 2025. The reasons for this incorporation have been set out in an earlier blog post here: (https://blog.oceanprotocol.com/the-asi-alliance-from-oceans-perspective-f7848b2ad61f)

The Ocean community treasury remains in the custodianship of Ocean Expeditions guardians via a duly established, wholly legal trust in the Cayman Islands.

Every $FET token holder has sovereign property rights over its own tokens and is not answerable to the ASI Alliance as to what it does with its tokens.

Ocean Expeditions has no legal obligations to the ASI Alliance. Rather, the ASI Alliance has a clear obligation towards Ocean Expeditions as a token holder.

As a reminder relating to Fetch.ai obligations under the Token Merger Agreement, Fetch.ai is under a legally binding obligation to inject the remaining 110.9 million $FET into the $OCEAN:$FET token bridge and migration contract, and keep them available for any $OCEAN token holder who wishes to exercise their right to convert to $FET. To date, this obligation remains unmet. Fetch.ai must immediately execute this legally mandated action.

Any published information regarding this matter, unless confirmed officially by Ocean Protocol Foundation, should be assumed false.

We also request that Fetch.ai, Sheikh and all other ASI Alliance spokesmen refrain from confusing the public with false, misleading and libelous allegations that any tokens have been in any way “stolen”.

The $FET tokens Sheikh refers to are safely with Ocean Expeditions, for the sole benefit of the Ocean community.

Q&A

Q: There has recently been talk of Ocean “returning” tokens to ASI Alliance, through negotiated agreement. What’s that about?

A: This is complete nonsense. There are no tokens to return because no tokens were “stolen” or “taken”. Accordingly, it would make no sense to “return” any such tokens.

Ocean Community Tokens are the Property of Ocean Expeditions was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

Securing AI Agents: Mitigate Excessive Agency with Zero Trust Security

Learn how to secure your AI agents to prevent Excessive Agency, a top OWASP LLM vulnerability, by implementing a Zero Trust model.

Sunday, 26. October 2025

Ockam

What Is a Brand?

The brand filter that makes or breaks every sale

Continue reading on Medium »

Friday, 24. October 2025

Spruce Systems

A Practical Checklist to Future-Proof Your State’s Digital Infrastructure

From vendor lock-in to privacy compliance, the path to digital modernization is full of trade-offs. This checklist gives state decision-makers a practical framework for evaluating emerging identity technologies and aligning with open-standards best practices.

State IT modernization is a perpetual challenge. For new technologies like verifiable digital credentials (secure, digital versions of physical IDs), this presents a classic "chicken and egg" problem: widespread adoption by residents and businesses is necessary to justify the investment, but that adoption won't happen without a robust ecosystem of places to use them. How can states ensure the significant investments they make today will build a foundation for a resilient and trusted digital future?

State IT leaders face increasing pressure to modernize aging infrastructure, combat rising security threats, and overcome stubborn data silos. These challenges are magnified by tight budgets and the pervasive risk of vendor lock-in. With a complex landscape of competing standards, making the right strategic decision is more difficult than ever. This uncertainty stifles the growth needed for a thriving digital identity ecosystem. The drive for modernization is clear: according to industry research, over 65% of state and local governments are on a digital transformation journey.

Here, we'll offer a clear, actionable framework for state technology decision-makers: a practical checklist to evaluate technologies on their adherence to open standards. By embracing these principles, states can make informed choices that foster sustainable innovation and avoid costly pitfalls, aligning with a broader vision for open, secure, and interoperable digital systems that empower citizens and governments alike.

The Risks of Niche Technology

Choosing proprietary or niche technologies can seem like a shortcut, but it often leads to a dead end. These systems create hidden costs that drain resources and limit a state's ability to adapt. The financial drain extends beyond initial procurement to include escalating licensing fees, expensive custom integrations, and unpredictable upgrade paths that leave little room for innovation.

Operationally, these systems create digital islands. When a new platform doesn't speak the same language as existing infrastructure, it reinforces the data silos that effective government aims to eliminate. This lack of interoperability complicates everything from inter-agency collaboration to delivering seamless services to residents. For digital identity credentials, the consequences are even more direct. If a citizen's new digital ID isn't supported across jurisdictions or by key private sector partners, its utility plummets, undermining the entire rationale for the program.

Perhaps the greatest risk is vendor lock-in. Dependence on a single provider for maintenance, upgrades, and support strips a state of its negotiating power and agility. As a key driver for government IT leaders, avoiding vendor lock-in is a strategic priority. Niche systems also lack the broad, transparent community review that strengthens security. Unsupported or obscure software can harbor unaddressed vulnerabilities, a risk highlighted by data showing organizations running end-of-life systems are three times more likely to fail a compliance audit.

Embracing the Power of Open Standards for State IT

The most effective way to mitigate these risks is to build on a foundation of open standards. In the context of IT, an open standard is a publicly accessible specification developed and maintained through a collaborative and consensus-driven process. It ensures non-discriminatory usage rights, community-driven governance, and long-term viability. For verifiable digital credentials, this includes critical specifications like the ISO mDL standard for mobile driver's licenses (ISO 18013-5 and 18013-7), W3C Verifiable Credentials, and IETF SD-JWTs. The principles of open standards, however, extend far beyond digital credentials to all critical IT infrastructure decisions.

Adopting this approach delivers many core benefits for State government. First is enhanced interoperability, which allows disparate systems to communicate seamlessly. This breaks down data silos and improves service delivery, a principle demonstrated by the U.S. Department of State's Open Data Plan, which prioritizes open formats to ensure portability. Second, open standards foster robust security. The transparent development process allows for broad community review, which leads to faster identification of vulnerabilities and more secure, vetted protocols.

Third, they provide exceptional adaptability and future-proofing. By reducing vendor lock-in, open standards enable states to easily upgrade systems and integrate new technologies without costly overhauls. This was the goal of Massachusetts' pioneering 2003 initiative to ensure long-term control over its public records. Fourth is significant cost-effectiveness. Open standards foster competitive markets, reducing reliance on expensive proprietary licenses and enabling the reuse of components. For government agencies, cost reduction is a primary driver for adoption.

Finally, this approach accelerates innovation. With 96% of organizations maintaining or increasing their use of open-source software, it is clear that shared, stable foundations create a fertile ground for a broader ecosystem of tools and expertise.

The State IT Open Standards Checklist

This actionable checklist provides clear criteria for state IT leaders, procurement officers, and policymakers to evaluate any new digital identity technology or system. Use this framework to ensure technology investments are resilient, secure, and future-proof.

- Ability to Support Privacy Controls: Does the technology inherently support all state privacy controls, or can a suitable privacy profile be readily created and enforced? Technologies that enable privacy-preserving techniques like selective disclosure and zero-knowledge proofs are critical for building public trust.
- Alignment with Use Cases: Does the standard enable real-world transactions that are critical to residents and relying parties? This includes everything from proof-of-age for controlled purchases and access to government benefits to streamlined Know Your Customer (KYC) checks that support Bank Secrecy Act modernization.
- Ecosystem Size and Maturity: Does the standard have a healthy base of adopters? Look for active participation from multiple vendors and demonstrated investment from both public and private sectors. A mature ecosystem includes support from major platforms like Apple Wallet and Google Wallet, indicating broad market acceptance.
- Number of Vendors: Are there multiple independent vendors supporting the standard? A competitive marketplace fosters innovation, drives down costs, and is a powerful defense against vendor lock-in.
- Level of Investment: Is there clear evidence of sustained investment in tools, reference implementations, and commercial deployments? This indicates long-term viability and a commitment from the community to support and evolve the standard. A strong identity governance framework depends on this long-term stability.
- Standards Body Support: Is the standard governed by a credible and recognized standards development organization? Bodies like ISO, W3C, IETF, and the OpenID Foundation ensure a neutral, globally-vetted process that builds consensus and promotes stability.
- Interoperability Implementations: Has the standard demonstrated successful cross-vendor and cross-jurisdiction implementations? Look for evidence of conformance testing or a digital ID certification program that validates wallet interoperability and ensures a consistent user experience.
- Account/Credential Compromise and Recovery: How does the technology handle worst-case scenarios like stolen private keys or lost devices? Prioritize standards that support a robust VDC lifecycle, including credential revocation. A clear process for credential revocation, such as using credential status lists, is essential for maintaining trust.
- Scalability: Has the technology been proven in scaled, production use cases? Assess whether scaling requires custom infrastructure, which increases operational risk, or if it relies on standard, well-understood techniques. Technologies that align with established standards like NIST SP 800-63A digital identity at IAL2 or IAL3, and leverage proven cloud architectures, offer a more reliable path to large-scale deployment.

Building for tomorrow, today

The strategic shift towards globally supported open standards is not just a technological choice; it is a critical imperative for states committed to modernizing responsibly and sustainably. It is the difference between building disposable applications and investing in durable digital infrastructure.

By adopting this forward-thinking mindset and leveraging the provided checklist, state IT leaders can confidently navigate the complexities of digital identity procurement. This approach empowers states to build resilient, secure, and adaptable IT infrastructure that truly future-proofs public services.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.


Innopay

INNOPAY Sponsors GLEIF’s Global Forum on Digital Organisational Identity & vLEI Hackathon 2025

Trudy Zomer | 24 October 2025 - 16:11 | Frankfurt, Germany | Event date: 2 December 2025

On 2 December 2025, the Global Legal Entity Identifier Foundation (GLEIF) will host its Global Forum on Digital Organizational Identity in Frankfurt, featuring the Grand Finale of the vLEI Hackathon. INNOPAY is proud to sponsor this initiative.

The Global Legal Entity Identifier Foundation (GLEIF) invites developers, entrepreneurs, and innovators worldwide to harness the power of digital organizational identity and redefine how trust is established in the digital economy.

As industries accelerate their digital transformation, the demand for digital organisational identity is greater than ever. Spanning machine-to-machine interactions and next-generation business wallets, the verifiable Legal Entity Identifier (vLEI) unlocks new opportunities for transparency, automation, and compliance.

On the event day, the finalists of the second theme of the vLEI Hackathon, which focuses on Industry 4.0, will present their technical solutions. At the end of the event, the winner and runner-up will be officially announced.

More information about the vLEI Hackathon can be found on the GLEIF website: https://www.gleif.org/en/newsroom/events/gleif-vlei-hackathon-2025


uquodo

Enhancing Risk Insights by Integrating KYC Data with Transaction Monitoring

The post Enhancing Risk Insights by Integrating KYC Data with Transaction Monitoring appeared first on uqudo.

IDnow

The true face of fraud #1: The masterminds behind the $1 trillion crime industry.

Fraud today is the work of gangs that operate across borders. Worth more than $1 trillion, this industry doesn’t just steal money – it destroys lives. In the first part of our fraud series, we explore who is behind today’s fastest-growing schemes, where the hubs of so-called scam compounds are located, and what financial organizations must understand about their opponent.

For decades, pop culture painted fraudsters as solitary figures hunched over laptops in darkened rooms. That stereotype is not only wrong, it is dangerously outdated. Today’s most damaging scams are orchestrated by global crime syndicates spanning every continent. These networks build operations tens of thousands strong, traffic people, train their “staff” and basically operate like Fortune 100 companies, but their product is deception, and the victims pay the cost.

Their scale is staggering: global fraud reached over $1 trillion in 2024. The numbers, however, tell only part of the story. The fastest-growing schemes today are app-based and social engineering scams, which are also the most common types of fraud affecting banks and financial institutions, causing record losses from reimbursements and compliance costs. 

These attacks not only target systems, but exploit people, undermining trust in financial institutions, regulators, and courts, while also supporting human trafficking and forced labor. Behind every fake investment ad or romance scam lies a darker reality: compounds where people are held captive and forced to defraud strangers across the world. 

Global scam centres: Where to find them 

When most people think of criminal gangs, they imagine shadowy figures operating from jungles or remote hideouts. But the criminals behind the world’s largest fraud rings work very differently. These aren’t small-time operations running in the dark; they’re industrial-scale enterprises operating in plain sight.

Their structure closely mirrors that of legitimate businesses, with executives overseeing operations, middle managers coaching employees and tracking KPIs, and frontline workers executing scams via phone, social media or messaging apps.

Their facilities are not hidden in basements. They are large, purpose-built sites, often converted from former hotels, casinos or business parks. Located primarily in Southeast Asia – in Cambodia, Myanmar, Vietnam, and the Philippines – but increasingly also in Africa and Eastern Europe, these complexes can be vast. Investigators have uncovered huge compounds where hundreds of people work in rotating shifts, day and night. Some sites are so large they have been described as “villages,” covering dozens of acres, with syndicates often running multiple locations across regions. At scale, this means a single network can control thousands of people. 

However, not all of the people who work for syndicates on site are there voluntarily. In fact, most of the front-line workers and call centre agents are victims of human trafficking. Lured by the promise of big money and escape from poverty, they travel across borders, only to find themselves kidnapped, captured and coerced into deceiving others.

Life inside scam compounds: A prison disguised as an office 

On-site facilities are designed to sustain a captive workforce. They include dormitories, shops, entertainment rooms, kitchens and even small clinics. On the surface, these amenities might resemble employee perks, and for vulnerable recruits from poorer backgrounds, they can even sound appealing, but the reality is dark: rows of desks, bunkrooms stacked with beds, CCTV cameras monitoring every corner, kitchens feeding hundreds. With razor-wire fences and armed guards at the gates, these compounds look more like prisons than offices. And in many ways, that is exactly what they are.

The “masterminds” of the crime ecosystem 

Behind the compounds lies a web of transnational operators and a shadow service economy. The organisers of these operations come in many forms – from criminal entrepreneurs diversifying from drugs to online scams, to networks linked with regional crime groups such as Southeast Asian gangs, Chinese or Eastern European syndicates, and illicit operators tied to South American cartels. In some places, politically connected actors or local elites profit from – and even protect – these operations, ensuring they continue with little interference. 

Another layer consists of companies that appear legitimate on paper but in reality supply the infrastructure that keeps the fraud industry running: phone numbers, fake identity documents, shell firms and payment processors willing to handle high-risk transactions. Investigations have uncovered how underground service providers and proxy accounts help scammers move victims’ money through banks and into crypto using fake invoices and front companies as cover.

It’s an industrial-scale business model: acquisition channels built on fake ads, call centres with scripts and a laundering pipeline powered by mules, shell companies and crypto gateways. The setup is remarkably resilient – shut down one centre or payment route, and the network simply reroutes through another provider or jurisdiction. 

How fraud hurts banks and other financial companies  

For banks and financial firms, the impact is severe. Direct financial losses and costs to financial institutions are significant and rising. Banks, fintechs and credit unions report substantial direct fraud losses: nearly 60% reported losing over $500k in direct fraud in a 12-month period and a large share reported losses over $1m. These trends force firms to allocate budget away from growth into loss-prevention and remediation.  

Payment fraud at scale also increases operational and compliance costs. For example, in 2022, payment fraud in the European Economic Area was reported at €4.3 billion, and consumer-reported losses in other jurisdictions show multi-billion-dollar annual impacts that increase every year – all of which ripple into higher Suspicious Activity Report (SAR) volumes, Anti-Money Laundering (AML) investigations and strained dispute and reimbursement processes for banks. These costs are both direct (reimbursed losses) and indirect (investigation time, compliance staffing, fines, customer churn and reputational damage).

Banks face a daily balancing act: tighten controls and risk frustrating customers or loosen them and risk becoming a target. Either way, regulators demand ever-stronger safeguards. And even though stronger authentication and checks can increase drop-offs during onboarding or transactions, failure to comply risks exposure to legal and regulatory trouble (recent cases tied to payment rails illustrate how banks can face large remediation obligations and lawsuits if controls are perceived as inadequate). 

The long-term consequences, however, go beyond operational complexity. Fraud undermines customer trust, which is the foundation of finance. It increases costs, slows innovation and forces financial institutions to redesign products with restrictions that customers feel but rarely understand. And this can lead to a long-term loss of market share. 

What financial institutions must understand about the opponent 

Banks are not fighting individual perpetrators. They are facing industrialized criminal organizations. To defeat them, defensive measures must also be organized accordingly. 

This means moving beyond isolated controls toward systemic resilience: robust fraud checks, stronger identity verification, continuous monitoring, transaction orchestration and faster coordination with law enforcement. But technology alone is not enough. Collaboration across institutions and industries is crucial to disrupt fraud networks that operate globally. 

How financial organizations can protect themselves against financial crime 

Financial firms should invest in multi-layered identity checks combining document, liveness and behavioral signals (like the ones offered by IDnow); integrate real-time AML orchestration to flag mule activity early (like the soon-to-be-launched IDnow Trust Platform); and participate in intelligence-sharing networks that connect patterns across borders.  

Fraud is no longer a fringe crime. It’s a billion-dollar corporate machine. To dismantle it, financial institutions must shift from investigating fraud after it happens to preventing it before it strikes, stopping both criminals and socially engineered victims before any loss occurs. 

By

Nikita Rybová
Customer & Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


Thales Group

Thales Alenia Space strengthens Spanish space industry leadership through its participation in SpainSat NG II satellite

24 Oct 2025 - The Spanish secure communications satellite SpainSat NG II, successfully launched from Cape Canaveral, Florida, will provide services to the Armed Forces and international organizations such as the European Commission and NATO, as well as to the governments of allied countries. Thales Alenia Space, together with Airbus Defence and Space, has led the development and construction of the SpainSat NG satellites. The company, involved in different areas of the project, has been responsible for the integration of the Communication Module for both satellites along with Airbus in a dedicated clean room built for this purpose at its Tres Cantos facilities in Madrid, making it the largest satellite system ever assembled in Spain to date.

 

Madrid, October 24, 2025 - The secure communications satellite SpainSat NG II has successfully been launched by a SpaceX Falcon 9 rocket from Cape Canaveral, Florida. The SPAINSAT NG system will ensure secure communications for the Spanish Armed Forces and allied countries for decades to come.

SPAINSAT NG, a program led, owned and operated by Hisdesat Servicios Estratégicos S.A., is considered to be the most ambitious space project in Spain’s history, both for its performance and for the outstanding involvement of the national industry. The SpainSat NG satellites are among the most advanced telecommunications satellites in the world. They operate from geostationary orbit in the X, military Ka and UHF frequency bands, used for high-throughput secure communications, enabling the provision of dual, secure and resilient services to the Spanish Armed Forces, as well as to international organizations such as the European Commission, NATO, and allied countries.

 

© Airbus

Thales Alenia Space, a joint venture between Thales (67%) and Leonardo (33%), together with Airbus Defence and Space, has led the execution and construction of the satellites. In Spain, the company has been responsible for the UHF and military Ka-band payloads and, together with Airbus, for the integration of the Communication Modules, which carry the communication payloads and form the core of the satellites’ advanced technological capabilities.

 

Ismael López, CEO of Thales Alenia Space in Spain, said: “This launch marks the culmination of a transformative project for the Spanish space industry. We thank Hisdesat and the Ministry of Defense for the trust placed in our company to lead, for the first time in Spain, the development of the communications payloads for the SPAINSAT NG geostationary satellites. For our teams in Madrid, successfully overcoming a challenge of this magnitude places the national space industry on a higher level.”

 

State-of-the-art satellite technology in Madrid

To carry out this mission, the company built a satellite assembly and integration clean room at its Tres Cantos site in Madrid, inaugurated in 2021 and specifically designed to integrate the communication modules of both satellites. These cutting-edge facilities make it possible to integrate and test large-scale, highly complex satellite systems, capabilities that until now were within the reach of only a few space powers worldwide.

For the first time in Spain, these facilities have enabled the integration of a module weighing more than 2 tons and measuring 6 meters in height, fully equipped with cutting-edge space communications technology and comprising hundreds of sophisticated electronic units.

Additionally, Thales Alenia Space has designed and manufactured in Spain, France, Italy, and Belgium over 200 electronic and radiofrequency units that are an integral part of the communications payloads and the satellite's telecommand and telemetry system. Among them are the UHF processor, the core of the UHF payload; the Digital Transparent Processor (DTP) that interconnects the payloads in the X and military Ka bands; and the Hilink unit, responsible for providing a high-speed service link that will facilitate a quick reconfiguration of the payloads.
 

About Thales Alenia Space

Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental monitoring, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the Space Alliance, which offers a complete range of solutions including services. Thales Alenia Space posted consolidated revenues of €2.23 billion in 2024 and has more than 8,100 employees in 7 countries with 14 sites in Europe.


Thales named Frost & Sullivan’s 2025 company of the year in Automated Border Control

24 Oct 2025 - Thales, a global leader in cybersecurity and identity management, has been recognized by Frost & Sullivan as Company of the Year 2025 in the Automated Border Control (ABC) eGates industry. This award underscores Thales’s ability to consistently innovate, deliver scalable and sustainable solutions, and set new standards for border security and passenger experience worldwide. Frost & Sullivan emphasized Thales’s proven ability to deliver at scale, its close collaboration with ministries of interior and airport operators, and its forward-looking strategy that integrates regulatory compliance, digital identity, and sustainability.

As governments and airports face rising passenger volumes, new regulatory requirements such as the European Entry Exit System (EES), and evolving security threats, Thales’s solutions help strike the right balance between strong security and traveller convenience. Frost & Sullivan praised Thales’s end-to-end expertise, customer focus, and ability to design systems that are modular, eco-conscious, and powered by advanced biometrics and artificial intelligence.

At the heart of Thales’s innovation is the traveller experience. A border crossing that once meant long queues and stressful procedures can now be completed in less than 15 seconds. A passenger simply presents their passport, looks briefly at a camera, and passes through an automated gate thanks to biometric verification and real-time checks.

This simplicity is supported by advanced technology:

- Cybersecurity by design, embedding data protection, privacy by default, and role-based access control to ensure secure, compliant and resilient identity verification.
- Multi-modal biometrics (face, fingerprint, iris) with AI-driven accuracy and liveness detection.
- Flexible, modular eGates adaptable to any border or airport environment.
- Digital identity frameworks aligned with international standards.
- Sustainable engineering, using lightweight, recycled materials and designs that extend product life.

From a deployment perspective, Thales strengthens its leadership in border security with a diversified global footprint, operating across numerous border points worldwide. The company’s expansive presence spans Europe, the Middle East, Latin America, Africa, and North America, with landmark projects including the deployment of hundreds of eGates and self-service kiosks in countries such as France, Spain, and Belgium.

These deployments highlight the positive impact on citizens: smoother journeys, less time waiting, and greater trust in border security. For governments and border agencies, the result is higher throughput, enhanced resilience, and full regulatory compliance. For airports, it means more efficient operations and stronger passenger satisfaction.

“We are honored to receive this recognition from Frost & Sullivan. At Thales, we believe that security and passenger experience must go hand in hand. From France to India, our border control solutions allow millions of travellers to cross borders every day with greater speed, trust, and confidence. This award reflects the dedication of our teams and our commitment to helping governments and airports around the world shape the future of secure, seamless travel,” commented Emmanuel Wang, Border & Travel Director at Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.


Aergo

HPP Update: Technology Ready, Market Expansion Underway

After months of preparation, the HPP mainnet is live, our core technologies are stable, and we are now entering the most exciting phase of our journey — growth and adoption.

1. Technical Milestones Achieved

All major technical roadmaps have been met and are now production-ready. This includes the mainnet launch and multiple project integrations across the HPP ecosystem. The network is built for scale, equipped for cross-chain connectivity, and ready for full activation.

2. Migration and Market Readiness

The migration infrastructure, which includes the official bridge and migration portal, is complete and fully tested. Legacy Aergo and AQT token holders will be able to transition seamlessly into HPP through a secure, verifiable process designed to ensure accuracy and transparency across chains.

With the full network framework in place, HPP is now entering the growth and liquidity phase. We are in coordination with several major exchanges to align token listings, update technical integrations, and synchronize branding across trading platforms. These efforts aim to create a strong, sustainable market structure that supports institutional participation, community accessibility, and long-term ecosystem stability.

3. Building a Real-World Breakthrough

We are developing one of the most significant blockchain real-world use cases to date. This initiative combines a large user base, mission-critical data, and enterprise-grade requirements. It will demonstrate how our L2 infrastructure can power high-value, data-driven applications that go beyond typical blockchain use cases.

At the same time, we are working with enterprise partners, including early Aergo collaborators, to adopt HPP’s advanced features through the Noosphere layer.

4. Keeping You Updated

To ensure full transparency, we are continuously updating our HPP Living Roadmap, a real-time tracker that shows ongoing technical progress, upcoming milestones, and partner developments as they happen.

The technology is ready, the ecosystem is forming, and the next phase is set to begin. HPP is moving from readiness to execution, and the wait is almost over.

HPP Update: Technology Ready, Market Expansion Underway was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 23. October 2025

Spruce Systems

The Technology Powering Digital Identity

This article is the third installment of our series: The Future of Digital Identity in America.

Read the first installment in our series on The Future of Digital Identity in America here and the second installment here.

If policy sets the rules of the road, technology lays the pavement. Without strong technical foundations, decentralized identity would remain an inspiring vision but little more. What makes it real are the advances in cryptography, open standards, and system design that let people carry credentials in their own wallets, present them securely, and protect their privacy along the way. These technologies aren’t abstract: they are already running in production, powering mobile driver’s licenses, digital immigration pilots, and cross-border banking use cases.

Why Technology Matters for Identity

Identity is the trust layer of the digital world. Every interaction - logging into a platform, applying for a loan, proving eligibility for benefits - depends on it. Yet today, that trust layer is fractured. We scatter our identity across countless accounts and passwords. We rely on federated logins controlled by Big Tech platforms. Businesses pour money into fraud prevention while governments struggle to verify citizens securely.

The costs of this fragmentation are staggering. In 2024 alone, Americans reported record losses of $16.6 billion to internet crime (FBI IC3) and $12.5 billion to consumer fraud (FTC). At the institutional level, the average cost of a U.S. data breach hit $10.22 million in 2025 (IBM). And the risks are accelerating: synthetic identity fraud drained an estimated $35 billion in 2023 (Federal Reserve), while FinCEN has warned that criminals are now using deepfakes, synthetic documents, and AI-generated audio to bypass traditional checks at scale.

Decentralized identity offers a way forward; but, only if the technology can make it reliable, usable, and interoperable. That’s where verifiable credentials, decentralized identifiers, cryptography, and open standards come in.

The Standards that Make it Work

Every successful infrastructure layer in technology—whether it was TCP/IP for the internet or HTTPS for secure web traffic—has been built on standards. Decentralized identity is no different. Standards ensure that issuers, holders, and verifiers can interact without building one-off integrations or relying on proprietary systems.

Here are the key ones shaping today’s decentralized identity landscape:

- W3C Verifiable Credentials (VCs): This is the universal data model for digital credentials. A VC is essentially a cryptographically signed digital version of something like a driver’s license, diploma, or membership card. It defines how the credential is structured (with attributes, metadata, and signatures) so that anyone who receives it knows how to parse and verify it.
- Decentralized Identifiers (DIDs): DIDs are globally unique identifiers that are cryptographically verifiable and not tied to any single registry. Unlike email addresses or usernames, which depend on central providers, a DID is self-sovereign. For example, a university might issue a credential to did:example:university12345. The DID resolves to metadata (such as public keys) that allows verifiers to check signatures and authenticity.
- OID4VCI and OID4VP (OpenID for Verifiable Credential Issuance and Presentation): These protocols define how credentials move between systems. They extend OAuth2 and OpenID Connect, the same standards that handle billions of secure logins each day. With OID4VCI, you can request and receive a credential securely from an issuer. With OID4VP, you can present that credential to a verifier. This reuse of familiar login plumbing makes adoption easier for developers and enterprises.
- SD-JWT (Selective Disclosure JWTs): A new extension of JSON Web Tokens that enables selective disclosure directly within a familiar JWT format. Instead of revealing all fields in a token, SD-JWTs let the holder decide which claims to disclose, while still allowing the verifier to check the issuer’s signature. This bridges modern privacy-preserving features with the widespread JWT ecosystem already in use across industries.
- ISO/IEC 18013-5 and 18013-7: These international standards define how mobile driver’s licenses (mDLs) are presented both in person and online. For example, 18013-5 specifies the NFC and QR code mechanisms for proving your identity at a checkpoint without handing over your phone. 18013-7 expands these definitions to online use cases—critical for remote verification scenarios.
- ISO/IEC 23220-4 (mdocs): A broader framework for mobile documents (mdocs), extending beyond driver’s licenses to other government-issued credentials like passports, resident permits, or voter IDs. This standard provides a consistent way to issue and verify digital documents across multiple contexts, supporting both offline and online verification.
- NIST SP 800-63-4: The National Institute of Standards and Technology publishes the “Digital Identity Guidelines,” setting out levels of assurance (LOAs) for identity proofing and authentication. The latest revision reflects the shift toward verifiable credentials and modern assurance methods. U.S. federal agencies and financial institutions often rely on NIST guidance as their baseline for compliance.

Reading the list above, you may realize that one challenge in following this space is the sheer number of credential formats in play—W3C Verifiable Credentials, ISO mDLs, ISO 23220 mdocs, and SD-JWTs, among others. Each has its strengths: VCs offer flexibility across industries, ISO standards are backed by governments and transportation regulators, and SD-JWTs connect privacy-preserving features with the massive JWT ecosystem already used in enterprise systems. The key recommendation for anyone trying to make sense of “what’s best” is not to pick a single winner, but to look for interoperability.

Wallets, issuers, and verifiers should be designed to support multiple formats, since different industries and jurisdictions will inevitably favor different standards. In practice, the safest bet is to align with open standards bodies (W3C, ISO, IETF, OpenID Foundation) and ensure your implementation can bridge formats rather than being locked into just one.

The following sections detail (in a vastly oversimplified way, some may argue) the strengths, weaknesses, and best fit by credential format type.

W3C Verifiable Credentials (VCs)

A flexible, standards-based data model for any kind of digital credential, maintained by the World Wide Web Consortium (W3C).

- Strengths: Broadly applicable across industries, highly extensible, and supports advanced privacy techniques like selective disclosure and zero-knowledge proofs.
- Limitations: Still maturing; ecosystem flexibility can lead to fragmentation without a specific implementation profile; certification programs are less mature than ISO-based approaches; requires investment in verifier readiness.
- Best fit: Used by universities, employers, financial institutions, and governments experimenting with general-purpose digital identity.

ISO/IEC 18013-5 & 18013-7 (Mobile Driver’s Licenses, or mDLs)

International standards defining how mobile driver’s licenses are issued, stored, and verified.

- Strengths: Mature international standards already deployed in U.S. state pilots; supported by TSA TSIF testing for federal checkpoint acceptance; backed by significant TSA investment in CAT-2 readers nationwide; privacy-preserving offline verification.
- Limitations: Narrow scope (focused on driver’s licenses); complex implementation; limited support outside government and DMV contexts.
- Best fit: State DMVs, airports, traffic enforcement, and retail environments handling age-restricted sales.

ISO/IEC 23220-4 (“Mobile Documents,” or mdocs)

A broader ISO definition expanding mDL principles to other official credentials such as passports, residence permits, and social security cards.

Strengths: Extends interoperability to a broader range of credentials; supports both offline and online presentation; aligned with existing ISO frameworks.
Limitations: Still early in deployment; adoption and vendor support are limited compared to mDLs.
Best fit: Immigration, cross-border travel, and civil registry systems.

SD-JWT (Selective Disclosure JSON Web Tokens)

A privacy-preserving evolution of JSON Web Tokens (JWTs), adding selective disclosure capabilities to an already widely used web and enterprise identity format.

Strengths: Easy to adopt within existing JWT ecosystems; enables selective disclosure without requiring new infrastructure or wallets.
Limitations: Less flexible than VCs; focused on direct issuer-to-verifier interactions; limited for long-term portability or offline use.
Best fit: Enterprise identity, healthcare, and fintech environments already built around JWT-based authentication and access systems.

Together, these standards create the backbone of interoperability. They ensure that a credential issued by the California DMV can be recognized at TSA, or that a diploma issued by a European university can be trusted by a U.S. employer. Without them, decentralized identity would splinter into silos. With them, it has the potential to scale globally.

How Trust Flows Between Issuers, Holders, and Verifiers

Decentralized identity works through a triangular relationship between issuers, holders, and verifiers. Issuers (such as DMVs, universities, or employers) create credentials. Holders (the individuals) store them in their wallets. Verifiers (such as banks, retailers, or government agencies) request proofs.

What makes this model revolutionary is that issuers and verifiers don’t need to know each other directly. Trust doesn’t come from an integration between the DMV and the bank, for example. It comes from the credential itself. The DMV signs a driver’s license credential. You carry it. When you present it to a bank, the bank simply checks the DMV’s digital signature.

Think about going to a bar. Today, you hand over a plastic driver’s license with far more information than the bartender needs. With decentralized identity, you would simply present a cryptographic proof that says, “I am over 21,” without revealing your name or address. The bartender’s system verifies the DMV’s signature, and that’s it: proof without oversharing.

Cryptography at Work

To make this work, at the core of decentralized identity lies one deceptively simple but immensely powerful concept: the digital signature.

A digital signature is created when an issuer (say, a DMV or a university) uses its private key to sign a credential. This cryptographic signature is attached to the credential itself. When a holder later presents the credential to a verifier, the verifier checks the signature using the issuer’s public key.

If the credential has been altered in any way—even by a single character—the signature will no longer match. If the credential is valid, the verifier has instant assurance that it really came from the claimed issuer.

This creates trust without intermediaries.

Imagine a university issues a digital diploma as a verifiable credential. Ten years later, you apply for a job. The employer asks for proof of your degree. Instead of calling the university registrar or requesting a PDF, you simply send the credential from your wallet. The employer’s system checks the digital signature against the university’s public key. Within seconds, it knows the credential is genuine.

This removes bottlenecks and central databases of verification services. It also shifts the trust anchor from phone calls or PDFs—which can be forged—to mathematics. Digital signatures are unforgeable without the private key, and the public key can be widely distributed to anyone who needs to verify.
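Here is a minimal sketch of that sign-and-verify flow in Python, using the Ed25519 scheme from the widely used cryptography package. Ed25519 is just one suitable algorithm (real issuers may use ECDSA or others), and the payload here is a toy credential, not a full VC.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuance: the university generates a key pair once and publishes the public key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# The credential payload is serialized deterministically before signing.
credential = {"subject": "did:example:alumna67890", "degree": "BSc Computer Science"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verification, possibly years later: the employer checks with the public key.
try:
    issuer_public_key.verify(signature, payload)
    print("Credential is authentic and untampered.")
except InvalidSignature:
    print("Credential was altered or not issued by this issuer.")

# Changing even one field breaks verification.
tampered = json.dumps({**credential, "degree": "PhD"}, sort_keys=True).encode()
try:
    issuer_public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampered credential rejected.")
```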

Digital signatures also make revocation possible. If a credential is suspended or withdrawn, the issuer can publish a revocation list. When a verifier checks the credential, it not only validates the signature but also checks whether it’s still active.
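One common pattern for this is a bitstring status list: the issuer publishes one long bitstring, each credential carries an index into it, and the verifier simply checks whether its bit is set. The toy sketch below keeps only that core logic; the real W3C Bitstring Status List mechanism also compresses the list, hosts it at a URL named in the credential, and signs it.

```python
# Toy revocation check modeled loosely on bitstring status lists.
status_list = bytearray(16)  # 128 credential slots, all initially valid (bit = 0)

def revoke(index: int) -> None:
    """Issuer flips the bit for a revoked or suspended credential."""
    status_list[index // 8] |= 1 << (index % 8)

def is_revoked(index: int) -> bool:
    """Verifier checks the bit at the index named in the credential."""
    return bool(status_list[index // 8] & (1 << (index % 8)))

revoke(42)                 # issuer suspends credential #42
print(is_revoked(42))      # True  -> reject this credential
print(is_revoked(7))       # False -> still active
```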

Without digital signatures, decentralized identity wouldn’t work. With them, credentials become tamper-proof, portable, and verifiable anywhere.

Selective Disclosure: Sharing Just Enough

One of the major problems with physical IDs is oversharing. As we detailed in the scenario earlier, you only want to show a bartender that you are over 21, without revealing your name, home address, or exact date of birth. That information is far more than the bartender needs—and far more than you should have to give.

Selective disclosure, one of the other major features underpinning decentralized identity, fixes this. It allows a credential holder to reveal only the specific attributes needed for a transaction, while keeping everything else hidden.

Example in Practice: Proving Age

A DMV issues you a credential with multiple attributes: name, address, date of birth, and license number. At a bar, the bartender verifies that you are over 21 by scanning your digital credential’s QR code. The verifier checks the DMV’s signature on the proof and confirms it matches the original credential. The bartender sees only a confirmation that you are over 21; they never see your name, address, or full birthdate.

Example in Practice: Proving Residency

A city issues residents a digital credential for municipal benefits. A service provider asks for proof of residency. You present your digital credential, and the service provider verifies that your zip code is within city limits without exposing your full street address.

Selective disclosure enforces the principle of data minimization. Verifiers get what they need, nothing more. Holders retain privacy. And because the cryptography ensures the disclosed attribute is tied to the original issuer’s signature, verifiers can trust the result without seeing the full credential.

This flips the identity model from “all or nothing” to “just enough.”
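To illustrate the mechanics, here is a simplified sketch of the salted-hash approach SD-JWT takes to selective disclosure: the issuer signs only hashes of the claims, and the holder later reveals the salt and value for just the claims they choose. A real SD-JWT wraps this in a signed JWT with base64url-encoded disclosures; this toy version keeps only the hashing logic.

```python
import hashlib
import json
import secrets

def digest(salt: str, name: str, value) -> str:
    """Hash one claim together with its salt, as the issuer does at issuance."""
    return hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()

# Issuance: each claim gets a random salt; only the digests go into the signed token.
claims = {"name": "A. Doe", "address": "12 Main St", "age_over_21": True}
salts = {name: secrets.token_hex(16) for name in claims}
signed_digests = {digest(salts[n], n, v) for n, v in claims.items()}

# Presentation: the holder discloses a single claim (salt, name, value).
disclosure = (salts["age_over_21"], "age_over_21", True)

# Verification: re-hash the disclosure and confirm it is among the digests the
# issuer signed. The undisclosed claims remain hidden behind their hashes.
assert digest(*disclosure) in signed_digests
print("Issuer vouched for: age_over_21 = True")
```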

Example in Practice: Sanctions Compliance

Under the Bank Secrecy Act (BSA) and OFAC requirements, financial institutions must verify that customers are not on the Specially Designated Nationals (SDN) list before opening or maintaining accounts. Today, this process often involves collecting and storing excessive personal data—full identity documents, addresses, and transaction histories—simply to prove a negative.

In our U.S. Treasury RFC response, we outlined how verifiable credentials and zero-knowledge proofs (ZKPs) can modernize this process. Instead of transmitting complete personal data, a customer could present a cryptographically signed credential from a trusted issuer attesting that they have already been screened against the SDN list. A ZKP allows the verifier (e.g., a bank) to confirm that the check was performed and that the customer is not on the list—without ever seeing or storing the underlying personal details. This approach satisfies regulatory intent, strengthens auditability, and dramatically reduces the risks of overcollection, breaches, and identity theft.

ZKPs are particularly important for compliance-heavy industries like finance, healthcare, and government services. They allow institutions to meet regulatory requirements without creating data honeypots vulnerable to breaches.

They also open the door to new forms of digital interaction. Imagine a voting system where you can prove you’re eligible to vote without revealing your identity, or a cross-border trade platform where businesses prove compliance with customs requirements without exposing their full supply chain data.

ZKPs represent the cutting edge of privacy-preserving technology. They transform the old equation, “to prove something, you must reveal everything,” into one where trust is established without unnecessary exposure.

Challenges and the Path Forward

Decentralized identity isn’t just a lofty principle about autonomy and privacy. At its core, it is a set of technologies that make those values real.

Standards ensure interoperability across issuers, wallets, and verifiers.
Digital signatures anchor credentials in cryptographic trust.
Selective disclosure prevents oversharing, giving people control of what they reveal.
Zero-knowledge proofs allow compliance and verification without sacrificing privacy.

These aren’t abstract concepts. They are already protecting millions of people from fraud, reducing compliance costs, and embedding privacy into everyday transactions.

However, there are still hurdles. Interoperability across borders and industries is not guaranteed. Wallets must become as easy to use as a boarding pass on your phone. Verifiers need incentives to integrate credential checks into their systems. And standards need governance frameworks that help verifiers decide which issuers to trust.

None of these challenges are insurmountable, but they require careful collaboration between policymakers, technologists, and businesses. Without alignment, decentralized identity risks becoming fragmented—ironically recreating the silos it aims to replace.

SpruceID’s Role

SpruceID works at this intersection, building the tooling and standards that make decentralized identity practical. Our SDKs help developers issue and verify credentials. Our projects with states, like California and Utah, have proven that privacy and usability can go hand in hand. And our contributions to W3C, ISO, and the OpenID Foundation help ensure that the ecosystem remains open and interoperable.

Our objective is to make identity something you own—not something you rent from a platform. The technology is here. The challenge now is scaling it responsibly, with privacy and democracy at the center.

The trajectory is clear. Decentralized identity is evolving from a promising technology into the infrastructure of trust for the digital age. Like HTTPS, it will become invisible. Unlike many systems that came before it, it is being designed with people at the center from the very start.

This article is part of SpruceID’s series on the future of digital identity in America. Read more in the series:

SpruceID Digital Identity in America Series

Foundations of Decentralized Identity
Digital Identity Policy Momentum
The Technology of Digital Identity (this article)
Privacy and User Control (coming soon)
Practical Digital Identity in America (coming soon)
Enabling U.S. Identity Issuers (coming soon)
Verifiers at the Point of Use (coming soon)
Holders and the User Experience (coming soon)

1Kosmos BlockID

Key Lessons in Digital Identity Verification

The post Key Lessons in Digital Identity Verification appeared first on 1Kosmos.

Radiant Logic

Migrating to Data-Centric ISPM


This is the second in a three-part series of articles looking at the critical need to take a data-centric approach to identity. The first article conveyed why identity and access management in general is rapidly becoming a “big data” problem – and what that means and what benefits can be gained by taking such an approach. 

In this article, we will apply our data-centric approach and understanding to delivering identity security posture management (ISPM). 

The Rise of ISPM

Key themes: IGA in distress, ISPM as evolution, and continuous compliance.

It is important to define some constraints and capabilities surrounding ISPM and how it relates both to existing IAM infrastructure and to IGA.

ISPM focuses on the preventative, pre-breach control set. The controls applied to IAM operate at the data layer, which often centers on identity assurance, data hygiene, and permissions lifecycle management.

These controls can make a material difference to risk impact and to factors such as blast radius.

The cousin to ISPM in recent years is identity threat detection and response (ITDR). The two are intricately related: both focus on end-to-end identity, identity risk reduction, and holistic protection. ITDR, however, is more focused on detection and the post-breach stage, using response controls that can influence in-flight attacks before they complete. The remediation options available to ITDR and ISPM are often different, too.

ISPM has emerged in the past three years as both an extension of, and potentially an alternative design pattern to, the traditional identity governance and administration (IGA) functions that emerged in the early 2000s. On the back of regulation such as the Sarbanes-Oxley Act and the Gramm-Leach-Bliley Act, U.S. and, more recently, global financial institutions have looked to document and automate the answer to the question “Who has access to what?” However, many IGA programs ended up in distress.

Source: The Cyber Hut Community Poll, April 2024, n=65

The causes are likely to vary depending on project size, sector, and requirements, but some consistencies do emerge. Slow deployments, poor connectivity to integrated systems, and costly, complex requirements to alter existing business processes all contribute to IGA capabilities that cover only a subset of systems.

The movement to a more continuous compliance initiative, one that places less emphasis on time-consuming periodic reviews, sees ISPM aiming to leverage existing and evolving best practices around data hygiene through the application of both static and dynamic controls.

The benefits increase as more systems become part of the strategy – whether they be downstream managed applications or orthogonal systems that can provide information that helps deliver more context and insight. 

 

These economies of scale not only help deliver value by improving identity hygiene, access management and permissions clean-up for more systems, but they also help deliver analytics and insights. These insights are likely to support different stakeholders across the business in their pursuit of productivity improvements, faster application and identity onboarding, risk reduction and security improvements. 

Improving with a Data-Centric View

Key themes: the data fabric, insights for pre-breach, and prevention as a strategy.

As we start to imagine a more data-centric approach to both IGA improvements and a strategic ISPM capability, we immediately begin to see both data sources and data consumers across our IAM landscape. As discussed in the first article in this series, a connected data fabric provides a bi-directional set of touch points that compounds our understanding of how identities are created, updated, used, and managed. Identities (for both humans and non-humans) do not exist in a vacuum; they are tightly coupled to business processes and task completion.

However, numerous natural blind spots often exist within this life cycle in many organizations today. Examples include a lack of context during access requests, insufficient clarity during access reviews, and uncertainty when removing excessive permissions. Individually, these concerns are significant, but they often result in a cascading set of issues, as downstream systems and reactive detective controls are frequently built on false security assumptions. These false assumptions are implicitly built into relying-party systems, detective controls, and monitoring systems.

For example, assuming all identities have originated from an authoritative source often removes the secondary validation that helps identify ghost, orphaned, shared and redundant accounts. Downstream systems do not always provide ways of analyzing and questioning permission associations – they are more focused on whether a permission is present against an account in an access control entry. How that permission got there is assumed to be accurate. There is an assumption that removals and changes upstream are to be “automagically” relayed to a host of downstream systems. This is rarely the case. 
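To make that secondary validation concrete, here is a generic reconciliation sketch (not any vendor’s implementation, and with hypothetical data): comparing a downstream directory against the authoritative HR source makes ghost and orphaned accounts fall out as simple set differences.

```python
# Generic reconciliation sketch with hypothetical data: compare an
# authoritative identity source against accounts found in a downstream system.
hr_active = {"alice", "bob", "carol"}                  # authoritative HR roster
directory_accounts = {"alice", "bob", "dave", "svc1"}  # accounts seen downstream
known_service_accounts = {"svc1"}                      # documented exceptions

# Ghost/orphaned accounts: present downstream with no authoritative owner.
orphaned = directory_accounts - hr_active - known_service_accounts

# Provisioning gaps: authorized people with no downstream account.
missing = hr_active - directory_accounts

print(f"Orphaned accounts to review: {sorted(orphaned)}")  # ['dave']
print(f"Provisioning gaps: {sorted(missing)}")             # ['carol']
```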

These false assumptions and resulting failings in security posture are just some examples of the blind spots that are manifesting on the IAM life cycle management plane. Informed decision making is now possible by leveraging existing data sources across the technology landscape. This data-centric approach can help move organizations to a pre-breach way of reducing identity risk.  

While detective, behavioral, and post-breach (and ideally, real-time breach) capabilities are critical, they must be grounded in strong, reliable identity data. 

Getting Started

Key themes: architecture and concepts, starting small, and process and metrics.

Delivering a data-centric IAM framework is a journey. While that may sound like a technological cliché, it is important to recognize the factors involved in delivering success here. IAM is growing in significance, from an operational and tactical component (often reactionary) to a strategic fulcrum that delivers productivity, security, and revenue-generating opportunities. To that end, people, processes, and technology must all change, and often that change will come at different times.

It is important to deliver each component of this journey in a way that aligns with the business’s current state and needs. Processes, workflows, job-specific tasks, and ways of working have evolved over a considerable period of time and are inherently optimized around effort, skill, and knowledge.

The IAM data foundation must integrate with these factors initially without change. Asking organizations to immediately alter onboarding or application access functions will simply inhibit adoption. Over time, those processes can be altered and optimized once managed within the system. 

The journey to adoption will cover two main aspects: systems coverage and capability maturity. The integration of more systems is crucial, including systems without an IAM-centric background, such as ITSM or CMDB.

Second, the maturity of data-centric capabilities in these systems will also grow over time, from an initial reading and correlation of data sources to concepts like predictive analytics and human-in-the-loop, guided clean-up and remediation. The rise of AI is rapid and ever-changing, and organizations are quickly learning how to consume and leverage this new autonomous way of working through agentic AI and RAG-based querying. However, it still takes time to alter governance, operations, and accountability frameworks.

As with any journey through change, it is also important to quickly and clearly communicate success and metrics to a range of audiences. This will include non-technical stakeholders focused on improving productivity, reducing costs, and enhancing collaboration, as well as those interested in metrics related to risk reduction, connectivity, and overall performance. 

The final article in this series will look at best practices and critical capabilities needed to succeed in this area. 

 

 

The post Migrating to Data-Centric ISPM appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic


California has returned to the Zero-Trust front line. When Assemblymember Jacqui Irwin re-introduced the mandate this year as AB 869, she rewound the clock only far enough to give agencies a fighting chance: every executive-branch department must show a mature Zero-Trust architecture by June 1, 2026.  

The bill sailed through the Assembly without a dissenting vote and now sits in the Senate Governmental Organization Committee, with its first hearing queued for early July. Momentum looks strong: the measure already carries public endorsements from major players in the security space, such as Okta, Palo Alto Networks, Microsoft, TechNet, and Zscaler, along with a unanimous fiscal-committee green light.

The text itself is straightforward. It lifts the same three pillars that the White House spelled out in Executive Order 14028 (multi-factor authentication everywhere, enterprise-class endpoint detection and response, and forensic-grade logging) and stamps a date on each pillar. Agencies that fail will be out of statutory compliance but, as the committee’s analysis warns, the real price tag is the downtime, ransom payments, and loss of public trust that follow a breach.

Why the Hardest Part Isn’t Technology 

California has spent four years laying technical groundwork. The Cal-Secure roadmap already calls for continuous monitoring, identity lifecycle discipline and tight access controls. Yet progress has stalled because most departments still lack a single, authoritative view of who and what is touching their systems. Identity data lives in overlapping Active Directory forests, SaaS directories, HR databases and contractor spreadsheets. When job titles lag three weeks behind reality or an account persists after its owner leaves, even the best MFA prompt or EDR sensor can’t make an accurate determination.

Identity Data Fabric and the RadiantOne Platform 

Radiant Logic solves the obstacle at its root. The platform connects to every identity store—on-prem, cloud, legacy or modern—then correlates, cleans and serves a real-time global profile for every person and device. That fabric becomes the single source of truth that each Zero-Trust control needs and consumes: 

MFA tokens draw fresh role and device attributes, so “adaptive” policies really do adapt.
EDR and SIEM events carry one immutable user + device ID, letting analysts trace lateral movement in minutes instead of days.
Log files share the same identifier, turning post-incident forensics into a straight line instead of a spider web.

The system’s built-in hygiene analytics spotlight dormant accounts, stale entitlements and toxic combinations—precisely the gaps auditors test when they judge “least-privilege” maturity. 
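As a rough illustration of the correlation idea behind such a fabric (a toy sketch with hypothetical data, not RadiantOne’s actual implementation): records from several stores are joined on a shared key and merged into one profile per person, with disagreements surfaced for hygiene review rather than silently overwritten.

```python
from collections import defaultdict

# Hypothetical records from three identity stores, keyed by email address.
sources = {
    "active_directory": [{"mail": "jlee@example.gov", "title": "Analyst", "enabled": True}],
    "hr_system":        [{"mail": "jlee@example.gov", "title": "Sr. Analyst", "dept": "OIS"}],
    "saas_directory":   [{"mail": "jlee@example.gov", "last_login": "2026-01-04"}],
}

profiles = defaultdict(dict)   # one merged profile per person
conflicts = defaultdict(dict)  # attribute disagreements to review

for source, records in sources.items():
    for record in records:
        profile = profiles[record["mail"]]
        for attr, value in record.items():
            if attr in profile and profile[attr] != value:
                # Surface stale or contradictory data (e.g., a lagging job title).
                conflicts[record["mail"]][attr] = {"kept": profile[attr], source: value}
            else:
                profile[attr] = value

print(dict(profiles["jlee@example.gov"]))   # one correlated view of the person
print(dict(conflicts["jlee@example.gov"]))  # title disagreement flagged for cleanup
```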

A Concrete, 12-Month Playbook 

1. Map and connect every authoritative and shadow identity source to RadiantOne. No production system needs to stop; the platform operates as an overlay.
2. Redirect authentication flows (IdPs, VPNs, ZTNA gateways) so their policy engines read from the new identity fabric. Legacy applications gain modern, attribute-driven authorization without code changes.
3. Stream enriched context into existing EDR and SIEM pipelines. Alerts now include the who, what, and where information that incident responders crave.
4. Run hygiene dashboards to purge inactive or over-privileged accounts. The same reports double as proof of progress for the annual OIS maturity survey.

Teams that follow the sequence typically see two wins long before the statutory deadline: faster mean time to detect during adversarial red-teaming exercises, and a dramatic cut in audit questions that start with, “How do you know…?”

Beyond Compliance 

AB 869 may be the nudge, but the destination is bigger than a checkbox. When identity is the de facto new perimeter, and when that identity is always current, complete, and trustworthy, California’s digital services stay open even on the worst cyber day. Radiant Logic provides the identity fabric that makes Zero-Trust controls smarter, cheaper, and easier to prove.

The countdown ends June 1, 2026. The journey can start with a single connection to your first directory. 

REFERENCES 

https://cdt.ca.gov/wp-content/uploads/2021/10/Cybersecurity_Strategy_Plan_FINAL.pdf

https://calmatters.digitaldemocracy.org/bills/ca_202520260ab869

The post California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


AI for Access Administration: From Promise to Practice


Gartner’s 2025 Hype Cycle for Digital Identity and Hype Cycle for Zero-Trust Technology, 2025 both highlight AI for Access Administration as an emerging innovation with high potential, or, in Gartner’s terms, an “Innovation Trigger.” The promise of automating entitlement reviews, streamlining least-privilege enforcement, and replacing months of manual cleanup with intelligent, adaptive identity governance is very compelling.

But as Gartner cautions, “AI is no better than human intelligence at dealing with data that doesn’t exist.” 

When it comes to AI, the limiting factor is not the algorithms: it’s the data. Fragmented directories, inconsistent entitlement models, and dormant accounts create blind spots that undermine any attempt at automation. Without a reliable identity foundation, AI has little to work with, and what it does work with is riddled with flaws.

 

Why This Matters Now 

Identity-driven attacks continue to outpace traditional IAM processes. Verizon’s 2025 DBIR confirms credential misuse as the leading breach vector, with attackers increasingly exploiting valid accounts rather than brute-forcing their way in. IBM X-Force highlights that responding to identity-driven incidents is nearly twice as complex as responding to other attack types. Trend Micro adds that risky cloud app access and stale accounts remain among the most common exposure points. These are just three of many prominent organizations voicing concern. Put simply, static certifications and spreadsheet-based entitlement reviews cannot keep pace with adversaries who are already automating their side of the equation.

 Making Identity Data AI-Ready 

Radiant Logic is recognized in Gartner’s Hype Cycle for enabling AI for Access Administration as a Sample Vendor. Our role is foundational—we provide the trustworthy identity data layer that AI systems require to function effectively. 

The RadiantOne Platform unifies identity information from directories, HR systems, cloud services, and databases into one semantic identity layer. This layer ensures that access intelligence operates on clean, normalized, and correlated data. The result is an explainable and auditable basis for AI-driven recommendations and automation. 

With this in place, AI can shift access administration from episodic to continuous monitoring, detecting entitlement drift, rationalizing excessive access, and adapting policies in near real time. 
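As a rough sketch of what detecting entitlement drift can look like in practice (a generic illustration with hypothetical data, not Radiant Logic’s algorithm): flag entitlements a user holds that are rare among peers in the same role.

```python
from collections import Counter

# Hypothetical entitlement assignments for users who share the "analyst" role.
entitlements = {
    "u1": {"crm_read", "report_view"},
    "u2": {"crm_read", "report_view"},
    "u3": {"crm_read", "report_view", "prod_db_admin"},  # drifted over time
    "u4": {"crm_read"},
}

# How common is each entitlement within the peer group?
counts = Counter(e for ents in entitlements.values() for e in ents)
peers = len(entitlements)
THRESHOLD = 0.5  # entitlements held by fewer than half the peers are outliers

for user, ents in entitlements.items():
    outliers = {e for e in ents if counts[e] / peers < THRESHOLD}
    if outliers:
        print(f"{user}: review rare entitlements {sorted(outliers)}")
# -> u3: review rare entitlements ['prod_db_admin']
```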

Enabling Agentic AI in Access Governance 

Radiant Logic is investing deeply in advancing the field of Agentic AI and has already delivered tangible innovations for customers through AIDA and fastWorkflow.

AIDA (AI Data Assistant) is a core capability of the platform. It is presented as a virtual assistant that simplifies user interactions, improves operational efficiency, and helps users make more informed decisions.

For example, AIDA is used to address one of the most resource-heavy processes in IAM: user access reviews. Instead of overwhelming reviewers with raw data, AIDA highlights isolated access, surfaces over-privileged or dormant accounts, and proposes remediations in plain language. Each suggestion is linked to the underlying identity relationships, ensuring decisions remain auditable and defensible.  

 

 

The result is a faster review cycle with less fatigue for reviewers, while giving compliance teams confidence that AI assistance does not compromise accountability. At its core, AIDA leverages fastWorkflow, a reliable agentic Python framework.

fastWorkflow aims to address common challenges in AI agent development such as intent misunderstanding, incorrect tool calling, parameter extraction hallucinations, and difficulties in scaling. 

The outcome is much faster agent development, providing deterministic results even when leveraging smaller (and cheaper) AI models. 

Radiant Logic has released fastWorkflow to the open-source community under the permissive Apache 2.0 license, enabling developers to accelerate their AI initiatives with a flexible and proven framework. 

If you are interested in learning more about fastWorkflow, an article series is available. You can access the project and code for fastWorkflow on GitHub.

These capabilities are the first public expressions of our broader Agentic AI strategy, moving AI beyond theoretical promise into operational reality. These innovations are part of a larger roadmap exploring how intelligent agents can fundamentally transform the way enterprises secure and govern identity data. 

Our recognition in Gartner’s Hype Cycle for Digital Identity reflects why this matters: most AI initiatives in IAM fail not because of algorithms, but because of poor data quality and unreliable execution. By unifying identity data, enabling explainable guidance through AIDA, and ensuring safe, reliable execution with fastWorkflow, we are making Agentic AI practical for access governance today—while laying the foundation for what comes next.

The Business Impact 

For CISOs, this means reducing exposure by closing gaps before they are exploited. For CIOs, it delivers modernization without breaking legacy systems. For compliance leaders, it simplifies audits with data-backed, explainable decisions. 

AI for Access Administration will not replace governance programs, but it will change their tempo. What was once a quarterly campaign becomes a continuous process. What was once a compliance checkbox becomes a dynamic part of security posture. This is closely in line with regulatory initiatives where a continuous risk-based security posture is critical.  

Radiant Logic provides the missing foundation: unified, governed, and observable identity data.  

See how you can shift from a reactive identity security posture to a proactive, data-centric, AI-driven approach: contact us today. 

The post AI for Access Administration: From Promise to Practice appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Gartner Recognizes Radiant Logic as Leader in Identity Visibility and Intelligence Platforms


Today’s CISOs agree that identity-driven threats pose a growing challenge, the product of complex environments, growing technical debt, cloud adoption, and sprawling identity ecosystems. Third-party breach reports, such as research from Verizon and IBM, confirm this, pointing to identity as a primary attack vector. Gartner also recognizes it, explicitly warning:

“Organizations lacking comprehensive visibility into identity data face significant security vulnerabilities and operational inefficiencies.” — Gartner 

In this second blog of our three-part series on Gartner’s 2025 Digital Identity Hype Cycle, we explore the critical category of Identity Visibility and Intelligence Platforms, where Radiant Logic is recognized for its leadership as a Sample Vendor. This recognition affirms our strategic commitment to helping organizations secure and operationalize identity through real-time observability. 

 

The Missing Piece in IAM Maturity 

Despite years of investment, many IAM programs remain stuck at the operational layer, focused on provisioning, password management, and compliance reporting. What they are missing is observability. 

“Identity Visibility and Intelligence platforms are essential in navigating complex identity environments, enabling proactive identity risk management and consistent security policy enforcement.” — Gartner 

Why Identity Visibility and Observability 

Radiant Logic addresses identity sprawl at its root by delivering a unified identity data fabric that allows for authoritative, real-time visibility across your entire identity ecosystem. This eliminates blind spots and resolves inconsistencies across fragmented systems. Unlike legacy tools, RadiantOne offers a single, trusted source of truth for both human and non-human identities and their access relationships. 

But visibility alone is not enough. Radiant Logic further provides near real-time event detection through active observability of changes, controls, and processes at the identity data layer. Proactive detection and intervention are foundational to shrinking the attack surface and stopping compromise attempts before they start. 
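As a minimal sketch of what event detection at the identity data layer can mean (generic, with hypothetical data, not RadiantOne’s implementation): diff two snapshots of account state and emit events for the changes worth investigating.

```python
# Generic change-detection sketch: diff two snapshots of identity state.
before = {
    "alice": {"groups": {"staff"}, "enabled": True},
    "bob":   {"groups": {"staff"}, "enabled": True},
}
after = {
    "alice": {"groups": {"staff", "domain_admins"}, "enabled": True},  # escalation
    "bob":   {"groups": {"staff"}, "enabled": False},                  # disabled
    "eve":   {"groups": {"staff"}, "enabled": True},                   # new account
}

events = []
for user, state in after.items():
    prev = before.get(user)
    if prev is None:
        events.append(("account_created", user))
        continue
    gained = state["groups"] - prev["groups"]
    if gained:
        events.append(("groups_added", user, sorted(gained)))
    if prev["enabled"] and not state["enabled"]:
        events.append(("account_disabled", user))

for event in events:
    print(event)  # in a real pipeline these would feed SIEM/alerting tools
```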

Security operations teams gain instant visibility, accelerated threat detection, and proactive risk management. 

Cleaning Up the Identity Foundation 

Identity observability is the connective tissue between your existing controls and the proactive, intelligent security posture demanded by today’s threat landscape. It is worth pointing out that Identity Observability is not just another feature; it is what allows organizations to mature their Identity and Access Management architecture. 

Modern IAM controls are only as resilient as the data that feeds them. As Gartner underscores, effective IAM starts with visibility into every account, access relationship, and policy. RadiantOne strengthens identity hygiene at the data layer by detecting orphaned or misaligned accounts, redundant entitlements, incorrectly provisioned users, and unmanaged users and groups. This ensures that SSO, IGA, PAM, Zero Trust, and SIEM tools ingest complete, accurate, and actionable data. 

With the rise of Agentic AI, the stakes are higher than ever. LLMs increasingly consume and act on enterprise identity data, making its integrity and continuous monitoring both a compliance obligation (from frameworks such as DORA) and a security imperative against data poisoning, drift, and misconfigurations. By unifying and securing identity data at the source, RadiantOne reduces technical debt, enforces consistent policy, and strengthens risk-based decisions, all actions that effectively shrink the attack surface while enabling AI-powered security operations. 

The Business Impact of Identity Visibility 

For most enterprises, the identity layer is now the largest and most dynamic attack surface. Every new SaaS subscription, every contractor onboarded, and every micro-service deployed creates new accounts, credentials, and entitlements. Increasingly, this also includes AI agents. Without observability, these changes accumulate, silently introducing risk, eroding compliance, and slowing down transformation programs. 

Identity Visibility and Intelligence platforms like RadiantOne directly impact three critical dimensions: 

Reduced Risk – Shrink the window of exposure by surfacing dormant accounts, excessive entitlements, and anomalous activity before adversaries exploit them
Streamlined Compliance – Optimize certifications, audits, and regulatory reporting (e.g., DORA, NIS2, SOX) by automating lineage and reconciliation at the identity data layer
Increased Agility – Enable faster M&A integration, smoother cloud adoption, and more resilient Zero Trust enforcement by providing a single, unified source of truth for identity

When identity data is unified, observable, and continuously governed, organizations can accelerate digital initiatives without sacrificing security. That is the true value of being recognized in Gartner’s Hype Cycle: it validates that Identity Visibility is not only a technical enabler but also a business imperative. 

The Path Forward 

As Gartner’s 2025 Digital Identity Hype Cycle confirms, Identity Visibility and Intelligence is no longer optional — it is foundational. Observability is not a standalone feature or a bolt-on product: it is the critical layer that sits atop your identity fabric, transforming fragmented data into actionable intelligence. 

By adding observability to the identity fabric, organizations mature their IAM stack from reactive operations to proactive defense, equipping SSO, IGA, PAM, ZTA, and SIEM tools with the clean, real-time insights they need to act decisively. 

Learn More 

Explore how Radiant Logic’s RadiantOne platform can strengthen your organization’s security and mature your IAM program. Contact us today.  

Disclaimers: 

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

GARTNER is a registered trademark and service mark of Gartner and Hype Cycle is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. 

The post Gartner Recognizes Radiant Logic as Leader in Identity Visibility and Intelligence Platforms appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Gartner® Recognizes Radiant Logic in the 2025 Hype Cycle™ for Zero Trust


In many places in the world, Zero Trust has shifted from being a security philosophy to a mandate by regulators, including the U.S., as discussed in California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic. Gartner’s 2025 Hype Cycle for Zero Trust Technology highlights identity as the foundation for Zero Trust success and names Radiant Logic as a Sample Vendor enabling that foundation in the AI for Access Administration category. 

Across both public and private sectors, the push for implementing Zero Trust is accelerating. California’s Assembly Bill 869, for example, requires every executive-branch agency to demonstrate a mature Zero Trust architecture by June 2026. This is one example of how regulations are putting firm dates on adoption. Gartner’s recognition underscores why Radiant Logic matters in this context.

Zero Trust depends not only on reliable identity data but also on making that data accessible. The challenge for most organizations is not a lack of Zero Trust tools but the difficulty of getting the right identity data to them. Attributes, context, and relationships all need to be provided to those tools in a format and manner they can actually use.

Without that foundation, Zero Trust efforts typically stall.  

 

Why Identity is Central to Zero Trust 

The National Institute of Standards and Technology (NIST) defines Zero Trust around a simple idea: never trust, always verify. Every request must be authenticated and authorized in its context. Yet in most enterprises, identity data is fragmented across directories, cloud services, HR systems, and contractor databases. This is the reality of what we call identity sprawl. When accounts linger after employees leave or when attributes are out of date, even the best MFA solutions or EDR policies falter.

Gartner cautions that organizations lacking visibility into their identity data face both elevated security risks and operational inefficiencies. Zero Trust controls cannot deliver on their promise if they operate on incomplete or inconsistent input. That means that the result is only as good as the underlying identity data.  

Radiant Logic’s Role 

RadiantOne unifies identity data from every source into a single, correlated view of every identity, whether human or non-human. That fabric becomes the authoritative context that Zero Trust controls need to be successful. This foundation lets MFA policies adapt dynamically to current identity and device signals while unifying log files under a single identifier and enabling Zero Trust access, network segmentation, and more. Why does this matter? Many regulatory initiatives are tightening breach-reporting requirements; correlating identities into a single view streamlines forensic work and ultimately allows swift signaling or reporting to the competent authority.

Identity data hygiene matters because it allows organizations to detect dormant accounts, stale entitlements, and toxic combinations before auditors or adversaries find them.

Maintaining this hygiene is critical to mitigating risk and ensuring that Zero Trust policies are enforced on accurate, trustworthy data. By ensuring Zero Trust policies run on clean, governed identity data, Radiant Logic enables organizations to enforce least privilege, reduce the attack surface, and meet compliance obligations in a timely fashion. 

The Business Impact 

For CISOs, this reduces risk by closing identity gaps before attackers exploit them. 

For CIOs, it modernizes access controls without disrupting legacy systems. 

For compliance leaders, it provides defensible evidence for regulatory audits and mandates and, in case of a breach, a swift response to regulators signaling and reporting requirements. 

Zero Trust is no longer an academic, philosophical idea; it is operational reality for modern security. Gartner’s recognition of Radiant Logic validates our role in making it achievable, practical, and provable.

Learn More

The full report can be downloaded here. Discover how Radiant Logic strengthens Zero Trust initiatives with unified, real-time identity data and intelligence. To discuss with an identity and Zero Trust expert, contact us today.  

 

The post Gartner® Recognizes Radiant Logic in the 2025 Hype Cycle™ for Zero Trust appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Identity: The Lifeline of Modern Healthcare


In today’s healthcare ecosystem, seconds can mean the difference between life and death. Clinicians need instant access to systems, patient records, and tools that guide treatment decisions. But too often, identity and access management (IAM) becomes a silent bottleneck—slowing workflows, increasing frustration, and opening new avenues for attackers. 

Identity is not just an IT function. It is the connective tissue between operational efficiency and strong security. When access works seamlessly, clinicians focus on patients. When it falters, care delivery stalls. The stakes are that high.

 

Healthcare organizations carry a legacy burden that includes identity infrastructures stitched together from mergers, acquisitions, and outdated systems. The results are familiar and painful: 

Slow onboarding: Clinicians wait days or weeks to access EHRs, e-prescribing platforms, or HR systems
Siloed systems: Contractors, vendors, and students are often tracked manually or inconsistently, creating blind spots
Fragmented logins: Multiple usernames and passwords drain productivity, encourage weak credential practices, and create security risks

Each inefficiency cascades into operational and security problems. In a shared workstation environment where multiple staff members rotate across terminals, the friction of multiple logins is more than inconvenient—it is unsafe. 

Modern clinicians often wear many hats: surgeon, professor, and clinic practitioner. Each role demands different entitlements, application views, and permissions. Legacy IAM systems struggle to keep pace, forcing clinicians into frustrating workarounds that compromise both care and compliance. 

A modern identity data foundation solves this “persona problem” by enabling: 

Multi-persona profiles: A unified identity that captures every role under one credential
Contextual access: Role-specific entitlements delivered at the point of authorization
Streamlined governance: Fewer duplicates, cleaner oversight, and enforced least privilege

The result? Clinicians move seamlessly across their responsibilities without juggling multiple logins, and security teams gain a clearer, more manageable access model. 

 

 

Identity is also the frontline of healthcare cybersecurity. Disconnected directories, inconsistent access records, and orphaned accounts create fertile ground for attackers. The 2024 Change Healthcare ransomware incident, traced back to compromised remote access credentials, highlighted the catastrophic impact that a single identity failure can unleash. 

Poor IAM hygiene doesn’t just slow down care—it invites compliance nightmares. Regulations like HIPAA require clear evidence of least-privilege access and timely de-provisioning, but piecing that evidence together from fractured systems is a losing battle. 

Temporary fixes and one-off integrations won’t cure healthcare’s identity problem. What is needed is a modern identity data foundation that: 

Unifies identity data from HR systems, AD domains, credentialing databases, cloud apps, and more
Rationalizes and correlates records into a single, authoritative profile for each user
Delivers tailored views to each consuming system (EHR, tele-health, billing, scheduling) through standard protocols like LDAP, SCIM, and REST
Strengthens ISPM by ensuring security policies, risk analytics, and compliance reporting all act on the same high-quality identity data

RadiantOne provides that foundation. Acting as a universal adapter and central nervous system, it abstracts away complexity, enables day-one M&A integration, supports multi-tenant models for affiliated clinics, and reduces costly manual cleanup. 

Healthcare’s identity challenge is not theoretical. It is visible every day in delayed access, clinician frustration, regulatory fines, and high-profile breaches. But it doesn’t have to be this way. 

With a unified identity data foundation, healthcare organizations can: 

Accelerate clinician onboarding
Reduce operational bottlenecks
Strengthen identity security posture
Simplify compliance
Empower caregivers with seamless, secure access

The question is no longer whether identity impacts care delivery and security: it is whether your identity infrastructure is helping or holding you back. 

Download the white paper, The Unified Identity Prescription: Securing Modern Healthcare & Empowering Caregivers, to explore how a unified identity data foundation can power better care and stronger security.

The post Identity: The Lifeline of Modern Healthcare appeared first on Radiant Logic | Unify, Observe, and Act on ALL Identity Data.


Thales Group

Thales strengthens airspace surveillance with Aeronáutica Civil de Colombia with advanced ATC system

23 Oct 2025

Thales supplies a co-mounted Air Traffic Control radar station, featuring its next-generation primary and secondary approach radars, STAR NG and RSM NG, located at the Picacho station, to improve air surveillance of the Bucaramanga terminal manoeuvring area (TMA) in Colombia. This contract is part of a countrywide modernisation project initiated by Aeronáutica Civil de Colombia to improve airspace management and reinforce collaboration with the Colombian Air Force. The unique design and optimised performance of the two radars will enable increased civil and military collaboration in the country.

STAR NG primary surveillance radar & RSM NG Secondary radar © Thales

Thales is modernizing the current Picacho radar station, operated by Colombia’s civil aviation authority, Aeronáutica Civil de Colombia, in partnership with local company GyC to provide new capabilities for airspace surveillance. With its combined set of STAR NG and RSM NG radars, there will now be six Thales radars in operation in the country, allowing air traffic controllers to continuously track the position of aircraft, regardless of the conditions.

The 16-month project, which is already well underway, sees Thales manufacture, deliver, and install the co-mounted STAR NG (approach Primary Surveillance Radar) and RSM NG (Mode S Secondary Surveillance Radar), while its local partner, GyC, renews the existing infrastructure. This technologically advanced radar station, supported by four additional stand-alone ADS-B ground stations, will strengthen continuous air surveillance in the approach area, in particular the north-eastern part of Bucaramanga, Colombia.

To date, Thales has successfully completed the Factory Acceptance Tests (FAT) in coordination with Aeronáutica Civil. The required structural reinforcement studies for the tower have been approved, and progress is on schedule to ensure the radar becomes operational within the agreed timeline. The new system will bring many benefits, including:

Enhanced airspace surveillance and sovereignty: The STAR NG will deliver real-time information on both cooperative and non-cooperative aircraft to strengthen Colombia’s ability to monitor and protect its national airspace.
Greater reliability and resilience in aircraft identification: The RSM NG meta-sensor provides more accurate identification and tracking, ensuring continuity of information even in cases of jamming or spoofing attempts.
Robust cybersecurity protection: Both radars are equipped with the latest cybersecurity updates, safeguarding critical surveillance data against evolving digital threats.

With over 50 years’ experience in Air Traffic Control and Surveillance, and more than 1,200 radars operating around the globe, Thales is the trusted leader in this domain worldwide. In Colombia, Thales radars already equip sites in Flandes, Cerro Verde, Santa Ana, Villavicencio and Carimagua, and soon in Picacho. Thales also supplied the APP control centre in San Andrés, and the ATC and Tower simulator in Bogota to support Air Traffic Controller training. The Group has also provided navigation aids in various key sites all over the country.

“Thales is proud to strengthen its 25-year partnership with the Civil Aviation Authority of Colombia. This new contract will enhance the country’s airspace surveillance capabilities by combining the strengths of its primary and secondary radars. It highlights the versatility of Thales’ ATC radars in meeting the needs of both civilian and military operators, and demonstrates our long-term commitment to ensuring excellence in surveillance and air safety systems." Lionel de CASTELLANE, Vice President Air Traffic Radars, Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Thales in Latin America

With six decades of presence in Latin America, Thales is a global tech leader for the Defence, Aerospace and Cyber & Digital sectors. The Group is investing in digital and “deep tech” innovations (Big Data, artificial intelligence, connectivity, cybersecurity and quantum technology) to build a future we can all trust.

The company has 2,500 employees in the region, across 7 countries - Argentina, Bolivia, Brazil, Chile, Colombia, Mexico and Panama - with ten offices, five manufacturing plants, and engineering and service centres in all the sectors in which it operates.

Through strategic partnerships and innovative projects, Thales in Latin America drives sustainable growth and strengthens its ties with governments, public and private institutions, as well as airports, airlines, banks, telecommunications and technology companies.


Indicio

Indicio joins NVIDIA Inception Program to bring Verifiable Credentials to AI systems

Indicio was the first to recognize the importance of secure authentication to AI agents by launching Indicio ProvenAI, recently recognized by Gartner. With NVIDIA Inception, we’re going to take this to a new level — a secure authentication and decentralized governance solution for autonomous systems and the internet of AI.

By Trevor Butterworth

Indicio has officially joined the NVIDIA Inception Program, a global initiative that supports startups advancing artificial intelligence and high-performance computing. Indicio will focus on applying decentralized identity and Verifiable Credential technology — in the form of Indicio ProvenAI — to AI systems.

ProvenAI enables AI agents and their users to authenticate each other using decentralized identifiers and Verifiable Credentials. This means an AI agent and the entity it interacts with can each cryptographically prove their identity to the other, all before any data is shared.

Once identified, a person or organization can give permission to the AI agent to access their data and can delegate authority to the agent to act on behalf of the person or organization.

To monetize AI, agents and users need to be able to trust each other

Agentic AI and AI agents cannot fulfill their mission without accessing data. The more data they can access, the easier it is to execute a task. But this exposes the companies that use them to significant risk.

How can they be sure their agent is interacting with a real person, an authentic user or customer? And how can that user, similarly, verify the authenticity of the agent?

The simplest way is to issue each with a Verifiable Credential, a cryptographic way to authenticate not only an identity but the data that is being shared. Importantly, this cryptography is AI-resistant, meaning it can’t be reengineered by people using AI to try and alter the underlying information.

The critical benefit of using Verifiable Credentials for this task is that neither party needs to phone home to crosscheck a database during authentication or authorization. Because a Verifiable Credential is digitally signed, the original credential issuer can be verified without having to contact the issuer. The information in the credential can also be cryptographically checked for alteration. As a result, if you know the identity of the credential issuer and you trust that issuer, you can trust the contents of the credential and act on them instantly.

With Verifiable Credentials, AI’s GDPR nightmare goes away

For AI agents to be useful, they must be able to access personal data — lots of it. For this to be compliant with data privacy regulations such as GDPR, a person must be able to consent to share their data. There’s just no way of getting around this.

Verifiable Credentials make consent easy because the person or organization holds their data in a credential, or can provide a credential to a party containing permission to access data. Once a user consents to share their data, you've met a critical requirement of GDPR, and that decision can be recorded for audit.

But Verifiable Credentials — or at least some credential formats — also allow for selective disclosure or zero-knowledge proofs, which means the data shared, and the purpose for which it is used, can be minimized, thereby fulfilling other GDPR requirements.
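
As a rough illustration of the hash-based idea behind selective-disclosure formats such as SD-JWT (a sketch of the principle, not Indicio's implementation):

```python
# Sketch: the issuer signs salted hashes of claims; the holder later reveals
# only the claims (plus salts) the user consents to share.
import hashlib
import secrets

def claim_digest(salt: str, name: str, value: str) -> str:
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

# Issuance: one random salt per claim; the signed credential carries digests only.
claims = {"name": "Alice", "birthdate": "1990-01-01", "nationality": "DE"}
salts = {k: secrets.token_hex(16) for k in claims}
signed_digests = {k: claim_digest(salts[k], k, v) for k, v in claims.items()}

# Presentation: the holder discloses only "nationality"; the rest stay hidden.
disclosed = {"nationality": (salts["nationality"], claims["nationality"])}

# Verification: recompute each digest and compare with the signed value.
for name, (salt, value) in disclosed.items():
    assert claim_digest(salt, name, value) == signed_digests[name]
print("Disclosed claim verified; undisclosed claims were never revealed")
```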

As AI agents will also need to access large amounts of data belonging to people and organizations that is held elsewhere, a Verifiable Credential can be used by a person or organization to delegate authority to access that data, with rock-solid assurance that this permission has been given by the legitimate data subject.
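
A delegation grant might look something like the following sketch. The field names are invented for illustration and do not reflect any specific credential schema:

```python
# Hypothetical contents of a delegation credential: the data subject grants an
# agent scoped, time-limited access to data held elsewhere. The signed grant
# can be verified offline by whoever hosts the data.
delegation = {
    "type": "DelegationCredential",
    "issuer": "did:example:data-subject",    # the person granting access
    "subject": "did:example:ai-agent",       # the agent being authorized
    "scope": {"resource": "travel-bookings", "actions": ["read"]},
    "expires": "2026-01-01T00:00:00Z",
}
```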

Decentralized governance, the engine for autonomous systems

These features create a seamless way for AI agents to operate. But things get even more exciting when we look at the way Verifiable Credentials are governed.

With Indicio Proven and ProvenAI, a network is governed by machine-readable files sent to the software of each participant in the network (i.e., the credential issuers, holders, and verifiers). These files tell each participant's software who is a trusted issuer, who is a trusted verifier, and which information needs to be presented for which use case, in what order.

Indicio DEGov enables the natural authority for a network or use case to orchestrate interaction by publishing a machine-readable governance file. And this orchestration can be configured to respect hierarchical levels of authority. The result is seamless interaction driven by automated trust.
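
For illustration, a governance file might carry entries like the following. The structure and field names here are hypothetical, not Indicio DEGov's actual schema:

```python
# Hypothetical sketch of a machine-readable governance file: who to trust,
# and what must be presented, in what order, for a given use case.
governance = {
    "network": "example-trade-network",
    "publisher": "did:example:natural-authority",
    "trusted_issuers": ["did:example:customs-agency"],
    "trusted_verifiers": ["did:example:port-operator"],
    "workflows": {
        "cargo-clearance": {
            # Presentations required, in order, before any data is shared.
            "required_presentations": ["shipment-certification", "carrier-license"],
        },
    },
}
```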

Now think about autonomous systems where each connected element has a Verifiable Identity that can be orchestrated to interact with an AI agent. You have a powerful way to apply authentication and information sharing to complex systems in a highly secure way. Every element of the system can be known, can authenticate another element, and can share data in complex workflows. Each interaction can be made secure, element to element.

Indicio is making a safe, secure, trusted AI future possible

Secure and trustworthy authentication is foundational to unlocking the market benefits of AI and enabling AI networks to interoperate and scale. This is why we were the first decentralized identity company to connect Verifiable Credentials to AI and the first to offer a Verifiable Credential AI solution — Indicio ProvenAI — recognized by Gartner in its latest report on decentralized identity.

We're tremendously excited to be a part of the NVIDIA Inception Program. We see decentralized identity as a catalytic technology for AI, one that can quickly unlock market opportunities and support AI agents and agentic AI.

Learn how Indicio ProvenAI can help your organization build secure, verifiable AI systems. Contact Indicio to schedule a demo or explore integration options for your enterprise.

The post Indicio joins NVIDIA Inception Program to bring Verifiable Credentials to AI systems appeared first on Indicio.


Ontology

Decentralized Messaging Just Forked Four Ways

Messaging isn’t broken because of encryption. It’s broken because no one can prove who’s on the other side.

Spam, scams, and impersonation thrive because every "encrypted" app still depends on centralized identity: phone numbers, emails, usernames. The same systems that made spam a trillion-dollar industry now anchor your private chats.

A new wave of decentralized messengers wants to fix that. But it’s already fragmenting.

Right now, four different philosophies are battling to define “trust” in the next generation of messaging.

DID-based agents: trust built in

A serious effort is happening around DIDComm, a messaging protocol from the Decentralized Identity Foundation. It’s not a chat app. It’s a framework where messages carry cryptographic proof of identity, not a phone number or email.

DIDComm v2.1 formalizes this idea: secure, encrypted, and transport-agnostic messages exchanged between agents running in wallets, servers, or IoT devices, each anchored to a Decentralized Identifier (DID).
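
A DIDComm v2 plaintext message, before it is encrypted and packed for transport, looks roughly like this. The field names follow the DIDComm Messaging spec; the DIDs and values are placeholders:

```python
# Sketch of a DIDComm v2 plaintext message. On the wire it would be encrypted
# for the recipient's DID, so relays see ciphertext, not this structure.
message = {
    "id": "1234567890",                                  # unique message id
    "type": "https://didcomm.org/basicmessage/2.0/message",
    "from": "did:example:supplier",                      # sender's DID
    "to": ["did:example:buyer"],                         # recipient DIDs
    "created_time": 1730000000,                          # unix timestamp
    "body": {"content": "Shipment 42 is certified."},
}
```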

It’s programmable trust. Machines and humans can exchange verified credentials in real time. A supplier can prove a shipment is certified. A user can verify a customer support agent without revealing their ID card.

The downside: UX. DIDComm is infrastructure. Most people won’t see it; they’ll feel it… if developers get it right.

Wallet messaging: your address is your inbox

The second camp lives inside wallets. XMTP is doing great work here. Your crypto address doubles as a messaging endpoint. Coinbase Wallet now supports XMTP chats, pushing the idea of “wallet = identity = communication channel.”

Wallet-based messaging fits perfectly for Web3 commerce. You can receive DAO votes, marketplace updates, or direct messages tied to verified wallets. You know the sender because their address and on-chain record prove it.
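
The trust model is easy to demonstrate: anyone can attribute a signed message to a wallet address, with no central identity service in the loop. Here is a minimal sketch using the eth-account library, illustrating the principle rather than XMTP's actual wire protocol:

```python
# Sketch: sign a message with a wallet key, then recover the sender's address
# from the signature alone. The address itself is the identity.
from eth_account import Account
from eth_account.messages import encode_defunct

sender = Account.create()                       # sender's wallet keypair
message = encode_defunct(text="DAO vote #17: yes")
signed = Account.sign_message(message, private_key=sender.key)

# Receiver side: no lookup needed; the signature yields the sender's address.
recovered = Account.recover_message(message, signature=signed.signature)
assert recovered == sender.address
print(f"Message verifiably sent by {recovered}")
```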

It’s not free from issues. Spam and Sybil attacks follow open systems. But the framework makes portable identity possible. You can leave one app and take your chat history and reputation with you.

P2P relays: censorship-proof, messy, alive

Waku, Nostr, and SimpleX occupy the third camp: decentralized relays and gossip networks. These protocols trade convenience for censorship resistance.

Waku v2 is a cleaned-up successor to Whisper. It stores, forwards, and routes messages across peers, with no central servers and no central discovery. Nostr's new NIP-17 spec adds encrypted DMs to its social relay system. SimpleX leans even harder on privacy, routing messages through one-time relays and Tor channels.
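
For a flavor of how minimal these systems are, here is the shape of a basic Nostr event as defined in NIP-01; identity is nothing more than the author's public key, and NIP-17 DMs seal and wrap a message like this inside encrypted outer events before publication. Values are placeholders:

```python
# Sketch of a Nostr event, the unit that relays store and forward (NIP-01).
event = {
    "id": "<sha256 of the serialized event>",
    "pubkey": "<author's public key>",   # the only identity in the system
    "created_at": 1730000000,
    "kind": 1,                           # kind 1 = public text note
    "tags": [],
    "content": "hello, relays",
    "sig": "<schnorr signature over the event id>",
}
```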

It’s the purest form of decentralization, and as such the hardest to scale. No account recovery, no global discovery, and little spam control. These networks will matter most where freedom matters most.

MLS and the RCS landgrab

While the crypto world argues about decentralization, the telecom giants are quietly shipping end-to-end encryption to billions.

The Messaging Layer Security (MLS) protocol, an IETF standard, will soon power RCS, the default chat layer across Android and iOS. Apple finally joined Google and the GSMA in supporting it.

This isn’t decentralized, but it’s historic. MLS brings group E2EE to carrier messaging for the first time. Billions of phones will suddenly be encrypted by default.

What it doesn’t solve: identity and portability. Your phone number remains your passport. You can’t take your reputation with you when you switch apps.

The X factor

Then there’s X.

After killing off its previous encryption scheme, Elon Musk’s platform is rebuilding DMs as “XChat.” The marketing talks about “Bitcoin-style encryption.” The code doesn’t.

It’s not decentralized. It’s still a closed, centralized system with an uncertain cryptographic base. The value lies in the user base, not the trust model.

X proves a point: big platforms can bolt on crypto, but they can’t decentralize identity. The DNA doesn’t match.

The next layer: identity, reputation, portability

Every messaging system now faces the same three questions:

Who are you? Centralized apps rely on phone numbers. Decentralized systems use DIDs and verifiable credentials.

Can I trust you? That's the spam problem. Without portable reputation, decentralized networks drown in noise. Systems like Orange Protocol can assign on-chain reputation scores that follow you across apps. Good behavior and bad.

Can I leave? Real decentralization means you can take your messages, graph, and identity elsewhere. Wallet-based and DID-based protocols are getting close.

How Ontology fits

Ontology has spent years building the rails for these questions.

ONT ID gives users a verifiable, self-controlled digital identity.

Orange Protocol builds portable reputation and trust scoring across platforms.

ONTO Wallet connects both: a messenger, identity agent, and asset hub in one.

Ontology Network provides the reliable and low-cost infrastructure, designed for identity-centric use.

Reality check

Decentralized messaging isn’t utopia.

Matrix, the biggest federated network, recently broke compatibility to fix protocol-level flaws. Privacy-first tools like SimpleX attract both activists and abuse. Wallet UX remains fragile.

Still, the direction is clear. The next messaging war won’t be fought over stickers or features. It’ll be fought over trust without dependency.

The takeaway

Encryption is table stakes.

Identity and reputation are the new moat.

The protocol that nails both, without locking you in, wins.

And when that happens, messaging stops being an app.

It becomes an ecosystem of verifiable relationships.

That’s the real revolution.

Decentralized Messaging Just Forked Four Ways was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


ComplyCube

Canada Bank Fined $601,139.80 for Five Major AML Breaches

FNBC has been fined $601,139.80 for five AML violations. FINTRAC discovered that the firm did not meet AML standards, including submitting suspicious activity reports, updating client information, and performing due diligence.

The post Canada Bank Fined $601,139.80 for Five Major AML Breaches first appeared on ComplyCube.


Ocean Protocol

The ASI Alliance from Ocean’s Perspective

By: Bruce Pon

People are rightly angry and frustrated. No one is a winner in this current state of unease, lack of information and transparency, and mudslinging. Ocean doesn’t see the benefit of throwing around unfounded and false allegations or the attempts to sully the reputations of the projects and people — it just damages both the ASI and Ocean communities unnecessarily.

Ocean has chosen to remain silent until now, out of respect for the ongoing legal processes. But given so many flagrant violations of decency, Ocean would like to take an opportunity to rebut many publicly voiced false allegations, libels, and baseless claims being irresponsibly directed towards the Ocean Protocol project. The false and misleading statements serve only to further inflame our community, while inciting anger and causing even more harm to the ASI and Ocean communities than is necessary.

There are former $OCEAN token holders who converted to $FET and who now face the dilemma of whether to stay with $FET, return to $OCEAN, or liquidate and be completely done with the drama.

Rather than throw unsubstantiated jabs, I would like to provide a full context with supporting evidence and links, to address many of the questions around the ASI Alliance, Ocean’s participation, and the many incorrect allegations thrown out to muddy the waters and sow confusion among our community.

This blogpost will be followed up with a claim-by-claim rebuttal of all the allegations that have been directed towards Ocean since October 9, 2025, but for now, it gives the context and Ocean's perspective.

I encourage you to read it all, as it reflects months of conversations that reveal the context and progression of events, so that you can best understand why Ocean took steps to chart a separate course from the ASI Alliance. We hope the ASI Alliance can continue its work and we wish them well. Meanwhile, Ocean will go its own way, as we have every right to do.

These are the core principles of decentralization — non-coercion, non-compulsion, individual agency, sovereign property ownership and the power of you, the individual, to own and control your life.

Table of Contents

∘ 1. The Builders
∘ 2. June 2014 — Audacious Goals
∘ 3. January 2024 — AI Revolution in Full Swing
∘ 4. March 2024 — ASI Alliance
∘ 5. April 2024 — A Very Short Honeymoon
∘ 6. May 2024 — Legal Dispute Delays the ASI Launch
∘ 7. June 2024 — Re-Cap Contractual Obligations of the ASI Alliance
∘ 8. August 2024 — Cudos Admittance into ASI Alliance
∘ 9. December 2024 — SingularityNET’s Spending, Declining $FET Token Price and the Ocean community treasury
∘ 10. January 2025 — oceanDAO Shifts from a Passive to an Active Token Holder
∘ 11. May 2025 — oceanDAO Establishes in Cayman
∘ 12. June 2025 — Fetch’s TRNR “ISI” Deal
∘ 13. June 2025 — oceanDAO becomes Ocean Expeditions
∘ 14. June 2025 — ASI Alliance Financials
∘ 15. July 2025 — Ocean Expeditions Sets Out to Diversify the Ocean community Treasury
∘ 16. August 2025 — Ocean Requests for a Refill of the $OCEAN/$FET Token Migration Contract
∘ 17. August 2025 — A Conspiracy To Force Ocean to Submit
∘ 18. October 2025 — Ocean Exits the ASI Alliance
∘ 19. Summary

The Builders

Trent and I are dreamers with a pragmatic builder ethos. We have done multiple startups together and what unifies us is an unquenchable belief in human potential and technological progress.

To live our beliefs, we've started multiple companies between us. One of the most rewarding things I've done in my life is to join forces with, and have the honor of working with, Trent.

Builders create an inordinate amount of value for society. Look at any free and open society where capital is allowed to be deployed to launch new ideas: such societies thrive by leveraging the imagination, brainpower and hard work needed to bring about technological progress. These builders attract an ecosystem of supporters and services, but also, as is natural, those who seek to earn easy money.

Builders also start projects with good faith, forthrightness and a respect for the truth, since everyone who has played this game knows that the easiest person to lie to is yourself. So, it's best to constantly check assumptions and stand grounded on truth, even if wildly uncomfortable. Truth is always the best policy, sometimes because it is the hardest path. It also means that one doesn't need to live inside a web of lies, in a toxic environment, constantly wondering when a lie will catch up with you.

Builders focus on Win-Win outcomes, seeking to maximize value for everyone in the game, and make the best of bad situations by building their way through them. No one wants to waste what limited time one has on Earth, least of all to leave the world worse off for being in it. We all want to have a positive impact, however small, so that our existence has meaning in the void of the cosmic whole.

June 2014 — Audacious Goals

Twelve years ago, Trent and I decided to try something audacious — to build a global, decentralized network for data and AI that serves as a viable alternative to the centralized, corrupted and captured platforms. We had been inspired by Snowden, Assange and Manning, and horrified to learn what lies we’d been told. If successful, we could impact the lives of millions of developers, who in turn, could touch the lives of everyone on earth. It could be our technology that powered the revolution.

Trent had pulled this off before. In a prior startup, Trent was a pioneer in deploying AI at scale. His knowledge and the software he'd built helped to drive Moore's Law for two decades. Every single device you hold or use has a small piece of Trent's intellect, embedded at the atomic level to make your device run so you can keep in touch with loved ones, scroll memes, and do business globally.

We'd learnt from the builders of the Domain Name System (DNS), Jim Rutt and David Holtzman, who are legends in their own right, that the most valuable services on earth are registries — Facebook for your social graph, Amazon for purchases, and, surprisingly, governments with all the registry services they provide. We delved into the early foundations of the Internet and corresponded with Ted Nelson, one of the architects of our modern internet, whose work began in the early 1960s. Ted was convinced that the original sin of the internet was to strip away the "ownership" part of information and intellectual property.

Blockchains restored this missing connection. As knowledge and transactions were all to be ported to blockchains over the next 30 years, these blockchain registries would serve as the most powerful and valuable databases on earth. They were also public, free and open to anyone. Trent then had a magical epiphany: it wouldn't be humans that drew intelligence and insight from these registries; it would be AI. The logical users of the eventual thousands of blockchains are AI algorithms, bots and agents.

After 3.5 years on ascribe and then BigchainDB, Ocean was the culmination of our work as pioneers in the crypto and blockchain space. Trent saw that the logical endpoint for all these L0 and L1 blockchains was a set of powerful registries for data and transactions. Ocean was our project to build the bridging technology between the existing world (which was 2017 by now) and a future world where LLMs, agents and other AI tools could scour the world and make sense of it for humans.

January 2024 — AI Revolution in Full Swing

ChatGPT had been released 14 months prior, in November 2022, launching the AI revolution for consumers and businesses. Internet companies committed hundreds of billions to buy server farms, AI talent was getting scooped up for seven-to-nine figure sums and the pace was accelerating fast. Ocean had been at the forefront on a lonely one-lane road and overnight the highway expanded to an eight-lane freeway with traffic zooming past us.

By that time, Trent and I had been at it for 10 years. We’d built some amazing technologies and moved the space forward with fundamental insights on blockchains, consensus algorithms, token design, and AI primitives on blockchains, with brilliant teammates along the way. We’d launched multiple initiatives with varying degrees of adoption and success. We’d seen a small, vibrant community, “The Ocean Navy,” led by Captain Donnie “BigBags”, emerge around data and AI, bound with a cryptotoken — the $OCEAN token.

We were also feeling the fatigue of managing a large community that incessantly wanted the token price to go up, with expectations of constant product updates, competitions, and future product roadmaps. I myself had been on the startup grind since 2008, having unwisely jumped into blockchain to join Trent immediately after exiting my first startup, without taking any break to reflect and recover. By the beginning of 2024, I was coming out of a deep 2-year burnout where it had been a struggle to just get out of bed and accomplish one or two things of value in a day. After 17 years of unrelenting adrenaline and stress, my body and mind shut down and the spirit demanded payment. The Ocean core team was fabulous; they stepped in and led much of Ocean's efforts.

When January 2024 came around, both Trent and I were in reasonable shape. He and I had a discussion on “What’s next?” with Ocean. We wanted to reconcile the competing demands of product development and the expectations of the Ocean community for the $OCEAN token. Trent and I felt that the AI space was going to be fine with unbridled momentum kicked off with ChatGPT, and that we should consider how Ocean could adapt.

Trent wanted to build hardcore products and services that could have a high impact on the lives of hundreds of people to start — a narrow but deep product, rather than aim for the entire world — broad but shallower. The rest of the Ocean team had been working on several viable hypotheses at varying scales of impact. For me, after 12 years of relentless focus in blockchain, I wanted to explore emerging technologies and novel concepts with less day-to-day operational pressure.

Trent and I looked at supporting the launch of 2–5 person teams as their own self-contained “startups” and then carving out 20% of revenue to plow back into $OCEAN token buybacks. We also bandied about the idea of joining up with another mature project in the crypto-space, where we could merge our token into theirs or vice versa. This had the elegant outcome where both Trent and I could be relieved of the day-to-day pressures, offloading the token and community management, and growing with a larger community.

March 2024 — ASI Alliance

In mid-March, Humayun Sheikh (“Sheikh”) reached out to Trent with an offer to join forces. Fetch and SingularityNet had been in discussions for several months on merging their projects, led and driven by Sheikh.

Even though Fetch and SingularityNet were not Ocean's first choice for a partnership, and the offer came seemingly out of the blue, I was brought in the next day. Within 5 days, all three parties announced a shotgun marriage between Fetch, SingularityNet and Ocean. To put it bluntly, we, Ocean, had short-circuited our slow brain with our fast brain: we had prepped ourselves for this type of outcome, so when it appeared, even with candidates we hadn't previously considered, we rationalized it.

24 Mar 2024 Call between Dr. Goertzel & Bruce Pon

The terms for Ocean to join the ASI Alliance were the following:

1. The Alliance members will be working towards building decentralized AI
2. Foundations retain absolute control over their treasuries and wallets
3. It is a tokenomic merger only, and all other collaborations or activities would be decided by the independent Foundations.

Sovereign ownership over property is THE core principle of crypto and it was the primary condition of Ocean joining the ASI Alliance. Given that there were two treasuries for the benefit of the Ocean community, a hot wallet managed by Ocean Protocol Foundation for operational expenses and a cold wallet owned and controlled by oceanDAO (the independent 3rd party collective charged with holding $OCEAN passively), we wanted to make sure that sovereign property and autonomy would be respected. In these very first discussions, SingularityNet also acknowledged the existence of the oceanDAO as a separate entity from Ocean. With this understanding of treasury ownership in place, Ocean felt comfortable moving forward. Ocean announced that it was joining the ASI Alliance on 27 March 2024.

April 2024 — A Very Short Honeymoon

Immediately after the announcement, cracks started to appear and the commercial understandings that had induced Ocean to enter into the deal started to be violated or proven untrue.

Ben Goertzel of SingularityNet spoke of how Sheikh would change his position on various issues, and of Sheikh’s desire to “pump and dump” a lot of $FET tokens. Ben confided in us that they were very grateful that Ocean could join since their own SingularityNet community would balk at a merger solely with Fetch, citing the strong community skepticism about Sheikh and Fetch.

Immediately after the ASI Alliance was announced, SingularityNet implemented a community vote to mint $100 million worth of $AGIX with the clear intent of selling them down, via the token bridge and migration contract, into our newly shared $ASI/$FET liquidity pools.

The community governance voting process was a farce. Fetch is 100% owned and controlled by Sheikh, who holds over 1.2 billion $FET, so any "community" vote was guaranteed to pass. For SingularityNet, the voting was more uncertain, so SingularityNet was forced to massage the messages to convince major token holders to get on board. Ocean took its own pragmatic approach to community voting, with the position that if $OCEAN holders didn't want $FET, they could sell their $OCEAN and move on. Ocean wanted to keep the "voting" as thin as possible so that declared preferences matched actual preferences.

Mr. David Lake ("Lake"), Board Member of SingularityNet, also disclosed that Sheikh treated network decentralization as an inconvenient detail that he didn't particularly care about and only paid "lip service" to.

In hindsight this should have been a major red flag.

April 3, 2024 — Lake to Pon

Ocean discovered that the token migration contracts, which SingularityNet had represented as being finished and security audited, were nowhere near finished or security audited.

A combined technology roadmap assessment showed little overlap, and any joint initiatives for Ocean would be “for show” and expediency rather than serving a practical, useful purpose.

The vision of bringing multiple new projects on board, the vision sold to Ocean for ASI, hit a wall when Fetch asserted that their L1 chain would retain primacy, so they could keep their $FET token. This meant that only ERC20 tokens could be incorporated into ASI in the future. ASI would not be able to integrate any other L1 chain into the Alliance.

This presented a dilemma for Ocean. Ocean was working closely with Oasis ($ROSE) and had planned on deeper technical integrations on multiple projects. If Ocean’s token was going to become $FET but Ocean’s technology and incentives were on $ROSE, there was an obvious mismatch.

Ocean worked feverishly for three weeks to develop integration plans, migration plans and technology roadmaps that could bridge the mismatch but, in the end, the options were rejected outright.

Summary of Ocean’s Proposal and Technical Analysis that was presented to Fetch and SingularityNET

Outside of technology, the Ocean core team were being dragged into meeting-hell with 4–6 meetings a day, sucking up all our capacity to focus on delivering value to the Ocean community. ASI assumed the shape of SingularityNet, which was very administratively heavy and slow.

No one had done proper due diligence. We’d all made a mistake of jumping quickly into a deal.

At the end of April 2024, 1 month after signing the ASI Token Merger Agreement, Ocean asked to be let out of the ASI Alliance. Ocean had ignored the red flags for long enough and wanted to part ways amicably with minimal damage. Ocean drafted a departure announcement that was shared in good faith with Fetch and SingularityNet.

April 25/26 — Sheikh and Pon

The next day, emails were exchanged, starting with one from Sheikh to me, threatening Ocean and myself with a lawsuit that would result in "significant damages."

Believing that Sheikh shared a commitment to the principles of non-coercion and non-compulsion, I responded to point out that Sheikh's escalation path had gone straight to a lawsuit.

Sheikh then accused Ocean of being guilty of compelling and coercing the other parties against their will, and made clear that any public statement about Ocean leaving the ASI Alliance would be met with a lawsuit.

I re-asserted Ocean's right to join or not join ASI, and asked that the least destructive path be chosen to minimize harm to the Fetch, SingularityNet and Ocean communities.

For Ocean, it was regrettable that we’d jumped into a deal haphazardly. At the same time, Ocean had signed a contract and we valued our word and our promises. We knew that it was a mistake to join ASI, but we’d gotten ourselves into a dilemma. We decided to ask to be let out of the ASI contract.

May 2024 — Legal Dispute Delays the ASI Launch

Ocean's request to be let out of the ASI Alliance was met with fury and aggression, and legal action was immediately initiated against Ocean. Sheikh was apparently petrified of the market reaction and refused to entertain anything other than a full merger.

Over the month of May 2024, with the residual goodwill from initial March merger discussions, I negotiated with David Levy who was representing Fetch, with SingularityNet stuck in the middle trying to play referee and keep everyone happy.

May 2, 2024 — Lake and Pon

Trent put together an extensive set of technical analyses exploring possible options for all parties to get what they wanted. Fetch wanted a merger while keeping their $FET token. Ocean needed a pathway that wouldn't obstruct its integration with Oasis. SingularityNet wanted everyone to just get along.

By mid-May sufficient progress had been made so that I could lay down a proposal for Ocean to rejoin the ASI initiative.

May 12, 2024 — Pon to Sheikh

By May 24, 2024 we were coming close to an agreement.

Given our residual reluctance to continue with the ASI Alliance, Ocean argued for minority rights so that we would not be bullied with administrative resolutions at the ASI level that compelled us to do anything that did not align with our values or priorities.

May 24, 2024 — Pon to Levy

Despite Fetch and SingularityNET each (separately) expressing to Ocean concerns that the other was liquidating too many tokens too quickly (or intended to do so), we strongly reiterated the sacrosanct principle of all crypto, that the property in your wallet is Yours. SingularityNet agreed, wanting the option to execute airdrops on the legacy Singularity community if they deemed it useful.

In short:

· Ocean would not interfere with Fetch’s or SingularityNet’s treasury, nor should they interfere with Ocean (or any other token holder).
· Fetch's, SingularityNET's and Ocean's treasuries were the sole property of the Foundation entities, and the owning entities had unencumbered, unrestricted rights to do as they wished with their tokens.

oceanDAO, the Ocean community treasury DAO which had been previously acknowledged by SingularityNET in March at the commencement of merger discussions, then came up over multiple discussions with Mr. Levy.

A sticking point in the negotiations appeared when Fetch placed significant pressure to compel Ocean (and oceanDAO) to convert all $OCEAN to $FET immediately after the token bridge was opened. Ocean did not control oceanDAO, and Ocean reiterated forcefully that oceanDAO would make their own decision on the token swap. No one could compel a 3rd party to act one way or the other, but Ocean would give best efforts to socialize the token swap benefits.

In keeping with an ethos of decentralization, Ocean would support any exchange choosing to de-list $OCEAN but Ocean would not forcefully advocate it. Ocean believed that every actor — exchange, token holder, Foundation — should retain their sovereign rights to do as they wish unless contractually obligated.

May 24, 2024 — Pon to Levy

As part of this discussion, Ocean disclosed to Fetch all wallets that it was aware of for both Ocean and the oceanDAO collective. Notably, Ocean clearly highlighted to Fetch that oceanDAO was a separate entity, independent from Ocean (i.e., Ocean Protocol Foundation), and not in any way controlled by Ocean.

May 24, 2024 — Pon to Levy (Full disclosure of all Ocean Protocol Foundation and oceanDAO wallets)

Fetch applied intense pressure on Ocean to convert all $OCEAN treasury tokens (including oceanDAO treasury tokens) into $FET. In fact, Fetch sought to contractually compel Ocean to do so in the terms of the ASI deal. Ocean refused to agree to this, since, as already made known to Fetch, the oceanDAO was an independent 3rd party.

Finally acknowledging the reality of the oceanDAO as a 3rd party, Fetch.ai agreed to include the following term in the ASI deal:

Ocean “endeavors on best efforts to urge the oceanDAO collective to swap tokens in their custody” into $FET/$ASI as soon as the token bridge was opened, acknowledging that Ocean could not be compelled to force a 3rd party to act.

Being close to a deal, we moved on to the Constitution of the ASI entity (Superintelligence Alliance Ltd). As was clear from the Constitution, the only role of the ASI entity was the assessment and admittance of new Members, and the follow-on instruction to Fetch to mint the necessary tokens to swap out the entire token supply of the newly admitted party.

This negotiated agreement allowed Ocean to preserve its full independence within the ASI Alliance so that it could pursue its own product roadmap based on pragmatism and market demand, rather than fake collaborations within ASI Alliance for marketing “show.” Ocean had fought, over and over again, for the core principle of crypto — each wallet holder has a sole, unencumbered right to their property and tokens to use as they saw fit.

It also allowed Ocean to reject any cost sharing on spending proposals which did not align to Ocean’s needs or priorities, to the significant dismay of Fetch and SingularityNet. They desired that Ocean would pay 1/3 of all ASI expenses that were proposed, even those that were nonsensical or absurd. Ocean’s market cap made up 20% of ASI’s total market cap, so whatever costs were commonly agreed, Ocean would still be paying “more than its fair share” relative to the other two members.

May 24, 2024 — Pon to Levy

In early June, Ocean, Fetch and SingularityNet struck a deal and agreed to proceed. Fetch made an announcement of the ASI merger moving forward for July 15, 2024.

Ocean reasoned that a protracted legal case would not have helped anyone, that $OCEAN holders would have a good home with $FET, that there were worse outcomes than joining $FET, and that it would relieve the entire Ocean organization from the day-to-day management of community expectations, freeing the Ocean core team to focus on technology and product.

From June 2024, the Ocean team dove in to execute, in support of the ASI Alliance merger. Ocean had technical, marketing and community teams aligned across all three projects. The merger went according to plan, in spite of the earlier hiccups.

Seeing that there would potentially be technology integration amongst the parties moving forward, the oceanDAO announced through a series of blogposts that all $OCEAN rewards programs would be wound down in an orderly manner and that the use of Ocean community rewards would be re-assessed at a later date.

51% treasury for the Ocean community

It’s possible that it was at this juncture that Sheikh mistakenly assumed that the Ocean treasury would be relinquished solely for ASI Alliance purposes. This is what may have led to Sheikh’s many false allegations, libelous claims and misleading statements that Ocean somehow “stole” ASI community funds when, throughout the entire process, Ocean has made forceful, consistent assertions for treasury sovereignty.

Meanwhile, the operational delay had somewhat dampened the enthusiasm in the market for the merger. SingularityNet conveyed to Ocean that this had likely prevented Sheikh from using the originally anticipated hype and increased liquidity to exit large portions of his $FET position with a huge profit for himself. As it turned out, Ocean’s hesitation, driven by valid commercial concerns, may have inadvertently protected the entire ASI community by taking Sheikh’s planned liquidation window away.

In spite of any earlier bad blood, I sent Sheikh a private note immediately upon hearing that his father was gravely ill.

June 10, 2024 — Pon to Sheikh

June 2024 — Re-Cap Contractual Obligations of the ASI Alliance

To take a quick step back, the obligations for the ASI Alliance were the following:

1. Fetch would mint 610.8m $FET to swap out all Ocean holders at a rate of 0.433226 $FET / $OCEAN
2. Fetch would inject the 610.8m $FET into the token bridge and migration contract so every $OCEAN token holder could swap their $OCEAN for $FET.

In exchange, Ocean would:

1. Swap a minimum of 4m $OCEAN to $FET (Ocean Protocol Foundation only had 25m $OCEAN, of which 20m $OCEAN were locked with GSR)
2. Support exchanges in the swap from $OCEAN to $FET
3. Join the to-be-established ASI Alliance entity (Superintelligence Alliance Ltd).

When the merger happened in July 2024, Fetch.ai injected 500m $FET each into the migration contracts for $AGIX and $OCEAN, leaving a shortfall of 110.8m $FET, which Ocean assumed would be injected later when the migration contract ran low.
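
A quick back-of-the-envelope check of those figures (a sketch, assuming $OCEAN's total supply of roughly 1.41 billion tokens):

```python
# Consistency check on the swap commitments described above.
rate = 0.433226                      # $FET per $OCEAN swap rate
committed = 610.8e6                  # $FET Fetch committed to mint
injected = 500e6                     # $FET actually injected at launch
print(committed / rate / 1e9)        # ~1.41 billion $OCEAN covered in full
print((committed - injected) / 1e6)  # ~110.8 million $FET shortfall
```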

With the merger completed, Ocean set about to focus on product development and technology, eschewing many of the outrageous marketing and administrative initiatives proposed by Fetch and SingularityNet.

July 17, 2024 — Pon to Lake and Levy

This singular product focus continued until Ocean’s eventual departure from the ASI Alliance in October 2025.

August 2024 — Cudos Admittance into ASI Alliance

In August 2024, Fetch had prepared a dossier to admit Cudos into the ASI Alliance. The dossier was relatively sparse and missed many pertinent technical details. Trent had many questions about Cudos' level of decentralization, supposedly one of the key objectives of the ASI Alliance, and whether Cudos' service was both a cultural and technical fit within the Alliance. During the two-hour Board meeting, things got heated when Sheikh made clear that he regarded decentralization as some "rubbish, unknown concept".

The vote on Cudos proceeded. I voted for Cudos to try to maintain good relations with the others, while Trent rightfully voiced his dissatisfaction with the compromise on decentralization principles. The resolution passed five votes to one, with Fetch's and SingularityNet's directors all voting "Yes" for the entry of Cudos.

The Cudos community “vote” proceeded. Even before the results had been publicly announced on 27 Sep 2024, Fetch.ai had minted the Cudos token allotment, and then sent the $FET to the migration contract to swap out $CUDOS token holders.

December 2024 — SingularityNET’s Spending, Declining $FET Token Price and the Ocean community treasury

By December 2024, many in the ASI and Ocean communities had identified large flows of $AGIX and $FET tokens from the SingularityNet treasury wallets. At the start of the ASI Alliance, Ocean had ignored the red-flag signals from SingularityNet: undisciplined spending untethered to reality.

Dr. Goertzel was hellbent on competing with the big boys of AI, who were deploying $1 trillion in CapEx; he apparently thought that a $100m buy of GPUs could make a difference. As part of this desire to "keep up" with OpenAI, X and others, SingularityNet had a headcount of over 300 people. Their fixed burn rate of $6 million per month exceeded the annual burn rate of the Fetch and Ocean teams combined. This was, in Ocean's view, unsustainable.

The results were clear as day in the $FET token price chart. From a peak of $3.22/$FET when the ASI Alliance was announced, the token price had dropped to $1.25 by end December 2024. Ocean had not sold, or caused to be sold, any $FET tokens.

Based on independent research, it appears that Fetch.ai also sold or distributed tokens to the tune of 390 million $FET, worth $314 million, from March 2024 until October 2025.

Further research shows a strong correlation between Fetch liquidations and injections into companies controlled by Sheikh in the UK.

All excess liquidity and buy-demand for $FET was sucked out through SingularityNet's $6 million per month (or more) burn rate and Fetch's liquidations, with a large portion likely going into Sheikh-controlled companies. As a result, the entire ASI community suffered, as $FET underperformed virtually every other AI-crypto token, save one. $PAAL had the unfortunate luck to get tangled up with the ASI Alliance and, through the failed token merger attempt, lost its community's trust and support, earning the unenviable honour of being the worst-performing AI-crypto token this past year.

SingularityNet was harming all of ASI due to their out-of-control spending and Fetch’s massive sell-downs compounded the strong negative price pressure.

As painful as it was, Ocean held back from castigating SingularityNet, as one of the core principles of crypto is that a wallet holder fully controls their assets. Ocean kept to that principle, believing that it would likewise apply to any assets controlled by Ocean or oceanDAO. We kept our heads down and maintained strict fiscal discipline.

For the record, from March 2024 until July 2025, a period of 16 months, neither Ocean nor oceanDAO liquidated ANY $FET or $OCEAN into the market, other than for the issuance of community grants, operational obligations and market making to ensure liquid and functioning markets. Ocean had lived through too many bear markets to be undisciplined in spending. Ocean kept budgets tight, assessed every expense regularly and gave respect to the liquidity pools generated by organic demand from token holders and traders.

Contrast this financial discipline with the records which now seem to be coming out. Between SingularityNet and Fetch, approximately $500 million was sent to exchange accounts on the Ethereum, BSC and Cardano blockchains, with huge amounts apparently being liquidated for injection into Sheikh’s personal companies or being sent for custody as part of the TRNR deal (see below). This was money coming from the pockets of all the ASI token holders.

January 2025 — oceanDAO Shifts from a Passive to an Active Token Holder

In January 2025, questions arose within the oceanDAO about whether it would be prudent to explore options to preserve the Ocean community treasury's value. In light of a $FET price that was clearly declining faster than other AI-crypto tokens, something had to be done.

Since 2021, when the custodianship of oceanDAO had been formally and legally transferred from the Ocean Protocol Foundation, the oceanDAO had held all assets passively. In June 2023, the oceanDAO minted the remaining 51% of the $OCEAN supply and kept it fully under the control of a multisig, without any activity until July 2025, to minimize any potential tax liabilities on the collective. I was one of seven keyholders.

To put to bed any false allegations, the $OCEAN held by oceanDAO are for the sole benefit of the Ocean community and no one else. It doesn’t matter if Sheikh makes claims based on an alternative reality hundreds of times or that these claims are repeated by his sycophants — the truth is that the $OCEAN / $FET owned by oceanDAO is for the benefit of the Ocean community.

May 2025 — oceanDAO Establishes in Cayman

The realization that SingularityNet (and, as it now turns out, Fetch) was draining liquidity and creating a consistent negative price impact on the community spurred the oceanDAO to investigate what could be done to diversify the Ocean community treasury out of the passively held $OCEAN which was pegged to $FET.

The oceanDAO collective realized it had to actively manage the Ocean community treasury to protect Ocean community interests, especially as the DeFi landscape had matured significantly over the years and now offered attractive yields. Lawyers, accountants and auditors were engaged to survey suitable jurisdictions for this purpose — Singapore, Dubai, Switzerland, offshore islands. In the end, the oceanDAO decided on Cayman.

Cayman offered several unique advantages for DAOs. Cayman law permits the creation of entities that avoid giving founders, or those close to the project, any legal claim on community assets, ensuring that the work of the entity would be deployed solely for the Ocean community. One quarter of all DAOs choose Cayman as their place of establishment, including SingularityNet.

By June 2025, a Cayman trust was established on behalf of the oceanDAO collective for the benefit of the Ocean community. This new entity became known as Ocean Expeditions (OE). oceanDAO transferred its assets to the OE entity and the passively held $OCEAN were converted to $FET. OE could now execute an active management of the treasury. As it happened, Fetch.ai had in fact gotten what it wanted, namely, for oceanDAO to convert its entire treasury of 661 million $OCEAN into $FET tokens.

Contrary to what Sheikh has been insinuating, Ocean does not control OE. Whilst I am the sole director of OE, I remain only one of several keyholders, all of whom entered into a legally binding instrument to act for the collective benefit of the Ocean community.

June 2025 — Fetch’s TRNR “ISI” Deal

Unbeknownst to Ocean or oceanDAO, in parallel, Fetch.ai UK had been working on an ETF deal with Interactive Strength Inc (ISI), aka the “TRNR Deal”.

Neither Ocean nor oceanDAO (or subsequently OE) had any prior knowledge, involvement or awareness of this. In fact, “Ocean” is not mentioned even once in the SEC filings. Consistent with the original understanding that each Foundation had sole control of their treasuries, Ocean was not consulted by Fetch.

I don’t have the full details and I encourage the ASI community to inquire further but the mid-June TRNR deal seems to have committed Fetch to supply $50 million in a cash loan for DWF and ISI, and $100 million in tokens (125m $FET) for a backstop to be custodied with BitGo.

SingularityNet told Ocean that they were strong-armed by Fetch.ai to put in $15 million in cash for this deal, but were not named in any of the filings. The strike price for the deal was around $0.80 per $FET, and the backstop would kick in if $FET dropped to $0.45, essentially betting that $FET would never fall 45%.

However, this ignored the fact that crypto can fall 90% in bear markets or flash crashes. The TRNR deal not only put Fetch.ai's assets at risk: if the collateral was called, the 125m $FET would be liquidated as well, causing significant harm to the entire ASI community.

Well, four months later, that's exactly what happened. On the night of Oct 10, 2025, Trump announced tariffs on China, sending the crypto market into chaos. Many tokens saw a temporary drawdown of 95% before recovering to about two-thirds of their valuation from the day before. One week later, on Oct 17, further crypto-market aftershocks occurred with another round of smaller liquidations.

Again, I don’t have all the details, but it appears that large portions of the $FET custodied with BitGo were liquidated causing a drop in $FET price from $0.40 down to $0.32.

Oct 12, 2025 — Artificial Superintelligence Alliance Telegram Chat

The ASI and Fetch communities should be asking Fetch.ai some hard questions, such as why Fetch.ai would sign such a reckless and disastrous deal. They should ask for full transparency on the TRNR deal, with clear numbers on the amounts loaned, the $FET used as collateral, and the risk assessment of the negative price impact to $FET if the collateral were called and liquidated by force.

June 2025 — oceanDAO becomes Ocean Expeditions

Two weeks after the TRNR deal was announced, OE received its business incorporation papers in Cayman and assets from oceanDAO could be immediately transferred over to the OE entity.

The timing of OE's incorporation was totally unrelated to Fetch's TRNR deal, and had in fact been in the works long before the TRNR deal was announced. OE's strategy to actively manage the Ocean community treasury was developed completely independently of Fetch's TRNR deal because, remember, Ocean was never informed of anything except for a heads-up on the main press release a few days before publication.

OE had few options with the $OCEAN it held because (contrary to recent assertions) Fetch.ai had mandated a one-way transfer from $OCEAN to $FET in the June 2024 deal for Ocean to re-engage with the ASI Alliance. By this time, most exchanges had de-listed $OCEAN, which closed off virtually all liquidity avenues. As a result, $OCEAN lost 90% of its liquidity and exchange pairs.

OE had only one way out and that was to convert $OCEAN to $FET. This was consistent with the ASI deal. It was Fetch.ai that wanted Ocean to compel oceanDAO to convert $OCEAN to $FET as part of the ASI deal.

On July 1, 2025, all 661m $OCEAN held by OE in the Ocean community wallet were converted to $FET.

Completely unbeknownst to Ocean and to OE, Sheikh viewed OE's treasury activities not as support for his $FET token but as sabotage of his TRNR plans.

But recall, OE had no idea about the details of the deal. Neither OE nor Ocean was a party to the deal in any way. I found out, like everyone else, via a press release on June 11 that the deal had closed, and I promptly ignored it to focus on Ocean's strategy, products and technology.

Sticking to the principle that each Foundation works in its own manner for the benefit of the ASI community, Ocean didn’t feel the need to demand any restrictions on Fetch.ai nor to delve into any documents. Personally, I didn’t even read the SEC filings until September, in the course of the ongoing legal proceedings to understand the allegations being made against Ocean. The TRNR deal was solely a Fetch.ai matter.

June 2025 — ASI Alliance Financials

As an aside, I had been driving the effort to keep the books of the ASI Alliance properly up-to-date.

Sheikh was insistent that Fetch be reimbursed by the other Members for its financial outlays, assuming that the other ASI members had spent less than Fetch.ai. When Sheikh found out that it was actually Ocean who had contributed the most money to commonly agreed expenditures, even though Ocean was the smallest member, and that SingularityNet and Fetch.ai would owe Ocean money, the complaint was dropped.

Instead, Sheikh tried another tactic to offload expenses.

SingularityNet and Ocean had signed off on the 2024 financial statements for the ASI Alliance. However, the financials were delayed by Fetch.ai. Sheikh wanted to load up the balance sheet of the ASI Alliance with debt obligations based on the spending of the member Foundations.

June 20, 2025 — Pon to Sheikh

Fetch’s insistence was against the agreement made at the founding of ASI, that each Member would spend and direct their efforts on ASI initiatives of their own choosing and volition, and the books of ASI Alliance would be kept clean and simple. This was especially prudent as the ASI Alliance had no income or assets.

After a 6-week delay and back-and-forth discussions, in mid-August we finally got Fetch.ai's agreement to move forward by deferring the conversation on cost sharing to the following year.

This incident stuck in my mind as an enormous red flag, as these accounting practices hinted at the tactics that Sheikh may consider a normal way of doing business. Ocean strongly disagrees and does not find such methods prudent.

July 2025 — Ocean Expeditions Sets Out to Diversify the Ocean community Treasury

On July 3, Ocean Expeditions (OE) sent 34 million $FET to a reputable market maker for mid-dated options with sell limits set to $0.75–0.95, so OE could earn premiums while allowing for the liquidation of $FET if the price was higher at option expiry.

This sort of option strategy is a standard approach to treasury management that is ethical, responsible and benefits token holders by maintaining relative price stability. The options only execute and trigger a sale if, upon maturity, the $FET price is higher than the strike price. If at maturity the $FET price is lower than the strike price, the options expire unexercised while still allowing OE to earn premiums, benefiting the Ocean community.
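
For readers unfamiliar with the mechanics, here is a minimal sketch of the payoff logic for a covered call at maturity; the strike and premium figures are illustrative, not OE's actual terms:

```python
# Sketch: outcome for the seller of a covered call at option maturity.
def covered_call_outcome(price_at_expiry: float, strike: float, premium: float):
    """Return (proceeds per token, whether the tokens were sold)."""
    if price_at_expiry >= strike:
        # Exercised: tokens are sold at the strike; the premium is kept too.
        return strike + premium, True
    # Unexercised: tokens are retained and the premium is still kept.
    return premium, False

print(covered_call_outcome(0.90, strike=0.75, premium=0.03))  # (0.78, True)
print(covered_call_outcome(0.50, strike=0.75, premium=0.03))  # (0.03, False)
```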

Insinuations that these transactions were a form of “token dumping” are nonsensical and misinformed. OE was simply managing the community treasury.

On July 14, a further 56 million $FET was sent out as part of the same treasury strategy with strikes set at $0.70-$1.05.

These option transactions did lead to a responsible liquidation of $18.2 million worth of $FET on July 21, one that accorded with market demand and did not depress the $FET price. Further, this was 6 weeks after the TRNR deal was announced. From July 21 until Ocean’s exit from the ASI Alliance on Oct 9, 2025, there were no further liquidations of $FET save for one small tranche that raised $2.2m.

In total, Ocean Expeditions raised $22.4 million for the Ocean community, a significantly smaller sum compared to the estimated $500 million of liquidations by the other ASI members.

August 2025 — Ocean Requests for a Refill of the $OCEAN/$FET Token Migration Contract

Around this time, Ocean realized that the $OCEAN/$FET token migration contract was running perilously low. The contract was supposed to cover over 270 million $OCEAN still to be converted by 37,500 token holders, but only 7 million $FET remained in it.
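
Rough math on that shortfall, using the swap rate from the original deal:

```python
# How much $FET the remaining conversions would require vs. what was left.
remaining_ocean = 270e6            # $OCEAN still to be converted
rate = 0.433226                    # $FET per $OCEAN
needed_fet = remaining_ocean * rate
print(needed_fet / 1e6)            # ~117 million $FET needed
print(7e6 / needed_fet * 100)      # only ~6% of that remained in the contract
```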

On July 22, Ocean requested that Fetch top up the migration contract with 50m $FET; there was no response. Another email was sent to Sheikh on July 29, drawing a response of "will work on it." Sheikh asked for a call on Aug 1, where he agreed to top up the migration contract with the remaining tokens. On Aug 5, I wrote an email to Fetch and Sheikh with a formal request for a top-up, while confirming that all wallets were secured for the Ocean community.

I sent a final note to Sheikh on August 12, asking why the promised top-up had not yet occurred.

August 2025 — A Conspiracy To Force Ocean to Submit

Starting August 12, Fetch.ai and SingularityNet actively conspired against Ocean. Without allowing Ocean's directors to vote on the matter (on the grounds that Ocean's directors were purportedly "conflicted"), Fetch's and SingularityNet's directors on the ASI Alliance unilaterally attempted to pass a resolution to close the $OCEAN-$FET token bridge. This action clearly violated the ASI Constitution, which mandated unanimous agreement by all directors for any ASI Alliance action.

On August 13, Mario Casiraghi, SingularityNET’s CFO, issued the following email:

The next day on August 14, I received this message from Lake:

(In this note, Lake acknowledged that Sheikh’s original plans to dump the ASI Alliance were still in place, albeit potentially at an accelerated pace).

Ocean objected forcefully, citing the need to protect the ASI and Ocean communities, and pleading to keep the matter private and contained.

August 19, 2025 — Pon, Dr. Goertzel, Mario Casiraghi

At this point, I highlighted the obvious hypocrisy of SingularityNet and Fetch.

SingularityNet and Fetch had moved $500 million worth of $FET, sucking out excess liquidity from all token holders. All the while, Ocean held its tongue and maintained fiscal discipline.

Yet, the very first time that oceanDAO/Ocean Expeditions actually liquidated any $FET tokens, Ocean was accused of malicious intent and of exercising control over oceanDAO/OE, and was called to task. Fetch had accused the wrong entity, Ocean, for the actions of a wholly separate 3rd party, and jumped to completely false conclusions about the motives.

The improper ASI Alliance Directors’ actions violated the core principle of the ASI Alliance that crypto-property was to be solely directed by each Foundation. Additional clauses with demands for transparency, something neither Fetch.ai nor SingularityNet had ever offered or provided themselves, were included to further try to hamper and limit Ocean Protocol Foundation.

The only authority of the ASI Alliance and the Board, as defined in the ASI Constitution, was to vote on accepting new members and then minting the appropriate tokens for a swap-out. There was no authority, power or mandate to sanction any Member Foundation.

Any and all other actions needed a unanimous decision from the Board and Member Foundations. This illegal action was exactly what Ocean was so concerned about in the May 2024 “re-joining” discussions — the potential for the bullying and harassment of Ocean as the weakest and smallest member of the ASI Alliance.

Finally, seeing the clear intent to close the token bridge and the active measures to harm 37,600 innocent $OCEAN token holders, Ocean needed to act.

Ocean immediately initiated legal action to protect Ocean and ASI users on August 15, 2025. This remains ongoing.

Within hours of Ocean's filing, Fetch responded with a lengthy countersuit against Ocean, accompanied by witness statements and voluminous exhibits. This showed that Fetch had for weeks been planning to commence a lawsuit against Ocean and had been instructing lawyers behind the scenes. On August 19, Ocean also received a DocuSign from SingularityNet's lawyer. This contained the resolution which Fetch and Singularity had attempted to pass without the Ocean-appointed directors, i.e., myself and Trent.

On August 22, the parties agreed by consent to an order to maintain confidentiality during the legal process. Out of respect for that process, Ocean refrained from communicating with any third parties, including OE, which was not a party to the dispute or the proceedings. This is also why Ocean has, until now, refrained from litigating this dispute in public.

October 2025 — Ocean Exits the ASI Alliance

As the legal proceedings carried on and evidence was submitted from August until late September, it became clear that Ocean could no longer be a part of the ASI Alliance.

The only question was when to exit.

Ocean was confident that the evidence and facts presented to the adjudicator would prove its case and vindicate it, so Ocean wanted the adjudicator to forcefully make an assessment.

Once the adjudicator issued his findings, Ocean decided that it was time to leave the ASI Alliance. (Ocean has proposed waiving confidentiality over those findings and releasing them to the community, so the community can see the truth for themselves; Fetch has refused to agree.)

The 18-month ordeal was too much to bear.

From the violation of the original agreements on the principles of decentralization, to the encroachment on both the Ocean and Ocean Expeditions treasuries, to watching SingularityNET and Fetch disregard and pilfer the community for their own priorities, Ocean knew that it needed out.

Ocean couldn’t save ASI, but could try to salvage something for the Ocean community.

SingularityNET and Fetch used their treasuries recklessly, as they saw fit, without regard for or consideration of the impacts on the ASI community.

Given Fetch’s overreaction the first time Ocean wanted to bow out amicably, Ocean knew that additional legal challenges and attempts to block its departure could be expected.

Ocean has only tried to build decentralized AI products, exert strict fiscal discipline, collaborate in good faith, and protect the ASI and Ocean communities as best we can.

As of Oct. 9, Ocean Expeditions retained the vast majority of the $FET that were converted from $OCEAN. All tokens held by Ocean Expeditions are its property, and will be used solely for the benefit of the Ocean community. They are not controlled by Ocean, or by me.

Summary

$FET dropped from a peak of $3.22 at the time of the ASI Alliance announcement to $0.235 today, a -93% drop. Fetch and SingularityNET have tried to convince the community that this was all a result of Ocean leaving the ASI Alliance, but that is untrue.

Ocean announced its withdrawal from the ASI Alliance on Oct 9 in a fully amicable manner, without pointing fingers, to minimize any potential fallout. Even eight hours after Ocean’s announcement, the price of $FET had only fallen marginally, from $0.55 to $0.53. In other words, Sheikh is blaming Ocean for a problem that has little to do with anything Ocean has done.

Price Chart “1h-Candles” on $FET at the time of the Ocean withdrawal

The crypto flash crash of Oct 10/11, triggered by Trump’s China tariff announcement, took the entire market down; $FET fell to $0.11 before recovering to $0.40.

On the evening of Oct 12, a further decline in $FET came when the TRNR collateral was called and began to be liquidated, bringing $FET down to $0.32. This was the ill-conceived deal entered into by Fetch.ai, which apparently ignored the extreme volatility of crypto markets and caused unnecessary damage to the entire ASI community.

Meanwhile, in the general crypto market, a second aftershock of liquidations happened around Oct 17.

Added to this were Fetch’s and Sheikh’s attempts to denigrate Ocean, which damaged their own $FET token as the allegations became more and more ludicrous and the narrative attacks started to contradict themselves.

In short, the -93% drop in $FET from 27 March 2024 until 19 October 2025 was due to: broader market sentiment and volatility; SingularityNET’s and Fetch’s draining of liquidity from the entire community by dumping upwards of $500 million worth of $FET tokens; a reckless TRNR deal that failed to anticipate crypto dropping more than 45%, wiping out $150 million in cash and tokens; and Fetch.ai’s FUDing of its own project, bringing disrepute on itself when Ocean decided that it could not in good conscience remain a part of the ASI Alliance.

X Post: @cryptorinweb3 — https://x.com/cryptorinweb3/status/1980644944256930202

I’m not going to say whose fault I think the drop in $FET’s price is, but I can with very high confidence say it has next to nothing to do with Ocean leaving the ASI Alliance.

I hope that the Fetch and SingularityNET communities ask for full financial transparency on the spending of the respective Fetch.ai and SingularityNET Companies and Foundations.

I would also like to sincerely thank the Ocean community, trustees and the public for their patience and support during Ocean’s radio silence in respect of the legal processes.

The ASI Alliance from Ocean’s Perspective was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

Apex Group’s Tokeny & AMINA Bank combine tokenisation innovation with regulated banking

The post Apex Group’s Tokeny & AMINA Bank combine tokenisation innovation with regulated banking appeared first on Tokeny.

Luxembourg & Zug – 23 October 2025 – AMINA Bank AG (“AMINA”), a Swiss Financial Market Supervisory Authority (FINMA)-regulated crypto bank with global reach, has entered into a collaboration agreement with Tokeny (an Apex Group company), the leading onchain finance operating system, to create a regulated banking bridge for institutional tokenisation. This strategic collaboration addresses critical institutional bottlenecks by applying Swiss banking standards to blockchain innovation.

Through this agreement, AMINA Bank will deliver regulated banking and custody for underlying assets such as government bonds, corporate securities, treasury bills, and other traditional financial instruments, while Tokeny provides the tokenisation platform. AMINA’s extensive crypto and stablecoin offering also enables clients to seamlessly move on and off chain.

“Market demand for tokenisation is coming from the open blockchain ecosystems, and institutions need a compliant and scalable way to meet it. By integrating AMINA Bank's regulated banking and custody framework with Tokeny's orchestrated tokenisation infrastructure, we provide financial institutions with a fast, seamless, and secure path to market.”
Luc Falempin, CEO of Tokeny and Head of Product for Apex Digital

The tokenised assets market is experiencing explosive growth, with major institutions, including JP Morgan and BlackRock, leading adoption of blockchain-based financial products. This momentum is supported by accelerating regulatory clarity across the globe, from the US GENIUS Act to Hong Kong’s ASPIRe framework.

The collaboration leverages AMINA’s regulated banking infrastructure alongside Tokeny’s proven tokenisation expertise. AMINA provides Swiss banking-standard custody and compliance, while Tokeny contributes first-mover tokenisation technology and an enterprise-grade platform that has powered over 120 use cases and billions of dollars in assets. It has recently been acquired by Apex Group, a global financial services provider with $3.5 trillion in assets under administration.

“In the past year, there’s been increased demand from our institutional clients for compliant access to tokenised assets on public blockchains. Tokenised entities still face critical challenges such as setting up banking and custody solutions. There’s a lack of orchestrated infrastructure that connects with legacy systems. My priority is delivering this innovation through the safest, most regulated pathway possible, and we’re excited to partner with Tokeny to make this happen.”
Myles Harrison, Chief Product Officer at AMINA Bank

The combined solution offers financial institutions end-to-end tokenisation capability with fast time-to-market measured in weeks. Starting with traditional financial instruments where institutional demand is focused, the collaboration agreement establishes the regulated infrastructure foundation for future expansion into asset classes where tokenisation can deliver greater utility.

Tokeny’s platform leverages the ERC-3643 standard for compliant tokenisation. The standard is built on top of ERC-20 with a compliance layer to ensure interoperability with the broader DeFi ecosystem. This means that, even within an open blockchain ecosystem, only authorised investors can hold and transfer tokenised assets, while maintaining issuer control and automated regulatory compliance.
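To make that compliance layer concrete, here is a minimal sketch, not Tokeny's production code, of how an integrator might query the two onchain checks an ERC-3643 token consults before a transfer. The RPC endpoint and contract addresses below are placeholders; the isVerified and canTransfer view functions come from the published ERC-3643 interfaces, and ethers.js is assumed as the client library.

```typescript
import { Contract, JsonRpcProvider } from "ethers";

// Minimal human-readable ABIs for the two ERC-3643 components consulted
// before a transfer: the identity registry and the compliance contract.
const IDENTITY_REGISTRY_ABI = [
  "function isVerified(address _userAddress) view returns (bool)",
];
const COMPLIANCE_ABI = [
  "function canTransfer(address _from, address _to, uint256 _amount) view returns (bool)",
];

// Placeholder endpoint and addresses; a real integration would use the
// issuer's deployed contracts.
const provider = new JsonRpcProvider("https://rpc.example.org");
const registry = new Contract(
  "0x0000000000000000000000000000000000000001", IDENTITY_REGISTRY_ABI, provider);
const compliance = new Contract(
  "0x0000000000000000000000000000000000000002", COMPLIANCE_ABI, provider);

// A transfer of an ERC-3643 token only succeeds if both checks pass.
async function transferWouldSucceed(from: string, to: string, amount: bigint): Promise<boolean> {
  // 1. The receiver must hold a verified onchain identity (investor eligibility).
  if (!(await registry.isVerified(to))) return false;
  // 2. The transfer itself must satisfy the issuer's compliance rules
  //    (e.g. country restrictions or holder limits).
  return compliance.canTransfer(from, to, amount);
}
```

This is what "only authorised investors can hold and transfer tokenised assets" means in practice: the eligibility and rule checks are enforced by the token contract itself, not by an off-chain gatekeeper.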

“The future of finance is open, and institutions now have the tools to take full advantage, without compromising on compliance, security, or operational efficiency,” added Falempin.

About Tokeny

Tokeny is a leading onchain finance platform and part of Apex Group, a global financial services provider with $3.5 trillion in assets under administration and over 13,000 professionals across 52 countries. With seven years of proven experience, Tokeny provides financial institutions with the technical tools to represent assets on the blockchain securely and compliantly without facing complex technical hurdles.

Institutions can issue, manage, and distribute securities fully onchain, benefiting from faster transfers, lower costs, and broader distribution. Investors enjoy instant settlement, peer-to-peer transferability, and access to a growing ecosystem of tokenized assets and DeFi services. From opening new distribution channels to reducing operational friction, Tokeny enables institutions to modernize how assets move and go to market faster, without needing to be blockchain experts.

Website | LinkedIn | X/Twitter

About AMINA – Crypto. Banking. Simplified.

Founded in April 2018 and established in Zug (Switzerland), AMINA Bank AG is a pioneer in the crypto banking industry. In August 2019, AMINA Bank AG received the Swiss Banking and Securities Dealer License from the Swiss Financial Market Supervisory Authority (“FINMA”). In February 2022, AMINA Bank AG, Abu Dhabi Global Markets (“ADGM”) Branch received Financial Services Permission from the Financial Services Regulatory Authority (“FSRA”) of ADGM. In November 2023, AMINA (Hong Kong) Limited received its Type 1, Type 4 and Type 9 licenses from the Securities and Futures Commission (“SFC”).

To learn more about AMINA, visit aminagroup.com

The post Apex Group’s Tokeny & AMINA Bank combine tokenisation innovation with regulated banking appeared first on Tokeny.


Thales Group

Thales reports its order intake and sales as of September 30, 2025

23 Oct 2025

Order intake: €16.8 billion, up +8% (+9% on an organic basis1)
Sales: €15.3 billion, up +8.4% (+9.1% on an organic basis)
Confirmation of all 2025 financial targets2:
- Book-to-bill ratio above 1
- Organic sales growth between +6% and +7%3
- Adjusted EBIT margin: 12.2% to 12.4%

Thales (Euronext Paris: HO) today announced its order intake and sales for the period ending September 30, 2025.

Order intake

In € millions                 | 9m 2025 | 9m 2024 | Total change | Organic change
Aerospace                     |   3,919 |   3,639 |          +8% |            +7%
Defence                       |   9,943 |   8,951 |         +11% |           +12%
Cyber & Digital               |   2,827 |   2,905 |          -3% |            -0%
Total – operating segments    |  16,689 |  15,494 |          +8% |            +8%
Other                         |      73 |      56 |              |
Total                         |  16,762 |  15,551 |          +8% |            +9%
Of which mature markets4      |  12,342 |  11,413 |          +8% |            +9%
Of which emerging markets4    |   4,419 |   4,137 |          +7% |            +8%

Sales

In € millions                 | 9m 2025 | 9m 2024 | Total change | Organic change
Aerospace                     |   4,108 |   3,839 |        +7.0% |          +6.9%
Defence                       |   8,243 |   7,239 |       +13.9% |         +14.0%
Cyber & Digital               |   2,803 |   2,914 |        -3.8% |          -1.3%
Of which Cyber                |   1,059 |   1,140 |        -7.1% |          -4.8%
Of which Digital              |   1,744 |   1,774 |        -1.7% |          +1.0%
Total – operating segments    |  15,154 |  13,993 |        +8.3% |          +8.9%
Other                         |     101 |      76 |              |
Total                         |  15,256 |  14,069 |        +8.4% |          +9.1%
Of which mature markets4      |  12,053 |  11,220 |        +7.4% |          +7.7%
Of which emerging markets4    |   3,203 |   2,849 |       +12.4% |         +14.5%

“In the third quarter 2025, Thales delivered sustained organic growth in both order intake and sales, further confirming the Group's strong momentum since the beginning of the year. In this supportive environment, Thales confirms all its financial targets for 2025. I welcome the constant commitment of our teams to pursue this sustainable growth trajectory.”
Patrice Caine, Chairman & Chief Executive Officer

Order intake

Over the first nine months of 2025, order intake amounted to €16,762 million, up +9% organically compared with the first nine months of 2024 (up +8% on a reported basis). The Group continues to benefit from strong commercial momentum in most of its activities, particularly in the Aerospace and Defence segments. ​

Over this period, Thales recorded 14 large orders with a unit value of more than €100 million, for a total amount of €5,331 million:

5 large orders recorded in Q1 2025:
- Contract signed with Space Norway, a Norwegian satellite operator, for the supply of the THOR 8 telecommunications satellite;
- Order by SKY Perfect JSAT to Thales Alenia Space of JSAT-32, a geostationary telecommunications satellite;
- Signing of a contract between Thales and the European Space Agency (ESA) to develop Argonaut, a future autonomous and versatile lunar lander designed to deliver cargo and scientific instruments to the Moon;
- Order from the Dutch Ministry of Defence for the modernization and support of vehicle tactical simulators;
- Order from the French Defence Procurement Agency (DGA) for the development, production, and maintenance of vetronics equipment for various Army vehicles as part of the SCORPION programme.

5 large orders recorded in Q2 2025:
- Contract related to the supply of 26 Rafale Marine to India to equip the Indian Navy;
- As part of the SDMM (Strategic Domestic Munition Manufacturing) contract signed in 2020 for the supply of ammunition to the Australian armed forces, entry into force of years 6 to 8. The continuation of the SDMM contract concerns the design, development, manufacture and maintenance of a variety of ammunition;
- Contract for the delivery to Ukraine of 70 mm ammunition and the transfer of the final assembly line of certain components of this ammunition from Belgium to Ukraine;
- Order for the production and supply of AWWS (Above-Water Warfare System) combat systems intended to equip frigates in Europe;
- Order by Sweden of compact multi-mission medium-range Ground Master 200 radars.

4 large orders recorded in Q3 2025:
- Signing of the Initial Phase Contract between Thales Alenia Space and the SpaceRISE consortium of satellite operators to engineer the system and secured payload solutions for the future European constellation IRIS²;
- Order from the UK Ministry of Defence for the production and delivery of 5,000 air defence LMM missiles;
- Order from the German Ministry of Defence for the delivery to a third country of portable land surveillance radars;
- Order from a European country for the production and delivery of 70 mm ammunition.

At €11,431 million, order intake of a unit amount below €100 million was up +8% compared to the first nine months of 2024; meanwhile, those with a unit value of less than €10 million were slightly up at September 30, 2025.

Geographically5, order intake in mature markets recorded organic growth of +9%, at €12,342 million, driven notably by solid momentum in Europe (up organically by +13%). Order intake in emerging markets amounted to €4,419 million and showed an organic increase of +8% at 30 September 2025, notably benefiting from the strong dynamism in Asia (+39% organic growth).

Order intake in the Aerospace segment amounted to €3,919 million, up +7% over the first nine months of 2025. The Avionics market has enjoyed sustained commercial momentum in its various activities since the beginning of the year. The Space business, which recorded four orders with a unit value of more than €100 million in the first nine months of 2025, also saw its order intake increase over the period.

With an amount of €9,943 million compared to €8,951 million in the first nine months of 2024, order intake in the Defence segment recorded a strong organic increase of +12%. This growth reflects an excellent commercial dynamic, supported notably by the relevance of Thales’ portfolio of products and solutions in the current context. Nine orders with a unit amount exceeding €100 million have been recorded since the beginning of the year 2025. Among them, two orders in the field of air defence in the UK and Germany were recorded in the third quarter.

At €2,827 million, order intake in the Cyber & Digital segment was structurally very close to sales as most business lines in this segment operate on short sales cycles. The order book is therefore not significant.

Sales

Sales for the first nine months of 2025 amounted to €15,256 million, compared with €14,069 million in the same period of 2024, up +9.1%6 at constant scope and exchange rates (+8.4% on a reported basis).

Geographically7, sales recorded solid growth in mature markets (+7.7% in organic terms), notably in the United Kingdom (+12.3%). Emerging markets also recorded strong growth (+14.5% organically over the period), with double-digit organic growth in all regions.

In the Aerospace segment, sales reached €4,108 million, up +7.0% compared to the first nine months of 2024 (+6.9% at constant scope and exchange rates). This growth reflects the continued momentum in the Avionics market, with a solid performance in both civil and military domains. Sales in the Space business recorded growth in line with annual expectations over the first nine months of 2025.

Sales in the Defence segment reached €8,243 million, up +13.9% compared to the first nine months of 2024 (+14.0% at constant scope and exchange rates). This growth was driven by all activities in the Defence segment, which benefitted notably from production capacity expansion projects being deployed.

Cyber & Digital segment sales amounted to €2,803 million, down -3.8% compared to the first nine months of 2024 (-1.3% at constant scope and exchange rates), reflecting contrasted trends:

Cyber businesses reported a decrease over the first nine months of 2025 (-4.8% at constant scope and exchange rates):
- The Cyber Products business, down at September 30, 2025, has not yet returned to a normal level of activity after the disturbances recorded during the first half of the year. These disturbances, which still weighed on the third quarter, are linked to the merger of Imperva's and Thales' sales teams, a key step in the integration that will allow the business to benefit from its full potential;
- The Cyber Premium Services business also showed a decline over the first nine months of 2025, affected by soft market demand, particularly in Australia. The ongoing execution of the strategy aimed at refocusing the business on selective profitable growth segments shows encouraging signs.

Digital activities recorded an increase of +1.0% at constant scope and exchange rates:
- Sales from Payment Services enjoyed strong growth in digital banking solutions, but remained affected by still-low volumes on payment cards;
- Secure Connectivity solutions recorded sustained growth, driven by digital solutions (including eSIM as well as on-demand connectivity platforms).

Outlook

Thales, with its strong positioning in all of its major markets and the relevance of its products and solutions, benefits from a favorable medium and long-term outlook.

Assuming no new disruptions in the macroeconomic and geopolitical contexts, and no new tariff developments8, Thales confirms all its targets for 2025:

- A book-to-bill ratio above 1;
- An expected organic sales growth between +6% and +7%, corresponding to a sales range of €21.8 to €22.0 billion9;
- An Adjusted EBIT margin between 12.2% and 12.4%.

****

This press release contains certain forward-looking statements. Although Thales believes that its expectations are based on reasonable assumptions, actual results may differ significantly from the forward-looking statements due to various risks and uncertainties, as described in the Company's Universal Registration Document, which has been filed with the French financial markets authority (Autorité des marchés financiers – AMF).

UPCOMING EVENTS

Ex-interim dividend date: December 2, 2025
Interim dividend payment date: December 4, 2025
Full Year 2025 results: March 3, 2026 (before market)
Annual General Meeting: May 12, 2026

1 In this press release, “organic” means “at constant scope and exchange rates”.

2 Assuming no new disruptions of the macroeconomic and geopolitical context. Regarding tariffs, the Group’s guidance for the year 2025 is valid on the basis of 1) reciprocal tariffs of 15% from the EU, 10% from the UK and 25% from Mexico, 2) the maintenance of the EU-US tariff exemption on Aeronautics and 3) consequently, the absence of European retaliatory measures.

3 Corresponding to €21.8 to €22.0 billion and based on end of September 2025 scope, average exchange rates as at 30 September 2025 and the assumption of an average EUR/USD exchange rate of 1.17 in Q4 2025.

4 Mature markets: Europe, North America, Australia, New Zealand; emerging markets: all other countries.

5 See table on page 7.

6 Considering a currency effect of -€164 million and a net scope effect of +€90 million.

7 See table on page 7.

8 Regarding tariffs, the Group’s guidance for the year 2025 is valid on the basis of 1) reciprocal tariffs of 15% from the EU, 10% from the UK and 25% from Mexico, 2) the maintenance of the EU-US tariff exemption on Aeronautics and 3) consequently, the absence of European retaliatory measures.

9 Based on end of September 2025 scope, average exchange rates as at 30 September 2025 and the assumption of an average EUR/USD exchange rate of 1.17 in Q4 2025.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

Related document: Thales - Q3 2025 - slideshow (PDF, 1.37 MB, 29 Oct 2025)

Airbus, Leonardo and Thales sign Memorandum of Understanding to create a leading European player in space

23 Oct 2025

- New European space player aims to unite and enhance capabilities by combining the three respective activities in satellite and space systems manufacturing and space services.
- Major milestone in strengthening the European space ecosystem, supporting strategic autonomy and competitiveness, to ensure Europe enhances its role as a key player in the global space market.
- New company could be operational in 2027, subject to regulatory approvals and satisfaction of other closing conditions.
- Project expected to generate significant synergies, foster innovation, and deliver added value to customers, shareholders and employees.

* * *

Airbus (stock exchange symbol: AIR), Leonardo (Borsa Italiana: LDO) and Thales (Euronext Paris: HO) have signed a Memorandum of Understanding (“MoU”) aimed at combining their respective space activities into a new company.

By joining forces, Airbus, Leonardo and Thales aim to strengthen Europe’s strategic autonomy in space, a major sector that underpins critical infrastructure and services related to telecommunications, global navigation, earth observation, science, exploration and national security. This new company also intends to serve as the trusted partner for developing and implementing national sovereign space programmes.

This new company will pool, build and develop a comprehensive portfolio of complementary technologies and end-to-end solutions, from space infrastructure to services (excluding space launchers). It will accelerate innovation in this strategic market, in order to create a unified, integrated and resilient European space player, with the critical mass to compete globally and grow on the export markets.

This new player will be able to foster innovation, combine and strengthen investments in future space products and services, building on the complementary assets and world-class expertise of all three companies. The combination is expected to generate mid-triple-digit millions of euros in total annual synergies on operating income five years after closing. The costs associated with generating those synergies are expected to be in line with industry benchmarks.

The project is expected to unlock incremental revenues, leveraging an expanded portfolio of end-to-end products and services leading to a more competitive offering, and greater global commercial reach. The combined capabilities also pave the way for even more innovative new programmes to enlarge the new company’s market positioning. Further operational synergies in, among others, engineering, manufacturing and project management, are anticipated to drive long-term efficiency and value creation. Upon conclusion of the transaction, this new company will encompass the following contributions:

- Airbus will contribute its Space Systems and Space Digital businesses, coming from Airbus Defence and Space.
- Leonardo will contribute its Space Division, including its shares in Telespazio and Thales Alenia Space.
- Thales will mainly contribute its shares in Thales Alenia Space, Telespazio, and Thales SESO.

The combined entity will employ around 25,000 people across Europe. With an annual turnover of about €6.5 billion (end of 2024, pro forma) and an order backlog representing more than three years of projected sales, this new company will form a robust and competitive entity worldwide.

Ownership of the new company will be shared among the parent companies, with Airbus, Leonardo and Thales owning respectively 35%, 32.5% and 32.5% stakes. It will operate under joint control, with a balanced governance structure among shareholders.

Accelerating European leadership in space and ensuring its strategic autonomy, the new company aims to:

- Foster innovation and technological progress by harnessing joint R&D capabilities to be at the cutting edge of space missions in all domains, including services, and enhance operational efficiency, benefiting from economies of scale and optimized production processes.
- Increase competitiveness facing global players, reaching critical mass and ensuring Europe secures its role as a major player in the international space market.
- Lead innovative programmes to address evolving customer and European sovereign needs, national sovereign and military programmes, by providing integrated solutions for infrastructure & services in all major space domains, driving cooperation across nations and having the capability to invest.
- Strengthen the European space ecosystem by bringing more stability and predictability to the industrial landscape, amplifying opportunities for the benefit of European suppliers of all sizes.
- Create new opportunities for employee development through broader technical capabilities and the extensive multinational footprint of the new company.

Joint Statement

Guillaume Faury, Chief Executive Officer of Airbus, Roberto Cingolani, Chief Executive Officer and General Manager of Leonardo and Patrice Caine, Chairman & Chief Executive Officer of Thales, declared:
​“This proposed new company marks a pivotal milestone for Europe’s space industry. It embodies our shared vision to build a stronger and more competitive European presence in an increasingly dynamic global space market. By pooling our talent, resources, expertise and R&D capabilities, we aim to generate growth, accelerate innovation and deliver greater value to our customers and stakeholders. This partnership aligns with the ambitions of European governments to strengthen their industrial and technological assets, ensuring Europe’s autonomy across the strategic space domain and its many applications. It offers employees the opportunity to be at the heart of this ambitious initiative, while benefiting from enhanced career prospects and the collective strength of the three industry leaders.”

Next steps

Employee representatives of Airbus, Leonardo and Thales will be informed and consulted on this project according to the laws of involved countries and the collective agreements applicable at each parent company.

Completion of the transaction is subject to customary conditions including regulatory clearances, with the new company expected to be operational in 2027.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion. The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Airbus

Airbus pioneers sustainable aerospace for a safe and united world. The Company constantly innovates to provide efficient and technologically-advanced solutions in aerospace, defence, and connected services. In commercial aircraft, Airbus designs and manufactures modern and fuel-efficient airliners and associated services. Airbus is also a European leader in space systems, defence and security. In helicopters, Airbus provides efficient civil and military rotorcraft solutions and services worldwide.

About Leonardo

Leonardo is an international industrial group, among the main global companies in Aerospace, Defence, and Security (AD&S). With 60,000 employees worldwide, the company approaches global security through the Helicopters, Electronics, Aeronautics, Cyber & Security and Space sectors, and is a partner on the most important international programmes such as Eurofighter, JSF, NH-90, FREMM, GCAP, and Eurodrone. Leonardo has significant production capabilities in Italy, the UK, Poland, and the USA. Leonardo utilises its subsidiaries, joint ventures, and shareholdings, which include Leonardo DRS (71.6%), MBDA (25%), ATR (50%), Hensoldt (22.8%), Telespazio (67%), Thales Alenia Space (33%), and Avio (28.7%). Listed on the Milan Stock Exchange (LDO), in 2024 Leonardo recorded new orders for €20.9 billion, with an order book of €44.2 billion and consolidated revenues of €17.8 billion. Included in the MIB ESG index, the company has also been part of the Dow Jones Sustainability Indices (DJSI) since 2010.


FastID

Build for Scale: Fastly’s Principles of Distributed Decision Making and Self-healing Systems

Learn how Fastly's distributed decision-making and self-healing systems build a resilient, high-performance network. Discover key benefits and examples.

Wednesday, 22. October 2025

Trinsic Podcast: Future of ID

Chris Goh – Scaling Mobile IDs in Australia with ISO mDocs

In this episode of The Future of Identity Podcast, I’m joined by Chris Goh, former National Harmonisation Lead for Australia’s mobile driver’s licenses (mDLs) and the architect behind Queensland’s digital driver’s license. Chris played a pivotal role in driving national alignment across states and territories, culminating in the 2024 agreement to adopt ISO mDoc/mDL standards for mobile driver’s licenses and photo IDs across Australia and New Zealand.

Our conversation dives into Australia’s path from early blockchain experiments to a unified, standards-based approach - one that balances innovation, security, and accessibility. Chris shares lessons from real-world deployments, cultural challenges like “flash passes,” and how both Australia and New Zealand are building digital ID ecosystems ready for global interoperability.

In this episode we explore:

- Why mDoc became the foundation: Offline + online verification, PKI-based trust, and modular architecture enabling scalable, interoperable credentials.
- From Hyperledger to harmony: Lessons from early decentralized trials and how certification and conformance reduce fragmentation.
- Balancing innovation and standardization: Why agility and stability must coexist to keep identity ecosystems moving forward.
- The cultural realities of adoption: How flash passes, retail constraints, and public education shaped Australia’s rollout strategy.
- The road ahead: How national trust lists, privacy “contracts,” and delegated authority could define the next phase of digital identity in the region.

This episode is essential listening for anyone building or implementing digital credentials, whether you’re a policymaker, issuer, verifier, or technology provider. Chris offers a clear, grounded perspective on what it really takes to move from pilots to national-scale digital identity infrastructure.

Enjoy the episode, and don’t forget to share it with others who are passionate about the future of identity!

Learn more about Valid8.

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.

Listen to the full episode on Apple Podcasts or Spotify, or find all ways to listen at trinsic.id/podcast.


Veracity trust Network

How do we deal with “synthetic users” accessing our data?

“If AI agents can now behave like humans well enough to pass CAPTCHA, we’re no longer dealing with bots; we’re dealing with synthetic users. That creates real risk.”

The above statement, by Marcom Strategic Advisor and Investor Katherine Kennedy-White, in response to a LinkedIn post by Veracity’s CEO Nigel Bridges, shows the real concern over how sophisticated AI agents and bots are becoming.

The post How do we deal with “synthetic users” accessing our data? appeared first on Veracity Trust Network.

Tuesday, 21. October 2025

Indicio

Why Digital Travel Credentials provide the strongest digital identity assurance

The post Why Digital Travel Credentials provide the strongest digital identity assurance appeared first on Indicio.
Stop fraud before it starts. Indicio Proven closes the door on account takeovers, social engineering scams and deepfakes with secure, interoperable Digital Travel Credentials — the highest level of digital identity assurance and the easiest to use.

By Helen Garneau

Identity fraud is rising around the world, and travelers are starting to lose confidence that airlines and hotels can keep their personal data safe. More stories keep emerging of new kinds of scams that use AI-generated deepfakes, fake documents, and other digital tricks to fool identity systems. As airports and airlines depend more on facial recognition and other biometric tools, the risk of these attacks becomes a serious threat to the entire travel experience.

Think of how this plays out in real life. A thief uses a stolen credit card to buy an airline ticket and checks in with a forged passport. An impostor calls into an airline call center with a stolen password, takes over the victim’s account, and steals their miles. A criminal walks through a border checkpoint using false biometrics. Each case happens because identity cannot be verified in real time, directly from the traveler.

Digital Travel Credentials fix this problem.

A Digital Travel Credential — DTC — is a secure, digital version of a passport that aligns with specifications outlined by the International Civil Aviation Organization (ICAO), the global body responsible for standardizing physical passports.

Currently, there are two types of implementable DTCs: one issued by a government along with a physical passport (DTC-2), and one issued by an organization, such as an airline or hotel, using data derived from a valid passport and biometrically authenticated against the rightful passport holder (DTC-1).

The data in each DTC is digitally signed, which provides a way to cryptographically prove its origin (who issued it) and that it hasn’t been tampered with. The credential is held by the passport holder in a digital wallet on their mobile device, which provides two additional layers of binding (biometric/code to first unlock the device and then the wallet).
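For readers who want to see why a digital signature makes data tamper-evident, here is a minimal sketch using Node's built-in crypto module and an Ed25519 key pair. This is not the ICAO DTC encoding, which defines its own data groups and PKI hierarchy; the key pair and credential fields below are generated locally purely for illustration.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// In a real DTC the signing key belongs to the issuing authority; here we
// generate a throwaway Ed25519 pair purely for demonstration.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// A stand-in credential payload (illustrative fields, not the DTC data groups).
const credential = Buffer.from(
  JSON.stringify({ surname: "DOE", givenNames: "JANE", documentNumber: "X1234567" })
);

// The issuer signs the payload once, at issuance time.
const signature = sign(null, credential, privateKey);

// Any verifier holding the issuer's public key can confirm origin and integrity.
console.log(verify(null, credential, publicKey, signature)); // true

// Flip a single byte and verification fails: the data is tamper-evident.
const tampered = Buffer.from(credential);
tampered[0] ^= 0xff;
console.log(verify(null, tampered, publicKey, signature)); // false
```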

Here’s what makes the DTC a deepfake buster

First, you can’t re-engineer the cryptography using AI to alter the data. Second, each person is able to carry an authenticated biometric with them for verification. It’s like having a second you to prove who you are. The biometric template in the credential can be automatically compared with a liveness check, so authentication is not only instant, it doesn’t require the verifying party to store biometric data.
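The matching step can be pictured as comparing two embedding vectors: one stored in the signed credential and one computed from the liveness-checked capture. The sketch below is a toy illustration under that assumption; real deployments use trained face-recognition models and calibrated thresholds, and the 0.8 cutoff here is invented for the example.

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// templateFromCredential comes out of the signed DTC; liveEmbedding comes from
// the liveness-checked capture at the checkpoint. Neither needs to be stored
// by the verifier after the comparison.
function biometricMatches(templateFromCredential: number[], liveEmbedding: number[]): boolean {
  const MATCH_THRESHOLD = 0.8; // illustrative value, not a calibrated figure
  return cosineSimilarity(templateFromCredential, liveEmbedding) >= MATCH_THRESHOLD;
}
```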

The DTC completely transforms identity verification and fraud prevention in one go.

The upshot is that identity authentication no longer needs usernames, passwords, centralized data storage, multifactor authentication, or increasingly complex and expensive layers of security; instead, customers hold their data and present it for seamless, cryptographic authentication, which can be done anywhere using simple mobile software.

Their data is protected, you’re protected, and your operations can be automated and streamlined for better customer experiences and reduced cost.

The easy switch for implementing DTC credentials

Indicio Proven® is the most advanced market solution for issuing, holding, and verifying interoperable DTC-1 and DTC-2 aligned credentials, with options for the three leading Verifiable Credential formats: SD-JWT VC, AnonCreds, and mDL.

Proven is equipped with advanced biometric and document authentication tools, and our partnership with Regula enables us to validate identity documents from 254 countries and territories for issuance as Verifiable Credentials. It has a white-label digital wallet compatible with global digital identity standards and a mobile SDK for adding Verifiable Credentials to your apps.

It’s easy and quick to add to your existing biometric infrastructure, removing the need to rip and replace identity systems. It can effortlessly scale to country level deployment, and best of all, it’s significantly less expensive than centralized identity management.

Proven also follows the latest open standards, including eIDAS 2.0 and the EUDI framework, lowering regulatory risks, preserving traveler privacy, and opening markets that would otherwise be off limits.

Shut down fraud before it starts

Fraud should never be accepted as part of doing business. With Proven DTCs, airlines can defend against ticket and loyalty fraud before they even talk to a passenger. Airports can trust the traveler and the data they receive because it matches the credential and the verified government records. Hotels can check in guests with confidence, no passport photocopying or manual lookups required — and they have a simple and powerful way to reduce chargeback fraud.

Indicio Proven removes legacy vulnerabilities to identity fraud and closes the gaps between systems so identity can be trusted from start to finish. It protects revenue, safeguards customer relationships, and restores confidence across every stage of travel.

It’s time to stop fraud, simplify identity verification, and give travelers a secure, seamless experience with Indicio Proven.

Contact Indicio today and see how you can protect your business and your customers with Indicio Proven.


The post Why Digital Travel Credentials provide the strongest digital identity assurance appeared first on Indicio.


This week in identity

E64 - The Growing Impact of Digital Dependency

Keywords

AWS outage, digital dependency, business continuity, FIDO, authentication, passkeys, digital certificates, threat informed defense, false positives, cyber resilience


Summary

In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the recent AWS outage and its implications on digital dependency and business continuity. They explore the importance of disaster recovery plans and the evolving landscape of authentication technologies, particularly focusing on the FIDO Authenticate Conference. The conversation delves into the lifecycle of passkeys and digital certificates, emphasizing the need for threat-informed defense strategies and the challenges of managing false positives in security. The episode concludes with a call for better integration of systems and shared intelligence across the industry.


Chapters


00:00 Introduction and Global Outage Discussion

03:01 The Impact of Digital Dependency

06:00 Business Continuity and Disaster Recovery

09:10 FIDO Authenticate Conference Overview

16:09 Evolution of Authentication Technologies

21:45 The Lifecycle of Passkeys and Digital Certificates

29:59 Threat Informed Defense and False Positives

39:55 Conclusion and Future Considerations



Dock

Europe’s travel experiment just made digital identity real

This summer, Amadeus, a global travel technology company that powers many airline and airport systems, and Lufthansa, Germany’s flag carrier and one of Europe’s largest airlines, successfully tested the EU Digital Identity Wallet (EUDI Wallet) in real travel scenarios.

The test showed how credential-based travel could soon replace manual document checks.

During these tests, travellers could:

Check-in online by sharing verified ID credentials from their wallet with one click, instead of entering passport data manually.

Move through the airport by simply tapping their phone at check-in desks, bag drop machines, and boarding gates, rather than repeatedly showing physical documents.

The results point to a future where travel becomes smoother and more secure, thanks to verifiable credentials and privacy-preserving identity verification.
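Schematically, the one-click check-in exchange looks something like the sketch below. The field names are illustrative, not the exact EUDI Wallet or OpenID4VP wire format; the point is that the verifier requests only the claims it needs, and the wallet discloses exactly those, signed by the issuer.

```typescript
import { randomUUID } from "node:crypto";

// Schematic types only: illustrative shapes, not the EUDI/OpenID4VP wire format.
interface PresentationRequest {
  verifier: string;          // hypothetical label for the requesting party
  requestedClaims: string[]; // only the claims check-in actually needs
  nonce: string;             // binds the wallet's response to this session
}

interface PresentationResponse {
  claims: Record<string, string>; // the disclosed claims, and nothing else
  issuerSignature: string;        // proves origin and integrity of the claims
  holderBinding: string;          // proves the wallet holder approved the release
}

// The airline asks for exactly three claims instead of a full passport scan.
const request: PresentationRequest = {
  verifier: "airline-checkin",
  requestedClaims: ["family_name", "given_name", "document_number"],
  nonce: randomUUID(),
};
console.log(JSON.stringify(request, null, 2));

// The wallet would answer with a PresentationResponse containing those claims
// only, letting the airline verify the issuer's signature instead of re-typing
// passport data by hand.
```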


Elliptic

Why government agencies should own their blockchain intelligence data

Government agencies now have access to blockchain intelligence capabilities that were impossible just a few years ago. Where investigators once had to work within the constraints of third-party platforms designed for individual transaction tracing, they can now run comprehensive intelligence operations across complete blockchain datasets.


Spherical Cow Consulting

The People Problem: How Demographics Decide the Future of the Internet

“I’ve been having an intellectually fascinating time diving into Internet fragmentation and how it is shaped by supply chains more than protocols. There’s another bottleneck ahead, though, one that’s even harder to reroute: people.”

Innovation doesn’t happen in a vacuum. It requires engineers, designers, policy thinkers, and entrepreneurs. In other words, it needs human talent to build systems and set standards. And demographics are destiny when it comes to innovation. The places where populations are shrinking face not only economic strain but also a dwindling supply of innovators. The regions with young, growing populations could take the lead, but only if they can translate those numbers into participation in building tomorrow’s Internet.

Right now, the imbalance is striking. The countries that dominated the early generations of the Internet—the U.S., Europe, Japan, and now China—are either stagnating or shrinking. Meanwhile, countries with youthful demographics, especially across Africa and parts of South Asia, aren’t yet present in large numbers in the open standards process that defines the global Internet. That absence will shape the systems they inherit in the next 10-15 years.

This is the third in a four-part series of blog posts about the future of the Internet, seen through the lens of fragmentation.

First post: “The End of the Global Internet”
Second post: “Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet”
Third post: [this one]
Fourth post: “Can standards survive trade wars and sovereignty battles?” [scheduled to publish 28 October 2025]

A Digital Identity Digest: The People Problem: How Demographics Decide the Future of the Internet (podcast episode, 12:01)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The United States: a leaking talent pipeline

For decades, the U.S. thrived as the global hub of Internet development. Silicon Valley became Silicon Valley not just because of venture capital, but because talent from around the world came to build there. That was then.

Domestically, U.S. students continue to lag behind peers in international comparisons of math and science performance, as OECD’s PISA 2022 makes clear. Graduate programs in engineering and computer science still brim with energy, but overwhelmingly from international students. Those students often want to stay, yet immigration bottlenecks, capped and riotously expensive H-1B visas, and green card backlogs create real uncertainty about whether they can.

Even inside the standards world, there are warning signs. The IETF’s 2024 community survey showed concerns about age distribution, with long-time participants nearing retirement and too few younger contributors entering. If the U.S. cannot fix its education and immigration systems, its long-standing leadership in setting Internet rules will decline, not through policy shifts in Washington (which are not helping), but because of demographic erosion.

China: aging before it gets rich

China has built its growth story on a huge working-age population. That dividend is spent. Fertility hovers around 1.0, far below the replacement rate of 2.1, and the working-age population has already begun shrinking. By 2040, the elderly dependency ratio will climb sharply, with more pressure on pensions, healthcare, and younger workers.

The state has made automation and AI a cornerstone of its adaptation strategy. Investments in robotics and machine learning are designed to offset the loss of youthful labor. But an older population means fewer risk-takers, fewer startups, and more fiscal resources tied up in sustaining a rapidly aging society.

Japan’s experience offers a cautionary tale. Starting in the 1990s, it faced a similar contraction. Despite strong institutions and technological sophistication, growth stagnated. China risks repeating that path on a larger scale, and with less wealth per capita to cushion the fall.

Europe & Central Asia: slow contraction, unevenly distributed

Europe’s demographic transformation is a slow squeeze rather than a sudden cliff. According to the International Labour Organization’s 2025 working paper, the old-age ratio in Europe and Central Asia—the number of people over 65 compared to those of working age—will rise from 28 in 2024 to 43 by 2050. The region is expected to lose roughly ten million workers over that period.

The impact will not be uniform. Southern Europe is on track for some of the steepest shifts, with old-age ratios rising to two-thirds by 2050. By contrast, Central Asia maintains a relatively youthful profile, with projections of only 17 older adults per 100 workers. Policymakers across the continent are pushing familiar levers: encouraging older workers to stay employed longer, increasing women’s participation, and opening doors to migrants. But even with those adjustments, the fiscal weight of pensions, healthcare, and social protection will grow heavier, forcing innovation to rely more on productivity than population.

South Korea: the hyper-aged pioneer

South Korea is the most dramatic example of how quickly demographics can shift. The Beyond The Demographic Cliff report describes a “demographic cliff”: fertility has collapsed to just 0.7 children per woman, the lowest in the world. The working-age share, once 72 percent in 2016, will fall to just 56 percent by 2040. By 2070, nearly half the population will be over 65.

Unlike the U.S. or Germany, South Korea has little immigration to soften the decline; only about five percent of the population is foreign-born. Despite spending trillions of won since 2005 on pronatalist programs, fertility has only dropped further. The government has little choice but to adapt. With one of the world’s highest industrial robot densities, Korea is leaning heavily on automation and robotics. At the same time, the “silver economy” is becoming a growth engine, with eldercare, health technology, and age-friendly industries gaining traction.

The sheer speed of Korea’s shift is staggering. What took France nearly two centuries—from 7 percent to 20 percent of the population being over 65—took Korea less than thirty years. That compressed timeline means Korea is a test case for what happens when demographics move faster than institutions can adapt.

Africa: the continent of the future

While the industrialized world contracts, Africa surges. As a World Futures article makes clear, Tropical Africa alone will account for much of the world’s population growth this century. By 2100, Africa will be the largest source of working-age population in the world.

This demographic wave could be transformative. Africa holds vast reserves of cobalt, lithium, and other rare earths critical to green technologies. Combined with a youthful workforce, that could give the continent a central role in shaping the next century’s innovation. But the risks are real: education systems remain uneven, governance is fragile in many states, and climate pressures could destabilize growth. A demographic dividend only pays out if paired with investment in education and institutions.

Still, Africa is where the people will be. Whether or not it becomes a driver of global innovation depends on choices made now by African governments, but also by those investing in the continent’s infrastructure and industries.

If you’d rather have a notification when a new blog is published rather than hoping to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Who shows up in the standards process

And here is the connection to Internet fragmentation: the regions with the fastest-growing, youngest populations are not yet shaping the standards process in any significant way.

The W3C Diversity Report 2025 shows that governance seats are still dominated by North America, Europe, and a slice of Asia. Africa and South Asia barely register. ISO admits the same problem: while more than 75 percent of its members are from developing countries, many lack the resources to participate actively. That’s why ISO has launched programs such as its Action Plan for Developing Countries and capacity-building initiatives for technical committees. Membership may be global, but influence is not.

Participation isn’t just about fairness. It determines the rules that future systems will follow. If youthful regions aren’t in the room when those rules are written, they’ll inherit an Internet designed elsewhere, reflecting other priorities. In the meantime, outside players are shaping the infrastructure. China is investing heavily in African digital and industrial networks, creating regional value chains that may set defaults long before African voices appear in open standards bodies.

Cross-border interdependence

Even if the Internet fragments politically or technologically, demographics will keep it globally entangled. Aging countries will depend on migration and remote work links to tap youthful labor pools. Younger countries will increasingly provide the engineers, developers, and operators who sustain platforms. Standards bodies may eventually shift to reflect new population centers, but the lag between demographic change and institutional presence can be decades.

This interdependence means that fragmentation won’t create neatly separated Internets. Instead, we’ll see overlapping systems shaped partly by who has the people and partly by who invests in them.

Destiny is in the demographics

Demographics don’t move quickly, but they do move inexorably. The U.S. risks losing its edge through education and immigration failures. China is aging before it fully secures prosperity. Europe faces a slow decline. South Korea is already living the reality of a hyper-aged society. Africa is the wild card, with the potential to become the global engine of innovation if it can turn population growth into a dividend rather than a liability.

The stage is clearly set: the regions with the people to build tomorrow’s Internet aren’t yet present in the open standards process. Others, especially China, are already investing heavily in shaping what those regions will inherit.

If you want to know what kind of Internet we’ll have in the decades to come, don’t just look at protocols or supply chains. Watch the people. Watch where they are, and who is investing in them. That’s where the future of innovation lies.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:30] Welcome back to the Digital Identity Digest! I’m Heather Flanagan, and if you’ve been following this series, you’ll remember that we’ve been exploring Internet fragmentation from multiple angles.

In this episode, we’re zooming out once again—because even when the protocols align perfectly and the chips get made, there’s still one more piece of the puzzle that determines the Internet’s future: people.

More precisely, demographics.

Why Demographics Matter

[00:01:17] Who shows up to build tomorrow’s systems?
Who are the engineers, the designers, the startup founders?
Which regions have enough young people to sustain innovation—and which don’t?

This isn’t just about the present moment. It’s about what happens in 15 years.

[00:01:35] The countries that built and shaped the early Internet—the U.S., the EU, Japan, and more recently China—are all aging. Some are even shrinking.

Meanwhile, regions with the youngest and fastest-growing populations, such as Africa and South Asia, are not yet fully represented in the rooms where global standards are written. And that gap matters deeply for the Internet we’ll all inherit.

The United States: Talent Pipeline Challenges

[00:02:07] For decades, the U.S. has been the global hub for Internet innovation. Silicon Valley thrived not just on venture capital, but because brilliant people from around the world came to build there.

[00:02:20] Yet, the domestic talent pipeline is starting to leak:

U.S. students lag behind international peers in math and science.
Graduate programs remain strong, but most are filled with international students.
Immigration backlogs and visa caps make it harder for those graduates to stay.

[00:02:44] Even inside the standards community, demographics are aging. The IETF’s own survey shows long-time contributors retiring and not enough young participants stepping in.

If the U.S. can’t fix its education and immigration systems, its leadership won’t decline due to competition—it’ll slip because there aren’t enough people to carry the work forward.

China: From Growth to Grey

[00:03:10] China’s story is different—but no less stark. For decades, its explosive growth came from a huge working-age population.

[00:03:19] That demographic dividend is over. Fertility rates have fallen to barely one child per woman. The working-age population peaked in 2015 and has been shrinking since.

[00:03:33] China’s solution has been to automate—investing heavily in robotics, AI, and machine learning.

But as populations age, societies often shift resources away from risk-taking. An older economy tends to:

Produce fewer startups
Take fewer risks
Spend more on pensions and healthcare

Japan’s experience offers a cautionary example—and China risks following it on a larger scale and with less wealth per person to cushion the impact.

Europe: Managing a Slow Decline

[00:04:24] Europe faces a quieter version of the same story.

[00:04:41] By 2050, the number of older adults per 100 working-age adults in Europe and Central Asia is expected to rise from 28 to 43. That means millions fewer workers and millions more retirees.

Europe’s strategy includes:

Keeping older workers employed longer
Expanding women’s participation in the workforce
Opening the door to migrants

However, the basic reality remains—fewer young people are entering the workforce. Innovation will depend more on productivity gains than on population growth.

South Korea: The Hyper-Aged Future

[00:05:12] South Korea offers a glimpse into the world’s most rapidly aging society.

[00:05:14] Fertility has collapsed to 0.7 children per woman, the lowest in the world. By 2070, nearly half the population will be over 65.

Unlike the U.S. or Germany, Korea has almost no immigration to balance the decline. Despite huge government investments in pronatalist programs, fertility continues to fall.

Korea is adapting through:

High robot density and automation
Growth in the silver economy — industries around elder care, health tech, and age-friendly products

The speed of this shift is astonishing: what took France 200 years, Korea did in less than 30. It’s now a laboratory for adaptation—figuring out how policy and technology respond when demographics move faster than politics.

Africa: The Continent of the Future

[00:06:28] While industrialized nations age, Africa is booming.

By the end of this century, Africa will be the largest source of working-age people in the world.

Its advantages are immense:

Rapid population growth
Rich reserves of critical minerals (cobalt, lithium, rare earths)
Expanding urbanization and education

However, these opportunities are balanced by real challenges:

Under-resourced education systems
Fragile governance
Climate pressures

[00:07:22] If managed well, Africa could become the innovation hub of the late 21st century. But much depends on where investment originates—within Africa or from abroad—and whose values and standards shape the technologies that follow.

Who’s in the Room?

[00:07:54] This is where demographics meet Internet fragmentation directly.

Regions with the youngest populations are still underrepresented in open standards bodies.

The W3C’s diversity reports show most seats are still held by North America, Europe, and parts of Asia. Africa and South Asia barely register. ISO has many developing-country members, but few can participate actively.

[00:08:36] Membership may be broad, but influence is not.

And that absence matters—because standards define power. They determine how the Internet functions, what’s prioritized, and who benefits.

If youthful regions aren’t in the room when rules are written, they’ll inherit an Internet designed elsewhere.

Looking Ahead

[00:09:02] Meanwhile, China is filling that vacuum—investing heavily in African digital infrastructure and shaping defaults long before African voices are fully present in global standards.

Even as the Internet fragments politically and technologically, demographics tie us together.

Aging nations will rely on migration and remote work. Younger countries will provide the engineers and operators sustaining global platforms. Standards institutions may eventually reflect new population centers—but change lags behind demographic reality.

[00:09:43] The people who build the Internet of the future will increasingly come from Africa and Southeast Asia—while the institutions writing the rules still reflect yesterday’s demographics.

Wrapping Up

[00:10:00] Demographics move slowly—but they are relentless. You can’t rush them.

The U.S. risks losing its edge through education and immigration challenges. China is aging before securing long-term prosperity. Europe faces a gradual, gentle decline. South Korea is already living the reality of hyper-aging. Africa remains the wild card—its youth could define the next Internet if it can translate population growth into participation and policy.

[00:10:57] So, if you want to glimpse the Internet’s future, don’t just look at protocols or supply chains. Look at the people—where they are, and who’s investing in them. That’s where innovation’s future lies.

Closing Notes

[00:11:09] Thanks for listening to this week’s episode of the Digital Identity Digest.

If this discussion helped make things clearer—or at least more interesting—share it with a friend or colleague. Connect with me on LinkedIn (@lflanagan), and if you enjoyed the show, please subscribe and leave a review on your favorite podcast platform.

You can always find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged, and keep the conversations going.

The post The People Problem: How Demographics Decide the Future of the Internet appeared first on Spherical Cow Consulting.


Metadium

Metadium Explorer — Database Upgrade Notice

🛠 Metadium Explorer — Database Upgrade Notice

To improve the stability and performance of Metadium Explorer, we will be performing a database upgrade as outlined below.

📅 Schedule

October 23, 2025 (Thu) 10:30–11:30 (KST)
Please note that the end time may vary depending on the progress of the upgrade.

🔧 Update Details

DB minor version upgrade

⚠️ Notice

During the upgrade window, Metadium Explorer will be temporarily unavailable. Users will not be able to access services such as transaction history, block information, and related features.

We appreciate your understanding as we work to improve the reliability of our services.
Thank you. The Metadium Team.


🛠 Metadium Explorer — Database Upgrade Notice was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

Social or Enterprise: Which Connection is Right?

Understand the differences between Social and Enterprise Connections to choose the right identity provider for your application.

FastID

Your API Catalog Just Got an Upgrade

Discover, monitor, and secure your APIs with Fastly API Discovery. Get instant visibility, cut the noise, and keep your APIs secure and compliant.

Monday, 20. October 2025

1Kosmos BlockID

What Is Decentralized Identity? Complete Guide & How To Prepare

The post What Is Decentralized Identity? Complete Guide & How To Prepare appeared first on 1Kosmos.

Spruce Systems

Modernizing BSA and AML/CFT Compliance with Verifiable Digital Identity

In our U.S. Treasury RFC response, we propose an Identity Trust model to modernize AML/CFT compliance—delivering transparency, accountability, and trust for regulators and institutions, without sacrificing innovation or privacy.
Thank you to Linda Jeng and Elizabeth Santucci for their instrumental contributions to the analysis and recommendations in our U.S. Treasury comment letter.

The financial system’s integrity, and the public trust it depends on, can no longer rest on paper-era compliance. For more than fifty years, the Bank Secrecy Act (BSA) has guided how institutions detect and report illicit activity. Yet as the economy digitizes, this framework has become a drag on both effectiveness and inclusion. The cost of compliance has soared to $59 billion annually, while less than 0.2% of illicit proceeds are recovered. Community banks spend up to 9% of non-interest expenses on compliance; millions of Americans remain unbanked because the system is too manual, too fragmented, and too dependent on outdated verification models.

SpruceID’s response to the U.S. Treasury’s recent Request for Comment on Innovative Methods to Detect Illicit Activity Involving Digital Assets (TREAS-DO-2025-0070-0001) outlines a path forward. Drawing on our real-world experience building California’s mobile driver’s license (mDL) and powering state-endorsed verifiable digital credentials in Utah, we propose a model that unites lawful compliance, privacy protection, and public trust.

Our framework, called the Identity Trust model, shows how verifiable digital credentials and privacy-enhancing technologies can make compliance both more effective for enforcement and more respectful of individual rights.

Our proposal is not to expand surveillance or broaden data collection, but to make compliance more precise. The Identity Trust model is designed to be applied only where existing laws such as the BSA and AML/CFT rules require verification or reporting. Today’s compliance systems often require collecting and storing more personal information than is strictly necessary, which increases costs and risks for institutions and customers alike. By enabling verifiable digital credentials and privacy-enhancing technologies, our model ensures institutions can fulfill their obligations with higher assurance while minimizing the amount of personal data collected, stored, and exposed. This shift replaces excess data retention with cryptographic proofs, delivering better outcomes for regulators, financial institutions, and individuals alike.

This framework proposes regulation for the digital age, using the same cryptographic assurance that already secures the nation’s payments, passports, and critical systems to bring transparency, precision, and fairness to financial oversight.

A System Ready for Reform

Compliance with BSA and AML/CFT rules remains rooted in outdated workflows: identity verified with a physical ID, information stored in readable form, and personal data held centrally. These methods have become liabilities. They drive up costs, create honeypots of data for breaches, and encourage “de-risking” that locks out lower-income and minority communities.

The technology to fix this exists today. Mobile driver’s licenses (mDLs) are live in more than seventeen U.S. states, accepted by the TSA at over 250 airports. Utah’s proposed State-Endorsed Digital Identity (SEDI) approach, detailed in Utah Code § 63A-16-1202, already provides a framework for trusted, privacy-preserving digital credentials. Federal pilots, such as NIST’s National Cybersecurity Center of Excellence (NCCoE) mobile driver’s license initiative, are proving these models ready for financial use.

What’s missing is regulatory recognition: the clarity that these trusted credentials, when properly verified, fulfill legal identity verification and reporting obligations under the BSA.

The Identity Trust Model

The Identity Trust model offers a blueprint for modernizing compliance without the need for new legislation. It allows regulated entities, such as banks or state- or nationally chartered trusts, to issue and rely on pseudonymous, cryptographically verifiable credentials that prove required attributes (such as sanctions screening status or citizenship) without disclosing unnecessary personal data.

The framework operates in four stages:

1. Identifying: A regulated entity (the Identity Trust, of which there can be many) is responsible for verifying an individual’s identity using digital and physical methods, based on modern best practices such as NIST SP 800-63-4A for identity proofing. Once verified, the trust issues a pseudonymous credential to the individual and encrypts their personal information. Conceptually, the unlocking key is split into three parts: one held by the individual, one by the Trust, and one by the courts, with any two sufficient to unlock the record (roughly, a “two-of-three key threshold”; a sketch of such a scheme follows this list).

2. Transacting: When the individual conducts financial activity, they present their pseudonymous credential. Transactions are then tagged with unique one-time-use identifiers that prevent linking activity across contexts, even if collusion were attempted (see the second sketch below). Each identifier carries a cryptographically protected payload that can only be unlocked with the conceptual two-of-three key threshold. Entities and decentralized finance protocols processing the identifiers can cryptographically verify that an identifier was correctly issued by an Identity Trust and remains valid.

3. Investigating: If law enforcement or regulators demonstrate lawful cause, the court and the Identity Trust together apply their keys to reach the two-of-three threshold, authorizing access to the specific, limited data justified by the circumstances. The Identity Trust must have a robust governance framework for granting access that balances privacy and due-process rights with law enforcement needs through judicial orders. Once the two keys are combined, the vault containing the relevant identity information, if it exists, can be decrypted, revealing the individual’s information in a controlled and auditable manner, including correlating other transactions depending on the level of access granted by the lawful request. Alternatively, the individual can combine their own key with the Identity Trust’s key to view their entire audit log and create cryptographic proofs of their actions across transactions.

4. Monitoring: The Identity Trust runs continuous checks against suspicious-actor and sanctions lists in a privacy-preserving manner, under approved policies for method and interval, with the auditable logs protected and encrypted so that only the individual or duly authorized investigators can work with the Identity Trust to access the plaintext. Individuals may also request attribute attestations from the Identity Trust, for example that they are not on a suspicious-actor or sanctions list, or attestations for credit checks.
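To make the “two-of-three key threshold” concrete, here is a minimal sketch using Shamir secret sharing over a prime field. This is an illustration of the concept, not SpruceID’s implementation; the share assignments, field size, and key handling are assumptions, and a real system would rely on a vetted cryptographic library plus authenticated encryption around the protected record.

```python
# Minimal 2-of-3 Shamir secret sharing sketch (illustrative only; a real
# deployment would use a vetted library, not hand-rolled field math).
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough to embed a 16-byte key


def split_2_of_3(secret: int) -> list[tuple[int, int]]:
    """Split `secret` into 3 shares; any 2 of them reconstruct it."""
    a1 = secrets.randbelow(PRIME)  # random degree-1 coefficient
    # Evaluate f(x) = secret + a1*x at x = 1, 2, 3 (f(0) is the secret).
    return [(x, (secret + a1 * x) % PRIME) for x in (1, 2, 3)]


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any two shares."""
    (x1, y1), (x2, y2) = shares
    inv = pow(x2 - x1, -1, PRIME)  # modular inverse of (x2 - x1)
    return (y1 * x2 - y2 * x1) * inv % PRIME


record_key = secrets.randbelow(PRIME)  # stand-in for the record encryption key
individual, trust, court = split_2_of_3(record_key)

assert reconstruct([trust, court]) == record_key       # lawful-access path
assert reconstruct([individual, trust]) == record_key  # self-audit path
```

With the shares held by the individual, the Trust, and the courts, both paths the model describes work without giving any single party unilateral access to the encrypted record.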

This structure embeds accountability and due process into the architecture itself. It enables lawful access when required and prevents unauthorized surveillance when not. Crucially, the model fits within existing AML authority, leveraging the same legal and supervisory frameworks that already govern banks, trust companies, and credential service providers. 
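The one-time-use identifiers from the Transacting stage can be sketched just as briefly. The construction below is hypothetical, an HMAC over a per-credential secret, a counter, and a context label; it captures only the unlinkability property, not the issuer verifiability the model also calls for, which would additionally require an Identity Trust signature over each identifier.

```python
# Hypothetical sketch of unlinkable one-time transaction identifiers:
# each tag is derived from a per-credential secret, a counter, and a
# context label, so tags cannot be correlated across contexts.
import hashlib
import hmac
import secrets

cred_secret = secrets.token_bytes(32)  # held in the holder's wallet


def one_time_id(counter: int, context: bytes) -> str:
    msg = context + b"|" + counter.to_bytes(8, "big")
    return hmac.new(cred_secret, msg, hashlib.sha256).hexdigest()[:32]


print(one_time_id(1, b"exchange-A"))  # fresh tag for the first transaction
print(one_time_id(2, b"exchange-A"))  # differs even at the same venue
print(one_time_id(1, b"protocol-B"))  # uncorrelated across contexts
```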

Policy Recommendations for Treasury

SpruceID’s recommendations to Treasury and FinCEN focus on aligning policy with existing technology, ensuring that the U.S. remains a global leader in both compliance and digital trust.

Each recommendation below pairs a request for consideration with the reasoning and impact behind it.

1. Recognize verifiable digital credentials (VDCs) issued by many acceptable sources as valid evidence under Customer Identification Program (CIP) and Customer Due Diligence (CDD) obligations, including as “documentary” verification methods when appropriate.

Treasury and FinCEN should interpret 31 CFR § 1020.220 (and corresponding CIP rules and guidance) to include verifiable digital credentials if they can meet industry standards, such as a baseline of National Institute of Standards and Technology (NIST) SP 800-63-4 Identity Assurance Level 2 (IAL2) identity verification or higher, issued directly from government authorities, or through reliance upon approved institutions or identity trusts.

These verifiable digital credentials (VDCs), such as those issued pursuant to the State-Endorsed Digital Identity (SEDI) approaches, should be treated as “documentary” evidence where appropriate. The principle of data minimization should become a pillar of financial compliance, with VDC-enabled attribute verification preferred over the sharing of unnecessary personally identifiable information (PII), such as static identity documents, wherever possible.


Current CIP programs largely presume physical IDs, limiting innovation and remote onboarding, even though the statute does not prescribe any particular medium or security mechanism.

Verifiable digital credentials issued by trusted authorities provide cryptographically proven authenticity and higher assurance against forgery or impersonation, to better fulfill the aims of risk-based compliance management programs.

Recognizing VDCs as documentary evidence would enhance verification accuracy, reduce compliance costs, and align U.S. practice with FATF Digital ID Guidance (2023) and EU eIDAS 2.0, promoting global interoperability.

Attribute-based approaches to AML, such as “not-on-sanctions-list” or “US-person,” should be preferred whenever possible as they can effectively manage risks without the overcollection of PII data, avoiding a “checkpoint society” riddled with unnecessary ID requirements.

2. Permit financial institutions to rely on VDCs issued by other regulated entities, identity trusts, or accredited sources via verified real-time APIs for AML/CFT compliance.

Treasury and FinCEN should authorize institutions to accept credentials and attestations from peer financial institutions or identity trust networks when those issuers meet assurance and audit standards.

Congress should further consider the addition of a new § 201(d) to the Digital Asset Market Structure Discussion Draft (Sept. 2025) clarifying Treasury’s authority to recognize and accredit digital-identity and privacy-enhancing compliance frameworks.

While current CIP programs still assume physical ID presentation, the underlying statute is technology neutral and does not mandate any specific medium or security mechanism. Recognizing VDCs can modernize onboarding by reducing costs and friction, improving AML data quality and transparency, and enabling faster, more collaborative investigations across institutions and borders—all while minimizing data-collection risk.

Statutory clarity ensures that Treasury’s modernization efforts rest on a durable, technology-neutral foundation. This amendment would future-proof the U.S. AML/CFT regime, align it with G7 digital-identity roadmaps, and strengthen U.S. leadership in global digital-asset regulation.

3. Permit privacy-enhancing technologies (PETs) to meet verification and monitoring obligations.

Treasury should issue interpretive guidance or rulemaking confirming that zero-knowledge proofs, pseudonymous identifiers, and multi-party computation may be used for CIP, CDD, and Travel-Rule compliance if equivalent assurance and auditability are maintained.


PETs enable institutions to prove AML/CFT compliance without exposing underlying PII, minimizing data breach and insider risk exposure while maintaining verifiable oversight.

Recognizing PETs would modernize compliance architecture, lower data-handling costs, and encourage innovation consistent with global privacy and financial-integrity standards.
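As one illustration of how a PET can meet a verification obligation without exposing underlying PII, here is a greatly simplified selective-disclosure sketch in the spirit of SD-JWT: the issuer signs only salted hashes of claims, so the holder can later reveal a single attribute such as not_on_sanctions_list while the rest stay hidden. The claim names and the HMAC stand-in signature are assumptions for illustration; production systems use standardized credential formats and real digital signatures.

```python
# Simplified selective disclosure: sign salted claim digests at issuance,
# then disclose and verify one attribute. The HMAC "signature" is only a
# stand-in for a real issuer signature.
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer signing key


def claim_digest(salt: bytes, name: str, value) -> str:
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()


# Issuance: salt and hash every claim, then sign only the digest list.
claims = {"name": "Jane Doe", "us_person": True, "not_on_sanctions_list": True}
salts = {k: secrets.token_bytes(16) for k in claims}
digests = sorted(claim_digest(salts[k], k, v) for k, v in claims.items())
signed = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), "sha256").hexdigest()

# Presentation: reveal one attribute plus its salt and the signed digest list.
name, value, salt = "not_on_sanctions_list", True, salts["not_on_sanctions_list"]

# Verification: the disclosed claim's digest must appear in the signed list,
# and the signature over the list must verify.
assert claim_digest(salt, name, value) in digests
expected = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), "sha256").hexdigest()
assert hmac.compare_digest(signed, expected)
print(f"verified: {name} = {value}")
```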

4. Modernize the Travel Rule to enable verifiable digital credential-based information transfer.

Treasury should amend 31 CFR § 1010.410(f) or issue guidance allowing originator/beneficiary data to be transmitted via cryptographically verifiable credentials or proofs instead of plaintext PII.

The current Travel Rule framework was built for wire transfers, not blockchain systems. Verifiable digital credentials can carry or attest to required information with integrity, selective disclosure, and traceability.

This approach preserves law-enforcement visibility while protecting privacy, ensuring interoperability with FATF Recommendation 16 and global Virtual Asset Service Providers (VASPs).

5. Establish exceptive relief for good-faith reliance on accredited identity trust, VDC, and Privacy-Enhancing Technology (PET) systems.

Treasury should use its § 1020.220(b) rulemaking authority to provide exceptive relief deeming institutions compliant when they rely on Treasury-accredited credentials or PET frameworks meeting defined assurance standards.

Institutions adopting accredited compliance tools should not face enforcement liability for third-party system errors beyond their control. Exceptive relief would provide regulatory certainty and clear boundaries of accountability.

Exceptive relief incentivizes the adoption of privacy-preserving identity systems such as identity trusts, reducing costs while strengthening overall compliance integrity.

6. Leverage NIST NCCoE collaboration for technical pilots and standards.

Treasury and FinCEN should partner with NIST’s National Cybersecurity Center of Excellence (NCCoE) Digital Identities project to pilot mDLs, VDCs, and interoperable trust registries for CIP and CDD testing.

The NCCoE provides standards-based prototypes (e.g., NIST SP 800-63-4 and ISO/IEC 18013-5/-7 mDL) that validate real-world feasibility and assurance equivalence.

Collaboration ensures technical soundness, interagency alignment, and rapid deployment of privacy-preserving digital-identity frameworks.

7. Direct FinCEN to engage proactively with industry on the adoption of advanced technologies that enhance AML compliance, investigations, and privacy protection.

Treasury should issue formal direction or guidance requiring FinCEN to establish an ongoing public-private technical working group with industry, academia, states, and standards bodies to pilot and evaluate advanced compliance technologies.

Continuous engagement with the private sector ensures that FinCEN’s rules keep pace with innovation and that compliance tools remain effective, privacy-preserving, and economically efficient.

This collaboration would strengthen AML/CFT investigations, reduce false positives, and alleviate the compliance burden on financial institutions while upholding privacy and data-protection standards.

The Path Forward

Time and again, regulatory compliance challenges have sparked the next generation of financial infrastructure. EMV chips transformed fraud detection; tokenization improved payment security; now, verifiable identity can redefine AML/CFT compliance.

By replacing static data collection with cryptographic proofs of compliance, regulators gain better visibility, institutions reduce cost, and individuals retain control over their personal information. The transformation is not solely technological—it’s institutional: from data collection to trust verification.

SpruceID’s aim is to build open digital identity frameworks that empower trust—not just between users and apps, but between citizens and institutions. Our experience powering government-issued credentials demonstrates that strong identity assurance and privacy can coexist. In our response to the Treasury, we’ve shown how those same principles can reshape AML/CFT for the digital age. But the work is far from finished.

Over the coming months, SpruceID will release additional thought pieces on how public agencies and private institutions can collaborate to advance trustworthy digital identity, from privacy-preserving regulatory reporting to unified standards.

We invite policymakers, regulators, technologists, and financial leaders to join us in dialogue and in action. Together, we can build a compliance framework that is lawful, auditable, and worthy of public trust.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Ockto

Customer management with source data: from obligatory routine to valuable check-in

In episode 14 of the Data Sharing Podcast, host Caressa Kuk speaks with Gert-Jan van Dijke and Jeroen van Winden (Ockto) about customer management in the financial sector. Because once a customer is on board, the real work has only just begun.


Kin AI

The Kinside Scoop 👀 #16

Kin's biggest update yet

Hey folks 👋

Big one today.

Like we hinted in the last edition, Kin 0.6 is rolling out. This is Kin’s biggest update ever, and it’s packed.

Full rollout begins tomorrow (Tuesday, October 21, 2025), but we’ve got some sneak peeks for you.

We also have a super prompt based around making the most out of the new opportunities this update provides - so make sure you read to the end.

What’s New With Kin 🚀

Meet your advisory board 🧠

You’ve probably seen them drifting into chat recently - little hints of Harmony, Sage, and the rest. Now the full board of five arrives, each with expertise in advising on a particular topic.

Sage: Career & Work

Aura: Values & Meaning

Harmony (Premium only): Relationships

Pulse (Premium only): Sleep & Energy

Ember (Premium only): Social

Each one brings a different lens on your life, but all pull insight from your Journal entries, conversations, and memories.

Conversation Starters 💬

Every advisor’s chat screen now includes some personalized, context-aware starters - not just to make beginning a conversation easier, but to make remembering the things you wanted to talk about as effortless as possible.

Memory, re-engineered (finally) 📂

It feels like we’re always alluding to this - but now it’s here.

Kin’s Memory is now 5× more accurate when recognizing and extracting memories from conversations.

Advisors can also now search across all of your memories, Journal entries, and conversations, so they can build an understanding of context quickly.

All of this means that no matter which advisor you speak with, Kin is much more able to pull the relevant information from its improved memory structure - so you get better, smarter, more relevant advice from every advisor.

We’ve also beefed up the Memory UI. On top of the classic memory graph, you can now see what Kin knows about you - as well as the organizations and people you’re connected to.

And each of these people/organizations/places now have their own Entity pages, where you can see, edit, and add to what Kin has collected about them from your conversations.

You can even finally search memories for key words and associations!

See your progress 📊

There’s a brand new Stats page that visualizes your growth with Kin.

You can see a breakdown of usage stats and Memory types, so you can see what you’re talking about a lot, and where you and your Kin might have some blind spots.

Journaling, cleaned up 📝

Based on all your feedback, we’ve finished rebuilding the Journal from the ground up.

There’s a brand-new, simplified UI to make daily journaling easier than ever.

Premium users also unlock custom journal templates, perfect for capturing anything from gratitude logs to tough feedback moments.

New onboarding (for everyone) 🔐

Next time you open Kin, you’ll be prompted to sign in with Apple, Google, or Email.


This makes onboarding smoother and syncing easier (rumours of a desktop version abound), and lays the groundwork for future features.

But don’t worry: your data hasn’t moved an inch.

It still lives securely with you, on your device.

We’ll share a detailed write-up soon (as promised), but the short version is: simpler sign-in, same privacy-first design.

Premium (by request!) ⭐

You asked. We built it.

Premium unlocks the full Kin experience, and extends existing Free features so you can make the most of your Kin.

If you join Premium, you’ll get:

All 5 advisors (Harmony, Pulse, Ember + the two free advisors, Sage and Aura)

Unlimited text messages

1 hour of voice per day

Custom journal templates

Premium is currently $20/month - and there’s a discount if you go for 3 months.

If you don’t want to upgrade though, don’t fret. The Free tier is going nowhere: Premium is for power users who want the full advisor board and voice time.

When? 🗓️

Rollout starts Tuesday, October 21, 2025. That’s tomorrow, if you’re reading this as it goes out!


Expect updates over the following week as we make sure everything runs smoothly. Speaking of…

Talk to us!🗣

This is the biggest change Kin has ever gone through. It’s our largest step toward a 1.0 release yet - and we want to make sure we’re heading in the right direction before we get too far.

The KIN team can be reached at hello@mykin.ai for anything, from feedback on the app to a bit of tech talk (though support@mykin.ai is better placed to help with any issues).

You can also share your feedback in-app. Just screenshot to trigger the feedback form.

But if you really want to get involved, the official Kin Discord is the best place to talk to the Kin development team (as well as other users) about anything AI.

We have dedicated channels for Kin’s tech, networking users, sharing support tips, and for hanging out.

We also regularly run three casual calls every week - and we’d love for you to join them:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

We updated Kin so that it can better help you - help us make sure that’s what it does!

Our current reads 📚

No new Slack screenshot this edition - we’ve been too busy to share new articles recently!

Article: Google announced DeepSomatic, an open-source cancer research AI
READ - blog.google

Article: Meta AI glasses fuel Ray-Ban maker’s best quarterly performance ever
READ - reuters.com

Article: Google launches Gemini Enterprise to make AI accessible to employees
READ - cloud.google.com

Article: How are MIT entrepreneurs using AI?
READ - MIT News

This edition’s super prompt 🤖

This time, your chosen advisor can help you answer the question:

“How do I make the most of new opportunities?”

Try prompt in Kin

Once the update comes out tomorrow, try hitting the link with different advisors selected, and get a few different viewpoints!

You are Kin 0.6 (and beyond) 👥

Without you, Kin wouldn’t be anything. We want to make sure you don’t just know that, but feel it.

So, please: get involved. Chat in our Discord, email us, or even just shake the app to get in contact with anything and everything you have to say about Kin.

Most importantly: enjoy the update!

With love,

The KIN Team


Ockto

Customer management with source data: from obligatory routine to valuable check-in

Banks have made major strides in recent years in digitizing onboarding. Bringing new customers on board keeps getting easier. But when it comes to customer management, keeping customer data current over the life of the relationship, the sector lags behind. And that is exactly where the pressure is mounting, from regulators and from the duty of care.


Metadium

Metadium 2025 Q3 Activity report


Dear Community,

Metadium made meaningful strides in the third quarter of 2025. We deeply appreciate your continued support and are pleased to share the key achievements and updates from Q3.

Summary

Metadium’s DID and NFT technologies were applied to support Korea’s first ITMO (Internationally Transferred Mitigation Outcomes) certified carbon reduction project.
The AI conversation partner service “Daepa” officially launched, using Metadium DID for identity management in the backend.
The Metadium Node Partnership Program (2025–2027) officially began, with a total of 9 partners operating nodes across the network.

Technology and Ecosystem Updates

Q3 Monthly Transactions

From July to September 2025, a total of 586,608 transactions were processed, and 42,979 DID wallets were created.

ITMO-Certified Carbon Reduction Project

The VERYWORDS project, officially recognized by the Cambodian government for reducing 680,000 tons of greenhouse gas emissions, utilized Metadium’s DID and NFT technologies as core infrastructure.

Metadium DIDs were issued to electric motorcycle recipients, enabling participant identification and tracking. The reduction records were issued through a Metadium-based point system and uniquely verified via NFTs. This marks Korea’s first officially approved ITMO case and a significant milestone demonstrating Metadium’s potential in global environmental cooperation.

For more details, please click here.

Metadium DID Integrated into “Daepa” AI Service

The AI relationship training service “Daepa” has been officially launched with Metadium DID integrated into its backend identity management system.

Users do not directly interact with the DID system, but all interactions are managed through unique DIDs. The DID system will be leveraged for future expansion into trust-based services, including points, rewards, and user-to-user connections. This represents a case of DID and AI technology convergence, showcasing the diverse applicability of Metadium’s DID framework.

For more details, please click here.

Transition to the 2025–2027 Metadium Node Partnership

As of September 2025, Metadium’s Node Partnership Program has transitioned into a new operational cycle (2025–2027).

Alongside existing partners like Metadium, VERYWORDS, and SuperSchool, new global partners have joined, totaling nine participants. These partners play a vital role as co-operators of the Metadium ecosystem, participating in block generation, validation, and governance. The new structure enhances the network’s technical diversity and global scalability.

For more details, please click here.

Metadium remains dedicated to advancing real-world applications of blockchain technology and decentralized identity.

We will continue to innovate and build a trusted digital infrastructure for our users and partners.

Thank you,

The Metadium Team


Metadium 2025 Q3 Activity report was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium DID Applied in KISA’s “K-BTF” National Pilot Project


Blockchain-Based Identity Infrastructure Tested for Public Sector Use
CSAP Security Certification Underway — Demonstrating Real-World Viability of Metadium’s DID Technology

Government-Backed Pilot Adopts Metadium’s DID Technology

Metadium’s decentralized identity (DID) technology has been adopted for the 2025 pilot project of the Korea Blockchain Trust Framework (K-BTF), led by the Korea Internet & Security Agency (KISA). The K-BTF program aims to validate the use of blockchain-powered trust infrastructure within public services.

CPLABS is responsible for building the DID platform for this year’s pilot, with Metadium as the underlying blockchain protocol. The project covers the entire lifecycle of DID infrastructure — from system design to cloud-based deployment and security certification — offering a robust opportunity to demonstrate the scalability and reliability of Metadium’s technology in real-world public cloud environments.

CSAP Certification in Progress for Metadium-Based DID Platform

CPLABS has successfully implemented a Metadium-based DID platform, including DID issuance, authentication, and verification functions. The platform has formally entered the CSAP (Cloud Security Assurance Program) evaluation process, which is a mandatory certification for any cloud service provider offering solutions to public institutions in Korea.

The CSAP review will assess technical performance and verify the platform’s compliance with administrative and policy standards.

This marks a meaningful step forward for Metadium DID as it transitions from technical showcase to deployable infrastructure for government-grade digital identity systems.

Expanding from Private Sector to Public Sector Applications

Having proven itself in multiple commercial use cases, Metadium DID is now being tested for public sector readiness and policy alignment. Metadium’s blockchain is purpose-built for identity issuance and verification and features a native DID method optimized for privacy, traceability, and regulatory compliance — qualities that align well with government requirements.

The newly built platform is expected to serve as the foundation for a range of future public services, including cloud-based identity issuance, digital authentication, and point-integrated administrative programs.

👉 The results of the CSAP certification and future use cases will be shared via Metadium’s official channels.

The Metadium Team


Website | https://metadium.com Discord | https://discord.gg/ZnaCfYbXw2 Telegram(KOR) | http://t.me/metadiumofficialkor Twitter | https://twitter.com/MetadiumK Medium | https://medium.com/metadium

Metadium DID Applied in KISA’s “K-BTF” National Pilot Project was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


SuperSchool Joins the Metadium Node Partner Network

At the intersection of technology, education, and trust infrastructure

At Metadium, we build infrastructure for trust. We believe trust isn’t just about how a technology operates, but about how that technology is responsibly operated together. Trust isn’t built in isolation — it’s co-created with committed partners.

Today, we’re proud to welcome a new partner to our ecosystem: SuperSchool, an AI-powered education platform, has officially joined Metadium as a Node Partner.

Technology that works in the classroom — The SuperSchool mission

SuperSchool is an EdTech company innovating the school system through AI and cloud-based technology.

From attendance and activity logs to counseling, performance analytics, college guidance, and student records — SuperSchool digitizes the full spectrum of school operations, offering automated, AI-driven solutions for teachers and students.

Their work is grounded in real classrooms:

Co-designed with over 300 teachers nationwide
Patent-registered for digital attendance & administration systems
Privacy partnership with Korea University’s Graduate School of Cybersecurity
Co-developed an IB education platform with Jeju PyoSeon High School
Signed international contracts with schools abroad, including in China

This is not just about making school digital — it’s about managing sensitive educational data responsibly and meaningfully.

From technology user to ecosystem operator — What the node partnership means

Metadium has already been providing blockchain technology for parts of SuperSchool’s platform.

Now, that collaboration deepens: SuperSchool joins as a Node Partner, becoming an active participant in the operation of the Metadium network.

A node is more than just infrastructure. It’s a trust operator responsible for helping maintain the integrity, transparency, and continuity of a decentralized ecosystem. SuperSchool now shares in that responsibility.

Blockchain-Powered verification for educational records

SuperSchool manages data that reflects students’ lives — data that informs academic growth, career decisions, and institutional trust.

AI-generated activity record analysis
Auto-drafted student reports for evaluation
Personalized career & college guidance

Parts of this sensitive data are cryptographically verified through Metadium’s blockchain, ensuring they are tamper-proof and authentically sourced.

This represents a real-world intersection of public-sector education and blockchain-level trust.

Metadium x SuperSchool

We now stand on the same node where technology meets education and trust supports students’ future.

📎 Learn more about SuperSchool


SuperSchool Joins the Metadium Node Partner Network was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


VERYWORDS Joins the Metadium Node Partner Network

At the intersection of technology, climate, and trust infrastructure

Metadium is more than just a blockchain project. We design trust infrastructure. And trust is not built by technology alone — it’s built through responsible operation and meaningful participation by those who share that responsibility.

Today, we’re pleased to announce the latest addition to the Metadium ecosystem: VERYWORDS, a company developing and operating an e-Mobility-based carbon reduction platform, has officially joined as a Metadium Node Partner.

VERYWORDS — Building sustainable technology through real-world action

VERYWORDS is not a company that merely expresses interest in climate issues. It’s a team that has spent years in the field, building practical, technology-based models to reduce carbon emissions.

2017–2019: Carbon neutrality consulting for government and enterprise
2020–2022: e-Mobility pilot programs across ASEAN countries
2023: Established electric motorcycle assembly plant in Cambodia
2024: Signed carbon credit pre-purchase agreement with Korea’s Ministry of Trade, Industry, and Energy
2025: Secured Korea’s first ITMO (Internationally Transferred Mitigation Outcome) project approval

VERYWORDS operates a multidimensional climate tech ecosystem, integrating carbon reduction, EV infrastructure, international policy cooperation, and cutting-edge technology. They’ve built a blockchain-based MRV (Monitoring, Reporting, Verification) system to ensure transparency and reliability of carbon data. They are already running IoT-based data security infrastructure and carbon reward mechanisms.

Through partnerships with the Cambodian government and various regional institutions, VERYWORDS has secured key footholds in international climate mitigation — not as experiments, but as operational models already in motion.

From technology user to ecosystem operator — a partnership grounded in shared trust

Metadium has already provided blockchain technology for VERYWORDS’ platform, helping ensure integrity in carbon verification and data transparency. This new node partnership marks an evolution in that relationship — a step toward greater autonomy, shared responsibility, and deeper integration.

A node is not just a server. As part of the blockchain’s backbone, it plays a critical role in maintaining trust, security, and decentralization. Becoming a node partner means becoming an active steward of the network itself.

VERYWORDS will now participate directly in the data creation and validation process within the Metadium blockchain, assuming the role of trust operator in a broader ecosystem of transparency and accountability.

At the intersection of climate action and Web3

Climate change is one of the most complex, far-reaching challenges humanity faces. When applied thoughtfully, blockchain offers one of the most powerful tools to address it: a tamper-proof, transparent, verifiable system of record.

VERYWORDS is a leading example of applying this technology beyond borders — creating tangible momentum in the global fight against climate change.

Their role as a Metadium Node Partner further strengthens the trust architecture behind that work.

To those building this ecosystem with us

Metadium is an open-source blockchain project focused on decentralized identity (DID) and trust infrastructure. But ecosystems are not built on code alone. They are built by those who share the responsibility to run them, those who apply them meaningfully, and those who have the vision — and grit — to bring it into the real world.

VERYWORDS is one of those partners.

Metadium x VERYWORDS

Where sustainability meets technology, and trust supports real-world climate action — We now stand on the same node.

📎 Learn more about VERYWORDS


VERYWORDS Joins the Metadium Node Partner Network was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

Introducing CheckMate for Auth0: A New Auth0 Security Tool

Announcing CheckMate for Auth0, a new, open-source tool to proactively assess and improve your Auth0 security. Analyze your tenant configuration against best practices.

FastID

3 Costly Mistakes in App and API Security and How to Avoid Them

Avoid costly app and API security mistakes. Learn how to streamline WAF evaluation, estimate TCO, and embrace agile development for optimal security.

Sunday, 19. October 2025

Matterium

THE DIGITAL CRISIS — TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE

THE DIGITAL CRISIS

TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE

Artificial inflation of real estate prices for decades caused the global financial crisis.

We propose converting the global system from a debt-backed to an equity-backed model to solve it.

We propose using AI to manage the diligence work, and using the blockchain to handle the share registers and other obligations.

By Vinay Gupta (CEO Mattereum)
with economics and policy support from Matthew Latham of Bradshaw Advisory
and art and capitalism analysis from A.P. Clarke

THE BAR CODE STARTED IT

Back when I was in primary school in the 1970s, they suddenly started teaching us binary arithmetic; why?

Well, because they could see that computers were a coming thing and, of course, we’d all have to know binary to program them. So, The Digital has been a rolling wave for pretty much all my life — that was an early ripple, but it continued to build relentlessly, and when it truly started surging forwards in the 1990s, it began to transform everything in its path. Some things it washed away entirely, some things floated on it, but everywhere it went, digital technology transformed things. Sometimes the result is amazing, sometimes it is disastrous.

The wave was sometimes visible, but sometimes invisible. You could see barcodes arrive, and replace price stickers.

I’m just a bit too young to remember when UK money was pounds, shillings and pence. In 1970 there were two hundred and forty pence to the pound.

Through the 1970s and 1980s the introduction of barcodes on goods was a fundamental change in retail, not just because it changed how prices were communicated in stores, but because it enabled a flow of real-time information about the sale of goods to the central computer systems managing logistics and money in the big stores' supply chains.

Before the bar code, every store used to put the price on every object with a little sticker gun. Changing prices meant redoing all the stickers. Pricing was analogue.

In many ways decimalization and barcoding marked the end of the British medieval period. We still buy and sell real estate pretty much the same way we did in 1970.

Monty Python and the Holy Grail, 1975

The sword has two edges

When you get digitization wrong, the downsides tend to be much larger than the upsides. It’s all very well to “move fast and break things” but the hard work is in replacing the broken thing with something that works better. It’s not a given that better systems will emerge by smashing up the old order, but the digital pioneers were young, and it seems obvious to young people that literally anything would be better than the old people’s systems. This is particularly true in America, which, being founded by revolutionaries, lacks a truly conservative tradition: in America, what is conserved, what people have nostalgia for, is revolution itself.

That makes change a constant, and this is both America’s greatest strength, and weakness. The only thing you can get people interested in is a revolution. Nobody cares about incrementally improving business-as-usual. Everybody acts like they have nothing to lose at all times.

This winner-takes-all, nothing-held-back attitude exemplified by "move fast and break things" has become the house style of digitization.

But as a result a lot of things are broken these days.

Jonathan Taplin - Move Fast and Break Things

Wikipedia turned out pretty well

You can get a decent enough answer for most purposes from Wikipedia, it’s free, it’s community generated, there’s no ads, it doesn’t enshittify (yet), and you do not need to spend a fortune on a shelf full of instantly out of date paper encyclopedias. Most people would agree this is “digitization done right.”

Spotify, not so much; it wrecked musicians' livelihoods, turned music listening into a coercive hellscape of 'curated' playlists, and is on course to overwhelm actual human-created music with AI-produced soundalike slop that will do its best to kill imaginative new leaps in music — no AI without centuries of history and culture built up in its digital soul could come up with anything like the extraordinary stuff Uganda's Nyege Nyege Tapes label finds, for example — and you pay for it or get hosed with ads. It was, after all, always intended to be an advertising platform that tempted audiences in with streamed music.

Nobody ever stopped to ask how the musicians were doing.

Of course for the listeners — the consumers of music-content-product — the experience was initially utopian. People used to talk about the celestial jukebox, the everything-machine, and for a while $9.99 a month got you that. The first phase was utopian, for everybody except the musicians, the creators of music: they had a better financial deal when they got their own CDs pressed and sold them at shows. Seriously, musicians look back at that time as the good old days. Digitization went badly wrong for music.

How Spotify is stealing from small indie artists, why it matters, and what to do about it

It’s not just the software that can be disastrous; take massive data centres, not only do they cover vast areas and consume ludicrous amounts of energy, they are extremely vulnerable to disaster. The Korean government just lost the best part of a petabyte of data when one went up in smoke — the backup was in the same place it seems.

Then there’s a contagious Bluetooth hack of humanoid robots that has just come to light. You can infect one of the robots, and then over Bluetooth, it can infect the other robots, until you have a swarm of compromised humanoid robots, and Elon Musk says he’s going to produce something like 500,000 of these things.

We always thought Skynet would be some sinister defence company AI, but it turns out that basically it’s just going to be 4Chan’s version of ChatGPT — and it’s not like there isn’t plenty of dodgy, abrasive internet culture in the training data already!

This is the digital crisis: it inevitably hits field after field, but whether what emerges at the end is a winner or a disaster is completely unpredictable.

Will it lead to a Wikipedia, or a Spotify, something that's just sort of OK, like Netflix, or something deeply weird and sinister like those hacked robots? Did Linux save the world? Will it?

Why is there such a range in outcomes over a process whose arrival is so predictable? That is because the Powers-That-Be that might steer the transition, that could come up with an adequate response, the nation states, are really poor at digital. Nation States move too slowly, they fundamentally fail to understand the digital, and their mechanisms just haven’t caught up; they just suck at digital at a very primordial level, so the result of any digital crisis requiring state intervention is a worse crisis.

That’s not to say that any of the possible alternatives to nation states show any sign of doing this better — that’s part of the problem.

Whoever is doing the digitizing during the critical period for each industry has outsized freedom to shape how the digitization process plays out.

4chan faces UK ban after refusing to pay 'stupid' fine

Move fast and break democracy

Eventually “move fast and break things” took over in California and beyond, and crypto (the political faction, the industry, the ideology!) identified the fastest moving object in the political space (Trump-style MAGA Republicanism) and backed it to the hilt.

The American Libertarian branch of the crypto world is now trying to build out the rest of their new political model without a real grasp of how politics worked before they got interested in it. The crypto SuperPACs and associated movements threw money at electing a team who would accommodate them, in the process destroying the old American political mode without, perhaps, much concern about what else they might do once in power.

There’s a whole bunch of “break things phase” activity emerging from this right now.

Unprecedented Big Money Surge for Super PAC Tied to Trump

The “break things” part of “move fast and break things” has a very constrained downside in a corporation. Governments are a lot more dangerous to tinker with.

The Silicon Valley venture capital ecosystem is itself a relic, a legacy system dating back to the American boom times of the 1950s. Silicon Valley is having an increasingly hard time generating revenue, and today its insularity and short-sightedness are legendary. There is a lot of need for innovation, and there's no good way to fund most of it. Keeping innovation running needs a new generation of financial instruments (remember the 2018 ICO craze?) but instead we're stuck with Series funding models.

Funding the future is now a legacy industry.

Series A, B, C, D, and E Funding: How It Works

It still isn’t fully appreciated that today’s political crisis, to a significant extent, is because the Silicon Valley could not integrate into the old American political mode. For decades Silicon Valley struggled to find a voice in Washington, or to figure out whether the right wing or the left wing was its natural home. Meanwhile life got worse and worse in California because of a set of frozen political conflicts and bad compromises nobody seemed to be able to fix. The situation slowly escalated, but the problem in Silicon Valley was always real estate.

How Proposition 13 Broke California Housing Politics

Digital real estate is a huge global gamble

The digital crisis is just about to collide with one of society’s other major crises — the housing crisis.

We have problems, globally, with real estate. We don't seem to be able to build enough of it, and nobody seems to be able to afford it, largely because it's being used as an asset class by finance instead of being treated as a basic human need.

Real estate availability and real estate bubbles are horrendous problems.

The U.S. Financial Crisis

Now the hedge funds are moving in to further financialize the sector, at the same time as people find themselves unable to buy enough housing to have kids in.

This has been steadily getting worse since Thatcher and Reagan in the late 70s/early 80s. Once, one person in work could comfortably buy a house and support a family, then it became necessary for two people to work to do that, now it’s slipping beyond the grasp of even two people, and renting is no cheaper; renters are just people who haven’t got a deposit together for a mortgage, so are paying someone else’s and coming out the end with nothing to show for it. It’s a mess, and then we’re going to come along and we’re going to digitize real estate. What could possibly go wrong?

Well, if we don’t deal with this as being an aspect of a much larger crisis, we will be rolling the dice on whether we like the outcome we get from digitization of real estate. Things are really bad already, and bad digitization could make them so much worse. But, as is the nature of the digital crisis, it could also make them better, and it is up to us, while things are still up in the air, to make sure that this is what happens.

The initial skirmishes around digitization of real estate have mostly been messy: the poster children are Airbnb and Booking, both of which enjoy near-monopoly status and externalize a range of costs onto the general public, while usually offering seamless convenience to renters and guests. But when things go wrong and an apartment gets trashed or a hotel is severely substandard, people are often left out in the cold dealing with a corporation so large it might as well be a government; and this, indeed, is usually how the Nation State as an institution has handled the digital.

Corporations the size of governments negotiate using mechanisms that look more like international treaties than contracts, and they increasingly wield powers previously reserved to the State itself. It’s not a great way to handle a customer service dispute on an apartment.

Neoreaction (NRx) and all the rest of it simply want to ratify this arrangement and create a permanent digital aristocracy as a layer above the democracy of the (post-)industrial nation states: the directors and owners of corporations treated as above the law.

Inside the New Right, Where Peter Thiel Is Placing His Biggest Bets

Economic stratification and political complexity

One reason we aren’t dealing adequately with these crises is that the very existence of many of them is buried by an increase in the variance of outcomes. It used to be that people operated within a fairly narrow bandwidth. The standard deviation of your life expectations was relatively narrow, barring things like wars. Now, what we have is this incredibly broad bimodal distribution, trimodal distribution. A chunk of people manage to stay in the average, a tiny number of people wind up as billionaires, and then maybe 20% of the society gets shoved into various kinds of gutters. In America, it’s medical bankruptcy, it’s homelessness, it’s the opioid epidemic, it’s being abducted by ICE, those kinds of things.

What we’ve done is create a much wider range of possible outcomes, and a lot of those outcomes are bad, but the average still looks kind of acceptable — the people at the top end of that spectrum are throwing off the averages for the entire rest of the thing.

Ten facts about wealth inequality in the USA - LSE Inequalities

In fact, generally speaking, on the streets things repeatedly approach the point of revolution as various groups boil over. If they all boil over at the same time, that’s it, game over, new regime.

We’re in a position where we’ve managed to create a much more free society with a much wider range of possible outcomes, however, the bad outcomes are very severe and often masked by the glitzy media circus around the people enjoying the good outcomes. Good outcomes are being disproportionately controlled by a tiny politically dangerous minority at the top, but as these are the ones making the rules, trying to correct the balance is super difficult.

Democracy as we knew it was rooted in economic democracy, and nothing is further from economic democracy than robots, AI, and massive underemployment. Political democracy without economic democracy is unstable and only gives the lucky rich short term benefits; they are gambling on being able to constantly surf the instabilities to keep ahead of the game, continuing to reap those benefits while palming the externalities off on everyone else. But that can’t be done; eventually someone gets something wrong and the whole lot hits the wall in financial crashes, riot, revolution, and no one gets a good outcome. It all ends up like Brazil if you’re lucky and Haiti if you’re not.

The combination of extreme wealth gaps and democracy cannot be stabilized, and increasingly the rich are looking at democracy as a problem to be solved, rather than the solution it once was. I cannot tell you how bad this is.

Yet the benefits of technology are all around us, increasingly so. Democracy tends towards the constant redistribution of those benefits through taxation-and-subsidy. To fight against being redistributed, the billionaires are rapidly moving towards a post-democratic model of political power. The general human need for access to a safe and stable future seems to be less and less a stated goal for any political faction. This is getting messy.

Today, middle of the road democratic redistribution sounds like communism, but it’s not; it just sounds like that because the current version of capitalism is so distorted and out of whack. American capitalism used to function much more like Scandinavian capitalism, a version of capitalism that gives everyone a reasonable bit of the pie, with a strong focus on social cohesion. Within that model, the slice may vary considerably in size, but it should allow even those at the lower end safe and dignified lives. Weirdly enough the only large country running a successful 1950s/1960s “rapid economic growth with reasonable redistribution of wealth” model of capitalism today is China.

Breakneck

Fractocrises and magic bullets

In 2016 there was a little dog with a cup of coffee that reflected back the feeling that the world had gone out of control and nobody cared.

In 2016. Sixteen. No covid. Not much AI. Little war. But still the pressure.

https://www.nytimes.com/2016/08/06/arts/this-is-fine-meme-dog-fire.html

Understandably some very smart people are pursuing the concept of polycrisis as a response to the many arms of chaos.

https://x.com/70sBachchan/status/1723103050116763804

Deal with the crises in silos and this mess is the result.

The impulse towards polycrisis as a model is understandable, but it’s a path we know leads to a very particular kind of nowhere. It leads to Powerpoint.

https://www.nytimes.com/2010/04/27/world/27powerpoint.html

In truth, crises are fractal. They are self-similar across levels. The chains of cause-and-effect which spider across the policy landscape in impenetrable webs are produced by a relatively small number of repeating patterns.

“Follow the money”, for example, almost always cuts through polycrisis and replaces the complexity of the situation with a small number of actors who are above the law.

To use a medical analogy, a patient can present with a devastating array of systemic failures driven by a single root cause. Consider someone suffering from dehydration — blood pressure is way down, kidneys are failing, maybe 40 different systems are going seriously wrong. Treat them individually and the patient will just die.

Step back and realise “Oh, this patient is dehydrated!”, give them water and rehydration salts and appropriate care and all the problems are solved at once.

Or maybe it’s reintroducing wolves to Yellowstone Park; suddenly the rivers work better, there are more trees, insect pests decline, because one big key change ramifies through the system and brings about a whole load of unanticipated benefits downstream. Systems have systemic health. Systems also have systemic decline. But the complex systems / “polycrisis” analysts focus entirely on how failing systems interact to produce faster failure in other failing systems, effectively documenting decline, and carry around phrases like “there is no magic bullet.”

There is. The magic bullet for dehydration is water.

Finding the magic bullets is medicine; documenting systemic collapses is merely biology.

REHYDRATING THE AMERICAN DREAM

The dollar is a dead man walking — there is no way to stabilise the dollar in the current climate. It is holed below the waterline but the general public has only the very earliest awareness of this problem today. By the time they all know there will be no more dollar. Perhaps the entire fiat system is in terminal decline as a result: if the dollar hyperinflates, or dies in some other way, will it take the Pound and the Euro and the Yen with it? Who could have foreseen this?

Frankly, in 2008, following the Great Financial Crisis, everybody knew.

https://theconversation.com/as-uk-inflation-falls-to-2-3-heres-what-it-could-mean-for-wages-230563

There is a long term macro trend of fiat devaluation. There is also the acute fallout of the 2008 catastrophe. We have a fundamental problem: the 1971 adoption of the fiat currency system (over the gold standard) is not working. The dysfunction of the fiat system detonated in 2008. We have now had 17 years of negotiations with the facts of the matter, but so far, no solutions.

Well, other than this one…

All the fiat economies are carrying tons of debt, crazy unsustainable amounts of debt, both personal and national. It could well be that a lot of smart people are thinking that a “great reset” of some kind would solve a lot of problems simultaneously.

The Jubilee Report: A Blueprint for Tackling the Debt and Development Crises and Creating the Financial Foundations for a Sustainable People-Centered Global Economy

The nature of that “great reset” is going to determine whether your children live as slaves, or live at all.

So the global approach to currency needs overhauling as part of a more general effort to make the political economy stabilize in an age of exponential change.

It would not be the first time such an overhaul has happened, even within living memory.

Bowie released “The Man Who Sold The World” while the dollar was still backed by physical gold. This is not ancient history. It’s not at all irrational to think that 6000 year old human norms about handing over shiny bits of metal for food might need to be updated for the world we are in today. But it’s also not too late to adjust our models and fine-tune the experiment.

Globally issued non-state fiat, like Bitcoin, is just not going to get you the society that you want, unless the society you want is an aristocratic oligarchy. Bitcoin is just a different kind of fiat — money that only exists because someone says it’s money and enough people go along with it, rather than money based on something that has intrinsic value itself. It has the same problem as fiat currency has: there is no way to accurately vary the amount of money to meet the demand for money to keep the price of money stable. Purchasing power is always going to be unpredictable and that makes long term economic forecasting difficult for workers and governments alike.

Governments print too much. Bitcoin prints too little, particularly this late in the Halving Cycle.
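
For concreteness, Bitcoin's issuance schedule is fixed in code: the block subsidy starts at 50 BTC and halves every 210,000 blocks, roughly every four years. A minimal sketch of that schedule in Python (simplified to floats; the reference implementation works in integer satoshis):

    # Bitcoin's fixed issuance: the block subsidy starts at 50 BTC and
    # halves every 210,000 blocks (~4 years). Nothing in the schedule
    # responds to the demand for money; new supply only shrinks.
    HALVING_INTERVAL = 210_000
    INITIAL_SUBSIDY = 50.0  # BTC per block

    def block_subsidy(height: int) -> float:
        """New BTC issued for the block at the given height."""
        return INITIAL_SUBSIDY / (2 ** (height // HALVING_INTERVAL))

    for height in (0, 210_000, 420_000, 630_000, 840_000):
        print(height, block_subsidy(height))  # 50.0, 25.0, 12.5, 6.25, 3.125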

Understanding the Bitcoin Halving Cycle and Its Impact on 2025 Market Trends

The problems that purchasing-power fluctuations cause for estimating long-term infrastructure project economics have huge impacts too: if you can't accurately predict the future, you can't finance infrastructure. You can't plan for pensions. The great wheel of civilization grinds to a halt as short-termism eats the seed corn of society. Nobody wants to make a 30 year bet because of robots and AI and all the rest, and so we wind up ruled quarter by quarter with occasional 4 year elections.

Not dollars, not Bitcoin

The debates about what money should be are not new.

Broadly, there are three models for currency:

(1) government fiat/national fiat — fine in principle but, in practice, in nearly all highly democratic societies the governments wind up inflating away their own currencies over time

(2) global fiat issued on blockchains — Ethereum, Bitcoin, all the rest of those things

(3) resource-backed currencies — conventionally that means gold but it can also apply to things like Timebanking and various mutual credit systems

Gold is already massively liquid. You cannot solve a global crisis by making gold 40 times more valuable than it currently is because it becomes the backing for all currencies again. Gold is also very unequally distributed: Asian women famously collect the stuff in the form of jewellery and a shift to a new gold standard could easily make India one of the wealthiest countries in the world again, women first. Much as this sounds like a delightful outcome, it’s hard to imagine a new economic order ruled by now-very-wealthy-indeed middle class Indian housewives who had a couple of generations to build up a solid pile of bangles.

This, by the way, is the same argument against hyperbitcoinization — being on the Silk Road in 2011 and buying illegal substances using bitcoin is not the same thing as being good at productive business or being a skilled capital allocator: windfalls based on a social choice about currency systems are not a sensible way to allocate wealth, although it does often happen.

Hyperbitcoinization Explained - Bitcoin Magazine

You can argue that bitcoin mining requires a ton of expertise and technological capacity, and this is worthy economic reward, but there is a fundamental limit to how many kilograms of gold you can rationally expect to pull out of a data center running 15 year old open source software.

Similarly, the areas which were geographically blessed (or is that cursed?) by gold would wind up with a huge economic uplift. It becomes a question of geological roulette whether you have gold or not, and unlike the coal and oil and iron and uranium lotteries, nobody can build anything using gold as an energy source or a tool. Gold is just money. It’s like an inheritance.

Resource curse - Wikipedia

So what’s the alternative? Bitcoin scarcity, gold scarcity, these are all models in which early owners of the asset do very well when the asset class is selected for the backing of the new system. Needless to say those asset owners are locked in a very significant geostrategic power struggle for the right to define the next system of the world. They are all bastards.

Strange women lying in ponds distributing swords is no basis for a system of government...

But what if we move to something that is genuinely, fundamentally useful? Well, what about land? You’re much more likely to get a world that works if you rebase the global currencies on real estate in a way that causes homes to get built, than if you rebase the world’s currencies on non-state fiat.

Both sides of this equation must balance. If we simply lock the amount of real estate in the game, then (figure out how to) use it as currency, we wind up with another inflexible monetary supply problem. Might as well use Bitcoin or Gold. We've been down this track: we did not like it, and in 1971 we changed course permanently.

Real estate could be “the new gold” but real estate has flexible supply because you can always build more housing.

If the law permits.

And if we can solve that problem, the incentives align in a new way: building housing increases the money supply. If house prices are rising too fast, build more housing.

Bryan's Cross of Gold and the Partisan Battle over Economic Policy | Miller Center

Artificially scarce real estate is the gold of today

We’ve been manipulating real estate prices for a few generations.

The data is screamingly clear, and it is 100% evidence of pervasive market manipulation: housing is not hard to physically build, but there has been a massive concerted effort to keep the stuff expensive through bureaucratic limitations on supply. There are entire nation states dedicated to this cause.

The exceptions to this rule look like revolutionary actions.

Consider Austin, Texas, which saw its real economic growth and potential status as The Next Great Californian City threatened by a San Francisco-style house price explosion. Austin responded with a massive building wave, and managed to rapidly stabilize house prices at a more sustainable level.

Some reports say that >50% of Silicon Valley investor money eventually winds up in the pockets of landlords.

Peter Thiel: Majority of capital poured into SV startups goes to 'urban slumlords'

The way out is to build housing, and a lot of it.

But not like this.

Digital finance has to build more real estate to win

At the root of everything is that the digitization of real estate has to build more real estate. If the next system does not work for average people to get them a better outcome than the current system, there is going to be real trouble: state failures or the violent end of capitalism.

First and above all, this means we need to build more real estate.

Building has been artificially restricted because to make it work as an investment that increases in value, there needs to be scarcity; if you build more its investment value goes down, but its utility value increases.

YIMBY - Wikipedia

One way to digitize real estate is to create currencies backed by real estate, but the logical outcome of this is to make real estate as scarce as possible to protect the value of the currency, which is a disaster for the people who actually need to live somewhere. It would be like a society where mining gold is illegal because the value of the gold supply has to be protected, except we are doing this for homes. We are here now, and we could make this disaster worse.

In truth, if we take that path, we are fucked beyond all human belief. We will have literally immanentized the eschaton. You basically wind up with the economic siege of the young by the old, and that is a powder keg waiting to blow. State failures and violent revolutions.

The 2008 crisis was triggered by over-valuing real estate (underpricing the risk, to be precise) on gigantic over-simplified financial instruments like mortgage-backed securities, literally gigantic bundles of mortgages with a fake estimate about how many of the people taking out those mortgages could afford them in the long run. The global economic slowdown triggered by the US-led invasion of Afghanistan and Iraq (don’t even get me started) hit the mortgage payers, and the risk concentrated in markets like “subprime mortgages” and the credit default swaps which were being used to hedge those (and other) risks.

Credit default swap - Wikipedia

The digital crisis, when it hits real estate, could make 2008 look like the boom of the early 90s. However we choose to tokenize real estate, it has to result in more homes getting built.

You cannot use real estate as the backend for stablecoins, then limit the supply of real estate in a way that causes prices to continually go up. That paradigm is what has caused the current real estate crisis. It’s been destroying our societies in America and Europe for decades, so it’s not going to solve the crisis it has caused.

This is largely downwind of Thatcher and Reagan and financial deregulation on one hand, paired with promises to control inflation over the long run (we’re talking decades). This was the core promise made by the Conservatives: inflation will stay low forever. We will not print money.

Once that promise was in place it was possible to have low interest rates and long mortgages, meaning the working class could afford to buy housing. They called this model the Ownership Society.

Ownership society - Wikipedia

The ownership society (and associated models) was an attempt to change the incentives for poor voters so they would not use democracy to take control of the government and vote money from the rich into their own pockets.

What we’ve done is we’ve basically bribed an entire generation (the boomers) with that model, and now we’re at the point where they have no grandchildren and the entire thing is collapsing because housing is a much worse kind of bitcoin than bitcoin. Expensive bitcoin makes bitcoin hard to buy. Expensive housing devastates entire societies. And that’s where we are today.

The solution to all of these ills is to solve these crises at a fundamental level. The patient is dehydrated. The patient needs water. Affordable housing.

This is why you’ve got to focus on outcomes for average people: in any crisis you can find a minority of people who are thriving. Those people are useless for diagnosing the cause of the crisis. You have to look at the losers to understand why the system is broken.

The rent is too damn high.

The patient needs water, not antibiotics

If we fix the housing part of this digitization crisis correctly, the results are going to be amazing. That could be the one big change that propagates through the entire financial system and brings back the balance.

Essentially, what works is not backing a currency with real estate, then manipulating the real estate supply to prop it up. What works, we believe, is being able to use land directly as a kind of currency. This is not in the current sense of taking out a loan with the land as collateral, but instead using it directly as money without ever having to dip into any kind of fiat; no need to turn anything into dollars to be able to trade things. Why would I pay interest on a loan against my collateral if I can simply pay for something using my collateral directly?

If we digitize real estate properly, the reward is that we could potentially use tokenized real estate to stabilize the financial system. Regulatory friction is keeping real estate, by far the world’s largest asset class, illiquid in a world which desperately needs liquidity. But there is also a very hard problem in automating the valuation of real estate, and that is going to need AI.

When something is digitized it is inevitably an approximation, and the consequences of that approximation are much larger in some areas than others. With real estate, when we buy and sell we’re constantly in a position where we are dealing with the gap between the written documentation of the real estate and the actual value of the asset. As a result, you wind up with another kind of digitization crisis, one caused by the gap between the digital representation of the object and the object itself.

Using current systems, the liability pathways attached to misleading information in a data set being used to value assets would normally be revealed during legal discovery. If the problem is worth less than tens of millions it’s never going to be found out. If the problem is worth tens or hundreds of billions, it’s now too late. A lot slips through the gaps, historically speaking. And this is only going to get worse now that sellers have started to fake listings using AI.

Realtors Are Using AI Images of Homes They're Selling. Comparing Them to the Real Thing Will Make You Mad as Hell
Real estate listing gaffe exposes widespread use of AI in Australian industry - and potential risks

This information-valuation-risk nexus creates friction; to get real estate digitization to work we need to eradicate that friction, and keep fake listings out of the system. This challenge is only going to get harder.

Total Value of Global Real Estate: Property remains the world's biggest store of wealth | Savills Impacts

Real estate is a safer fix for the currency crisis

“Without revolution” is a feature, not a bug.

Vitally, unlike gold or bitcoin, the distribution of land and real estate ownership is close to the current estimates of people's wealth: a shift to a real estate based economic model would not have the same gigantic and disruptive impacts as moving to either gold or bitcoin or both. There is enough value there too: $400 trillion of real estate, versus $30 trillion of gold or only $38 trillion of US national debt. Global GDP is a bit over $100 trillion.

There is enough real estate, correctly deployed, to create a stable global medium of exchange.

The valuation problem has meant that previously the transactional costs of pricing real estate as collateral were insane. Instead of doing the hard work, pricing the real estate, financial institutions priced the mortgages on the real estate using simplistic models. The 2008-era financial system simply treated the mortgage as a promise to pay, without evaluating whether the person who was supposed to pay had a job, or if anybody was willing to buy the underlying asset which was meant to be backing the mortgage. A thing is worth what you can sell it for!

Shocking Headlines of the 2008 Financial Crisis - CLIPPING CHAINS

You would think somebody was minding the store, but you only need to look at the post-2008 shambles to realize that not only is there nobody minding the store, the store itself burned down some time ago. In fact the global financial system is a system in name only: it's more like a hybrid of a memecoin economy, a Schelling point, a set of real economic flows of oil and machine tools and microprocessors, and big old books of nuclear strategy and doctrine. The global "system" is a bunch of bolted-together game boards with innumerable weird pieces held together by massive external pressures, gradually collapsing because the stress points between different complex systems are beyond human comprehension. Environmental economics, for example. Or the energy policy / national security interface. AI and everything. The complexity overwhelms understanding and the system degrades.

It does not have to be this way.

When you have AI to price complex collateral like real estate (or running businesses), you can do things with that collateral that you couldn’t do previously. Of course that AI system needs trustworthy inputs. If the information coming into the system is factual, and the AI is an objective analyst, various parties can use their own AI systems to do the pricing without human intervention, so the trade friction plummets. Remember too these are competitive systems: players with better AI pricing models will beat out players with less effective price estimation, and that continuous competition will keep the markets honest, at least for a while.

Mattereum Asset Passports can provide the trustworthy inputs, again based on an extremely competitive model to price the risk of bad information getting into the Mattereum Asset Passport which is being used by the AI system to price the asset. The economic model we use was built from the ground up to price every substantial asset in the world even in an environment with the pervasive use of AI systems to manufacture fake documents and perpetrate fraud. We literally built it for these times, but we started in 2017. That’s futurism for you!

The economic mechanism of the Mattereum Asset Passport is a thing of beauty. The way that it works is that data about an asset is broken up into a series of claims. For example, for a gold bar, weight, purity, provenance, vaulting, and delivery details are likely enough to price the bar. For an apartment there might be 70 claims, including video walk-throughs of the space and third-party insurances covering issues like title or flood insurance. Every piece of information in the portfolio is tied to a competitively-priced warranty: buyers will rationally select the least expensive adequate warranty on each piece of data. This keeps warranty prices down. This process is a strain with humans in the loop for every single decision, but in an agentic AI economy this competitive "Product Information Market" model is by far the best way of arriving at a stable on-chain truth about matters of objective fact.
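
As an illustration only (the class and field names below are hypothetical, not Mattereum's actual schema), the "claims plus competing warranties" selection might be sketched like this in Python:

    from dataclasses import dataclass

    # Hypothetical sketch of the "claims + competing warranties" idea;
    # names are illustrative, not Mattereum's actual data model.
    @dataclass
    class Warranty:
        provider: str
        annual_price: float  # cost of warranting this claim
        coverage: float      # payout cap if the claim proves false

    @dataclass
    class Claim:
        statement: str            # e.g. "title is unencumbered"
        offers: list[Warranty]    # competing warranty offers

    def cheapest_adequate(claim: Claim, required_coverage: float) -> Warranty:
        """A rational buyer picks the least expensive offer that still
        covers the loss suffered if this claim turns out to be false."""
        adequate = [w for w in claim.offers if w.coverage >= required_coverage]
        return min(adequate, key=lambda w: w.annual_price)

    title = Claim("title is unencumbered", [
        Warranty("insurer-a", annual_price=900.0, coverage=400_000.0),
        Warranty("insurer-b", annual_price=650.0, coverage=380_000.0),
    ])
    print(cheapest_adequate(title, 380_000.0).provider)  # insurer-b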

It’s not that the system drives out error: it does, but the point is that it accurately prices the risk of error which is a much more fundamental economic process. This is a subtle point.

Bringing Truth to Market with Trust Communities & Product Information Markets

The combination of AI to commit real estate fraud and Zcash and similar technologies to launder the money stolen in those frauds is going to be unstoppable without really good, competitive models for pricing and then eliminating risk on transactions. The alternatives are pretty unthinkable.

In this new model, I come to you with a token that says, based on the Mattereum Asset Passport data, this property is worth $380,000. You say you will take 20% of this property in return for a car, and there's the transaction: you take 20% of a piece of real estate, I take an SUV. Maybe you can require me to buy back a chunk of that equity every month (a put option). Maybe the equity is pulled into an enormous sovereign wealth fund type apparatus which uses the pool to back standard stable tokens backed by fractions of all the real estate in the country. The story may begin with correctly priced collateral, but it does not end with correctly priced collateral. This is the anchor, but it is only a part of a system.
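
The arithmetic of that swap is trivial once both sides trust the valuations; a toy example (all figures invented):

    # Toy arithmetic for the equity-for-car swap described above.
    property_value = 380_000.0  # from the asset's diligence data
    car_value = 76_000.0        # the SUV's agreed price

    equity_fraction = car_value / property_value
    print(f"Car seller receives {equity_fraction:.0%} of the property")  # 20%

    # Optional monthly buy-back (the put option): repurchase over 48 months.
    monthly_buyback = car_value / 48
    print(f"Monthly repurchase: ${monthly_buyback:,.2f}")  # $1,583.33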

If we get it right — and it’s a lot of moving parts — we could get out of the awful shadow of not only 2008’s financial crisis, but the calamitous changes to the global system which emerged from 1971.

WTF Happened in 1971?

The pragmatics of making real estate liquid

As long as you’ve got the ability to do relative pricing based on AI analysis, you don’t need to convert everything into currency to use it in trade. If you have an AI that can do the relative valuations, including risk and uncertainty, you can reach a position where you don’t have to use fiat money to make a fair exchange between different items, like land and cars, or apartments and antique furniture, or factories and farms; there are a whole set of AI-based value-estimation mechanisms that can be used for doing that and produce a fair outcome.

This cuts down or eliminates the valuation problems which can be caused by any kind of fiat — be it government fiat like the dollar, or private fiat like bitcoin — making it possible to operate on tokenized land: tokens based on an asset that is inherently more stable and dramatically less volatile. Solid assets back transactions. Closer to gold, but more widely distributed.

It’s a big story. But at its simplest what if we just said… “look, this is a global currency crisis. And the reason we’re in that crisis is artificial inflation. Real estate prices. Take the inflated real estate and the debt associated with the real estate transform it into equity, you know, debt to equity transformation…” and we restart the game on a sounder basis.

Who can follow along with that tune?

If you tokenize half, or even a third, of the real estate, what that provides is a staggeringly enormous pool of assets which move from being illiquid to liquid and that liquidity — widely distributed in the hands of ordinary people by virtue of them already owning these properties — then bails out the rest of the system. The conversion of mortgage debt into shared ownership arrangements, as mortgage lenders take equity rather than facing huge waves of defaults (again), balances the books without requiring huge government bailouts and money printing as in 2008. Homeowners do not hit the sheer logistical nightmares of moving house (particularly in old age) nor do they have to borrow money from lenders by remortgaging, creating more debt.

Rather than attaching debt to the real estate, we simply add a cap table to the real estate as if it was a tiny little company, and then let the owners sell or exchange some of that equity for whatever they want.
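
As a toy illustration (names and figures invented, not a prescribed conversion formula), the cap table plus the mortgage debt-to-equity conversion might look like:

    # Hypothetical sketch: a per-house cap table, with the outstanding
    # mortgage converted into lender equity at the appraised value.
    appraised_value = 400_000.0
    outstanding_mortgage = 120_000.0

    lender_share = outstanding_mortgage / appraised_value  # debt -> equity
    cap_table = {
        "homeowner": 1.0 - lender_share,  # 70%
        "lender": lender_share,           # 30%
    }
    assert abs(sum(cap_table.values()) - 1.0) < 1e-9
    for holder, share in cap_table.items():
        print(f"{holder}: {share:.0%}")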

It’s a relatively small change to established norms, with massive, outsized benefits.

The key benefit of this approach is precisely that it is non-revolutionary. Compare the social stresses between this approach and doing that rescue process by massively pumping the price of (say) Bitcoin. In the hyperbitcoinization model you wind up with massive, massive, massive class war because you have people that were cryptocurrency nerds who are now worth half a trillion. You can’t have that kind of transfer of power without the system trying to engineer around it. Same thing happens with gold at $38,000 an ounce. The shift in wealth distribution is too violent for society to survive the transitional processes.

But making real estate truly liquid gives the economy the flexibility it desperately needs, probably without wrecking the world in the process.

Turning real estate debt into real estate equity and then making the equity tradable is not a new trick in finance: large scale real estate finance projects do things like this all the time. We’re just using established techniques from corporate finance at a much smaller scale, on a house-by-house basis, to safely manage the otherwise unmanageable real estate bubble. If every piece of real estate in America had the ability to do tokenized equity release built into the title deeds, America would not have solvency problems.

Pricing debt which does not default is relatively easy, and prior to 2008 the global system sought stability by pricing debt as if it would not default. This looks like a joke now. But pricing defaults on debt is very, very hard, because the global economy is just a part of a much larger unitary interlinked system, and factors beyond the view of spreadsheets can cause the world to move: covid, most recently. Such correlated risks change everything and are inherently unpredictable. Debt-based economies carry such risks poorly. Equity is a much better instrument for handling risk, but we have over-restricted its use, and are paying the price (literally) for this societal-scale error of judgement.

Debt cannot do what equity can, and we have too much debt and not enough equity.

Pricing complex and diverse assets like real estate is orders of magnitude harder than pricing good debt. Fortunately we now have the computer.

Flipping us from a debt world to an equity world needs a competitive AI environment to value the assets, and the blockchain to make issuing and transferring equity in those assets manageable.

That’s what’s needed to start clearing up the gridlocked debt obligation nightmare.

It’s not that hard to imagine, if you could tokenize one house, you could tokenize all of them. If you think of it as the debt to equity transformation for all of the mortgage debt, and then you pull the mortgage debt back out of the American system because you turn it into equity and then you allocate it to the banks, you could actually make America liquid again much faster.

It is an extreme manoeuvre, but the question is, as always, “compared to what?”

At the end of that we’d be left with a very different real estate ownership model, more like the Australian strata title or English Commonhold model. In both of these instances, aspects of a real estate title deed are split between multiple owners (the “freehold” is fractional) forming what amounts to an implicit corporate structure within every real estate title deed.

Imagine that, but scaled.

Strata title - Wikipedia
Commonhold - Wikipedia

So practical government fought dirty for years

Business is pretty good at change once government gets out of the way.

Once tokenized equity is clearly regulated in America, business will figure out real estate tokenization very fast. We could see 5,000 companies in America that are capable of doing real estate tokenization five years after the SEC says it’s okay to do it.

Business will create competing industrial machines that will effect the transformation, and get huge numbers of people out of the debt. Shared equity arrangements for housing could rebalance the economy without crashing society. The speed at which society can get the assets on chain is equal to how quickly finance can satisfactorily document them and fractionalize them.

What is a plausible documentation standard for a real world asset on chain that you could use an AI system to create? That’s a Mattereum Asset Passport.

Mattereum aims to get real estate through the digitization crisis in a healthy and productive way. Specifically, a decentralized, networked way which is kept honest by ruthless competition to honestly price risk in fair and free (AI powered) markets.

A business model which is the best of capitalism.

The alternatives are not attractive.

But there is reason for hope.

Once you tell the Americans what the rules are, the Americans will go there and do it. The only way that the SEC could hold back mass adoption of crypto was by refusing to tell people the rules. It doesn’t matter how onerous the regulatory burden was, if the SEC had told people the rules, they would have crawled up that regulatory tree a branch at a time, and we would have had mass tokenization six months after the rules were set, whatever the rules were.

The long delay was only possible because of an aggressive use of ambiguity, I’m going to say charitably, to protect Wall Street. Maybe it was to keep Silicon Valley out of the banking business, but however you want to think about it, the SEC had a very strong commitment under previous administrations that there was not going to be mass tokenization.

We can take this further — as the digitization wave washes inevitably over everything, if we continue to use this model we can finally be done with the age of the Digital Crisis and all its chaoses, replaced with far more stable and predictably advantageous outcomes. For example, if everybody is using an AI to put a price tag on anything they look at, and all I have to do is hold that up in the air and say, does anybody want this? Then what you could get is effectively a spot market in everything, because the AIs do pricing. In that environment is anybody going to get a destructive permanent lock-in? What makes most of the big digitization disasters into disasters is the formation of wicked monopolies, after all.

Spot markets today are for things like gold, oil or foreign exchange, anything where there's so much volume in the marketplace that the prices are basically set by magic. With a vast number of participants in a global marketplace, all you need to do is hold up an asset, then everybody uses their AI to price the asset, resulting in a market that has a spot price for everything. Add the tokens to effect the title transfer. When you have a market that has a spot price for everything, all assets are in some way equivalent to gold — the thing that makes gold, gold, is that you can get a spot price on it. So if we have spot pricing for basically everything, based on AI agents, what you wind up with is being able to use almost any asset in the world as if it was gold. Everything is capital, completing the de Soto vision of the future.
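
Reduced to its statistical core, the "everyone's AI prices the asset" market looks like this sketch (the valuation function is a stub standing in for whatever models participants actually run):

    import statistics

    # Stub: each participant's AI returns an independent valuation; real
    # agents would price from on-chain diligence data such as an Asset
    # Passport. The "spot price" is the consensus of those bids.
    def agent_valuations(asset_id: str) -> list[float]:
        return [382_000.0, 379_500.0, 385_000.0, 377_000.0, 381_250.0]

    def spot_price(asset_id: str) -> float:
        bids = agent_valuations(asset_id)
        return statistics.median(bids)  # robust to one wild model

    print(spot_price("apartment-17"))  # 381250.0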

Finance and Development

In this future, all assets are equivalent to gold because you can price them accurately and cheaply, and can verify the data about them. It changes the entire nature of global finance, because it finally removes the friction from transacting assets. Then, if you've got near-zero-friction transactions in assets, why use money? No need for dollars, no need for bitcoin; instead, a new financial system creating itself out of whole cloth on the fly, one that is stable and shows every sign of being rational because it is diverse and not tied to any single asset that can distort the market through exuberance and crashes. Diversification is the only stability.

Now that would be a paradigm worthy of the name “stablecoins”!

Anyone got a better plan for saving the world?

In a world that has blockchains, artificial intelligence, and a global currency crisis, we need big ideas and big reach to get to a preferable future. It's an alignment problem, not just AI alignment but capital alignment. We don't strive against 2008's AAA bonds backed by mouldy sheds alone, but against future Nick Landian factors about AI alignment.

Through the lens of AI, we can start looking at all the world's real estate as an anchor for the rest of the economy. When we put the diligence package for a piece of real estate on chain in the form of a Mattereum Asset Passport, then over time 50 or 70 or 90 or 95 or 99.9% of the diligence could be done by competing networks of AIs, striving to correctly value property and price risk in competitive markets which reliably punish corruption with (for example) shorts. With those tools, we could rapidly tokenize the world and use the resulting liquidity to keep the wheels from falling off the global economy.

This is, at least in potential, a positive way of solving the next financial crisis before it really starts and ensuring that the digitization of real estate does not create another digital disaster.

CONCLUSION

Artificial inflation of real estate prices for decades caused the global financial crisis.

We propose converting the global system from a debt-backed to an equity-backed model to solve it.

We propose using AI to manage the diligence work, and using the blockchain to handle the share registers and other obligations.

THE DIGITAL CRISIS — TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 17. October 2025

Shyft Network

G20’s Crypto Dilemma: Regulation Without Coordination


The Financial Stability Board (FSB) — the G20’s global risk watchdog — released a sobering statement: there remain “significant gaps” in global crypto regulation.

It wasn’t the typical bureaucratic warning. It was a clear signal that the world’s financial governance structures are lagging behind the speed and fluidity of decentralized systems. For an industry built on cross-border code and borderless capital, national rulebooks no longer suffice.

But the FSB’s concern reaches beyond oversight. It exposes an unresolved paradox at the heart of digital finance: how to regulate what was designed to resist regulation.

Fragmented Governance, Unified Risk

The FSB’s assessment underscores a growing structural mismatch. The world’s regulatory responses to crypto have been disparate, reactive, and jurisdictionally fragmented.

The United States continues to rely on enforcement-driven oversight, led by the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC), each defining “crypto assets” through its own lens. The European Union is pursuing harmonization through the Markets in Crypto-Assets Regulation (MiCA), creating the first comprehensive regional rulebook for digital assets. Asia remains diverse: Japan and Singapore operate under established licensing regimes, while India and China take more restrictive, state-centric approaches.

To the FSB, this regulatory pluralism is not innovation — it’s exposure. The lack of standardized frameworks for risk management, consumer protection, and cross-border enforcement creates vulnerabilities that can spill over into the traditional financial system.

In a market where blockchain transactions flow without borders, inconsistent regulation becomes the new systemic risk.

Regulatory Arbitrage: The Silent Threat

This fragmented environment fuels what the FSB calls “regulatory arbitrage” — the quiet migration of capital, operations, and data to jurisdictions with the weakest oversight.

Stablecoin issuers, decentralized finance (DeFi) platforms, and digital asset exchanges can relocate at the speed of software. For regulators, national boundaries have become lines on a digital map that capital simply ignores.

The result is a patchwork of supervision. Entities can appear compliant in one jurisdiction while operating opaque structures in another. Risk becomes mobile, and accountability becomes ambiguous.

Ironically, this dynamic mirrors the early years of global banking — before coordinated frameworks like Basel III sought to standardize capital rules. Crypto now faces the same evolution: a system outgrowing its regulatory perimeter.

Privacy as a Barrier and a Battleground

One of the FSB’s most striking observations concerns privacy laws. Regulations originally designed to protect individual data are now obstructing global financial oversight.

Cross-border supervision depends on data sharing — but privacy regimes like the EU’s General Data Protection Regulation (GDPR) and similar frameworks in Asia restrict what can be exchanged between authorities.

This creates a paradox:

To monitor crypto markets effectively, regulators need visibility.
To protect users' rights, privacy laws impose opacity.

The collision of these principles reveals a deeper tension between financial transparency and digital sovereignty.

For blockchain advocates, this friction isn’t a flaw — it’s the point. Privacy, pseudonymity, and autonomy were not accidental features of decentralized systems; they were foundational responses to surveillance-based finance.

Now, as regulators push for traceability “from wallet to wallet,” the original ethos of blockchain — self-sovereignty over data and identity — faces its greatest institutional test.

The Expanding Regulatory Perimeter

The FSB’s report marks a turning point: the global regulatory community no longer debates whether crypto needs rules, but how far those rules should reach.

Stablecoins have become the front line. The Bank of England (BoE) recently stated it will not lift planned caps on individual stablecoin holdings until it is confident such assets pose no systemic threat. Meanwhile, the U.S. Federal Reserve has warned that the growth of privately backed digital currencies could undermine monetary policy if left unchecked.

These positions signal that regulators see crypto not as a niche market, but as a parallel financial infrastructure that must be integrated or contained.

Yet, as oversight expands, so does the distance from decentralization’s original promise. The drive to institutionalize crypto — through licensing, capital controls, and compliance standards — risks turning decentralized finance into regulated middleware for the existing system.

The innovation remains, but the autonomy fades.

From Innovation to Integration

What the FSB implicitly acknowledges is that crypto’s mainstreaming is no longer hypothetical. Tokenized assets, on-chain settlement, and programmable money are being adopted by major banks and financial institutions.

However, this adoption often comes with a trade-off: decentralized architecture operated under centralized control.

The example of AMINA Bank — which recently conducted regulated staking of Polygon (POL) under the Swiss Financial Market Supervisory Authority (FINMA) — illustrates this trajectory. The blockchain may remain decentralized in code, but its operation is now filtered through institutional risk, compliance, and prudential oversight.

Crypto is entering a phase of institutional assimilation, where its tools survive but its principles are moderated.

The Ethical Undercurrent: Control vs. Autonomy

At its core, the FSB’s warning is not only about risk but about control. Global regulators see the same infrastructure that enables open, peer-to-peer exchange also enabling opaque, borderless financial activity that escapes accountability.

Their response — standardization and supervision — is rational from a stability standpoint. But it introduces a new ethical question: who governs digital value?

If every decentralized protocol must operate through regulated entities, if every wallet must be traceable, and if every transaction must comply with jurisdictional mandates, then blockchain’s promise of financial self-determination becomes conditional — granted by regulators, not coded by design.

This doesn’t make regulation wrong. It makes it philosophically consequential.

A Call for Coordination, Not Convergence

The FSB’s call for tighter global alignment does not mean a single, monolithic framework. True coordination will require mutual recognition, data interoperability, and respect for jurisdictional privacy laws, not their erosion.

Without this nuance, global harmonization risks turning into regulatory homogenization, where innovation bends entirely to institutional comfort.

A sustainable balance will depend on how regulators treat decentralization:

As a risk to be mitigated, or
As an architecture to be understood and integrated responsibly.

The distinction is subtle but defining.

The Architecture of Financial Sovereignty

The G20’s warning marks a pivotal moment. It is a reminder that the future of digital finance will not be decided by code alone, but by the alignment — or collision — of regulatory philosophies.

Crypto began as a rejection of centralized financial power. It now faces regulation not as an external force, but as an inevitable layer of the system it helped create.

The question ahead is not whether crypto will be regulated. It already is.
The real question is whose definition of sovereignty will prevail — that of the individual, or that of the institution.

About Shyft Network

Shyft Network powers trust on the blockchain and economies of trust. It is a public protocol designed to drive data discoverability and compliance into blockchain while preserving privacy and sovereignty. SHFT is its native token and the fuel of the network.

Shyft Network facilitates the transfer of verifiable data between centralized and decentralized ecosystems. It sets the highest crypto compliance standard and provides the only frictionless Crypto Travel Rule compliance solution while protecting user data.

Visit our website to read more, and follow us on X (formerly Twitter), GitHub, LinkedIn, Telegram, Medium, and YouTube. Sign up for our newsletter to keep up-to-date on all things privacy and compliance.

Book your consultation: https://calendly.com/tomas-shyft or email: bd@shyft.network

G20’s Crypto Dilemma: Regulation Without Coordination was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Elliptic’s Typologies Report: Detecting the money flows behind the global pig butchering ecosystem

In recent years, the growing scale and profitability of so-called pig butchering scams has sparked increasing concern among law enforcement and regulatory agencies around the world. 



FastID

DDoS in September

Fastly's September 2025 DDoS report details modern application attacks. Get insights and guidance to strengthen your security initiatives.

Thursday, 16. October 2025

1Kosmos BlockID

What Is Digital Identity Management & How to Do It Right

The post What Is Digital Identity Management & How to Do It Right appeared first on 1Kosmos.

Spruce Systems

Designing Digital Guardianship for Modern Identity Systems

Considerations for how states can responsibly represent parental, custodial, and delegated authority without compromising privacy.

In the move toward more inclusive and privacy-respecting digital government services, guardianship (when one person is legally authorized to act on behalf of another) is a core, but often overlooked, component.

Today, guardianship processes are fragmented across probate court, family court, and agency-level determinations, with no clear mechanism for digital verifications. Without clarity, agencies risk legal challenges if they inadvertently allow the wrong person to act on behalf of a dependent.

Rather than treating guardianship as an abstract capability, we believe states should identify a non-exhaustive list of key use cases they want to enable. For example, a parent accessing school records on behalf of a minor, a guardian applying for healthcare or social services on behalf of a dependent senior adult, or a foster parent temporarily authorized to pick a child up. Each of these may require a different level of assurance, auditability, and inter-agency coordination.

Why Legal Infrastructure Falls Short

Several legal and regulatory barriers may affect the implementation of a state digital identity. At the state level, existing statutes were drafted for physical credentials and may not clearly authorize digital equivalents in all contexts. Without explicit recognition of state digital identity as a legally valid proof of identity, agencies may be constrained in adopting digital credentials for remote service delivery.

This legal ambiguity creates friction for both agencies and residents, limiting the full potential of digital identity solutions.

Mapping Authority: Who Can Issue What, and When

Guardianship in digital identity is a complex and, as yet, unsolved problem. A guardianship solution should accept decisions from the entities legally empowered to make them, represent those decisions in credentials rather than recreating them, and keep endorsements current as circumstances change.

The first step is to enumerate today’s pathways to establishing guardianship and to identify which entities are authorized to issue evidence. This mapping enables cohesive implementation and prevents confusion about who can issue what.

In parallel, a program should also clarify which agencies authorize which actions and what evidence each verifier needs. Where authorities differ, the state can allow agencies to issue guardianship credentials that reflect their scope while still unifying common steps to reduce friction.

A Taxonomy for Real-World Guardianship Scenarios

We believe that states should define a clear guardianship credential taxonomy.

There are multiple ways to define guardianship depending on legal and operational context, such as parental authority, foster care, medical consent, or financial guardianship. This will naturally lead to multiple guardianship credential types, tailored to definitions, use cases, and issuing agencies.
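To make the taxonomy concrete, here is a minimal sketch of what one such credential might look like, expressed as a Python dictionary and modeled loosely on the W3C Verifiable Credentials data model. The "GuardianshipCredential" type, the DIDs, and every field value are hypothetical, not a published state schema.

```python
# A purely illustrative guardianship credential; nothing here is a real schema.
guardianship_credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "GuardianshipCredential"],
    "issuer": "did:example:county-family-court",   # a legally empowered issuer
    "validFrom": "2025-01-15T00:00:00Z",
    "validUntil": "2026-01-15T00:00:00Z",          # forces periodic review
    "credentialSubject": {
        "id": "did:example:guardian",              # the guardian
        "dependent": "did:example:minor",          # the person represented
        "relationship": "foster-parent",           # an entry from the taxonomy
        "authorizedActions": ["school-records", "school-pickup"],  # scoped, not blanket
    },
}
```

Scoping authorizedActions per credential, rather than issuing blanket authority, is one way to reflect the different levels of assurance and auditability each use case requires.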

Design for Flexibility and Change

Digital delivery introduces several challenges that the program should address up front. Endorsements need to change cleanly at the age of majority or when a court modifies an order, including a clear transfer of control to the individual. Reissuance and backstops should be specified for lost devices or keys and calibrated to the chosen technical models. 

The design should remain flexible enough to accommodate emerging topics, including AI agent-based interactions, without locking in assumptions that are likely to shift.

Support Human Judgment and Prevent Abuse

The overall system for guardianship should maximize the ability of responsible individuals to exercise appropriate, contextualized human judgement. All of these systems, even when protected with cryptography, security measures, and fraud detection, will still be fallible. They should be designed to prioritize humans and their wellbeing, even when failures and fraud occur.

A state digital identity framework should require that as much credential validity information as is appropriate and necessary be made available to the relying party, and that clear indicators of the credential’s current status are available to holders.

It is equally important to prevent abuse of the system. A state must ensure that guardianship credentials cannot be issued or accumulated in ways that could enable fraud, such as one person holding dozens of guardian endorsements to unlawfully access benefits or facilitate trafficking.

The Future of Digital Guardianship

Guardianship in digital identity is not a future problem; it’s a present-day requirement. A successful state digital identity framework must support these relationships with clarity, flexibility, and privacy at its core.

SpruceID helps states design systems that reduce the risk of fraud without sacrificing individual autonomy. Contact us to learn more.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Thales Group

Thales Celebrates 60 Years in Mexico, driving technological innovation and local development

Thales, a global leader in advanced technologies, marks 60 years in Mexico, supporting the country’s technological development with solutions in Defence, Aerospace, Cybersecurity, and Digital. With more than 1,300 employees, the company has established a strong industrial footprint, spearheading key strategic projects for national growth. In this milestone year, Thales has proudly received the official “Hecho en México” label from the Mexican government, recognizing products and services that are designed and manufactured locally.

Mexico City, October 15, 2025 – Since 1965, Thales has been part of Mexico’s technological transformation. Today, with over 1,300 employees, it maintains a strong industrial presence that includes two production and personalization centers for payment cards and SIM/eSIM, an Air Traffic Management Service and Integration Center, and a Cyber Academy that trains professionals in cybersecurity. These operations serve not only the domestic market but also customers around the world, positioning Mexico as a strategic hub for the Group.

Over the past six decades, Thales has become an integral part of the daily lives of millions of Mexicans—from every phone call or mobile connection, every card or digital payment transaction, to the safety of their air travel and national defense. Thales’ radars and control centers manage 100% of Mexico’s airspace traffic. Additionally, the Mexican Navy’s Long-Range Oceanic Patrol Vessel (POLA) is equipped with Thales combat systems and sensors.

Thales is present wherever defence, security, and technological innovation are essential to advancing and safeguarding society. This journey has been made possible thanks to the trust of government entities, private companies, institutions, and cities that, for six decades, have chosen Thales as a strategic partner to face critical moments and explore new frontiers with confidence in an increasingly interconnected and complex world. In the face of every challenge, we reaffirm our commitment to building a future we can all trust.

This year, Thales proudly received the “Hecho en México” designation, awarded by the Ministry of Economy, recognizing not only the local origin of its production, but also its ongoing commitment to innovation, job creation, and specialized talent development in the country. This recognition underscores the company’s dedication to Mexico’s growth and global competitiveness.

"We look to the future with the same enthusiasm that marked the beginning of our journey 60 years ago, ready to remain a driver of change and progress in Mexico’s strategic sectors. And what better way to celebrate 60 years in the country than by honoring our people, strengthening national innovation, and reaffirming our commitment to this nation. At Thales, we proudly carry the 'Hecho en Mexico' label, because behind every project, client, and solution, there are Mexican engineers, researchers, and professionals making world-class technological advancements possible," said Analicia García, Country Director of Thales in Mexico.

Thales plays a key role in strengthening Mexico’s defence and security, with advanced systems that help safeguard its sovereignty and protect its citizens. It is also the leading provider of air traffic management systems in Mexico and a key player in the financial sector, where its cybersecurity and digital identity solutions protect the transactions and sensitive information of millions of citizens. In the field of defence and security, the Group contributes to strengthening national capabilities with advanced technologies that support the protection of territory, sovereignty, and the security of critical infrastructure. Its technology promotes trust in the national financial ecosystem and enhances the country’s resilience against emerging digital threats.

With pride in its legacy and eyes firmly on the future, Thales in Mexico will continue to expand its talent pool, investing in Mexican engineers whose high level of expertise and ability to excel on the international stage are undeniable. The company remains committed to promoting local talent, innovation, and research—solidifying its role as a strategic partner in building a safer, more competitive, and globally connected Mexico.

About Thales in Latin America

With six decades of presence in Latin America, Thales is a global tech leader in the Defence, Aerospace, Cyber & Digital sectors. The Group is investing in digital and “deep tech” innovations (Big Data, artificial intelligence, connectivity, cybersecurity and quantum technology) to build a future we can all trust.

The company has 2,500 employees in the region, across 7 countries - Argentina, Bolivia, Brazil, Chile, Colombia, Mexico and Panama - with ten offices, five manufacturing plants, and engineering and service centres in all the sectors in which it operates.

Through strategic partnerships and innovative projects, Thales in Latin America drives sustainable growth and strengthens its ties with governments, public and private institutions, as well as airports, airlines, banks, telecommunications and technology companies.


LISNR

4 Ways Ultrasonic Proximity Solves the Security-Friction Trade-Off

The Payments Paradox:

The financial services landscape is defined by a relentless drive for frictionless commerce. Yet, the industry remains trapped in a payments paradox: increasing convenience often comes at the expense of security and reliability. The current generation of low-friction solutions, primarily QR codes, is highly susceptible to spoofing and fraud. Conversely, secure methods like NFC are costly, hardware-dependent, and struggle with mass deployment.

This trade-off is untenable.

LISNR has introduced the definitive answer: Radius. By utilizing ultrasonic data-over-sound, Radius provides the industry with the missing link—a secure, hardware-agnostic, and offline-reliable method for token exchange and proximity verification. This technology is not an iteration; it is the strategic shift required to future-proof mobile payments.


The Current Vulnerability and Reliability Gaps

For financial institutions and payment processors, the challenge lies in securing high-value transactions across a fractured ecosystem:

QR Code Spoofing: QR code payments are vulnerable to “quishing” (QR code phishing). A fraudster can easily overlay a malicious code onto a legitimate one, hijacking payments or stealing credentials. This simplicity is its greatest security flaw.

Offline Transaction Liability: In environments with poor connectivity (e.g., transit, emerging markets), most digital wallets revert to a hybrid system where transactions are batched. This exposes merchants to greater fraud liability and introduces a dangerous delay in payment certainty.

Deployment Bottlenecks: Scaling a tap-to-pay solution quickly requires high capital expenditure. The mandatory, dedicated hardware required for NFC makes global deployment slow and expensive, hindering financial inclusion.

Radius: The Strategic Imperative for Payment Modernization

LISNR’s Radius SDK addresses these strategic deficiencies by decoupling transactional security from reliance on hardware and the network. It transforms every device with a speaker and microphone into a secure payment endpoint.

Here are the four non-negotiable benefits of adopting Radius for your payments platform:

1. Absolute Security 

LISNR eliminates the core vulnerability of open-source payment modalities by building security directly into the data transfer protocol.

Spoofing Elimination: ToneLock® uses a proprietary security precaution to obfuscate the payload before transmission. Only receivers with the correct, authorized key can demodulate the tone, making it impossible for unauthorized apps to read or spoof the payment data.

End-to-End Encryption: For the highest security standards, the SDK offers optional, built-in AES-256 encryption for all payloads, ensuring data remains unreadable to anyone without the key.

2. Unrivaled Offline Transaction Certainty

Radius is engineered for mission-critical reliability, ensuring transactions are secure and auditable even when the network fails.

Network Agnostic Reliability: The entire ToneLock and AES-256 encryption/decryption process can occur offline. This enables the secure exchange and validation of payment tokens without requiring an active internet connection, ensuring instant transaction certainty and lowering merchant liability in disconnected environments (a minimal illustrative sketch follows these four benefits).

Bi-Directional Exchange: The SDK supports bidirectional transactions, allowing two devices (e.g., customer wallet and merchant terminal) to simultaneously transmit and receive tones on separate channels. This two-way handshake initiates payment instantly while delivering a merchant record to the consumer device.

3. High-Velocity, Zero-Friction Commerce

The speed of a transaction directly correlates with consumer satisfaction and throughput in high-volume settings. Radius accelerates the process with specialized tone profiles.

Rapid High-Throughput: For point-of-sale environments, LISNR offers Point 1000 and Point 2000 tone profiles. These are optimized for sub-1-meter range and engineered for high throughput, enabling near-instantaneous credential exchange for rapid checkout and self-service kiosks.

Seamless User Experience: The process can be almost entirely automated: the user simply opens the app, and the transaction is initiated and verified by proximity, eliminating manual input, scanning, or tapping.

4. Low-Cost, Universal Deployment

Radius is a software-only solution that democratizes access to secure, contactless payment infrastructure.

Hardware-Agnostic: The SDK integrates into existing applications and requires only a device’s standard speaker and microphone. This removes the need for costly upgrades to POS hardware, dramatically reducing the capital expenditure barrier to global payment modernization.

Scalability: As a software solution, upgrading the entire payment infrastructure is as easy as updating the app. With no new hardware to manage, payment providers can achieve unparalleled scale and speed in deploying secure payment functionality across millions of endpoints.
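To ground the offline claim, here is a minimal sketch of the underlying pattern: a payment token is encrypted with AES-256-GCM before being handed to whatever layer modulates it into an ultrasonic tone, and decrypted on the receiving side, all without a network. It uses the open-source cryptography package and is not the Radius SDK’s actual API; the shared-key provisioning and the tone modulation itself are assumed to happen elsewhere.

```python
# Illustrative only: offline encrypt/decrypt of a payment token with
# AES-256-GCM. Not the Radius SDK's API; key distribution and the
# data-over-sound modulation are out of scope here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # assumed provisioned to both endpoints
aesgcm = AESGCM(key)

def encrypt_token(token: bytes) -> bytes:
    nonce = os.urandom(12)                             # fresh nonce per message
    return nonce + aesgcm.encrypt(nonce, token, None)  # no network needed

def decrypt_token(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)     # raises if tampered

payload = encrypt_token(b"payment-token-123")  # this blob would ride the tone
assert decrypt_token(payload) == b"payment-token-123"
```

The point of the sketch is simply that nothing in the encrypt/validate path requires connectivity, which is what lets payment certainty be established at the moment of the acoustic handshake.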

LISNR is the worldwide leader in proximity verification because our software-first approach delivers the security and reliability the payments industry demands, without sacrificing the frictionless experience consumers expect.

Want to Learn more?

We’d love to learn more about your payment solution and discuss how data-over-sound can help improve your consumer experience. Learn more about our solutions in finance on our website or contact us to set up a meeting. 


The post 4 Ways Ultrasonic Proximity Solves the Security-Friction Trade-Off appeared first on LISNR.


Ockto

More efficient assessment without the hassle: document-free is the new standard


The days when you needed stacks of documents to properly assess a customer are coming to an end. In a world where speed, compliance, and customer satisfaction are increasingly important, working with PDFs, attachments, and manual checks is no longer sustainable. In the credit management sector especially, the old process leads to delays, errors, and frustration, for the customer and the organization alike.


Ontology

Building What Matters

The Future of Web3 Communities

Everyone in Web3 talks about community. It is the word every project uses. The badge everyone wears. But what does it actually mean?

Too often, “community” becomes a checkbox. A Telegram channel. A Discord server with NFT giveaways. Some quick incentives to drive engagement. It looks alive, but it is often built on borrowed attention. When the rewards stop, so does the activity.

That is not a community. That is marketing.

Real community building is slower. It is harder. It is the process of aligning people who build with people who use what is built. It is finding the point where incentives and intention meet. Because incentives bring people in, but intention keeps them there.

Ontology has been working at this intersection for years. Its ecosystem (Ontology Network, ONT ID, ONTO Wallet, and Orange Protocol) is designed to make digital identity, reputation, and ownership usable. The mission is not to promise a new world. It is to build the tools that make that world functional.

The challenge, and the opportunity, lies in connection. How do we connect the builders who create new infrastructure with the users who actually need it? How do we make sure that what gets built is not only possible, but wanted?

The Two Paths to Community

There are two basic ways to grow a Web3 community.

The first is bottom-up. Builders and users start together, often from an open-source idea or shared need. Growth is organic. The intent is pure. It can lead to real innovation, but it often lacks structure. Without incentives or direction, momentum slows. Projects fade before reaching scale.

The second is top-down. A project defines the mission, creates incentives, and drives participation. This works in the short term. It brings clear goals and resources. But it risks becoming transactional. When participation is driven only by reward, genuine buy-in disappears.

Ontology’s view is that neither path works alone. Bottom-up builds belief. Top-down brings clarity. The right approach mixes both. You need intent to guide action, and incentives to accelerate it.

Incentives Are Not the Enemy

Incentives get a bad reputation in Web3, mostly because they are often misused. Too much focus on token rewards can distort priorities. But incentives are not the problem. Misalignment is.

Used correctly, incentives can do what they are meant to do: attract attention, reward effort, and encourage collaboration. They should not replace purpose. They should amplify it.

A healthy Web3 community does not reward speculation. It rewards contribution. The best projects find ways to recognize value that is created, not just traded. That is where Ontology’s focus on verifiable identity and reputation becomes powerful.

Through tools like ONT ID and Orange Protocol, participants can prove who they are and what they have done. This makes contribution measurable. It lets communities recognize real participation, not just noise. Builders can see who their users are. Users can trust who they are working with.

That is how you turn incentives from a gimmick into a growth engine.

What People Need vs. What People Want

Every product in Web3 faces a simple question: do people need it, or do they want it?

The truth is that need alone is not enough. People need security, privacy, and control of their data, but they rarely act on those needs until they want the solution. Want drives action.

At the same time, want without need leads to hype. Short-term excitement, no lasting value.

The strongest projects meet both. They make people want what they need. That is the balance Ontology’s tools aim to strike. Identity and reputation are not new ideas, but in Web3 they become essential. Users are learning that decentralized identity is not just a feature. It is freedom. It is usability.

When developers build with that in mind, they create products that solve real problems. ONTO Wallet gives users control of their assets and identity in one place. Orange Protocol turns reputation into a building block for trust. ONT ID lets applications integrate secure, verifiable identity without friction.

These are not abstract innovations. They are the foundation for the next generation of apps, games, and communities.

The Bridge Between Builders and Users

Community building in Web3 is not just about size. It is about structure. Builders and users need to meet in the middle.

That is where Ontology wants to focus: creating spaces and systems where developers and users can collaborate directly. Builders should understand what users need before they design. Users should influence what gets built. The result is not just adoption, but alignment.

How that happens can vary. Incubators can bring early projects into focus. Incentives can reward experimentation. Retrospective funding can support what already works. The structure is flexible. The principle is constant. Connect intent with incentive.

Ontology’s ecosystem gives that structure a home. It already supports tools for identity, data, and trust. The next step is bringing together those who build and those who use. Because Web3 only scales when both sides grow together.

From Incentives to Intent

The early years of Web3 were about speculation. The next phase is about utility. The projects that last will be the ones that shift from short-term incentives to long-term intent.

That means building for real people, not just wallets. It means communities where participation has meaning, and contribution has visibility. It means giving users a reason to stay even when rewards change.

Ontology’s technology is ready for that shift. But technology alone is not enough. It needs people. Builders who see the value of decentralized identity and reputation. Users who want control and trust. Contributors who believe in open collaboration.

The future of Web3 will not be built by one group or the other. It will be built by both, together.

The Next Step

If the goal of Web3 is freedom, then community is the mechanism that gets us there. Not through marketing or speculation, but through shared purpose.

Ontology is ready to help build that future. To connect the developers who create with the users who validate. To make collaboration not just possible, but natural.

It starts by asking the right question: what do people need, and what do they want? Then building where those answers overlap.

Let us bring you together. Builders, meet your users. Users, meet your builders. The next phase of Web3 begins with both.

Building What Matters was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


ComplyCube

19 Virtual Asset Providers Fined up to $163,000 by Dubai Regulators

Nineteen Virtual Asset firms in Dubai have been charged with penalties amounting to $163,000. These firms were fined for operating without a Virtual Assets Regulatory Authority (VARA) license and breaching Dubai's marketing rules.

The post 19 Virtual Asset Providers Fined up to $163,000 by Dubai Regulators first appeared on ComplyCube.


Recognito Vision

Why ID Verification Services Are the Smart Choice for Businesses Verifying Customers


You know that moment when a new app asks for your ID and selfie before letting you in? You sigh, snap the photo, and in seconds it says “You’re verified!” It feels simple, but behind that small step sits an advanced system called ID verification services that keeps businesses safe and fraudsters out.

In today’s digital world, identity verification isn’t a luxury. It’s a necessity. Without it, online platforms would be a playground for scammers. That’s why more companies are turning to digital ID verification to secure their platforms while keeping user experiences smooth and fast.


How ID Verification Evolved into a Digital Superpower

Not too long ago, verifying someone’s identity meant visiting a bank, filling out forms, and waiting days for approval. It was slow and painful. Today, online identity verification has turned that ordeal into a 10-second selfie check.

Feature | Traditional ID Checks | Digital ID Verification
Time | Days or weeks | Seconds or minutes
Accuracy | Prone to human error | AI-powered precision
Accessibility | In-person only | Anywhere, anytime
Security | Paper-based | Encrypted and biometric

According to a Juniper Research 2024 report, businesses using digital identity checks have reduced onboarding times by 55% and cut fraud by nearly 40%. That’s not an upgrade, that’s a revolution.


How ID Verification Services Actually Work

It looks easy on your screen, but behind the scenes, it’s like a full orchestra performing perfectly in sync. When you upload your ID, OCR technology instantly extracts your details. Then, facial recognition compares your selfie to the photo on your document, while an ID verification check cross-references the data with secure global databases.

All this happens faster than your coffee order at Starbucks. And yes, it’s fully encrypted from start to finish.
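As a rough illustration of that orchestration, the sketch below wires the three steps together in Python. The helper functions are hypothetical stubs standing in for an OCR engine, a face-matching model, and a registry lookup; they are not any particular vendor’s API, and the 0.9 threshold is arbitrary.

```python
# Illustrative pipeline only: each helper is a stub for a real component.

def extract_document_fields(id_image: bytes) -> dict:
    # Stand-in for OCR: a real system would parse name, number, expiry, etc.
    return {"name": "JANE DOE", "document_number": "X1234567"}

def compare_faces(id_image: bytes, selfie: bytes) -> float:
    # Stand-in for a face matcher returning a similarity score in [0, 1].
    return 0.97

def check_against_registries(fields: dict) -> bool:
    # Stand-in for cross-referencing secure databases.
    return True

def verify_identity(id_image: bytes, selfie: bytes, threshold: float = 0.9) -> bool:
    fields = extract_document_fields(id_image)     # 1. read the document
    score = compare_faces(id_image, selfie)        # 2. match selfie to photo
    records_ok = check_against_registries(fields)  # 3. cross-check the data
    return score >= threshold and records_ok

print(verify_identity(b"<id image bytes>", b"<selfie bytes>"))  # True
```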

If you want to see how global accuracy standards are tested, visit the NIST Face Recognition Vendor Test (FRVT). This benchmark helps developers measure the precision of their facial recognition algorithms.


Why Businesses Are Making the Shift

Let’s be honest, no one likes waiting days to get verified. Businesses know that, and users expect speed. So, they’re shifting from manual checks to identity verification solutions that deliver results in real time.

ID verification software gives businesses an edge by:

Cutting down on manual reviews

Reducing fraud risks through AI analysis

Staying compliant with rules like GDPR

Enhancing global accessibility

A McKinsey & Company study found that businesses using automated ID verification checks experienced up to 70% fewer fraudulent sign-ups. Another Gartner analysis (2023) reported that automation in verification reduces onboarding costs by over 50%.

So, businesses aren’t just going digital for fun; they’re doing it to stay alive in a market where users expect instant trust.


The Technology Making It All Possible

Every smooth verification hides some serious tech genius. Artificial intelligence detects tampered IDs or fake lighting, while machine learning improves detection accuracy over time. Facial recognition compares live selfies to document photos, even if your hair color or background lighting changes.

The FRVT 1:1 results show that today’s best facial recognition models are over 20 times more accurate than they were a decade ago, according to NIST.

Optical Character Recognition (OCR) handles the text on IDs, and encryption ensures data privacy. It’s these small but powerful innovations that make modern ID document verification fast, secure, and scalable.

Want to explore real-world tech examples? Visit the Recognito Vision GitHub, where you can see how advanced verification systems are built from the ground up.


Why It’s a Smart Investment

Investing in reliable ID verification solutions isn’t just about compliance; it’s about building customer trust. When users feel safe, they’re more likely to finish sign-ups and come back.

According to Statista’s 2024 Digital Trust Report, companies using digital identity verification saw conversion rates increase by 30–35%. That’s because users today value both speed and security.

So, when you invest in this technology, you’re not just protecting your business. You’re giving users the confidence to engage without hesitation.

Where ID Verification Shines

The beauty of user ID verification is that it works across every industry. It’s not just for banks or fintech startups.

In finance, it prevents money laundering and fraud.

In healthcare, it confirms patient identities for telemedicine.

In eCommerce, it helps fight fake orders and stolen cards.

In gaming, it enforces age restrictions.

In ridesharing and rentals, it keeps both parties safe.

According to a 2022 IBM Security Study, 82% of users say they trust companies more when those companies use digital identity checks. That’s how powerful this technology is; it builds credibility while keeping everyone safe.


Recognito Vision’s Role in Modern Verification

For businesses ready to step into the future, Recognito Vision makes it simple. Their ID document recognition SDK helps developers integrate verification directly into apps, while the ID document verification playground lets anyone test the process firsthand.

Recognito’s platform blends AI accuracy, fast processing, and user-friendly design. The result? Businesses verify customers securely while users hardly notice it’s happening. That’s efficiency at its best.


Challenges to Consider

Of course, nothing’s perfect. Some users hesitate to share IDs online, and global documents come in thousands of formats. Integrating verification tools into older systems can also feel tricky.

However, choosing a trustworthy ID verification provider can solve most of these issues. As Gartner’s 2024 Cybersecurity Trends Report points out, companies that adopt verified digital identity frameworks see significantly fewer data breaches than those using manual checks.

So while there are challenges, the benefits easily outweigh them.


The Road Ahead

The next phase of digital identity verification is all about control and privacy. Imagine verifying yourself without even sharing your ID. That’s what decentralized identity systems and zero-knowledge proofs are bringing to life.

According to the PwC Global Economic Crime Report 2024, widespread digital ID verification could save over $1 trillion in fraud losses by 2030. That’s not science fiction, it’s happening right now.

The world is heading toward frictionless, instant trust. And businesses that adopt early will lead the pack.


Final Thoughts

At its core, ID verification services aren’t just about checking who someone is. They’re about creating confidence for users, for businesses, and for the digital world as a whole.

If you’re a company ready to modernize and protect your platform, explore Recognito Vision’s identity verification solutions. Because in an era of deepfakes, scams, and cyber tricks, the smartest move is simply knowing who you’re dealing with safely, quickly, and confidently.


Frequently Asked Questions


1. What are ID verification services and how do they work?

ID verification services confirm a person’s identity by analyzing official ID documents and matching them with facial or biometric data using AI technology.


2. Why are ID verification services important for businesses?

They help businesses prevent fraud, comply with KYC regulations, and build customer trust through secure and fast verification processes.


3. Is digital ID verification secure for users?

Yes, digital ID verification is highly secure because it uses encryption, biometric checks, and data protection standards to keep user information safe.


4. How do ID verification services help reduce fraud?

They detect fake or stolen IDs, verify real users instantly, and prevent unauthorized access, reducing fraud risk significantly.


5. What should businesses look for in an ID verification provider?

Businesses should look for providers that offer fast results, global document support, strong data security, and full regulatory compliance.

Wednesday, 15. October 2025

Anonym

DVAM 2025: MySudo discount for survivors of domestic violence


October is National Domestic Violence Awareness Month (DVAM), an annual event dedicated to shedding light on the devastating impact of domestic violence and advocating for those affected. 

The theme for DVAM 2025 is With Survivors, Always, which is exploring what it means to be in partnership with survivors towards safety, support, and solidarity.

Anonyome Labs stands #WithSurvivors this National Domestic Violence Awareness Month and every day—and is proud to help empower safety through privacy for survivors of domestic violence via our Sudo Safe Initiative.

What is the Sudo Safe Initiative?

The Sudo Safe Initiative is a program developed to bring privacy to those at higher risk of verbal harassment or physical violence.

Sudo Safe offers introductory discounts on the MySudo privacy app to help people keep their personally identifiable information private.

You can get a special introductory discount to try MySudo by becoming a Sudo Safe Advocate.

Here’s how it works:

1. Visit our website at anonyome.com.
2. Sign up to be a Sudo Safe Advocate; it’s quick and easy.
3. Once you’re signed up, you’ll receive details on how to access your exclusive discount and start using MySudo.

In addition to survivors of domestic violence, the Sudo Safe Initiative also empowers safety through privacy for:

Healthcare professionals
Teachers
Foster care workers
Volunteers
Survivors of violence, bullying, or stalking.

How can MySudo help survivors of domestic violence?

MySudo allows people to communicate with others without using their own phone number and email address, to reduce the risk of that information being used for tracking or stalking.

With MySudo, a user creates secure digital profiles called Sudos. Each Sudo has a unique phone number, handle, and email address for communicating privately and securely.

The user can avoid making calls and sending texts and emails from their personal phone line and email inbox by using the secure alternative contact details in their Sudos.

No personal information is required to create an account with MySudo through the app stores. 

Download MySudo

Four other ways to help survivors of domestic violence

Educate yourself and others

Learn and share the different types of abuse (physical, emotional, sexual, financial, and technology-facilitated) and how to find local resources and support services. 

Listen without judgment

One of the most powerful things you can offer a domestic violence survivor is support, by doing things like:

Creating a safe space for them to share their experiences without fear of judgment or blame
Letting them express their feelings while validating their emotions
Being willing to listen
Helping them create a safety plan.

Encourage professional support

Encourage your friend or family member experiencing domestic violence to seek help from counselors, therapists, or support groups that specialize in trauma and abuse. You can assist by researching local resources, offering to accompany them to appointments, or helping them find online support communities. Professional guidance can provide survivors with the tools they need to rebuild their lives.

Raise awareness and advocate for change

Support survivors not just during DVAM, but year-round. Find ideas here and learn about the National Domestic Violence Awareness Project.

Become a Sudo Safe Advocate

If your organization can help us spread the word about how MySudo allows at-risk people to interact with others without giving away their phone number, email address, and other personal details, we invite you to become a Sudo Safe Advocate.

As an advocate, you’ll receive:

A toolkit of shareable privacy resources
A guide to safer communication
Special MySudo promotions
Your own digital badge.

Become a Sudo Safe Advocate today.

More information

Contact the National Domestic Violence Hotline.

Learn about the National Domestic Violence Awareness Project.

Learn more about Sudo Safe Initiative and Anonyome Labs.

Anonyome Labs is also a proud partner of the Coalition Against Stalkerware.

The post DVAM 2025: MySudo discount for survivors of domestic violence appeared first on Anonyome Labs.


HYPR

HYPR Delivers the First True Enterprise Passkey for Microsoft Entra ID


For years, the promise of a truly passwordless enterprise has felt just out of reach. We’ve had passwordless for web apps, but the desktop remained a stubborn holdout. We’ve seen the consumer world embrace passkeys, but the solutions were built for convenience, not the rigorous security and compliance demands of the enterprise. This created a dangerous gap, a world where employees could access a sensitive cloud application with a phishing-resistant passkey, only to log in to their workstation with a phishable password.

That gap closes today.

HYPR is proud to announce our partnership with Microsoft to deliver the industry's first true enterprise-grade passkey solution. By integrating HYPR’s non-syncable, FIDO2 passkeys directly with Microsoft Entra ID, we are finally eliminating the last password and providing a unified, phishing-resistant authentication experience from the desktop to the cloud.

What is the Difference Between Enterprise and Other Passkeys?

The term "passkey" has become a buzzword, but not all passkeys are created equal. The synced, consumer-grade passkeys offered by large tech providers are a fantastic step forward for the public, but they present significant challenges for the enterprise:

Loss of Control: Synced passkeys are stored in third-party consumer cloud accounts, outside of enterprise control and visibility.
Security Gaps: They are designed to be shared and synced by users, which can break the chain of trust required for corporate assets.
The Workstation Problem: They do not natively support passwordless login for enterprise workstations (Windows/macOS), leaving the most critical entry point vulnerable.

For the enterprise, you need more than convenience. You need control, visibility, and end-to-end security. You need an enterprise passkey.

Introducing HYPR Enterprise Passkeys for Microsoft Entra ID

HYPR’s partnership with Microsoft directly addresses the enterprise passkey gap. Our solution is purpose-built for the demands of large-scale, complex IT environments that rely on Microsoft for their identity infrastructure.

This isn't a retrofitted consumer product. It's a FIDO2-based, non-syncable passkey that is stored on the user's device, not in a third-party cloud. This ensures that your organization retains full ownership and control over the credential lifecycle.

With a single, fast registration, your employees can use one phishing-resistant credential to unlock everything they need:

Passwordless Desktop Login: Users log in to their Entra ID-joined Windows workstations using the HYPR Enterprise Passkey on their phone. No password, no phishing, no push-bombing.
Seamless SSO and App Access: That same secure login event grants them a Primary Refresh Token (PRT), seamlessly signing them into all their Entra ID-protected applications without needing to authenticate again.

Why Is This a Game-Changer for Microsoft Environments?

This partnership isn't just about adding another MFA option; it's about fundamentally upgrading the security posture of your entire Microsoft ecosystem.

Effortless Deployment: Go Passwordless in Days, Not Quarters

You’ve invested heavily in the Microsoft ecosystem. Now, you can finally maximize that investment by eliminating the #1 cause of breaches: the password. The HYPR and Microsoft partnership makes true, end-to-end passwordless authentication a reality.

There are no complex federation requirements, no painful certificate management, and no AD dependencies. It's a simple, lightweight deployment that allows you to roll out phishing-resistant MFA across your entire workforce in days, not quarters.

Empower your employees with fast, frictionless access that works everywhere they do. And empower your security team with the control and assurance that only a true enterprise passkey can provide.

Ready to bring enterprise-grade passkeys to your Microsoft environment? Schedule your personalized demo today.

Enterprise Passkey FAQ

Q: What is a "non-syncable" passkey?

A:  A non-syncable passkey is a FIDO2 credential that is bound to the user's physical device and cannot be copied, shared, or backed up to a third-party cloud. This provides a higher level of security and assurance because the enterprise maintains control over where the credential resides.
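For the technically curious, here is a minimal Python sketch of how a relying party can tell a device-bound passkey from a synced one. It reads the BE (backup eligibility, bit 3) and BS (backup state, bit 4) flags that the WebAuthn specification defines in the authenticator data; it illustrates the standard itself, not HYPR’s product internals.

```python
# WebAuthn authenticator data layout: 32-byte rpIdHash, 1 flags byte,
# 4-byte signCount, then optional attested credential data and extensions.

def is_device_bound(authenticator_data: bytes) -> bool:
    flags = authenticator_data[32]
    backup_eligible = bool(flags & 0x08)  # BE: credential may be backed up/synced
    backed_up = bool(flags & 0x10)        # BS: credential is currently backed up
    # A non-syncable credential is never backup-eligible.
    return not backup_eligible and not backed_up

# Example: flags byte 0x05 sets only UP (bit 0) and UV (bit 2).
sample = bytes(32) + bytes([0x05]) + bytes(4)
assert is_device_bound(sample)
```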

Q: How is this different from using an authenticator app for MFA?

A: Authenticator apps that use OTPs or push notifications are still susceptible to phishing and push-bombing attacks. HYPR Enterprise Passkeys are based on the FIDO2 standard, which is cryptographically resistant to phishing, man-in-the-middle, and other credential theft attacks.

Q: What does the deployment process look like?

A: Deployment is designed to be fast and lightweight. It involves deploying the HYPR client to workstations and configuring the integration within your Microsoft Entra ID tenant. Because there are no federation servers or complex certificate requirements, many organizations can go from proof-of-concept to production rollout in a matter of days.

Q: Does this support Bring-Your-Own-Device (BYOD) scenarios?

A: Yes. The solution is vendor-agnostic and supports both corporate-managed and employee-owned (BYOD) devices, providing a simple, IT-approved self-service recovery flow that keeps users productive without compromising security.


Ocean Protocol

CivicLens : Building the First Structured Dataset of EU Parliamentary Speeches

CivicLens : Building the First Structured Dataset of EU Parliamentary Speeches

A new Annotators Hub challenge

The European Parliament generates thousands of speeches, covering everything from local affairs to international diplomacy. These speeches shape policies that impact millions across Europe and beyond. Yet, much of this discourse remains unstructured, hard to track, and difficult to analyze at scale.

CivicLens, the second and latest task in the Annotators Hub, invites contributors to help change that. Together with Lunor, Ocean is building a structured, research-grade dataset based on real EU plenary speeches. Your annotations will support civic tech, media explainers, and political AI, and will give you the chance to earn a share of the $10,000 USDC prize pool.

What you’ll do

You’ll read short excerpts from speeches and answer a small set of targeted questions:

Vote Intent: Does the speaker explicitly state how they will vote (yes/no/abstain/unclear)?
Tone: Is the rhetoric cooperative, neutral, or confrontational?
Scope of Focus: Is the emphasis on the EU, the speaker’s country, or both?
Verifiable Claims: Does the excerpt contain a factual, checkable claim (flag and highlight the span)?
Topics (multi-label): e.g., economy, fairness/rights, security/defense, environment/energy, governance/procedure, health/education, technology/industry.
Ideological Signal (if any): Is there an inferable stance or framing (e.g., pro-integration, national interest first, market-oriented, social welfare-oriented), or no clear signal?

Each task follows a consistent schema with clear tooltips and examples. Quality is ensured through overlap assignments, consensus checks, and spot audits.
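To make the schema concrete, here is one hypothetical annotation record, expressed as a Python dictionary. The field names and values are illustrative only; they are not Lunor’s actual submission format.

```python
# One made-up annotation for a made-up excerpt; field names are illustrative.
annotation = {
    "excerpt_id": "ep-2024-0317-s042",   # hypothetical excerpt identifier
    "vote_intent": "yes",                # yes / no / abstain / unclear
    "tone": "confrontational",           # cooperative / neutral / confrontational
    "scope": "both",                     # eu / national / both
    "verifiable_claims": [               # flagged spans containing checkable claims
        {"span": [12, 87], "text": "the directive cut emissions by 8% since 2019"}
    ],
    "topics": ["environment/energy", "economy"],  # multi-label
    "ideological_signal": "pro-integration",      # or None if no clear signal
}
```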

Requirements

Good command of written English (reading comprehension and vocabulary)
Ability to recognize when political or ideological arguments are being made
Basic understanding of common political dimensions (e.g., left vs. right, authoritarian vs. libertarian)
Minimum knowledge of international organizations and relations (e.g., what the EU is, roles of member states)
Awareness of what parliamentary speeches are and their general purpose in the context of EU roll call votes

Why it matters

Your contributions will help researchers and civic organizations better understand political debates, predict voting behavior, and make parliamentary discussions more transparent and accessible.

The resulting dataset isn’t just for political analysis; it has broad, real-world applications:

Fact-checking automation: AI models trained on this data can learn to distinguish checkable assertions from opinions or vague claims, helping organizations like PolitiFact, Snopes, or Full Fact prioritize their verification workload
Compliance and policy tracking: Financial compliance platforms, watchdog groups, and regtech firms can detect and monitor predictive or market-moving statements in political and economic discourse
Content understanding and education: News aggregators, summarization tools, and AI assistants (like Feedly or Artifact) can better tag and summarize political content. The same methods can also power educational apps that teach critical thinking and media literacy

Rewards

A total prize pool of $10,000 USDC is available for contributors.

Rewards are distributed linearly based on validated submissions, using the formula:

Your Reward = (Your Score ÷ Sum of All Scores) × Total Prize Pool

The higher the quality and volume of your accepted annotations, the higher your share.
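As a worked example of that formula, here is a short sketch with made-up scores showing how a $10,000 pool would be split:

```python
# Linear payout: reward_i = pool * score_i / sum(scores). Scores are invented.

def distribute(scores: dict[str, float], pool: float = 10_000.0) -> dict[str, float]:
    total = sum(scores.values())
    return {name: pool * score / total for name, score in scores.items()}

print(distribute({"ana": 120.0, "ben": 60.0, "kai": 20.0}))
# {'ana': 6000.0, 'ben': 3000.0, 'kai': 1000.0}  # USDC shares
```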

For full participation details, submission rules, and instructions, visit the quest details page on Lunor Quest.

CivicLens : Building the First Structured Dataset of EU Parliamentary Speeches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Infocert

VPN for lawyers, labour consultants, accountants

Lawyers, labour consultants, accountants: 5 practical ways in which a Business VPN can protect your work and data


Are you a lawyer working away? A smartworking accountant? Do you provide consulting services at clients’ premises? If so, read this article to learn why you should use a Business VPN to connect to a network other than your own. A VPN for business is a valuable professional ally because it helps protect highly sensitive information while guaranteeing secure remote access to professional content wherever you are, even abroad.


So, what exactly can a VPN – a virtual private network – do for you when you work remotely? Here are five practical ways in which using a VPN for remote work can make a difference to professionals and small businesses.

Work from home security

You are working from home and, as always, you have to access business management systems, dashboards, and customer and supplier databases. You may also need to consult or send confidential documents like balance sheets, contracts and court filings. You even have crucial calls and meetings on your agenda to finalise agreements or submit reports. To do all this, you rely on your home router and perhaps use your own laptop or smartphone. Without a VPN to protect your connection, your home network can become a point of vulnerability, a potential entry point for eavesdropping and data breaches. Have you ever thought about what would happen if all the information you work with were to fall into the wrong hands? Your clients’ confidentiality, the security of your work and your own professional reputation would be severely compromised.


A VPN creates an encrypted and therefore secure tunnel between your device and company servers, ensuring cybersecurity and protecting resources and internal communications. In this way, even remotely sharing files with co-workers or customers is absolutely secure. Many premium VPNs also offer additional security tools that protect you from malware, intrusive advertisements, dangerous sites and trackers, and warn you in case of data leaks.


Public Wi-Fi security

On a business trip, you are highly likely to use hotel or airport lounge Wi-Fi to complete a presentation or access your corporate cloud. What could happen without a VPN? Imagine you are waiting for your flight and want to check your email. The moment you connect to the public network and access your mail server, a hacker intercepts your traffic, reads your email and steals your login credentials. You don’t know it, but you have just suffered what is called a man-in-the-middle attack. With a virtual private network, no hacker can see what you do online, even on open Wi-Fi networks.

Accessing national services and portals, even abroad

If you are abroad and need to access essential national websites, portals and services like National Insurance, Inland Revenue, or corporate intranets, you may encounter access limitations and geo-blocking. This is because, for security reasons, some public portals and corporate networks choose to restrict access from foreign IPs. In some cases, the site may not function properly or may not show certain sections.

In these cases, a VPN is absolutely indispensable. Irrespective of where you are physically located, all you need to do is connect to a server in the country whose services you need to simulate a presence there, bypass geo-blocking and gain access to the content you want, while still enjoying an encrypted and protected connection.

Privacy and data security

This aspect is often overlooked. Surfing online without adequate protection endangers the security not only of your own information but also that of your employees, collaborators, suppliers and customers, risking potentially enormous economic and reputational damage.

If you think data breaches only concern big tech companies like Meta, Amazon and Google, you are wrong. Very often hackers and cybercriminals choose to target professional firms or small businesses that fail to pay attention to IT security, underestimating the need for proper tools and protective infrastructures to prevent data breaches.

When dealing with sensitive data, health, legal or financial information on a daily basis, keeping it secure is not just common sense in today’s fully digitalised world, but a legal duty.

Data privacy is as crucial for individuals as it is for companies, because it represents a key element of protection, trust and accountability. It means maintaining control over your personal information and protecting yourself against abuse or misuse that may damage brand reputation or personal security.

Using a VPN for business travel is one of the tools that cybersecurity experts recommend to protect privacy and client data, since, as we have seen, VPNs change your IP address and encrypt your Internet connection, preventing potential intrusions.

Access to international websites and content

If you work with international customers or suppliers, a virtual private network is indispensable. As we have seen, for security reasons, some institutional and professional sites and portals restrict access based on your geographical location. With a VPN, you can simulate your presence in a country other than the one in which you are physically located.

For instance, do you ever need to consult public registers or legal databases in non-EU countries, access tax or customs portals, use SaaS software for foreign markets or monitor the pricing strategies of foreign competitors by accessing local versions of their sites? With a VPN you only need to connect to a server in the country or geographical area you are interested in to bypass geo-blocking and access the resources you need.

Whatever your profession, whatever the size of your company, and wherever you are, a VPN is indispensable to the security and privacy of your work.

The post VPN for lawyers, labour consultants, accountants appeared first on Tinexta Infocert international website.


VPN: a non-technical guide for professionals

What is a VPN? A non-technical guide for professionals

We have been living in a vast digital workplace for some time now, a permanently connected environment that transcends the boundaries of the traditional office to include the sofa at home, airport lounges, hotel rooms, coffee shops and train carriages. In this fluid and constantly evolving digital space, you read the news, shop online, download apps, participate in calls and meetings, answer emails, access sensitive data, perform banking transactions, and more besides, on a daily basis.

But do you ever wonder what happens to your data while you are online? Are you really in control of the information you share, the sites you visit, and the actions you take? Spoiler: a large number of others can see what you do during your daily visits to the Internet. Unless, of course, you use a VPN – a Virtual Private Network – to protect your Internet connection and online privacy.

So, how does a VPN work? A VPN acts as a vigilant and attentive guardian to protect you from prying eyes and malicious attacks.

Who can see what you do online?

Though it might seem so, surfing online is by no means private. Every click you make leaves a trace. These traces form what is called a “digital shadow” or fingerprint. Every time you “touch” something online, many actors monitor, collect or intercept what you do. Who are these people?

1. Your Internet Service Provider (ISP): your provider can track all the sites you visit, when you visit them, and for how long. Not only that, but your provider may store and share certain information with third parties (not only the police and judicial authorities, but even advertisers) for a variable period of time, depending on the type of content, the consent you have given, internal policies and legislation (national and European). In Italy, for example, Internet service providers may retain certain data for up to 10 years.

2. Network administrators: if you connect to corporate or public Wi-Fi, e.g. a hotel network, the network administrator can monitor traffic on that network and thus gain access to information about your online activities.

3. Websites and online platforms: many sites collect browsing data, including through cookies (just think of all those pop-ups that constantly interrupt your browsing), pixels and trackers. This allows them to profile you in order to show you personalised advertisements or sell your data to third parties.

4. Search engines: if you use a traditional search engine like Google, Bing or Yahoo, everything you do is traceable – even if you use “Incognito mode”. If you want to keep your searches private, we suggest using non-traceable search engines such as DuckDuckGo, Qwant, Startpage or Swisscows.

5. Hackers and criminals: surfing online exposes you to daily risks, especially when you choose to connect to unprotected public Wi-Fi networks or surf without the use of security tools like antivirus software, VPNs or anti-malware tools. Credentials, emails, bank details, even your identity, are valuable commodities.

The Internet is not a private house; it is a public square.

Every time you connect to the Internet, your device uses an Internet Protocol (IP) address, which can reveal not only your online identity, but also the location from which you connect. Technically, an IP address is a numerical label assigned by the Internet service provider. Because it is used to identify individual devices among billions of others, it can be regarded as a postal address in the digital world.

When you enter the name of a website (example.com) in your browser’s address bar, your computer has to perform certain operations because it cannot actually read words, only numbers. First of all, the browser locates the IP address corresponding to the site you want (example.com = 192.168.1.1), then, once the location is found, it loads the site onto the screen. An IP address functions like a home address, ensuring that data sent over the Internet always reaches the correct destination.
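
To see that lookup in action, here is a minimal Python sketch using only the standard library. It simply asks your resolver for the address behind a name; the printed value depends on your network and DNS configuration.

```python
# Minimal sketch of the name-to-number lookup described above.
import socket

ip = socket.gethostbyname("example.com")  # ask your resolver for the site's address
print(ip)  # the "postal address" your browser actually connects to
```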

This identifier is visible to all the subjects listed above.

Not only that, but the information you routinely exchange online – passwords, emails, documents and sensitive data – often travels in “plaintext”, i.e. without being encrypted. This means that anyone who manages to intercept it on its way through the network can read or copy it. Think of sending a postcard: anyone intercepting it on the way can read its contents, your name, the recipient’s address and so on. The same happens with your online data. Not using adequate protection systems, like a VPN, is like leaving your front door open. Would you ever do that?

How does a VPN work?

Typically, when you attempt to access a website, your Internet provider receives the request and directs it straight to the desired destination. A VPN, however, directs your Internet traffic through a remote server before sending it on to its destination, creating an encrypted tunnel between your device and the Internet. This tunnel not only secures the data you send and receive, but also hides it from outside eyes, providing you with greater privacy and online security. A VPN also changes your real IP address (i.e. your digital location), e.g. Milan, and replaces it with that of the remote server you have chosen to connect to, e.g. Tokyo. In this way, no one – neither your Internet provider, nor the sites you visit, nor any malicious attackers – can know where you are really connecting from.

It is as if the virtual public square, where everyone sees and listens, turns into a closed room, invisible to those outside, at the click of a button.

This, in brief, is how a virtual private network works:

1. First, the VPN server identifies you by authenticating your client.

2. The VPN server applies an encryption protocol to all the data you send and receive, making it unreadable to anyone trying to intercept it.

3. The VPN creates a virtual, secure “tunnel” through which your data travels to its destination, so that no one can access it without authorisation.

4. The VPN wraps each data packet inside an encrypted outer packet (an “envelope”), a process known as encapsulation. The envelope is the essential element of the VPN tunnel that keeps your data safe during transfer.

5. When the data reaches the server, the external packet is removed through a decryption process.
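
To make these steps concrete, here is a toy Python sketch of the encrypt-then-encapsulate idea, using the third-party cryptography library. It is purely illustrative, not a real VPN (real VPNs negotiate keys and speak protocols like WireGuard or OpenVPN), and vpn-server.example.net is a made-up hostname.

```python
# Toy illustration of steps 2-5: encrypt the inner packet, wrap it in an
# outer "envelope", then unwrap and decrypt at the VPN server. Not a real VPN.
from cryptography.fernet import Fernet

tunnel_key = Fernet(Fernet.generate_key())  # shared secret set up during authentication (step 1)

inner_packet = b"GET https://example.com/confidential-report"
sealed = tunnel_key.encrypt(inner_packet)   # step 2: encryption

envelope = {                                # step 4: encapsulation
    "to": "vpn-server.example.net",         # observers see only this hop
    "payload": sealed,                      # the real request stays opaque
}

# step 5: the VPN server opens the envelope and decrypts the payload
assert tunnel_key.decrypt(envelope["payload"]) == inner_packet
```

An observer on the network sees only the envelope’s destination and an opaque payload; the real request stays sealed until it reaches the server.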

Using a VPN should be part of your digital hygiene

Every professional should use a VPN, not only when working remotely or using public Wi-Fi, but as an essential tool to surf more securely, privately and responsibly, day after day. You can think of a VPN as a habit of digital hygiene that provides greater privacy and an additional layer of protection against potential online threats.

A VPN:

● encrypts your data, protecting you from prying eyes
● changes your real IP, protecting your identity
● routes your data through remote servers, creating a secure and private tunnel
● stops your Internet provider and other third parties from tracking your data.

To sum up, a VPN is not just a tool for special situations like using public Wi-Fi or accessing restricted content. Nor is it only for experienced users and cybersecurity enthusiasts. On the contrary, it is an essential tool – a “must-have” – for all professionals and individuals who want to inhabit the digital space that surrounds us with greater awareness and less fear.

The post VPN: a non-technical guide for professionals appeared first on Tinexta Infocert international website.

Tuesday, 14. October 2025

Spruce Systems

Digital Identity Policy Momentum

This article is the second installment of our series: The Future of Digital Identity in America.

Read the first installment in our series on The Future of Digital Identity in America here.

Technology alone doesn’t change societies; policy does. Every leap forward in digital infrastructure (whether electrification, the internet, or mobile payments) has been accelerated or slowed by policy. The same is true for verifiable digital identity. The question today isn’t whether the technology works; it does. The question is whether policy frameworks will make it accessible, trusted, and interoperable across industries and borders.

Momentum is building quickly. State legislatures, federal agencies, and international bodies are beginning to treat verifiable digital identity not as a niche experiment, but as critical public infrastructure. In this post, we’ll explore how policy is shaping digital identity, from U.S. state laws to European regulations, and why governments care now more than ever.

States Leading the Way: Laboratories of Democracy

In the U.S., states have become the proving ground for verifiable digital identity. Seventeen states, including California, New York, and Georgia, already issue mobile driver’s licenses (mDLs) that are accepted at more than 250 TSA checkpoints. By 2026, that number is expected to double, with projections of 143 million mDL holders by 2030, according to ABI Research forecasts.

Seventeen states now issue mobile driver’s licenses accepted at more than 250 TSA checkpoints - digital ID is already real, growing faster than many expected.

California’s DMV Wallet offers one of the most comprehensive examples. In less than two years, over two million Californians have provisioned mobile driver’s licenses, which can be used at TSA checkpoints, in convenience stores for age-restricted purchases, and even online to access government services—real, everyday transactions that people recognize. In addition to the digital licenses, more than 42 million vehicle titles have been digitized using blockchain, making it easier for people to transfer ownership, register cars, or prove title history without mountains of paperwork. Businesses can verify credentials directly, residents can present them online or in person, and the system is designed to work across states and industries. In other words, this program demonstrates that digital identity can scale to millions of people and millions of records while solving real problems.

California’s DMV Wallet has issued over two million mDLs and has digitized over 42 million vehicle titles using blockchain - demonstrating trustworthiness at scale.

Utah took a different approach by legislating principles before widespread deployment. SB 260, passed in 2025, lays down a bill of rights for digital identity. Citizens cannot be forced to unlock their phones to present a digital ID. Verifiers cannot track or build profiles from ID use. Selective disclosure must be supported, allowing people to prove an attribute, like being over 21, without revealing unnecessary details. Digital IDs remain optional, and physical IDs must continue to be accepted. Utah’s framework shows how policy can proactively protect civil liberties while enabling innovation.
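
To illustrate the mechanics behind selective disclosure, here is a minimal Python sketch of the salted-hash approach used by formats like SD-JWT. It is a simplification with hypothetical values; real credentials also carry an issuer signature over the digests and a full presentation protocol.

```python
# Sketch of salted-hash selective disclosure: the verifier learns only the
# claim the holder chooses to reveal, yet can still check its integrity.
import hashlib, json, os

claims = {"name": "Alex Doe", "birthdate": "1999-04-02", "over_21": True}

# Issuer: salt and hash every claim; in a real credential, the digests
# (not the raw values) are what the issuer signs.
disclosures = {k: (os.urandom(16).hex(), v) for k, v in claims.items()}
signed_digests = {
    k: hashlib.sha256(f"{salt}{json.dumps(v)}".encode()).hexdigest()
    for k, (salt, v) in disclosures.items()
}

# Holder: hand over only the "over_21" salt and value, nothing else.
salt, value = disclosures["over_21"]

# Verifier: recompute the digest and match it against the signed set.
digest = hashlib.sha256(f"{salt}{json.dumps(value)}".encode()).hexdigest()
assert digest == signed_digests["over_21"] and value is True
```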

Utah’s SB 260 doesn’t just pilot identity tech - it builds in privacy and choice from day one, naming those values as rights.

Together, California and Utah illustrate a spectrum of policymaking. One demonstrates what’s possible with rapid deployment at scale - how quickly millions of people can adopt new credentials when the technology is made practical and widely available. The other shows how legislation can proactively embed privacy and choice into the foundations of digital identity, creating durable protections that guard against misuse as adoption grows. Both approaches are valuable: California proves the model can work in practice, while Utah ensures it works on terms that respect civil liberties. Taken together, they show that speed and safeguards are not opposing forces, but complementary strategies that, if aligned, can accelerate trust and adoption nationwide.

Federal Engagement: Trust, Security, and Compliance

Federal agencies are also stepping in, linking digital identity to national security and resilience. The Department of Homeland Security (DHS) is piloting verifiable digital credentials for immigration—a use case where both accuracy and accessibility are essential.

Meanwhile, the National Institute of Standards and Technology (NIST), through its National Cybersecurity Center of Excellence (NCCoE), has launched a hands-on mDL initiative. In collaboration with banks, state agencies, and technology vendors (including 1Password, Capital One, Microsoft, and SpruceID, among others), the project is building a reference architecture demonstrating how mobile driver’s licenses and verifiable credentials can be applied in real-world use cases: CIP/KYC onboarding, federated credential service providers, and healthcare/e-prescribing workflows. The NCCoE has already published draft CIP/KYC use-case criteria, wireframe flows, and a sample bank mDL information page to show how a financial institution might integrate and present mDLs to customers—bringing theory into usable models for regulation and deployment. 

Why the urgency? Centralized identity systems are prime targets for adversaries. Breach one large database, and millions of people’s information is compromised. Decentralized approaches change that risk equation by sharding and encrypting user data, reducing the value of any single “crown jewel” target.

Decentralized identity reshapes the risk equation—no single crown jewel database for adversaries to breach.

Policy is also catching up to compliance challenges in financial services. In July 2025, Congress passed the Guiding and Establishing National Innovation for U.S. Stablecoins (GENIUS) Act, which, among other provisions, directs the U.S. Treasury to treat stablecoin issuers as financial institutions under the Bank Secrecy Act (BSA). Section 9 of the Act requires Treasury to solicit public comment on innovative methods to detect illicit finance in digital assets, including APIs, artificial intelligence, blockchain monitoring, and (critically) digital identity verification.

Treasury’s August 2025 Request for Comment (RFC) builds directly on this mandate. It seeks input on how portable, privacy-preserving digital identity credentials can support AML/CFT and sanctions compliance, reduce fraud, and lower compliance costs for financial institutions. Importantly, the RFC recognizes privacy as a design factor, asking specifically about risks from over-collection of personal data, the sensitivity of information reviewed, and how to implement safeguards alongside compliance.

This is a significant shift: digital identity is not only being framed as a user-rights issue or a convenience feature, but also as a national security and financial stability priority. By embedding identity into the GENIUS Act’s framework for stablecoins and BSA modernization, policymakers are effectively saying that modernized, cryptographically anchored identity is essential for the resilience of U.S. markets.

The European Example: eIDAS 2.0

While the U.S. pursues a patchwork of state pilots and federal engagement, Europe has opted for a coordinated regulatory approach. In May 2024, eIDAS 2.0 came into force, requiring every EU Member State to issue a European Digital Identity Wallet by 2026.

The regulation mandates acceptance across public services and major private sectors like banks, telecoms, and large online platforms. Privacy is baked into the requirements: wallets must be voluntary and free for citizens, support selective disclosure, and avoid central databases. Offline QR options are also mandated, ensuring usability even without connectivity.

Europe is treating digital identity as a right: free, voluntary, private, and accepted across borders.

Why does this matter? For citizens, it means one-click onboarding across borders. For businesses, it means lower compliance costs and reduced fraud. For the EU, it’s a step toward digital sovereignty, reducing dependency on foreign platforms and asserting leadership in global standards.

Identity as Infrastructure

Look closely, and a pattern emerges: policymakers are treating identity as infrastructure. Like roads, grids, or communications networks, identity is a shared resource that underpins everything else. Without it, markets stumble, governments waste resources, and citizens lose trust. With it, economies run smoother, fraud drops, and individuals gain autonomy.

Identity is infrastructure—like roads or grids, it underpins every modern economy and democracy.

This framing (identity as infrastructure) helps explain why governments care now. Fraud losses are staggering, trust in institutions is fragile, and AI is amplifying risks at unprecedented speed. Policy is not just reacting to technology; it’s shaping the conditions for decentralized identity to succeed.

Risks of Policy Done Wrong

Of course, not all policy is good policy. Poorly designed frameworks could centralize power, entrench surveillance, or create vendor lock-in. Imagine if a single state-issued wallet were mandatory for all services, or if verifiers were allowed to log every credential presentation. The result would be digital identity as a tool of control, not freedom.

That’s why principles matter. Utah’s SB 260 is instructive: user consent, no tracking, no profiling, open standards, and continued availability of physical IDs. These are not just policy features; they are guardrails to keep digital identity aligned with democratic values.

Privacy as Policy: Guardrails Before Growth

Alongside momentum in statehouses and federal pilots, civil liberties organizations have raised a critical warning: digital identity cannot scale without strong privacy guardrails. Groups like the ACLU, EFF, and EPIC have cautioned that mobile driver’s licenses (mDLs) and other digital ID systems risk entrenching surveillance if designed poorly.

The ACLU’s Digital ID State Legislative Recommendations outline twelve essential protections: from banning “phone-home” tracking and requiring selective disclosure, to preserving the right to paper credentials and ensuring a private right of action for violations. EFF warns that without these safeguards, digital IDs could “normalize ID checks” and make identity presentation more frequent in American life.

The message is clear: technology alone isn’t enough. Policy must enshrine privacy-preserving features as requirements, not optional extras. Utah’s SB 260 points in this direction by mandating selective disclosure and prohibiting tracking. But the broader U.S. landscape will need consistent frameworks if decentralized identity is to earn public trust.

We'll explore these principles in greater depth in a later post in this series, where we examine how civil liberties critiques shape the design of decentralized identity and why policy and technology must work together to prevent surveillance creep.

SpruceID’s Perspective

At SpruceID, we sit at the intersection of policy and technology. We’ve helped launch California’s DMV Wallet, partnered on Utah’s statewide verifiable digital credentialing framework, and collaborated with DHS on verifiable digital immigration credentials. We also contribute to global standards bodies, such as the W3C and the OpenID Foundation, ensuring interoperability across jurisdictions.

Our perspective is simple: decentralized identity must remain interoperable, privacy-preserving, and aligned with democratic principles. Policy can either accelerate this vision or derail it. The frameworks being shaped today will determine whether decentralized identity becomes a tool for empowerment or for surveillance.

Why Governments Care Now

The urgency comes down to four forces converging at once:

1. Fraud costs are exploding. In 2024, Americans reported record losses - $16.6 billion to internet crime (FBI IC3) and $12.5 billion to consumer fraud (FTC). On the institutional side, the average U.S. data breach cost hit $10.22 million in 2025, the highest ever recorded (IBM).
2. AI is raising the stakes. Synthetic identity fraud alone accounted for $35 billion in losses in 2023 (Federal Reserve). FinCEN has warned that criminals are now using generative AI to create deepfake videos, synthetic documents, and realistic audio to bypass identity checks and exploit financial systems at scale.
3. Global trade requires interoperability. Cross-border commerce depends on reliable, shared frameworks for verifying identity. Without them, compliance costs balloon and innovation slows.
4. Citizens expect both privacy and convenience. People want frictionless, consumer-grade experiences from digital services, but they will not tolerate surveillance or being forced into a single system.

Policymakers increasingly see decentralized identity as a way to respond to all four at once. By reducing fraud, strengthening democratic resilience, supporting global trade, and protecting privacy, decentralized identity offers governments both defensive and offensive advantages.

The Policy Frontier

We are standing at the frontier of decentralized identity. States are pioneering real deployments. Federal agencies are tying identity to national security and compliance. The EU is mandating wallets as infrastructure. Around the world, policymakers are realizing that identity is not just a product, it’s the scaffolding for digital trust.

The decisions made in statehouses, federal agencies, and international bodies over the next few years will shape how identity works for decades. Done right, verifiable digital identity can become the invisible infrastructure of freedom, convenience, and security. Done wrong, it risks becoming another layer of surveillance and control.

That’s why SpruceID is working to align policy with technology, ensuring that verifiable digital identity is built on open standards, privacy-first principles, and user control. Governments care now because the stakes have never been higher. And the time to act is now.

This article is part of SpruceID’s series on the future of digital identity in America.

Subscribe to be notified when we publish the next installment.


Ocean Protocol

DF159 Completes and DF160 Launches

Predictoor DF159 rewards available. DF160 runs October 16th — October 23rd, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor.

Data Farming Round 159 (DF159) has completed.

DF160 is live, October 16th. It concludes on October 23rd. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF160 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF160

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean and DF Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF159 Completes and DF160 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Spherical Cow Consulting

Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet


“I had one of those chance airplane conversations recently—the kind that sticks in your mind longer than the flight itself.”

My seatmate was reading a book about artificial intelligence, and at one point, they described the idea of an “infinitely growing AI.” I couldn’t help but giggle a bit. Not at them, but at the premise.

An AI cannot be infinite. Computers are not infinite. We don’t live in a world where matter and energy are limitless. There aren’t enough chips, fabs, minerals, power plants, or trained engineers to sustain an infinite anything.

This isn’t just a nitpicky detail about science fiction. It gets at something I’ve written about before:

In Who Really Pays When AI Agents Run Wild? I noted that scaling AI systems isn’t just about clever protocols or smarter algorithms. Every prompt, every model run, every inference carries a cost in water, energy, and hardware cycles.
In The End of the Global Internet, I argued that we are already moving toward a fractured network where national and regional policies shape what’s possible online.

The “infinite AI” conversation is an example that ties both threads together. We may dream about global systems that grow without end, but the reality is that technology is built on finite supply chains. It’s those supply chains that are turning out to be the real bottleneck for the future of the Internet.

A Digital Identity Digest: Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet (podcast episode, 15:19)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The real limits aren’t protocols

When people in the identity and Internet standards space talk about limits, we often point to protocols. Can the protocol scale? Will a new protocol successfully replace cookies? Can we use existing protocols to manage delegation across ecosystems?

These are important questions, but they are not the limiting factor. Protocols, after all, are words in documents and lines of code. They can be revised, extended, and reinvented. The hard limits come from the physical world.

Chips and fabs. Advanced semiconductors require fabrication plants that cost tens of billions of dollars and take years to build. Extreme ultraviolet lithography machines (say that five times, fast) are produced (as of 2023) by exactly one company in the Netherlands—ASML—and delivery schedules are measured in years.

Minerals and materials. Every computer depends on a handful of rare inputs: lithium for batteries, cobalt for electrodes, rare earth elements for magnets, neon for chipmaking lasers, high-purity quartz for wafers. These are not evenly distributed across the globe. China dominates rare earth refining, while Ukraine has been a critical source of neon. And there is no substitute for water in semiconductor production.

Power and cooling. Training a frontier AI model consumes gigawatt-hours of electricity. Running hyperscale data centers requires water for cooling that rivals the consumption of entire towns. When power grids are strained, there’s no protocol that can fix it.

People. None of this runs itself. Chip designers, process engineers, cleanroom technicians, miners, metallurgists—these are highly specialized roles. Many countries face aging workforces and immigration restrictions in the established tech hubs, and uneven education in the regions where populations are booming.

You can’t standardize your way out of these shortages. You can only manage, redistribute, or adapt to them.

Geopolitics and demographics

The Internet was often described as “borderless,” but the hardware that makes it run is anything but. Supply chains for semiconductors, network equipment, and the minerals that feed them are deeply entangled with geopolitics and demographics.

No region has a fully independent pipeline:

The US leads in chip design but depends on the Indo-Pacific region for chip manufacturing.
China dominates rare earth refining but relies on imports of high-end chipmaking tools it cannot yet build domestically.
Europe has niche strengths in lithography and specialty equipment but lacks the scale for end-to-end independence.
Countries like Japan, India, and Australia supply critical inputs—from silicon wafers to rare earth ores—but not the whole stack.

This interdependence is not an accident. Globalization optimized supply chains for efficiency, not resilience. Each region specialized in the step where it had a comparative advantage, creating a finely tuned but fragile web.

Demographics add another layer. Many of the most skilled engineers in chip design and manufacturing are reaching retirement age. The same is true for technical standards architects; they are an aging group. Training replacements takes years, not months. Immigration restrictions in key economies further shrink the talent pool. Even if we had the minerals and the fabs, we might not have the people to keep the pipelines running.

The illusion of global resilience

For decades, efficiency reigned supreme. Tech companies embraced just-in-time supply chains. Manufacturers outsourced to the cheapest reliable suppliers. Investors punished redundancy as waste.

That efficiency gave us cheap smartphones, affordable cloud services, and rapid AI innovation. But it also created a brittle system. When one link in the chain breaks, the effects cascade:

A tsunami in Japan or a drought in Taiwan can disrupt global chip supply.
A geopolitical dispute can halt exports of critical minerals overnight.
A labor strike at a port can ripple through shipping networks for months.

We saw this during the 2020–2023 global chip shortage. A pandemic-driven demand spike collided with supply chain shocks: a fire at a Japanese chip plant, drought in Taiwan, and war in Ukraine cutting off neon supplies. Automakers idled plants. Consumer electronics prices rose. Lead times stretched into years.

AI at scale only magnifies the problem. Training one large model requires thousands of specialized GPUs. If one upstream material is constrained—say, the gallium used in semiconductors—it doesn’t matter how advanced your algorithms are. The model doesn’t get trained.

Cross-border dependencies never vanish

This is where the conversation loops back to the idea of a “global Internet.” Even if the Internet fragments into national or regional spheres—the “splinternet” scenario—supply chains remain irreducibly cross-border.

You can build your own national identity system. You can wall off your data flows. But you cannot build advanced technology entirely within your own borders without enormous tradeoffs.

A U.S. data center may run on American-designed chips, but those chips likely contain rare earths refined in China.
A Chinese smartphone may use domestically assembled components, but the photolithography machine that patterned its chips came from Europe.
An EU-based AI startup may host its models on European servers, but the GPUs were packaged and tested in Southeast Asia.

Fragmentation at the protocol and governance level doesn’t erase these dependencies. It only adds new layers of complexity as governments try to manage who trades with whom, under what terms, and with what safeguards.

The myth of “digital sovereignty” often ignores the material foundations of technology. Sovereignty over protocols does not equal sovereignty over minerals, fabs, or skilled labor.

Opportunities in regional diversity

If infinite AI is impossible and total independence is unrealistic, what’s left? One answer is regional diversity.

Instead of assuming we can build one perfectly resilient global supply chain, we can design multiple overlapping regional ones. Each may not be fully independent, but together they reduce the risk of “one failure breaks all.”

Examples already in motion:

United States. The CHIPS and Science Act is pouring billions into domestic semiconductor manufacturing (though how long that act will be in place is in question). The U.S. is also investing in rare earth mining and processing, though environmental and permitting challenges remain.

European Union. The EU Raw Materials Alliance is working to secure critical mineral supply and recycling. European firms already lead in certain high-end equipment niches.

Japan and South Korea. Both countries are investing in duplicating supply chain segments currently dominated by China, such as battery materials.

India. The country has ambitious plans to build local chip fabs and become a global assembly hub.

Australia and Canada. Positioned as suppliers of critical minerals, Australia and Canada are working to move beyond extraction to refining.

Regional chains come with tradeoffs: higher costs, slower rollout, and sometimes redundant investments. But they create buffers. If one region falters, others can pick up slack.

They also open the door to more design diversity. Different regions may approach problems in distinct ways, leading to innovation not just in technology but in governance, regulation, and labor practices.

Reframing the narrative

So let’s come back to that airplane conversation. The myth of infinite AI (or infinite cloud computing, for that matter) isn’t just bad science fiction. It’s a misunderstanding of how technology actually grows.

AI, like the Internet itself, is bounded by the real world. Protocols matter, but they are only the top layer. Beneath them are the chips, the minerals, the power, and the people. Those are the constraints that will shape the next decade.

Which leads us to the current irony in all of this: even as the Internet fragments along political and regulatory lines, the supply chains that support it remain irreducibly global. We can argue about governance models and sovereignty all we like and target tariffs at a whim, but a smartphone or a GPU is still a planetary collaboration.

The challenge, then, isn’t to pretend we can achieve total independence. It’s to design supply chains—local, regional, and global—that acknowledge these limits and build resilience into them.

Looking ahead

When I wrote about The End of the Global Internet, I wanted to show that fragmentation is not just possible, but already happening. But fragmentation doesn’t erase interdependence. It just makes it messier.

When I wrote about Who Pays When AI Agents Run Wild? I wanted to point out that scaling computation is not a free lunch. It comes with bills measured in electricity, water, and silicon.

This post ties both threads together: the real bottlenecks in technology are not the protocols we argue about in standards meetings. They are the supply chains that determine whether the chips, power, minerals, and people exist in the first place.

AI is a vivid example because its appetite is so enormous. But the lesson applies more broadly. The Internet is fracturing into spheres of influence, but those spheres will remain bound by the physical pipelines that crisscross borders.

So the next time someone suggests an infinite AI, or a fully sovereign domestic Internet, remember: computers aren’t infinite. Supply chains aren’t sovereign. The real question isn’t whether we can break free of those facts, it’s how we design systems that can thrive within them.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:29] Welcome back to The Digital Identity Digest. I’m Heather Flanagan, and today, we’re going to dig into one of those invisible but very real limits on our digital future — supply chains.

[00:00:42] Now, I know supply chains don’t sound nearly as exciting as AI agents or new Internet protocols. But stay with me — because without the physical stuff (chips, minerals, power, and people), all of those clever protocols and powerful algorithms don’t amount to much.

[00:01:00] This episode builds on two earlier posts:

Who Really Pays for AI? — exploring how AI comes with a bill in water, electricity, and silicon.
The End of the Global Internet — examining how fragmentation is reshaping the network itself.

Both lead us here: the supply chain is one of the biggest constraints on how far both AI and the Internet can actually go.

[00:01:27] So, if you really want to understand the future of technology, you can’t just look at the code or the protocols.

[00:01:35] You have to look at the supply chains.

The Reality Check: Technology Needs Stuff

[00:01:38] Let’s start with a story. On a recent flight, my seatmate was reading a book about artificial intelligence. Go him.

[00:01:49] At one point, he leaned over and described an idea of an infinitely growing AI.

[00:01:56] I couldn’t help but laugh a little — because computers are not infinite.

[00:02:04] There just aren’t enough chips, fabs, minerals, power plants, or trained people on the planet to sustain infinite anything. It’s not imagination — it’s physics, chemistry, and labor.

[00:02:20] That exchange captured something I keep seeing in conversations about AI, identity, and the Internet. We treat protocols as if they’re the bottleneck. But ultimately, it’s the supply chains underneath that constrain everything.

Chips, Fabs, and the Fragility of Progress

[00:02:38] Let’s break that down — starting with chips and fabricators, also known as fabs.

[00:02:44] The most advanced semiconductors come from fabrication plants that cost tens of billions of dollars to build — and take years, even a decade, to come online.

[00:02:56] And the entire process hinges on one company — ASML in the Netherlands.

[00:03:03] They’re the only supplier of extreme ultraviolet lithography machines. Without those, you simply can’t make the latest generation of chips. The backlog? Measured in years.

[00:03:21] Then there’s the issue of minerals and materials:

Lithium for batteries
Cobalt for electrodes
Rare earth elements for magnets
Neon for chipmaking lasers
High-purity quartz for wafers

[00:03:44] These resources aren’t evenly distributed. China refines most rare earths. Ukraine supplies much of the world’s neon. And water — another critical input — is also unevenly available.

Power, People, and Production

[00:04:05] A frontier AI model doesn’t just use a lot of electricity — it uses gigawatt-hours of power.

[00:04:26] Running a hyperscale data center can consume as much water as a small city. And when power grids are strained, no clever standard can conjure new electrons out of thin air.

[00:04:26] Then there’s the people. None of this runs itself:

Chip designers
Process engineers
Clean room technicians
Miners and metallurgists

[00:04:57] These are highly specialized roles — and many experts are nearing retirement. Replacing them takes years, not months. Immigration limits compound the challenge.

[00:05:05] So yes, protocols matter — but the real limits come from the physical world.

Geopolitics and the Global Supply Web

[00:05:16] The Internet may feel borderless, but the hardware that makes it work is not.

[00:05:26] Every link in the supply chain is tangled in geopolitics:

The U.S. leads in chip design but depends on Taiwan and South Korea for manufacturing.
China dominates rare earth refining but still relies on imported chipmaking tools.
Europe has niche strengths in lithography but lacks materials for full independence.
Japan, India, and Australia provide key raw inputs but not the entire production stack.

[00:06:16] This global interdependence made systems efficient — but also fragile.

Demographics: The Aging Workforce

[00:06:21] There’s also a demographic angle. Skilled engineers and technicians are aging out.

[00:06:35] In about 15 years, we’ll see significant skill gaps. Even if minerals and fabs are available, we might not have the people to keep things running.

[00:06:58] The story isn’t just about where resources are — it’s about who can use them.

The Illusion of Resilience

[00:07:06] For decades, efficiency ruled. Tech companies built “just-in-time” supply chains, outsourcing to low-cost, reliable suppliers.

[00:07:21] That gave us cheap smartphones and rapid innovation — but also brittle systems.

[00:07:38] A few reminders of fragility:

2011: Tsunami in Japan disrupts semiconductor production.
2021: Drought in Taiwan forces fabs to truck in water.
2022: War in Ukraine cuts off neon supplies.
2020–2023: Global chip shortage reveals how fragile everything truly is.

[00:08:18] AI at scale only magnifies this fragility. Even one constrained resource, like gallium, can halt model training — regardless of how advanced the algorithms are.

The Splinternet Still Needs a Global Supply Chain

[00:08:48] Even as the Internet fragments into regional “Splinternets,” supply chains remain global.

[00:09:18] You can wall off your data, but you can’t build advanced tech entirely within one nation’s borders.

Examples include:

A U.S. data center using chips refined with Chinese minerals.
A Chinese smartphone using European lithography tools.
An EU startup running on GPUs packaged in Southeast Asia.

[00:09:46] Fragmentation adds complexity, not independence.

The Myth of Digital Sovereignty

[00:09:46] The idea of total “digital sovereignty” sounds empowering — but it’s misleading.

[00:10:07] You can control protocols, standards, and regulations.
But you can’t control:

Minerals you don’t have
Fabricators you can’t build
Workforces you can’t train

Designing Resilient Regional Systems

[00:10:14] So, what’s the alternative? Regional diversity.

Instead of one global, fragile chain, we can build multiple overlapping regional systems:

U.S.: The CHIPS and Science Act investing in domestic semiconductor manufacturing.
EU: The Raw Materials Alliance strengthening mineral supply and recycling.
Japan & South Korea: Building redundancy in battery and material supply.
India: Launching its “Semiconductor Mission.”
Australia & Canada: Expanding refining capacity for critical minerals.

[00:11:38] Yes, these efforts are costlier and slower — but they build buffers. If one region falters, another can pick up the slack.

The Takeaway: Infinite AI is a Myth

[00:12:06] That airplane conversation sums it up. The myth of infinite AI isn’t just science fiction — it’s a misunderstanding of how technology works.

[00:12:17] AI, like the Internet, is bounded by the real world — by chips, minerals, power, and people.

[00:12:45] Even as the Internet fragments, its supply chains remain irreducibly global.

[00:13:02] The challenge isn’t escaping these limits — it’s designing systems that thrive within them.

Closing Thoughts

[00:13:27] The real bottleneck in technology isn’t protocols — it’s supply chains.

[00:13:48] AI is just the most visible example of how finite our digital ambitions are.

[00:14:13] So, the next time you hear someone talk about “infinite AI” or a “sovereign Internet,” remember:

Computers are not infinite.
Supply chains cannot be sovereign.

[00:14:19] The real question isn’t how to escape those facts — it’s how to build systems that can thrive within them.

Outro

[00:14:19] Thanks for listening to The Digital Identity Digest.

If you enjoyed the episode:

Share it with a colleague or friend.
Connect with me on LinkedIn @hlflanagan.
Subscribe and leave a rating wherever you listen to podcasts.

[00:15:02] You can also find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged — and let’s keep the conversation going.

The post Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet appeared first on Spherical Cow Consulting.


Recognito Vision

The Complete Guide to KYC Verification Online and How It Protects Your Identity


You’ve probably seen that pop-up asking you to verify your identity when signing up for a new banking app or wallet. That’s KYC, short for Know Your Customer. It helps businesses confirm that users are real, not digital impostors trying to pull a fast one.

In the old days, this meant long queues, forms, and signatures. Today, KYC verification online makes that process digital, instant, and painless.

Here’s how the two compare.

| Feature | Traditional KYC | Online KYC Verification |
| --- | --- | --- |
| Time Taken | Days or weeks | A few minutes |
| Method | Manual paperwork | Automated verification |
| Accuracy | Prone to error | AI-based precision |
| Accessibility | Branch visits required | Anywhere, anytime |
| Security | Paper-based | Encrypted and biometric |

According to Deloitte’s “Revolutionising Due Diligence for the Digital Age”, digital verification and automation can drastically improve compliance efficiency and customer experience, both of which are central to modern financial services.

That’s why KYC verification online has become the backbone of secure onboarding for fintechs, banks, and even government platforms.

How KYC Verification Online Actually Works

When you perform a KYC check online, it feels quick and effortless, but behind that simple process, powerful AI is doing the hard work. It matches your selfie with your ID, reads your details using OCR, and cross-checks everything with trusted databases, all in seconds.

Here’s what’s really happening:

You upload your ID (passport, driver’s license, or national ID).

You take a quick selfie using your phone camera.

The system compares your selfie to the photo on your ID using advanced facial recognition.

OCR (Optical Character Recognition) extracts the text from your ID to verify your name, address, and date of birth.

Data is validated against government or regulatory databases.

You get approved, often in under two minutes.

That’s KYC authentication in action: fast, secure, and contact-free.
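
As a rough sketch of how those signals combine into a decision, consider the snippet below. The function, field names, and threshold are hypothetical stand-ins for illustration, not any vendor’s actual API.

```python
# Hypothetical decision logic for an online KYC check: all three signals
# (face match, OCR fields, registry cross-check) must pass together.
REQUIRED_FIELDS = {"name", "address", "birthdate"}

def kyc_decision(face_match: float, ocr_fields: dict, registry_ok: bool,
                 threshold: float = 0.90) -> str:
    """Return 'approved' only when every check clears; otherwise flag for review."""
    if (face_match >= threshold                       # selfie vs. ID photo
            and REQUIRED_FIELDS <= ocr_fields.keys()  # OCR extracted all fields
            and registry_ok):                         # database cross-check passed
        return "approved"
    return "manual review"

print(kyc_decision(
    0.97,
    {"name": "A. Doe", "address": "1 Main St", "birthdate": "1990-01-01"},
    registry_ok=True,
))  # -> approved
```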

According to the NIST Face Recognition Vendor Test (FRVT), today’s leading algorithms are over 20 times more accurate than those used just a decade ago. That leap in precision is one reason why eKYC verification is now trusted by global banks and fintech companies.

Why Businesses Are Switching to KYC Verification Online

No one enjoys filling out endless forms or waiting days for approvals. That’s why businesses everywhere are turning to KYC verify online systems; they make onboarding smoother for customers while cutting costs for organizations.

Some of the biggest reasons behind this shift include:

Faster onboarding times that enhance customer experience.

Greater accuracy from AI-powered checks.

Enhanced fraud detection through biometric validation.

Regulatory compliance with frameworks like GDPR.

Global accessibility for users to verify KYC online anytime, anywhere.

Research by Deloitte Insights notes that organizations automating due diligence and verification processes reduce manual costs while increasing compliance accuracy, a huge win for financial institutions managing high user volumes.

Simply put, online KYC check systems help companies onboard customers faster while minimizing human error and fraud.

Technology Behind Modern KYC Verification Solutions

Every smooth verification process is powered by some serious tech muscle.

Artificial Intelligence (AI) helps detect fraudulent IDs and spot manipulation patterns in photos. Machine learning continuously improves accuracy by learning from new data. Facial recognition verifies your selfie against your ID photo with pinpoint precision, tested under the NIST FRVT benchmark.

Meanwhile, Optical Character Recognition (OCR) pulls data from your documents instantly, and encryption technologies protect that data as it moves across systems.
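
As a small illustration of the OCR step, here is a sketch using the open-source pytesseract library (it requires the Tesseract engine installed, and real KYC products use their own document-reading pipelines; id_card.png is a placeholder filename).

```python
# Illustrative OCR step: pull raw text from a scanned ID image.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("id_card.png"))
print(text)  # raw text; a parser would then extract name, address, birthdate
```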

For developers and organizations wanting to implement their own KYC verification solutions, Recognito’s face recognition SDK and ID document recognition SDK are reliable tools that simplify integration.

You can also explore Recognito’s GitHub repository to see how real-time AI verification systems evolve in practice.

How to Verify Your KYC Online Without the Hassle

If you haven’t tried KYC verification online yet, it’s simpler than you think. Just open the app, upload your ID, take a selfie, and let the system handle the rest.

Most platforms now allow you to check online KYC status in real time. You’ll see exactly when your verification moves from “in review” to “approved.”

Curious about how it all works behind the scenes? Try the ID Document Verification Playground. It’s an interactive way to see how modern KYC systems scan, process, and authenticate IDs, with no real data required.

According to Allied Market Research, the global eKYC verification market is expected to reach nearly $2.4 billion by 2030, growing at over 22% CAGR. That surge shows just how essential digital KYC has become to the future of online services.

The Future of KYC Authentication

The next generation of KYC authentication is going to feel almost invisible. Biometric technology and AI are merging to make verification instant; imagine unlocking your account just by looking at your camera.

In India, systems like UIDAI’s Aadhaar e-KYC have already transformed how millions of users open bank accounts and access government services. It’s fast, paperless, and secure.

Global research by PwC on Digital Identity predicts that the world is moving toward a unified digital identity model, one verified profile for all services, from banking to healthcare.

This is the future of KYC identity verification: a seamless, secure, and user-friendly process that builds trust without slowing you down.

Final Thoughts

In the end, KYC verification online is about more than compliance; it’s about confidence. It ensures that businesses and customers can interact safely in an increasingly digital world.

It eliminates paperwork, reduces fraud, and makes onboarding faster and smarter. That’s progress everyone can appreciate.

If you’re a business exploring modern KYC verification solutions, check out Recognito. Their AI-powered technology helps companies verify identities accurately, comply with regulations, and create frictionless user experiences.

Frequently Asked Questions

1. How does KYC verification online work?

You upload your ID, take a selfie, and the system checks both using AI. KYC verification online confirms your identity in just a few minutes.

2. Is eKYC verification safe to use?

Yes, eKYC verification is secure since it uses encryption and biometric checks. Your personal data stays protected throughout the process.

3. What do I need to verify my KYC online?

To verify KYC online, you only need a valid government ID and a selfie. The rest is handled automatically by the system.

4. Why are companies using online KYC checks now?

Businesses use online KYC check systems because they’re faster and help prevent fraud. It also makes onboarding easier for users.

5. What makes a good KYC verification solution?

A great KYC verification solution should be fast, accurate, and compliant with privacy laws. It should make KYC identity verification simple for both users and companies.

Monday, 13. October 2025

1Kosmos BlockID

FedRAMP Levels Explained & Compared (with Recommendations)

The post FedRAMP Levels Explained & Compared (with Recommendations) appeared first on 1Kosmos.

HYPR

The Salesforce Breach Is Every RevOps Leader’s Nightmare: How to Secure Connected Apps

The RevOps Tightrope: When "Just Connect It" Becomes a Breach Vector

If you're in Revenue Operations, Marketing Ops, or Sales Ops, your core mandate is velocity. Every week, someone needs to integrate a new tool: "Can we connect Drift to Salesforce?" "Can we push this data into HubSpot?" "Can you just give marketing API access?" You approve the OAuth tokens, you connect the "trusted" apps, and you enable the business to move fast. You assume the security team has your back.

But the ShinyHunters extortion spree that surfaced this year, targeting Salesforce customer data, exposed the deadly vulnerability built into that convenience-first trust model. This wasn't just a "cyber event" for the security team; it was a devastating wake-up call for every operator who relies on that data. Suddenly, every connected app looks like a ticking time bomb, filled with sensitive PII, contact records, and pipeline data.

Anatomy of the Attack: Hacking Authorization, Not Authentication

The success of the ShinyHunters campaign wasn't about a software bug or a cracked password. It was about trusting the wrong thing. The attackers strategically bypassed traditional MFA by exploiting two key vectors: OAuth consent and API token reuse.

Path 1: The Fake "Data Loader" That Wasn't (OAuth Phishing)

The most insidious vector involved manipulating human behavior through advanced vishing (voice phishing).

Attackers impersonated internal IT support, creating urgency to trick an administrator. Under the pretext of fixing an urgent issue, the victim was directed to approve a malicious Connected App—often disguised as a legitimate tool like a Data Loader.

The result was the same as a physical breach: the employee, under false pretenses, granted the attacker’s malicious app a valid, persistent OAuth access token. This token is the backstage pass—it gave the attacker free rein to pull vast amounts of CRM data via legitimate APIs, quietly and without triggering MFA or login-based alerts.

Path 2: Token Theft in the Shadows (API Credential Reuse)

The parallel vector targeted tokens from already integrated third-party applications, such as Drift or Salesloft.

Attackers compromised these services to steal their existing OAuth tokens or API keys used for the Salesforce integration. These stolen tokens act like session cookies: they are valid, silent, and allow persistent access to Salesforce data without ever touching a login page. Crucially, once stolen, these tokens can be reused until revoked, representing an open back door into your most valuable data.
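To see why stolen tokens are so quiet, consider what an attacker can do with nothing but a bearer token. Here is a sketch (the instance URL and API version are placeholders) of pulling CRM records through Salesforce’s standard REST query endpoint; note that no login page or MFA prompt is ever involved:

```typescript
// Why a stolen OAuth token is dangerous: it behaves exactly like a
// legitimate integration. The instance URL and API version are placeholders.
async function pullContacts(stolenToken: string) {
  const soql = encodeURIComponent("SELECT Name, Email FROM Contact LIMIT 200");
  const res = await fetch(
    `https://yourInstance.my.salesforce.com/services/data/v59.0/query?q=${soql}`,
    { headers: { Authorization: `Bearer ${stolenToken}` } },
  );
  return res.json(); // A valid token means valid data, until it is revoked.
}
```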

Both paths point to a single conclusion: your digital ecosystem is built on convenience-first trust, and in the hands of sophisticated attackers, trust is the ultimate exploitable vulnerability.

The Trust Problem: Securing Logins, Not Logic

For years, security focused on enforcing strong MFA and password rotation. But the ShinyHunters campaign proved that this focus is too narrow.

You can enforce the best MFA, rotate passwords monthly, and check all your compliance boxes. But if an attacker can:

Convince an employee to approve a fake OAuth app, or
Steal a token that never expires from an integration

...then everything else is just window dressing.

The uncomfortable truth for RevOps is that attackers are not exploiting a zero-day; they are hacking how you work. The industry-wide shift now, led by NIST and CISA, is toward phishing-resistant authentication. Why? Because the weak spots exploited in this breach - reusable passwords and phishable MFA - are eliminated when you replace them with cryptographic, device-bound credentials.

Where HYPR Fits In: Making Identity Deterministic, Not Trust-Based

HYPR was built for moments like this—when the mantra "never trust, always verify" must transition from a slogan into an operational necessity. Our Identity Assurance platform delivers the deterministic certainty needed to stop both forms of token theft cold.

Here’s how HYPR's approach prevents these breach vectors:

Eliminating Shared Secrets: HYPR Authenticate uses FIDO2-certified passwordless authentication. There is no password or shared secret for attackers to steal, replay, or trick a user into approving. This automatically eliminates the phishable vector used in Path 1.

Domain Binding Stops OAuth Phishing: FIDO passkeys are cryptographically bound to the specific URL of the service. If an attacker tries to trick a user into authenticating on a malicious domain (OAuth phishing), the key will not match the registered domain, and the authentication will fail instantly and silently (see the browser-side sketch after this list).

Deterministic Identity Proofing for High-Risk Actions (HYPR Affirm): Granting new app privileges is a high-risk action. HYPR Affirm brings deterministic identity proofing—using live liveness checks, biometric verification, and document validation—before any credential or app authorization is granted. This stops social engineering attacks aimed at the help desk or an administrator because you ensure the person making the request is the rightful account owner.

No Unchecked Trust (HYPR Adapt): Every high-risk action - whether it’s a new device enrollment, a token reset, or a highly privileged connected app approval - can trigger identity re-verification. If the HYPR Adapt risk engine detects anomalous API activity (Path 2), it can dynamically challenge the user to re-authenticate with a phishing-resistant passkey, immediately revoking the session/token until certainty is established.
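As an illustration of that domain binding, here is a sketch using the standard WebAuthn browser API (this is not HYPR product code, and the rpId is a placeholder):

```typescript
// Sketch of a WebAuthn sign-in. The browser enforces that rpId matches
// the page's origin, so a passkey registered for example-bank.com cannot
// be exercised from a look-alike phishing domain.
async function signIn(challengeFromServer: ArrayBuffer) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer, // random challenge issued by the server
      rpId: "example-bank.com",       // the passkey is bound to this domain
      userVerification: "required",   // biometric or PIN on the device
    },
  });
  // On a phishing domain this call rejects: there is nothing for the user
  // to "approve" and nothing for an attacker to replay.
  return assertion;
}
```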

This platform isn't about simply locking things down; it's about building secure, efficient systems that can verify who is on the other end with cryptographic certainty.

Next Steps for RevOps: Championing the Identity Perimeter

The Salesforce breach was about trust at scale. As RevOps leaders, you need to protect not just the data, but how that data is accessed and shared.

Here is what you must prioritize now:

Revisit Your Integrations: Know which connected apps have offline access and broad permissions (e.g., refresh_token, full) to your Salesforce data - and ruthlessly trim the list to only essential tools.

Automate Least Privilege: Implement a policy for temporary tokens and expiring scopes. Move away from permanent credentials where possible, forcing periodic re-consent (a revocation sketch follows this list).

Champion Phishing-Resistant MFA: Make FIDO2 Passkeys the minimum baseline for every high-value user and administrator. Anything less is a calculated risk you can’t afford.
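As one concrete example of trimming standing credentials, Salesforce exposes a standard OAuth revocation endpoint. A minimal sketch follows; the token value is a placeholder, and a full cleanup would also remove the connected app’s grants:

```typescript
// Revoke a Salesforce OAuth token via the standard revocation endpoint.
async function revokeSalesforceToken(token: string): Promise<void> {
  const res = await fetch("https://login.salesforce.com/services/oauth2/revoke", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ token }),
  });
  if (!res.ok) throw new Error(`Revocation failed: ${res.status}`);
}
```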

The uncomfortable truth is: Attackers did not utilize brute force - they strategically weaponized OAuth consent and token theft. The good news is that passwordless, phishing-resistant authentication would have stopped both paths cold.

Unlock the pipeline velocity you need with the deterministic security you can trust.

👉 Request a Demo of the HYPR Identity Assurance Platform Today.


Holochain

Dev Pulse 152: Wind Tunnel Updates, Holo Edge Node Container

Dev Pulse 152
Wind Tunnel gets reports, automation, multiple conductors

All the hard work put into Wind Tunnel, our scale testing suite, is starting to become visible! We’re now collecting metrics from both the host OS and Holochain, in addition to the scenario metrics we’d already been collecting (where zome call time and arbitrary scenario-defined metrics could be measured). We’re also running scenarios on an automated schedule and generating reports from them. Our ultimate goals are to be able to:

monitor releases for performance improvements and regressions,
identify bottlenecks for improvement, and
turn report data into release-specific information you can use and act upon in your app development process.

Finally, Wind Tunnel is getting the ability to select a specific version of Holochain right from the test scenario, which will be useful for running tests on a network with a mix of different conductors. It also saves us some dev ops headaches, because the right version for a test can be downloaded automatically as needed.

Holochain 0.6: roughly two (ideal) weeks remaining

Our current estimates predict that Holochain 0.6’s first release will take about two team-weeks to complete. Some of the dev team is focused on Wind Tunnel and other tasks, so this may not mean two calendar weeks, but it’s getting closer. To recap what we’ve shared in past Dev Pulses, 0.6 will focus on:

Warrants — reporting validation failures to agent activity authorities, who collect and supply these warrants to anyone who asks for them. As soon as an agent sees and validates a warrant, they retain it and block the bad agent, even if they aren’t responsible for validating the agent’s data. If the warrant itself is invalid (that is, the warranted data is valid), the authority issuing the warrant will be blocked. Currently warrants are only sent in response to a get_agent_activity query; in the future, they’ll be sent in response to other DHT queries too.

Blocking — the kitsune2 networking layer will allow communication with remote agents to be blocked, and the Holochain layer will use this to block agents after a warrant against them is discovered.

Performance improvements — working with Unyt, we’ve discovered some performance issues with must_get_agent_activity and get_agent_activity which we’re working on improving.
Open-source Holo Edge Node

You have probably already seen the recent announcements from Holochain and Holo (or the livestream), but if not, here’s the news from the org: Holo is open-sourcing its always-on node software in an OCI-compliant container called Edge Node.

This is going to do a couple things for hApp developers:

make it easier to spin up always-on nodes to provide data availability and redundancy for your hApp networks,
provide a base dockerfile for devs to add other services to — maybe an SMS, email, payment, or HTTP gateway for your hApp, and
allow more hosts to set up nodes, because Docker is a familiar distribution format

I think this new release connects Holo back to its roots — the decentralised, open-source values that gave birth to it — and we hope that’ll mean more innovation in the software that powers the Holo network. HoloPort owners will need to be handy with the command line, but a recent survey found that almost four fifths of them already are.

So if you want to get involved, either to bootstrap your own infrastructure or support other hApp creators and users, here’s what you can do:

Download the latest HolOS ISO for HoloPorts, other hardware, VMs, and cloud instances.
Download the Edge Node container for Docker, Kubernetes, etc.
Get in touch with Rob from Holo on the Holo Forum, the Holo Edge Node Support Telegram, Calendly, or the DEV.HC Discord (you’ll need to self-select Access to: Projects role in the #select-a-role channel, then go to the #always-on-nodes channel).
Join the regular online Holo Huddle calls for support (get access to these calls by getting in touch with Rob above).
Soon, there’ll be a series of Holo Forge calls for people who want to focus on building the ecosystem (testing, modifying the Edge Node container, etc).
Next Dev Office Hours: 15 Oct 2025

Join us on the DEV.HC Discord at 16:00 UTC for the next Dev Office Hours call — bring your ideas, questions, projects, bugs, and hApp development challenges to the dev team, where we’ll do our best to respond to them. See you there!


Dock

Introduction to Decentralized Identity [Video + Takeaways]


Decentralized identity is becoming the backbone of how organizations, governments, and individuals exchange trusted information.

In this live workshop, Agne Caunt (Product Owner, Dock Labs) and Richard Esplin (Head of Product, Dock Labs) guided learners through the foundations of decentralized identity: how digital identity models have evolved, the Trust Triangle that powers verifiable data exchange, and the technologies behind it: from verifiable credentials to DIDs, wallets, and biometric-bound credentials.

Below are the core takeaways from the session.


Kin AI

Kinside Scoop 👀 #15

Accounts, memory, and more upcoming features

Hey folks 👋

Following the rapid-fire releases in the last few newsletters, we have a quieter one for you this edition.

Everyone’s busy working on some bigger features and additions to Kin, meaning not much has gone out in the last two weeks.

So instead, this’ll be a sneak peek into what’s coming really soon - with the usual super prompt at the end for you.

What (will be) new with Kin 🕑 Your Kin, expanded 🌱

The biggest change coming up is our rollout of Kin Accounts. Don’t worry: these accounts won’t store any of your conversation data - just some minimal basics that we’ll keep secure.

We’ll be introducing Kin Accounts to lay the groundwork for multi-device sync (which inches closer!), more integrations into Kin, and eventually Kin memberships.

More information on Kin Accounts, and what we mean by “minimal basics”, will come out soon too, so you stay fully informed.

More personal advisors and notifications 🧩

Off the back of the positive feedback for the advisor updates covered in the last edition, we’re continuing to expand their personalities and push notification abilities.

Very soon, you’ll notice that each advisor feels even more unique, more understanding of you, and more suited to their role - both in chat and in push notifications.

And in case you missed it, you have full control over the push notification frequency. If you want to hear from an advisor more while outside Kin, you can turn it up in each advisor’s edit tab from the home screen; if you want to hear less from them, you can turn it down.

Memory continues to grow 🧠

Memory appears in these updates almost every time - and that’s because we really are working on it almost every week.

The imminent update will continue to work toward our long-standing goal of making Kin the best personal AI at understanding time in conversations - something we’ve explained in more depth in previous articles.

More on this when the next stage of the update rolls out!

Journaling, refined by you yet again 📓

Similarly, Journaling also makes another appearance as we continue to re-work it according to your feedback. Guided daily and weekly Journals will help you track your progress, more visible streak counts will help keep you involved, and a new prompting system will help entries feel more insightful. You’ll hear more about exactly what’s changing once we’ve released some of it.

Start a conversation 💬

I know this reminder is in every newsletter - but that’s because it’s integral to Kin.

Kin is built for you, with your ideas. So, your feedback is essential to helping us know whether we’re making things the way you like them.

The KIN team is always around at hello@mykin.ai for anything, from feature feedback to a bit of AI discussion (though support queries will be better helped over at support@mykin.ai).

To get more stuck in, the official Kin Discord is still the best place to interact with the Kin development team (as well as other users) about anything AI.

We have dedicated channels for Kin’s tech, networking users, sharing support tips, and for hanging out.

We also regularly run three casual calls every week - you’re welcome to join:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

Kin is yours, not ours. Help us build something you love!

Finally, you can also share your feedback in-app. Just screenshot to trigger the feedback form.

Our current reads 📚

Article: OpenAI admits to forcibly switching subscribers away from GPT-4 and GPT-5 models in some situations
READ - techradar.com

Article: San Diego State University launches first AI responsibility degree in California
READ - San Diego State University

Article: Australia’s healthcare system adopting AI tools
READ - The Guardian

Article: California’s AI laws could balance innovation and regulation
READ - techcrunch.com

This edition’s super prompt 🤖

This week, your Kin will help you answer the question:

“How can I better prepare for change?”

If you have Kin installed and up to date, you can tap the link below (on mobile!) to explore how you think about change, and how you can better prepare for it.

As a reminder, you can do this on both iOS and Android.

Open prompt in Kin

We build Kin together 🤝

If you only ever take one thing away from these emails, it should be that you have as much say in Kin as we do (if not more).

So, please chat in our Discord, email us, or even just shake the app to get in contact with anything and everything you have to say about Kin.

With love,

The KIN Team

Sunday, 12. October 2025

Ockam

When You Run Out of Things to Say


The 6-month plan to go from zero to traction (with weekly tasks you can start today)

Continue reading on Medium »


Dock

Why The US Won’t Allow “Phone Home” Digital IDs


In our recent live podcast, Richard Esplin (Dock Labs) sat down with Andrew Hughes (VP of Global Standards, FaceTec) and Ryan Williams (Program Manager of Digital Credentialing, AAMVA) to unpack the new ISO standards for mobile driver’s licenses (mDLs).

One topic dominated the discussion: server retrieval.

Saturday, 11. October 2025

Ockam

The Gap Between Knowing and Doing


Why Knowledge Without Action Is Just Expensive Entertainment

Continue reading on Medium »

Friday, 10. October 2025

Ockam

When Brands Break the Rules


The Art of Unconventional Advertising That Actually Works

Continue reading on Clubwritter »


HYPR

It’s a Partnership, Not a Handoff: Doug McLaughlin on Navigating Enterprise Change


The journey from a signed contract to a fully deployed security solution is one of the most challenging in enterprise technology. For a mission-critical function like identity, the stakes are even higher. It requires more than just great technology; it demands a true partnership to drive change across massive, complex organizations.

I sat down with HYPR’s SVP of Worldwide Sales, Doug McLaughlin, to discuss what it really takes to get from the initial sale to the finish line, and how HYPR works with customers to manage the complexities of procurement, organizational buy-in, and full-scale deployment for millions of users.

Let’s talk about the initial hurdles – procurement and legal. These processes can stall even the most enthusiastic projects. How do you get across that initial finish line?

Doug: By the time you get to procurement and legal, the business and security champions should be convinced of the solution's value. These teams aren't there to re-evaluate whether the solution is needed; they're there to vet who is providing it and under what terms. The biggest mistake you can make is treating them like a final sales gate.

Our approach is to be radically transparent and prepared. We have our security certifications, compliance documentation, and legal frameworks ready to go well in advance. We’ve already proven the business value and ROI to our champions, who then become our advocates in those internal procurement meetings. It’s about making their job as easy as possible. When you’ve built a strong, trust-based relationship across the organization, procurement becomes a process to manage efficiently, not an obstacle to overcome. The contract signature is less the "end" and more the "official beginning" of the real work.

You’ve navigated some of the largest passwordless deployments in history. Many people think the deal is done when the contract is signed. What’s the biggest misconception about that moment?

Doug: The biggest misconception is that the signature is the finish line. In reality, it’s the starting gun. For us, that contract isn’t an endpoint; it’s a formal commitment to a partnership. You've just earned the right to help the customer begin the real work of transformation.

In these large-scale projects, especially at global financial institutions or manufacturing giants, you’re not just installing software. You’re fundamentally changing a core business process that can touch every single employee, partner, and sometimes even their customers. If you view that as a simple handoff to a deployment team, you're setting yourself up for failure. The trust you built during the sales cycle is the foundation you need for the change management journey ahead.

When you’re dealing with a global corporation, you have IT, security, legal, procurement, and business units all with their own priorities. How do you start building the consensus needed for a successful rollout?

Doug: You have to build a coalition, and you do that by speaking the language of each stakeholder. I remember working with a major global bank. Their security team was our initial champion; they immediately saw how passkeys would eliminate phishing risk and secure their high-value transactions. But one of the key stakeholders was wary. Their primary concern was a potential surge in help desk calls during the transition, which would blow up their budget.

Instead of just talking about security with them, we shifted the conversation entirely and early. We presented the case study from another financial services deployment showing a 70-80% reduction in password-related help desk tickets within six months of rollout. We framed the project not as a security mandate, but as an operational efficiency initiative that would free up the team's time.

We connected the dots for them. Security got their risk reduction. IT saw a path to lower operational costs. The business leaders saw a faster, more productive login experience for their bankers. When each department saw its specific problem being solved, they became a unified force pushing the project forward. That's how you turn individual stakeholders into a powerful coalition.

That leads to the user. How do you get hundreds of thousands of employees at a global company to embrace a new way of signing in?

Doug: You can’t force change on people; you have to make them want it. A great example is a Fortune 500 manufacturing company we worked with. They had an incredibly diverse workforce, from corporate executives on laptops to factory floor workers using shared kiosks and tablets. Compounding this further, employees spanned the globe, from the US to China to LatAm and beyond. Let’s face it, a single, top-down email mandate was never going to work.

We partnered with them to create a phased rollout that respected these different user groups. For the factory floor, we focused on speed. The message was simple: "Clock in faster, start your shift faster." We trained the shift supervisors to be the local experts and put up simple, visual posters near the kiosks.

For the corporate employees, we focused on convenience and security, highlighting the ability to log in from anywhere without typing a password. We identified influential employees in different departments to be part of a pilot program. Within weeks, these "champions" were talking about how much easier their sign-in experience was. That word-of-mouth was more powerful than any corporate memo. The goal is to make the new way so demonstrably better that people are actively asking when it's their turn. That’s when adoption pulls itself forward.

Looking back at these massive, multi-year deployments, what defines a truly "successful" partnership for you?

Doug: Success isn’t the go-live announcement. It's six months later when the CISO tells you their help desk calls are down 70%. It's when an employee from a branch in Singapore sends unsolicited feedback about how much they love the new login experience. It’s when the customer’s security team stops seeing you as a vendor and starts calling you for advice on their entire identity strategy.

That's the real finish line. It's when the change has stuck, the value is being realized every day, and you’ve built a foundation of trust that you can continue to build on for years to come.

What's the biggest topic that keeps coming up in your customer conversations these days?

Doug: I'm having a lot of fun clarifying the difference between simply checking a document and actually verifying a person's identity. Many companies believe that if they scan a driver's license, they're secure. But I always ask, "Okay, that tells you the document is probably real, but how do you really know who's holding it?" That question changes everything. Between the rise of AI-generated fakes and the simple reality that people lose their wallets, relying on a single document is incredibly fragile. The last thing you want is your top employee stranded and locked out of their accounts because their license is missing.

I move the conversation to a multi-factor approach. We check the document, yes, but then we use biometrics to bind it to the live person in front of the camera, and then we cross-reference that against another trusted signal, like the phone they already use to sign in. It gives you true assurance that the right person is there. More importantly, it provides multiple paths so your employees are never left helpless. It’s about building a resilient system that’s both more secure and more practical for your people.

Bonus question! What’s one piece of advice you’d give to someone just starting to manage these complex sales and deployment cycles?

Doug: Get obsessed with your customer's business, not your product. Understand what keeps their executives up at night, what their biggest operational headaches are, and what their long-term goals are. If you can authentically map your solution to solving those core problems, you stop being a salesperson and start being a strategic partner. Everything else follows from that.

Thanks for the insights, Doug. It’s clear that partnership is the key ingredient to success!


This week in identity

E63 - Are Identity Platforms Legacy? The Rise of Identity Information Flows


Keywords

PAM, IGA, CyberArk, Palo Alto, identity security, AI, machine identity, cybersecurity, information flows, behavioral analysis


Summary


In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the significant changes in the cybersecurity landscape, particularly focusing on Privileged Access Management (PAM) and Identity Governance and Administration (IGA). They explore the recent acquisition of CyberArk by Palo Alto, the evolution of identity security, and the convergence of various identity management solutions.

The conversation highlights the importance of information flows, and the need for a mindset shift in the industry to effectively address identity security challenges.


Takeaways


The cybersecurity landscape is rapidly changing due to AI.
PAM and IGA are evolving but remain siloed.
The acquisition of CyberArk by Palo Alto signifies a shift in identity security.
Organizations struggle with integrating disparate identity technologies.
Behavioral analysis is crucial for identifying security threats.
AI will play a significant role in optimizing identity security.
Defensive acquisitions are common in the cybersecurity industry.
The future of identity security relies on understanding information flows.


Chapters


00:00 Welcome Back and Industry Changes

02:01 The Evolution of Privileged Access Management (PAM)

10:41 The Convergence of Cybersecurity and Identity

16:13 The Future of Identity Management Platforms

24:23 Understanding Information Flows in Cybersecurity

28:12 The Role of AI in Identity Management

33:42 Navigating Mergers and Acquisitions in Tech

39:50 The Future of Identity Security and AI Integration



Tokeny Solutions

Are markets ready for tokenised stocks’ global impact?

September 2025

Nasdaq has filed with the SEC to tokenise every listed stock by 2026. If approved, this would be the first time tokenised securities trade on a major U.S. exchange, a milestone that could transform global capital markets. Under the proposal, investors will be able to choose whether to settle their trades in traditional digital form or in tokenised blockchain form.

Meanwhile, more and more firms are tokenising stocks. The implications are potentially huge:

24/7 trading of tokenised equities
Instant settlement
Programmable ownership
Full shareholder rights, identical to traditional shares

This is a large overhaul of market infrastructure. Sounds great, but the reality is much more complex.

How are stocks tokenised?

Tokenised stocks today can be structured in several ways, including:

Indirect tokenisation: The issuer raises money by issuing a financial instrument different from the stocks, typically a debt instrument (e.g. a bond or note), and buys the underlying stocks with the raised funds. The tokens may either be the financial instrument itself or represent a claim on that instrument. The token does not grant investors direct ownership of the underlying stock, but it is simple to launch.

Direct tokenisation: Stocks are tokenised directly at the stock company level, preserving voting, dividend, and reporting rights. This method tends to be more difficult to implement due to legal and infrastructure requirements.

Both structures have their benefits and drawbacks. The real issue, however, is how the tokens are managed post-issuance.

Permissionless vs permissioned tokens

While choosing a structure for tokenised stocks is important, the true success of tokenisation depends on whether the tokens are controlled or free to move, because this determines compliance, investor protection, and ultimately whether the market can scale safely.

Permissionless: Tokens can move freely on-chain after issuance. Token holders gain economic exposure, but not shareholder rights. Secondary market trading is not controlled, creating compliance risks. The legitimate owner of the security is not always clear.

Permissioned: Compliance and eligibility are enforced at every stage, embedding rules directly into the token. Crucially, permissioned tokens also guarantee investor safety by making ownership legally visible in the issuer’s register. For issuers, this model also fulfils their legal obligation to know who their investors are at all times. Transfers to non-eligible wallets are blocked, maintaining regulatory safety while preserving trust.

While permissionless tokens may be quicker to launch, they carry significant legal risks, weaken investor trust, and fragment growth. By contrast, permissioned tokens should be considered as the only sustainable approach to tokenising stocks, because they combine compliance, investor protection, and long-term scalability.

The right way forward – compliance at the token level

Nasdaq’s SEC filing shows the path to do this right. Tokenised stocks will only succeed if eligibility and compliance are enforced in both issuance and secondary trading.

That’s where open-source standards like ERC-3643 come in:

Automated compliance baked in: Rules are enforced automatically at the protocol level, not manually after the fact
Eligibility checks: Only approved investors can hold the asset, enabling ownership tracking efficiently
Controlled transfers: Tokens cannot be sent to non-eligible investors, even in the secondary market
Auditability: Every transaction can be monitored in real time, ensuring trust with regulators
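For a sense of what compliance at the token level looks like in practice, here is a minimal read-only sketch against the published ERC-3643 interface functions (identityRegistry, compliance, isVerified, canTransfer), using ethers v6. The RPC URL and addresses are placeholders:

```typescript
import { Contract, JsonRpcProvider } from "ethers";

// Placeholders: point these at a real node and a deployed ERC-3643 token.
const provider = new JsonRpcProvider("https://rpc.example.org");
const token = new Contract(
  "0x0000000000000000000000000000000000000000",
  [
    "function identityRegistry() view returns (address)",
    "function compliance() view returns (address)",
  ],
  provider,
);

// Both eligibility (identity registry) and compliance rules gate transfers.
async function checkTransfer(from: string, to: string, amount: bigint) {
  const registry = new Contract(
    await token.identityRegistry(),
    ["function isVerified(address) view returns (bool)"],
    provider,
  );
  const compliance = new Contract(
    await token.compliance(),
    ["function canTransfer(address,address,uint256) view returns (bool)"],
    provider,
  );
  return (await registry.isVerified(to)) &&
         (await compliance.canTransfer(from, to, amount));
}
```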

This is how tokenised stocks can operate safely at scale, with compliance embedded directly into the digital infrastructure, no matter if it’s through direct or indirect tokenisation. This provides safety at scale, unlocked liquidity, efficiency, and regulatory alignment.

Why this matters now

Investor demand for tokenised assets is surging. Global banks are exploring issuance, Coinbase has sought approval, and now Nasdaq is moving ahead under the SEC’s umbrella. Tokenisation will be at the core of financial markets.

But shortcuts built on permissionless, freely transferable tokens will only invite regulatory backlash, slowing innovation and preventing the market from scaling.

The future of tokenised shares will be built on:

Carrying full shareholder rights and guaranteeing ownership
Automatic, enforced compliance on every trade
Integrating directly into existing market infrastructure

That is what true tokenisation means, not synthetic exposure, but embedding the rules of finance into the share itself.

We believe this is the turning point. Nasdaq’s move validates what we’ve been building toward: a global financial system where tokenisation unlocks liquidity, efficiency, and access, not at the expense of compliance, but because of it.

The race is on. The winners won’t be those who move fastest, but those who build markets that are trusted, compliant, and scalable from day one.

Tokeny Spotlight

Annual team building

We head to Valencia for our annual offsite team building. A fantastic time filled with great memories.

Read More

Token2049

Our CEO and Head of Product for Apex Digital Assets, and CBO, head to Singapore for Token2049

Read More

New eBook

Global payments reimagined. Download to learn what’s driving the rapid rise of digital currencies.

Read More

RWA tokenisation report

We are proud to have contributed to the newly released RWA Report published by Venturebloxx.

Read More

SALT Wyoming

Our CCO and Global Head of Digital Assets at Apex Group, Daniel Coheur, discusses Blockchain Onramps at SALT.

Read More

We test SilentData’s privacy

Their technology explores how programmable privacy allows for secure and compliant RWA tokenisation.

Read More

Tokeny Events

Token2049 Singapore
October 1st-2nd, 2025 | 🇸🇬 Singapore

Register Now

Digital Assets Week London
October 8th-10th, 2025 | 🇬🇧 United Kingdom

Register Now

ALFI London Conference
October 15th, 2025 | 🇬🇧 United Kingdom

Register Now

RWA Singapore Summit
October 2nd, 2025 | 🇸🇬 Singapore

Register Now

Hedgeweek Funds of the Future US 2025
October 9th, 2025 | 🇺🇸 United States of America

Register Now

ERC3643 Association Recap

ERC-3643 is recognized in Animoca Brands Research’s latest report on tokenised real-world assets (RWAs).

The report highlights ERC-3643 as a positive step for permissioned token standards, built to solve the exact compliance and interoperability challenges holding the market back.

Read the story here


The post Are markets ready for tokenised stocks’ global impact? appeared first on Tokeny.

Thursday, 09. October 2025

Spruce Systems

Why Digital Identity Frameworks Should Be Public Infrastructure

Digital identity is essential infrastructure, and it deserves the same level of public investment, oversight, and trust as other core systems like roads or utilities.

Most people think of digital identity as a mobile driver’s license or app on their phone. But identity isn’t just a credential, it’s infrastructure. Like roads, broadband, or electricity, digital identity frameworks must be built, governed, and funded as public goods.

Today, the lack of a unified identity system fuels fraud, inefficiency, and distrust.  In 2023, the U.S. recorded 3,205 data breaches affecting 353 million people, and the Federal Trade Commission reported $12.5 billion in fraud losses, much of it rooted in identity theft and benefit scams.

These aren’t isolated incidents but symptoms of fragmentation: every agency and organization maintaining its own version of identity, duplicating effort, increasing breach risk, and eroding public trust.

We argue that identity should serve as public infrastructure: a government-backed framework that lets residents prove who they are securely and privately, across contexts, without unnecessary data collection or centralization. Rather than a single product or app, this framework can represent a durable set of technical and statutory controls built to foster long-term trust, protect privacy, and ensure interoperability and individual control.

From Projects to Public Infrastructure

Governments often launch identity initiatives as short-term projects: a credential pilot, a custom-built app, or a single-agency deployment. While these efforts may deliver immediate results, they rarely provide the interoperability, security, or adoption needed for a sustainable identity ecosystem. Treating digital identity as infrastructure avoids these pitfalls by establishing common rails that multiple programs, agencies, and providers can build upon.

A better approach is to adopt a framework model, where digital identity isn’t defined by a single product or format but by adherence to a shared set of technical and policy requirements. These requirements, such as selective disclosure, minimal data retention, and individual control, can apply across many credential types, from driver’s licenses and professional certifications to benefit eligibility and guardianship documentation.

This enables credentials to be iterated and expanded on thoughtfully: credentials can be introduced one at a time, upgraded as standards evolve, and tailored to specific use cases while maintaining consistency in protections and interoperability.

Enforcing Privacy Through Law and Code

Foundational privacy principles such as consent, data minimization, and unlinkability must be enforced by technology, not just policy documents. Digital identity systems should make privacy the default posture, using features (depending on the type of credential) such as:

Selective disclosure (such as proving “over 21” without showing a birthdate; see the sketch after this list)
Hardware-based device binding
Cryptographically verifiable digital credentials with offline presentation
Avoidance of architectures that risk exposing user metadata during verification
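As an illustration of the first feature, here is a minimal sketch of the mechanism behind SD-JWT-style selective disclosure (per the IETF SD-JWT draft): the issuer signs only salted hashes of claims, and the holder later reveals just the disclosures they choose, never the underlying birthdate.

```typescript
import { createHash, randomBytes } from "node:crypto";

const b64url = (b: Buffer) => b.toString("base64url");

// An SD-JWT disclosure is the base64url encoding of [salt, claimName, value].
const salt = b64url(randomBytes(16));
const disclosure = b64url(Buffer.from(JSON.stringify([salt, "age_over_21", true])));

// Only this digest goes into the signed credential (its "_sd" array);
// a verifier recomputes it when the holder presents the disclosure.
const digest = b64url(createHash("sha256").update(disclosure).digest());
console.log({ disclosure, digest });
```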

By embedding security, privacy, and interoperability directly into the architecture, identity systems move beyond compliance and toward real-world protection for residents. These are not optional features; they are statutory expectations brought to life through secure protocols.

Open Standards, Broad Interoperability

Public infrastructure should allow for vendor choice and competitive markets that foster innovation. That’s why modern identity systems should be built on open, freely implementable standards, such as ISO/IEC 18013-5/7, OpenID for Verifiable Presentations (OID4VP), W3C Verifiable Credentials, and IETF SD-JWTs.

These standards allow credentials to be portable across wallet providers and verifiable in both public and private sector contexts, from airports and financial institutions to universities and healthcare. Multi-format issuance ensures credentials are accepted in the widest range of transactions, without compromising on core privacy requirements.

A clear certification framework covering wallets, issuers, and verifiers can ensure compliance with these standards through independent testing, while maintaining flexibility for providers to innovate. Transparent certification also builds trust and ensures accountability at every layer of the ecosystem.

Governance Leads, Industry Builds

Treating digital identity as infrastructure doesn’t mean the public sector has to (or even should) build everything. It means the public sector must set the rules, defining minimum standards, overseeing compliance, and ensuring vendor neutrality.

Wallet providers, credential issuers, and verifiers can all operate within a certified framework if they meet established criteria for security, privacy, interoperability, and user control. Governments can maintain legal authority and oversight while encouraging healthy private-sector competition and innovation.

This governance-first approach creates a marketplace that respects rights, lowers risk, and remains financially sustainable. Agencies retain procurement flexibility, while residents benefit from tools that align with their expectations for usability and safety.

Why This Matters

Digital identity is the entry point to essential services: healthcare, education, housing, employment, and more. If it’s designed poorly, it can become fragmented, invasive, or exclusionary. But if it’s designed as infrastructure with strong governance and enforceable protections, it becomes a foundation for inclusion, trust, and public value.

Well-governed digital identity infrastructure enables systems that are:

Interoperable across jurisdictions and sectors
Private by design, not retrofitted later
Transparent, with open standards and auditability
Resilient, avoiding lock-in and enabling long-term evolution

Most importantly, it is trustworthy for residents, not just functional.

A Foundation for the Future

Public infrastructure requires alignment between law, technology, and market design. With identity, that means enforcing privacy in code, using open standards to drive adoption, and establishing certification programs that ensure accountability through independent validation without stifling innovation.

This is more than a modernization effort. It’s a transformation that ensures digital identity systems can grow, adapt, and serve the public for decades to come.

Ready to Build Trustworthy Digital ID Infrastructure?

SpruceID partners with governments to design and implement privacy-preserving digital identity systems that scale. Contact us to explore how we can help you build standards-aligned, future-ready identity infrastructure grounded in law, enforced by code, and trusted by residents.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Ockam

The Psychology of Buying


How Brand Positioning Impacts People’s Behaviour

Continue reading on Medium »


Indicio

Your authentication dilemma: DIY or off-the-shelf decentralized identity?

With the European Union mandating digital wallets by the end of 2026, and Verifiable Credentials offering new, powerful, and cost-effective ways to solve identity fraud and simplify operations, you may be thinking it’s time to embrace decentralized identity and build your own Verifiable Credential system. You’ve got a developer team, they understand security — so it couldn’t be that difficult, right?

By Helen Garneau

It’s tempting to do things yourself. But there’s a reason a professional painter will almost certainly do a better and quicker job at painting your house than you will. And, when you price how much time it would take you, there’s a good chance a professional will probably end up costing you less too.

The same logic applies to building decentralized identity systems with Verifiable Credentials.

If you have a talented team of engineers, it’s easy to think, “we’ve got this.” They understand security, they can code, issuing and verifying a few credentials sounds simple enough.

But once you start digging into credential formats, protocols, interoperability, global standards, regulations, and governance, what seems like a quick project for a few developers quickly becomes a long, complex, and costly effort to build and maintain a secure, standards-compliant system.

How fast “We got this” turns into “Why did we do this?”

Decentralized identity makes data portable and cryptographically verifiable without the need for certificate authorities or centralized data management. Its vehicle is the Verifiable Credential, a way of sealing any kind of information in a digital container so that it cannot be altered and you can be certain of its origin.

If you trust the origin of the credential — say a passport office or a bank — you can trust that the information placed in the credential has not been altered. Verifiable Credentials are held in digital wallet apps and can be shared by the consent of the holder, whether a person or an organization, in privacy-preserving ways.

Verifiable Credentials are most commonly used to create instantly authenticatable versions of trusted documents, such as passports and driver’s licenses, but they can also be created and held by devices for secure data sharing, or by robots and AI agents for authentication and permissioned data access.

The point of all this is that it transforms authentication, fraud prevention, privacy, security, and operational efficiency. You are able to remove usernames and passwords, centralized storage and multi-factor authentication and combine authentication and fraud prevention in a seamless, instant process.

A decentralized ecosystem consists of three parts: an issuer that creates and digitally signs the credential, a holder who keeps it in a digital wallet and presents it for authentication and access to resources, and a verifier or relying party that needs to authenticate the information presented for some purpose.

When building an ecosystem for a use case — say systems account access — here’s what you need to consider: There are, presently, three major credential formats, each with differing communications protocols. They’ve got to interoperate with each other and across different digital wallets according to whatever standards you want to align with. Which are you going to pick?

Then, you need to get them into people’s wallets. Which wallet? An existing one or do you need an SDK?

If you want to verify credentials, you should be able to verify thousands — perhaps tens of thousands — simultaneously. How do you do this when mobile devices don’t have fixed IP addresses? How are you going to establish offline verification? And how are you going to establish governance so that participants know who is a trusted issuer of a credential?

This is just a basic implementation — a foundation to build the kind of solutions the market wants. Are you also prepared to then develop integrated payments, integrated biometrics, digital travel credentials, document validation, and identity and delegated authority for AI agents and robots? You better be, because that’s where the market is now at.

There’s a reason Indicio was the first (and still the only) company to launch a complete, off-the-shelf solution for implementing Verifiable Credentials in both the Amazon and Google Cloud Marketplaces: We built a team composed of pioneers and leaders in decentralized identity, engineers and developers deeply engaged with the open source codebases and communities that have shaped this technology. They live and breathe this stuff every day. And even so, it still took years to build an interoperable, multi-credential, multi-protocol system that can scale to country-level deployments.

If your team isn’t already familiar with the open-source codebases and the evolving international specifications and standards, how are they going to deliver in a realistic time frame at an acceptable cost?

The probability that your team is going to do all that we did in six months is… low.

The likelihood that they will end up blowing through a lot of your budget attempting to do this is… high.

Interoperability — everyone expects it

No one is going to buy a proprietary, siloed system. Decentralized identity is an architecture for data sharing and integrating markets into functioning ecosystems; if your solution can’t do this, can’t interoperate or scale, it’s missing out on key features that drive business growth. Sure, you may want to start by securing your SSO with a Verifiable Credential, but why limit the power of verification?

For example, one of the key failures of the mobile driver’s license (mDL) in the U.S. is that so many implementations failed to make verification open to other parties. Think of all the ways an mDL could be used to prove age or identity. A digital identity that’s locked into a narrow use case and proprietary verification is a wasted opportunity, not least because verification can be monetized (Indicio’s mDL is easily verifiable anywhere).

To make a system work with the rest of the world, it has to speak the relevant languages. That means following multiple standards and protocols that define how credentials are created, stored, and exchanged and, depending on what your needs are, for whatever specific credential workflow you want to deploy, keeping up with some or all of the following:

W3C Verifiable Credential Data Model (VCDM) — defines how credentials are structured and signed.

ISO/IEC 18013-5 and ICAO DTC — govern mobile driver’s licenses (mDL) and Digital Travel Credentials, ensuring global interoperability across borders and transport systems.

DIDComm and DID methods — specify how secure, peer-to-peer communication and decentralized identifiers work.

OpenID for Verifiable Credentials (OID4VC and OID4VP) — bridges decentralized identity with mainstream authentication systems like OAuth and OpenID Connect.

Each of these comes with its own working groups, test suites, and compliance updates. Building your own system means keeping pace with all of them and making sure your implementation doesn’t break every time a standard changes.
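To make the first of those standards concrete, here is the rough shape of a W3C Verifiable Credential (VCDM 2.0 style) as a TypeScript object. All values are illustrative, and a real credential also carries a cryptographic proof:

```typescript
// Illustrative only: field values are placeholders, and the proof
// (e.g. a Data Integrity proof or enveloping signature) is omitted.
const credential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "EmployeeCredential"],
  issuer: "did:example:employer",
  validFrom: "2025-10-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:holder",
    department: "engineering",
  },
};
```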

With off-the-shelf, you implement in days

Indicio Proven® eliminates the DIY risk. You have a way to start implementing a POC in days, pilot in weeks, launch in months. We’ve spent years doing the heavy lifting so you don’t have to. It’s the mature, field-tested Verifiable Credential infrastructure that governments, airports, and financial institutions already use.

Instead of building from scratch, you have everything you need to start building a solution, a product, or a service so your team is free to focus on things that make you money.

Indicio Proven can already handle country-level deployments and multi-credential workflows. It has been DPIA’d for GDPR. It comes with document validation and biometric authentication, a white-label digital wallet if you need one, a mobile SDK to add Verifiable Credentials to your apps. We’ve already mastered:

Multiple credential formats (AnonCreds, SD-JWT VC, JSON-LD, mdoc/mDL)
DIDComm and OID4VC/OID4VP communications protocols
Digital Travel Credentials aligned with ICAO DTC-1 and DTC-2 specifications
Decentralized ecosystem governance
Hosting on premise, in the cloud, or as a SaaS product
A global, enterprise-grade blockchain-based distributed ledger for anchoring credentials
Certified training in every aspect of decentralized identity
Support packages
Continuous updates

In one package, you get everything you need to build, deploy, and stay current with evolving standards, so your team doesn’t have to chase every update.

Deploy with confidence

There’s no shame in DIY, but for Verifiable Credentials, the smarter move is to build on top of something that already works. Indicio does the heavy lifting so you can focus on what matters: using trusted digital identity to deliver value to your users. A Verifiable Credential system should give you trust, not technical debt.

In short: don’t reinvent the tech. Build with what’s already proven.

Want to do it right the first time? Let’s talk.

The post Your authentication dilemma: DIY or off-the-shelf decentralized identity? appeared first on Indicio.


Dock

What We Learned Showing Digital IDs for Local Government


In a recent client call, we were asked whether our platform could help a local government issue digital IDs. 

To answer that, Richard Esplin (Head of Product) put together a live demo.

Instead of complex architectures or long timelines, he showed how a city could issue a digital residency credential and use it instantly across departments, from getting a library card to scheduling trash pickup.

The front end for the proof-of-concept was spun up in an afternoon with an AI code generator. 

Behind the scenes, we handled verifiable credential issuance, verification, selective disclosure, revocation, and ecosystem governance, proving that governments can move from paper processes to reusable, privacy-preserving digital IDs in days, not months.


From ID uploads to VPN downloads: The UK’s digital rebellion


The UK's Online Safety Act triggered a staggering 1,800% surge in VPN signups within days of implementation.

The UK’s Online Safety Act was introduced to make the internet “safer,” especially for children. It forces websites and platforms to implement strict age verification measures for adult and “harmful” content, often requiring users to upload government IDs, credit cards, or even biometric scans.

While the goal is protection, the method feels intrusive. 

Suddenly, every UK citizen is being asked to share sensitive identity data with third-party verification companies just to access certain sites.

The public response was immediate. 

Within days of implementation, the UK saw a staggering 1,800% surge in VPN signups. 

ProtonVPN jumped to the #1 app in the UK App Store. NordVPN reported a 1,000% surge. In fact, four of the top five free iOS apps in the UK were VPNs. 

Millions of people literally paid to preserve their privacy rather than comply.

This backlash reveals a fundamental flaw in how age verification was implemented.

People are rejecting what they perceive to be privacy-invasive ID uploads. They don’t want to hand over passports, driver’s licenses, or facial scans just to browse.

Can we blame them?

The problem isn’t age verification itself. The problem is the method, which pushes people to circumvent the rules with VPNs or even fake data.

But here’s the thing: we already have better options.

Government-issued digital IDs already exist. Zero-knowledge proofs let you prove you’re 18+ without revealing who you are. Verifiable credentials combine reliability (government-backed trust) with privacy by design.

With this model, the website never sees your personal data. 

The check is still secure, government-backed, and reliable, without creating surveillance or new honeypots of sensitive data.
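
To make that pattern concrete, here is a minimal Python sketch (using the cryptography package) of the "prove the predicate, not the person" idea. It is a toy stand-in rather than a real zero-knowledge proof: a hypothetical issuer signs a bare over-18 attestation, and the website verifies the signature without ever seeing a birthdate. All names are illustrative.

# Toy sketch of privacy-preserving age checking (NOT a real zero-knowledge
# proof): a hypothetical government issuer signs a bare "over_18" claim, so
# the website only ever sees a yes/no predicate plus a verifiable signature.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()   # the issuer's signing key
issuer_public = issuer_key.public_key()     # published for all verifiers

# Issuance: the user's wallet stores a minimal, signed predicate credential.
claim = json.dumps({"predicate": "over_18", "value": True}).encode()
signature = issuer_key.sign(claim)

# Verification: the site checks the issuer's signature, nothing more.
def site_accepts(claim: bytes, signature: bytes) -> bool:
    try:
        issuer_public.verify(signature, claim)   # raises if tampered
        return json.loads(claim)["value"] is True
    except Exception:
        return False

print(site_accepts(claim, signature))   # True, with no identity data shared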

The VPN surge is proof that people value their digital privacy so much that they’ll pay for it.

If governments want compliance and safety, they need to meet people where they are: with solutions that respect privacy as much as protection.

The UK’s privacy backlash demonstrates exactly why verifiable ID credentials are the way forward. 

They can resolve public resistance while maintaining both effective age checks and digital rights.


Why Derived Credentials Are the Future of Digital ID

In our recent live podcast, Richard Esplin (Dock Labs) spoke with Andrew Hughes (VP of Global Standards, FaceTec) and Ryan Williams (Program Manager of Digital Credentialing, AAMVA) about the rollout of mobile driver’s licenses (mDLs) and what comes next.

One idea stood out: derived credentials.

mDLs are powerful because they bring government-issued identity into a digital format. 

But in practice, most verifiers don’t need everything on your driver’s license. 

A student bookstore doesn’t need your address; it only needs to know that you’re enrolled.

That’s where derived credentials come in. 

They allow you to take verified data from a root credential like an mDL and create purpose-specific credentials:

- A student ID for campus services
- An employee badge for workplace access
- A travel pass or loyalty credential

Andrew put it simply: if you don’t need to use the original credential with everything loaded into it, don’t. 

Ryan added that the real benefit is eliminating unnecessary personal data entirely, only passing on what’s relevant for the transaction.

Derived credentials also make it possible to combine data from multiple credentials into one, enabling new use cases. 

For example, a travel credential could draw on both a government-issued ID and a loyalty program credential, giving the verifier exactly what they need in a single, streamlined interaction.

This approach flips the model of identity sharing. 

Instead of over-exposing sensitive details, derived credentials enable “less is more” identity verification: stronger assurance for the verifier, greater privacy for the user.
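
As a rough illustration of that model, the sketch below derives a purpose-specific credential from a root credential by copying only the fields a verifier needs and re-signing the smaller bundle. The field names and the single "derivation signer" are assumptions for illustration, not the actual mDL or ISO 18013 flow.

# Illustrative sketch: derive a purpose-specific credential from a root
# credential (e.g., an mDL), keeping only the fields the verifier needs.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

root_mdl = {                                   # verified root credential
    "name": "A. Rider", "address": "12 Main St",
    "birthdate": "2004-05-01", "enrolled_university": "State U",
}

derivation_key = Ed25519PrivateKey.generate()  # hypothetical trusted signer

def derive_credential(root: dict, purpose: str, fields: list[str]) -> dict:
    """Copy only the requested fields and sign the smaller credential."""
    subset = {field: root[field] for field in fields}
    payload = json.dumps({"purpose": purpose, "claims": subset}).encode()
    return {"payload": payload, "proof": derivation_key.sign(payload)}

# The campus bookstore credential carries enrollment only -- no address.
student_id = derive_credential(root_mdl, "campus-services",
                               ["enrolled_university"])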

Looking ahead, Andrew revealed that the ISO 18013 Edition 2 will introduce support for revocation and zero-knowledge proofs, enhancements that will make derived credentials even more practical and privacy-preserving.

Bottom line: mDLs are an important foundation, but the everyday future of digital ID lies in derived credentials.


Thales Group

Thales and StandardAero’s StableLight™ Autopilot chosen by leading helicopter operator Heli Austria

09 Oct 2025

- Heli Austria selects the StableLight 4-Axis Autopilot for its single-engine H125 helicopter
- Next-gen safety & performance: cutting pilot workload and boosting mission capability
- Proven FAA-certified system now entering EASA validation for European operators

Thales and StandardAero are pleased to announce the StableLight 4-Axis Autopilot system has been selected by Heli Austria, a leading European helicopter operator. The system is currently being installed on one of Heli Austria’s H125 helicopters at their facility in Sankt Johann im Pongau, Salzburg, Austria.

Based on Thales’s Compact Autopilot System, the StableLight 4-Axis Autopilot system combines several robust features into a lightweight system ideally suited for light category rotorcraft. The system transforms the flight control experience of the helicopter with its stability augmentation. Adding stabilized climb flight attitude recovery, auto hover, and a wide range of other sophisticated features significantly decreases pilot workload. This enhances mission capability and can help to reduce risks in critical flight phases and adverse conditions, such as inadvertent entry into Instrument Meteorological Conditions (IIMC). StableLight has a Supplemental Type Certificate (STC) from the US Federal Aviation Administration (FAA).

“Operational and pilot safety are very important to Heli Austria. We have been eagerly awaiting the opportunity to be the European launch customer of this proven product. The added safety features and reliability are a welcomed advantage to our pilots.” - Roy Knaus, CEO, Heli Austria.

“At Thales, integrating cutting-edge technologies to deliver safety and trust is fundamental to who we are. By uniting Thales’s advanced expertise with StandardAero’s deep industry knowledge, we harness a powerful combination to provide Heli Austria’s pilots with the autopilot solution they have eagerly awaited.” - Florent Chauvancy, Vice President Flight Avionics Activities, Thales.

“We are thrilled to be working with Heli Austria, a renowned operator in the European market. The adoption of our StableLight autopilot system demonstrates their commitment to safety and innovation. Once certified by EASA, European H125 operators will be able to reach a new level of safety and efficiency of helicopter operations with the StableLight system.” - Andrew Park, General Manager, StandardAero.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion. The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About StandardAero

StandardAero is a leading independent pure-play provider of aerospace engine aftermarket services for fixed- and rotary-wing aircraft, serving the commercial, military and business aviation end markets. StandardAero provides a comprehensive suite of critical, value-added aftermarket solutions, including engine maintenance, repair and overhaul, engine component repair, on-wing and field service support, asset management and engineering solutions. StandardAero is an NYSE listed company, under the symbol SARO. For more information about StandardAero, go to www.standardaero.com.


Ocean Protocol

Ocean Protocol Foundation withdraws from the Artificial Superintelligence Alliance

$OCEAN can be de-pegged and re-listed on exchanges

Singapore, 9 October 2025

Effective immediately, Ocean Protocol Foundation has withdrawn its designated directors and resigned as a member from the Superintelligence Alliance (Singapore) Ltd, aka the “ASI Alliance”. The ASI Alliance was founded on voluntary association and collaboration to promote decentralized AI through a token merger.

Ocean has worked closely with the other members of the Alliance over the past year to pursue technology integrations, record joint podcasts, and run community events such as the Superintelligence Summit and the ETHGlobal NYC hackathon.

Moving forward, funding for future Ocean development efforts is fully secured. A portion of profits from spin-outs of Ocean-derived technologies will be used to buy back and burn $OCEAN, permanently and continually reducing the $OCEAN supply.

Since July 2024, 81% of the $OCEAN token supply has been converted into $FET, yet 37,334 $OCEAN token holders, representing 270 million $OCEAN, have not yet converted to $FET on the existing $OCEAN token contract (0x967da … b9F48).

As independent economic actors, former $OCEAN holders can fully decide to continue to hold $FET or not.

At the time of this announcement, the token bridge, fully managed and controlled by Fetch.ai, remains open for $OCEAN holders to convert to $FET at the rate of 0.433226 $FET/$OCEAN.

Any exchange that has de-listed $OCEAN may assess whether they would like to re-list the $OCEAN token. Acquirers can currently exchange for $OCEAN on Coinbase, Kraken, UpBit, Binance US, Uniswap, and SushiSwap.

Community questions can be sent to https://t.me/OceanProtocol_Community.

Press questions can be sent to inquiries@oceanprotocol.com.

Ocean Protocol Foundation withdraws from the Artificial Superintelligence Alliance was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


DF158 Completes and DF159 Launches

Predictoor DF158 rewards available. DF159 runs October 9th — October 16th, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor.

Data Farming Round 158 (DF158) has completed.

DF159 is live today, October 9th. It concludes on October 16th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF159 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF159

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and ASI Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF158 Completes and DF159 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

How to Tame Varnish Memory Usage Safely

How Fastly turned a shelved Varnish idea into 25% fewer memory writes and real system-wide gains.

Wednesday, 08. October 2025

Ockam

The Art of Building in Public

Turn Your Journey Into Your Unfair Advantage

Continue reading on Medium »


liminal (was OWI)


Building Trust in Agentic Commerce

Would you let an AI agent spend your company’s quarterly budget, no questions asked? Most leaders I talk to aren’t there yet. Our research shows that only 8% of organizations are using AI agents in the long term, and the gap isn’t due to a lack of awareness. It’s trust.

If agentic AI is going to matter in e-commerce, we need guardrails that make it safe, compliant, and worth the operational risk. That is where authentication, authorization, and verification come in. Think identity, boundaries, and proof. Until teams can check those boxes with confidence, adoption will stall.

What is an AI agent, and why does it matter in e-commerce?

At its simplest, an AI agent is software that can act on instructions without waiting for every step of human input. Instead of a static chatbot or recommendation engine, an agent can take context, make a decision, and carry out an action.

In e-commerce, that could mean:

- Verifying a buyer’s identity before an agent executes a purchase on their behalf
- Allowing an agent to issue refunds up to a set limit, but requiring human approval beyond that threshold
- Confirming that an AI-driven order or promotion matches both customer intent and compliance rules before it goes live

The upside is clear: faster processes, lower manual overhead, and customer experiences that feel effortless. But the risk is just as clear. If an agent acts under the wrong identity, oversteps its boundaries, or produces outcomes that don’t match user intent, the impact is immediately evident in increased fraud losses, compliance failures, or customer churn.

That’s why the industry is focusing on three pillars: authentication, authorization, and verification. Without them, agentic commerce cannot scale.

The adoption gap

Analysts project the market for autonomous agents will grow to $70B+ by 2030. Buyers want speed, automation, and scale, but customers are not fully on board. In fact, only 24% of consumers say they are comfortable letting AI complete a purchase on their own.

That consumer hesitation is the critical signal. Ship agentic commerce without shipping trust, and you don’t just risk adoption, you risk chargebacks, brand erosion, and an internal rollback before your pilot even scales.

What’s broken today

Three realities keep coming up in my conversations with product, fraud, and risk leaders:

- Attack surface expansion. Synthetic identity and deepfakes raise the baseline risk. 71% of organizations say they lack the AI/ML depth to defend against these tactics.
- Confidence is slipping. Trust in fully autonomous agents dropped from 43% to 27% in one year, even among tech-forward orgs.
- Hype hurts. A meaningful share of agent projects will get scrapped by 2027 because teams cannot tie them to real value or reliable controls.

The regulatory lens makes this sharper. Under the new EU AI Act, autonomous systems are often treated as high-risk, requiring transparency, human oversight, and auditability. In the U.S., proposals like the Algorithmic Accountability Act and state laws such as the Colorado AI Act point in the same direction—demanding explainability, bias testing, and risk assessments. For buyers, that means security measures are not only best practice but a growing compliance requirement.

When I see this pattern, I look for the missing scaffolding. It is almost always the same three blanks: who is the agent, what can it do, and did it do the right thing.

The guardrails that matter

If you are evaluating solutions, anchor on these three categories. This is the difference between a flashy demo and something you can put in production.

Authentication

Prove the agent’s identity before you let it act. That means credentials for agents, not just users. It means attestation, issuance, rotation, and revocation. It means non-repudiation, so you can tie a transaction to a specific agent and key.

What to look for:

- strong, verifiable agent identities and credentials
- support for attestation, key management, rotation, and kill switches
- logs that let you prove who initiated what, and when
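
As a deliberately simplified sketch of what this can look like in code: give each agent its own keypair, have the agent sign every action request, and let a gateway verify the signature against a registry of attested keys before anything executes. The registry and request shape below are assumptions for illustration, not any vendor's API.

# Simplified sketch of agent authentication: every action request is signed
# by the agent's own key and verified against a registry of attested keys.
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

agent_key = Ed25519PrivateKey.generate()
AGENT_REGISTRY = {"purchasing-agent-7": agent_key.public_key()}  # attested keys

def sign_action(agent_id: str, action: dict) -> dict:
    """Agent side: bind the action, the agent identity, and a timestamp."""
    body = json.dumps({"agent": agent_id, "action": action,
                       "ts": time.time()}).encode()
    return {"body": body, "sig": agent_key.sign(body)}

def gateway_verify(request: dict) -> dict:
    """Gateway side: raises if the agent is unknown or the signature fails."""
    claims = json.loads(request["body"])
    AGENT_REGISTRY[claims["agent"]].verify(request["sig"], request["body"])
    return claims   # safe to log: who initiated what, and when

request = sign_action("purchasing-agent-7", {"op": "order", "amount": 120.0})
print(gateway_verify(request)["agent"])   # purchasing-agent-7
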
Authorization

Set boundaries that are understood by both machines and auditors. Map policies to budgets, scopes, merchants, SKUs, and risk thresholds. Keep it explainable so a human can reason about the blast radius.

What to look for:

- policy engines that accommodate granular scopes and spend limits
- runtime constraints, approvals, and step-up controls
- simulation and sandboxes to test policies before they go live
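
A minimal sketch of that idea, with policy fields invented for illustration: map each agent to explicit scopes, a per-transaction limit, and a human-approval threshold, and make every denial state its reason so both machines and auditors can follow it.

# Minimal sketch of explainable agent authorization. Policy fields are
# illustrative; the point is granular scopes, limits, and readable denials.
from dataclasses import dataclass

@dataclass
class Policy:
    scopes: set            # e.g., {"ordering", "refunds"}
    per_txn_limit: float   # hard ceiling per action
    approval_above: float  # human sign-off threshold

POLICIES = {"purchasing-agent-7": Policy({"ordering"}, 500.0, 200.0)}

def authorize(agent_id: str, scope: str, amount: float):
    policy = POLICIES.get(agent_id)
    if policy is None:
        return False, "unknown agent"
    if scope not in policy.scopes:
        return False, f"scope '{scope}' not granted"
    if amount > policy.per_txn_limit:
        return False, f"{amount} exceeds per-transaction limit"
    if amount > policy.approval_above:
        return True, "allowed, pending human approval"
    return True, "allowed"

print(authorize("purchasing-agent-7", "refunds", 50.0))  # denied, with reason
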
Verification

Trust but verify. Confirm that outcomes align to user intent, compliance, and business rules. You need evidence that holds up in a post-incident review.

Verification isn’t just operational hygiene. Under privacy rules like GDPR Article 22, individuals have a right to safeguards when automated systems make decisions about them. That means the ability to explain, evidence, and roll back agent actions is not optional.

What to look for:

- transparent audit trails and readable explanations
- outcome verification against explicit user directives
- real-time anomaly detection and rollback paths
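
One piece of that evidence trail can be sketched simply: a hash-chained audit log, where each record commits to the one before it, so edits or deletions after an incident are detectable. This illustrates the tamper-evidence principle only, not a full outcome-verification system.

# Sketch of a tamper-evident audit trail: each record hashes the previous
# one, so an auditor can detect edits or deletions in a post-incident review.
import hashlib, json, time

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, agent_id: str, outcome: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"agent": agent_id, "outcome": outcome,
                "ts": time.time(), "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode())
        self.entries.append({**body, "hash": digest.hexdigest()})

    def verify_chain(self) -> bool:
        prev = "genesis"
        for entry in self.entries:
            body = {k: entry[k] for k in ("agent", "outcome", "ts", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != recomputed:
                return False
            prev = entry["hash"]
        return True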

If a vendor cannot demonstrate these three pillars working together, you are buying a future incident.

Real-world examples today

Real deployments are still early, but they show what’s possible when trust is built in.

- ChatGPT Instant Checkout marks one of the first large-scale examples of agentic commerce in production. Powered by the open-source Agentic Commerce Protocol, co-developed with Stripe, it enables users in the U.S. to buy directly from Etsy sellers in chat, with Shopify merchants like Glossier, SKIMS, and Vuori coming next. Each purchase is authenticated, authorized, and verified through secure payment tokens and explicit user confirmation, demonstrating how agentic AI can act safely within clear trust boundaries.
- Konvo AI automates ~65% of customer queries for European retailers and converts ~8% of those into purchases, using agents that can both interact with customers and resolve logistics issues.
- Visa Intelligent Commerce for Agents is building APIs that let AI agents make purchases using tokenized credentials and strong authentication, showing how payment-grade security can extend to autonomous actions.
- Amazon Bedrock AgentCore Identity provides identity, access control, and credential vaulting for AI agents, giving enterprises the tools to authenticate and authorize agent actions at scale.
- Agent Commerce Kit (ACK-ID) demonstrates how one agent can verify the identity and ownership of another before sensitive interactions, laying the groundwork for peer-to-peer trust in agentic commerce.

These aren’t fully autonomous across all commerce workflows, but they demonstrate that agentic AI can deliver value when authentication, authorization, and verification are in place.

What good looks like in practice

Buyers ask for a checklist. I prefer evaluation cues you can test in a live environment:

- Accuracy and drift. Does the system maintain performance as the catalog, promotions, and fraud patterns shift?
- Latency and UX. Do the controls keep decisions fast enough for checkout and service flows?
- Integration reality. Can this plug into your identity, payments, and risk stack without six months of glue code?
- Explainability. When an agent takes an action, can a product manager and a compliance lead both understand why?
- Recourse. If something goes wrong, what can you unwind, how quickly can you roll it back, and what evidence exists to explain the decision to auditors, customers, or regulators?

The strongest teams will treat agent actions like high-risk API calls. Every action is authenticated, every scope is authorized, and every outcome is verified. The tooling makes that visible.

Why this matters right now

It is tempting to wait. The reality is that agentic workflows are already creeping into back-office operations, customer onboarding, support, and payments. Early movers who get trust right will bank the upside: lower manual effort, faster cycle time, and a margin story that survives scrutiny.

The inverse is also true. Ship without safeguards, and you’ll spend the next quarter explaining rollback plans and chargeback spikes. Customers won’t give you the benefit of the doubt. Neither will your CFO.

A buyer’s short list

If you are mapping pilots for Q4 and Q1 2026, here’s a simple way to keep the process grounded:

- define the jobs to be done
- write the rules first
- simulate and stage
- measure what matters
- keep humans in the loop
- regulatory readiness: confirm vendors can meet requirements for explainability, audit logs, and human oversight under privacy rules

The road ahead

Agentic commerce is not a future bet. It is a present decision about trust. The winners will separate signal from noise, invest in authentication, authorization, and verification, and scale only when those pillars are real.

At Liminal, we track the vendors and patterns shaping this shift. If you want a deeper dive into how teams are solving these challenges today, we’re bringing together nine providers for a live look at the authentication, authorization, and verification layers behind agentic AI. No pitches, just real solutions built to scale safely.

▶️ Want to know more about it? Watch our Liminal Demo Day: Agentic AI in E-Commerce recording, and explore how leading vendors are tackling this challenge.

My take: The winners won’t be the first to launch AI agents. They’ll be the first to prove their agents can be trusted at scale.

The post Building Trust in Agentic Commerce appeared first on Liminal.co.


FastID

The CDN Showdown: Fastly Outpaces Akamai in Real-World Performance

As user expectations rise and milliseconds define outcomes, choosing a modern, high-speed CDN is no longer optional but a strategic imperative. Independent Google data shows Fastly consistently outperforms Akamai in real-world web performance.

In AI We Trust? Increasing AI Adoption in AppSec Despite Limited Oversight

AI adoption in AppSec is soaring, yet oversight lags. Explore the paradox of trust vs. risk, false positives, and the future of AI in application security.

Tuesday, 07. October 2025

Anonym

6 Ways Insurers Can Differentiate Identity Theft Insurance  

Identity theft is one of the fastest-growing financial crimes worldwide, and consumers are more aware of the risks than ever before. But in an increasingly competitive market, offering “basic” identity theft insurance is no longer enough. To stand out, insurers need to think beyond the minimum by focusing on product innovation, customer experience, and trust. 

Below, we explore six powerful ways insurers can differentiate their identity theft insurance offerings.  

1. Innovate with product features & coverage  

Most identity theft insurance policies cover financial losses and restoration costs, but few go beyond reactive measures to prevent identity theft from occurring. To gain a competitive edge, insurers can expand coverage to offer proactive identity protection solutions, such as:  

- Alternative phone numbers and emails to keep customer communications private and reduce phishing risks.
- A password manager to help policyholders secure accounts and prevent credential-based account takeovers.
- VPN for private browsing to protect sensitive activity on public Wi-Fi and stop data interception.
- Virtual cards that protect payment details and shield credit card numbers from fraudsters.
- Real-time breach alerts so customers can take immediate action when their data is compromised.
- Personal data removal tools to wipe sensitive information from people-search sites and reduce exposure.
- A privacy-first browser with ad and tracker blocking to prevent data harvesting and malicious tracking.

By proactively covering these risks and offering early detection, insurers not only reduce claims costs but also create meaningful value for customers. 

2. Provide strong restoration & case management 

Customers are often overwhelmed and unsure what to do next when their identity is stolen. Insurers can become their most trusted ally by offering: 

- A dedicated case manager who works with them from incident to resolution.
- A restoration kit with step-by-step instructions, pre-filled forms, and key contacts.
- 24/7 access to a helpline for guidance and reassurance.

A study from the University of Edinburgh shows that case management can reduce the cost burden of an incident by up to 90%. It also boosts customer satisfaction and loyalty, which is a critical differentiator in a market where switching providers is easy. 

3. Build proactive prevention & education programs  

Most consumers only think about identity protection after an incident occurs. Insurers can flip this dynamic by helping customers stay ahead of threats. 

Ideas include:  

- Regular scam alerts and phishing education campaigns.
- Tools for identity monitoring, breach notifications, and credit report access.
- Dashboards that visualize a customer’s digital exposure, allowing them to see their risk level.
- Ongoing educational content such as webinars, how-to guides, and FAQs.

Short, targeted online fraud education lowers the risk of falling for scams by roughly 42–44% immediately after training. This finding is based on a study that used a 3-minute video or short text intervention with 2,000 U.S. adults. 

4. Offer flexible pricing & bundling options

Flexibility is key to reaching a broader customer base. Instead of a one-size-fits-all product, insurers can:  

- Offer tiered plans (basic, mid, premium) with incremental features.
- Bundle identity theft insurance with homeowners, renters, and other policies.
- Provide family plans that protect multiple household members.

This strategy serves both budget-conscious and premium segments. 

5. Double down on customer experience 

Trust is one of the most important factors consumers consider when buying identity theft insurance. Insurers can build confidence by:   

- Using clear, jargon-free language in policy documents.
- Responding quickly and resolving cases smoothly.
- Displaying trust signals, such as third-party audits, security certifications, and privacy commitments.
- Publishing reviews, testimonials, and case studies that show real results.

A better experience leads to higher Net Promoter Scores (NPS), lower churn rates, and a long-term competitive advantage.   

6. Leverage partnerships

Working with technology partners can enhance insurers’ offerings without straining internal resources. Here are some examples of what partners can do:   

- Custom-branded dashboards and mobile apps that seamlessly integrate into your existing customer experience, keeping your brand front and center.
- Privacy status at a glance, indicating to customers whether their information has been found in data breaches.
- Management of alternative phone numbers and emails, allowing customers to create, update, or retire these directly in the portal.

By offering these features through a white-labeled experience, insurers provide customers with daily, visible value while partners, like Anonyome Labs, handle the privacy technology behind the scenes.

Outside of white-label opportunities, strategic partnerships and endorsements also strengthen offerings. Collaborations with credit bureaus, cybersecurity firms, and privacy organizations expand capabilities and build credibility. 

Powering the next generation of identity theft insurance  

The future of identity theft insurance is proactive, not reactive. Insurers who move beyond basic reimbursement to offer daily-use privacy and security tools will lead the industry in trust, engagement, and profitability. Anonyome Labs makes this shift seamless with a fully white-labeled Digital Identity Protection suite that includes alternative phone numbers and emails, password managers, VPNs, virtual cards, breach alerts, and tools for removing personal data. 

By offering these proactive protections, you provide customers with peace of mind, prevent costly fraud incidents before they occur, and unlock new revenue opportunities through subscription-based services. 

By partnering with Anonyome Labs, you can transform identity theft insurance into a daily value driver, positioning your company as a market leader in proactive protection. 

Learn more by getting a demo of our Digital Identity Protection suite today! 

The post 6 Ways Insurers Can Differentiate Identity Theft Insurance   appeared first on Anonyome Labs.


Spruce Systems

Foundations of Decentralized Identity

This article is the first installment of our series: The Future of Digital Identity in America.

What is Decentralized Identity?

Most of us never think about identity online. We type in a username, reuse a password, or click “Log in with Google” without a second thought. Identity, in the digital world, has been designed for convenience. But behind that convenience lies a hidden cost: surveillance, lock-in, and a system where we don’t really own the data that defines us.

Digital identity today is built for convenience, not for people.

Decentralized identity is a way of proving who you are without relying on a single company or government database to hold all the power. Instead of logging in with Google or handing over a photocopy of your driver’s license, you receive digital verifiable credentials, digital versions of IDs, diplomas, or licenses, directly from trusted issuers like DMVs, universities, or employers. You store these credentials securely in your own digital wallet and decide when, where, and how to share them. Each credential is cryptographically signed, so a verifier can instantly confirm its authenticity without needing to contact the issuer. The result is an identity model that’s portable, privacy-preserving, and designed to give control back to the individual rather than intermediaries.

Decentralized identity means you own and control your credentials, like IDs or diplomas, stored in your wallet, not in someone else’s database.
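
For readers who like to see the moving parts, here is a minimal Python sketch of that issuer-to-wallet-to-verifier flow. The claim names are invented for illustration, and real systems use standardized formats such as W3C Verifiable Credentials; the point is that the verifier checks the issuer's signature locally, with no callback to the issuer.

# Minimal sketch of the issuer -> wallet -> verifier flow: the verifier
# validates the issuer's signature offline, without contacting the issuer.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

dmv_key = Ed25519PrivateKey.generate()    # issuer (e.g., a DMV)
dmv_public = dmv_key.public_key()         # published in a trust registry

# Issuance: the wallet stores the signed credential.
credential = json.dumps({"type": "driver_license", "state": "CA"}).encode()
wallet = {"credential": credential, "proof": dmv_key.sign(credential)}

# Presentation: the holder decides to share; the verifier checks locally.
def verifier_accepts(presented: dict) -> bool:
    try:
        dmv_public.verify(presented["proof"], presented["credential"])
        return True
    except Exception:
        return False

print(verifier_accepts(wallet))   # True -- authentic without calling the DMV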

In this series, we’ll explore why decentralized identity matters, how policymakers are responding, and the technology making it possible. But before diving into policy debates or technical standards, it’s worth starting with the foundations: why identity matters at all, and what it means to build a freer digital world around it.

From Borrowed Logins to Borrowed Autonomy

The internet we know today was built on borrowed identity. Early online gaming systems issued usernames, turning every move into a logged action inside a closed sandbox. Social media platforms went further, normalizing surveillance as the price of connection and building entire economies on behavioral data. Even in industries like healthcare or financial services, “identity” was usually just whatever proprietary account a platform would let you open, and then hold hostage.

Each step offered convenience, but at the cost of autonomy. Accounts could be suspended. Data could be resold. Trust was intermediated by companies whose incentives rarely aligned with their users. The result was an internet where identity was an asset to be monetized, not a right to be owned.

On today’s internet, identity is something you rent, not something you own.

Decentralized identity represents a chance to reverse that arc. Instead of treating identity as something you rent, it becomes something you carry. Instead of asking permission from platforms, platforms must ask permission from you.

Why Identity Is a Pillar of Free Societies

This isn’t just a technical argument - it’s a philosophical and economic one. Identity is at the center of how societies function.

Economists have long warned of the dangers of concentrated power. Adam Smith argued that monopolies distort markets. Milton Friedman cautioned against regulatory capture. Friedrich Hayek showed that dispersed knowledge, not central planning, leads to better decisions. Ronald Coase explained how lowering transaction costs opens new forms of cooperation.

Philosophers, too, placed identity at the heart of freedom. John Locke’s principle of self-ownership and John Stuart Mill’s defense of liberty both emphasize that individuals must control what they disclose, limited only by the harm it might cause others.

Decentralized identity operationalizes these ideas for the digital era. By distributing trust, it reduces dependency on monopolistic platforms. By lowering the cost of verification, it unlocks new forms of commerce. By centering autonomy, it ensures liberty is preserved even as interactions move online.

The Costs of Getting It Wrong

American consumers and institutions are losing more money than ever to fraud and cybercrime. In 2024 alone, the FBI’s Internet Crime Complaint Center (IC3) reported that scammers stole a record $16.6 billion, a stark 33% increase from the previous year. Meanwhile, the FTC reports that consumers lost over $12.5 billion to fraud in 2024, a 25% rise compared to 2023.

On the organizational side, data breach costs are soaring. IBM’s 2025 Cost of a Data Breach Report shows that the average cost of a breach in the U.S. has reached a record $10.22 million, driven by higher remediation expenses, regulatory penalties, and the deepening complexity of attacks.

Identity theft has become one of the fastest-growing crimes worldwide. Fake accounts drain social programs. Fraudulent applications weigh down financial institutions. Businesses lose customers, governments lose trust, and people lose confidence that digital systems are designed with their interests in mind.

The Role of AI: Threat and Catalyst

As artificial intelligence advances, it’s empowering fraudsters with tools that make identity scams faster, more automated, and more believable. According to a Federal Reserve–affiliated analysis, synthetic identity fraud, where criminals stitch together real and fake information to fabricate identities, reached a staggering $35 billion in losses in 2023. These figures highlight the increasing risk posed by deepfakes and AI-generated personas in undermining financial systems and consumer trust.

And at the frontline of consumer protection, the Financial Crimes Enforcement Network (FinCEN) has warned that criminals are increasingly using generative AI to create deepfake videos, synthetic documents, and realistic audio to bypass identity checks, evade fraud detection systems, and exploit financial institutions at scale.

AI doesn’t just make fraud easier—it makes strong identity more urgent.

As a result, AI looms over every digital identity conversation. On one side, it makes fraud easier: synthetic faces, forged documents, and bots capable of impersonating humans at scale. On the other, it makes strong identity more urgent and more possible.

Digital Credentials: The Building Blocks of Trust

That’s why the solution isn’t more passwords, scans, or one-off fixes - it’s a new foundation built on verifiable digital credentials. These are cryptographically signed attestations of fact - your age, your license status, your professional certification - that can be presented and verified digitally.

Unlike static PDFs or scans, digital credentials are tamper-proof. They can’t be forged or altered without detection. They’re also user-controlled: you decide when, where, and how to share them. They also support selective disclosure: you can prove you’re over 21 without sharing your exact birthdate, or prove your address is in a certain state without exposing the full line of your home address.

Verifiable digital credentials are tamper-proof, portable, and under the user’s control—an identity model built for trust.

Decentralized identity acts like an “immune system” for AI. By binding credentials to real people and organizations, it distinguishes between synthetic actors and verified entities. It also makes possible a future where AI agents can act on your behalf - booking travel, filling out forms, negotiating contracts - while remaining revocable and accountable to you.

Built on open standards, digital credentials are globally interoperable. Whether issued by a state DMV, a university, or an employer, they can be combined in a wallet and presented across contexts. For the first time, people can carry their identity across borders and sectors without relying on a single gatekeeper.

From Pilots to Infrastructure

Decentralized identity isn’t just theory - it’s already being deployed.

In California, the DMV Wallet has issued more than two million mobile driver’s licenses in under 18 months, alongside blockchain-backed vehicle titles for over 30 million cars. Utah has created a statewide framework for verifiable credentials, with privacy-first principles written directly into law. SB 260 prohibits forced phone handovers, bans tracking and profiling, and mandates that physical IDs remain an option. At the federal level, the U.S. Department of Homeland Security is piloting verifiable digital credentials for immigration, while NIST’s NCCoE has convened banks, state agencies, and technology providers, including SpruceID, to define standards. Over 250 TSA checkpoints already accept mobile IDs from seventeen states, and adoption is expected to double by 2026.

These examples show that decentralized identity is moving from pilot projects to infrastructure, just as HTTPS went from niche to invisible plumbing for the web.

Why It Matters Now

We are at a crossroads. On one side, centralized systems continue to create single points of failure - massive databases waiting to be breached, platforms incentivized to surveil, and users with no say in the process. On the other, decentralized identity offers resilience, interoperability, and empowerment.

For governments, it reduces fraud and strengthens democratic resilience. For businesses, it lowers compliance costs and builds trust. For individuals, it restores autonomy and privacy.

This isn’t just a new login model. It’s the foundation for digital trust in the 21st century - the bedrock upon which free societies and vibrant economies can thrive.

This article is part of SpruceID’s series on the future of digital identity in America.

Subscribe to be notified when we publish the next installment.



Ockam

The Content Creation System That Multiplies Your Output by 7x

How to Use Human + AI to Do the Work of 7 People

Continue reading on Medium »


LISNR

The New Transit Security Mandate

How Hardware-Agnostic Authentication Solves Fraud and Revenue Leakage

The public transit sector is undergoing a significant digital transformation, consolidating operations under the vision of Mobility-as-a-Service (MaaS). This shift promises passenger convenience through integrated mobile ticketing and Account-Based Ticketing (ABT) systems, but it simultaneously introduces a critical vulnerability: the rising threat of mobile fraud and revenue leakage.

For transit operators, the stakes are substantial. Revenue losses from fare evasion and ticket forgery, ranging from simple misuse of paper tickets to sophisticated man-in-the-middle attacks, can significantly impact the sustainability of MaaS and the ability to reinvest in services.

Traditional authentication methods are proving insufficient for the complexity of modern, multimodal transit:

- NFC: Requires significant, capital-intensive infrastructure replacement, which creates a high barrier to entry and slows deployment.
- QR Codes: Prone to fraud, easily duplicated, and add friction, slowing down passenger throughput at peak hours.
- BLE: Relies on robust cellular connectivity, which is often unavailable in critical transit environments, such as underground tunnels or moving vehicles.

The strategic imperative for any transit authority or MaaS provider is to adopt a hardware-agnostic, software-defined proximity verification solution that is secure, fast, and works reliably regardless of network availability.

The Strategic Imperative: Securing the Transaction at the Point of Presence

The sophistication of mobile fraud is escalating, posing a threat to the integrity of digital payment systems. Fraudsters exploit vulnerabilities, such as deferred payment authorization, to use compromised credentials repeatedly.

The solution requires a layer of security that instantly validates both the physical proximity and digital identity of the passenger. LISNR, as a worldwide leader in proximity verification, delivers this capability by transforming everyday audio components into secure transactional endpoints.

Technical Solution: Proximity Authentication with Radius® and ToneLock

LISNR’s technology provides a secure, reliable, and cost-effective foundation for next-generation transit ticketing and ticket validation. This is achieved through the Radius® SDK, which facilitates ultrasonic data-over-sound communication, and the proprietary ToneLock security protocol.

Proximity Validation with Radius

The Radius SDK is integrated directly into the transit agency’s mobile application and installed as a lightweight software component onto existing transit hardware equipped with a speaker or microphone (e.g., fare gates, information screens, on-bus systems).

- Offline Capability: The MaaS application uses ultrasonic audio with user ticket data embedded within for fast data exchange. Crucially, the tone generation and verification process can occur entirely offline, ensuring that ticketing and payment validation remain functional and sub-second fast, even in areas with zero network coverage.
- Hardware Agnostic Deployment: Since Radius only requires a standard speaker and microphone, it eliminates the high cost and complexity of deploying proprietary NFC hardware, allowing for rapid and scalable deployment across an entire fleet or network.

Security for Fraud Prevention

To combat the growing threat of mobile fraud, LISNR enables ecosystem leaders to deploy multiple advanced measures directly into the ultrasonic transaction:

- ToneLock Security: Every Radius transaction can be protected by ToneLock, a proprietary tone security protocol. Only the intended receiver, with the correct, pre-shared key, can demodulate and authenticate the tone.
- AES256 Encryption: LISNR also offers the ability for developers to add AES256 encryption, a security protocol trusted by governments worldwide, to all emitted tones.

By folding these features into mobility ecosystems, transit providers can ensure a secure and scalable solution for their ticketing infrastructure.
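
LISNR's SDK and tone format are proprietary, so as a neutral illustration of the encrypt-then-validate pattern described above, the sketch below protects a ticket payload with AES-256-GCM under a pre-shared key; both sides can run it fully offline. The payload fields, the replay window, and the omitted ultrasonic encoding step are all assumptions, not LISNR's actual API.

# Sketch: encrypt a ticket payload with AES-256-GCM before handing it to a
# (hypothetical, omitted) ultrasonic encoder; the gate validates it offline.
import json, os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

PRE_SHARED_KEY = AESGCM.generate_key(bit_length=256)  # provisioned to gate + app

def build_ticket_payload(ticket_id: str, rider_id: str) -> bytes:
    """App side: the returned bytes are what the tone would carry."""
    aesgcm = AESGCM(PRE_SHARED_KEY)
    nonce = os.urandom(12)
    claims = json.dumps({"ticket": ticket_id, "rider": rider_id,
                         "iat": int(time.time())}).encode()
    return nonce + aesgcm.encrypt(nonce, claims, b"fare-gate-v1")

def validate_ticket_payload(payload: bytes) -> dict:
    """Gate side: decrypts and checks freshness with no network call."""
    aesgcm = AESGCM(PRE_SHARED_KEY)
    nonce, ciphertext = payload[:12], payload[12:]
    claims = json.loads(aesgcm.decrypt(nonce, ciphertext, b"fare-gate-v1"))
    if time.time() - claims["iat"] > 30:   # short replay window (assumed)
        raise ValueError("stale ticket presentation")
    return claims

print(validate_ticket_payload(build_ticket_payload("T-1001", "R-42")))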


The Top Business Values of Ultrasonic Proximity in Transit

For forward-thinking transit agencies and MaaS providers, adopting LISNR’s technology offers tangible operational and financial advantages:

- Reduced Capital and Operational Expenditure. Business Value: Eliminates the need for expensive, proprietary NFC reader hardware replacement and maintenance. Impact on ROI: Lowered infrastructure cost and faster time-to-market for new ticketing solutions.
- Enhanced Security and Revenue Protection. Business Value: ToneLock and encryption provide an advanced and off-network security layer for ticket and payment authentication. Impact on ROI: Significant reduction in fare evasion, fraud, and revenue leakage, directly increasing financial stability.
- Superior Passenger Throughput and Experience. Business Value: Sub-second authentication regardless of connectivity or weather conditions. Impact on ROI: Increased rider throughput and satisfaction, encouraging greater adoption of digital ticketing and MaaS.
- Future-Proof and Scalable Platform. Business Value: Provides a flexible, software-defined foundation that easily integrates with new Account-Based Ticketing (ABT) and payment models. Impact on ROI: Ensures longevity of infrastructure and adaptability to future urban mobility standards.

By integrating the Radius SDK into their existing platform, transit operators secure their revenue, eliminate infrastructure debt, and deliver the seamless, high-security experience modern passengers demand. 

Are you interested in how Radius can provide an additional revenue stream while onboard (i.e., proximity marketing)? Are you using a loyalty system to capture and reward your most loyal riders? Want to learn more about how Radius works in your ecosystem? Fill out the contact form below to get in contact with an ultrasonic expert.

The post The New Transit Security Mandate appeared first on LISNR.


Spherical Cow Consulting

The End of the Global Internet

“The Internet is too big to fail, but it may be becoming too big to hold together as one.”

Many of the people reading this post grew up believing in, and expecting, a single, borderless Internet: a vast network of networks that let us talk, share, learn, and build without arbitrary walls. I like that model, probably because I am a globalist, but I don’t think that’s where the world is heading. In recent years, laws, norms, infrastructure, and power have been pulling in different directions, driving us increasingly toward a fragmented Internet. This is a reality that is shaping how we connect, what tools we use, and who controls what.

In this post, I talk about what fragmentation is, how it is happening, why it matters, and what cracks in the system may also open up room for new kinds of opportunity. It’s a longer post than usual; there’s a lot to think about here.

A Digital Identity Digest: The End of the Global Internet (podcast episode, 16:34)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

What is “fragmentation”?

Fragmentation isn’t a single event with a single definition; it’s a multi-dimensional process. Research has identified at least three overlapping types:

- Technical fragmentation: differences in protocols, infrastructure, censorship, filtering; sometimes entire national “gateways” or shutdowns.
- Regulatory / governmental fragmentation: national laws around data flows, privacy, platform regulation, online safety, and content moderation diverge sharply.
- Commercial fragmentation: companies facing divergent rules in different markets (privacy, liability, content) adapt differently, so global products become “local versions.”

A primer from the United Nations Institute for Disarmament Research (UNIDIR) published in 2023 lays this out in detail. The authors of that paper argue that Internet fragmentation is increasingly something that influences cybersecurity, trade, national security, and civil liberties. Another study published not that long ago in SciencesPo suggests that fragmentation is shifting from inward-looking national control toward being used as a tool of power projection; i.e. countries not only fence their own access, but use fragmented rules or control of infrastructure to impose influence beyond their borders.

Evidence: How fragmentation is happening

Sounds like a conspiracy theory, doesn’t it? Here are some concrete examples and trends.

Divergent regulatory frameworks

- The European Union, China, and the U.S. are increasingly adopting very different regulatory models for digital platforms, data privacy, and online content. The “prudent regulation” approach in the EU (which tends toward pre-emptive checks and heavy regulation) contrasts with the more laissez-faire (or “permissionless”) philosophy in parts of the U.S. and other jurisdictions. I really like how that’s covered in the Fondation Robert Schuman’s paper, “Digital legislation: convergence or divergence of models? A comparative look at the European Union, China and the United States.“
- Countries around the world have passed or are passing online safety laws, content moderation mandates, or rules that give governments broad powers over what gets seen, what stays hidden, and what content is restricted. Check out the paper published in Tech Policy Press, “Amid Flurry of Online Safety Laws, the Global Online Safety Regulators Network is Growing,” for a lot more on that topic.
- Regulatory divergence shows up not only in content but in infrastructure: for example, laws about mandatory data localization, national gateways, and network sovereignty. These increase the cost and complexity of cross-border services. Few organizations know more about that than the Internet Society, which has an explainer entirely dedicated to Internet fragmentation.

While this divergence creates friction for global platforms, it also produces positive spillovers. The ‘Brussels Effect’ has pushed companies to adopt GDPR-level privacy protections worldwide rather than maintain separate compliance regimes, raising the baseline of consumer trust in digital services. At the same time, the OECD’s latest Economic Outlook stresses that avoiding excessive fragmentation will require countries to cooperate in making trade policy more transparent and predictable, while also diversifying supply chains and aligning regulatory standards on key production inputs.

Taken together, these trends suggest that even in a fragmented environment, stronger rules in one region can ripple outward, whether by shaping global business practices or by encouraging cooperation to build resilience. Of course, this can work both positively and negatively, but let’s focus on the positive for the moment. “Model the change you want to see in the world” is a really good philosophy.

Technical / infrastructural separation

- National shutdowns or partial shutdowns are still used by governments during conflict, elections, or periods of dissent. The Internet Society’s explainer catalogues many examples, but even better is their Pulse table that shows where there have been Internet shutdowns in various countries since 2018.
- Some countries are building or mandating their own national DNS, national gateways, or other chokepoints—either to control content, enforce digital sovereignty, or “protect” their citizens. These create friction with global addressing, with trust, and with how routing and redundancy work. More information on that is, again, in that Internet Society fragmentation explainer.

That said, fragmentation at the infrastructure level can also accelerate experimentation with alternatives. In regions that experience shutdowns or censorship, communities have adopted mesh networks and peer-to-peer tools as resilient stopgaps. Research from the Internet Society’s Open Standards Everywhere project, no longer a standalone project but still offering interesting observations, shows that these architectures, once fringe, are being refined for broader deployment, pushing the Internet to become more fault-tolerant.

Commercial & trade-driven fragmentation

- Platforms serving global audiences must adapt to local laws (e.g., privacy laws, content moderation laws), so they build variants. The result is that features, policies, and even user experience diverge by country. I’m not even going to try to link to a single source for that. It’s kind of obvious.
- Restrictive trade policies (export controls, sanctions) affect what hardware and software can move across borders. Fragmentation in what devices can be used, which cloud services, and so on often comes from supply-chain / trade policy rather than purely from regulation. The UNIDIR primer notes how fragmentation, when applied to cybersecurity or export controls, ripples through global supply chains.

Yet duplication of supply chains can also help build redundancy. CSIS reporting on semiconductor supply chains notes (see this one as an example) that efforts to diversify chip fabrication beyond Taiwan and Korea, while expensive, reduce systemic risks. Similarly, McKinsey’s “Redefining Success: A New Playbook for African Fintech Leaders” highlights how African fintechs are thriving by tailoring products to fragmented regulatory and infrastructural environments, turning local constraints into opportunities for growth in areas like cross-border payments, SME lending, and embedded finance. There’s a lot to study there in terms of what opportunity might look like.

I’d also like to point to the opportunities described in the AMRO article “Stronger Together: The Rising Relevance of Regional Economic Cooperation” which describes how ASEAN+3 member states are using frameworks like the Regional Comprehensive Economic Partnership (RCEP), Economic Partnership Agreements, and institutions such as the Chiang Mai Initiative to deepen trade, investment, financial ties, and regulatory cooperation. These are not just formal treaties but mechanisms for cross-border resilience, helping supply chains, capital flows, and finance networks absorb external shocks. This blog post is already crazy long, so I won’t continue, but there is definitely more to explore with how to meet this type of fragmentation with a more positive mindset.

Why does it matter?

Why should we care that the Internet is fragmenting? If there are all sorts of opportunities, do we even have to worry at all? Well, yes. As much as I’m looking for the opportunities to balance the breakages, we still have to keep in mind a variety of consequences, some immediate, some longer-term.

Loss of universality & increased friction

The Internet’s power comes from reach and interoperability: you could send an email or view a website in Boston and someone in Nairobi could see it without special treatment. But as more rules, filters, and walls are inserted, that becomes harder. Services may be blocked, slowed, or restricted. Different regulatory compliance regimes will force more localization of infrastructure and data. Users may need to use different tools depending on where they are. Work that used to scale globally becomes more expensive.

However, constraints often fuel creativity. The World Bank has documented how Africa’s fintech ecosystem thrived under patchy infrastructure, leapfrogging legacy systems with mobile-first solutions. India’s Aadhaar program is another case where local requirements drove innovation that now informs digital identity debates globally. Fragmentation can, paradoxically, widen the palette of local solutions while reducing the palette of global solutions.

Security, surveillance, and trust challenges

Fragmentation creates new attack surfaces and risk vectors. For example:

- If traffic must go through national gateways, those become chokepoints for surveillance, censorship, or abuse.
- If companies cannot use global infrastructure (CDNs, DNS, encryption tools) freely, fragmentation may force weaker substitutes or non-uniform security practices.
- Divergent laws about encryption or liability may reduce trust in cross-border services or impose large overheads.

The UNIDIR primer emphasizes these concerns.

Economic costs and innovation drag

Fragmentation means duplicate infrastructure: separate data centres, duplicated content moderation teams, local legal teams. That’s inefficient. Products and platforms may need multiple variants, reducing scale economies. Cross-border collaboration, which has been a source of innovation (in open source, research, startups), becomes more legally, technically, and culturally constrained.

Unequal access and power imbalances

Countries or regions with weaker regulatory capacity, limited infrastructure, or less technical expertise may be less able to negotiate or enforce their interests. They could be “locked out” of parts of the Internet, or forced to use inferior services. Big tech companies based in powerful jurisdictions may be able to shape global norms (via export, legal reach, or market power) in ways that reflect their values, often without much input from places with less power. This may further amplify inequalities.

What counters or moderating factors exist?

Fragmentation is neither uniform nor total. There are forces, capacities, and policies that push in the opposite direction, or at least slow things down.

- Standardization bodies / global protocols. The Internet Engineering Task Force (IETF), the W3C, ICANN, etc., continue to undergird a lot of the technical plumbing (DNS, HTTP, TCP/IP, SSL/TLS, etc.). These are not trivial to replace, though it seems like some regional standards organizations are trying.
- Commercial incentives for compatibility. Many platforms serving global markets prefer to maintain a common codebase, or to comply with the most restrictive regulation so it applies everywhere (bringing us back to the Brussels Effect). If a regulation (e.g., a privacy law) in one place is strong, firms may just adopt it globally rather than maintain separate versions.
- User demand and expectation. Users expect services to “just work” across borders: social media, video conferencing, cloud tools. If fragmentation hurts usability, there is political and popular pushback.
- Cross-border political/institutional cooperation. Trade agreements, multi-stakeholder governance efforts, and international bodies sometimes negotiate common frameworks or minimum standards (e.g., data flow provisions, privacy protections, cybersecurity norms).

These moderating factors mean that fragmentation is not an all-or-nothing state; it will be uneven, partial, and contested.

What we (you, we, society) can do to navigate & shape the outcome

Fragmentation is already happening; how we respond matters. Here are some ways to think about shaping the future so that it is not simply divided, but more resilient and fair.

- Advocate for interoperable baselines. Even as parts diverge, there can be minimum standards (on encryption, addressing, data portability, etc.) that maintain some baseline interoperability. This ensures users don’t fall off the map just because their country has different laws.
- Design for variation. Product and service designers need to think early about how their tools will work under different regulatory, infrastructural, and socio-political regimes. That means thinking about offline/online tradeoffs, degraded connectivity, local content, privacy expectations, and more.
- Invest in local capability. Regions with weaker infrastructure, less regulatory capacity, or a smaller technical workforce should invest (or receive investment from partners) in building up their tech ecosystems, including data centers, networking, local content, and developer education. This mitigates the risk of being passive recipients rather than active shapers.
- Cross-bloc cooperation & treaties. Trade agreements or regional alliances for digital policies could harmonize rules where possible (e.g., privacy, data flows, cybersecurity), reduce compliance burden, and keep doors open across regions.
- New infrastructural experiments. Thinking creatively: mesh networks, decentralized Internet architecture, peer-to-peer content distribution, alternative routing, redundancy in undersea cables, and so on. In the context of fragmentation, some of these may move from research curiosities to vital infrastructure.
- Policy awareness & public engagement. People often take the openness of the Internet for granted. Public debates and awareness of policy changes (online safety, surveillance, digital sovereignty) matter. A more informed citizenry can push for policies that preserve openness and resist overly restrictive fragmentation.
- Anchor in human rights and global goals. Fragmentation debates can’t just be about pipes and protocols. They must also reflect the fundamentals of an ethical society: protecting human rights, ensuring equitable access, and aligning with global commitments like the United Nations Sustainable Development Goals (SDGs) and the Global Digital Compact. These frameworks remind us that digital infrastructure isn’t an end in itself. It’s a means to advance dignity, inclusion, and sustainable development. Even as the Internet fragments, grounding decisions in these principles can help keep diverse systems interoperable not just technically, but socially.

Recalibration

The “global Internet” is fragmenting, if it ever really existed at all. That’s a statement I’m not comfortable with, but one I’m also not going to treat as the ultimate technical tragedy. Fragmentation brings friction, risks, and challenges, sure. It threatens universality, raises security concerns, and could amplify inequalities. But it also forces us to imagine new architectures, new modes of cooperation, and new ways to build more resilient and locally grounded technologies. It means innovation might look different: less about global scale, more about boundary-crossing craftsmanship, local resilience, and hybrid systems.

In the end, fragmentation isn’t simply an ending. It may be a recalibration. The question is: will we let it just fragment into chaos, or guide it into a future where multiple, overlapping digital worlds still connect, where people everywhere are participants and not just objects of regulation?

Question for you, the reader: If the Internet becomes more of a patchwork than a tapestry, what kind of bridges do you think are essential? What minimum interoperability, trust, and rights should be preserved across borders?

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

Hi everyone, and welcome back to the Digital Identity Digest. Today’s episode is called The End of the Global Internet.

This episode is longer than usual because there’s a lot to unpack. The global Internet, as we once imagined it, is changing rapidly. While it isn’t collapsing overnight, it is fragmenting. That fragmentation brings real risks — but also some surprising opportunities.

Throughout this month, I’ll be publishing slightly longer episodes, alongside detailed blog posts with links to research and source material. I encourage you to check those out as well.

What Fragmentation Really Means

[00:01:15] Many of us grew up hoping for a single, borderless Internet: a vast network of networks without arbitrary firewalls. I’ve always loved that model, perhaps because I’m a globalist at heart. But that’s not where we’re heading.

In recent years, laws, cultures, infrastructure, and politics have pulled the Internet in different directions. The result? An increasingly fragmented landscape.

Researchers describe three key dimensions of fragmentation:

1. Technical fragmentation – national firewalls, alternative DNS systems, and content filtering that alter the “plumbing” of the Internet.
2. Regulatory fragmentation – divergent laws on privacy, content, and data, such as the GDPR compared with lighter-touch U.S. approaches.
3. Commercial fragmentation – companies restricting services by geography, whether for compliance, cost, or strategy.

Together, these layers create friction in what once felt like a seamless system.

Evidence of Fragmentation in Practice

[00:04:18] Let’s look at how fragmentation is showing up.

- Regulatory divergence – The EU, China, and the U.S. are moving in very different directions. The EU emphasizes heavy regulation and precaution. The U.S. takes a lighter (but shifting) approach. China uses regulation to centralize control. Interestingly, strict laws often set global baselines: the Brussels Effect demonstrates how GDPR effectively raised global privacy standards, since it’s easier for companies to comply everywhere.
- Technical fragmentation – Governments are experimenting with independent DNS systems, national gateways, and even Internet shutdowns during protests or elections. On the flip side, this has fueled mesh networks and decentralized DNS, once fringe ideas that now serve as resilience tools.
- Commercial fragmentation – Supply chains and trade policy drive uneven access to hardware and cloud services. For example: semiconductor fabs are being built outside Taiwan and Korea; new data centers are emerging in Africa and Latin America; African fintech thrives precisely because local firms adapt to fragmented conditions.

McKinsey projects African fintech revenues will grow nearly 10% per year through 2028, showing how local innovation can thrive in fragmented markets.

Why Fragmentation Matters

[00:06:45] Fragmentation has profound consequences.

- Universality weakens – The original power of the Internet was its global reach. Fragmentation erodes that universality.
- Security and trust challenges – Choke points and divergent encryption weaken cross-border trust.
- Economic costs – Companies must duplicate infrastructure and compliance, slowing innovation.
- Inequality deepens – Weaker regions risk being left behind, forced to adopt systems imposed by stronger players.

Moderating Factors

[00:08:30] Fragmentation isn’t absolute. Several forces hold the Internet together:

- Standards bodies like the IETF and W3C keep core protocols aligned.
- Companies often adopt the strictest regimes globally, simplifying compliance.
- Users expect services to work everywhere — and complain when they don’t.
- Regional cooperation (e.g., EU, ASEAN, African Union) helps maintain partial cohesion.

These factors form the connective tissue that prevents a total collapse.

Possible Future Scenarios

[00:09:45] Looking ahead, I see four plausible scenarios:

1. Soft fragmentation – The Internet stays global, but friction rises. Platforms launch regional versions; compliance costs increase. Opportunity: stronger local ecosystems and regional innovation.
2. Regulatory blocks – Countries form digital provinces with internal harmony but divergence elsewhere. Opportunity: specialization (EU in privacy tech, Africa in mobile-first innovation, Asia in super apps).
3. Technical fragmentation – Shutdowns, divergent standards, and outages become common. Opportunity: mainstream adoption of decentralized and peer-to-peer networks.
4. Pure isolationism – Countries build proprietary platforms, national ID systems, and local chip fabs. Opportunity: preservation of local values, region-specific innovation.

What Can We Do?

[00:12:28] In the face of fragmentation, individuals, companies, and policymakers can take action:

- Advocate for interoperable baselines (encryption, addressing, data portability).
- Design for variation so systems degrade gracefully under different regimes.
- Invest in local capacity — infrastructure, skills, developer ecosystems.
- Encourage regional cooperation through treaties and data agreements.
- Experiment with alternative architectures like mesh networks and decentralized identity.
- Anchor change in human rights — align with UN SDGs, protect freedoms, and center people, not just states or corporations.

Closing Thoughts

[00:15:50] The global Internet as we knew it may be ending — but that isn’t necessarily a tragedy.

Yes, fragmentation creates friction, risks, and inequality. But it also sparks resilience, innovation, and adaptation. In Africa, fintech thrives under fragmented conditions. In Europe, strong privacy laws raise global standards. In Asia, regional trade frameworks offer cooperation despite divergence.

The real question isn’t whether fragmentation is coming — it’s already here. The question is:

- What kind of fragmented Internet do we want to build?
- Which bridges are worth preserving?
- Which minimum standards — technical, ethical, social — should always cross borders?

These questions shape not only the Internet’s future, but our own.

[00:18:45] Thank you for listening to the Digital Identity Digest. If you found this episode useful, please subscribe to the blog or podcast, share it with others, and connect with me on LinkedIn @hlflanagan.

Stay curious, stay engaged, and let’s keep these conversations going.

The post The End of the Global Internet appeared first on Spherical Cow Consulting.


Ontology

How Smart Accounts and Account Abstraction Fit Together


Since the dawn of Ethereum, interacting with blockchains has meant using Externally Owned Accounts (EOAs) - simple wallets controlled by a private key. While functional, EOAs expose serious limitations: lose your key, and you lose your funds. Want features like spending limits, session keys, or social recovery? You’re left with clunky, layered workarounds.

Enter Account Abstraction (AA) and Smart Accounts. Together, these innovations are transforming how users engage with Web3 by merging the flexibility of smart contracts with the usability of traditional wallets. Instead of thinking about wallets as rigid containers of keys, we can now imagine them as programmable, customizable gateways into the blockchain world.

This article explores how Smart Accounts and Account Abstraction fit together, referencing key Ethereum proposals (EIP-4337, EIP-3074, and EIP-7702), and why this combination is essential for building the next wave of user-friendly, secure, and innovative blockchain applications.

What is Account Abstraction?

Account Abstraction is the idea of treating all blockchain accounts as programmable entities. Instead of separating EOAs (controlled by private keys) and contract accounts (controlled by code), AA allows accounts themselves to act like smart contracts.

Key benefits of AA include:

- Gas abstraction: Pay transaction fees in tokens other than ETH.
- Programmable security: Add multisig, timelocks, or social recovery.
- Batched transactions: Execute multiple actions in one click.
- Session keys: Grant temporary permissions for games or dApps.
- Upgradability: Evolve wallet logic without replacing accounts.

With AA, wallets evolve from being passive key holders into active smart entities capable of executing logic on behalf of their users.

What are Smart Accounts?

If Account Abstraction is the theory, Smart Accounts are the practice. A Smart Account is simply a blockchain account that operates under the AA model.

Instead of relying on a single private key, a Smart Account:

- Runs customizable logic like a smart contract.
- Supports flexible authentication methods (biometrics, passkeys, hardware modules).
- Allows advanced features such as automatic payments, subscription models, or delegated access.
- Provides recoverability through trusted guardians or social recovery mechanisms.

In short, Smart Accounts are the user-facing manifestation of Account Abstraction. They bring abstract design principles into tangible experiences, making Web3 more accessible for everyday users.
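
To make the session-key and recovery ideas above concrete, here is a small TypeScript sketch of how a smart account might decide who can act on its behalf. The interfaces and rules are illustrative only; real smart accounts implement this logic on-chain in the account contract itself.

```typescript
// Conceptual sketch (not production contract code): a smart account that
// accepts either its owner key or a time-limited, scoped session key.
// All names and shapes here are illustrative.
interface SessionKey {
  publicKey: string;
  expiresAt: number;     // unix seconds
  allowedTarget: string; // e.g., a single game contract this key may call
}

interface SmartAccountState {
  ownerKey: string;
  sessionKeys: SessionKey[];
  guardians: string[];   // can rotate ownerKey via social recovery
}

function isAuthorized(
  account: SmartAccountState,
  signerKey: string,
  target: string,
  now: number
): boolean {
  // The owner key can authorize anything.
  if (signerKey === account.ownerKey) return true;
  // Session keys are scoped: they must be unexpired and limited to one target.
  return account.sessionKeys.some(
    (k) => k.publicKey === signerKey && k.expiresAt > now && k.allowedTarget === target
  );
}
```

A game dApp, for example, could be granted a key that expires in an hour and can only call that game’s contract, so a leaked session key never endangers the rest of the account.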

How They Fit Together

Think of Account Abstraction as the architectural blueprint and Smart Accounts as the actual buildings.

1. AA defines the rules: It sets the framework for programmable accounts. Proposals like EIP-4337 specify how transactions are validated and bundled without relying solely on EOAs.

2. Smart Accounts implement the rules: They apply those AA rules to create practical wallets. Through smart contracts, they support features like gasless transactions, account recovery, and key rotation.

Together, AA and Smart Accounts replace the outdated key-wallet model with a flexible, modular system where user experience comes first.

The Role of Key EIPs

Ethereum’s progress toward AA and Smart Accounts has been guided by several proposals:

- EIP-4337 (2021): Introduced the concept of a “UserOperation” and “bundlers.” This allows smart accounts to function without requiring changes at the consensus layer. It is the backbone of today’s AA-compatible wallets.
- EIP-3074: Enables EOAs to delegate control to contracts temporarily, bridging the gap between old wallets and smart accounts.
- EIP-7702 (2024): Builds on 3074 but provides a safer and more streamlined way for EOAs to transition into smart accounts. This is critical for onboarding existing users without forcing them to abandon their current wallets.

Together, these proposals ensure that Smart Accounts are not just theoretical: they’re backward-compatible, forward-looking, and ready for mainstream adoption.
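
To ground EIP-4337, here is a minimal TypeScript sketch of the UserOperation shape. The field names follow the EIP-4337 specification; the example values and comments are illustrative, and a real wallet SDK would fill these in and sign the resulting hash.

```typescript
// Minimal sketch of an ERC-4337 UserOperation (field names per the EIP).
type UserOperation = {
  sender: string;              // the smart account contract address
  nonce: bigint;               // anti-replay value managed via the EntryPoint
  initCode: string;            // factory + calldata when deploying a new account, else "0x"
  callData: string;            // the action(s) the account should execute
  callGasLimit: bigint;
  verificationGasLimit: bigint;
  preVerificationGas: bigint;
  maxFeePerGas: bigint;
  maxPriorityFeePerGas: bigint;
  paymasterAndData: string;    // "0x" unless a paymaster sponsors the gas
  signature: string;           // checked by the account's own validation logic
};

// A bundler collects UserOperations, simulates their validation, and submits
// them to the EntryPoint contract in a single transaction. Values below are
// hypothetical placeholders, not real addresses or tuned gas numbers.
const op: UserOperation = {
  sender: "0x1234...abcd",
  nonce: 0n,
  initCode: "0x",
  callData: "0x",              // e.g., an encoded ERC-20 transfer
  callGasLimit: 100_000n,
  verificationGasLimit: 150_000n,
  preVerificationGas: 21_000n,
  maxFeePerGas: 30_000_000_000n,
  maxPriorityFeePerGas: 1_000_000_000n,
  paymasterAndData: "0x",
  signature: "0x",             // filled in after signing the userOpHash
};
```

Because validation logic lives in the account rather than the protocol, the signature field can be anything the account understands: an ECDSA signature, a passkey assertion, or a multisig bundle.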

Why This Matters for Users

For users, the combination of AA and Smart Accounts translates into real-world improvements:

- Safety: Lose your key? No problem: recover your wallet using guardians or multi-sig setups.
- Simplicity: Pay fees with stablecoins, batch multiple dApp actions into one transaction, or play a blockchain game without constant wallet prompts.
- Flexibility: Switch security models as your needs change (e.g., from a simple wallet as a beginner to a multi-sig or hardware-protected wallet as your assets grow).
- Innovation: Developers can build richer applications: subscription-based dApps, automated DeFi strategies, or Web3-native identity systems.

This shifts the user experience from fear of making mistakes to freedom to explore.

A Fresh Perspective: Smart Accounts as Digital Personas

One way to think creatively about Smart Accounts is to view them not just as wallets, but as digital personas.

Just as you might have different identities in real life (personal, professional, or gaming), Smart Accounts allow you to manage multiple digital personas:

- A DeFi persona with automated trading strategies.
- A gaming persona with session keys and gasless interactions.
- A professional persona tied to your DAO contributions.

Each persona can run its own logic while remaining linked to your overall identity. This flexibility makes Web3 personalized and intuitive, much like the evolution from simple feature phones to today’s smartphones.

Practical Takeaways for the Community

- Developers: Start experimenting with Smart Account SDKs built on EIP-4337. Building dApps with native AA support will set you apart in the next wave of adoption.
- Users: Explore AA wallets like Safe, ZeroDev, or Soul Wallet. Get familiar with recovery options and gas abstraction to see the difference firsthand.
- Communities: Advocate for dApps that integrate Smart Accounts, since these models reduce onboarding friction for newcomers.

By engaging now, the community can shape how AA and Smart Accounts evolve, ensuring they remain inclusive, secure, and user-first.

Conclusion

Smart Accounts and Account Abstraction are not isolated innovations; they are two halves of the same revolution. Account Abstraction lays the foundation, while Smart Accounts bring it to life. Together, they unlock a Web3 experience that is safer, simpler, and infinitely more flexible than today’s wallet paradigm.

Just as the smartphone redefined what we expect from communication devices, Smart Accounts will redefine what we expect from blockchain wallets. They are not just tools to hold assets; they are programmable, adaptable, and deeply human-centric gateways into the decentralized world.

The future of Web3 isn’t just about protocols or assets; it’s about empowering people with smarter, safer, and more intuitive digital identities. And that future begins with Smart Accounts powered by Account Abstraction.

How Smart Accounts and Account Abstraction Fit Together was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

Design for Chaos: Fastly’s Principles of Fault Isolation and Graceful Degradation

Learn how Fastly builds a resilient CDN through fault isolation & graceful degradation. Discover our principles for minimizing disruption & ensuring continuous service.

Monday, 06. October 2025

Ockam

Turn Your Users Into Your Distribution Engine


Engineer Word-of-Mouth Growth Without Paid Ads or Big Budgets

Continue reading on Medium »

Sunday, 05. October 2025

Ockam

Network-Driven Distribution: How to Get Others to Spread Your Work


How to get others to distribute your work (even with $0 budget)

Continue reading on Medium »

Friday, 03. October 2025

1Kosmos BlockID

Customer Identity Verification: Overview & How to Do It Right

Key Lessons

Customer identity verification is critical for fraud prevention, compliance, and building trust in digital business.

Businesses can use layered methods (document verification, biometrics, MFA, and risk scoring) to ensure security without sacrificing user experience.

The biggest challenges include synthetic identity fraud, cross-border verification, and balancing compliance with customer convenience.

Adopting best practices like multi-layered verification, advanced AI, and risk-based frameworks ensures security while streamlining onboarding.

What Is Customer Identity Verification?

Customer identity verification confirms that customers are who they claim to be, using digital tools and data checks. It involves validating personal details and credentials against official records, documents, or biometric identifiers.

The purpose is simple: stop fraudsters at the gate while giving legitimate customers a seamless, trusted onboarding experience. Verification is no longer optional in a world where synthetic identities can be spun up with a stolen Social Security number and a fake address.

Modern verification systems use artificial intelligence, machine learning, and biometrics to increase accuracy and speed dramatically. Instead of forcing customers to wait days while documents are manually reviewed, businesses can now verify identities in minutes—or even seconds—with confidence levels above 99%.

What Are The Different Types Of Customer Identity Verification?

The main types are document-based, biometric, knowledge-based, database verification, and multi-factor authentication (MFA).

- Document-based verification checks the authenticity of passports, driver’s licenses, and other government IDs. Modern systems analyze holograms, fonts, and machine-readable zones (MRZs) to detect forgery attempts.
- Biometric verification leverages fingerprints, facial recognition, or iris scans. When paired with liveness detection, biometrics are far harder to spoof than traditional credentials.
- Knowledge-based authentication (KBA) relies on security questions, but with social media oversharing and widespread data breaches, attackers can easily guess or steal these answers. This method is rapidly losing relevance.
- Database verification cross-checks a customer’s details against government, financial, and sanctions databases to validate legitimacy.
- MFA strengthens defenses by requiring two or more identity factors: something you know (password), something you have (token), and something you are (biometric).

Each method has strengths and weaknesses, but the most secure strategies don’t pick one; they combine them into a layered, adaptive verification framework.

How Does Customer Identity Verification Work?

Verification breaks down into four stages: data collection, document assessment, identity validation, and risk assessment.

1. Everything starts with data collection, where customers provide personal details, government-issued IDs, biometrics, and contact information.
2. Once collected, the data moves to document assessment, where AI tools check submitted IDs for authenticity and signs of tampering. This step catches expired, altered, or synthetic documents before they go any further.
3. Next is identity validation, where the information gets cross-referenced against trusted government and financial databases. Biometrics are compared to ID photos, while watchlist screenings flag individuals who could pose regulatory or fraud risks.
4. Last comes risk assessment, which generates a trust score based on behavioral anomalies, device intelligence, geolocation data, and known fraud indicators.

What once stretched across days now happens in seconds, allowing organizations to seamlessly onboard good customers while quietly blocking bad actors.
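
As a rough illustration (not 1Kosmos’s actual pipeline), here is a TypeScript sketch of how the four stages might feed a final decision. The signals, weights, and thresholds are hypothetical; production systems use trained models rather than fixed weights.

```typescript
// Illustrative sketch of the four-stage flow described above.
interface VerificationInput {
  documentAuthentic: boolean;   // output of document assessment
  biometricMatchScore: number;  // 0..1, selfie vs. ID photo
  databaseHit: boolean;         // details confirmed in trusted records
  onWatchlist: boolean;         // regulatory or fraud watchlist screening
  deviceRiskScore: number;      // 0..1, device/geolocation anomalies
}

function assessRisk(input: VerificationInput): "approve" | "review" | "reject" {
  // Hard stops: a forged document or a watchlist hit ends the flow.
  if (!input.documentAuthentic || input.onWatchlist) return "reject";
  // Blend the remaining signals into a trust score (higher is safer).
  // Weights here are invented for illustration only.
  const trust =
    0.5 * input.biometricMatchScore +
    0.3 * (input.databaseHit ? 1 : 0) +
    0.2 * (1 - input.deviceRiskScore);
  if (trust >= 0.8) return "approve";
  return trust >= 0.5 ? "review" : "reject";
}
```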

What Are The Challenges To Customer Identity Verification?

Challenges include synthetic fraud, cross-border complexity, balancing user experience with security, advanced attack vectors, and compliance.

- Synthetic identity fraud is the fastest-growing financial crime, estimated to reach $23 billion annually by 2030. Attackers stitch together real and fake data to create new “people” that slip past legacy checks.
- Cross-border verification struggles with inconsistent ID standards, languages, and regulatory frameworks. A passport in Germany won’t have the same features as a driver’s license in Mexico.
- User experience vs. security is a constant balancing act. Too much friction leads to legitimate users abandoning onboarding, while too little lets attackers walk right in.
- Advanced attacks like deepfakes, AI-generated voice phishing, and synthetic biometrics make fraud detection harder than ever.
- Compliance obligations vary dramatically across sectors. With the General Data Protection Regulation (GDPR) in Europe, Anti-Money Laundering (AML) rules for banks, and the Health Insurance Portability and Accountability Act (HIPAA) for healthcare, standards and regulations run the gamut. Businesses must navigate a minefield of global requirements.

The reality is that fraudsters innovate faster than regulators. That means businesses need adaptive, technology-driven defenses that evolve continuously.

What Are The Best Practices To Customer Identity Verification?

The best practices boil down to multi-layered checks, AI-driven analysis, risk-based frameworks, data security, and compliance alignment.

- Multi-layered verification: Mix documents, biometrics, and databases for solid defense in depth.
- Advanced AI: Use machine learning models to catch spoofing, deepfakes, and behavioral red flags in real time.
- Risk-based approaches: Match verification intensity to transaction risk, with tougher checks for wire transfers and a lighter touch for low-value activity.
- Data protection: Encrypt sensitive data, store it securely, and run regular audits to stay compliant. Or, with blockchain solutions like 1Kosmos, skip centralized data storage entirely and eliminate that major attack vector.
- Regulatory alignment: Keep up with changing KYC/AML requirements and privacy laws around the world.

Get these right, and you’ll block fraud while making onboarding so quick and smooth that customers actually choose businesses with stronger verification over the competition.

Why Is Customer Identity Verification Important To Businesses?

It prevents fraud, ensures compliance, builds trust, and drives operational efficiency. By verifying users before granting access, businesses can stop account takeovers, impersonation scams, and synthetic identities. But the benefits go beyond just security. Regulatory compliance, from KYC and AML requirements in financial services to HIPAA rules in healthcare, makes verification a must-have for operations.

In an environment where breaches dominate headlines, demonstrating rigorous verification builds confidence with partners and customers alike.

How Should My Business Verify Customer Identities Step By Step?

Businesses should follow a structured six-step implementation framework.

1. Assess requirements: Figure out your fraud risks, compliance mandates, and customer demographics.
2. Choose methods: Based on your specific risk profile, select verification tools such as document verification or biometrics.
3. Implement technology: Set up APIs, document scanning, and biometric integrations that scale without disrupting your existing systems.
4. Design journeys: Create user-friendly flows that reduce friction without compromising security.
5. Train staff: Make sure employees can escalate suspicious cases, conduct manual reviews, and help customers when needed.
6. Monitor and optimize: Continuously adjust based on fraud detection outcomes, customer drop-off rates, and regulatory changes.

Following this framework ensures verification is both secure and customer-centric.

What Are The Common Customer Identity Verification Methods?

Standard methods include document scanning, facial recognition, fingerprint scans, SMS OTPs, database checks, and MFA.

Some legacy methods are fading. KBA and SMS one-time passcodes, for example, are easily compromised. Attackers can scrape answers from social media or intercept text messages.

By contrast, modern approaches like AI-powered biometrics and blockchain-backed credentials are gaining traction. They’re faster, harder to spoof, and more transparent for users. Forward-looking businesses are already adopting reusable digital identity wallets, allowing customers to authenticate seamlessly across multiple services without re-verifying.

Trust 1Kosmos Verify for Identity Verification

Passwords and outdated MFA create friction for customers, leaving the door open to fraud, account takeovers, and synthetic identities. These obsolete methods slow onboarding, frustrate legitimate users, and fail to deliver the trust today’s digital economy demands.

1Kosmos Customer solves this by replacing weak credentials with a robust, privacy-first digital identity wallet backed by deterministic identity proofing and public-private key cryptography. In just one quick, customizable registration, legitimate customers are verified with 99%+ accuracy and given secure, frictionless access to services, while fraudsters are stopped at the first attempt. From instant KYC compliance to zero-knowledge proofs that protect sensitive data, the result is a seamless authentication experience that customers love and businesses can rely on.

Ready to eliminate fraud, streamline onboarding, and delight your customers? Discover how 1Kosmos Customer can transform your digital identity strategy today.

The post Customer Identity Verification: Overview & How to Do It Right appeared first on 1Kosmos.


Recognito Vision

AI Face Recognition Explained with Benefits and Challenges


Artificial Intelligence is no longer science fiction. From unlocking your phone to passing through airport security, AI face recognition has become part of daily life. It is powerful, practical, and sometimes a little controversial. But how does it actually work, and where is it headed? Let’s break it down in simple terms.


What is AI Face Recognition?

At its core, AI face recognition is a technology that identifies or verifies a person using their facial features. Think of it as a digital detective. It looks at your face the same way you look at a fingerprint, comparing unique details like the distance between your eyes or the curve of your jaw.

This isn’t just about matching a selfie to your phone. The technology is also applied in banking apps, airports, healthcare, and even retail stores. It is driven by facial AI models trained on massive datasets, allowing systems to quickly learn the differences and similarities between millions of faces.


How AI Face Recognition Works

The process might sound complex, but let’s simplify it. The system works in three big steps:

Face Detection AI
The camera identifies that a human face is present. It locates key landmarks such as eyes, nose, and mouth.

Face Encoding
The software converts the face into a unique numerical code. This code is like a fingerprint for your face.

Face Match AI
The system compares this code with stored data to verify identity or find a match.

Step – Action – Real-Life Example
Detection – Identifies a face – Phone camera sees your face
Encoding – Converts to unique code – Creates a “faceprint”
Matching – Compares with database – Unlocks your device

These steps are powered by artificial intelligence face recognition algorithms that become more accurate over time.
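
To picture the encoding and matching steps, here is a toy TypeScript sketch: a face is reduced to a numeric embedding, and two embeddings are compared with cosine similarity. Real systems generate embeddings with deep networks, and each vendor tunes its own match threshold, so both the vectors and the threshold below are hypothetical.

```typescript
// Compare two face embeddings with cosine similarity (1.0 = identical direction).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Decide whether a probe face matches an enrolled "faceprint".
// The 0.6 threshold is illustrative; vendors calibrate against false match rates.
function isSamePerson(probe: number[], enrolled: number[], threshold = 0.6): boolean {
  return cosineSimilarity(probe, enrolled) >= threshold;
}
```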


Accuracy and Global Benchmarks

Not all systems are created equal. Some are lightning fast with near-perfect accuracy, while others struggle in low light or with diverse facial features. The NIST Face Recognition Vendor Test (FRVT) has become the gold standard for measuring how well different systems perform.

Visit NIST FRVT for performance data.

Explore detailed evaluation results on FRVT 1:1 tests.

These benchmarks give businesses and governments confidence before deploying large-scale projects.


Everyday Uses of Facial AI

You may not notice it, but facial AI is everywhere. Here are some real-world applications:

Smartphones: Unlocking devices without passwords.

Airports: Quicker boarding with automated gates.

Healthcare: Patient verification for secure records.

E-commerce: AI face search for trying products virtually.

Banking: Identity checks for fraud prevention.

Fun fact: Some retailers even use AI facial systems to analyze customer demographics and improve shopping experiences.

Privacy Concerns and Regulations

With great power comes great responsibility. While the technology is convenient, it also raises concerns about surveillance and misuse. Governments are stepping in with data protection laws like the GDPR to ensure individuals have control over their biometric data.

Companies using AI face recognition must follow strict compliance rules such as:

Informing users how their data will be used.

Allowing opt-outs where possible.

Storing encrypted biometric data securely.

Failure to follow these rules can lead to massive fines and public backlash.


Challenges Facing Face Detection AI

Even with rapid progress, the technology isn’t flawless. Common challenges include:

Bias in datasets: Some systems perform better on certain skin tones.

Spoofing attempts: Photos or videos tricking the system.

Environmental issues: Poor lighting or extreme angles can reduce accuracy.

To tackle spoofing, researchers are exploring liveness detection techniques, making sure the system knows the difference between a real human face and a photo.

The Future of AI and Face Recognition

Looking ahead, experts believe AI face recognition will only get smarter. Here are a few trends shaping the future:

Edge computing: Processing done on local devices for speed and privacy.

Cross-industry adoption: From gaming to education, new uses are emerging.

Open-source innovation: Platforms like Recognito GitHub encourage collaboration and transparency.

As systems improve, the balance between convenience and privacy will continue to dominate the conversation.


Final Thoughts

AI face recognition is changing the way the world verifies identity. It simplifies daily tasks, strengthens security, and opens doors to new possibilities. Yet, it also comes with challenges like privacy risks and the need for unbiased data. With organizations such as NIST setting global benchmarks and strict regulations like GDPR shaping policy, the future looks promising but carefully monitored.

And as innovation keeps moving forward, one name that continues to contribute in this space is Recognito.


Frequently Asked Questions


1. What is AI face recognition used for?

AI face recognition is used for unlocking smartphones, airport security checks, banking identity verification, and even retail experiences like virtual try-ons.

2. How accurate is face detection AI?

Accuracy depends on the system. Some advanced tools tested by NIST FRVT report accuracy rates above 99 percent, especially in controlled environments.

3. Can AI face search find someone online?

AI face search can match faces within specific databases or platforms, but it cannot scan the entire internet. Accuracy depends on the size and quality of the database.

4. Is AI facial recognition safe to use?

Yes, when regulated properly. Systems that follow privacy rules like GDPR and use encryption keep user data protected.

5. What is the difference between face match AI and face detection AI?

Face detection AI only spots if a face is present. Face match AI goes further by verifying if the detected face matches an existing one in the database.


uqudo

How AI is Enhancing Sanctions Screening and Adverse Media Monitoring

The post How AI is Enhancing Sanctions Screening and Adverse Media Monitoring appeared first on uqudo.

Thursday, 02. October 2025

Holochain

Finding Our Edge: A Strategic Update

Blog

I want to share the Holochain Foundation’s evolving strategic approach to our subsidiary organizations, Holo, and Unyt.

Strategic work always involves paying attention to the match between your efforts and where the world is ready to receive them. Since our inception there has been a small group of supporters who understood the potential and need for the kind of deep p2p infrastructure we are building, which allows for un-intermediated direct interactions and transactions of all kinds. But at this moment we are seeing a new convergence.

As Holochain is maturing significantly, the mainstream world is also maturing into understanding the need for p2p networks and processes. As my colleague Madelynn Martiniere says: “we are meeting the moment and the moment is meeting us.”

And there’s a key domain in which this is happening: the domain of value transfer.  

The Unyt Opportunity

As you know, the foundation created a subsidiary, Unyt, to focus on building HoloFuel, the accounting system for Holo to use for its Hosting platform. But it turns out that the tooling Unyt built has a far broader application than we had initially realized. This is part of the convergence, and also a huge opportunity.

Unyt’s tooling turns out to be what people are calling “payment rails”: generalized tooling for value tracking, and because it’s built on Holochain, it’s already fully p2p. There is a huge opportunity for this technology to bring the deep qualitative value that p2p provides (increased transparency, agency, reduced cost, and privacy) and to do so at huge volumes: when talking about digital payments and transactions, you count in the trillions.

The implications are huge, and they need and deserve the focus of the Foundation and our resources so we can fully develop the opportunity ahead of us.

Interactions with Integrity: Powered by Holochain

Our original mission was to provide the underlying cryptographic fabric to make decentralized computing both easy and real - and ultimately, at a scale that could have a global impact.

That mission remains intact. The evolution we’re sharing today is not only directly connected to our original strategy, and a logical extension of it, but one that we believe will, over time, substantially increase the scale of and opportunities for everyone within the Holochain ecosystem.

When we introduced the idea of Holochain and Holo to the world in December of 2017, our goal was to provide a technology infrastructure that allowed people to own their own data, control their identities, choose how they connect themselves and their applications to the online world, and intrinsic to all of the above, interact and transact directly with each other as producers and consumers of value.

The foundation of the Holochain ecosystem has thus always required establishing a payment system where every transaction is an interaction with integrity: value is supported by productive capacity, validated on local chains (vs. global ledgers) by a unit of accounting (in our case, HoloFuel), and grounded by a real-world service with practical value.

The Holochain Foundation entity charged with developing and delivering the technology infrastructure for this payment system is Unyt Accounting. 

For almost a year now, the team at Unyt has been quietly working hard to develop the payment rails software that will permit users to build and deploy unique currencies (including HoloFuel), allow those currencies to circulate and interact, and ensure the integrity of every transaction. As it turns out, we got more than we bargained for, in the best possible way.

Meaning: in Unyt, we have software that not only enables HoloFuel but also gives us a clear way to link into both the blockchain and crypto world and the non-crypto world. As Holochain matures, with the application of Unyt technology we see a major opportunity in the peer-to-peer payments space, and a chance to lead the non-financial transaction space.

These are, objectively, huge markets, as Unyt products and tools are not only aimed squarely at solving real-world crypto accounting and payment challenges, but will combine to create the infrastructure needed to launch HoloFuel, and additionally address multiple real-world use cases for anyone interested in high-integrity, decentralized, digital interactions.

Given Unyt’s progress, we arrived at a point where it became clear to everyone on our leadership team that it was time to make an important strategic decision about where to best devote our focus, time, and resources. 

Strategic Decisions and Our Path Forward

Here’s where we landed:

When we reorganized Holo Ltd. last year, it was because we wanted to spur growth, and felt having a focus on a commercial application could expand the number of end users. But, it also put us into competition with some of the largest and best-capitalized tech companies on the planet. 

We haven’t gotten enough traction yet for this to be our sole strategy. As part of our ongoing evaluation over the last months, the Holo dev team pursued an exploration of a very different approach - both technical and structural - to deploying Holochain always-on nodes.

Holo is calling it Edge Node, an open-source, OCI-compliant container image that can be run on any device, physical or virtual, to serve as an always-on node for any hApp.

Today, Edge Node is available on GitHub for technically savvy folks to use. You can run the Docker container image or opt to install via the ISO onto HoloPorts or any other hardware.

What’s different about this experiment is that it appeals to a much wider audience: those familiar with running Docker containers, rather than the smaller audience who know Nix. And we’re releasing it now, as open source, actively seeking immediate feedback from the community on how this might evolve and contribute to Holo’s goals.

Second, it is equally clear we need to accelerate the timeline for Unyt. Unyt’s software underpins the accounting infrastructure necessary to create and launch HoloFuel, and subsequently allow owners of HOT to swap into it. More broadly, the multiple types of connectivity Unyt can foster have enormous potential to influence the size, growth, and overall value of Holochain - it is the substrate of peer-to-peer local currencies, and the foundation for future DePIN alliances. 

This acceleration is already under way - in fact, Unyt has released its first community currency app, Circulo, which is meant for real-world use but also acts as proof-of-concept for the broader Unyt ecosystem.

Third, and finally, the Holochain Foundation will continue to focus on the stability and resilience of the Holochain codebase, prioritize key technical features required for the currency swap execution, and remain at the center of all our entities to ensure cohesion and coordination.

Leadership Transition

As part of the next stage of Holo’s evolution, I want to share an important leadership update.

Mary Camacho, who has served as Executive Director of Holo since 2018, will be stepping down from that role, and I will be stepping in. Mary will continue to support Holo during this transition, particularly in guiding financial and strategic planning. We are deeply grateful for her years of leadership, steady guidance, and dedication to Holo’s vision.

At the same time, we also thank Alastair Ong, who has served as a Director of Holo, for his contributions on the board. We wish him the very best in his next endeavors.

These transitions mark a natural shift in leadership that allows Holo to move forward with renewed focus, alongside ongoing collaboration with Unyt and the wider Holochain ecosystem.

Looking Ahead

From the outset, we knew we were undertaking an extraordinary challenge. In conceiving of and developing Holochain, we set out to compete with some of the largest, best-resourced, and most powerful companies in the world. No part of what we have done, or intend to do, has been easy. 

In many ways Holochain has always been a future-looking technology that users had difficulty fully appreciating and adopting at scale. Now, the world seems to have caught up to us, and is interested in implementing peer-to-peer networks and processes away from centralized structures. 

When we formed Unyt to build the software infrastructure to permit the creation of, and accounting for, HoloFuel, we also caught up to the world: a major opportunity emerged (the volume of digital payments and transactions last year alone is measurable in the trillions).

We’ve spent a long time working to deliver on our commitments to our community, and there is much still to do.

As challenging as it is not to have crossed the finish line yet, it’s exciting to see it appearing on the horizon. We continue to experiment with how to best expand the potential for Holo hosting. And with Unyt, what we’re proposing to do here - if we are successful - is significantly grow the scale, potential, optionality, and value of every aspect of the Holochain ecosystem. 

For those interested, please take the time to watch our most recent livestream, where we talk about this evolution and the opportunities it represents for all of us. 

We have a lot to look forward to, and we look forward to continuing to work closely with our most valuable, and reliable, resource: you, the members of the Holochain community.

Wednesday, 01. October 2025

liminal (was OWI)

Third-Party Fraud: The Hidden Threat to Business Continuity


Last week marked our sixth Demo Day, this one focused on Fighting Third-Party Fraud. Ten vendors stepped up to show how their solutions tackle account takeover (ATO), business email compromise (BEC), and synthetic identity fraud. Each had 15 minutes to prove their case, followed by a live Q&A with an audience of fraud, risk, and security leaders.

Across the sessions, a consistent theme emerged: the biggest shift in the fraud prevention market isn’t in the tactics fraudsters use, but how enterprises are buying solutions. Detection is expected; what matters now is whether a tool can keep the business running without stalling growth or turning away good customers. Buyers want assurance that fraud prevention supports stability by keeping customers moving, revenue intact, and trust unbroken when fraud inevitably spikes.

What is third-party fraud?

For readers outside the space, third-party fraud happens when criminals exploit someone else’s identity to gain access. Unlike first-party fraud, where the individual misrepresents themselves, third-party fraud relies on stolen or fabricated credentials to impersonate a trusted user.

Classic examples include:

- Account takeover (ATO): hijacking legitimate accounts, often through phishing or stolen credentials
- Business email compromise (BEC): impersonating executives or vendors to redirect payments
- Synthetic identity fraud: blending real and fake data to create convincing personas

In 2024, consumers reported losing $12.5 billion to fraud, a 25% jump year-over-year and the highest on record. Account takeover attacks alone rose nearly 67% in the past two years as fraudsters leaned on phishing, social engineering, and increasingly AI-driven methods.

As Miguel Navarro, Head of Applied Emerging Technologies at KeyBank, put it: “Think about deepfakes like carbon monoxide — you may think you can detect it, but honestly, it’s untraceable without tools.” That risk is no longer theoretical; it’s already showing up in contact centers and HR pipelines.

Walking the friction tightrope

Every fraud solution has to walk a tightrope: protect the business without slowing customers down. At this Demo Day, that balance dominated the Q&A, with audience questions circling three recurring pain points: What happens when onboarding drags? How are false positives handled? Where do manual reviews fit?

Miguel also added: “…a tool might be a thousand times more effective, but if it’s too complex for teams to adopt, it’s effectively useless.”

Providers responded with different approaches. Several leaned on behavioral and device-based analytics to make authentication seamless, layering signals like keystroke patterns and device intelligence so genuine users pass in the background. Others showed risk-based orchestration, combining machine learning models and workflows so only high-risk activity triggers extra checks.

Protecting customers from themselves

One theme that stood out was how solutions are evolving to address social engineering. As Mzu Rusi, VP of Product Development at Entersekt, explained: “It’s not enough to protect customers from outsiders — sometimes we have to protect them from themselves when they’re being socially engineered to approve fraud.”

That means fraud platforms are no longer judged only on blocking malicious logins. They’re also expected to intervene in context, analyzing signals like whether the user is on a call while approving a transfer, or whether a new recipient account shows signs of mule activity.
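
As a rough sketch of the risk-based orchestration pattern the vendors described, the following TypeScript shows how a platform might blend device, behavioral, and social-engineering signals into a single allow/step-up/block decision. The signals, weights, and thresholds are illustrative, not any vendor’s actual model.

```typescript
// Toy risk-based orchestration: low-risk sessions pass silently; only risky
// activity triggers step-up checks or a block.
interface SessionSignals {
  deviceKnown: boolean;
  behaviorScore: number;  // 0..1 similarity to the user's typing/swiping "digital DNA"
  newPayee: boolean;      // transfer to a previously unseen recipient
  onActiveCall: boolean;  // possible social-engineering indicator
}

type Action = "allow" | "stepUp" | "block";

function orchestrate(s: SessionSignals): Action {
  let risk = 0;
  if (!s.deviceKnown) risk += 0.3;
  risk += (1 - s.behaviorScore) * 0.4;    // unfamiliar behavior raises risk
  if (s.newPayee) risk += 0.2;
  if (s.onActiveCall) risk += 0.3;        // user may be coached into approving fraud

  if (risk >= 0.7) return "block";
  return risk >= 0.4 ? "stepUp" : "allow";
}
```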

Human touch as a deterrent

Technology was the backbone of every demo, but Proof emphasized how human interaction remains a powerful fraud defense. Lauren Furey, Principal Product Manager, shared how stepping up to a live identity verification can shut down takeover attempts while preserving trust: “The deterrence of getting a fraudster in front of a human with these tools is enormous. Strong proofing doesn’t have to feel heavy, and customers leave reassured rather than abandoned.”

This balance — minimal friction for real customers, targeted intervention for fraudsters — ran through the day.

From fraud loss to balance sheet risk

Fraud was reframed as a balance sheet problem, not just a technology one. As Sunil Madhu, CEO & Founder of Instnt, put it: “Fraud is inevitable. Fraud loss is not. For the first time, businesses can transfer that risk off their balance sheet through fraud loss insurance.”

That comment landed because it spoke to CFO and board-level concerns. Fraud is no longer just an operational hit; it’s a financial exposure that can be shifted, managed, and priced. But shifting fraud into financial terms doesn’t reduce the pressure on prevention teams — it only raises the bar for the technology that keeps fraud within acceptable limits.

How detection is evolving

On stage, several demos highlighted identity and device scoring as the new baseline, layering biometrics, transaction history, and tokenization to judge risk in milliseconds. Others pushed detection even earlier in the journey, using pre-submit screening to catch bad actors before they hit submit.

Machine learning also played a central role in the demos. Several providers showed how adaptive models can cut down false positives while continuously improving through feedback loops. Phil Gordon, Head of Solution Consulting at Callsign, described it as creating a kind of “digital DNA”: “Every customer develops a digital DNA — how they type, swipe, or move through a session. That lets us tell genuine users apart from bots, malware, or account takeover attempts in milliseconds.”

That theme carried into the fight against synthetic identities. Alex Tonello, SVP Global Partnerships at Trustfull, explained how fraudsters engineer personas to slip through traditional checks: “Synthetic fraudsters build identities with new emails, new phone numbers, no history. By checking hundreds of open-source signals at scale, we see right through that façade.”

Others extended the conversation to fraud at the network level. Artem Popov, Solutions Engineer at Sumsub, noted: “Fraudsters reuse documents, devices, and identities across hundreds of attempts. By linking those together, you expose entire networks — not just single bad actors.”

The boardroom shift

Fraud used to be a line item in operations, managed quietly by fraud prevention teams and written off as the cost of doing business. That’s no longer the case. The scale of losses, reputational damage, and operational disruption means fraud has moved up the agenda and onto boardrooms.

Executives now face a harder challenge: choosing tools that don’t just stop fraud, but that protect business continuity. They want proof that investments in prevention will keep revenue flowing when attacks spike, not just reduce fraud losses on a spreadsheet. Boards are asking whether controls are strong enough to protect customer trust, whether onboarding processes can scale without breaking, and whether the business can keep moving if a wave of account takeovers hits overnight.

They are right to pay attention. Fraud and continuity now rank among the top five enterprise risks. Technology shifts like Apple and Google restricting access to device data are making established defenses less reliable, reframing fraud not only as a security issue but as a continuity problem.

Watch the Recording

Did you miss our Third-Party Fraud Demo Day? You can still catch the full replay of vendor demos and expert insights:
Watch the Third-Party Fraud Demo Day recording here

Key Takeaways

Liminal’s sixth Demo Day spotlighted 10 vendors tackling third-party fraud.
Global fraud losses are nearing $1 trillion annually, with ATO alone costing banks $6,000–$13,000 per incident.
Audience Q&A revealed that the hardest problems are manual reviews, onboarding delays, and false positives.
Leading vendors balance speed, scale, and user experience, reducing both fraud losses and abandonment.
Fraud prevention has shifted from a back-office function to a board-level resilience strategy.

The post Third-Party Fraud: The Hidden Threat to Business Continuity appeared first on Liminal.co.


HYPR

Announcing the HYPR Help Desk Application: Turn Your Biggest Risk into Your Strongest Defense

The call comes in at 4:55 PM on a Friday. It’s the CFO, and she’s frantic. She’s locked out of her account, needs to approve payroll, and her flight is boarding in ten minutes. She can’t remember the name of her first pet, and the code sent to her phone isn’t working. The pressure is immense. What does your help desk agent do? Do they bypass security to help the executive, or do they hold the line, potentially disrupting a critical business function?

This isn’t a hypothetical scenario; it's a daily, high-stakes gamble for support teams everywhere. And it’s a gamble that attackers are counting on. They know your help desk is staffed by humans who are measured on their ability to resolve problems quickly. They exploit this pressure, turning your most helpful employees into unwitting accomplices in major security breaches. It's time to stop gambling.

Why Is Your Help Desk a Prime Target for Social Engineering?

The modern IT help desk is the enterprise's nerve center. It’s also its most vulnerable entry point. According to industry research, over 40% of all help desk tickets are for password resets and account lockouts (Gartner), each costing up to $70 to resolve (Forrester). This makes the help desk an incredibly attractive and cost-effective target for attackers.

Why? Because social engineers don't hack systems; they hack people. They thrive in environments where security relies on outdated, easily compromised data points:

Knowledge-Based Questions (KBA): The name of your first pet or the street you grew up on isn't a secret. It's public information, easily found on social media or purchased for pennies on the dark web.
SMS & Email OTPs: Once considered secure, one-time passcodes are now routinely intercepted via SIM swapping attacks and sophisticated phishing campaigns.
Employee ID Numbers & Manager Names: This information is often exposed in data breaches and is useless for proving real-time identity.

Relying on this phishable data forces your agents to become human lie detectors, a role they were never trained for and a battle they are destined to lose. The result is a massive, unmitigated risk of help desk-driven account takeover.

Shifting from Guesswork to Certainty with HYPR's Help Desk App

Today, we're fundamentally changing this dynamic. To secure the help desk, you must move beyond verifying what someone knows and instead verify who someone is. That's why we're proud to introduce the HYPR Affirm Help Desk Application.

This purpose-built application empowers agents by integrating phishing-resistant, multi-factor identity verification directly into their workflow. Instead of asking agents to make high-pressure judgment calls, we give them the tools to verify identity with NIST IAL 2 assurance, fast. This transforms your help desk from a primary target into a powerful line of defense against fraud.

How Can You Unify Identity Verification for Every Help Desk Scenario?

The core of the solution is the HYPR Affirm Help Desk App, a command center for agents that integrates seamlessly with your existing support portals (like ServiceNow or Zendesk) and ticketing systems. This provides multiple, flexible paths to resolution, ensuring security and speed no matter how an interaction begins.

Initiate Verification from Anywhere:
Self-Service: Empower users to resolve their own issues by launching a secure verification flow directly from your company's support portal.
Agent-Assisted: For live calls or chats, an agent can use the HYPR Help Desk App to instantly send a secure, one-time verification link via email or SMS.
User-Initiated (with PIN): A user can start the verification process on their own and receive a unique PIN. They provide this PIN to a support agent, who uses it to look up the verified session, ensuring a fast and secure handoff without sharing any PII (see the sketch after this list).

Verify with Biometric Certainty:
The user is guided to scan their government-issued photo ID with their device's camera, followed by a quick, certified liveness-detecting selfie. This isn't just a photo match; the liveness check actively prevents spoofing and deepfake attacks, proving with certainty that the legitimate user is physically present and in control of their ID.

Resolve with an Immutable Audit Trail:
Once verification is complete, the result is instantly reflected in the agent's Help Desk App. The agent can now confidently proceed with the sensitive task – whether it's a password reset, MFA device recovery, or access escalation. Every step is logged, creating a tamper-proof, auditable record that satisfies the strictest compliance and governance requirements.
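As a thought experiment, here is roughly what the user-initiated PIN handoff could look like in code. HYPR has not published this API; the types, PIN format, and expiry window below are assumptions for illustration only.

```typescript
// Hypothetical sketch of a PIN-based verification handoff. Not HYPR's
// actual API; names, PIN format, and TTL are illustrative assumptions.
interface VerificationSession {
  pin: string;                              // one-time code the user reads to the agent
  userId: string;
  status: "pending" | "verified" | "failed";
  expiresAt: number;                        // epoch ms; PINs should be short-lived
}

const sessions = new Map<string, VerificationSession>();

// User side: start verification and receive a one-time PIN.
// A production system would use a CSPRNG, not Math.random().
function startSelfVerification(userId: string): string {
  const pin = String(Math.floor(100000 + Math.random() * 900000));
  sessions.set(pin, { pin, userId, status: "pending", expiresAt: Date.now() + 10 * 60_000 });
  return pin;
}

// Agent side: resolve the session by PIN instead of asking for PII.
function lookupSession(pin: string): VerificationSession | undefined {
  const session = sessions.get(pin);
  if (!session || Date.now() > session.expiresAt) return undefined;
  return session;
}
```

The point of the pattern is that the agent never handles the user's personal data directly; the PIN simply points at a verification the user completed on their own device.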

HYPR vs. Legacy Methods: A New Reality for Help Desk Security

The gap between traditional methods and modern identity assurance is staggering. One relies on luck, the other on proof.

End the Gamble: Stop Account Takeover at the Help Desk

Your organization can't afford to keep rolling the dice. Every interaction at your help desk is a potential entry point for a catastrophic breach. The pressure on your agents is immense, the methods they've been given are broken, and the attackers are relentless.

But there is a different path. A path where certainty replaces guesswork. Where your support team is empowered, not exposed. Where your help desk transforms from a cost center and a risk vector into a secure, efficient enabler of the business. By removing the impossible burden of being human lie detectors, you free your agents to do what they do best: help people. Securely. 

Ready to secure your biggest point of contact? Schedule your personalized HYPR Affirm demo today.

Frequently Asked Questions about HYPR Affirm’s Help Desk App (FAQ)

Q. What is NIST IAL 2 and why is it important for help desk verification?
A: NIST Identity Assurance Level 2 (IAL 2) is a standard from the U.S. National Institute of Standards and Technology. It requires high-confidence identity proofing, including the verification of a government-issued photo ID. For help desk scenarios, meeting this standard ensures you are protected against sophisticated attacks, including deepfakes and social engineering, and is crucial for preventing fraud.

Q. How long does the verification process actually take for the user?
A: The entire user-facing process, from receiving the link to scanning an ID and taking a selfie, is designed for speed and simplicity. A typical full verification is completed in under 2 minutes, and the process is completely configurable.

Q. What happens if a user doesn't have their physical ID available?
A: HYPR Affirm's policy engine is fully configurable. While ID-based verification is the most secure method, organizations can define alternative escalation paths and workflows to securely handle exceptions based on their specific risk tolerance and needs.

Q. Is this solution just for large enterprises?
A: HYPR Affirm for Help Desk is for any organization that needs to eliminate the significant risk of account takeover fraud originating from support interactions. It scales from mid-sized companies to the world's largest enterprises, securing sensitive tasks like password resets, MFA recovery, and access escalations.


Dark Matter Labs

Many-to-Many: The Messy, Meta-Process of Prototyping on Ourselves

Welcome back to our ongoing reflections on the Many-to-Many project. In our last three posts, we’ve taken you through the journey of building our digital platform — from initial concepts and wrestling with complexity to creating our first tangible outputs like the Field Guide and Website. We’ve shared how the project’s tools have emerged from a living, iterative process.

Today, we’re taking a step back to look at the foundational methodology behind this entire initiative. How do you go about creating new models for collaboration when no blueprint exists? Our approach has been a “proof of possibility” — a live experiment where we, along with our ecosystem of partners, served as the primary test subjects.

In this post, the initiative’s co-stewards, Michelle and Annette, discuss the profound challenges and unique learnings that come from trying to build the plane while flying it.

How the Proof of Possibility fits within a wider context of predecessor work, and flows into other initiatives and partial testing in live contexts

Michelle: We wanted to reflect on the “proof of possibility” we ran, where we essentially decided to live prototype on ourselves with a small group of partners in a Learning Network. While it sounds simple, we learned it’s incredibly complex. You’re making decisions and sense-making within a specific prototype, but you’re also constantly trying to translate those learnings into something more generalised and applicable for others. In many ways, it’s a cool, experimental way of working, but it was also a bit of a nightmare.

The prototype, test, learn loop that we started to develop in the Proof of Possibility

Annette: It was very meta. In this proof of possibility, one of the things we were testing was a learning infrastructure for the ecosystem itself. So you’re testing learning within the experiment, while also prototyping the experiment, and then you have to step back and ask: what did we learn from this specific context versus what is context-agnostic and applicable elsewhere? Then there’s another layer: what did we learn about the wider external landscape and its readiness for this work? And finally, what did we learn about the process of learning about all of that? There’s this feeling of learning about learning about learning.

It’s representative of the fractal nature of this work. For instance, we were a core team working on our own governance while simultaneously orchestrating and supporting the ecosystem’s governance. The ecosystem itself was then focused on building capabilities of the system for many-to-many governance. It was navigating so many layers. On one hand, this has immense value because you’re looking at one question from multiple angles at once. On the other hand, it has been incredibly cognitively challenging.

Michelle: It’s that old adage of trying to build the plane whilst flying it — except there are no blueprints for the plane. I think the complexity we bumped into is probably present for anyone trying to do this kind of work, because everyone has to work at fractals all the time. So I was thinking, what are some things we bumped into, and how did we overcome them? The first breakthrough that comes to my mind was when we started to explicitly ask, “Are we talking about this specific prototype right now, or are we talking about the generalised model?” Just having that clear distinction, a shared vocabulary that the whole learning network could use, was a huge moment of alignment for us. It gave people a way to see we were working on at least two layers at the same time.

The draft “Layers of the Project” which was created during the project as a visual representation and description of the different spaces we were trying to hold and build all at once. We note that the thinking has evolved and this image has been superseded, but share it here as a point in time image.

Annette: Yes and we found that the difference in thinking required for each of those layers was huge. Thinking through the specifics of what we did in one context versus pulling out principles applicable across all/any contexts was such a massive gear shift. Turning a specific example — “here’s something we tried” — into a generalised tool — “here’s something useful for others” — was probably a five-fold increase in workload, if not more. The amount of planning and thinking required was significantly different.

Michelle: What else comes up for you from this experience of prototyping on ourselves?

If nothing comes to mind, I can jump in. For me it was the dynamic of being the initiators. We were the ones who convened the group and set the mission. In these complex collaborations, the initiator tends to hold a lot of relational capital, power, and responsibility. This was exacerbated because we were managing all these different layers of learning. It centralised the knowledge and the relational dynamics back to us. If one of us was missing from a budget conversation, for example, it was difficult for others to proceed. For me, the bigger point is that to do good demonstration work, it has to be experimental and emergent. But that doesn’t come for free; it has downsides. This re-centralisation was one of them, and it was a lot for us to hold.

Annette: That makes me wonder if a certain degree of that centralisation is inevitable in organising for these kinds of ‘proofs of possibility’. When something is this complex and emergent, you can only distribute so much, so early. To meet the real-time needs of the collaboration, you need an agile core team. This is where it gets interesting — we were operating in the thin space between a sandbox environment and a live context. It had to be a genuine live context for people to want to participate, but it was also a sandbox for testing the general model. You have to meet the timelines of the live context; you can’t just pause for six months to work out team dynamics, or the collaboration collapses. So you almost need a team providing strong leadership to hold both realities at once.

Michelle: So, would you do it the same way again?

Annette: I think if we did it again, the things we’ve learned would make it smoother. We’d be more explicit from the start about which layer we’re discussing. We’d have a better sense of how to capture live learning and translate it into a model as we go. When we started, most of our attention was on hosting the live context, and a lot of the synthesis happened afterwards. Having done it once, I’d be more conscious of doing that synthesis in real-time — though the cognitive lift to switch between those modes is still immense.

Michelle: I agree, I would do it again with those additions. The other thing is that when we started, we didn’t even really have the process that we wanted to go through. Now we do. We’ve learned more about what works. Starting fresh, we would have a decent sketch of a process to begin with. Not perfect, and you still have to wing it, but it’s a good start. I’d be interested to do it again and see what happens.

This meta-reflective process — learning about learning while doing — has been a central part of the Many-to-Many initiative’s ‘Proof of Possibility’, a way to learn about what’s possible at a system level. While navigating these fractal layers is cognitively demanding, it’s what allows for true emergence, distinguishing this deep, systemic work from simple chaos. It is a messy, challenging, and ultimately fruitful way to discover what’s possible.

In the Many-to-Many website [coming soon] you will find some resources based on what we did in the Proof of Possibility (Experimenter’s Logs and example methods and artefacts like the Contract) and some based on what might be applicable across contexts (a Field Guide, some tools and an overview of System Blockers we’ve encountered) along with case studies and top tips from other contexts in the learning network.

Thanks for following our journey. You can find our previous posts [here], [here] and [here] and stay updated by joining the Beyond the Rules newsletter [here].

Visual concept by Arianna Smaron & Anahat Kaur.

Many-to-Many: The Messy, Meta-Process of Prototyping on Ourselves was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

Decentralized identity: The superpower every 2026 budget needs

Verifiable Credentials are the foundation for faster, safer, and more cost-effective digital strategy. In this new report from Indicio, we look at examples of successful deployments and the benefits to business, and we explain the risks of waiting too long to adopt this technology. We also explain how to eliminate the cost and uncertainty of developing from scratch, laying out a blueprint for making adoption simple.

By Helen Garneau

Every 2026 budget decision will come down to a simple question: does this investment deliver measurable value? Leaders are expected to cut costs, reduce risk, and still deliver growth. In that environment, the way you handle digital identity can no longer be an afterthought—it has to be a priority.

This is especially true as identity fraud accelerates across all fronts, driven by generative AI brute-force attacks, deepfakes, and social media scams. Legacy technology isn’t just failing to keep up; it’s the root cause of these problems.

That is why we wrote Decentralized Identity: The Superpower Every 2026 Budget Needs. It explains why Verifiable Credentials are a transformational new technology that combines authentication and fraud prevention in one simple, cost-effective solution that you can easily inject into your systems and operations.

Can you inoculate your IAM processes against deepfakes?

Yes, you can — by incorporating authenticated biometrics into Verifiable Credentials. We explain how organizations are already doing just that to cut fraud and costs, and how you can too, by showing a practical path for adoption.

Now is the time to act. As 2026 budgets are finalized, the organizations that plan for Verifiable Credentials today will be the ones positioned to lead their markets. Get in-depth knowledge and actionable insights that you can turn into immediate savings.

Download the report and see how Indicio Proven can help you reduce costs, protect against fraud, and accelerate growth in 2026.

The post Decentralized identity: The superpower every 2026 budget needs appeared first on Indicio.


BlueSky

Bluesky's Patent Non-Aggression Pledge

Bluesky develops open protocols. We're taking a short and simple patent non-aggression pledge to ensure that everybody feels confident building on them.

Bluesky develops open protocols, and we want everybody to feel confident building on them. We have released our software SDKs and reference implementations under Open Source licenses, but those licenses don’t cover everything. To provide additional assurance around patent rights, we are making a non-aggression pledge.

This commitment builds on our recent announcement that we’re taking parts of AT to the IETF in an effort to establish long-term governance for the protocol.

Specifically, we are adopting the short and simple Protocol Labs Patent Non-Aggression Pledge:

Bluesky Social will not enforce any of the patents on any software invention Bluesky Social owns now or in the future, except against a party who files, threatens, or voluntarily participates in a claim for patent infringement against (i) Bluesky Social or (ii) any third party based on that party's use or distribution of technologies created by Bluesky Social.

This pledge is intended to be a legally binding statement. However, we may still enter into license agreements under individually negotiated terms for those who wish to use Bluesky Social technology but cannot or do not wish to rely on this pledge alone.

We are grateful to Protocol Labs for the research and legal review they undertook when developing this pledge text, as part of their permissive intellectual property strategy.


FastID

Fastly's Seven Years of Recognition as a Gartner® Peer Insights™ Customers’ Choice

Fastly was named a 2025 Gartner® Peer Insights™ Customers’ Choice for Cloud WAAP, marking seven consecutive years of recognition driven by customer trust and reviews.

Tuesday, 30. September 2025

Mythics

Mythics' Strategic Acquisitions Amplify Cloud-Powered, AI-Driven Transformation at Oracle AI World

The post Mythics' Strategic Acquisitions Amplify Cloud-Powered, AI-Driven Transformation at Oracle AI World appeared first on Mythics.

Spherical Cow Consulting

Delegation and Consent: Who Actually Benefits?

“When not distracted by AI (which, you have to admit, is very distracting), I’ve been thinking a lot about delegation in digital identity. We have the tools that allow administrators or individuals to grant specific permissions to applications and services.” 

In theory, it’s a clean model: you delegate only what’s necessary to the right party, for the right time. Consent screens, checkboxes, and admin approvals are supposed to embody that intent.

That said, the incentive structures around delegation don’t actually encourage restraint. They encourage permission grabs and reward broader access, not narrower. And when that happens, what was supposed to be a trust-building mechanism—delegation with informed consent—turns into a trust-eroding practice.

A Digital Identity Digest podcast: Delegation and Consent: Who Actually Benefits? (11:54)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Delegation’s design intent versus product incentives

Delegation protocols like OAuth were designed to solve a simple problem: how can an application act on your behalf without you handing over your password? Instead of giving a third-party app your full login, OAuth lets you grant that app a limited token, scoped to specific actions, like “read my calendar” or “post to my timeline.” In enterprise settings, administrators can approve apps at scale, effectively saying, “this tool can access certain company data on behalf of all our employees.”

The intent is least privilege: give just enough access to accomplish the task, nothing more. Tokens should be narrowly scoped, time-bound, and transparent.
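For a concrete sense of what “narrowly scoped” means in practice, here is a minimal sketch of an OAuth 2.0 authorization request that asks for a single read-only permission. The endpoint, client ID, and scope string are generic placeholders, not any particular provider's values.

```typescript
// A minimal, narrowly scoped OAuth 2.0 authorization request.
// All endpoint/client/scope values are illustrative placeholders.
const authorizationUrl = new URL("https://auth.example.com/oauth2/authorize");
authorizationUrl.search = new URLSearchParams({
  response_type: "code",
  client_id: "calendar-sync-app",
  redirect_uri: "https://app.example.com/callback",
  scope: "calendar.readonly",    // exactly what the task needs, nothing more
  state: crypto.randomUUID(),    // anti-CSRF value, checked on the redirect back
}).toString();

console.log(authorizationUrl.toString());
```

In these terms, scope inflation is the drift from "calendar.readonly" toward broader and broader scope strings, until the token covers far more than the task ever required.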

But the product incentives push in the opposite direction. If you’re a developer or growth team, every extra permission opens new doors: richer analytics, better personalization, and potentially more revenue. Why ask for the bare minimum when you can ask for a lot more, especially if you can get away with it?

And so the pattern of permission creep emerges. An interesting study of Android apps, for example, shows that popular apps tend to add more permissions over time, not fewer. The reason isn’t technical necessity; it’s incentive alignment. More access means more opportunities, even if it slowly undermines the trust that delegation was supposed to build.

This is scope inflation: when “read metadata from one folder” somehow balloons into “read and write all files in your entire cloud drive.” From a delegation perspective, it looks absurd. From an incentive perspective, it looks entirely rational.

Consent as a manufactured outcome

Let’s talk about “consent.” It’s the shiny wrapper that’s supposed to make delegation safe. The idea is simple: a user sees what’s being requested, makes an informed choice, and either agrees or doesn’t. That’s the theory. In practice, consent is manufactured.

Consent screens are optimized like landing pages. The language is written to minimize friction. The buttons are designed to maximize acceptance. Companies treat “consent rates” the same way they treat sign-up conversions or click-through rates: a metric to push upward.

And the tactics aren’t subtle:

Dark patterns in consent UIs. Regulators in the EU have formally called out manipulative design in cookie banners and social media interfaces; tricks like highlighting the “accept” button in bright colors while burying “reject” in a subtle link. That’s not neutral presentation. That’s steering.
Consent-or-pay models. The latest battleground is whether “pay or accept tracking” constitutes valid consent. European regulators have said that if refusal carries a cost, then consent may not be “freely given.” Yet many sites lean into exactly this model: you can either hand over your data or hand over your credit card.
Consent fatigue. When users see banners, pop-ups, and consent prompts multiple times a day, they stop reading. They click whatever gets them through fastest. At that point, it’s no longer informed consent, it’s consent theater.

Delegation without trust is already fragile. Delegation wrapped in manufactured consent is worse: it’s a contract of adhesion where one party has all the power and the other clicks “accept” because they have no real choice.

If you’d like to dive into the consent debate further, I HIGHLY recommend you follow Eve Maler’s The Venn Factory. She has a great blog series on consent (example here) and an even greater whitepaper (for a fee but totally worth it).

Enterprise delegation and the admin consent problem

It’s tempting to think this is just a consumer problem involving cookie banners and mobile apps. But enterprise delegation has its own set of perverse incentives.

Take Microsoft 365 and Entra ID as an example (though let’s be clear that this is absolutely a common scenario). Enterprises can allow third-party apps to request access to user or organizational data through OAuth. To reduce noise, Microsoft lets administrators “consent on behalf of the organization.” Sounds efficient, right? Fewer pop-ups, fewer interruptions for the workforce, saving time (and time = money, right?).

But that efficiency comes at a cost. Attackers exploit this very model through “consent phishing”: tricking a user or admin into approving a malicious app that requests broad API scopes. Once granted, those permissions are durable and hard to detect. Microsoft now publishes guidance on identifying and remediating risky OAuth apps precisely because the model’s incentives tilt toward convenience over caution.

For administrators, the path of least resistance is to click “Approve for the organization” once and move on. That makes life incrementally easier for everybody: administrators, their users, and the attackers.
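If you administer such a tenant, one mitigation is to periodically review what has already been consented to. As a rough sketch, Microsoft Graph exposes delegated grants through its oauth2PermissionGrants endpoint; the broad-scope watchlist below is an illustrative heuristic, not official Microsoft guidance.

```typescript
// Rough sketch: flag org-wide OAuth grants that carry broad delegated
// scopes. The watchlist is an illustrative heuristic, not official guidance.
const BROAD_SCOPES = ["Mail.ReadWrite", "Files.ReadWrite.All", "Directory.ReadWrite.All"];

async function findRiskyGrants(accessToken: string): Promise<void> {
  const res = await fetch("https://graph.microsoft.com/v1.0/oauth2PermissionGrants", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const { value: grants } = await res.json();

  for (const grant of grants) {
    // consentType "AllPrincipals" means an admin consented for everyone.
    const orgWide = grant.consentType === "AllPrincipals";
    const broad = (grant.scope ?? "").trim().split(/\s+/)
      .filter((s: string) => BROAD_SCOPES.includes(s));
    if (orgWide && broad.length > 0) {
      console.warn(`Review app ${grant.clientId}: org-wide grant of ${broad.join(", ")}`);
    }
  }
}
```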

Enforcement as a belated correction

If the incentives reward broad access, who actually keeps things in check? Increasingly, it’s regulators and courts.

In the U.S., the Federal Trade Commission has penalized companies like Disney and Mobilewalla for collecting data under misleading labels or without meaningful consent. The penalties aren’t just financial; they force changes in how products are designed and how defaults are set.
In Europe, the IAB’s Transparency and Consent Framework—the standard that underpins much of adtech—has faced repeated rulings (see examples here and here) that its consent strings are personal data, that aspects of the framework violate GDPR, and that “consent at scale” is not a free pass. Legal battles continue, but I think the message being sent is pretty obvious: broad, opaque consent mechanisms don’t hold up under scrutiny.
Regulators have also zeroed in on “consent-or-pay” and dark pattern interfaces, explicitly saying that these undermine the principle of freely given consent.

What’s happening is essentially a regulatory realignment of incentives. If the market rewards permission grabs, fines, and rulings change the cost-benefit equation. In some markets, but not all, the cheapest path is shifting to grabbing less data, not more.

Why this erodes trust

From the individual’s point of view, none of this is subtle. They notice when an app requests more permissions than it should. They notice when every website they visit demands cookie consent in confusing ways (it is SO ANNOYING). They notice when their IT department approves a sketchy app and they’re the ones who end up phished.

The result is trust erosion. Individuals stop believing that “consent” means choice and assume that every request for access is a data grab in disguise. They are probably not wrong.

And once trust is gone, it’s not easily rebuilt. Every new protocol, every new delegation model, has to fight against that backdrop of suspicion.

What good looks like

If delegation and consent are to survive as trust-building mechanisms, they have to look different from how they look today. Here are a few ways to realign the incentives:

Purpose-bound scopes. Tokens should be tied to specific actions, not broad categories. “Read file metadata for this folder” is a very different ask than “Read all your files.”
Time-boxed tokens. Access should expire quickly unless explicitly renewed. Long-lived tokens are an incentive to attackers and a liability for providers. (See the sketch after this list.)
Refusal symmetry. The “reject” button should be as prominent and easy to click as the “accept” button. Anything less is manipulation.
Transparent change logs. Apps should publish what scopes they request and why, with a clear history of when those scopes changed. If permissions creep is inevitable, at least make it visible.
Admin consent boards. In enterprises, app approval should involve more than a single overworked admin. Formal review processes—similar to change advisory boards—can slow down risky delegation without grinding everything to a halt.
Trust reports. Companies could publish regular “trust reports” that show how delegation and consent are actually being managed. Which apps request what? How often are tokens revoked? How many requests are denied? Turning these into KPIs re-aligns incentives toward trust, not just conversion.
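To make the first two items concrete, here is a minimal sketch of purpose-bound, time-boxed delegation tokens. The field names, scope syntax, and fifteen-minute TTL are illustrative choices, not a reference to any specific product or spec.

```typescript
// Minimal sketch of purpose-bound, time-boxed delegation. All names,
// scope syntax, and TTLs are illustrative assumptions.
interface DelegationToken {
  subject: string;   // who delegated
  audience: string;  // the app that received the grant
  scope: string;     // one narrow, purpose-bound action
  issuedAt: number;  // epoch seconds
  expiresAt: number; // short-lived unless explicitly renewed
}

function issueToken(subject: string, audience: string, scope: string, ttlSeconds = 900): DelegationToken {
  const now = Math.floor(Date.now() / 1000);
  return { subject, audience, scope, issuedAt: now, expiresAt: now + ttlSeconds };
}

function isValid(token: DelegationToken, requiredScope: string): boolean {
  const now = Math.floor(Date.now() / 1000);
  // Expiry and scope mismatches both fail closed.
  return now < token.expiresAt && token.scope === requiredScope;
}

const token = issueToken("alice@example.com", "report-generator", "files:read:folder/reports");
console.log(isValid(token, "files:read:folder/reports"));  // true, until the 15-minute TTL lapses
console.log(isValid(token, "files:write:folder/reports")); // false: the scope is purpose-bound
```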

Who actually benefits?

So, back to the original question: who actually benefits from delegation and consent as practiced today?

Companies benefit from broader access because it feeds product features, analytics, and monetization.
Attackers benefit when that broad access is abused, because consent tokens and admin approvals often outlive user awareness.
Regulators benefit politically when they enforce, because they’re seen as protecting consumer rights.
Users? Users benefit in theory, but in practice, they’re the least likely to see real advantage. Their consent is optimized against, their delegation scopes are inflated, and their trust is constantly eroded.

Delegation and consent were supposed to empower users. Right now, they mostly empower everyone else.

The path forward

Delegation is too valuable to discard; it is definitely having its moment given the complexities of doing it correctly. Consent is too foundational to abandon; the alternative of not asking at all is at least as bad as asking too much. But both need to be reclaimed from the incentive structures that have warped them.

That means treating trust as the KPI, not just consent click-through rates. It means designing delegation flows that prioritize least privilege, not maximum access. It means regulators continuing to push back against manipulative practices, and companies recognizing that the long game is trust, not just data.

If the only people who benefit from delegation and consent are companies and attackers, then the rest of us have been sold a story. And the longer that story holds, the harder it will be to convince users that their “yes” actually means something. If your bosses are having a hard time understanding that, feel free to print out this post and slide it under their office door. They might think a bit more deeply about their decisions going forward.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here

Transcript

[00:00:30] Welcome back to A Digital Identity Digest. Today’s episode is called Delegation and Who Actually Benefits?

[00:00:37] This piece builds on earlier conversations and writing about delegation and digital identity.

[00:00:44] Today, we’ll explore how incentive structures push companies to grab broader permissions than they really need—and how that erodes trust.

The Clean Model of Delegation

[00:00:53] When not distracted by all the AI news—which you have to admit is very distracting—I’ve been thinking a lot about delegation and digital identity.

We have tools that allow administrators or individuals to grant specific permissions to applications and services. In theory, this is a very clean model:

Delegate only what’s necessary
To the right party
For the right time

[00:01:18] Consent screens, checkboxes, and admin approvals are all supposed to embody this principle.

[00:01:24] Unfortunately, incentives don’t encourage restraint. They encourage permission grabs. That reward system favors broader access, not narrower. What should be a trust-building mechanism often turns into a trust-eroding practice.

OAuth and the Design of Least Privilege

[00:01:40] Delegation protocols like OAuth were created to solve a practical problem:

[00:01:47] How can an application act on your behalf without requiring your password?

Instead of handing over login credentials, OAuth allows granting a limited token. Ideally, that token is:

Scoped to a specific action (e.g., read my calendar)
Time-bound
Transparent

[00:02:17] In enterprise settings, administrators can approve apps at scale. That way, employees aren’t asked to answer the same questions repeatedly.

[00:02:28] But here’s the issue: incentives push in the opposite direction.

[00:02:32] Service builders want broader access because:

More permissions unlock richer analytics
Data enables personalization
Extra information can be monetized

[00:02:42] Growth teams treat every consent screen as a conversion funnel to optimize. Why ask for less when asking for more is easier?

[00:02:59] The result is permission creep. Studies of Android apps show that popular apps add permissions over time—not fewer.

Consent in Theory vs. Consent in Practice

[00:03:34] On paper, consent is the safeguard. Users see what’s requested and make an informed choice.

[00:03:48] In practice, consent is manufactured. Consent screens are optimized like landing pages.

Language minimizes friction
Buttons maximize acceptance
Consent rates are tracked as key metrics

[00:04:00] Dark patterns dominate: cookie banners where “Accept All” is bright and obvious, while “Reject” hides as a faint gray link.

[00:04:15] Regulators in Europe have called this out as manipulative.

[00:04:20] Then there are “consent or pay” models: accept tracking or pay for access. Regulators argue this undermines freely given consent.

[00:04:33] And, of course, there’s consent fatigue. Repeated banners train users to click without thinking. What’s left isn’t informed consent—it’s consent theater.

[00:04:46] Delegation without trust is fragile. Delegation wrapped in manufactured consent is worse.

Enterprise Risks and Consent Phishing

[00:05:01] This isn’t just a consumer problem. Enterprise environments like Microsoft 365 and Entra ID carry their own risks.

[00:05:13] Enterprises can let third-party apps request organizational data. To reduce friction, admins can consent on behalf of the entire company.

[00:05:22] Efficient, yes. Dangerous, absolutely.

[00:05:24] Attackers exploit this through consent phishing—tricking admins into approving malicious apps with broad permissions. Once granted, this access is durable and hard to detect.

[00:05:39] Microsoft even publishes playbooks to spot risky OAuth apps, acknowledging the problem.

[00:05:44] But incentives still tilt toward convenience. For overworked admins, approving once feels easier than vetting thoroughly.

Regulatory Realignment of Incentives

[00:06:03] If incentives reward broad access, who reins it in? Increasingly, regulators.

[00:06:11] In the U.S., the Federal Trade Commission has penalized companies for misleading consent practices.

Disney and Mobilewalla paid fines
Companies were required to change product design, not just pay penalties

[00:06:26] In Europe, the IAB’s Transparency and Consent Framework has been ruled non-compliant with GDPR. Courts held that consent at scale does not equal valid consent.

[00:06:46] Regulators are also challenging “consent or pay” models, stating they undermine freely given consent.

[00:06:59] This is a regulatory re-alignment of incentives. If the market rewards permission grabs, fines and rulings push companies in the opposite direction—toward less data collection.

The User’s Perspective and Erosion of Trust

[00:07:14] From the user’s point of view, the problem is visible:

Apps request more permissions than needed
Cookie banners are confusing
IT teams approve apps that later lead to phishing

[00:07:46] The result is erosion of trust. Users stop believing that:

Consent equals choice
Delegation equals least privilege

[00:07:56] Once trust is lost, it’s hard to rebuild. Every new product must fight against this backdrop of suspicion.

How Do We Fix This?

[00:07:58] So how can delegation and consent become real trust-building mechanisms instead of hollow rituals?

[00:08:04] Here’s a list:

Purpose-bound scopes: tokens tied to specific actions, not broad categories
Time-boxed tokens: access that expires quickly unless renewed
Refusal symmetry: reject buttons as visible and easy as accept buttons
Transparent change logs: apps publishing history of permission requests
Admin consent boards: enterprise review panels instead of one pressured approver
Trust reports: companies disclosing how often requests are denied, access revoked, and policies enforced

[00:09:05] Each of these shifts incentives toward making trust the key performance indicator.

Who Actually Benefits?

[00:09:16] Returning to the original question: who benefits from delegation and consent today?

Companies: more permissions, more data, more revenue
Regulators: political capital when stepping in
Attackers: durable, broad tokens for persistence
People: benefit mostly in theory, but often remain the least protected

[00:09:57] Delegation and consent were meant to empower users. Today, they mostly empower everyone else.

[00:10:04] But both are too important to discard. They must be reclaimed from warped incentives.

[00:10:18] That means:

Treating trust as the KPI
Designing delegation for least privilege, not maximum access
Regulators continuing to push back against manipulation

[00:10:30] Because if only companies and attackers benefit, we’ve lost the plot.

Closing Thoughts

[00:10:44] If you want to dive deeper, explore the work of Eve Maler at the Venn Factory. Her white paper on consent is a fantastic resource worth reading.

[00:11:06] Thanks again for joining A Digital Identity Digest.

[00:11:17] If this episode made things clearer—or at least more interesting—share it with a friend or colleague. Connect with me on LinkedIn @hlflanagan.

And don’t forget to subscribe and leave a review on Apple Podcasts or wherever you listen. The written post is always available at sphericalcowconsulting.com.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Delegation and Consent: Who Actually Benefits? appeared first on Spherical Cow Consulting.


ComplyCube

The CryptoCubed Newsletter: September Edition

In this month’s edition, we cover Australia’s $16.5 million warning to unlicensed crypto firms, KuCoin’s legal battle with Canada’s FINTRAC, the married duo who scammed over 145 crypto investors, Poland’s new crypto bill, and more!

The post The CryptoCubed Newsletter: September Edition first appeared on ComplyCube.


FastID

Make Sense of Chaos with Fastly API Discovery

Discover, monitor, and secure your APIs with Fastly API Discovery. Get instant visibility, cut the noise, and keep your APIs secure and compliant.

Monday, 29. September 2025

liminal (was OWI)

Identity Market & Policy Trends 2026: Intelligence for a Changing Landscape

Intelligence for a Changing Landscape

The post Identity Market & Policy Trends 2026: Intelligence for a Changing Landscape appeared first on Liminal.co.


Ontology

The Role of EOAs in Long-Term Web3 Identity

Hand someone a ledger full of cold storage and they’ll sleep fine at night. Hand them the same ledger and tell them it’s their daily identity and they’ll start sweating. That’s the dividing line between Externally Owned Accounts (EOAs) and the future of Web3 identity.

👉 [7 Proven Ways Smart Wallets Transform Web3 Identity Forever]

EOAs are the oldest and most widely used model for blockchain accounts. They were introduced in Ethereum’s earliest days, designed around a single principle: one private key controls one account. That design is elegant in its simplicity and still unmatched when it comes to long-term security.

But as Web3 evolves into a world of portable, reputation-based, and privacy-first identity, it’s worth asking: where do EOAs fit in?

What Are EOAs in Web3?

An EOA is the most basic account type in Ethereum and many other blockchains. Unlike smart contracts, EOAs have no internal code or logic. They exist to send and receive assets, secured entirely by a private key.

If you control the key, you control the account. Lose the key, and the account is gone forever. There is no backup, no recovery, and no reset button.
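A few lines of code show how stark that model is. The sketch below uses the ethers library; the private key is a widely published test value (never hold real funds with it).

```typescript
// "One private key controls one account," in code. Uses the ethers
// library; the key below is a well-known public test value.
import { Wallet } from "ethers";

const TEST_KEY = "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80";
const wallet = new Wallet(TEST_KEY);

// The address is derived deterministically from the key. There is no
// recovery flow: lose the key and the account's assets are unreachable.
console.log(wallet.address);
```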

That rigidity is why EOAs are perfect for what they were built for: vaults.

EOAs as Vaults in Web3 Identity

When it comes to cold storage and long-term custody, EOAs are unmatched. Pair one with a hardware wallet and you have one of the most secure setups in all of crypto.

Staking: EOAs work perfectly for locking up assets in staking positions.
Governance tokens: If you plan to hold voting power for years, an EOA keeps it safe.
NFT collections: For high-value NFTs meant for long-term ownership, EOAs are the best option.
Institutional custody: Funds and DAOs often rely on EOAs for their simplicity and auditability.

The lack of flexibility is what makes them secure. No extra logic means fewer attack vectors. No recovery flows means fewer trust assumptions. Just a private key, a wallet, and assets locked away until you decide to move them.

Why EOAs Struggle as Daily Web3 Identity

The problem comes when EOAs are forced into a role they weren’t designed for: identity.

Daily Web3 identity requires accounts that are:

Recoverable if a key is lost or a device breaks
Readable with human-friendly identifiers instead of 42-character hex strings
Portable across chains, dApps, and platforms
Flexible enough to hold credentials, permissions, and reputation

EOAs can’t do any of this. They’re silent vaults. They don’t carry context or history. They can’t evolve as your needs change. And they put every bit of risk onto one fragile key.

This is where smart wallets and Account Abstraction take over.

EOAs vs Smart Wallets: Dividing the Labor

It’s easy to frame EOAs and smart wallets as competitors, but that’s the wrong way to look at it. They’re complements. Each plays a specific role in the Web3 stack.

EOAs are vaults: best for long-term asset storage, cold custody, and high-value holdings.
Smart wallets are identity: built for daily use, recovery, credentials, cross-chain logic, and compliance.

Instead of replacing EOAs, smart wallets expand Web3 identity beyond them. The vaults still exist, but identity moves into programmable, human-friendly infrastructure.

Why EOAs Still Matter for the Future of Web3

Even as smart wallets gain adoption, EOAs will remain essential for three reasons:

Security: The simplicity of EOAs makes them the most secure baseline for storage.
Reliability: They are battle-tested and widely supported across every major blockchain.
Foundation: Many smart wallets ultimately anchor to EOAs under the hood, ensuring that the vault layer remains intact.

In other words, EOAs aren’t going away. They are the bedrock of Web3. But they can’t carry the entire weight of identity.

The Balance Ahead

The future of Web3 identity is not either-or. It’s both.

Use EOAs for vaults: keep long-term assets locked down in their simplest, most secure form.
Use smart wallets for identity: manage recovery, credentials, and interactions across chains and applications.

Together they cover the full spectrum of what Web3 demands: immovable security on one end, human usability on the other.

Try It Yourself: EOAs with ONT ID in ONTO Wallet

EOAs are the backbone of long-term Web3 security. With ONT ID, you can anchor an EOA to your decentralized identity and keep assets safe while still unlocking future-ready features like staking and verifiable credentials.

Download ONTO Wallet to:

Manage EOAs for secure asset storage
Stake directly from your vaults
Connect your EOA to ONT ID for portable identity
Explore verifiable credentials while keeping full self-custody

Whether you’re holding tokens, securing NFTs, or preparing for the next phase of Web3 identity, ONTO Wallet gives you the flexibility of smart features with the permanence of EOAs.

Learn More: How Smart Wallets Complete the Picture

EOAs may be the vaults of Web3, but they’re only half the story. To see how Account Abstraction and smart wallets transform identity into something portable, recoverable, and privacy-first, read the full breakdown:

👉 [7 Proven Ways Smart Wallets Transform Web3 Identity Forever]

The Role of EOAs in Long-Term Web3 Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

From Climate Week NYC to Fastly’s 100% Renewable Commitment

Fastly commits to 100% renewable electricity coverage across its global network and offices, advancing a sustainable internet and supporting customers' climate goals.

Friday, 26. September 2025

Anonym

Your Complete Guide to Online Privacy in 2025: Who is Taking Your Personal Info and How to Stop Them

Every time you buy something, open an account, search the internet, interact on social media, and use smart devices, public WiFi, and AI, you leave a trail of personal information or “personal data” that is being collected, shared, used, and abused. Suddenly you’re getting spam calls, phishing emails, smishing texts, and data breach alerts, all while someone is booking flights to Ibiza with your credit card and taking out mortgages in your name!   

In 2025, our digital footprints are vast and vulnerable— and online privacy is an urgent issue.

This guide covers everything you need to know about online privacy:

What are personal data and your digital footprint?
Who’s collecting your personal information and why?
What happens when your information gets into the wrong hands?
What is data privacy?
Are there data privacy laws?
What you can do to protect yourself

What are personal data and your digital footprint?

Your digital footprint is all the information about you that exists on the internet because of your online activity. It’s sometimes called your digital exhaust because, just as engine exhaust is residue from using a car, digital exhaust is residue from using the internet. 

Your data is collected from:

Websites (cookies, tracking pixels, session recording)
Mobile apps (permissions, background data sharing)
Social media (likes, shares, behaviour analysis in social graphs and interest graphs)
Smart devices
Artificial intelligence (AI) tools
Public WiFi and location tracking

Your digital footprint contains what’s called your personal data. Data is information, and personal data (also called personal information or, to get technical, personally identifiable information, or PII) is officially defined as any data that can be used to distinguish or trace an individual’s identity, along with any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

Examples of PII are:

your full name, maiden name, mother’s maiden name, and alias
your date of birth, place of birth, race, religion, weight, activities, geographical indicators
employment information, medical information, education information, financial information
personal ID numbers such as your SSN and passport and driver license numbers
your addresses
your telephone numbers
IP or MAC address
personal characteristics, including photographic images, x-rays, fingerprints, or other biometric images
your vehicle registration number or title number

Who’s collecting your personal information and why?

Our digital world is now so reliant on user data it’s described as surveillance capitalism and the data economy. Loads of players have their fingers in this “personal data pie”, including:

Big tech

Tech companies like Alphabet (Google), Meta, Amazon, Apple, Microsoft are giving you “free” access to their platforms and products in return for your personal information, time, and attention. Have you heard the saying, “If you’re not paying for the product, you ARE the product?”

Part of your digital footprint is also what’s known as your social graph and your interest graph. A social graph is a digital map of who you know—your relationships within a social network including your friends, family, coworkers, etc., while an interest graph maps what you like—it connects you to other people based on shared interests, hobbies and topics, rather than personal relationships.

Big tech uses all this personal data to:

Sell ads to third-party advertisers that serve you personalized ads (those scarily coincidental ads that pop up within seconds of your search for a product)
Control the content you see, including news feeds and social media posts
Set higher prices (you search for something high risk like “motor racing” and suddenly your insurance premium goes up)
Influence your political decisions (read up on Cambridge Analytica for a famous example).

And here’s another thing: most users never consent to their information being used in these ways. Most privacy policies are long, vague, and unreadable, and user consent is complex. What’s more, many apps use dark patterns—design tricks that pressure users to share more information and buy more products than they want to.

Data brokers

Data brokers, of which there are about 4,000 legitimate but unregulated organizations worldwide, gather and collate your lucrative data to sell profiles to advertisers, insurers, and political groups. These profiles can include:

your age
marital status
where you live
your email address
employer
how much money you make
how many children you have
where you shop
what you buy
your medical conditions and health issues
who you vote for and support

Data brokers usually sell user information to brands in list form. Your email address on a list of people with a particular medical condition such as diabetes would be worth about $79 and on a list of a particular class of traveller about $251. And that’s another thing: A lot of your personal data online isn’t stuff you’d want to share around. While data brokers say the data is anonymized, it’s scarily simple to re-identify so-called “anonymous” data. In fact, some researchers say anonymous data is a lie, and that unless all aspects of de-identifying data are done right, it is incredibly easy to re-identify the subjects.

Governments

Worldwide, governments use citizens’ personal data for surveillance under the guise of national security, public safety, and crime prevention. For example, Proton recently reported that Google, Apple and Meta have handed over data on 3.1 million accounts to the US authorities over the last decade (regardless of which political party was in the White House), providing information such as emails, files, messages, and other highly personal information.

“In the past, the government relied on massive, complex and legally questionable surveillance apparatus run by organizations like the NSA. But thanks to the advent of surveillance capitalism, this is no longer necessary,” said Raphael Auphan, Proton’s chief operating officer.

“All that’s required for the government to find out just about everything it could ever need is a request message to big tech in California. And as long as big tech refuses to implement widespread end-to-end encryption, these massive, private data reserves will remain open to abuse,” Auphan added.

Hackers and scammers

Criminals exploit stolen data in many different ways, which brings us to the next point …

What happens when your personal information gets into the wrong hands?

We’ve covered what brands and governments do with your personal information. Bad actors can also do a lot of damage with your data:

Identity theft: Using your stolen information to impersonate you for financial gain or to commit crimes
Financial fraud: Accessing your bank accounts, credit card information, or other financial accounts to make unauthorized transactions
Phishing: Sending fraudulent emails or messages pretending to be from legitimate organizations to trick you into revealing more information or clicking on malicious links
Social engineering: Manipulating you into divulging confidential information, often by posing as someone you trust or using your stolen information to build credibility
Account takeover: Gaining unauthorized access to your online accounts (email, social media, etc.) using your stolen usernames and passwords
Tax fraud: Using stolen personal information to file fraudulent tax returns and claim refunds
Medical identity theft: Using your stolen information to get medical services and prescriptions, or to fraudulently file insurance claims
Employment fraud: Using your stolen information to illegally gain employment or benefits
Blackmail or extortion: Threatening to expose your sensitive information unless you pay a ransom
Creating fake identities: Using your stolen information to create new identities for various fraudulent purposes.

Data breaches are the new normal

One way bad actors get your information is through data breaches. A data breach is a security event where highly sensitive, confidential or protected information is accessed or disclosed without permission or is lost.

We’ve almost come to expect massive, damaging data breaches. The year 2024 had the most data breaches on record, and 2025 has already seen the largest data breach of all time: the leak of more than 16 billion usernames and passwords for accounts with Apple, Facebook, Google, other social media platforms, and government services.

AI is making data privacy worse

AI is connecting just about everything in our lives, from our vehicles to our eyewear, and we’re using it in all sorts of everyday ways. But AI presents privacy risks not only in what we share but also in how AI can analyze, infer, and act on that information without our permission (think deepfakes, for example).

Academics have already identified at least 12 privacy risks from AI, and safe and ethical AI governance is a priority.

What is online privacy?

You might say, “I have nothing to hide,” “Privacy tools are only for criminals,” or “Social media is harmless fun,” but against this backdrop of risks and damage, you can see the urgent need to protect your online privacy (also called data privacy): your right to control your personal information and how it’s used.

Data privacy matters because it protects our fundamental right to privacy and means we can:

Limit what others can know about us, the control they have over us, and their ability to cause us harm
Better manage our professional and personal reputations
Put in place boundaries and encourage respect
Maintain trust in relationships and interactions with others
Protect our right to free speech and thought
Pursue second chances for regaining our privacy
Feel empowered that we’re in control of our life

Are there data privacy laws?

Data privacy laws are designed to give users more control over their personal data by regulating how organizations can collect, store, and use that information.

As of 2024, 137 countries have national data privacy laws. That means 70% of nations worldwide have such laws, covering 6.3 billion people, or 79.3% of the world’s population.

Despite many attempts, the United States is one of the only major global economies without a strong national privacy law similar to the European Union’s GDPR, the gold standard for consumer data privacy protections, whose regulatory impact extends around the world. Instead, the US has a patchwork of state-based privacy laws. A dedicated working group was recently formed to try again on a US federal privacy law, so watch this space.

What you can do to protect your personal information and online privacy

Regardless of the laws, you can do a lot to protect yourself. First, you need to cover some basics:

Use strong, unique passwords for each of your online accounts. Store them securely in a password manager.
Enable two-factor authentication (2FA).
Don’t share sensitive details on public platforms or unsecured websites.
Keep your software and devices updated.
Be cautious of phishing emails and smishing texts, links, and attachments.
Know what to do in the event of a data breach.
Switch to a private browser that stops ads and tracking.
Use end-to-end encrypted messaging and calling, wherever possible.
Regularly review your privacy settings on platforms like Facebook, X, Instagram, and LinkedIn to limit data collection.
Limit app permissions to stop third-party services from accessing your data.
Regularly audit your online activity to remove old or inactive connections, unfollow accounts, and mute topics you’re not interested in.
Unsubscribe from unnecessary services.
Clear browsing history and cookies regularly.

If that seems like a lot, we have good news: the MySudo all-in-one privacy app handles many of those actions in one simple app—and the other apps in the MySudo family take you even further.

MySudo

MySudo all-in-one privacy app is built around the Sudo, a secure digital profile with email, phone, and virtual cards to use instead of your own. Anywhere you usually give your personal details, you simply give your Sudo details instead. Sudos let you live your life online without spam, scams, and constant surveillance.

What’s in a Sudo?

1 email address – for end-to-end encrypted emails between app users, and standard email with everyone else
1 handle* – for end-to-end encrypted messages and video, voice and group calls between app users
1 private browser – for searching the internet without ads and tracking
1 phone number (optional)* – for end-to-end encrypted messaging and video, voice and group calls between app users, and standard connections with everyone else; customizable and mutable
1 virtual card (optional)* – for protecting your personal info and your money, like a proxy for your credit or debit card or bank account

*Phone numbers and virtual cards are only available on a paid plan. Phone numbers are available for US, CA and UK only. Virtual cards are for US only. Handles are for end-to-end encrypted comms between app users.

You can have up to 9 separate Sudos in the app. With your Sudos, you can:

Protect your information. Basically, with MySudo, you decide who gets your personal information, and everyone else gets your Sudo information. 

Instead of using your own email, phone number, and credit card all over the internet, use the alternative contact details from your Sudo. So, you would use your Sudo email and phone number to open and log into accounts and contact people; use the private browser to search online without ads and tracking; and use your Sudo virtual card to pay for purchases without exposing your own credit or debit card. Virtual cards are linked to your own credit card or debit card but don’t reveal those details during transactions.

In this way, you break your data trail: when you compartmentalize your life into different Sudos, you silo your information and make it far harder for anyone to track you across sites and apps to sell or steal your personal information. And if one Sudo’s details get caught in a data breach or heavily spammed, you can ignore it, mute it, or delete it and start again.

Uses for Sudos are limited only by your imagination. Sign up for deals and discounts, book rental cars and hotel rooms, order food or sell your stuff – all without giving away your personal information. Be creative with your Sudos: Setting up a dedicated Sudo to stay safe while volunteering is a popular choice, for example.

You might like:
How MySudo lets you control who sees your personal info online and in real life
From Yelp to Lyft: 6 ways to “do life” without using your personal details
4 steps to setting up MySudo to meet your real life privacy needs

Use the end-to-end encrypted messaging and calling within each Sudo to keep your conversations private. Your Sudo phone number works like a standard number but also gives you secure connections to other MySudo users, making MySudo a great private messaging app.

You can also use your Sudo handle (instead of a phone number) for end-to-end encrypted communications with other MySudo users (invite your friends to the app!). Read: How to get 9 “second phone numbers” on one device.

Use the end-to-end encrypted email between MySudo users for secure communications. MySudo email is a popular secure email service with full send and receive support. It’s entirely separate from your personal email account and intentionally protects your personal email from spam and email-based scams. Read: 4 ways MySudo email is better than masked email.

Use the private browser within each Sudo to search the internet free of ads and trackers.

Use the virtual card within each Sudo to hide your transaction history from your bank and the others it sells your data to. (Yes, they do!)

Discover more about how MySudo lets you control who sees your personal information online and in real life. Also check out how MySudo keeps you safe on social media even in a data breach.

Once you’ve got MySudo on your side, do these 3 things:

Reclaim your information from companies that store and might sell it with the RECLAIM personal data removal tool. See who has your information, discover whether it’s been caught in a data breach, and then either ask the company to delete it or replace it with your Sudo information using MySudo. RECLAIM is part of the MySudo app family.
Encrypt your internet connection and hide your IP address with MySudo VPN, the only VPN on the market that’s actually private. MySudo VPN is the perfect companion for MySudo privacy app since they’re engineered to work seamlessly together. 
Be first in line to use the new MySudo password manager to securely store, autofill, and organize every log-in, password, and more. Coming soon!

Why should I trust MySudo?

MySudo does things differently from other apps:

We won’t ask for your email or phone number to create an account.
You don’t need a registration login or password to use MySudo. Access is protected by a key that never leaves your device.
We’ll only ask for personal information for virtual cards, and UK phone numbers, when a one-time identity verification is required.

By securing your own information, you take back control of your life, money, safety, and reputation. There’s never been a better time.

Get started today:

Download MySudo
Download RECLAIM
Download MySudo VPN

You might also like:

What constitutes personally identifiable information or PII?
14 real-life examples of personal data you definitely want to keep private
What is digital exhaust and why does it matter?
Californians, this is why you still need MySudo despite the new “Delete Act”
This is why MySudo is essential, even 10 years after Snowden
What is a data breach?
What should I do if I’ve been caught in a data breach?

The post Your Complete Guide to Online Privacy in 2025: Who is Taking Your Personal Info and How to Stop Them appeared first on Anonyome Labs.


Recognito Vision

Face Recognition Software Explained in Simple Words


Imagine walking into an airport and breezing through security just because a camera recognized your face. That’s not science fiction anymore. This is the power of face recognition software, a technology that maps your unique facial features and matches them against stored data.

From unlocking smartphones to catching criminals, this software is shaping our everyday lives. But along with convenience come questions about accuracy, privacy, and trust. Let’s break it down in simple words so you know what’s happening behind the lens.


What is Face Recognition Software?

Face recognition software is a type of biometric technology that identifies or verifies a person by analyzing facial features. Think of it as a digital fingerprint, but for your face.

The process usually starts with face matching software, which compares a captured image to existing images in a database. This allows systems to confirm if two faces belong to the same individual.

For everyday people, the most relatable example is your smartphone. Every time you unlock it by looking at the screen, the phone uses a form of this software to confirm your identity.


How Face Recognition Software Works Behind the Scenes

At first glance, it feels magical. But under the hood, face recognition is powered by math, algorithms, and a whole lot of data crunching.


1. Data Capture and Photo Face Detection Software

It starts when a camera captures your face. The photo face detection software identifies the position of your eyes, nose, mouth, and chin. These landmarks form the foundation of your facial “map.”

2. Feature Extraction with Algorithms

Next, the software measures distances between facial features, like the space between your eyes or the curve of your jawline. These measurements are converted into numerical data known as a faceprint.

3. Matching Process with Databases

Finally, the system compares this faceprint against a database of known faces. If there’s a match within the confidence threshold, the system identifies the individual.
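As a rough sketch of this matching step, the code below compares two faceprints (numeric embedding vectors) using cosine similarity and applies a confidence threshold. The vectors and the 0.6 threshold are made-up placeholders: real systems derive much longer embeddings from trained neural networks and tune thresholds empirically.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two faceprint vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical faceprints produced by an embedding model.
probe = np.array([0.12, -0.48, 0.33, 0.91])     # face captured by the camera
enrolled = np.array([0.10, -0.45, 0.30, 0.95])  # face stored in the database

THRESHOLD = 0.6  # placeholder confidence threshold
score = cosine_similarity(probe, enrolled)
print("match" if score >= THRESHOLD else "no match", round(score, 3))
```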

Best Face Recognition Software Applications in Real Life

This technology is not limited to spy movies. It’s deeply integrated into industries we interact with daily.

Here are the most common applications:

Smartphones and gadgets – Unlocking phones, securing payments, and managing app access.

Airports and border control – Faster identity checks, reducing wait times for travelers.

Healthcare – Identifying patients and protecting medical records.

Banking – Preventing fraud with stronger security measures.

Retail – Recognizing VIP customers or preventing theft.

Law enforcement – Finding missing persons or identifying suspects in crowds.

A growing use is facial recognition software for photos, where apps automatically tag friends or group images. Social media platforms rely heavily on this feature, which has made photo management much easier for users worldwide.

Comparing the Top Facial Recognition Software Options

With so many tools available, how do you know which one stands out? Independent evaluations, like the NIST Face Recognition Vendor Test, provide objective data on performance. You can also check the FRVT 1:1 performance reports for in-depth benchmarking.

Here’s a simplified comparison table of criteria that matter most:

Criteria    | Why It Matters                            | What to Look For
Accuracy    | Correctly identifying or verifying faces  | High true positive rate
Speed       | How quickly results are delivered         | Real-time or near real-time
Scalability | Handling millions of faces                | Cloud or distributed systems
Compliance  | Following laws like GDPR                  | Transparent privacy policies
Cost        | Fits your business budget                 | Flexible pricing models

This breakdown helps businesses pick the top facial recognition software for their specific needs.


Privacy and Legal Concerns with Face Recognition

Now comes the elephant in the room. As powerful as this technology is, it raises eyebrows when it comes to personal freedom.

Data storage – Where are your facial scans stored, and for how long?

Consent – Are you being recognized without agreeing to it?

Misuse – Could governments or companies abuse this technology for surveillance?

In Europe, these questions tie directly into GDPR compliance. The rules emphasize transparency, data minimization, and user rights. If an organization mishandles face data, the penalties can be steep.

A 2021 study found that 56 percent of people worry about misuse of facial recognition by authorities. This shows that while the tech is impressive, trust remains fragile.


Open Source Face Recognition Options for Developers

Not all solutions are locked behind expensive paywalls. Developers and small businesses often turn to open-source face recognition tools. These options allow for flexibility, customization, and cost savings.

Advantages of open-source tools include:

Free or low-cost access to powerful libraries.

Large communities that support development.

Ability to customize for unique projects.

Faster innovation through collaboration.

One notable resource is the Recognito Vision GitHub, where developers can explore codebases, contribute, and experiment with new applications.


Future Trends in Face Recognition Technology

The pace of innovation isn’t slowing down. Researchers are refining algorithms to improve speed and reduce bias.

Future trends to watch:

Ethical AI – Systems that reduce bias across race and gender.

Edge computing – Processing data on devices instead of servers for faster results.

Integration with IoT – Smart cities that use recognition for traffic, safety, and efficiency.

Privacy-first models – More tools will adopt privacy-by-design frameworks.

Experts predict that within the next decade, face recognition will be as common as passwords are today, though hopefully far more secure.


Conclusion

Face recognition software is no longer futuristic tech; it’s a reality shaping security, convenience, and even social interactions. From photo face detection software to face matching software, its reach is growing rapidly. Yet the real challenge is balancing innovation with privacy. Companies that master this balance will win trust in the long run.

And speaking of innovation, Recognito is one brand pushing these boundaries with responsible and practical applications.


Frequently Asked Questions


What is the difference between face detection and face recognition?

Face detection finds and locates a face in an image, while recognition goes a step further by identifying or verifying who that person is.

Is face recognition software always accurate?

No, accuracy depends on the algorithms, quality of data, and lighting conditions. According to NIST tests, top systems can reach over 99 percent accuracy in controlled settings.

Can face recognition software work with old photos?

Yes, many systems can analyze older images. However, accuracy may decrease if the photo quality is low or the person has aged significantly.

Is open source face recognition safe to use?

Yes, but it depends on how it’s implemented. Open-source tools are flexible, but developers must ensure strong security practices when handling sensitive data.

How does face recognition affect privacy rights?

It raises major concerns about surveillance and consent. Laws like GDPR in Europe require companies to handle facial data transparently and responsibly.

Thursday, 25. September 2025

Extrimian

How Extrimian Drives Digital Trust in Healthcare

Why are identity and data critical in healthcare?

Healthcare —both public and private— faces a structural challenge: managing massive volumes of sensitive data from patients, professionals, and institutions while ensuring accuracy, security, and transparency. So how does Extrimian drive digital trust in healthcare?

Today’s systems are fragmented. Patient admissions, authorizations, professional validations, or organ transplant waiting lists still rely on manual processes or disconnected databases. The consequences are severe:

Excessive bureaucracy → long delays for authorizations, transplants, or referrals.

Hidden costs → thousands of hours in manual administrative work.

Fraud risks → falsified medical degrees or manipulated patient records.

Social distrust → patients unsure if they are on the correct waiting list; doctors lacking visibility into processes.

In a sector where every minute can make the difference between life and death, the question becomes urgent: How can healthcare systems modernize identity and data management without sacrificing security or trust?

What does Extrimian propose to solve these challenges?

Extrimian provides an ecosystem of Verifiable Credentials (VCs) and digital identity tools enabling hospitals, clinics, insurers, and public agencies to:

Issue and validate credentials in seconds, instead of manual processes taking days.

Guarantee advanced security, with tamper-proof, instantly verifiable records.

Ensure compliance with international standards (W3C, DIF, GDPR, HIPAA).

Optimize costs and resources, cutting bureaucracy and human errors.

Improve patient and professional experience, simplifying access and workflows.

All built on principles of privacy by design, interoperability, and open standards.

How does self-sovereign identity (SSI) apply to healthcare?

Self-Sovereign Identity (SSI) places individuals at the center of control over their personal data.

For patients: medical history, diagnoses, or lab results can be issued as portable, verifiable credentials.

For medical professionals: degrees, licenses, and certifications are turned into tamper-proof VCs that any hospital can instantly verify.

For institutions: each credential is validated without intermediaries and easily integrated into existing hospital systems.

SSI does not replace health systems; it strengthens them with a new layer of trust.
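As a hedged illustration of how a tamper-proof credential check can work without intermediaries, the sketch below has an issuing body sign a license credential once; any hospital holding the issuer’s public key can then verify it offline, with no call back to the issuing body. The names and fields are invented for illustration; real deployments use W3C Verifiable Credential formats rather than this raw-signature toy.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The medical board issues a signed license credential (illustrative fields).
board_key = Ed25519PrivateKey.generate()
license_vc = json.dumps({"holder": "did:example:dr-garcia",
                         "license": "cardiology", "valid_until": "2027-01-01"},
                        sort_keys=True).encode()
signature = board_key.sign(license_vc)

# Any hospital holding the board's public key verifies instantly, offline.
try:
    board_key.public_key().verify(signature, license_vc)
    print("license authentic")
except InvalidSignature:
    print("tampered or forged")
```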

Case Study: How Extrimian helped INCUCAI improve Argentina’s transplant system

The Instituto Nacional Central Único Coordinador de Ablación e Implante (INCUCAI) faced a long-standing challenge: managing the national emergency transplant waiting list.

The problem

Slow processes in organ allocation.

Limited transparency in prioritization.

Patients and families receiving little real-time information.

The Extrimian implementation

Extrimian introduced verifiable credentials to build traceability and trust into the national list:

Every update in the list is issued as a verifiable credential.

Patients and doctors can instantly verify position and status.

All changes are validated securely, without risks of tampering.

The results

Significant time reduction in allocation and updates.

Full transparency for patients, professionals, and regulators.

Improved patient experience through clear communication.

Strengthened trust in one of the most sensitive areas of healthcare.

This pioneering use case demonstrated how Extrimian’s technology can save lives by enhancing transparency and efficiency in public healthcare.

More about this case study: Extrimian & INCUCAI

What other use cases does Extrimian enable in healthcare?

1. Medical professional identity verification

Problem: manual validation of degrees and licenses.
Solution: verifiable credentials that confirm authenticity instantly, eliminating fraud risks.

2. Verifiable medical records

Problem: fragmented medical histories between hospitals, insurers, and regions.
Solution: interoperable VCs that patients can carry and present anywhere, securely and instantly.

3. Smart access to healthcare services

Secure login for hospital web portals.

QR- and VC-based access control for labs, operating rooms, and medical events.

Automated attendance for in-person and virtual consultations.

4. Patient benefit networks

VCs as digital passes for transportation or pharmacy discounts.

Integration with insurance, pharmacies, and wellness services.

5. Academic and professional certifications

Credentials for courses, residencies, and specializations issued as VCs.

Streamlined hiring and international mobility for healthcare professionals.

What tangible benefits do healthcare institutions gain?

Institutional prestige: issuing VCs with the institution’s brand boosts trust and modernity.

Advanced security: tamper-proof credentials reduce fraud.

Operational efficiency: automated processes cut costs and errors.

Enhanced patient experience: simpler, faster, user-centric interactions.

Strategic partnerships: connection with fintech, insurance, and other key sectors.

Global compliance: alignment with W3C and DIF standards ensures global acceptance.

How is Extrimian implemented in healthcare institutions?

Step 1: Personalized demo

Showcasing practical use cases like patient admission or credential verification.

Step 2: Modular implementation

Start with one specific case (e.g., issuing medical certificates) and scale up to a full ecosystem.

Step 3: Continuous support

Training workshops and Extrimian Academy.

Ongoing technical support.

ROI measurement with clear impact metrics.

What is the ROI of verifiable credentials in healthcare?

Administrative savings: up to 60% time reduction in credential verification.

Fraud reduction: fewer legal risks and malpractice cases.

Efficiency gains: processes that once took days now take seconds.

Intangible value: reinforced patient trust and institutional reputation.

For a hospital serving 10,000 patients annually, the potential savings amount to hundreds of thousands of dollars, alongside a substantial boost in credibility.

Conclusion: towards a more trusted, efficient, and human healthcare system

Healthcare needs trust, agility, and security. With Extrimian, identity verification and data management stop being a problem and become a competitive advantage.

The INCUCAI case proves it is possible to reduce delays, increase transparency, and improve patient and professional experiences. And this is just the beginning: from private hospitals to national public networks, verifiable credentials can raise the standard of trust in healthcare worldwide.

👉 Want to explore how these benefits could work in your institution?
Schedule a personalized demo with the Extrimian team today.

The post How Extrimian Drives Digital Trust in Healthcare first appeared on Extrimian.


Holochain

How Does Desirable Social Coherence Evolve?

Reflections from the DWeb Seminar

In August I had the privilege of participating in the DWeb Seminar 2025, an intimate gathering designed to “map the current DWeb technological landscape, learn from each other, and define the challenges ahead”. For those unfamiliar with the event, Wendy Hanamura’s excellent recap captures the spirit and outcomes beautifully. As part of the event, we were invited to offer a 15-minute “input talk” to the other participants. I chose to share a fundamental question that has driven Holochain from its inception – and explore how this question shapes not just our technology, but our entire approach to building decentralized systems.

The Core Question: How Does Desirable Social Coherence Evolve?

Everything we do at Holochain (and the projects that I've been nurturing through Lightningrod Labs, like Moss and Acorn) stems from this central inquiry. But what do I mean by “desirable social coherence” and why does it matter? 

You can think of social coherence as a group’s long-term stability. Like most things, this property exists along a gradient: some social bodies have more coherence than others, depending on their capacity to respond and adapt to environmental changes as a result of the patterns, practices and organizing principles they operate by. But therein lies the rub. Some of these patterns provide lots of coherence, but they may not be desirable or pleasant for the individuals taking part in them! An authoritarian regime is no fun for almost anybody involved, but it does have a real degree of stability. My fundamental belief, however, is that not only is it possible to evolve these patterns and processes in directions that participants will find pleasant and desirable, but that doing so actually yields the most long-term stability, because participants will, by that very fact, not contribute to destabilizing it.

The Challenge: Current digital systems scale through centralization and intermediation of critical social functions. Unfortunately, this creates undesirable forms of social coherence – power imbalances that enable both intentional and unintentional abuse. When a few entities control the platforms where billions interact, we may get coherence, but it's often extractive rather than generative. Furthermore, our current systems are difficult to evolve precisely because of their centralization and the interests that want to keep them that way to maintain power.

The Opportunity: Decentralized technology can create substrates for evolvable social coherence – essentially, DNA for social organisms. Instead of rigid, centralized structures, we can build infrastructure that enables new forms of social fabric to emerge at multiple scales, yielding increasing collective intelligence.

A key insight here is that there is no single “correct” form of social coherence. What works is contextual, diverse across time, space, and scale. What we need is infrastructure that enables continuous evolution and discovery – balancing stability with emergence. 

How This Shapes Our Work at Holochain

This framework isn’t abstract philosophy - it directly informs every architectural decision we make. When building technology to support evolvable social coherence, several principles become essential:

Engagement Spaces as Building Blocks

Human social fabric is built out of layers of interacting and layered “engagement spaces” – essentially social contracts with defined rules. We need infrastructure that makes it easy to create, use, and compose these spaces. The current web may have “solved for” decentralization of publishing - anyone can create a website or blog without permission. But the places where people actually interact and engage with each other (social media platforms, forums, collaborative tools, even finance and accounting tools) remain under intermediary-controlled web servers. Our approach requires protocols where neither the data nor the rules of the group interaction are held by intermediaries.

Agency AND Accountability, Mutually Interwoven

Individuals need genuine agency through their technology - the ability to participate in multiple spaces, move between them, and take their data with them. But this autonomy must be paired with accountability within the contexts where they participate. This tension between empowerment and responsibility is productive, not problematic.

Uncapturable/Unenclosable Carriers: The infrastructure itself must be immune to capture - meaning no single entity can gain enough control to dictate rules, extract value, or shut down the system. We’ve seen far too many examples of infrastructure capture: governments shutting down internet services during protests, platform owners changing terms to benefit their shareholders, or cloud providers being pressured to deplatform users. Even when specific engagement spaces have their own defined rules, the underlying “carrier” of those interactions must remain decentralized. This enables autonomous group formation without intermediation - groups can organize however they choose without worrying that their technological foundation can be pulled out from under them.

Local State, Global Visibility: Rather than forcing artificial global consensus (like blockchains do), we recognize that state is inherently local but can achieve consistent global visibility if nodes share data. Operating this way eliminates unnecessary coordination bottlenecks while maintaining system coherence.

Architectural and Design Consequences

The principles stated above have very concrete design and implementation consequences.  For those technically familiar with Holochain you already know how they show up in the design, but here I list some of the key aspects along with pointers to documentation that describe each consequence in more detail.

1. Start with a capacity to define and create a known “engagement space” – the “rules of a game”. This consists of the hash of a set of data-types and relations plus deterministic validation rules for creation of that data. In Holochain we call this the DNA.
2. Allow agents to be the authoritative source of all data, i.e. agents “make plays” according to the rules of the DNA. Ensure that when this data is shared, it has intrinsic data integrity – it’s a cryptographically signed append-only ledger for that source (in Holochain we call this the Source Chain) – and ensure that it is identifiable as part of an engagement space by having the first entry in the chain be the agent’s signing of the space’s hash. This is also “I consent to play this game”.
3. Share data to an eventually consistent Graphing Distributed Hash Table (DHT), in which other agents validate that all shared data follows the rules of the game.
4. Ensure that agents who don’t follow the rules can be blocked/ignored. This prevents capture.
5. Allow for “bridge” calls between engagement spaces at the agentic locus (i.e. not at the group level) for composability of spaces. This ensures composability, autonomy, and accountability.

There are of course more details in the design, but these are some of the key ones that fall out of the principles.
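To make the source-chain idea above more concrete, here is a toy sketch (not Holochain’s actual code) of a per-agent, hash-linked, signed append-only log whose first entry commits to the hash of the engagement space’s rules. It assumes the Python `cryptography` library for Ed25519 signing.

```python
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class SourceChain:
    """Toy per-agent append-only ledger: each entry is signed and linked
    to the previous entry's hash, so any tampering is detectable."""
    def __init__(self, dna_hash: str):
        self.key = Ed25519PrivateKey.generate()
        self.entries = []
        # First entry: the agent signs the space's hash
        # ("I consent to play this game").
        self.append({"type": "join", "dna": dna_hash})

    def append(self, payload: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else None
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True).encode()
        entry = {"prev": prev, "payload": payload,
                 "hash": sha256(body),
                 "sig": self.key.sign(body).hex()}
        self.entries.append(entry)
        return entry

chain = SourceChain(dna_hash=sha256(b"rules-of-the-game"))
chain.append({"type": "play", "move": "create_post"})
print(len(chain.entries), chain.entries[-1]["hash"][:12])
```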

Resonance at the DWeb Seminar

What struck me most about the seminar was how much of our framework resonated with challenges other participants were grappling with, even when they approached them from different angles. I would even say that the Seminar itself was fundamentally an example of this thinking. It was a carefully designed set of patterns and processes for a literal engagement space (this time physical instead of digital) whose purpose was to increase the social coherence of players in the p2p domain. These patterns included not only the processes of the input talks, the unconference sessions, and the commitment to produce a collaborative write-up, but also the relational parts of cooking together and sharing non-work time together. All of this together created desirable social coherence. And it’s this pattern that we are all trying to create powerful affordances for in the digital world.

Some further examples: During the unconference sessions, conversations kept circling back to fundamental questions about coordination, autonomy, and accountability. 

When we discussed "UI Patterns for Peer-to-peer," I saw it as asking: how do we make decentralized engagement spaces feel natural and empowering to users? When we debated collaborative data model requirements, I saw it as exploring: how do we maintain coherence across distributed participants without sacrificing agency?

When Rae McKelvey shared her focus on "purpose-built apps" that solve real social problems, to me that aligned perfectly with the engagement space concept—recognizing that different contexts require different rules and structures.

At the technical level David Thompson's work on object capabilities and Duke Dorje’s work on recryption and identity both live into the same autonomy-with-accountability tension we see as central to social coherence.  The ever-present discussions about how best to implement CRDTs (Conflict-free Replicated Data Types, of which Holochain’s DHT is an example) revealed the shared underlying assumption: that meaningful coordination really is possible without central control, that local autonomy and global coherence can coexist, and most profoundly that the infrastructure we build shapes the social possibilities it enables.

But if everything resonated so well, what’s the big deal?

Why This Matters for the Decentralized Ecosystem

Probably the most common complaint I’ve heard over the years from folks who see the astounding potential of decentralized infrastructure goes something like this:  “There are so many different p2p solutions, and teams that seem to be working in isolation, why can’t you just agree on a single solution and work together?”  On the surface, this sounds like a reasonable complaint, but the lens of coherence helps understand why “working together” is actually such a hard problem to solve.  

Recalling from the start of this article: what creates coherence are the patterns, practices and organizing principles of a group. Just because groups have the same goals and want the same outcomes does not mean that their patterns, practices and organizing principles are similar or compatible. In fact, they almost always aren’t. But this relates to why the DWeb Seminar was so important. It successfully operated according to a higher-order organizing principle that created an engagement space precisely for the purpose of surfacing the patterns, practices and organizing principles folks in the broad DWeb community were operating by, and making them visible.

So to me this was an example of exactly the underlying principles that we’ve been embedding in Holochain’s architecture from the start.

So, while the decentralized web movement often focuses on technical capabilities – faster consensus, better cryptography, more efficient protocols – we are now seeing the community beginning to treat these as means, not ends. The higher-level question remains: what kinds of social possibilities can these technologies enable?

This approach enables us to build towards greater “commons enabling infrastructure” - technology that strengthens shared resources and collective capacity rather than extracting value. The creation of digital, unenclosable fabric of engagement spaces is central to this goal. Instead of platforms that capture value from user interactions, we can build infrastructure that enables communities to create and govern their own spaces, according to their own values. 

When the decentralized ecosystem embraces this approach, many new possibilities emerge:

Interoperability with Purpose: We can more easily build bridges between systems that share compatible social intentions. A climate action network could seamlessly share data and coordinate with a local food co-op using a different protocol, supporting community resilience initiatives that address both environmental and food security challenges, while using mutual-credit currencies backed by the productive capacity of the local farms supplying the co-op.

Governance that Evolves: We can build infrastructure that enables continuous governance innovation rather than trying to solve governance once and for all. A neighborhood mutual aid group could start with simple coordination tools, then gradually evolve more sophisticated decision-making processes as their needs change, without having to migrate to entirely new platforms.

Network Effects that Serve Users: We can create composable ecosystems where network effects benefit participants rather than extracting from them. As more people join a decentralized social network, the benefits – better content discovery, richer discussions, stronger community bonds – flow to the participants themselves rather than to a platform owner’s advertising revenue.

The Path Forward

The grand challenge of decentralized software is ensuring it actually delivers on evolvable social coherence. This means building infrastructure that serves the flourishing of people and planet rather than extracting from it. 

At Holochain, we’re committed to this path, not just in our technology choices, but in how we organize ourselves, engage with our community, and collaborate with other projects. The conversations at the DWeb Seminar reinforced that we’re not alone in this commitment. 

The adjacent possibility that Wendy described in her recap isn’t just about new technical capabilities – it’s about new forms of social organization that those capabilities make possible. That’s both a tremendous responsibility and an extraordinary opportunity for all who choose to walk this path.


Veracity trust Network

Are AI Agents a threat to all industries or just another digital tool?


AI Agents are a growing influence on how we do business online and it pays to be aware of how they work – and the potential risks they expose.

Also known as Agentic AI, they are defined as autonomous systems that perceive, make decisions, and take action to achieve specific goals within an environment.

The post Are AI Agents a threat to all industries or just another digital tool? appeared first on Veracity Trust Network.


FastID

4 Tips for Developers for Using Fastly’s Sustainability Dashboard

Track the real-world emissions of your Fastly workloads. This blog shares practical tips on using the Sustainability dashboard for greener, faster code.

Wednesday, 24. September 2025

liminal (was OWI)

The Silent Killer in Third-Party Risk: Why Behavioral Red Flags Matter More Than Checklists

The hidden risks behind vendor relationships

It starts innocently enough. A supplier begins missing deadlines. A long-trusted partner suddenly resists contract changes. Payments arrive late, documentation lags, and small deviations creep into everyday interactions. These aren’t just operational hiccups—they’re behavioral red flags.

For years, third-party risk management (TPRM) relied on static compliance checklists: audits, certifications, and one-off questionnaires. But today’s risk environment has outpaced that model. Subtle engagement shifts often signal vendor instability—or even fraud—well before a failed audit or regulatory breach brings it to light. The stakes are growing. A single vendor misstep can trigger multimillion-dollar losses, regulatory scrutiny, and reputational fallout. In 2025, the risk that matters most isn’t what the audit catches—it’s what it misses.

What Is Third-Party Risk Management?

Third-party risk management (TPRM) is the discipline of identifying, assessing, and mitigating risks that arise from vendors, suppliers, and business partners. It goes beyond contract compliance to cover financial, cybersecurity, operational, and reputational exposures.

Why compliance checklists fall short

Traditional compliance frameworks provide assurance, but they’re backward-looking. By the time an issue surfaces in an audit, the damage may already be done.

Complex risks are growing: According to Liminal’s Market & Buyer’s Guide for TPRM, 33% of organizations cite complexity of risks as the top barrier to effectiveness—outranking resources or legacy systems.
Budgets are shifting: The same research shows that two years ago, 77% of businesses devoted 10% or less of their budgets to TPRM. Today, 84% say funding is sufficient—a 42% improvement.
Maturity remains low: Despite rising investment, only 9% of organizations have achieved “advanced” TPRM maturity, underscoring how far the market still has to go.

Static compliance isn’t enough when risk signals emerge daily in behavior, process, and relationships.

The market is moving fast

The risk isn’t just theoretical—the market for third-party risk management is expanding quickly. Liminal’s research shows that while sentiment on budget sufficiency has improved by 42% in two years, only 9% of organizations have achieved advanced maturity.

It’s a sign that boards and executives see TPRM as too important to ignore—but most are still playing catch-up. As Gartner notes, organizations that fail to modernize vendor risk programs face increasing exposure across cybersecurity, compliance, and operational resilience.

[Chart: Market & Buyer’s Guide for Third-Party Risk Management 2025, p.19]

From checklists to behavioral red flags

Behavioral red flags—missed SLAs, contract resistance, data delivery delays, unusual communication shifts—are leading indicators of risk. Unlike static compliance, they reveal real-time vulnerabilities and allow earlier intervention. Behavioral risk monitoring is the practice of tracking deviations in how vendors operate and interact that can signal early signs of instability or misconduct.

The most effective programs are:

Embedding continuous monitoring rather than point-in-time reviews.
Integrating behavioral insights into enterprise-wide dashboards.
Automating alerts when engagement patterns deviate from norms (see the sketch below).
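As a minimal sketch of that alerting idea, assume a single behavioral metric per vendor, such as a monthly on-time-delivery rate, and flag months that deviate sharply from the vendor’s own baseline. The metric and the two-sigma threshold are illustrative assumptions, not Liminal’s methodology.

```python
from statistics import mean, stdev

def behavioral_alerts(history: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of observations deviating > threshold sigma from baseline."""
    mu, sigma = mean(history), stdev(history)
    return [i for i, x in enumerate(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Hypothetical monthly on-time-delivery rates for one vendor.
on_time = [0.97, 0.96, 0.98, 0.95, 0.97, 0.71]  # last month slips badly
print(behavioral_alerts(on_time))  # -> [5]
```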

This shift mirrors risk management trends across Data Access Control and AI Data Governance—executives no longer want box-checking. They want predictive visibility into the risks that can derail operations, undermine vendor resilience, and erode supplier trust.

[Chart: Market & Buyer’s Guide for Third-Party Risk Management 2025, p.18]

What executives are demanding now

For boards and CISOs, vendor risk has become strategic infrastructure: as vital to credibility as financial reporting or data security. The new priorities are clear:

Continuous monitoring: Liminal’s Regulatory TPRM Link Index shows that 63% of buyers rank this as their top priority.
Automation at scale: 42% cite automation of TPRM activities as their top optimization goal.
Data quality: Cybersecurity TPRM buyers emphasize accuracy (89%) and monitoring (85%) as table stakes, guided by emerging frameworks such as NIST’s Cybersecurity Framework.
Cross-functional orchestration: Operational buyers demand interoperability across compliance, procurement, and security.

These shifts signal the end of siloed vendor risk teams. The winners will be those who connect behavioral risk detection into broader enterprise resilience strategies.

The executive reality check

Boards no longer accept “checklist compliance” as proof of safety. Regulators and investors expect real-time assurance. Yet with only 9% of organizations achieving advanced TPRM maturity, most enterprises remain exposed.

The Wall Street Journal recently reported on how supply chain disruptions and vendor failures are forcing boards to elevate TPRM to a core resilience strategy—not just a compliance function. It’s a signal that the market is moving fast, and expectations are rising. Regulatory frameworks are evolving in parallel. The SEC now requires detailed cyber disclosures, the EU GDPR continues to impose significant fines, and NIST provides baseline guidance for organizations modernizing their risk programs.

By acting on behavioral red flags, enterprises strengthen resilience and trust. Ignoring them leaves blind spots that regulators and investors won’t overlook.

Turning behavioral insight into advantage

Behavioral risk monitoring isn’t just a compliance upgrade. It’s a competitive advantage. By weaving continuous monitoring and behavioral insights into third-party risk management, executives can:

Protect against operational and financial losses.
Demonstrate resilience to regulators.
Build stronger trust signals with investors, customers, and suppliers.

👉 Dive deeper in the Market & Buyer’s Guide for Third-Party Risk Management and explore the Cybersecurity, Operational, and Regulatory Link Indexes to see how leading enterprises are raising the bar.

👉 Watch our Webinar on TPRM Strategy & Stronger Risk Management to hear how leaders are operationalizing these shifts in real time.

The post The Silent Killer in Third-Party Risk: Why Behavioral Red Flags Matter More Than Checklists appeared first on Liminal.co.


Indicio

How decentralized identity delivers next generation authentication and fraud prevention

Decentralized identity and Verifiable Credentials remove the vulnerabilities driving generative-AI, social engineering, and synthetic identity fraud at a significantly lower cost than legacy or alternative solutions. How? The technology allows you to just bypass these problems. With Indicio Proven, you get authentication and fraud prevention in a single, affordable, globally interoperable platform.

By Trevor Butterworth

The new report by Liminal — The Convergence of Authentication and Fraud Prevention — makes for stark reading.

Fraud losses in the U.S. alone are projected to double in just three years to $63.9 billion, with account takeover fraud accounting for half. Seventy-one percent of respondents to their survey of 200 buyers in retail, ecommerce, financial services and tech believe current methods of authentication may be insufficient to thwart generative-AI social engineering attacks. And almost two-thirds worry that additional security layers will add unacceptable friction to customer and user experience.

One could say the problem is that the technology powering fraud is more powerful than the technology powering authentication and fraud prevention. And the latter’s weakness is compounded by authentication and fraud prevention being two separate processes, often managed by multiple different vendors.

The solution is more of everything — more layers of defense, multi-level signals analysis, more authentication factors, and good AI to battle the bad AI. All of which translates into more complexity, friction, and cost. No surprise, Liminal also reports increasing budgets for authentication, account takeover protection, and social engineering scam prevention, and it projects these budgets will continue increasing year-on-year.

Meanwhile, customers and consumers — many of whom are digital natives — expect seamless, frictionless interaction and not painful multifactor authentication. As a result, organizations face brutal tradeoffs: cater to digital behavior and increase risk, or decrease risk but make customers pay in friction and risk losing them.

Fix the fundamental problem

There’s a reason the technology powering fraud has the upper hand: The legacy systems organizations rely on — username/password, stored biometrics, centralized databases filled with personal data — are all vulnerabilities easily exploited by brute-force AI attacks, synthetic identity fraud, and deepfakes.

Remove these vulnerabilities and you remove these problems. That’s what decentralized identity does. It removes the need for usernames, passwords, and the centralized storage of personal data needed to manage identity and access.

That’s what Indicio’s customers are doing — sweeping away the digital structures and processes that are the cause of all these problems.

We replace this with Verifiable Credentials. They’re a simple way for each party in a digital interaction — customers, organizations, employees, devices, virtual assistants — to authenticate the other in a way that can’t be phished, hacked, or faked; and we do this authentication before any data is shared.

Verifiable Credentials reduce fraud by enabling digital credentials to be bound to individuals in a way that is cryptographically tamper-proof, and which can incorporate biometrics that have been authenticated. This closes off attack vectors like phishing, synthetic identities, and — with an authenticated biometric in a Verifiable Credential — deepfakes.

A person with an authenticated biometric in a Verifiable Credential has a portable digital proof of themselves that can be instantly corroborated against a liveness check.
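One hedged sketch of how such a check might look: the verifier first confirms the issuer’s signature over the credential, then corroborates a fresh liveness capture against the biometric reference bound into it. The field names are invented, and the exact-hash comparison stands in for real biometric matching, which is fuzzy; this is not Indicio Proven’s implementation.

```python
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

# Issuer binds a hash of the holder's authenticated biometric template
# into the credential, then signs the whole payload (illustrative fields).
credential = {"subject": "did:example:alice",
              "biometric_hash": hashlib.sha256(b"alice-face-template").hexdigest()}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

def verify(cred: dict, sig: bytes, live_capture: bytes) -> bool:
    # 1. Tamper check: raises InvalidSignature if the payload was altered.
    issuer_key.public_key().verify(sig, json.dumps(cred, sort_keys=True).encode())
    # 2. Liveness corroboration (placeholder: real matching is fuzzy, not exact).
    return cred["biometric_hash"] == hashlib.sha256(live_capture).hexdigest()

print(verify(credential, signature, b"alice-face-template"))  # True
```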

A decentralized identity architecture changes everything. It integrates authentication and fraud prevention, creates unified digital identities, and enables data to be fully portable, trusted and acted on immediately — without friction to businesses or customers.

Just as important, it’s significantly less expensive than legacy or alternative solutions; it can be layered into existing systems, meaning that it’s a solution that, depending on the scope, can be implemented in days or weeks.

Don’t take our word for it: see what our customers are doing

Indicio and its customers — enterprises, financial services, governments — have had enough of the same old same old. We and they are using Verifiable Credentials to cross borders, onboard customers, and authenticate account access — all seamlessly, with the highest level of digital identity assurance.

It might be hard to believe that a solution could be that simple — that you can just remove the core vulnerabilities fueling the surge in identity-related fraud and not have to rip and replace your entire authentication infrastructure.

Contact us to see a demo — and discover how Indicio Proven is being used as a single authentication and fraud prevention system to create seamless and trusted digital interaction.

The post How decentralized identity delivers next generation authentication and fraud prevention appeared first on Indicio.


FastID

Fastly’s Pillars of Resilience: Building a More Robust Internet

Discover Fastly's Pillars of Resilience: unwavering availability, minimized latency, and disruption resistance for a robust internet experience with our global network.

Tuesday, 23. September 2025

IDnow

Why banks need modular KYC solutions to future-proof compliance: Insights from Finologee’s Carlo Maragliano.

We sat down with Carlo Maragliano from digital platform Finologee to explore how financial institutions are getting ready for the evolving regulatory landscape and how they use technology to accelerate their go-to-market while staying audit-ready and resilient. 

As new regulations such as eIDAS 2.0, AMLR and DORA reshape the compliance landscape across Europe, financial institutions are under pressure to future-proof their onboarding and KYC processes.

Luxembourg-based Finologee, a leading digital platform operator for the financial industry, is helping banks and payment institutions meet regulatory challenges through its KYC Manager, an orchestration layer that combines flexibility with embedded regulatory readiness. By integrating IDnow’s automated identity verification technology, Finologee enables its clients to accelerate go-to-market, simplify compliance and tailor onboarding journeys across regions. With Carlo Maragliano, Head of Delivery and Customer Success at Finologee, we discussed how technology, automation and orchestration are transforming digital identity at scale.

Navigating the evolving regulatory landscape

Regulations such as eIDAS 2.0, AMLD6 and DORA are coming into force soon. How are the changes brought about by these regulations influencing you and your banking clients’ KYC and digital onboarding priorities?

Heightened regulatory complexity is pushing banks to adopt more modular and future-proof KYC solutions. These upcoming regulations are significantly reshaping compliance priorities for financial institutions. For example, eIDAS 2.0 introduces Qualified Electronic Identity (QeID), which makes interoperability and eID support essential. AMLD6 expands criminal liability and due diligence obligations, which increases the need for granular audit trails and automated, risk-based workflows. And with DORA, operational resilience becomes a key focus, requiring stronger vendor oversight, digital continuity and secure third-party integrations. Finologee’s orchestration layer, combined with IDnow’s embedded identity verification, equips institutions to meet these regulatory shifts without having to re-engineer their core systems. 

IDnow’s Automated Identity Verification 

IDnow provides a fully automated identity verification solution that integrates seamlessly with Finologee’s KYC Manager. It supports document authentication from more than 215 international issuing authorities, uses AI-driven checks and biometric liveness detection and helps banks and other regulated industries to reduce onboarding times while ensuring full regulatory compliance. This technology enables companies to verify the identities of their users seamlessly and securely.

Ensuring adaptability in a dynamic regulatory environment

How do you ensure that your solutions remain adaptable as regulations and customer expectations continue to evolve?

We’ve built everything on an API-first modular architecture that enables quick adaptation to regulatory shifts. On top of that, Finologee continuously engages with clients to align roadmap priorities with industry changes. The platform is also fully customisable and configurable, so institutions can tailor onboarding flows, verification steps and compliance logic to specific regulatory requirements, customer segments and regional markets without extensive development effort.

Did you know? Over 55% of consumers are more likely to apply for services if the onboarding process is entirely digital, including online identity verification.

The role of automation in scaling operations

What role does automation play in helping banks scale their operations without sacrificing security or compliance?

Automation is really important for all businesses. It reduces dependency on manual reviews, thus lowering both cost and error rates. Automated decisioning also helps apply consistent compliance logic. With real-time workflows, customers can be onboarded faster without sacrificing auditability, while compliance teams gain transparency and control through dashboards and exception handling flows. 

What challenges do financial institutions face when trying to scale their compliance and onboarding processes across multiple markets and how does KYC Manager help overcome these hurdles?

Scaling across markets brings several hurdles. Institutions face varying regulatory requirements across countries, different acceptable ID document types and verification standards, and operational silos that slow down onboarding harmonisation. With KYC Manager, we address these challenges through a centralised orchestration layer with localised compliance modules, document coverage across 157 countries enabled by IDnow and a flexible flow builder that allows journeys to be adapted by region or customer type.

Did you know? Banks that increased end-to-end KYC-process automation by 20% saw a triple benefit: quality-assurance scores increased by 13%, customer experience improved as the number of customer outreaches per case fell by 18%, and productivity rose as the number of cases processed per month increased by 48%.

In what ways does the integration between Finologee’s KYC Manager and IDnow’s automated identity verification technology enable faster go-to-market for banks and other financial institutions? Can you share a concrete example?

Because identity verification is pre-integrated, deployment timelines are shortened considerably. This means clients such as banks or other financial institutions can launch new services or expand to new markets faster thanks to embedded regulatory readiness.  

A concrete example: the IDnow verification flow is especially useful when identifying ultimate beneficial owners (UBOs) and persons with significant control (PSCs), that is, the people who ultimately own or control a company and are legally required to be identified during onboarding. If the person responsible for the dossier doesn’t have their IDs, they can trigger an SMS to the phone number of the UBO or PSC, who can then complete the verification directly.

Scaling across markets and customization

How do you support financial institutions in customizing onboarding journeys for different regions or customer segments?

The Finologee KYC platform enables journey segmentation by geography, product line, or risk profile. For instance, workflow logic can automatically route high-risk users to manual review or enhanced due diligence paths.

Looking ahead, what trends do you anticipate will most impact the way banks approach digital identity and compliance at scale?

We see AI and biometrics becoming standard components of fraud prevention. There will also be greater emphasis on accessibility, inclusivity and cross-device onboarding. And more broadly, banks and other financial institutions will be looking to reduce fragmentation through orchestration platforms.

On a personal level, what excites you most about working at the intersection of technology, compliance and financial services? Is there a particular moment or project that made you feel especially proud of the impact you’re making?

For me, it’s seeing how all the pieces come together in practice. One moment that really stood out was supporting a client launch in Luxembourg under tight regulatory deadlines. It was a great example of how the platform can unlock speed, compliance, and user experience all at once: we successfully implemented KYC Manager within just three months, enabling a fully digital account opening process with no paper or printing requirements. On average, our clients see the submission process reduced to under 10 minutes and conversion rates doubled compared to traditional KYC remediation processes, while substantially lowering human error and workload.

Interested in more from our customer interviews? Check out:

Docusign’s Managing Director DACH, Kai Stuebane, sat down with us to discuss how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape.

DGGS’s CEO, Florian Werner, talked to us about how strict regulatory requirements are shaping online gaming in Germany and what it’s like to be the first brand to receive a national slot licence.

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


Ockto

Column: AI is a brilliant teenager


For the HypoVak special of InFinance, Gert Vasse wrote the following column:
AI quickly starts to feel like a BFF (best friends forever). It is popping up at a rapid pace in all kinds of handy applications. Recently, for example, Google has been adding a handy AI overview to many search queries. That feature saves you a lot of searching and provides a good summary, including source references.


German and French cybersecurity authorities: watch out for AI fraud in digital identification

Reliable and secure customer identification is a core requirement in the financial sector for complying with laws and regulations (Wwft, AML5, eIDAS, AVG). With the introduction of ID wallets and eIDAS 2.0 in 2028/2029, the government will provide a structural solution for secure digital identification.



Verified source data: better risk assessment with less manual work

Incomplete files, missing documents, long turnaround times. In many credit processes, collecting customer data is still a time-consuming step. Multiple contact moments are needed, the data supplied is unclear, and there is a risk of errors or fraud.



Spherical Cow Consulting

Pirates, Librarians, and Standards Development


“With the right motivation, even I will write a blog post on a dare. And the dare I got today was to write a post about what librarians and pirate captains have in common, and why it matters for standards development.”

(If you can’t have fun when writing, what’s the point?)

I’m sure you all want to know what on earth THAT conversation was about. It started with the desire to assign vanity titles to friends. One friend was assigned “Intrepid bass-playing sailor cyber warrior” (though that one is possibly still a work in progress). So, of course, I had to ask what my title would be.

She thought something pirate-based. I thought maybe mob boss was more appropriate. But, no: “Nah, you don’t rule through fear. You set rules, and then people come to learn that obeying the rules brings progress while disobeying the rules brings a walk down the plank. Very impersonal, no bloodshed, just terminal disapproval.” Which I read not so much as Pirate as Librarian, and in either case, reminds both of us of what the standards development process is like.

In a way, this builds on a post I wrote a few weeks ago about needing all kinds of people and skills to develop good standards.

A Digital Identity Digest: Pirates, Librarians, and Standards Development (podcast episode, 00:07:50)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Librarians and pirates: unlikely comparisons

On the surface, librarians and pirates couldn’t be more different. One rules a quiet, organized room full of catalogues and classification systems. The other shouts orders across a storm-tossed deck, treasure map in hand.

But scratch at the stereotypes, and the similarities pop up:

Both guard treasure — knowledge or gold. Both rely on codes that aren’t strictly laws, but that everyone learns to respect. Both lead crews (or patrons) who don’t always agree but who need to move in the same direction. And both know that without discipline, the whole ship — or library — quickly sinks.

Standards development, in its own way, needs a bit of both. Librarians bring order, taxonomies, metadata, and interoperability. Pirates bring the consequences: if you won’t play along with the standard, good luck finding allies or charting your course without a map.

Leadership characteristics

So what’s actually useful, whether you’re wrangling sailors, cataloguing a collection, or chairing a standards meeting?

Ability to engage people so they pay attention. Whether it’s a weary deckhand, a confused student, or a standards group at the two-hour mark, keeping attention is half the battle.

Ability to raise one eyebrow sternly. Every ship, library, or working group needs That Person. The person who has one eyebrow that says: “Are you sure you want to keep going down that path?” Sometimes it’s more effective than three paragraphs of meeting minutes.

Ability to lead people to their own conclusions. Neither pirate captains nor librarians hand you the final answer. The captain points at the map and lets you realize the treasure’s yours to dig up. The librarian nudges you toward the right catalogue entry. In standards, this is the art of facilitation: nudging until consensus emerges.

What doesn’t work

Leading purely through fear. Fear doesn’t build commitment; it drives people away. Pirates who rule by terror end up facing mutiny, and librarians who inspire only dread will find books mysteriously mis-shelved out of spite (I hate it when that happens). In standards, disengagement is fatal: if people only show up to avoid backlash, the work stalls and the draft sinks.

Letting others set the tone of fear. A crew ruled by grudges goes nowhere, and a library ruled by petty turf wars becomes unusable. The same is true in standards: if flame wars and side agendas become scarier than the actual process, people stop showing up; without participation, no standard survives.

Romance, intrigue, and life

Obviously, this is a very romanticized version of a pirate (and of a librarian, for that matter). Real librarians don’t spend their time swashbuckling, and real pirates were often violent criminals (also without the swashbuckling). But when I’m not writing, editing, researching, or running meetings, I’m reading trashy romance novels. Romanticized life in my spare time is my idea of entertainment.

And maybe that’s the point: we bring our own metaphors and stories to how we think about leadership and collaboration. Whether you fancy yourself the stern-eyebrowed librarian or the captain with a plank, the truth is that standards need both. Someone to keep the ship steady, someone to keep the records straight, and all of us learning when to raise an eyebrow at just the right time.

Hopefully, this post made you smile. And if it didn’t, I have a Very Stern Look at the ready for you.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Introduction

00:00:31 Hello and welcome back to A Digital Identity Digest.

00:00:35 Today’s episode comes from a dare. And honestly, if you know me, you’ll understand that’s a very dangerous way to start anything.

The dare was simple: write a post about what librarians and pirate captains have in common and why that matters to standards.

How could I say no to that?

00:00:52 Because let’s be honest—if you can’t have fun with your writing, what’s the point?

Pirates and Librarians: Not So Different

00:00:57 At first glance, pirates and librarians couldn’t be more different.

Pirates live on the high seas, sword in hand, shouting orders across storm-tossed decks. Librarians work in hushed halls, surrounded by catalogs and metadata, raising an eyebrow when needed.

And yet, if you look closely, there’s surprising overlap.

00:01:25 This all started with a conversation about vanity titles—those fun, unofficial roles we give each other.

A friend was dubbed the Intrepid Bass-Playing Cyber Sailor Warrior. Mine was harder: pirate? mob boss? librarian?

00:02:06 The final suggestion landed: I don’t rule through fear—I set rules. And when followed, they bring progress. Ignore them, and… well, it’s a walk down the plank.

That sounded far less like a pirate and far more like a librarian—which is fitting, since I have a degree in library science.

Shared Treasures and Shared Codes

00:02:24 So, what do pirates and librarians actually do?

Pirates guard treasure: gold, jewels, captured loot. Librarians guard knowledge: books, archives, collections, and digital resources.

00:02:42 Both operate according to a code.

Pirates had their Pirate Code—rules about dividing loot, settling disputes, and running the ship. Librarians have cataloging standards, metadata schemas, and classification systems.

00:03:08 Neither set of rules carries the weight of law, but ignoring them leads to chaos.

00:03:19 And both depend on their crews. Pirates don’t sail alone; librarians don’t run libraries without staff, volunteers, and community support.

This is the essence of standards development:

Gathering crews

Establishing codes

Protecting shared treasure (protocols, specifications, best practices)

Ignore the structure, and everything sinks fast.

The Keys to Leadership

00:03:39 So, what makes leadership work—whether on a ship, in a library, or in a standards group?

00:03:53 First: the ability to engage people.

Pirates had to keep their crews motivated. Librarians help people navigate information overload. Standards leaders cut through noise and keep focus.

00:04:02 Second: the power of the raised eyebrow.
Every community has that one look that says: “Are you sure you want to go down that path?” Subtle signals can be powerful leadership tools.

00:04:22 Third: leading people to their own conclusions.

Pirates pointed to treasure maps. Librarians point to catalogs and shelves. Standards leaders facilitate consensus rather than forcing agreement.

What Doesn’t Work

00:04:41 Now, let’s talk about what doesn’t work.

Leading through fear. Fear breeds disengagement. Pirates who ruled by terror faced mutiny. Librarians who ruled by dread found books deliberately mis-shelved. In standards, disengagement kills progress.

Letting others set the tone of fear. If grudges rule the ship, it goes nowhere. If turf wars rule a library, the whole community suffers. If flame wars dominate standards groups, the work halts.

Leaders must set the tone. If fear takes over, participation drops—and without participation, nothing survives.

Romanticizing the Metaphor

00:05:43 If you’ve stayed with me this long, you’re probably either giggling or dismayed.

Yes, this is a romanticized version of pirates and librarians.

Real pirates were often violent criminals. Real librarians are not criminals—and do far more than raise their eyebrows.

00:06:13 But that’s exactly what makes the metaphor fun. We all bring our own stories into how we think about leadership and collaboration.

The Balance We Need

00:06:24 Whether you see yourself as a pirate captain, a librarian, or something in between, the truth is: standards need both.

Someone to keep the ship steady. Someone to keep the record straight. And all of us knowing when to raise that well-timed eyebrow.

00:06:41 This episode was short—part reflection, part fun—but with a reminder: standards are made by people. People with quirks, with stories, and sometimes with pirate hats or card catalogs.

Closing Thoughts

00:06:56 Thanks for listening to A Digital Identity Digest.

If you enjoyed this episode:

Subscribe and share it with someone who needs to know that standards don’t have to be boring.

Connect with me on LinkedIn at @hlflanagan.

Leave a rating or review on Apple Podcasts or wherever you listen.

00:07:14 You can also find the written post at sphericalcowconsulting.com.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Pirates, Librarians, and Standards Development appeared first on Spherical Cow Consulting.


FastID

The Tools Gap: Why Developers Struggle to Code Green

77% of developers want to code sustainably, but most lack the tools to measure impact. Fastly’s survey reveals the barriers and opportunities in green coding.

Monday, 22. September 2025

Anonym

How to use MySudo phone numbers for free international calls


If you love travelling, you know the value of unlimited possibilities. That’s why you’re going to want to travel with MySudo app.

MySudo is the original all-in-one privacy app that lets you protect your identity and your information with second phone numbers, secure email, private browsers, and virtual cards – all wrapped into secure digital profiles called Sudos.

Every MySudo feature is handy for international travel, but it’s using the phone numbers for free international calls that will really save you money while you’re away.

But even if you’re not about to hop on a plane, MySudo is still your go-to for free international calls to family and friends.

Here’s how to use MySudo for free international calls whether you’re travelling overseas or calling loved ones from home:

Overseas traveller

If you’re travelling overseas, MySudo gives you free international calling in a choice of regions and area codes. That means no fees and no need for an international roaming plan. Here’s how to set it up:

Download MySudo for iOS or Android.

Choose the SudoMax plan for unlimited minutes and messages for up to 9 separate Sudo phone numbers. (Read: What do I get with SudoMax?)

Choose a phone number and area code in the region you want to travel. MySudo numbers are currently available in the US, UK*, and Canada.

Call and message anyone for free within the region under your SudoMax plan.

Give your Sudo number to locals and they can call you as if it’s a local call (and you can avoid high inbound charges).

So long as you’ve got access to hotel or public Wi-Fi, you can use MySudo for free calls. If you think you’ll sometimes be out of Wi-Fi range, you can get an eSIM or international data roaming plan to use local data, and MySudo will work with those too.

Calling loved ones from home

MySudo lets you call anyone anywhere in the world for free so long as the person you’re calling is using MySudo. Calls between users are end-to-end encrypted, so you can talk privately and securely. Here’s how to Invite your friends to MySudo:

Tap the menu in the top left corner.

Tap Invite your friends.

Choose to invite your friends from your device via another app or from your MySudo account.

Select the Sudo you want to invite from (if you have more than one Sudo).

Follow the prompts.

After you’ve invited a friend, they will receive a link with your MySudo contact information (email, handle and phone number if you have one), which will prompt them to install MySudo. Once they have the app installed, they can instantly start communicating with you. Remember, all video and voice calls, texts and email between MySudo users are end-to-end encrypted.

But wait, there’s more …

7 more facts about MySudo phone numbers

MySudo numbers are real, unique, working phone numbers.

Each phone number has customizable voicemail, ringtones, and contacts list. You can also mute notifications and block unwanted callers.

MySudo numbers are fully functional for messaging, and voice, video and group calling.

Calls and messages with other MySudo users are end-to-end encrypted. Calls and messages out of network are standard.

MySudo phone numbers don’t expire. Your phone numbers will auto-renew so long as you maintain your paid plan.

Calling with MySudo works like WhatsApp or Signal, but with the privacy advantage that you’re not handing over your real number to sign up. You can manage multiple numbers all in one app (read: How to Get 9 “Second Phone Numbers” on One Device).

Under SudoGo plan, you get 1 included phone number; under SudoPro plan, you get 3 included phone numbers; and under SudoMax plan, you get 9 included phone numbers. If you need additional phone number resets, you can purchase them within the app for a small fee. You can always check your plans screen to see how many phone numbers you have remaining before you’ll be prompted to purchase one.

So, to recap how to use MySudo for free international calls:

To make free calls while travelling overseas, choose a Sudo number and area code in your region of travel and get unlimited minutes and messages under the SudoMax plan. Available regions are the United States, United Kingdom*, and Canada.

To make free, end-to-end encrypted calls anywhere in the world, invite your friends to the app.

To call or message regular numbers abroad, use a Sudo number in their region, but sign up to SudoMax so there’s no limit on minutes or messages.

*In order to comply with government and service provider regulations to limit the risk of fraud, users are required to provide their accurate and up-to-date legal identity information before they can obtain UK phone numbers. 
Read: Why are you asking for my personal information when creating a phone number?

Take control and simplify your communication today. Download MySudo.

Before you go, explore the full MySudo suite.

The post How to use MySudo phone numbers for free international calls appeared first on Anonyome Labs.


ComplyCube

ComplyCube Named as an AML Industry Leader in the G2 Fall 2025 Report


ComplyCube has reinforced its Leader status in G2's 2025 Fall Grid Report. The company has achieved recognition for its ease of implementation and ROI in categories including AML, customer onboarding, and biometric authentication.

The post ComplyCube Named as an AML Industry Leader in the G2 Fall 2025 Report first appeared on ComplyCube.


uquodo

UAE’s Move Beyond OTPs: Biometric Authorization for Seamless Transactions

The post UAE’s Move Beyond OTPs: Biometric Authorization for Seamless Transactions appeared first on uqudo.

Kin AI

Kinside Scoop 👀 #14

Better customisation, better memory, better Kin

Hey folks 👋

We’ve kept busy working on Kin - it’s been two weeks already!

Read on to hear what we’ve been up to, and reach the end for this edition’s super prompt.

What’s new with Kin 🚀

Smarter characters, easier flow ✏

We’ve cleaned up the home screen, and made it possible to edit advisor characters right from the homepage selector.

This way, you can make sure all the sides of Kin are exactly who you need them to be - not just your own custom prompt.

Advisors that advise 🧙‍♂️

Your advisors are no longer passive chat partners - when they’ve got something to say (like wondering whether you’ve remembered that meeting you usually forget), they’ll reach out to you personally with a push notification.

You’re in control of this: feel it’s too much? You can turn down the frequency in the app. But if you like it? You can turn it up too.

Memory that remembers who matters 🫂

Our next memory update means Kin now does a better job of extracting people from your messages into your Kin’s private database.

Conversations about important folks should feel more accurate and natural now, as Kin remembers more of the important stuff about them.

Help getting what you need 💡

We’ve also added advisor interaction reminders and frequency tracking. Now you can see how often you’ve chatted with each advisor, and set up reminders to make sure you’re talking with each advisor as often as you’d like.

Voice mode 🎙

We’ve heard your thoughts loud and clear: voice mode is a favorite, but more stability and longer usage times are needed.

There was also an issue for Android users with headsets - we’ve dealt with that, so now Kin’s voice mode shouldn’t get so confused by wires.

For everything else, we’re working on improvements to make it feel seamless. More soon.

Other fixes & polish 🛠

Removed emojis from filter types for better readability

Tweaked chat font design for smoother legibility

Fixed the journal voice button floating mid-screen (no more runaway buttons)

Cleaned up chat formatting in general

Further fixes for Android keyboard issues (hopefully the last!)

Fixed Journal title generation, so auto-generated titles should work much better now

Resolved the double user issue, for those that had it!

Your turn 💭

Kin is moving fast. We have big goals to reach by the end of the year - and we want to make sure we arrive at a place you love as much as we do.

So, like we say every time, there are multiple ways to tell us your thoughts about Kin. Good, bad, strange… we want them all!

You can reach out to the KIN team at hello@mykin.ai with anything, from feature feedback to a bit of AI discussion (though support queries will be better helped over at support@mykin.ai).

For something more interactive, the official Kin Discord is still the best place to talk to the Kin development team (as well as other users) about anything AI.

We have dedicated channels discussing the tech behind Kin, networking users, sharing support tips, and for hanging out.

We also regularly run three casual calls every week, and you’re invited:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

You’re the centre of this conversation - make sure you take your place. Kin’s for you, not for us.

Finally, you can also share your feedback in-app. Just screenshot to trigger the feedback form!

Our current reads 📚

Article: How people really use AI (Claude vs ChatGPT)
READ - thedeepview.co

Report: Mobile app trends in Denmark
READ - franma.co

Article: Apple launches the iPhone 17 Pro, featuring the new A19 chipset built with running LLMs in mind (making a truly-local Kin instance more possible)
READ - Apple

Report: a16z’s app affinity scores for AI users (what other AI apps are users of particular AI most likely to have?)
READ: Olivia Moore via x

This edition’s super prompt 🤖

This time, we’re asking your Kin:

“What kind of support do I best respond to?”

If you have Kin installed and up to date, you can tap the link below (on mobile!) to explore how you think about pressure, and how you can keep cool under it.

As a reminder, you can do this on both iOS and Android.

Try prompt in Kin

This is your journey 🚢

Kin always has been and always will be for you as users. We want to build the most useful and supportive AI assistant we can.

So, please: email us, chat in our Discord, or even just shake the app to reach out to us with your thoughts and ideas.

Kin is only what our users make of it.

With love,

The KIN Team


Veracity trust Network

2025 bot trends see rise of Gen-AI continuing


One of the 2025 bot trends which will continue into the future is the use of GenAI-powered technology to spearhead attacks on both private business and critical infrastructure.

This trend has been growing apace since 2023 and shows no sign of slowing down; according to many reports, it is likely to become an even greater threat.

The post 2025 bot trends see rise of Gen-AI continuing appeared first on Veracity Trust Network.


Okta

Introducing the Okta MCP Server


As AI agents and AI threats proliferate at an unprecedented rate, it becomes imperative to enable them to communicate safely with the backend systems that matter the most.

A Model Context Protocol (MCP) server acts as the bridge between an LLM and an external system. It translates natural language intent into structured API calls, enabling agents to perform tasks like provisioning users, managing groups, or pulling reports, all while respecting the system’s security model. Establishing a universal protocol eliminates the need to build custom integrations. Enterprises can now easily connect their AI agents with Okta’s backend systems to achieve automation of complex chains of activities, quick resolution of issues, and increased performance throughput.

Table of Contents

What the Okta MCP Server brings
Tools and capabilities
Highlights at a glance
Getting started with the Okta MCP Server
Initializing the project
Authentication and authorization
Configuring your client
Using the Okta MCP Server with VS Code
Enable agent mode in GitHub Copilot
Update your VS Code settings
Start the server
Examples in action
Read more about Cross App Access, OAuth 2.0, and securing your applications

What the Okta MCP Server brings

The Okta MCP Server brings this capability to your identity and access management workflows. It connects directly to Okta’s Admin Management APIs, giving your LLM agents the ability to safely automate organization management.

Think of it as unlocking a new interface for Okta, one where you can ask an agent:

“Add this new employee to the engineering group.”

“Generate a report of inactive users in the last 90 days.”

“Deactivate all users who tried to log in within the last 30 minutes.”

Tools and capabilities

In its current form, the server allows the following actions:

User Management: Create, list, retrieve, update, and deactivate users.

Group Management: Create, list, retrieve, update, and delete groups.

Group Operations: View assigned members, view assigned applications, add, and remove users.

System Information: Retrieve Okta system logs.

And many more actions with application and policies APIs as well.

Using the above operations as a base, complex real-life actions can also be performed. For example, you can ask the MCP server to generate a security audit report for the last 30 days and highlight all changes to user and group memberships according to your desired report template.
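To make the bridge concrete, here is a minimal sketch of how an MCP server exposes an operation as a callable tool, using the open-source MCP Python SDK. This is an illustration only, not the Okta MCP Server’s actual code: the deactivate_user tool and its body are hypothetical stand-ins for a real Admin Management API call.

# Minimal, illustrative MCP server exposing one tool over stdio.
# NOT the Okta MCP Server's implementation; the tool body is a stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-identity-server")

@mcp.tool()
def deactivate_user(user_id: str) -> str:
    """Deactivate a user by ID (hypothetical stand-in for an Admin API call)."""
    # A real server would call the management API here and return its result.
    return f"User {user_id} deactivated (simulated)"

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP client can discover and call the tool

An agent can then translate “deactivate this user” into a structured deactivate_user(user_id=...) call, which is exactly the natural-language-to-API bridging described above.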

Highlights at a glance

Flexible Authentication: The server supports both interactive login (via Device Authorization Grant) and fully automated, browserless login (via Private Key JWT). Whether you’re experimenting in development or running a headless agent in production, you can authenticate in the way that fits your workflow.

More Secure Credential Handling: Your authentication details are managed through scoped API access and environment variables, keeping secrets out of code. Tokens are issued only with the permissions you explicitly grant, following least-privilege best practices.

Seamless Integration with Okta APIs: Built on Okta’s official SDK, the server is tightly integrated with Okta’s Admin Management APIs. That means reliable performance, support for a wide range of identity management tasks, and an extensible foundation for adding more endpoints over time.

Getting started with the Okta MCP Server

Now that you know what the Okta MCP server is and why it’s useful, let’s dive into how to set it up and run it. Before you proceed, you will need VS Code, a Python environment (Python 3.9 or above), and uv.

Initializing the project

The Okta MCP server comes packaged for quick setup so you can clone and run it. We use uv (a fast Python package manager) to help ensure your environment is reproducible and lightweight.

Install uv

Clone the repository: git clone https://github.com/okta/okta-mcp-server.git

Install dependencies and set up the project: cd okta-mcp-server && uv sync

At this point, you have a working copy of the server. Next, we’ll connect it to your Okta org.

Authentication and authorization

Every MCP server needs a way to prove its identity so it can access your Okta APIs securely. We support two authentication modes, and your choice depends on your use case.

Option A: Device authorization grant (recommended for interactive use)

This flow is best if you’re running the MCP server locally and want a quick, user-friendly login. After you start the server, it triggers a prompt to log in via your browser. Here, the server exchanges your browser login for a secure token that it can use to communicate with Okta APIs.

Use this if you’re experimenting, developing, or want the simplest way to authenticate.
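For the curious, the browser prompt corresponds to the standard OAuth 2.0 Device Authorization Grant (RFC 8628). Here is a minimal sketch of what the token exchange looks like, assuming the org authorization server’s standard device endpoints; the org URL, client ID, and scope are placeholders:

# Illustrative OAuth 2.0 Device Authorization Grant (RFC 8628).
# Org URL and client ID are placeholders; endpoint paths are assumptions.
import time
import requests

ORG_URL = "https://trial-123456.okta.com"
CLIENT_ID = "your-client-id"

# Step 1: ask for a device code plus a URL the user opens in a browser.
device = requests.post(
    f"{ORG_URL}/oauth2/v1/device/authorize",
    data={"client_id": CLIENT_ID, "scope": "okta.users.read"},
).json()
print("Open in a browser:", device["verification_uri_complete"])

# Step 2: poll the token endpoint until the user approves the login.
# (Real code should also enforce a timeout and handle error responses.)
while True:
    time.sleep(device.get("interval", 5))
    resp = requests.post(
        f"{ORG_URL}/oauth2/v1/token",
        data={
            "client_id": CLIENT_ID,
            "device_code": device["device_code"],
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        },
    )
    if resp.ok:
        access_token = resp.json()["access_token"]
        break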

Before you begin, you’ll need an Okta Integrator Free Plan account. To get one, sign up for an Integrator account. Once you have an account, sign in to your Integrator account. Next, in the Admin Console:

Go to Applications > Applications.

Click Create App Integration.

Select OIDC - OpenID Connect as the sign-in method.

Select Native Application as the application type, then click Next.

Enter an app integration name.

Configure the redirect URIs:
Redirect URI: com.oktapreview.{yourOktaDomain}:/callback
Post Logout Redirect URI: com.okta.{yourOktaDomain}:/

In the Controlled access section, select the appropriate access level.

Click Save.

Where are my new app's credentials?

Creating an OIDC Native App manually in the Admin Console configures your Okta Org with the application settings.

After creating the app, you can find the configuration details on the app’s General tab:

Client ID: Found in the Client Credentials section.

Issuer: Found in the Issuer URI field for the authorization server that appears by selecting Security > API from the navigation pane.

For example:
Issuer: https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta CLI Client or Okta PowerShell Module to automate this process. See this guide for more information about setting up your app.

Note: While creating the app integration, make sure to select Device Authorization as the Grant type.

Once the app is created, follow these steps:

Grant API scopes (for example: okta.users.read, okta.groups.manage).


Copy the Client ID for later use.

Note: Why “Native App” and not “Service”?
Device Auth is designed for user-driven flows, so it assumes someone is present to open the browser.

Option B: Private key JWT (best for automation, CI/CD, and “headless” environments)

This flow is perfect if your MCP server needs to run without human intervention, for example, inside a CI/CD pipeline or as part of a backend service. Instead of prompting a person to log in, the server authenticates using a cryptographic key pair.

Here’s how it works:

You generate or upload a public/private key pair to Okta. The server uses the private key locally to sign authentication requests. Okta validates the signature against the public key you registered, ensuring that only your authorized server can act on behalf of that client.

Use this if you’re automating, scheduling jobs, or integrating into infrastructure.
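Conceptually, the signed request is a standard OAuth 2.0 client assertion (private_key_jwt). A minimal sketch using the PyJWT and requests libraries, with the org URL, client ID, key ID, and key file all as placeholders:

# Illustrative private_key_jwt client-credentials exchange.
# All identifiers and the key path are placeholders.
import time
import uuid
import jwt       # PyJWT (requires the 'cryptography' package for RS256)
import requests

ORG_URL = "https://trial-123456.okta.com"
CLIENT_ID = "your-client-id"
KEY_ID = "your-kid"
TOKEN_URL = f"{ORG_URL}/oauth2/v1/token"

private_key = open("private_key.pem").read()

# Sign a short-lived assertion proving possession of the private key.
now = int(time.time())
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,
        "sub": CLIENT_ID,
        "aud": TOKEN_URL,
        "iat": now,
        "exp": now + 300,
        "jti": str(uuid.uuid4()),
    },
    private_key,
    algorithm="RS256",
    headers={"kid": KEY_ID},
)

# Exchange the assertion for an access token (no browser involved).
resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "scope": "okta.users.read",
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,
    },
)
access_token = resp.json()["access_token"]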

In your Okta org, create a new API Services App Integration.


Under Client Authentication, select Public Key / Private Key.


Add a public key: either generate it in Okta (recommended) and copy it in PEM format, or upload your own keys.


Copy the Client ID and Key ID (KID).


Disable Proof of Possession in the General tab.

Grant the necessary API scopes (e.g., okta.users.read, okta.groups.manage) and provide Super Administrator access.

Configuring your client

You can use Okta’s MCP server with any MCP-compatible client. Whether running a lightweight desktop agent, experimenting in a local environment, or wiring it into a production workflow, the setup pattern is the same.

For this guide, we’ll walk through the setup in Visual Studio Code with GitHub Copilot - one of the most popular environments for developers. The steps will be similar if you use another client like Claude Desktop or AWS Bedrock.

Using the Okta MCP Server with VS Code Enable agent mode in GitHub Copilot

The Okta MCP server integrates with VS Code through Copilot’s agent mode.

Install the GitHub Copilot extension.

Open the Copilot Chat view in VS Code.

To enable Agent mode, check out the steps in the VS Code docs.

Update your VS Code settings

Next, you’ll tell VS Code how to start and communicate with the Okta MCP server. Create a folder named .vscode in your project directory, then add a new file inside it called mcp.json. Copy and paste the configuration below into that file and save it.

{ "inputs": [ { "type": "promptString", "description": "Okta Organization URL (e.g., https://trial-123456.okta.com)", "id": "OKTA_ORG_URL" }, { "type": "promptString", "description": "Okta Client ID", "id": "OKTA_CLIENT_ID", "password": true }, { "type": "promptString", "description": "Okta Scopes (separated by whitespace, e.g., 'okta.users.read okta.groups.manage')", "id": "OKTA_SCOPES" }, { "type": "promptString", "description": "Okta Private Key. Required for 'browserless' auth.", "id": "OKTA_PRIVATE_KEY", "password": true }, { "type": "promptString", "description": "Okta Key ID (KID) for the private key. Required for 'browserless' auth.", "id": "OKTA_KEY_ID", "password": true } ], "servers": { "okta-mcp-server": { "command": "uv", "args": [ "run", "--directory", "/path/to/the/okta-mcp-server", // Replace this path with your own project directory "okta-mcp-server" ], "env": { "OKTA_ORG_URL": "${input:OKTA_ORG_URL}", "OKTA_CLIENT_ID": "${input:OKTA_CLIENT_ID}", "OKTA_SCOPES": "${input:OKTA_SCOPES}", "OKTA_PRIVATE_KEY": "${input:OKTA_PRIVATE_KEY}", "OKTA_KEY_ID": "${input:OKTA_KEY_ID}" } } } }

Note: Before running the server, make sure to replace the placeholder path /path/to/the/okta-mcp-server with the actual directory path of your local project.

Running the server for the first time prompts you to enter the following information:

Okta Organization URL: Your Okta tenant URL.

Okta Client ID: The client ID of the application you created in your Okta organization.

Okta Scopes: The scopes you want to grant to the application, separated by spaces. For example: "OKTA_SCOPES": "${input:OKTA_SCOPES = okta.users.read okta.users.manage okta.groups.read okta.groups.manage okta.logs.read okta.policies.read okta.policies.manage okta.apps.read okta.apps.manage}"

Note: Add scopes only for the APIs that you will be using.

Okta Private Key and Key ID: You only need to enter this key when using browserless authentication. If you’re not using that method, just press Enter to skip this step and use the Device Authorization flow instead.

Start the server

When you open VS Code, you’ll now see okta-mcp-server as an option to start.

Click Start to launch the server defined in your mcp.json file.

The server will check your authentication method:

If using Device Authorization, it triggers a prompt to log in via your browser.

If using Private Key JWT, it will authenticate silently using your key.

Once connected, Copilot will automatically recognize the Okta commands you can use.

At this point, the MCP server has established a connection between VS Code and your Okta organization. You can now manage your organization using natural language commands directly in your editor.

Examples in action

1. Listing Users

2. Creating Users

3. Group Assignment

4. Creating an Audit Report

We invite you to try out our MCP server and experience the future of identity and access management. Meet us at Oktane, and if you run into issues, please open an issue in our GitHub repository.

Read more about Cross App Access, OAuth 2.0, and securing your applications

Integrate Your Enterprise AI Tools with Cross App Access

Build Secure Agent-to-App Connections with Cross App Access (XAA)

OAuth 2.0 and OpenID Connect overview

Why You Should Migrate to OAuth 2.0 From Static API Tokens

How to Secure the SaaS Apps of the Future

Follow us on LinkedIn, Twitter, and subscribe to our YouTube channel for more developer content. If you have any questions, please leave a comment below!

Sunday, 21. September 2025

Rohingya Project

Rohingya Project Launches R-Coin Presale on PinkSale, Powering Blockchain Ecosystem for Stateless Rohingya

The Rohingya Project today announced the launch of its R-Coin token presale on the PinkSale launchpad, inviting impact-driven and crypto-savvy investors to support an innovative social-impact initiative. R-Coin (RCO) is the native token of the project’s SYNU Platform, a blockchain-based network designed to empower over 3.5 million stateless Rohingya refugees worldwide. By participating in the […]

Saturday, 20. September 2025

Recognito Vision

Everything You Need to Know About Face Recognition Systems


Facial recognition is no longer just a sci-fi plot twist. It is now a part of daily life, from unlocking smartphones to airport security checks. A face recognition system uses advanced algorithms to scan, analyze, and verify identities in seconds. Businesses, schools, and governments are rapidly adopting it, but it’s worth digging deeper into how it works, its benefits, and what challenges still exist.

 

Facial Recognition System

At its core, a facial recognition system relies on biometric technology. It captures a person’s facial features, converts them into a digital template, and compares that data with stored profiles to confirm identity. Unlike fingerprints or ID cards, you don’t need to touch anything. Just look at the camera, and the system does the rest.

This technology uses complex neural networks trained on thousands of images. The system maps out key points like the distance between eyes, nose shape, and jawline. The result is a unique faceprint that is nearly impossible to duplicate. Accuracy levels are improving quickly thanks to evaluations like the NIST Face Recognition Vendor Test, which tracks the performance of leading algorithms worldwide.

 

How Face Recognition Technology Works

Understanding the process makes it clear why it is so widely trusted. Here’s a simple breakdown:

Image Capture – A camera captures a person’s face in real time.

Face Detection – The system locates the face in the image and isolates it from the background.

Feature Extraction – Algorithms analyze facial features such as cheekbones, chin curves, and lip contours.

Template Creation – The extracted data is turned into a digital faceprint.

Comparison and Match – The faceprint is compared with existing records to confirm identity.

Accuracy rates are consistently improving. According to NIST FRVT 1:1 testing, leading systems now achieve over 99% verification success under ideal conditions.
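Steps 3 to 5 usually reduce to comparing two embedding vectors. The following toy sketch shows just the matching step, assuming a trained model has already produced the faceprints; the 0.6 threshold is an arbitrary illustration, since production systems tune it to a target false-match rate:

# Toy face-matching step: compare two precomputed faceprints.
# Real systems get these vectors from a trained embedding model
# and tune the threshold to a target false-match rate.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when similarity clears the tuned threshold."""
    return cosine_similarity(probe, enrolled) >= threshold

# Example with random stand-in embeddings (real ones are model outputs).
rng = np.random.default_rng(0)
probe, enrolled = rng.normal(size=128), rng.normal(size=128)
print(is_match(probe, enrolled))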

 

Face Anti-Spoofing and Its Role in Security

Every great lock needs a strong defense. This is where face anti-spoofing comes in. Without it, someone could trick the system using a photo, video, or even a 3D mask. Spoofing attempts are surprisingly common in fraud-heavy industries like finance.

Modern systems fight this using liveness detection. The camera checks for natural movements such as blinking, skin texture changes, and depth. Some solutions even shine light on the face and measure reflections to confirm the presence of a real person. These layers of defense ensure that recognition remains both fast and secure.
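One widely used liveness signal is blink detection via the eye aspect ratio (EAR), which drops sharply when an eye closes. A minimal sketch, assuming six eye-contour landmarks per frame from some landmark detector; the 0.2 threshold is illustrative, not a production value:

# Blink check via eye aspect ratio (EAR), one common liveness signal.
# Landmarks p1..p6 follow the usual eye-contour ordering from a detector.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); low values mean a closed eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return float((v1 + v2) / (2.0 * h))

def blinked(ear_series: list[float], threshold: float = 0.2) -> bool:
    """A dip below the threshold somewhere in the series suggests a real blink."""
    return min(ear_series) < threshold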

 

Face Recognition Attendance System

Schools, offices, and even factories are adopting a face recognition attendance system. No more long queues at biometric scanners or manual sign-in sheets. Employees just walk in, glance at a camera, and their presence is automatically logged.

The benefits are clear:

No contact required, which keeps it hygienic.

Faster processing compared to manual punching.

Reduced buddy punching where one employee marks attendance for another.

Accurate reporting that syncs directly with payroll systems.

Organizations save time and prevent fraud while employees enjoy a hassle-free experience.

 

Face Scanning Attendance System in Education

Schools and universities are also experimenting with a face scanning attendance system. Teachers can focus on teaching instead of wasting class time marking attendance. Parents get real-time updates if their child is present, while administrators gain detailed records for compliance.

Though promising, it does raise questions about student privacy. Educational institutes must handle such systems responsibly and align with global data protection standards like GDPR.

 

Benefits of Face Recognition in Real-World Applications

Let’s talk numbers and impact. The global facial recognition market is projected to reach over $16 billion by 2030. Here’s why it’s growing so fast:

Security – Airports use it to screen passengers quickly.

Fraud Prevention – Banks use it to stop identity theft.

Convenience – Smartphones unlock instantly with a glance.

Efficiency – Attendance and access control become effortless.

Quick Fact Table:

Application | Benefit | Example Use Case
Banking | Stops account fraud | Mobile banking logins
Airports | Speeds up security checks | Passport verification
Education | Saves teaching time | Student attendance
Workplace | Prevents time theft | Employee attendance tracking

 

Privacy and Ethical Concerns

As powerful as the technology is, it sparks serious debates. Who owns the face data? How securely is it stored? What if it gets misused? Regulations are starting to catch up. In Europe, GDPR rules require companies to get clear consent before storing or using biometric data.

Transparency and user control are key. People need to know how their face data is being used and have the right to opt out. Striking a balance between security and privacy remains one of the biggest challenges for the industry.

 

Case Studies: Where It Works Best

Airports – The U.S. Customs and Border Protection agency reported that facial recognition has caught thousands of identity fraud attempts since its rollout.

Corporate Offices – Large firms in Asia have reduced payroll fraud by adopting face-based attendance.

Healthcare – Hospitals use it to secure patient data and restrict access to sensitive areas.

These case studies highlight how versatile and impactful the technology can be when used responsibly.

 

The Future of Face Recognition

Imagine walking into a store, picking items, and leaving without waiting in line. Payment is automatically processed after the system confirms your face. This futuristic scenario is closer than you think. Retailers are already piloting systems where face recognition replaces credit cards.

At the same time, research is focusing on reducing bias. Early systems struggled with accuracy across different ethnicities. Today, continuous improvements are making recognition fairer and more reliable. Open-source contributions on platforms like GitHub are accelerating innovation by giving developers direct access to tools and data.

 

Conclusion

A face recognition system is more than just a tech buzzword. It is reshaping industries by offering speed, security, and convenience. From attendance tracking to fraud prevention, its applications are only expanding. But with great power comes great responsibility, and balancing innovation with privacy will decide how widely it gets adopted in the future. For organizations exploring the technology, brands like Recognito are paving the way with practical, secure, and developer-friendly solutions.

Friday, 19. September 2025

Shyft Network

Middle East Crypto in 2025: From Wild Experiments to Ironclad Rules


The Middle East’s crypto scene is no longer a playground for bold experiments. By September 2025, the region is laying down the law, transforming from a sandbox of ideas into a powerhouse of regulated innovation. Dubai’s regulators are cracking the whip, Bahrain’s rolling out bold new laws, and the UAE’s dirham is staking its claim as the backbone of digital payments. This isn’t just a shift — it’s a seismic leap toward a future where compliance fuels growth. Let’s dive into the forces reshaping the region’s crypto landscape.

Dubai: Where Stablecoins Meet Serious Oversight

Dubai’s Virtual Assets Regulatory Authority (VARA) isn’t messing around. Gone are the days of loose guidelines and “let’s see what sticks.” VARA’s 2025 rulebook is a masterclass in clarity, dictating how stablecoins (Fiat-Referenced Virtual Assets) and tokenized real-world assets (RWAs) must be issued, backed, and disclosed. Want to launch a stablecoin or tokenize a skyscraper? You’d better have your paperwork in order. The real game-changer? Enforcement. VARA recently slapped a fine on a licensed firm, sending a crystal-clear message: licenses aren’t just badges of honor, they’re contracts with accountability. Dubai’s saying loud and clear: innovate, but play by our rules. This isn’t just regulation; it’s a blueprint for trust in a digital age.

Abu Dhabi: The Institutional Crypto Haven

While Dubai swings the regulatory hammer, Abu Dhabi Global Market (ADGM) is crafting a different narrative. Its Financial Services Regulatory Authority (FSRA) has fine-tuned its crypto framework to welcome institutional heavyweights. From custody to payment services, ADGM’s rules for fiat-referenced tokens are a magnet for serious players. Yet, privacy tokens and algorithmic stablecoins? Still persona non grata.

ADGM’s approach is a tightrope walk: embrace cutting-edge innovation while ensuring every move can withstand the scrutiny of global finance. It’s less about flashy pilots and more about building a crypto hub that lasts.

UAE’s Central Bank: Dirham Takes the Digital Crown

The Central Bank of the UAE (CBUAE) is drawing a line in the sand. As of September 2025, only dirham-pegged stablecoins can power onshore payments. Foreign tokens? Relegated to niche corners. This isn’t just policy — it’s a bold bet on the dirham as the anchor of the UAE’s digital economy. By prioritizing local currency, the CBUAE is ensuring the UAE doesn’t just participate in the crypto revolution — it leads it.

Dubai’s Real Estate Revolution: Tokenization Goes Big

Remember when Dubai’s tokenized real estate pilots were just a cool idea? Those days are gone. Recent sales, run with the Dubai Land Department, vanished in minutes, pulling in investors from every corner of the globe. The DIFC PropTech Hub is doubling down, turning these pilots into a full-blown movement. Tokenized property isn’t a gimmick anymore — it’s a market poised to redefine how we invest in real estate.

Bahrain and Beyond: The GCC’s Crypto Patchwork

Bahrain’s not sitting on the sidelines. Its new laws for Bitcoin and stablecoins are designed to make trading safer and more attractive to institutions. Meanwhile, Kuwait and Qatar are playing it cautious, keeping their crypto gates tightly shut. The GCC isn’t moving in unison, but the UAE and Bahrain are sprinting ahead, setting the pace for a region-wide crypto renaissance.

The Privacy Puzzle: Navigating the FATF Travel Rule

Behind the headlines lies a thornier challenge: the FATF Travel Rule. Virtual Asset Service Providers (VASPs) now have to share user data across borders, stirring up privacy and operational headaches. Enter Shyft Veriscope, a peer-to-peer platform that lets firms comply without exposing sensitive customer data to centralized risks. In a region obsessed with trust and growth, tools like these are the unsung heroes of crypto’s next chapter.

Why 2025 Is the Year to Watch

The Middle East isn’t just dabbling in crypto anymore — it’s rewriting the rules of the game. From dirham-backed stablecoins to tokenized skyscrapers, the region is building a digital asset economy where compliance isn’t a burden but a springboard. For founders, investors, and innovators, the message is clear: get on board, align with the rules, and seize the opportunity to shape a future where crypto isn’t just a buzzword — it’s a legacy.

About Veriscope

‍Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Book your consultation: https://calendly.com/tomas-shyft or email: bd@shyft.network


iComply Investor Services Inc.

KYB Compliance Software for Regulated Entities: Navigating Global AML Shifts

KYB requirements are tightening worldwide. This guide helps regulated firms navigate evolving AML expectations and shows how iComply streamlines compliance with secure, scalable software.

Regulated entities – including PSPs, VASPs, investment platforms, and trust companies – must meet rising KYB and AML expectations. This article highlights emerging requirements across the UAE, UK, EU, Singapore, and U.S.

Regulated entities operate in complex environments where KYB and AML compliance are non-negotiable. Whether your firm is a payment service provider (PSP), a virtual asset service provider (VASP), an investment platform, a corporate services provider, a real estate agent, or a mortgage broker, regulators are tightening standards.

In 2025 and beyond, firms must demonstrate robust KYB controls, real-time screening, and jurisdictional audit readiness – especially as rules evolve in key markets like the UK, UAE, and EU.

Emerging Global AML Requirements for Regulated Entities

United Kingdom
Regulators: Companies House, FCA
Shifts: Mandatory KYB and identity verification for directors and PSCs; AML registration and sanctions screening under MLR 2017

United Arab Emirates
Regulators: CBUAE, DFSA, VARA, ADGM
Requirements: Risk-based onboarding, KYB for corporate clients, Travel Rule compliance, UBO discovery, and localized data handling

European Union
Regulators: AMLA (in development), national competent authorities
Shifts: 6AMLD mandates KYB, UBO transparency, risk scoring, and centralized reporting; MiCA introduces crypto-specific controls

Singapore
Regulator: MAS
Requirements: CDD/EDD obligations, sanctions list monitoring, transaction screening, and UBO tracking for regulated businesses

United States
Regulators: FinCEN, SEC, CFTC, state agencies
Shifts: BOI reporting under the Corporate Transparency Act; mandatory KYB and AML controls for regulated financial service providers

Compliance Challenges for Regulated Entities

1. Overlapping Regulatory Bodies
Firms often face scrutiny from sector-specific and national agencies.

2. Diverging Standards
KYB requirements vary across regions, and privacy rules complicate data handling.

3. High-Risk Clients and Transactions
Cross-border payments and digital assets raise red flags.

4. Legacy Compliance Systems
Siloed tools delay onboarding and lack real-time visibility.
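To make the "diverging standards" challenge above concrete: compliance platforms typically encode each jurisdiction's KYB rules as data rather than code, so onboarding logic can stay uniform while thresholds vary. The sketch below is a minimal, hypothetical Python illustration of that pattern; the policy fields, thresholds, and registry names are invented, not iComply's actual configuration.

```python
# Hypothetical sketch: jurisdiction-aware KYB policy selection.
# Field names, thresholds, and registry identifiers are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class KybPolicy:
    ubo_threshold: float   # ownership % above which a UBO must be identified
    travel_rule: bool      # whether Travel Rule data exchange applies
    registry_check: str    # which corporate registry to query


POLICIES = {
    "UK": KybPolicy(ubo_threshold=25.0, travel_rule=True, registry_check="companies_house"),
    "UAE": KybPolicy(ubo_threshold=25.0, travel_rule=True, registry_check="local_licensing"),
    "SG": KybPolicy(ubo_threshold=25.0, travel_rule=True, registry_check="acra"),
}


def policy_for(jurisdiction: str) -> KybPolicy:
    """Fall back to the strictest defaults when a market is not configured."""
    return POLICIES.get(jurisdiction, KybPolicy(10.0, True, "manual_review"))


print(policy_for("UK"))
print(policy_for("BR"))  # unconfigured market -> strict fallback
```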

iComply: Leading KYB Compliance Software for Global Entities

iComply enables regulated firms to standardize and scale AML workflows across jurisdictions with modular tools and built-in localization.

1. KYB + KYC Automation
- Verify entities and individuals using real-time registry, document, and biometric checks
- Visualize UBO networks and flag nominee ownership
- Encrypted edge processing for global data privacy compliance

2. KYT + Risk Monitoring
- Monitor transactions for suspicious patterns or volume anomalies
- Score risk based on client type, geography, and transaction behaviour
- Trigger escalations and audit-logged alerts automatically

3. Centralized Case Management
- Unify screening, onboarding, and regulatory review workflows
- Track every decision, flag, and escalation in one dashboard
- Export formatted reports for FinCEN, FCA, AMLA, and MAS

4. Deployment + Localization
- Deploy on-prem, in private cloud, or across multiple regions
- Jurisdiction-specific policies, thresholds, and audit trails
- Seamless integration with banking, CRM, and identity tools

Case Insight: DIFC-Based Corporate Services Firm

A UAE-regulated corporate services firm implemented iComply’s KYB software to unify compliance across business clients:

- Cut onboarding time by 70%
- Automated UBO and sanctions monitoring
- Passed DFSA audit with zero deficiencies

As KYB expectations evolve globally, regulated entities must modernize fast. iComply’s compliance software simplifies onboarding, standardizes audit preparation, and supports confident cross-border operations.

Talk to iComply to see how our KYB compliance software helps PSPs, VASPs, and financial institutions stay compliant—no matter where they operate.


BlueSky

Building Healthier Social Media: Updated Guidelines and New Features

Public discourse on social media has grown toxic and divisive, but unlike other platforms, Bluesky is building a social web that empowers people instead of exploiting them.

Public discourse on social media has grown toxic and divisive. Traditional social platforms drive polarization and outrage because they feed users content through a single, centralized algorithm that is optimized for ad revenue and engagement. Unlike those platforms, Bluesky is building a social web that empowers people instead of exploiting them.

Bluesky started as a project within Twitter in 2019 to reimagine social from the ground up — to be an example of “bluesky” thinking that could reinvent how social worked. With the goal of building a healthier, less toxic social media ecosystem, we spun out as a public benefit corporation in 2022 to develop technologies for open and decentralized conversation. We built Authenticated Transfer so Twitter could interoperate with other social platforms, but when Twitter decided not to use it, we built an app to showcase the protocol.

When we built the app, we first gave users control over their feed: In the Bluesky app, users have algorithmic choice — you can choose from a marketplace of over 100k algorithms, built by other users, giving you full control over what you see. There is also stackable moderation, allowing people to spin up independent moderation services, and giving users a choice in what moderation middleware they subscribe to. And of course there is the open protocol, which lets you migrate between apps with your data and identity, creating a social ecosystem with full data portability. Just today, we announced that we are taking the next step in decentralization.
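As a toy illustration of how stackable moderation can compose, the Python sketch below merges labels from several independently run labeler services a user subscribes to, then applies the user's own per-label preferences, with the strictest action winning. The types and names are hypothetical; this is not Bluesky's actual labeling API.

```python
# Toy "stackable moderation": merge labels from independent labelers,
# then apply the user's own per-label preferences. Hypothetical names.
from typing import Callable

Post = dict  # e.g. {"uri": ..., "text": ...}
Labeler = Callable[[Post], set[str]]  # returns labels like {"spam", "gore"}


def moderate(post: Post, labelers: list[Labeler], prefs: dict[str, str]) -> str:
    """prefs maps a label to 'hide', 'warn', or 'show'; strictest wins."""
    labels = set().union(*(labeler(post) for labeler in labelers)) if labelers else set()
    actions = {prefs.get(label, "show") for label in labels}
    for action in ("hide", "warn"):  # strictest action takes precedence
        if action in actions:
            return action
    return "show"


spam_labeler: Labeler = lambda p: {"spam"} if "buy now" in p["text"].lower() else set()
post = {"uri": "at://example/post/1", "text": "BUY NOW!!!"}
print(moderate(post, [spam_labeler], {"spam": "hide"}))  # -> "hide"
```

Because each labeler is an independent service and the preference map belongs to the user, swapping moderation "middleware" in or out never requires the platform's permission.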

Although we focused on building these solutions to empower users, we still inherited many of the problems of traditional social platforms. We’ve seen how harassment, vitriol, and bad-faith behavior can degrade overall conversation quality. But innovating on how social works is in our DNA. We’ve been continuously working towards creating healthier conversations. The quote-post used to let harassers take a post out of context, so we gave users the ability to disable them. The reply section often filled up with unwanted replies, so we gave users the ability to control their interaction settings.

Our upcoming product changes are designed to strengthen the quality of discourse on the network, give communities more customized spaces for conversation, and improve the average user’s experience. One of the features we are workshopping is a “zen mode” that sets new defaults for how you experience the network and interact with people. Another is including prompts for how to engage in more constructive conversations. We see this as part of our goal to make social more authentic, informative, and human again.

We’ve also been working on a new version of our Community Guidelines for over six months, and in the process of updating them, we’ve asked for community feedback. We looked at all of the feedback you gave and incorporated some of your suggestions into the new version. Most significantly, we added details so everyone understands what we do and do not allow. We also better organized the rules by putting them into categories. We chose an approach that respects the human rights and fundamental freedoms outlined in the UN Guiding Principles on Business and Human Rights. The new Guidelines take effect on October 15.

In the meantime, we’re going to adjust how we enforce our moderation policies to better cultivate a space for healthy conversations. Posts that degrade the quality of conversations and violate our guidelines are a small percentage of the network, but they draw a lot of attention and negatively impact the community. Going forward, we will more quickly escalate enforcement actions towards account restrictions. We will also be making product changes that clarify when content is likely to violate our community guidelines.

We were built to reimagine social from the ground up by opening up the freedom to experiment and letting users choose. Social media has been dominated by a few platforms that have closed off their social graph and squashed competition, leaving users few alternatives. Bluesky is the first platform in a decade to challenge these incumbents. Every day, more people set up small businesses and create new apps and feeds on the protocol. We are continuing to invest in the broader protocol ecosystem, laying a foundation for the next generation of social media developers to build upon.

Today’s Community Guidelines Updates

In January, we started down the path of updating our rules. Part of that process was to ask for your thoughts on our updated Community Guidelines. More than 14,000 of you shared feedback, suggestions, and examples of how these rules might affect your communities. We especially heard from community members who shared concerns about how the guidelines could impact creative expression and traditionally marginalized voices.

After considering this feedback, and in a return to our experimental roots, we are going to bring a greater focus to encouraging constructive dialogue and enforcing our rules against harassment and toxic content. For starters, we are going to increase our enforcement efforts. Here is more information about our updated Community Guidelines.

What Changed Based on Your Feedback

- Better Structure: We organized individual policies according to our four principles – Safety First, Respect Others, Be Authentic, and Follow the Rules. Each section now better explains what's not allowed and consolidates related policies that were previously scattered across different sections.
- More Specific Language: Where you told us terms were too vague or confusing, we added more detail about what these policies cover.
- Protected Expression: We added a new section for journalism, education, advocacy, and mental health content that aims to reduce uncertainty about enforcement in those areas.

Our Approach: Foundation and Choice

We maintain baseline protections against serious harms like violence, exploitation, and fraud. These foundational Community Guidelines are designed to keep Bluesky safe for everyone.

Within these protections, our architecture lets communities layer on different labeling services and moderation tools that reflect their specific values. This gives users choice and control while maintaining essential safety standards.

People will always disagree about whether baseline policies should be tighter or more flexible. Our goal is to provide more detail about where we draw these boundaries. Our approach respects human rights and fundamental freedoms as outlined in the UN Guiding Principles on Business and Human Rights, while recognizing we must follow laws in different jurisdictions.

Looking Forward

Adding clarity to our Guidelines and improving our enforcement efforts is just the beginning. We also plan to experiment with changes to the app that will improve the quality of your experience by reducing rage bait and toxicity. We may not get it right with every experiment but we will continue to stay true to our purpose and to listen to our community as we go.

These updated guidelines take effect on October 15, and will continue to evolve as we learn from implementation and feedback. Thank you for sharing your perspectives and helping us build better policies for our community.

Thursday, 18. September 2025

LISNR

How Mobility Leaders Turn Idle Ride Time into Opportunity


Mobility leaders across the globe are searching for a constant communication channel with their end customers. For transit leaders, there are three main touchpoints with their end consumers: Ticketing (Boarding), In-Transit, and Exit (Disembarkation). Most mobility leaders perfect one of the three, leaving possible revenue channels and ideal rider experiences on the table.

What communication channel can be capitalized on across all three consumer-journey touchpoints within mobility?

The Problem: Current proximity modalities are each limited by at least one of the following: distance, throughput, hardware requirements, or interoperability.

The Solution: LISNR Radius offers a unique proximity modality that changes the way consumers interact throughout the rider journey. Our Radius SDK relies on ultrasonic communication between standard speakers (already installed in transit vehicles and stations) and microphones found in everyday devices like smartphones. By establishing a communication channel directly between the consumer device and the vehicle or station, transit operators can reduce wait times, improve accessibility, capitalize on idle time in transit, and segment their riders for variable pricing. 
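To give a feel for how data can ride on ordinary speakers and microphones, here is a back-of-the-envelope Python sketch that frequency-shift-keys bytes onto tones just above the audible range. The frequencies, symbol length, and framing here are illustrative assumptions; Radius's actual modulation is LISNR's own design and is not shown here.

```python
# Back-of-the-envelope ultrasonic data transfer: encode bytes as
# frequency-shift-keyed tones near the top of the speaker/mic range.
# Illustrative only; not LISNR Radius's actual modulation or framing.
import numpy as np

SAMPLE_RATE = 44_100          # standard speaker/mic sample rate
F0, F1 = 18_500.0, 19_500.0   # "0" and "1" carriers (Hz), near-ultrasonic
SYMBOL_SECONDS = 0.01         # 10 ms per bit


def encode_bits(payload: bytes) -> np.ndarray:
    """Return PCM samples where each bit is a short tone at F0 or F1."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    tones = []
    for byte in payload:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            tones.append(np.sin(2 * np.pi * (F1 if bit else F0) * t))
    return np.concatenate(tones)


samples = encode_bits(b"TICKET:42")
print(len(samples) / SAMPLE_RATE, "seconds of audio")  # ~0.72 s for 9 bytes
```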

Furthermore, LISNR offers Quest, our loyalty and gamification portal, which allows mobility leaders to keep a unified record of key customer interactions. With Quest, mobility leaders can incentivize off-peak rides and partner with nearby shops to offer advertisements directly to a rider in transit.

The Proliferation of LISNR-Enabled Digital Touchpoints in Mobility

LISNR empowers businesses to capitalize on the digital touchpoints found in everyday transit experiences. By enabling the delivery of speedy ticketing and personalized offers directly to consumers’ devices, transit operators can engage their riders during all three stages of transit.

Ticketing

Legacy ticketing infrastructure creates long queues, is easy to bypass, and simply doesn’t work without a stable internet connection. Radius redefines this process with our ultrasonic SDK by working at longer ranges than NFC, with more pinpoint precision than BLE, and without a network connection at the time of transaction. Radius is already gaining major traction as a ticketing alternative in the mobility space. With our recent partnership with S-Cube, LISNR has expanded to provide a mass ticketing solution to the busiest transit stations in India.

S-Cube needed a faster and more secure way to enable ticketing for millions of riders. Moreover, S-Cube needed ticketing technology that could perform without a reliable network connection. Radius was able to achieve all of these and more. In testing, S-Cube saw a dramatic increase in rider throughput by switching from QR codes to Radius for ticketed gate access. They moved from processing 35 riders per minute to 60 riders per minute, representing an over 70% improvement.

 

S-Cube uses a Zone 66 broadcast at entry, allowing consumers to identify themselves and validate their ticket as they approach the turnstile. Once at the turnstile, consumers broadcast their account-based ticket information to the ticketing machine (Point1000 on Channel 0, from their device’s speaker). Since they have already been identified and validated, passengers can breeze through the ticketing process.
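Abstracting away the hardware, this is a two-phase handshake: pre-validate while the rider is still in the entry zone, then merely confirm identity at the gate. A hypothetical Python sketch of that control flow follows; none of these function names come from the LISNR SDK.

```python
# Hypothetical two-phase entry flow: long-range "zone" tone pre-validates,
# short-range exchange at the turnstile only confirms identity.
validated: set[str] = set()  # account IDs pre-validated in the entry zone


def on_zone_broadcast_heard(account_id: str, ticket_ok: bool) -> None:
    """Phase 1: rider's app hears the zone tone and validates in the background."""
    if ticket_ok:
        validated.add(account_id)


def on_turnstile_tone(account_id: str) -> bool:
    """Phase 2: device broadcasts its account ID at the gate; open if pre-validated."""
    return account_id in validated


on_zone_broadcast_heard("rider-123", ticket_ok=True)
print(on_turnstile_tone("rider-123"))  # True -> gate opens without a pause
```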

In-Transit Promotion

In-transit promotions are not a new concept, with buses and trains already filled with billboard-like advertising. More recently, rideshare applications have started showing ad space on the home page and key active pages. Unfortunately, these advertisements often go unnoticed, are rarely relevant to the end customer, and for rideshares, are only presented to the paying device. LISNR solves these problems with Radius and Quest.

Using Radius, transit operators can capitalize on idle time in transit by sending promotional offerings directly to all consumers’ devices that are present in the vehicle. For example, businesses at certain stops can target specific riders based on their commute patterns. Furthermore, food/grocery delivery platforms can focus on a tired passenger coming home from work.

By establishing this additional communication channel to their riders, the transit operator can send promotional messages from their partners directly to the most important audiences. Because communication happens at the device level, promotional offerings can be tailored to the preferences of the end customer. Radius’s ultrasonic SDK operates above audible frequencies, meaning that even in noisy conditions, riders are still able to receive their promotions.

By incorporating Quest, transit operators (or their marketing partners) can keep a unified record of customers and the promotions they interact with. Over time, this leads to more relevant promotions and a better experience for marketers and riders alike. With Quest and Radius, transit operators can capitalize on riders’ idle time in transit while establishing a positive connection with them.

Image: Radius tone being broadcast at a frequency higher than human hearing.
Image: Example of Quest, gamified loyalty for a mobility ecosystem leader.

Identify the Exit Point

In some modes of transit it’s easy to identify when the consumer exits the vehicle (planes, rideshares); however, most modes of public transit are left in the dark. This lack of visibility into rider disembarkation makes certain variable pricing models nearly impossible. With Radius, transit operators can leverage the rider’s microphone when in-app to detect and confirm the presence of the device. With Radius enabled, mobility operators can begin to charge based on a “Be-In-Be-Out” pricing model. These seamless transit experiences are gaining traction, with the global contactless transit market projected to grow to $33.5B by 2030 (CAGR ~15%). This shift is driven by account-based ticketing and distance/usage-based fares (Source: Allied Market Research, 2023).

LISNR is here to enable transit ecosystem leaders with the technology to support a near-frictionless be-in/be-out user flow for consumers. Our long-range (Zone) ultrasonic tones can broadcast in-vehicle to detect the presence of devices. As riders exit the vehicle, the tones will no longer be detected and the app backend will end the variable pricing model for their trip.
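In pseudocode terms, be-in/be-out is a presence timeout: the trip stays open while the in-vehicle tone keeps being heard, and closes once it has been silent past a grace period. A minimal Python sketch follows, with an assumed 15-second grace period; real fare logic would live in the operator's backend.

```python
# Minimal "be-in/be-out" fare sketch: the trip stays open while the
# in-vehicle tone is heard; silence past a grace period closes it.
# The 15 s grace period is an illustrative assumption.
import time

GRACE_SECONDS = 15.0


class Trip:
    def __init__(self) -> None:
        self.started_at = time.monotonic()
        self.last_heard = self.started_at

    def on_tone_detected(self) -> None:
        """Called each time the app hears the in-vehicle broadcast."""
        self.last_heard = time.monotonic()

    def maybe_close(self) -> float | None:
        """Return the billable duration once the tone has been silent too long."""
        if time.monotonic() - self.last_heard > GRACE_SECONDS:
            return self.last_heard - self.started_at  # bill up to last detection
        return None
```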

Conclusion

LISNR’s contactless solutions help support the mobility and transit ecosystems across all major digital touchpoints in the consumer journey, from ticketing to exit. With these contactless touchpoints optimized for speed and security, ecosystem leaders can capitalize on variable pricing and answer the growing demand for frictionless experiences; all while establishing new revenue streams with in-transit promotions. 

Our customer loyalty and gamification portal, Quest, can support and optimize consumer touchpoints across the journey. Riders can be incentivized to travel in off-peak hours, receive bonuses for promotions that they convert on, and be rewarded for their achievements such as lifetime rides.

We’ve put together a comprehensive PDF that outlines where LISNR outperforms other contactless technologies commonly found in mobility. If you’re interested in learning more or sharing with a colleague, please feel free to download a copy below.


The post How Mobility Leaders Turn Idle Ride Time into Opportunity appeared first on LISNR.


IDnow

Why compliance and player protection define iGaming in Germany – DGGS’s CEO Florian Werner on leading the way.

We spoke with Florian Werner, CEO of Deutsche Gesellschaft für Glücksspiel (DGGS), the operator behind the JackpotPiraten and BingBong brands, to understand how strict regulatory requirements under the Interstate Treaty on Gambling (GlüStV 2021) are shaping iGaming in Germany.

From the country’s strong focus on player protection to navigating compliance challenges, Werner explains how DGGS balances regulation with player experience – and why trusted partners like IDnow are essential for building a sustainable, responsible iGaming market.

As one of Germany’s earliest licensed iGaming operators, DGGS has taken on both the pride and responsibility of setting industry standards. With its JackpotPiraten and BingBong brands, the company is committed to combining entertainment with strong compliance and social responsibility. In this interview, CEO Florian Werner shares how DGGS works with regulators, leverages technology to protect players and adapts to the challenges of one of Europe’s most tightly regulated markets. 

Why being first in Germany came with pride – and responsibility

In 2022, DGGS’ JackpotPiraten and BingBong became the first brands to receive a national slot licence from the German regulator, GGL. What did that milestone mean for you as an operator – especially in terms of your responsibility to lead in compliance and player protection?

We were delighted and proud to be the first operator to meet the necessary requirements for entering the German market. At the same time, we are fully aware of the responsibility this entails. That is why we are committed to acting responsibly and have deliberately chosen such an experienced partner as IDnow to stand by our side, supporting us actively in key areas such as player protection, account verification, and the safety of our players.

How does IDnow help you protect your players?

IDnow helps us reliably verify the identity of our players, making sure that no one can play under a false name. At the same time, the solution provides a secure and compliant identity check that effectively prevents underage gaming and fraud. This way, we create a trustworthy and protected environment for all our players.

Why regulation in Germany creates both challenges and opportunities

What were the most significant regulatory and operational challenges you faced in those first months?

The biggest challenges in the regulated German market have remained unchanged since legalization. These primarily include the high tax burdens in Germany, which have a negative impact on payout ratios and the overall gaming experience of virtual slot machines. In addition, requirements such as a €1 stake limit and the mandatory delay between game rounds (the ‘5-second rule’) pose significant challenges. Since many of these regulations were newly introduced under the 2021 Interstate Treaty on Gambling (GlüStV 2021), a meaningful exchange of experiences was initially difficult. However, we are in contact with various industry representatives and remain hopeful for a more attractive offering for German players in the future.

How does the DGGS work with GGL and other regulators to protect players and combat fraud and how does it stay up to date with any regulatory changes to ensure continuous compliance? 

We are engaged in regular dialogue on multiple levels, collaborating closely with both industry associations and regulatory representatives. In particular, our compliance team maintains an ongoing exchange that we experience as collegial, constructive, and open.

Why technology and trusted partners are the backbone of compliance

What role do trusted technology or identity verification partners play in maintaining your compliance and risk posture?

Verification and identity-check technologies are of vital importance. In Germany, strict regulations rightly govern the handling of personal data. To meet these standards effectively, we rely on experienced external providers whose expertise ensures secure, efficient, and reliable processes at a scale that would not be possible manually.

Why responsible gambling is more than a legal requirement

The German GGL regulation is centred on social responsibility and player protection. What specific measures do you have in place to identify and assist players at risk of gambling harm?

At our online casinos JackpotPiraten and BingBong, we analyze player behavior and ensure a safe gaming experience. If signs of problematic gambling emerge, we are able to reach out directly to the player and if necessary, exclude them from play. As part of the regulated market, we see this consistent and responsible approach as one of our core duties in protecting players.

How do you ensure that your responsible gambling tools are actually effective? Do you measure outcomes or make improvements based on player feedback or behavioral data? 

We take responsible gambling very seriously and therefore conduct ongoing monitoring of player activity. If signs of problematic gambling behavior are detected and cannot be changed, we can take a range of measures, including the closure of the player’s account.

Can you describe how the OASIS self-exclusion system is integrated into your platform and how you handle self-excluded or returning players? 

Players can exclude themselves at any time directly on our platforms through the OASIS self-exclusion system. In addition, a ‘panic button’ is available, enabling an immediate 24-hour break from play. Once registered with OASIS, players are automatically blocked from accessing our platforms and are prevented from receiving any form of personalized advertising. These measures reflect our strong commitment to responsible gambling and player protection.
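In implementation terms, this amounts to a hard gate evaluated before any play session starts. The Python stub below is purely illustrative; the real OASIS registry is a regulated German system queried through official interfaces, not an in-memory set.

```python
# Illustrative self-exclusion gate, NOT the real OASIS integration:
# deny play for self-excluded players and during a panic-button break.
EXCLUDED = {"player-789"}  # stand-in for an OASIS registry lookup


def can_enter_game(player_id: str, panic_break_until: float | None, now: float) -> bool:
    """Return False for self-excluded players or during a 24h panic break."""
    if player_id in EXCLUDED:
        return False
    if panic_break_until is not None and now < panic_break_until:
        return False
    return True


print(can_enter_game("player-789", None, 0.0))  # False: self-excluded
```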

What trends are you seeing in player behavior since the introduction of the new regulatory framework? 

In international comparison, German legislation for virtual slot games is very strict. Tax rates are set at a high level, which negatively impacts the payout ratios of the games. In addition, there are stake restrictions and a requirement for a minimum game round duration of five seconds. Players view these measures very critically and often turn to the less restrictive and more attractive offerings of the black market. As a result, tax revenues in Germany from virtual slot games have been continuously declining, an unfortunate negative trend.

Transparency is key in regulated markets. How do you communicate responsible gambling features and policy updates to your players in a clear and proactive way? 

At Deutsche Gesellschaft für Glücksspiel, raising awareness among players about responsible gaming is a core priority. We follow a dual strategy that goes well beyond legal requirements. In line with regulations, we provide a dedicated information section on our platforms that explains how to use gambling products safely. Clear warnings about potential risks are displayed transparently, and players can access support organizations directly through the links we provide.

Going further, we actively engage our players through a regular newsletter and our innovative Slot Academy. Here, education takes place via live video sessions that continuously address the risks of virtual slot games and promote responsible, informed play.

Why entertainment and responsibility can go hand in hand

Looking ahead, what’s next for DGGS? Are there upcoming developments, features, or goals you’re particularly excited about?

This year we are celebrating the Jackpot Video Awards 2025. The idea for an event together with our players came directly from the community. The Jackpot Video Awards combine entertainment with player protection and are eagerly anticipated by both our team and the players.

Interested in more from our customer interviews? Docusign’s Managing Director DACH, Kai Stuebane, sat down with us to discuss how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape.

By Nikita Rybová, Customer and Product Marketing Manager at IDnow. Connect with Nikita on LinkedIn.


FastID

Publish your website without a host

Deploy static sites to Fastly Compute directly from your browser or IDE. Publish blogs, apps, and websites at the edge without hosting.

Wednesday, 17. September 2025

Dark Matter Labs

Where to? Five pathways for a regenerative built environment

Where to next? Five pathways for a regenerative built environment

Possibilities for the Built Environment, part 2 of 3

This is the second in a series of three provocations, which mark the culmination of a collaborative exploration between Dark Matter Labs and Bauhaus Earth to consider a regenerative future for the built environment as part of the ReBuilt project.

In this piece, we share five pathways toward regenerative practice in the built environment from Dark Matter Labs’ ongoing mission X0 (Extraction Zero). First outlined in A New Economy for Europe’s Built Environment, these pathways are currently being developed by X0 in partnership with cities across Europe.

In the first piece, we suggested how six guiding principles for a regenerative built environment could redirect our focus. In this piece, we lay out five pathways toward regeneration, with suggested benchmarks and possible demonstrators, as a means of starting conversations and identifying allies and tensions. The final piece in the series uses the configuration of the cement industry to explore the idea of nested economies and possible regenerative indicators.

Toward a process-based definition of regeneration

This piece leans into the friction between today’s extractive norms and the regenerative futures we have yet to realise.

We propose five pathways to establish regenerative practices throughout the built environment: these will span scales and sectors while driving change aligned with the principles laid out in the previous provocation. These pathways represent five modes for developing a multiplicity of new metrics, as well as creating the conditions for further progress to be taken on by future generations. Embedded in this logic are multiple and diverse systemic entry points for various actors to engage along the way.

These pathways are directions of travel that can be launched within the current economic system, without adopting a solution mindset. However, there are still real challenges to progress because of today’s political economy and the scale of the polycrisis. While these pathways can be initiated within the current economic system, to be fully realised they must transform the system itself along the way.

One aspiration for these pathways is that they can capture the imagination and energies of a range of stakeholders, by creating containers for the changes it will take to bring us to a regenerative built environment. If we assume that to reach this future we will need both paradigm-shifting ‘impossible’ ideas and real demonstrations of best practices within our current contexts, then these pathways can hold together the different strands of effort, from the more feasible to the boundary-pushing, in one directional container. In each pathway, we ourselves look toward collaborators across geographies and disciplines to imagine, visualise and orient ourselves toward where these shifts could take us, in 2030, 2050 and beyond.

On a pragmatic level, structures to support the initiation and governance of these pathways already exist and can be further fostered. Ownership of pathways can sit at the city or municipal level, supported by city networks such as Net Zero Cities, C40 Cities and others, and further enabled through multi-municipal or regional coalitions to reach national scales. This type of multi-scalar, integrated approach to the pathways can create the conditions for bottom-up schemes and ideas in communities and allow these to grow. The scale and pace of the transition we need requires governing decision-makers to have visibility over exceptional ideas that can push at the edges of the Overton window.

These pathways are not wholesale solutions to the problem, but rather provocative visions to incite discussion, draw out coalitions, grow a sense of responsibility and build momentum. It is not that doing these five things will deliver a regenerative future; rather, these are components of a re-envisioning.

For further exploration of these pathways, please see the white paper A New Economy for Europe’s Built Environment, associated with Dark Matter Labs’ X0 (Extraction Zero) mission.

Pathway 1: Maximising utilisation

Maximising the utilisation of our existing resources, spaces and infrastructures is one of the most transformative actions we can undertake in a context of resource shortage, a carbon emissions crisis and a labour crisis. That is especially relevant in the European context, where our resource and space use inefficiencies are massive. Unlocking this latent capacity promises significant advancements in social justice and in decoupling space and use creation from extraction and pollution. This pathway develops a range of strategies, from full utilisation of the existing building stock to sharing models and flexible space use, supported by instruments such as open digital registries, smart space-use platforms, smart contracts, and the like.
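As a concrete, if toy, illustration of the "open digital registry plus smart space use" instruments mentioned above, the Python sketch below tracks hourly bookings per space and computes a utilisation figure that could flag under-used rooms for sharing. All names and numbers are invented.

```python
# Toy "open registry" for space use: track hourly bookings and surface
# under-used spaces. Illustrative assumptions throughout.
from collections import defaultdict

bookings: dict[str, set[int]] = defaultdict(set)  # space -> booked hours (0-23)


def book(space: str, hour: int) -> bool:
    """Reserve an hour slot; refuse double-booking."""
    if hour in bookings[space]:
        return False
    bookings[space].add(hour)
    return True


def utilisation(space: str, open_hours: range = range(8, 20)) -> float:
    """Fraction of a 12-hour operating day that is actually booked."""
    booked = sum(1 for h in open_hours if h in bookings[space])
    return booked / len(open_hours)


book("office-3F", 9)
book("office-3F", 10)
print(f"{utilisation('office-3F'):.0%}")  # ~17% -> plenty of room to share
```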

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

Deep structural changes in mechanisms to challenge speculative land markets and reform regulatory frameworks will be needed to embed redistributive and democratic principles into the governance of urban space.

Potential challenges:

The implementation of maximal utilisation is severely constrained by today’s profit-driven development logic, which prioritises profit through new development and property speculation over efficient or shared use. Institutional inertia, entrenched ownership regimes and the financialisation of housing all work against such a shift, while digital tools like registries and smart contracts risk reinforcing existing inequalities if not democratically governed.

System demonstrator: reprogramming office buildings from 35% to 90% utilisation, increasing the building’s financial flows.

What could this look like in 2050?
- Multi-actor spatial governance frameworks and use-based permissions
- Dynamic pricing structures for building use based on occupancy and social value creation
- Highly durable building structures with adaptable multi-use internal spaces
- Outcomes-based financing models tied to social and ecological impacts
- Mixed-use public-private-NGO partnerships
- Public digital booking platforms for maximised utilisation of spaces

Pathway 2: Next-generation typologies

Next-generation typologies are no longer governed by the principle that form follows function. Instead, they transcend traditional asset classes based on programmatic use, forming a new asset class valued for the optionality, flexibility, use efficiency and value creation they provide. Here, decoupling value creation from extraction, systemic inefficiencies and carbon emissions happens by focusing on social capital (for instance, radical sharing and cooperation models) as well as intellectual capital (new innovation models and new design typologies).

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

Without directly challenging speculative land markets, financialisation, and the classed and racialised histories embedded in built form, next-generation typologies may risk becoming a greenwashed evolution of the status quo rather than a transformative departure from it.

Potential challenges:

In capitalist urban systems, typologies and asset classes are produced through financial logics, property relations and commodification. Reframing buildings as flexible, innovation-driven assets may simply reproduce these dynamics in a new guise, reinforcing speculative value creation and market discipline under the banner of sustainability.

System demonstrator: Community living rooms – lightweight extensions on existing buildings, providing amenities with the right to use.

What could this look like in 2050?
- Building public awareness of the benefits of social time in relation to mental health
- New standards and codes for shared spaces and assets
- Tax reductions linked to the carbon reduction impact of maximising efficiency
- Shared kitchens, living rooms, laundry rooms, appliances, tools and workshops
- Policy innovation enabling categorisation of shared spaces
- Increased cross-generational support; decreased loneliness, depression and stress levels

Pathway 3: Systems for full circularity

Even though we have comprehensive knowledge of circularity, current levels in Europe are extremely low, and globally the rate is declining; this work therefore focuses on the systems unlocking circularity and the instruments driving its advancement on the ground. Apart from a comprehensive understanding of the craft (design for disassembly, development of city-scale material components networks, use of non-composite materials), we need the institutional economy and systems that enable circularity. That includes instruments such as material registries, material passports, financing mechanisms and design regulations, all developed simultaneously to unlock the new systems for circularity.

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

For circularity to be genuinely transformative, it must be accompanied by political and economic restructuring — challenging the growth imperative, redistributing material control, and embedding democratic governance into how urban resources are managed and reused.

Potential challenges:

Structural barriers hinder circularity. Extraction, planned obsolescence and short-term profit maximisation, which are the main imperatives in the current system, actively disincentivise long-term material stewardship. Circular practices often require slower, more localised and collaborative modes of production, which clash with the logics of global supply chains, speculative development and financialised real estate.

Moreover, without addressing issues of ownership, labour relations and uneven access to materials and technologies, circular systems risk being implemented in ways that benefit private actors while offloading costs onto public bodies or marginalised communities.

System demonstrator: City-scale architectural components bank, with developers’ right-to-use models
What could this look like in 2050?
- Material data registries and warranties for secondary materials
- Lightweight extensions, maximising utilisation and reuse of existing buildings
- City-scale material balance sheets and data registries for localised material cycles
- Civic material hubs for storage and distribution; zero-carbon transport and logistics networks
- Demountable and highly adaptable building design
- Sinking funds for facilitating material reuse during deconstruction

Pathway 4: Biogenerative material economy

The long-term future of our material economy must be bioregenerative. This transition needs a deep understanding of systems impacts, avoiding further global biodiversity loss and land degeneration through green growth. This shift requires a transformation in land use for materials, moving from “green belts” to permaculture and regenerative methods, and from supply chains to local supply loops. This requires developing new local material forests, zero-carbon local transport and non-polluting construction methods, as well as the policy, operational and financial innovation needed for the successful implementation of a fully biocompatible material economy.

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

True transformation will involve challenging capitalist land markets, redistributing land and decision-making power and centering indigenous and community-led stewardship practices within the material economy.

Potential challenges:

We must not underestimate how global capitalism — through land commodification, agribusiness and extractive supply chains — actively undermines regenerative potential. Transforming green belts into permaculture zones, or establishing local material forests, requires not just technical and policy innovation, but a fundamental shift in land ownership, governance and power relations. Without addressing who controls land and resources, and whose interests are served by current material economies, there is a danger that biogenerative strategies become niche or elite enclaves, rather than systemic solutions.

System demonstrator: Neighbourhood gardens of biomaterials for insulation-panel components for on-site retrofitting.

What could this look like in 2050?
- Regenerative agriculture & forestry practices and open education programs
- Certification for regenerative agriculture & carbon storage
- Macro-investments in bioregional forests & urban farms
- Civic biomaterial experimentation workshops & micro-factories
- Land restoration & rewilding sinking funds
- Regional, regenerative biomaterial supply chains; zero-carbon logistics networks

Pathway 5: Shifting comfort, increasing contact

The ways we live in buildings today alienate us from our environmental and earthly context. Today’s built environment is designed to optimise for sterilisation through conditioned environments, separating us from the biomatter that is both input and output to our livelihoods. In providing comfort, we have been depending on the extraction of resources, other species, biodiversity and, ironically, ourselves. We need to decouple the economy of comfort (shorthand here for human-optimised environmental conditions) from extraction and externalisation. Pathways for driving this shift include participation and care models, increasing social values, shifting the human relation to nature, a shift from technological to ecological services providing comfort, an increase in social and physical activity, and a shift from the building scale to other scales, such as city-scale nature-based infrastructures and micro-scale furniture or clothing.

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

Real progress will involve confronting the socio-economic systems that produce uneven access to comfort, land and energy, and reconfiguring them through justice-oriented redistribution, democratic urban governance and decommodified approaches to housing and care.

Potential challenges:

In this pathway, we must not romanticise behavioural or cultural change without sufficiently addressing the structural conditions that produce and maintain the current ‘economy of comfort’. The alienation it describes is not simply the result of misplaced design priorities or cultural habits, but of a capitalist system that commodifies comfort, standardises it through global construction norms, and externalises its costs onto ecosystems and marginalised communities. Some people experience the comfort constructed by today’s systems much more than others.

Shifting toward ecological and participatory models of comfort is valuable, but without challenging the political economy that privileges resource-intensive, climate-controlled lifestyles for some while denying basic shelter or agency to others, such shifts may remain symbolic or limited in scope.

System demonstrator: Retrofitting a neighbourhood to new comfort standards to increase the area’s economic resilience to a changing energy landscape.

What could this look like in 2050?
- New standards and codes for comfort
- Tax reductions linked to shifts in investments from mechanical towards ecological services
- Curricula rethinking lifestyles in relation to health impacts
- Investments in extending ecological services and permeable surfaces for flood mitigation, and indoor and outdoor comfort through passive climatisation
- Infrastructures for integral value accounting
- Capturing and measuring physical and mental health impacts
- More community and individual knowledge about how to deal with the material world, ranging from biomatter to biodegradable consumer goods
- Local biowaste sorting and utilisation in industry/agriculture

From a static to a process-based definition of a regenerative future

In viewing our transition to a regenerative built environment through these core shifts, we look toward a process-based definition of what is regenerative. Such a definition would be an understanding of the regenerative that is not calculated from fixed, profit-driven metrics, determined on the basis of isolated data points, or tied to particular policy benchmarks, but is rather something dynamic, intuitive, and assembled from across knowledge spheres and perspectives, with their associated means of measurement.

A process-based definition might adapt to the changing data landscape, material reality, technopolitical ground conditions and Overton windows of different contexts. Absolute metrics like embodied carbon are difficult to measure accurately and fail to capture the whole picture, while targets pegged to individual points in time and specific standards can quickly become obsolete. A process-based approach is inspired by DML’s Cornerstone Indicators, a methodology which creates composite, intuitive indicators for assessing change over time, co-developed and governed in place.

Originally co-designed with Dr Katherine Trebeck, the Cornerstone Indicators were initiated in the city of Västerås in Sweden to support citizens in co-designing simple, intuitively understandable indicators that encapsulate what thriving means to the people of the Skultuna district. The indicators, which align with overall goals like ‘health & wellbeing’ and ‘strong future opportunities’, can facilitate greater understanding of a place, enable further conversation, and guide future decisions. The initial 9-month workshop process to design this first iteration of the Cornerstone Indicators resulted in indicators such as ‘the number of households who enjoy not owning a car’ and ‘regularly doing a leisure activity with people you don’t cohabit with’, which were analysed and offered to local policymakers. The success of this process has led to explorations of the Cornerstone Indicator process across Europe and North America.

Initiatives like the Cornerstone Indicators present a model of how momentum toward a regenerative future for the built environment can be built. It’s urgent that we begin using process-based definitions and practices to bring more people to the table and increase the potential for transition pathways to gain traction.
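To show how composite indicators of this kind can be tracked over time, here is a small Python sketch that normalises and weights several locally chosen measures into one index. The components and weights are invented for illustration and are not the actual Västerås indicators.

```python
# A composite, "cornerstone"-style indicator: locally chosen measures,
# already normalised to 0..1, are weighted into one trackable number.
# Components and weights are invented for illustration.
def cornerstone_index(components: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalised component values; returns 0..1."""
    total_weight = sum(weights[name] for name in components)
    return sum(components[name] * weights[name] for name in components) / total_weight


survey_2025 = {
    "households_happy_without_car": 0.31,
    "weekly_leisure_with_non_cohabitants": 0.58,
}
weights = {
    "households_happy_without_car": 1.0,
    "weekly_leisure_with_non_cohabitants": 1.0,
}
print(f"{cornerstone_index(survey_2025, weights):.2f}")  # ~0.45 this cycle
```

The point is less the arithmetic than the governance: which components enter the index, and with what weights, is decided and revisited in place, so the indicator can evolve with the community it describes.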

Conclusion

In the first two pieces in this series, we have explored the idea of a regenerative future in the built environment by examining how our current frameworks for regeneration fall short of meeting the demands of the present moment. We outline principles and pathways for charting a course toward genuine transformation.

In providing examples of leading-edge organisations making progress toward a regenerative future, these pieces are intended to invite conversation, feelings of agency and reflection, even in the face of prevailing systemic constraints. Rather than offering neat solutions, this piece seeks to open doors to new possibilities.

The context and projections offered here raise a number of questions. For a wholesale transition, it will be important to understand what will indicate progress toward regeneration, as well as how decisions will be made in order to resist the co-opting of regenerative principles into status quo ways of operating.

The remaining piece in this series will explore:

- How configurations of material extraction, labour and monetary capital entrench nested economies and particular power relations, using the example of the cement industry
- Possible indicators of progress toward a regenerative built environment, and the limitations encountered

Together these pieces aspire to introduce the idea of a regenerative built environment and associated promises and challenges, to inspire a sense of direction and to sketch the broader systemic shifts to which we must commit.

This publication is part of the project ReBuilt “Transformation Pathways Toward a Regenerative Built Environment — Übergangspfade zu einer regenerativen gebauten Umwelt” and is funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) on the basis of a resolution of the German Bundestag.

The five pathways in this provocation are based on the white paper A New Economy for Europe’s Built Environment and ongoing work by Ivana Stancic and Indy Johar, as part of the X0 (Extraction Zero) mission at Dark Matter Labs.

In addition, this piece represents the views of the team, including, from Dark Matter Labs, Emma Pfeiffer and Aleksander Nowak, and from Bauhaus Earth, Gediminas Lesutis and Georg Hubmann, among other collaborators within and beyond our organisations.

Where to? Five pathways for a regenerative built environment was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Shyft Network

Shyft Network’s Veriscope Powers Compliant Crypto Trading with Nowory in India


India’s crypto market, with 93 million investors, demands infrastructure that balances innovation with FATF Travel Rule compliance. Shyft Network, a leading blockchain trust protocol, has partnered with Nowory, an Indian crypto trading platform, to integrate Veriscope, the only frictionless solution for regulatory compliance. This collaboration showcases Veriscope’s ability to enable secure, compliant digital finance in high-growth markets while prioritizing user privacy.

Why Veriscope Matters for India’s Crypto Ecosystem

As India’s regulatory framework evolves, Virtual Asset Service Providers (VASPs) need tools to ensure compliance without complexity. Veriscope leverages cryptographic proof technology to facilitate secure, privacy-preserving data exchanges, aligning with FATF Travel Rule requirements. By integrating Veriscope, Nowory demonstrates how VASPs can achieve regulatory readiness seamlessly.
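The core of "cryptographic proof" here is a standard challenge-signature exchange: the wallet signs a fresh challenge with a key only the user holds, and the VASP verifies it against the wallet's public key. The Python sketch below, using the `cryptography` package's Ed25519 primitives, is a minimal illustration under those assumptions, not Veriscope's actual User Signing flow.

```python
# Minimal challenge-signature exchange proving control of a non-custodial
# wallet. Illustrative only; not Veriscope's actual User Signing flow.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

wallet_key = Ed25519PrivateKey.generate()          # lives only on the user's device
challenge = b"transfer-7f3a-2025-09-18"            # hypothetical per-transfer nonce

signature = wallet_key.sign(challenge)             # user approves in their wallet

public_key = wallet_key.public_key()               # shared with the VASP
try:
    public_key.verify(signature, challenge)        # VASP checks wallet ownership
    print("wallet ownership proven")
except InvalidSignature:
    print("proof rejected")
```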

Nowory’s Role in the Partnership

Nowory, launched in August 2025, is an Indian crypto trading platform designed to serve India’s 93 million crypto investors with a secure and efficient bank-to-crypto gateway. By integrating Veriscope, Nowory aligns with global compliance standards, eliminating risky P2P trading and supporting India’s growing demand for regulated crypto infrastructure.

Key Benefits of Veriscope’s Integration

The Shyft Network-Nowory partnership highlights Veriscope’s power to transform crypto compliance:

- Frictionless Compliance: Simplifies FATF Travel Rule adherence without burdening platforms or users.
- Privacy-First Design: Protects user data using cryptographic proofs, ensuring autonomy.
- Scalable Solutions: Supports growing VASPs in dynamic markets like India.

Zach Justein, co-founder of Veriscope, emphasized the integration’s impact:

“India’s crypto market needs solutions that streamline compliance while preserving privacy. Veriscope’s integration with Nowory reflects Shyft Network’s commitment to secure, compliant blockchain infrastructure.”
Powering a Compliant Crypto Future

Nowory joins a global network of VASPs adopting Veriscope to meet regulatory demands seamlessly. This partnership underscores the need for secure, compliant crypto infrastructure in high-growth markets like India.

About Veriscope

Veriscope, built on Shyft Network, is the leading compliance infrastructure for VASPs, offering a frictionless solution for FATF Travel Rule compliance. Powered by User Signing, it enables VASPs to request cryptographic proof from non-custodial wallets, simplifying secure data verification while prioritizing privacy. Trusted globally, Veriscope reduces compliance complexity and empowers platforms in regulated markets.

About Nowory

Nowory is an Indian crypto trading platform launched in August 2025, designed for secure and efficient trading of assets like Bitcoin, Ethereum, and Solana. It provides a direct bank-to-crypto gateway for India’s 93 million crypto investors, emphasizing regulatory readiness and the elimination of risky P2P trading.

Stay ahead in crypto compliance.

Visit Shyft Network, subscribe to our newsletter, or follow us on X, LinkedIn, Telegram, and Medium.

Book a consultation at calendly.com/tomas-shyft or email bd@shyft.network

Shyft Network’s Veriscope Powers Compliant Crypto Trading with Nowory in India was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 16. September 2025

Extrimian

Why Extrimian is an AI-First Company

Why Extrimian’s AI‑First Approach Improves Digital Credential Solutions

Let’s start by explaining why Extrimian is an AI-first company. Our goal is to give universities and other startups a faster, more reliable way to issue, manage, and verify credentials—while ensuring our own teams work smarter behind the scenes. This post explains how our AI‑first ethos (via the internal agent Micelya) makes Extrimian more efficient, and how our University Portal product solves the very real problems of diploma fraud, identity theft and manual verification.

TL;DR
- Extrimian’s AI‑first philosophy refers to how we work internally, not how we verify credentials.
- Our agent Micelya organises knowledge and speeds up development and support.
- Self‑Sovereign Identity (SSI) and cryptographic signatures secure the credentials; AI is not used in the verification flow.
- By using AI internally and SSI externally, Extrimian delivers more complete features, faster updates and a calmer verification process for universities and students.

What does “AI‑first” really mean at Extrimian?

When Extrimian says we are AI‑first, we’re talking about our own processes, not the product’s cryptographic core. We have an internal agent called Micelya that acts like a living knowledge hub for our teams. It stores and organises product specifications, SOPs, design decisions and customer insights, making them easy to find and apply. 

How do we use Micelya internally?

Agile and interdisciplinary processes

To keep Micelya truly useful, our product and engineering teams continually feed it with the latest internal documentation, release notes, process playbooks and step‑by‑step guidelines for every product. This curated knowledge helps the agent surface the right answers, recommend the correct templates and shorten hand‑offs across the organisation.

When engineers or product managers work on a new release, Micelya suggests the right protocol or template and reminds us of past decisions. This means we iterate more quickly, avoid duplication, and keep every improvement in play. The agent doesn’t handle your credentials; it powers how we build and support the product.

How does Micelya make Extrimian faster and more consistent?

Micelya’s role is to optimize Extrimian’s internal processes. It automatically flags related resources—SOPs, integration steps, templates—at the moment a team member needs them. It nudges us when something requires approval or when a template must be updated. It also stores lessons learned from support tickets and feature requests, so improvements become part of our future releases. This means we respond to universities more quickly, address issues more consistently, and ship updates faster. Because the agent streamlines our internal workflow, you receive a product that evolves continuously without long delays.
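For the curious, the knowledge-hub pattern Micelya embodies can be pictured as retrieval over internal documents. The Python sketch below uses naive keyword overlap purely to make the idea visible; a production system would use embeddings and richer metadata, and these document names are invented.

```python
# Toy internal-knowledge retrieval: score documents against a query and
# surface the most relevant SOP/template. Real systems would use embeddings;
# keyword overlap keeps the idea visible. Document names are invented.
def score(query: str, doc: str) -> int:
    """Count how many distinct document terms appear in the query."""
    q_terms = set(query.lower().split())
    return sum(1 for term in set(doc.lower().split()) if term in q_terms)


DOCS = {
    "release-sop": "release checklist approval template rollout steps",
    "support-playbook": "ticket triage escalation university support",
}


def top_doc(query: str) -> str:
    """Return the best-matching document name for a query."""
    return max(DOCS, key=lambda name: score(query, DOCS[name]))


print(top_doc("what is the approval template for a release"))  # release-sop
```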

Why does AI‑first matter for universities if it’s only internal?

You might wonder why our internal AI should matter to you. Simply put, Micelya makes Extrimian more efficient, which reflects in our product and support. Faster iteration cycles mean new features and fixes arrive sooner. A shared knowledge hub ensures you receive consistent advice regardless of who answers your call. When updates roll out, they’re informed by a complete history of past decisions and user feedback. Although AI never touches your credentials or verification flow, our AI‑first culture ensures we deliver a more refined, dependable product.

Why is it good to be an AI‑first company?

Being AI‑first has benefits that extend beyond Extrimian; companies in many sectors adopt AI to become more responsive, innovative, and resilient. Here’s a concise summary of key advantages and how they play out in our case:

| Benefit of being AI‑first | Impact on operations | Extrimian example |
|---|---|---|
| Efficiency | Faster decisions & shorter release cycles | Micelya surfaces the right SOPs and templates so teams ship updates quicker |
| Knowledge retention | Shared, up‑to‑date repository of policies & best practices | Our knowledge hub prevents repeated mistakes and speeds new‑hire onboarding |
| Cross‑team alignment | Consistent workflows and communication across departments | Product, engineering & support teams work from the same playbook |
| Continuous improvement | AI highlights patterns & informs roadmaps | Micelya captures feedback loops so each release builds on lessons learned |
| Better customer experience | Quicker responses & higher‑quality products | Universities see faster support, smoother updates and less rework |

This table illustrates why an AI‑first mindset isn’t just a buzzword—it underpins real gains in speed, quality and alignment. For Extrimian, those gains help us deliver a stable verification product more rapidly and consistently.

What do students and verifiers experience?

From a student’s perspective, digital credentials mean convenience and control. They receive tamper‑proof proofs right in their ID Wallet and share them through a link or QR code. They aren’t forced to disclose their entire transcript when only enrollment status is needed. For verifiers, checking credentials is just as straightforward: visit the university’s verification page, scan the QR code or paste the link, and see an immediate result with clear guidance. No waiting for emails, no guesswork, and no reliance on appearance. This streamlined experience increases trust and speeds up decision‑making for everyone.

AI for process, cryptography for trust

Extrimian’s approach balances two forces: cryptographic security for credentials and AI‑driven efficiency for internal work. SSI and digital signatures make diplomas and enrolment proofs tamper‑proof, while the AI‑first mindset (through Micelya) reduces friction in our development and support processes. The two realms remain separate; AI does not verify credentials, but it helps us build better products and respond faster. For universities, this means a reliable, ready‑to‑use product backed by a company that continuously improves without sacrificing trust.
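To make that separation concrete (cryptography, not AI, doing the verifying), here is a minimal sketch of issuing and checking a signed credential payload with Ed25519 via the PyNaCl library. The credential fields, key handling, and DID are invented for this example; Extrimian's actual product follows the W3C Verifiable Credentials model rather than this toy format.

```python
import json
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# Issuer (e.g., a university) holds a signing key; verifiers only need the public key.
issuer_key = SigningKey.generate()
public_key = issuer_key.verify_key

# A toy credential payload -- the field names are invented for illustration.
credential = {
    "type": "EnrollmentCredential",
    "holder": "did:example:student-123",
    "claim": {"enrolled": True, "term": "2025-26"},
}
payload = json.dumps(credential, sort_keys=True).encode()

# Issue: sign the canonicalized payload.
signed = issuer_key.sign(payload)

# Verify: anyone with the issuer's public key can check integrity offline.
try:
    public_key.verify(signed.message, signed.signature)
    print("credential verified")
except BadSignatureError:
    print("tampered or forged credential")
```

Tampering with even one byte of the payload makes verification fail, which is why no AI needs to sit in the verification flow.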

Recommended resources:

Internal links

- University Portal overview – Learn more about our University Portal and how it issues tamper‑proof credentials.
- ID Wallet – See how the ID Wallet lets students carry and share their credentials securely.
- About Extrimian / Our Story – Discover who we are and why we invest in internal AI to deliver better products.
- Blog archive / Learning Resources – For a deeper dive into SSI and digital identities, explore our resources page or related articles.
- Contact / Demo page – If you’d like to see the portal in action, book a demo with our team.

External links

- W3C Verifiable Credentials specification – The W3C’s Verifiable Credentials Data Model defines how digital credentials are issued and verified.
- Self‑Sovereign Identity (SSI) explainer – SSI is an approach that puts individuals in control of their data; this overview explains the core principles.
- EDUCAUSE research – Recent studies show credential fraud is on the rise; this EDUCAUSE report outlines the challenge for universities.
- FIDO Alliance passkey standards – Passkeys are based on the FIDO2/WebAuthn standard for secure, phishing-resistant login.


The post Why Extrimian is an AI-First Company first appeared on Extrimian.


Holochain

Dev Pulse 151: Network Improvements in 0.5.5 and 0.5.6

Dev Pulse 151

We released Holochain 0.5.5 on 19 August and all tooling and libraries are now up to date.

Holochain 0.5.5 and 0.5.6 released

With these releases, we’re continuing to work on network performance for the Holochain 0.5.x series. There have been a bunch of bug fixes and improvements:

- New: At build time, Holochain can be switched between the libdatachannel and go-pion WebRTC libraries, with libdatachannel currently the default in the Holochain conductor binary release and go-pion the default in Kangaroo-built hApps. go-pion is potentially free from an unwanted behaviour in libdatachannel, in which the connection is occasionally force-closed after a period of time. If you’ve seen this behaviour, consider trying your hApp in a Kangaroo-built binary to see if it’s resolved.
- Changed: Some tracing messages are downgraded from info to debug in Kitsune2 to reduce log noise.
- Bugfix: Make sure the transport layer has a chance to fall back to relay before timing out a connection attempt.
- Bugfix: When Holochain received too many requests for op data, it would start closing connections with the peers making the requests it couldn't handle. This caused unnecessary churn to reconnect, rediscover what ops need fetching, and send new requests. Instead, the requests that can't be handled are now dropped and have to be retried; the retry mechanism was already in place, so that part just works. When joining a network with a lot of existing data, the sync process is now a lot smoother. (A generic sketch of this drop-and-retry pattern follows this list.)
- Bugfix: Established WebRTC connections would fall back to relay mode when they failed; now the connection is dropped, and peers will try to establish a new WebRTC session.
- Bugfix: If a WebRTC connection could not be established, the connection would sometimes be left in an invalid state where it could not be used to send messages and Holochain wouldn't know to replace the connection to that peer.
- Bugfix: Holochain was using the wrong value for DHT locations. This led to differences being observed in the DHT model between peers, who would then issue requests for the missing data. The data couldn't be found because the requested locations didn't match what was stored in the database, so DHT sync would fail to proceed after some period of time. Note: updating a hApp from Holochain 0.5.4 or earlier might cause a first-run startup delay of a few seconds as the chunk hashes are recalculated.
- Bugfix: Kitsune2 contains a fix for an out-of-bounds array access bug in the DHT model.
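That op-data fix is classic load-shedding: drop excess requests rather than tearing down the connection, and let the caller's retry loop recover. A generic Python sketch of the idea (not Holochain's actual Rust code, and with an invented capacity number) might look like:

```python
import collections
import time

MAX_QUEUE = 8  # invented capacity; Holochain's real limits differ
queue = collections.deque()

def enqueue_request(req) -> bool:
    """Accept a request if there is capacity; otherwise drop it.
    Dropping (rather than closing the connection) avoids the churn of
    reconnecting and rediscovering which ops still need fetching."""
    if len(queue) >= MAX_QUEUE:
        return False  # shed load; the connection stays open
    queue.append(req)
    return True

def request_with_retry(req, attempts: int = 3, backoff: float = 0.1) -> bool:
    """Caller-side loop that simply re-sends dropped requests."""
    for attempt in range(attempts):
        if enqueue_request(req):
            return True
        time.sleep(backoff * (2 ** attempt))  # back off before retrying
    return False
```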

Shifted priorities for 0.6

We’d originally planned to start the groundwork for coordinator updates (allowing a cell’s coordinator zomes to be updated) and DHT read-access gating via membrane proofs in Holochain 0.6. We’re now pushing those to a later release in favour of warrants and other features that address the strategically critical priorities of our partners.

These are the major themes of our work on 0.6:

- Resolving incomplete implementations of the unstable warrants feature, writing more tests, and marking the feature stable for all app and sys validation except chain forks.
- Finishing the work that allows Holochain to block peers at the network level if they publish invalid data.
- Making sure that the peer connection infrastructure is production-ready.
- Continuing to build out the Wind Tunnel infrastructure and test suite.

There are a few smaller themes; check out the 0.6 milestone on GitHub for the full story.

Wind Tunnel updates

With many of the big gains in network performance and reliability realised in the 0.5 line and two new developers joining our team, we’ve freed up developer hours to focus on the Wind Tunnel test runner once again. Our big goal is: make it more usable and used. To this end, here are our plans:

- We want to run the tests on a regular, automated schedule to gather lots of data and track changes over Holochain’s development.
- Rather than it being a requirement that a conductor is running alongside Wind Tunnel, Wind Tunnel itself will run and manage the Holochain conductor, allowing us to test conductors with different configs or feature flags within a scenario.
- Wind Tunnel already collects metrics defined in each scenario, but we are expanding on this to collect metrics from the host OS, such as CPU usage, and from the conductor itself. This will give us insight into system load and how the conductor is performing during the scenarios.
- More scenarios will be written, including complex ones involving malfunctioning agents and conductors with different configurations.
- More dashboards are being created to display the new metrics and give us insight into how the scenarios perform from version to version. These will then make it easy for us to track how Holochain's performance envelope changes as new features are added, and also to make it easier to prioritize where to focus our optimization efforts.
- We plan to run multiple scenarios on a single runner in parallel to make better use of the compute resources we have in our network. Along with adding more runners to the network, this will reduce the time it takes to run all of the tests, which will let us run the tests more often.
- We’re creating an OS installation image for Wind Tunnel runners, allowing any spare computer to be used for Wind Tunnel testing. This will let people support Holochain by adding their compute power to our own network.

Holochain Horizon livestream

If you’re reading this, you probably care about more than just the state of Holochain development. We’re starting a series of livestreams that talk about things like where the Holochain Foundation is headed and what’s happening in the Holochain ecosystem.

The first one, a fireside chat between Eric Harris-Braun, the executive director of the Foundation, and Madelynn Martiniere, the Foundation’s newest council member and ecosystem lead, was on Wed 30 Jul at 15:00 UTC. Watch the replay on YouTube.

Next Dev Office Hours call: Wed 17 Sept

Join us at the next Dev Office Hours on Wednesday 17 Sept at 16:00 UTC — it’s the best place to hear the most up-to-date info about what we’ve been working on and ask your burning questions. We have these calls every two weeks on the DEV.HC Discord, and the last one was full of great questions and conversations. See you there next time!


Indicio

From paper to Proven: what the EUDI wallet means for the secure document printing industry

The shift to digital identity is accelerating and 2026 will be a critical year for the security printing and paper businesses. Now is the time to prepare.

By Helen Garneau

For decades, trust has been printed. Passports, ID cards, certificates, and other official, government-issued, security-printed documents have been how people prove who they are. The European Digital Identity (EUDI) wallet signals the end of the era of exclusively paper- and plastic-based identity.

The regulation, with mandates and new technologies rolling out within the next year, introduces a way for citizens, residents, and businesses to securely share digital identity data in the form of Verifiable Credentials across all EU member states; banking, travel, enterprises, and government services are already piloting credential implementations.

As with many transformative technologies, change happens slowly and then very fast.  

Companies that adapt quickly will stay relevant and leverage digital identity to deliver better products and services and innovate around seamless authentication and digital trust. Those that delay risk being left behind.

The question for companies in the secure document printing market is: how do you avoid obsolescence when cryptography can make digital credentials every bit as trustworthy as the most secure physical document?

But while the EUDI wallet framework architecture describes Verifiable Credentials, a digital identity technology that is interoperable, secure, and easy to use, the shift to digital identity doesn’t spell the end of physical documents.

Position for the great transition

The next few years will see a transition to verifiable digital identity and verifiable digital data, and identity documents are the on-ramp. A key example: the International Civil Aviation Organization (ICAO) specifications for Digital Travel Credentials start with self-derived credentials (DTC-1). A person extracts the data from their passport’s RFID chip, the image in the chip is compared against a real-time liveness check of the person scanning the passport, and a digital credential version of the passport is issued. The credential can then be validated to confirm the data came from an official government source. Travelers will still need their physical passport when they travel, but only as a backup.
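Schematically, that self-derived (DTC-1) flow looks something like the sketch below. Every function here is a hypothetical stand-in for real NFC, PKI, and biometric components rather than Indicio's API; it only traces the order of operations described above.

```python
from dataclasses import dataclass

# Hypothetical sketch of a DTC-1 "self-derived" travel credential flow.
# All helpers are stand-ins, not a real SDK.

@dataclass
class ChipData:
    holder_data: dict
    portrait: bytes
    signature_valid: bool

def read_rfid_chip(nfc_scan) -> ChipData:
    # Stand-in: a real implementation parses ICAO Doc 9303 data groups over NFC.
    return ChipData({"name": "A. Traveller"}, b"portrait-bytes", True)

def liveness_check(selfie_video) -> bool:
    return True  # stand-in for a real-time liveness detector

def face_match(portrait: bytes, selfie_video) -> bool:
    return True  # stand-in for a biometric face comparison

def derive_travel_credential(nfc_scan, selfie_video) -> dict:
    chip = read_rfid_chip(nfc_scan)
    if not chip.signature_valid:  # confirm data came from the issuing government's PKI
        raise ValueError("chip data is not from an official government source")
    if not liveness_check(selfie_video):  # a real person, not a photo or replay
        raise ValueError("liveness check failed")
    if not face_match(chip.portrait, selfie_video):  # bind the holder to the document
        raise ValueError("holder does not match the passport")
    # DTC-1: the holder derives the credential themselves, backed by chip data.
    return {"type": "DigitalTravelCredential", "subject": chip.holder_data}

print(derive_travel_credential(b"nfc-scan-bytes", b"selfie-video-bytes"))
```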

The next step will be governments directly issuing digital passport credentials (following DTC-2 specifications) along with a person’s physical passport. The person will still need this physical passport when they travel.

In both cases, the digital passport credential will do all the heavy lifting in terms of identity authentication that enables the passenger to seamlessly check-in, access a lounge, cross a border, pick up a rental car, and check into their hotel. 

After these have been successfully implemented, we’ll move to a DTC-3 type credential — a fully digital passport where no physical back up is required. 

Where are we in the transition process? Well, with Indicio Proven, governments are able to issue DTC-2 type credentials. Expect to see them soon.

Driver’s licenses, diplomas

It’s not just passports that are being digitalized. The same liveness check and face-mapping used for DTCs can be applied to other government-issued documents, such as driver’s licences, and Optical Character Recognition can read the data in the absence of an RFID chip. More US states are adopting Mobile Driver’s Licenses (mdoc/mDL), while the European Union expects this standard to be implemented in Europe by 2030.

One snag in this rollout is that many mDL implementations don’t include the verification software businesses need to validate digital versions, so those businesses still rely on physical driver’s licenses for customer identity authentication. If you want an mDL with simple, mobile, scalable verification, Indicio Proven has you covered.

Diplomas, degrees, course transcripts and certificates are also being rendered as tamper-proof digital credentials through the Open Badges 3.0 specification. While their physical counterparts are not secured in the same way as government-issued identity, the Open Badges 3.0 standard makes these documents impossible to fake, binds them to their rightful holders, and renders them instantly verifiable.

The key to managing the transition to digital identity documents is to make it easy to turn existing physical documents into digital credentials. And this is where Indicio Proven is unique in the marketplace.

Indicio Proven: your bridge from the physical to digital

Indicio Proven® gives printing companies a direct path into the digital era by transforming secure physical documents into Verifiable Credentials, the same technology outlined in the EUDI specification.

With Proven, your physical products become anchors, on-ramps, or companions to digital credentials. Passports can be turned into DTCs, and more than 15,000 types of identity documents from 250+ countries and territories can be credentialized. Driver’s licenses and other official documents can also be validated, bound with biometrics, and issued as tamper-proof digital Verifiable Credentials that are:

- Fraud-resistant and cryptographically secure
- Combined with biometrics and stored on the individual’s own device
- Portable across borders
- Instantly verifiable without complex checks

Proven is a fast, simple, and cost-effective way to extend your role in the EUDI realm today that helps your customers:

- Save costs by reducing manual checks
- Protect against fraud with secure digital credentials
- Unlock new revenue by offering digital trust services alongside physical products

This technology also opens the door to offering new services in identity verification. When passports become Digital Passport Credentials and driver’s licenses become mobile driver’s licenses, organizations like financial institutions, airlines, and government agencies can verify and trust the information. Processes that were once inefficient and cumbersome—such as age verification, KYC, and cross-border travel—become seamless, premium services that create value and potential revenue streams every time they’re issued and verified.

The next chapter for printing and paper

Physical cards and certificates will not disappear overnight, but their primary value will shift. And that doesn’t mean paper-based industries are left out—your expertise in trust, security, and document integrity is more valuable than ever. 

Proven makes this transition easy, enabling your business to grow as identity goes digital. With Indicio, you can carry that expertise into the digital age and position your company at the center of the EUDI wallet revolution.

The world is moving from paper to Proven. The opportunity is here—are you ready to take it? 

Contact us today to get your complimentary EUDI digital identity strategy from one of our experts.

###

The post From paper to Proven: what the EUDI wallet means for the secure document printing industry appeared first on Indicio.


Ontology

How Smart Accounts Are Reinventing The Web3 Wallet


If you’ve ever used a crypto wallet like MetaMask, you’ve used an externally owned account (EOA). It’s a simple pair of keys: a public address that acts as your identity and a private key that proves you own it. This model is powerful but rigid, putting the entire burden of security and complexity on the user. Lose your seed phrase? Your funds are gone forever. Find transactions confusing? The ecosystem has little flexibility to help.

A new standard is emerging to solve these problems, moving us from rigid key-based wallets to programmable, user-friendly interfaces. The answer is smart accounts.

What is a smart account?

A smart account (or smart wallet) is not controlled by a single private key. Instead, it is a smart contract that acts as your wallet. This shift from a key-based account to a contract-based account is revolutionary because smart contracts are programmable. They can be designed to manage assets and execute transactions based on customizable logic, enabling features that were previously impossible.

This transition is powered by account abstraction (AA), a concept that “abstracts away” the rigid requirements of EOAs, allowing smart contracts to initiate transactions. While the idea isn’t new, it recently gained mainstream traction thanks to a pivotal Ethereum standard: EIP-4337.

EIP-4337 (the game changer)

EIP-4337: Account Abstraction via Entry Point Contract achieved something critical: it brought native smart account capabilities to Ethereum without requiring changes to the core protocol. Instead of a hard fork, it introduced a higher-layer system that operates alongside the main network.

Here’s how it works:

- UserOperations: You don’t send a traditional transaction. Instead, your smart account creates a UserOperation — a structured message that expresses your intent.
- Bundlers: These network participants (such as block builders or validators) collect UserOperation objects, verify their validity, and bundle them into a single transaction.
- Entry Point Contract: A single, standardized smart contract acts as a gatekeeper. It validates and executes these bundled operations according to the rules defined in each user’s smart account.

This system is secure, decentralized, and incredibly flexible.
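To make the flow concrete, here is a rough sketch of the fields an ERC-4337 (v0.6) UserOperation carries, written as a plain Python dict. The values are placeholders; in practice a wallet SDK computes, estimates, and signs these fields for you.

```python
# A rough sketch of an ERC-4337 (v0.6) UserOperation as a Python dict.
# Values are illustrative placeholders, not real addresses or signatures.
user_operation = {
    "sender": "0xSmartAccount...",        # the smart account contract address
    "nonce": 7,                           # anti-replay counter tracked by the EntryPoint
    "initCode": "0x",                     # factory call data if the account isn't deployed yet
    "callData": "0x...",                  # what the account should execute (transfer, swap, ...)
    "callGasLimit": 100_000,              # gas for the execution phase
    "verificationGasLimit": 150_000,      # gas for the account's own validation logic
    "preVerificationGas": 21_000,         # overhead the bundler can't meter on-chain
    "maxFeePerGas": 30_000_000_000,       # EIP-1559-style fee fields
    "maxPriorityFeePerGas": 1_000_000_000,
    "paymasterAndData": "0x",             # set when a paymaster sponsors gas ("gas abstraction")
    "signature": "0x...",                 # checked by the account's logic, not the base protocol
}

# Bundlers collect many of these and submit them together via the EntryPoint
# contract in a single on-chain transaction.
```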

Other key proposals (EIP-3074 and EIP-7702)

The journey to account abstraction has involved other proposals, each with different approaches.

- EIP-3074: This proposal aimed to allow existing EOAs to delegate control to smart contracts (called invokers). While simpler in some ways, it raised security concerns due to the power given to invoker contracts. It has since been paused.
- EIP-7702: Proposed by Vitalik Buterin, this upgrade would allow an EOA to temporarily grant transaction permissions to a smart contract. It offers a more elegant and secure model than EIP-3074 and may complement — rather than replace — the infrastructure built around EIP-4337.

For now, EIP-4337 is the live standard that developers and wallets are adopting.

Why smart accounts matter

The real value of smart accounts lies in the user experience and security improvements they enable.

- Gas abstraction: Apps can pay transaction fees for their users or allow payment via credit card, removing a major barrier to entry.
- Social recovery: Lose your device? Instead of a single seed phrase, you can assign “guardians” — other devices or trusted contacts — to help you recover access.
- Batch transactions: Perform multiple actions in one click. For example, approve a token and swap it in a single transaction instead of two.
- Session keys: Grant limited permissions to dApps. A game could perform actions on your behalf without being able to withdraw your assets.
- Multi-factor security: Require multiple confirmations for high-value transactions, just like in traditional banking.

The future is programmable

Smart accounts represent a fundamental shift in how we interact with blockchains. They replace the “all-or-nothing” key model with programmable, flexible, and user-focused design. Major wallets like Safe, Argent, and Braavos are already leading the way, and infrastructure from providers like Stackup and Biconomy is making it easier for developers to integrate these features.

We’re moving beyond the era of the seed phrase. The future of Web3 wallets is smart, secure, and designed for everyone.

How Smart Accounts Are Reinventing The Web3 Wallet was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


liminal (was OWI)

Turning Competitive Intelligence into Messaging That Wins (with examples)

Why competitive intelligence often fails in messaging

I’ve seen it firsthand: competitor battlecards stacking up in shared drives, analyst PDFs collecting dust, and persona research tucked into charts that never see daylight. It’s easy to feel overwhelmed by the noise and unsure where to even start. I’ve been there more times than I’d like to admit. The problem isn’t a lack of data; it’s the ability to digest it and translate it into messaging that actually differentiates. Without that step, teams fall back on the same empty claims: “innovative,” “customer-first,” “the most trusted.” Buyers tune it out. Competitive intelligence only works when it becomes narrative. The raw material exists: persona insights, competitor positioning, feature data – but without the right framework, it collapses into noise. In fact, 83% of B2B buyers now expect personalization on par with consumer experiences, which means vague promises no longer earn attention.

The three pillars of messaging that stand out

Buyer persona insights

Great messaging doesn’t start with features; it starts with people. Early on, I wrote messaging as if “the buyer” was a monolith. It fell flat. A CMO trying to differentiate a brand doesn’t think like a sales leader trying to speed up onboarding. Persona-based marketing insights can surface those distinctions, but the job of messaging is to speak to those specific goals and pain points, not to the broadest common denominator.

Competitor messaging & positioning

Copycat messaging is the silent killer of differentiation. Throw the first stone if you’ve never obsessed over a competitor’s launch while paying too little attention to how they positioned their value. Competitive benchmarking is useful, but not if it leads you to recycle the same message with a “we do it better” twist. The real win comes from understanding where you truly differentiate and telling the story of why that matters in the first place.

Feature differentiation that resonates

I used to think listing every capability would convince buyers, but it never did. Features only matter when they connect to buyer outcomes that feel tangible. In fraud prevention, that might mean reducing chargeback losses by 40%. In cybersecurity, it might mean cutting breach detection time in half. The point is not to list what your product does but to anchor why it matters in the buyer’s world, and only nerd about the specifics once you have their undivided attention.

Generic vs persona-informed messaging

To show the difference, here’s a snapshot of how messaging shifts when intelligence is applied. Generic copy focuses on features and broad claims, while persona-informed messaging uses ICP data and persona pain points to connect with specific buyers.

| Domain | Generic Message | Persona-Informed Message | Persona Example |
|---|---|---|---|
| Fraud Prevention | “We help enterprises stop fraud before it happens by detecting suspicious activity, flagging risky transactions, and protecting customer accounts. Our platform is designed to keep your business safe and secure.” | “You’re responsible for revenue protection across global sales flows, which means chargebacks and payment fraud land on your desk. Teams like yours cut chargeback losses by 40% with real-time fraud alerts that protect revenue without slowing deals. Buyers expect both outcomes: silent protection and measurable margin impact.” | VP of Sales, BDR Leader |
| Financial Crimes Compliance (AML/KYC) | “We help compliance teams stay audit-ready with AML and KYC tools that reduce risk, cut down on false positives, and keep your business aligned with evolving regulations.” | “As Chief Compliance Officer, you know false positives are the hidden tax on your team. Cutting them by 50 percent means analysts focus on true risk while you stay audit-ready against FATF and DOJ scrutiny. Clients report faster SAR filing cycles and stronger exam outcomes that regulators can see.” | Chief Compliance Officer |
| Cybersecurity / Threat Intelligence | “We help enterprises stay ahead of account takeover, session hijacking, and phishing attacks with advanced detection and monitoring that safeguard sensitive data and protect customer accounts.” | “Your bottleneck probably isn’t a lack of MFA; it’s gaps in mobile session integrity and weak recovery bindings. Leading platforms now combine FIDO2 passkeys, device certificates, runtime attestation, and behavioral biometrics into a single API. Results often show 90–99% reductions in ATO flows and deployments measured in weeks, not quarters, while fitting directly into CI/CD pipelines.” | CISO |
| Trust & Safety (Age Assurance, Platform Integrity) | “We help platforms create safe online spaces by stopping fake accounts, preventing underage sign-ups, and protecting users from harmful activity. Our solution builds trust across your community.” | “You’ve grown marketplaces quickly, but fake accounts and underage signups erode trust as fast as growth builds it. Trust & Safety leaders block fraudulent accounts at scale, improving conversion while lifting NPS. Clients see measurable drops in fake account creation alongside sustained growth.” | Head of Trust & Safety |
| Risk Management | “We help companies manage third-party risk by identifying potential vulnerabilities, monitoring vendor compliance, and providing visibility across your supply chain.” | “Your mandate is to catch vendor risk before it turns into tomorrow’s crisis. Risk leaders using continuous monitoring spot supplier red flags weeks earlier. That foresight prevents compliance failures and costly breaches that would otherwise reach the boardroom.” | CRO, Risk Manager |

This table turns the theory into practice: with competitive intelligence in play, messaging shifts from broad and forgettable to precise and compelling.

The challenge, of course, is scale. Tailoring a handful of persona-informed messages is one thing. Refreshing them continuously across dozens of campaigns, competitors, and markets is another. That’s where AI-enhanced intelligence platforms become indispensable. By monitoring live market signals, competitor narratives, and persona insights, AI can help us surface fresh message updates, stress-test positioning, and keep playbooks aligned with the market, so teams never slip back into generic messaging.

A framework for refreshing messaging without reinventing the wheel

High-performing teams do not wait for annual off-sites to rethink their messaging. They run refreshes as an ongoing discipline. So, how do we actually keep messaging fresh without burning cycles? Here is a practical process that has worked for us:

1. Collect signals continuously – competitor launches, persona survey data, market shifts.
2. Map signals to differentiation – identify where buyer priorities intersect with unique strengths.
3. Stress-test narratives – run them through sales conversations, campaign pilots, and post-call analytics.
4. Refresh, don’t rewrite – evolve messaging every few weeks, not every few quarters.

The result is messaging that stays alive, tuned to the market, and sharper than the competition.

How leading teams operationalize competitive intelligence

It’s one thing to know the process, another to make it work at scale. The best GTM teams operationalize competitive intelligence through three capabilities:

1. Always-on market signals

Static PDFs cannot keep up with dynamic markets. Teams that win track real-time signals (funding rounds, regulatory shifts, competitor campaigns) and feed them straight into campaign planning.

2. Persona-level insights at scale

Instead of treating personas as theater, leading teams embed real-time buyer insights into campaigns and sales workflows. Every refresh reflects what buyers are actually thinking now, not last year.

3. Embedded intelligence in workflows

Intelligence only works if it lives where teams work: Slack alerts pushing industry shifts in real time, SEO content built on market truth, email campaigns aligned with buyer signals, and sales calls armed with live AI intelligence. Intelligence becomes actionable in the moment, not theoretical.

The challenge of messaging in niche markets

As adoption grows, so does the data: companies using competitive intelligence report a 15% boost in revenue growth. Platforms like Link are built to deliver these capabilities, from event monitoring and perpetual surveys to dynamic playbooks and post-call analytics. The real challenge is not more data, but the right data — intelligence that is specific enough to your market to make messaging credible and differentiated.

And this is where it gets tricky in niche markets. Sure, we can create a neat competitive battlecard, but what do we actually put on it if we don’t understand how the ICP is behaving in the real world? We can send a well-designed email, but if the target is a cybersecurity leader, they might care more about an upcoming TPRM webinar than a case study from the banking sector. The reality is that without specific, contextual intelligence and the right segmentation, even polished campaigns miss the mark.

At the end of the day, buyers don’t want platitudes; they want proof. In specialized markets, the cost of undifferentiated messaging isn’t just lost deals, it’s lost trust and stalled growth.

Key Takeaways

- Competitive intelligence fails when it sits in decks and PDFs. It only creates value when it fuels differentiated narratives buyers actually hear.
- Messaging that stands out comes from three things: persona insights, competitor positioning, and outcomes buyers can measure.
- Refreshing messaging is not a one-off exercise. The teams that win treat it as an ongoing discipline.
- Intelligence has to live where teams work: in Slack alerts, sales calls, campaigns, and content, so it becomes actionable in the moment.
- In niche markets, buyers don’t want platitudes, they want proof. Miss that, and you lose both deals and trust.

The post Turning Competitive Intelligence into Messaging That Wins (with examples) appeared first on Liminal.co.


Spherical Cow Consulting

Who Really Pays When AI Agents Run Wild? Incentives, Identity, and the Hidden Bill


“Google recently gave us something we’ve been waiting on for years: hard numbers on how much energy an AI prompt uses.”

According to their report, the median Gemini prompt consumes just 0.24 watt-hours of electricity — roughly the energy of running a microwave for a second — along with a few drops of water for cooling.

On its face, that sounds almost negligible. But the real story isn’t the number itself. It’s about incentives: who benefits, who pays, and how those dynamics shape how we deploy AI.

A Digital Identity Digest podcast episode: Who Really Pays When AI Agents Run Wild? Incentives, Identity, and the Hidden Bill (11:21).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

A history lesson from the cloud

To understand how incentives can blindside us, let’s revisit the cloud computing boom. You remember that, right? “Save all the money! Get rid of your datacenter! Cloud computing ftw!”

In 2021, Sarah Wang and Martin Casado of Andreessen Horowitz published “The Cost of Cloud: A Trillion-Dollar Paradox.” They showed how cloud services, while indispensable for speed and agility, became a drag on profitability at scale. Dropbox famously repatriated workloads from the public cloud and saved $75 million over two years — a shift that doubled their gross margins from 33% to 67%. CrowdStrike and Zscaler adopted hybrid approaches for similar reasons.

The takeaway: Early incentives reward adoption. But when the bills grow large enough, cost discipline suddenly becomes a board-level issue. By then, inefficiency is already baked into operations.

AI energy use is following the same arc. Vendors and enterprises alike are celebrating adoption, but the hidden costs are waiting to surface.

The incentives for vendors

AI vendors want mass adoption, and their incentives reflect that. They’ll emphasize efficiency gains — like Gemini’s 33-fold reduction in energy per query from 2024 to 2025, according to their recent report — but those are selective disclosures.

As the MIT Tech Review story “In a first, Google has released data on how much energy an AI prompt uses” pointed out, disclosures become marketing tools without standardized metrics. Vendors reveal what flatters them, not necessarily what helps customers make better choices.

And the race to ship bigger, more capable models only deepens this misalignment. Scale brings revenue. The energy, water, and carbon costs? Those are someone else’s problem.

The incentives for enterprises

Enterprises often don’t see the full picture either. A cloud invoice hides the per-prompt costs. IAM and security teams grant permissions to agents, but they don’t own the sustainability budget. Sustainability teams, meanwhile, don’t have visibility into permissions and entitlements.

The result: over-provisioning goes unnoticed. AI agents are allowed to “just run,” and every permissioned action quietly consumes resources. Those costs add up, but they land in someone else’s ledger, often long after the decisions were made.

This is the same organizational mismatch cloud adoption created: IT ops pays the bill, developers get the flexibility, and the CFO finds out later. AI is just the next chapter.

Incentives and regulation

Here’s where things start to change. Environmental, Social, and Governance (ESG) reporting isn’t optional anymore; regulators are giving incentives real teeth.

- United States: The SEC’s new climate disclosure rule requires large public companies to report greenhouse gas emissions. Failure to comply has already resulted in multimillion-dollar fines for ESG misstatements, like Deutsche Bank’s $19M settlement.
- Europe: The EU’s Corporate Sustainability Reporting Directive (CSRD) sets steep penalties. In Germany, fines can reach €10 million or 5% of turnover. In France, executives risk prison time for obstructing disclosures.
- Australia: Directors must certify sustainability data as part of financial filings. Failure to comply can trigger civil penalties in the hundreds of millions, with individuals personally liable for up to AUD 1.565 million.

None of this is about fearmongering. (OK, maybe it’s a little bit of fearmongering in the hope of catching your attention.) But it’s also reality. Boards are now directly accountable for climate and resource disclosures. AI usage may feel “small” at the per-prompt level, but at enterprise scale, it becomes part of that regulatory picture.

Where identity comes in

So where does identity fit?

Every AI-agent action isn’t just a governance event; it’s also a consumption event. Permissions are no longer just about who can do what. They’re also about what we’re willing to pay, financially and environmentally, for them to do it.

Standing access matters here, too. A human user with unused entitlements is a risk; an AI with broad entitlements is a resource leak. It will happily keep churning until someone tells it to stop — and by then the costs have already piled up.

Imagine if your audit logs evolved to show not just “who accessed what,” but “how much energy and water those actions consumed.” It sounds futuristic, but sustainability reporting is heading in that direction. IAM teams may find themselves pulled into ESG conversations whether they want to be or not.

Runtime governance as sustainability

Earlier, I argued that runtime governance is essential when AIs can act faster than human oversight cycles. Here’s the sustainability angle: runtime checks can throttle not just security risks, but waste.

- Deny agents the ability to hammer a system with brute-force permutations.
- Flag actions that consume far more resources than typical queries.
- Revoke unnecessary entitlements before they become both a risk and an expense.

Governance is shifting from “is this allowed?” to “is this worth it?”
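To make “is this worth it?” concrete, here is a minimal sketch of a runtime policy check that weighs an agent’s request against both its entitlements and a resource budget. The function names, cost figures, and units are hypothetical, invented for illustration rather than drawn from any real IAM product.

```python
from dataclasses import dataclass

@dataclass
class AgentBudget:
    """Hypothetical per-agent resource envelope (units are illustrative)."""
    max_wh_per_day: float = 500.0  # energy budget in watt-hours
    spent_wh: float = 0.0

# Assumed grants and per-action cost estimates -- both invented for this sketch.
ENTITLEMENTS = {"report-bot": {"read:crm", "run:summary"}}
EST_COST_WH = {"read:crm": 0.3, "run:summary": 2.0, "run:bulk-scan": 400.0}

def authorize(agent: str, action: str, budget: AgentBudget) -> bool:
    """Grant an action only if it is both allowed and worth it."""
    if action not in ENTITLEMENTS.get(agent, set()):
        return False  # the classic question: "is this allowed?"
    cost = EST_COST_WH.get(action, float("inf"))
    if budget.spent_wh + cost > budget.max_wh_per_day:
        return False  # the new question: "is this worth it?"
    budget.spent_wh += cost
    # An audit log could record consumption alongside access:
    print(f"AUDIT agent={agent} action={action} cost={cost}Wh total={budget.spent_wh}Wh")
    return True

budget = AgentBudget()
assert authorize("report-bot", "run:summary", budget)        # granted and affordable
assert not authorize("report-bot", "run:bulk-scan", budget)  # never granted at all
```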

Bridging past lessons with today’s challenges

The hidden costs of the cloud were supposed to teach us that efficiency ignored eventually becomes inefficiency entrenched. I’m not convinced people and organizations have learned that lesson, but regardless, AI is repeating that story, with energy, water, and carbon as the currencies.

Like cloud spend, AI resource usage may start small, but it scales faster than oversight cycles. And when regulations demand transparency, boards will want answers.

Identity leaders are uniquely positioned here. Permissions are the gate between an agent’s intent and its actions. Expanding the governance lens to include consumption could help organizations stay ahead of both the bills and the regulators.

Putting it together

So let’s put this together:

- Vendors are incentivized by adoption and scale, not efficiency.
- Enterprises have silos that hide true costs.
- Regulators are introducing real penalties for climate and resource misstatements.
- Identity teams are sitting at the chokepoint, granting permissions that double as consumption choices.

The shift isn’t about turning identity professionals into sustainability officers. It’s about recognizing that incentives travel with permissions. And when permissions scale through AI, the hidden costs travel with them.

So here’s my question for you: have you seen incentives around AI use in your organization, good or bad? And if so, how did those incentives shape the choices your teams made?

Because incentives aren’t just a policy issue or a compliance box. They’re the difference between governance you can explain to your board and governance you only notice when the bill or the fine arrives.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

[00:00:29] Hi everyone, and welcome back to A Digital Identity Digest. I’m Heather Flanagan, and today we’re going to talk about something that’s only just starting to make the headlines: what happens when AI agents run wild—and who actually ends up footing the bill.

Spoiler alert: it’s probably not the vendors themselves, and it’s probably not who you think inside your own organizations either.

[00:00:53] In this episode, we’ll explore:

- The incentives driving AI adoption
- The role of identity in hidden costs
- The growing regulatory landscape around sustainability

Setting the Stage

[00:01:04] What inspired today’s conversation is a recent Google report that finally revealed some long-awaited data: how much energy a single AI prompt consumes.

[00:01:20] Their findings? The median Gemini prompt uses about 0.24 watt hours of electricity.

[00:01:28] To put it in perspective:

- That’s like running your microwave for one second, plus a few drops of water for cooling.
- At first glance, it seems tiny. But at scale, millions of these “drops in the ocean” can eventually flood entire continents.

[00:01:46] The real story isn’t about that single number. Instead, it’s about the incentives behind those numbers—who benefits, who pays, and how those dynamics shape AI deployment.

Lessons from the Cloud

[00:01:57] To understand today’s AI landscape, let’s rewind to the early days of cloud computing. Remember the pitch? “Save money, get rid of your data center—cloud computing for the win.”

[00:02:20] But by 2021, Sarah Wang and Martin Casado at Andreessen Horowitz highlighted the Trillion Dollar Paradox:

- Cloud was amazing for speed and agility.
- Yet at scale, it dragged on profitability.

[00:02:30] Dropbox learned this firsthand, repatriating workloads from the public cloud and saving $75 million over two years—doubling their margins in the process.

[00:02:51] The key lesson? Early incentives reward adoption. But once costs balloon, discipline becomes a board-level issue.

[00:03:10] AI is following the same arc. We’re in the “woohoo adoption” phase now, but hidden costs are waiting to catch up.

Vendor Incentives

[00:03:24] Let’s start with the incentives for LLM vendors. These are crystal clear: encourage mass adoption.

[00:03:33] Vendors emphasize efficiency gains. Google bragged about a 33-fold reduction in energy per query between 2024 and 2025.

[00:03:43] Sounds impressive. But disclosures are:

- Not standardized
- Highly selective
- Designed to flatter the vendor, not inform customers

[00:03:53] Meanwhile, the race for bigger, flashier, more capable models continues. The revenue comes in, but the energy, water, and carbon costs are left as someone else’s problem.

Enterprise Incentives

[00:04:09] For enterprises, the picture is murkier. Why? Because:

- Cloud invoices hide the per-prompt cost.
- IAM and security teams grant permissions but don’t own the sustainability budget.
- Sustainability teams lack visibility into entitlements.

[00:04:34] The result?

- Over-provisioning goes unnoticed.
- AI agents run unchecked.
- Bills land on someone’s desk long after the fact—often someone who had no say in granting permissions.

[00:04:58] This is déjà vu from the cloud era. Ops pays the bill, developers enjoy flexibility, and the CFO discovers the hit too late.

Regulators Enter the Chat

[00:05:03] Unlike the early cloud days, regulators are already watching. ESG (Environmental, Social, and Governance) reporting is now mandatory in many regions.

[00:05:15] Examples include:

- United States: SEC Climate Disclosure Rule, with fines already issued (e.g., Deutsche Bank’s $19M settlement).
- Europe: Corporate Sustainability Reporting Directive (CSRD), with penalties up to €10 million or 5% of turnover.
- France: Executives can face prison time for obstructing disclosures.
- Australia: Civil penalties can reach hundreds of millions, with directors personally liable.

[00:06:20] This isn’t fearmongering—it’s reality. Boards are accountable, and one AI prompt may seem trivial, but multiplied across millions of queries, it becomes a regulatory reporting item.

Where Identity Comes In

[00:06:38] Every AI agent action is more than a governance event—it’s also a consumption event.

- Permissions = not just who can do what, but what we’re willing to pay financially and environmentally.
- An unused human entitlement is a risk. An AI with broad entitlements is a resource leak that runs until stopped.

[00:07:15] Imagine if audit logs didn’t just say who accessed what, but also recorded how much energy and water were consumed.

[00:07:24] That may sound futuristic, but sustainability reporting is moving that way. IAM teams could soon be pulled into ESG discussions—whether they feel it’s their role or not.

Governance Shifts

[00:07:37] Governance isn’t just about security anymore. With AI, it’s about balancing risk and resource consumption.

- Runtime checks can throttle wasteful AI actions.
- Agents can be denied brute-force or high-cost queries.
- Entitlements can be revoked before they pile up into risks—or expenses.

[00:08:07] Governance now asks not only “Is this allowed?” but also “Is this worth it?”

History Repeats Itself

[00:08:14] Cloud should have taught us that ignored inefficiency becomes entrenched inefficiency. Once it’s embedded in infrastructure, it’s painfully hard to extract.

[00:08:38] AI is repeating that story—with water, energy, and carbon as the new currencies.

[00:08:54] When regulators demand transparency, boards will expect clear, defensible answers. And that’s where identity leaders can step up.

[00:09:01] Permissions sit at the choke point between agent intent and agent action. Expanding governance to include consumption metrics gives organizations a head start on both the bills and regulatory scrutiny.

Bringing It All Together

[00:09:16] To recap:

- Vendors chase adoption and scale, not efficiency.
- Enterprises operate in silos that hide true costs.
- Regulators are introducing significant penalties for ESG misstatements.
- Identity teams control permissions, which now double as consumption risks.

[00:09:41] IAM professionals don’t need to become sustainability officers. But they must recognize that incentives travel with permissions—and when AI scales, costs scale too.

[00:09:57] So here’s the key question:
Have you seen incentives around AI use in your organization—good or bad? And how are those incentives shaping your team’s decisions?

Because incentives aren’t just about compliance checkboxes. They’re the difference between proactive governance you can explain to your board and reactive governance you only notice when the bill—or the fine—lands on your desk.

Closing Thoughts

[00:10:23] That’s it for this episode of A Digital Identity Digest. If you found it useful, subscribe to the podcast or visit the written blog at sphericalcowconsulting.com for reference links.

[00:10:45] If this episode brought clarity—or at least sparked curiosity—share it with a colleague and connect with me on LinkedIn at lflanagan. Don’t forget to subscribe and leave a review on Apple Podcasts or wherever you listen.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Who Really Pays When AI Agents Run Wild? Incentives, Identity, and the Hidden Bill appeared first on Spherical Cow Consulting.


iComply Investor Services Inc.

AML in Real Estate: Source of Funds, Identity, and Global Risk Controls

From complex ownership to offshore funding, real estate is high-risk for money laundering. This guide shows how iComply helps brokers, lawyers, and lenders simplify AML compliance across jurisdictions.

Real estate professionals face rising AML scrutiny across markets. This article breaks down identity verification, source of funds, and beneficial ownership rules in the U.S., Canada, UK, EU, and Australia – and shows how iComply helps automate compliance across agents, lawyers, and lenders.

Real estate is a prime target for financial crime. High-value transactions, opaque ownership structures, and limited oversight have made the sector vulnerable to money laundering worldwide.

From regulators to investigative journalists, scrutiny is intensifying and compliance expectations are evolving. Brokers, lawyers, developers, mortgage professionals, and title companies all have a role to play.

Shifting AML Expectations in Real Estate

United States
Regulators: FinCEN, state real estate commissions
Requirements: Geographic targeting orders (GTOs), beneficial ownership reporting (CTA), SARs, and KYC for buyers and entities

Canada
Regulators: FINTRAC, provincial real estate councils
Requirements: KYC, source of funds verification, PEP/sanctions screening, STRs, and compliance program requirements (as reinforced by the Cullen Commission)

United Kingdom
Regulators: HMRC, FCA (for lenders), SRA (for law firms)
Requirements: Client due diligence, UBO checks, transaction monitoring, and compliance under MLR 2017

European Union
Regulators: National AML authorities under AMLD6
Requirements: Risk-based customer due diligence, UBO transparency, STRs, and GDPR-aligned reporting

Australia
Regulator: AUSTRAC (legislation pending for real estate-specific coverage)
Requirements: AML risk management for law firms, lenders, and trust accounts; expected expansion to include property professionals

Real Estate-Specific Risk Factors

1. Complex Ownership Structures
Use of shell companies, nominees, and trusts can obscure true buyers.

2. Source of Funds Obscurity
Large cash deposits or offshore funding require enhanced scrutiny.

3. Multi-Party Transactions
Buyers, sellers, agents, lawyers, lenders, and developers often use disconnected systems.

4. Regulatory Patchwork
Requirements vary by jurisdiction and professional role.

How iComply Helps Real Estate Professionals Stay Compliant

iComply enables unified compliance across real estate workflows—from individual onboarding to multi-party coordination.

1. Identity and Entity Verification
- KYC/KYB onboarding via secure, white-labeled portals
- Support for 14,000+ ID types in 195 countries
- UBO discovery and documentation

2. Source of Funds Checks
- Collect and validate financial statements, employment records, or declarations
- Risk-based automation of EDD triggers (see the sketch below)
- Document retention for regulator inspection

3. Sanctions and Risk Screening
- Real-time screening of all participants (buyers, sellers, brokers, law firms)
- Automated refresh cycles and trigger alerts

4. Cross-Party Case Collaboration
- Connect agents, legal counsel, and lenders in a single audit-ready file
- Assign roles, track tasks, and escalate within shared dashboards

5. Data Residency and Privacy Compliance
- Edge computing ensures PII is encrypted before upload
- Compliant with PIPEDA, GDPR, and U.S. state laws
- On-premise or cloud deployment options
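As a simple illustration of what risk-based EDD automation can look like, the sketch below flags a transaction for enhanced due diligence when source-of-funds signals cross assumed thresholds. The rules, thresholds, and field names are invented for illustration and are not iComply's actual logic.

```python
# Hypothetical source-of-funds risk rules; thresholds are illustrative only.
HIGH_RISK_JURISDICTIONS = {"XX", "YY"}  # placeholder country codes
CASH_THRESHOLD = 10_000                 # large-cash trigger, in local currency

def needs_edd(tx: dict) -> list[str]:
    """Return the triggers that escalate a deal to enhanced due diligence."""
    triggers = []
    if tx.get("cash_amount", 0) >= CASH_THRESHOLD:
        triggers.append("large cash deposit")
    if tx.get("funding_country") in HIGH_RISK_JURISDICTIONS:
        triggers.append("offshore/high-risk funding source")
    if tx.get("buyer_is_entity") and not tx.get("ubo_verified"):
        triggers.append("unverified beneficial ownership")
    return triggers

deal = {"cash_amount": 250_000, "funding_country": "XX", "buyer_is_entity": True}
print(needs_edd(deal))  # all three triggers fire; case routed to EDD review
```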

Case Insight: Vancouver Brokerage

A Canadian real estate firm used iComply to digitize ID checks and SoF verification for domestic and foreign buyers:

- Reduced onboarding time by 65%
- Flagged two nominee structures linked to offshore trusts
- Passed a FINTRAC audit with zero deficiencies

Final Take

Real estate professionals can no longer afford fragmented compliance. With global pressure mounting, smart automation ensures faster onboarding, better oversight, and fewer audit risks.

Talk to iComply to learn how we help brokers, lawyers, and lenders unify AML workflows – without slowing down the deal.


PingTalk

Accelerating Financial Service Innovation With Identity-Powered Open Banking in the Americas

Explore how financial institutions across the Americas are using open banking and identity-powered APIs to drive innovation, enhance security, and deliver personalized customer experiences.

Open banking is rapidly becoming a critical plank of digital innovation in the financial services industry across both North and South America. Whether driven by regulation, market innovation, or consumer demand, the financial industry across both continents is increasingly embracing a standards-based, application programming interface (API)-first mindset in a bid to accelerate hyper-personalization, trust-based relationships, and value upsell.


While digital challengers continue to capture digitally-savvy customers, incumbent providers are scrambling to meet the increasing demand for seamless and customer-centric experiences in a bid to maintain competitiveness. What might come as a surprise is that this paradigm shift is underpinned by technical standards that govern financial-grade APIs (FAPIs) interacting with enterprise-grade identity and access management (IAM).


The battle for market share in North and South American banking, and indeed the wider financial services industry, will hinge on the degree to which financial service providers embrace these technologies and industry standards and leverage underlying investments to deliver differentiated customer experiences.



FastID

Teach Your robots.txt a New Trick (for AI)

Control how AI bots like Google-Extended and Applebot-Extended use your website content for training. Update your robots.txt file with simple Disallow rules.
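For example, to opt an entire site out of AI training use by these two crawlers while leaving ordinary search indexing untouched, the robots.txt rules look like this:

```
# Block AI training crawlers; normal Googlebot/Applebot indexing is unaffected.
User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```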

Monday, 15. September 2025

Dark Matter Labs

What’s guiding our Regenerative Futures?

Expanding our view toward six guiding principles for regenerative practice. Image: Dark Matter Labs. Adapted from Jan Konietzko, ‘Carbon Tunnel Vision’.

Possibilities for the Built Environment, part 1 of 3

This is the first in a series of three provocations, which mark the culmination of a collaborative effort between Dark Matter Labs and Bauhaus Earth to consider a regenerative future for the built environment as part of the ReBuilt project.

In this publication, we lay out the historical, professional and theoretical context for the contemporary push toward regenerative practice, and offer six guiding principles for a regenerative built environment, looking beyond profit tunnel-vision. In the second and third pieces, we propose pathways, configurations and indicators of the transformation our team envisions.

What isn’t regenerative? Debunking a misconception

When it was completed in 2014, Bosco Verticale, a pair of 40-story residential towers in Milan’s Porta Nuova district, was celebrated as an example of leading-edge regenerative building design for the 800 or so trees cascading from its balconies. In describing the project, its architect Stefano Boeri sketches the figure of the “biological architect”, who is driven by biophilia and prizes sustainability above other design concerns. Praise for Bosco Verticale, in the architectural press and beyond, implies that the development’s vegetal adornments represent a meaningful substitution of traditional building materials with bio-based ones, and further that measures supporting biodiversity constitute climate-positive architecture.

The list of green credentials associated with the project ignores other characteristics of Bosco Verticale that don’t align with this vision. The steel-reinforced concrete structure was designed with unusually substantial 28cm deep slabs to support the vegetation’s weight (which totals an estimated 675 metric tons) and associated dynamic loads. Considering that this slab depth is about twice that of comparable buildings without the green facade, the embodied carbon associated with the project’s 30,000m² floor slabs alone is approximately double that of a standard building.
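A back-of-envelope calculation makes the doubling claim concrete. Under an assumed, purely illustrative carbon intensity for reinforced concrete (the real figure varies by mix and reinforcement), halving the slab depth halves the slab volume and therefore roughly halves the slabs’ embodied carbon:

# Back-of-envelope check of the "approximately double" claim above.
# The carbon intensity figure is an assumed placeholder, not a sourced value.
FLOOR_AREA_M2 = 30_000            # floor slab area cited in the article
SLAB_DEPTH_BOSCO_M = 0.28         # slab depth cited in the article
SLAB_DEPTH_TYPICAL_M = 0.14       # assumed: about half the Bosco depth
CARBON_INTENSITY_KG_PER_M3 = 350  # assumed embodied carbon of reinforced concrete

def slab_embodied_carbon_t(depth_m: float) -> float:
    """Embodied carbon of the floor slabs, in tonnes CO2e."""
    volume_m3 = FLOOR_AREA_M2 * depth_m
    return volume_m3 * CARBON_INTENSITY_KG_PER_M3 / 1000

bosco = slab_embodied_carbon_t(SLAB_DEPTH_BOSCO_M)      # about 2,940 t CO2e
typical = slab_embodied_carbon_t(SLAB_DEPTH_TYPICAL_M)  # about 1,470 t CO2e
print(f"ratio: {bosco / typical:.1f}x")                 # 2.0x

Whatever intensity value is assumed, the ratio between the two cases stays at two, which is the article’s point: the greenery’s structural demands are baked into the concrete.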

In tandem, an existing workspace for local artists and artisans based in a former industrial building was demolished to make space for the premium residential units accessible only to the few. Although a replacement workspace was eventually built nearby, the structure’s regenerative aspirations are weighed down by profound contradictions beneath the leafy surface.

Certainly, Bosco Verticale is significant as an exceptional investment in urban greening on the part of the developer, and as a leading-edge demonstration of innovations that enhance the multiple benefits of green infrastructure. It also contributed to the viability of later developments, extending urban greening discourse into new geographies: copy-cat schemes have been built in East Asia and elsewhere. However, it’s clear that Bosco Verticale fails to stand up to a holistic consideration of what regenerative building looks like. Many commentators, dazzled by the urban greening, overlooked the social and material impacts of the project.

Puzzle pieces of the regenerative

In recent years, societies worldwide have become familiar with weather events and political shifts that were unprecedented or previously unthinkable. Six of the nine planetary boundaries that demarcate the safe operating space for humanity had been crossed as of 2023. There is now a strong case for the idea that our entangled human and planetary systems exist in a state of polycrisis. Bearing this in mind, what do we mean when we refer to a built environment that is regenerative?

This piece aims to add nuance and system-scale perspective to our working definitions. As examples like Bosco Verticale show, it’s possible to be green in the public eye while counteracting what is regenerative. Perhaps we need new methods to help us understand:

How long a building will last,
How its materials will be stewarded,
Whether it is built in a context that enables low-carbon living,
And what its end of life might involve.

System-scale perspective is needed because the built environment cannot be disentangled from systemic needs like the demand for affordable housing and the reality of physical, material constraints. Although we do need initial demonstrations to spark change, a single, locally-sourced timber building constructed with ethical labour does not define wholly regenerative practice in itself.

What is regenerative?

Regenerative is the term of the moment, yet it remains loosely defined in public discourse: we rely on examples, implicit understandings, and theoretical frameworks to give it meaning. How, then, is it used in particular contexts?

Beyond ‘green’

Regeneration refers to approaches that seek to balance human and natural systems, allowing for coexistence, repair and self-regulation over time.

The regenerative paradigm seeks to look beyond what’s merely ‘green’, and to do net good. A broader lineage of thinking around the term spans agriculture, biology and ecology, medicine, urbanism and design: disciplines and industries that connect to the health and wellbeing of biomes, bodies and buildings. Variation in definition can be observed in different contexts, sectors and aims.

‘Regenerative’: a brief history of the term
From the 1980s, the term regenerative began to gain traction in fields including agriculture and development as shorthand for a new paradigm. The US-based Rodale Institute popularised the term ‘regenerative agriculture’ to describe farming systems that go beyond sustainability by improving soil, biodiversity and ecosystem health. The practices invoked are ancient, with precedents across the globe, and rooted in Indigenous land management. However, this specific application of the term ‘regenerative’ articulated an emergent attitude in this period that focused on renewal and improvement of ecological and social systems. The Rodale Institute advanced this concept through research, advocacy, farmer training, publications and consumer education geared toward regenerative organic agriculture, laying the groundwork for its integration into mainstream agricultural discourse and into other disciplines.
From the early 2000s, the work of Bill Reed and the Regenesis Institute for Regenerative Practice has anchored the application of regeneration to design fields and the built environment in particular. With a focus on ecosystem renewal and the coevolution of human and natural systems, Reed’s framework holds that regenerative design goes beyond sustainability by restoring and renewing ecosystems, integrating humans and nature in a symbiotic relationship. Expanding this idea beyond ecology, many architects and urbanists have adapted Reed’s model to their own corners of the field, looking for design that doesn’t simply do less harm, but does more good. Bauhaus Earth maps Reed’s familiar bowtie-shaped diagram onto four basic categories for the built environment: from conventional, to green, to restorative, and finally to regenerative, that which has the greatest positive environmental and social impact.
Across applications, a core meaning persists: a focus on supporting systems at different scales to recover from loss, to take on new life, to grow responsively. The evocative nature of this idea, easily applied across different disciplines, has inspired a range of permutations and schools of thought.
Other key references on the regenerative:
1. Regenerative Development, Regenesis Group, 2016.
2. Regenerative Development and Design, Bill Reed and Pamela Mang, 2012.
3. Shifting from ‘sustainability’ to regeneration, Bill Reed, 2007.
4. Towards a regenerative paradigm for the built environment, Chrisna du Plessis, 2011.
5. Doughnut for Urban Development, Home.Earth, 2023.
6. The Regenerative Design Reading List, Constructivist, 2024.
Image: Bauhaus Earth, adapted from Bill Reed’s ‘Trajectory of Ecological Design’

The term has gained traction and proliferated within the particular historical context of the last half-century, during which concepts like the Anthropocene took hold and the full extent of human impact on the planet was evidenced. As technology has deepened our understanding of the ways humanity has degraded its environments, at scales from the cellular to whole earth systems, our desire has grown for models that point to possible ways to repair this damage. Conceptualising the regenerative across scales and disciplines opens the door to alternative futures in which planetary demise at the hands of humans is not inevitable. The application of the core elements of regenerative theory to fields like architecture has spurred a range of generative and planet-benefitting practices. However, these individual actions, and even the rise of the sustainability paradigm across design fields, cannot override the prevailing limitations of capitalism, which continues to drive up rates of extraction, social inequality and environmental degradation. As it stands, regenerative approaches remain exceptions working against the odds.

The main limitation: political economy

These frameworks were written within academic and industrial contexts, largely from the perspective of wealthy Western nations. While regenerative thinking has inspired thinkers across the planet and across fields, attempts to translate these concepts to a global, political-economic scale fail to account for deep-seated inequalities. We are limited by the systems and power imbalances in which we’re working. Capitalism, in particular, compounds these blindspots, limiting attempts to translate regenerative thinking into other spaces such as the built environment. As such, while trailblazing organisations, communities and individuals are offering proofs of possibility in regenerative infrastructure and urbanism, these are currently exceptional cases. It is not yet evident how these ideas can be instantiated at scale to benefit all people and meaningfully address systemic inequalities.

The role for and responsibilities of professionals

The interconnected challenges of this moment invoke new layers of complexity. But if professionals can’t understand or deploy the idea of regeneration, then it won’t guide their decisions and actions.

Extractive activities led by the industrialised global North continue to irreversibly alter our planet at pace, while the transition to renewable energy will involve even higher rates of extraction of critical minerals than those of today. As such, the earth’s systems’ ability to regenerate is stressed more than ever. The built environment, with its outsized responsibility for global carbon emissions associated with construction, building operations and demolition, must admit these impacts and face up to its epoch-defining responsibility. So how do we get off the one-way road of identifying problems without solutions?

There is a separation between perceived responsibility and power in today’s professional landscape. This moment necessitates a shift from individual to collective agency in taking on advocacy for the regenerative potential of the built environment.

Imagine this: you are an architect today, trying to answer the client’s brief by maximising the use of responsibly-sourced bio-based materials, embedding social justice in your design processes and objectives, and considering carbon-storage potential and place stewardship for future generations, while accepting that your brief is to create market-rate apartments. This is nearly impossible in the context of today’s imperative to maximise profits and commodify housing. Architects in the current professional environment have profoundly limited means to meaningfully address these intersecting priorities, whether one at a time or in concert. Our current economic system simply does not position architects to be the core innovators, however much Stefano Boeri’s reflections on the Bosco Verticale boast otherwise.

These professional limitations are an indirect signal of the political economy of real estate development and the power relations underpinning the construction industry. Only a systemic shift can address the limitations facing individuals operating within a design scope. To genuinely take on the intersections of ecology, social justice and the built environment, architects need to see their work in all its entanglement with broader political, economic and social forces, using the tools of the profession, bolstered by connections with aligned collaborators, and exercising their collective power to dismantle the systems of power that limit transformation across scales.

We’re orienting ourselves toward a future in which there is more latitude for these crucial priorities to be addressed. This future will hold an altered scope for decisions made by architects and other built environment professionals in the course of development processes, and a transition to a regenerative built environment driven by collective commitment.

A growing field: precedents and trailblazers

A range of contemporary initiatives, programmes and projects aim to establish frameworks to define the idea of a regenerative built environment. Drawing on advances in circular economic thinking, growing recognition of the significance of embodied carbon alongside operational carbon in buildings, and the industry’s deepening understanding of indicators tied to planetary boundaries, such as biodiversity and water use, these programmes help experts and the general public to move beyond misconceptions.

Bauhaus Earth emerged in 2021 as an initiative around the use of timber and other bio-based materials for construction and their ability to store carbon. Today, Bauhaus Earth is a research and advocacy organization dedicated to transforming the built environment into a regenerative force for ecological restoration. It brings together experts from architecture, planning, arts, science, governance, and industry to promote systemic change in construction practices.

Index of aligned enquiries

A global range of community-led and grassroots organisations focusing on the work and needs of underserved groups receive grant funding from and can be discovered via the Re:arc Institute.
Non-Extractive Architecture(s)’ directory gathers a global index of projects that rethink the relationship between human and natural landscapes, alongside questions about the role of technology and politics in future material economies. The directory is an ongoing project itself.
A range of related organisations and initiatives in the working ecosystem of Europe can be found in the table below. The range in types of these enquiries represents the broad coalition of stakeholders and types of activity that will be required to activate transformation toward a regenerative built environment.
Index of related initiatives in Europe. For links, see the end of this post.
Bio-based building materials are an important nexus of social and material relations. Bridging human and earth-based capacities for creation, these materials urge an expanded view of stewardship. Understanding this will enable us to move past the paradigmatic dichotomy between the human and the natural that licenses humans to exploit planetary resources. Bio-based building materials were humans’ first building materials, and over millennia the practices that created the materials we work with today, most notably agricultural and Indigenous ones, have developed in concert with human civilisations and material realities. Holding these strands together, it’s evident why a sustained focus on bioregionally-sourced and bio-based materiality is crucial for a regenerative future.
For a contemporary design and research practice that focuses on this intersection of agendas, see Material Cultures.
Regeneration across time horizons: shortsightedness and the Capitalocene

As Reed’s Trajectory of Ecological Design diagram and the examples above indicate, the regeneration of ecosystems and societies is a continuous, open-ended process that occurs over time, at scales from the cellular, to the neighbourhood, to the planetary. Because regenerative processes of repair and rebalancing have occurred in many contexts across eons, we need to understand regeneration across multiple corresponding time horizons. Within this complex and extensive landscape, time horizons can act as organising units that help make sense of interconnections and nested scales of action.

In construction, key processes take place across different timescales. These range from the time needed for a regenerative resource such as a forest to grow, to the lifespan of a building, to the longer periods associated with meaningful carbon sequestration. In each of these cases, regenerative interventions involving acts of maintenance and design directly modulate the temporal register of the built environment. For example, extending a building’s lifespan through processes of care and preventing demolition impacts the future form of its locale and pushes back against the conceptualisation of buildings strictly as sources of profit within capitalist logic; that is, viewing buildings primarily in terms of their capacity to generate immediate economic returns through cycles of development, exploitation and obsolescence. By this means, it is within the medium of time that a regenerative lens on the built environment can be most revealing.

Regeneration in deep time and at the timescale of ecosystems has been disrupted by human processes. We are accustomed to the idea of the Anthropocene: an epoch, initiated by the industrial revolution, in which human activity has become the dominant influence on climate and the environment. However, recent discussions by Jason W. Moore, Andreas Malm and others critique this concept and make the case for the Capitalocene as a more precise term. Rather than treating humanity as a homogenous force, as Anthropocene theory does, the Capitalocene examines how differences in responsibility, power and agency within societies have been compounded under capitalism, and how this system has driven ecological crisis. Moore argues that the social, economic and political processes that have shaped recent centuries, reaching back to the early modern period, provide a better basis than humanity as a whole for understanding the relationship between human activity and planetary wellbeing, and how this dynamic produces ecological crises. With this focus on the un-natural and political origins of the crisis we face today, it becomes possible to see how shifting senses of responsibility, agency and relationship, operating against capitalist logics, are essential for developing effective pathways toward planetary regeneration. The predominant logic of the Capitalocene necessitates short-term profits, increases in productivity, and optimisation around flawed ideas of efficiency; within that logic, regeneration could be mistaken for a loss, an indicator of inefficacy, a concession to the ineffable, and as such unwarranted. This is the systemic logic that must be resisted.

The prevalence of demolition today is one example of how this systemic short-sightedness is bad for people and the planet. The UK is now facing the consequences of the prevalent use of reinforced autoclaved aerated concrete (RAAC) in municipal buildings nationwide during the 1980s. With a material lifespan of only around 30 years, many hospitals and schools built of RAAC are now being demolished. Indeed, the lifespans of many of the structures most viable in our current urban development models are steadily decreasing, in spite of increasing awareness of the embodied carbon impacts of demolition.

We would do well, in looking toward a regenerative future for the built environment, to retune our time horizons. This might involve syncing carbon sequestration time with lifecycles for construction that create value over time, taking into account things like municipal land leases and emerging whole life carbon regulations. What if we had a way to see the long-term impact of decisions made today?

In this effort to hold more timescales in mind when we consider processes of regeneration, we can learn a great deal from indigenous cultures from across the world, many of whom have developed, over the course of millennia, methods and ideologies supporting the human ability to connect with scales of time beyond our species-specific and news-cycle dependent parameters. Some of these examples are evidenced in the above Index of enquiries.

Theoretical underpinnings: what constitutes a regenerative built environment?

The built environment is both a physical and a social construct: it’s not fitting in this moment of polycrisis to continue to abstract the physical materials that shelter us from the labour that built them, the livelihoods that maintain them, the design processes that make them fit for purpose, and the policies or decisions that keep them standing.

To identify ways to directly address the injustices to people and the planet engendered by the Capitalocene, we need to look to the historical and political decisions that have driven the crises in housing affordability and race-based inequality that are defining features of cities today. In recent years, there has been a greater focus on how the built environment can benefit from lenses that examine the distribution of power and agency within societies, including critical theory and urban political ecology. These approaches can help us articulate how the built environment and natural resources figure in human struggles to meet needs under today’s critical conditions.

David Harvey, most notably in Social Justice and the City, points to how a purely quantitative or spatial design-based approach to understanding urban space consistently fails to engage socioeconomic phenomena like inequality and urban poverty, while arguing for the necessity of approaches that integrate the spatial with the social. Harvey’s reading, grounded in radical geography, makes clear how spatial development processes are driven by financial capital, which keeps governments, civil society, communities and individuals in predetermined roles, ill-equipped to resist the calcification of capitalised space. Recently, climate justice movements like the Climate Justice Alliance (on the grassroots side) have formed alliances with decision-makers and activists in the built environment around causes like health and buildings, retrofit poverty and feminist approaches to building, under banners like a Global Green New Deal, in which a spatialised social justice lens can be directly applied.

Harvey’s work is a key influence on urban political ecology approaches, which assist us in understanding how cities are hybrids of natural and social processes, rejecting a dichotomy between people and nature. Similarly, Marxist political economic thinkers like Raymond Williams have pointed to how capitalism organises space and produces environmental inequalities, as analysed using multiscalar analysis, among other techniques. Through a political ecology lens, we see that developers and investors, not communities or ecological needs, shape the built environment, often through speculative real estate practices that exploit labour and resources. These critiques emphasise that urban development is driven primarily by capitalist interests, prioritising profit over social and environmental well-being, leading to inequality, displacement, and environmental degradation. Theory can support an analysis of exclusion in planning, and advocacy for participatory processes that could support socially regenerative places.

In sum, focusing exclusively on buildings misses the point that cities are fluid, open, contested multivocal landscapes. At scales from the individual building, to the neighbourhood, including infrastructure like street systems, as well as cities and regions, the built environment is a negotiation between matter, human behaviour and social systems over time.

As we look to the future, how will our urban environments be produced? Who will benefit from them? And how can we challenge the environmental injustices inherent to the systems we live in?

Guiding principles for regenerative practice

Six layered principles for a regenerative built environment

Expanding our definition of what’s regenerative in the built environment calls for clear ways to speak to the material, economic and social dimensions of cities. We need ways of accessing and assessing regeneration that cut across disciplinary boundaries, invite broader participation in these conversations, and account for future risks and technological developments.

What layers and principles might expand and deepen our understanding of systemic interactions as we work toward more holistic indicators? Below are six suggestions to focus our gaze.

Time horizons and generational preparedness

Future indicators of a regenerative built environment must take a long-term view. If the built environment is to form a matrix in support of human life for generations to come, it should fundamentally be building material preparedness for the future. This means the way we measure and quantify what the built environment does ought to speak to this extended time horizon, for example by considering how much carbon is stored for three generations to come, how much of our timber is sourced in a way that will allow for replanted trees that will mature over decades, or how much of a building’s material stock can be disassembled and reused within the same settlement.
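As a toy sketch only, with invented inputs and an arbitrary 30-year generation length, such long-horizon indicators might be expressed like this:

# Toy sketch of long-horizon indicators; every input and threshold here is
# an invented illustration, not a proposed standard.
from dataclasses import dataclass

GENERATION_YEARS = 30  # assumed length of one human generation

@dataclass
class Building:
    carbon_stored_t: float          # biogenic carbon locked into the structure
    expected_lifespan_years: int    # design life before major renewal
    reusable_material_share: float  # fraction disassemblable and reusable locally

def generations_of_storage(b: Building) -> float:
    """How many generations the stored carbon remains locked up."""
    return b.expected_lifespan_years / GENERATION_YEARS

example = Building(carbon_stored_t=1_200,
                   expected_lifespan_years=120,
                   reusable_material_share=0.6)
print(generations_of_storage(example))   # 4.0 generations
print(example.reusable_material_share)   # 0.6 of materials reusable in place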

Today we have standard metrics like Floor Area Ratio (FAR) that are aligned with present development models and profit-driven logics requiring maximum saleable use of space, fundamentally constraining possibilities for the built environment. Foregrounding time horizons for change enables retooling of these ways of measuring cities, focusing not on short-term, singular profits and benefits, but rather on the future generations and our planetary resources.

Geopolitical resilience and security

Future indicators for a regenerative built environment should address the geopolitical stakes of decisions. This is especially relevant now in Europe, with regard to geopolitical dynamics within and between the US, Russia and China, in light of multipolarity and the EU Strategic Autonomy conversation. Can we refashion the socioeconomic and material dependencies in cities so that they are resilient to the crises that may face future generations, while supporting enhanced responses to geopolitical dangers? We should look to modes of resilience that address the political and economic systems that exacerbate geopolitical precarity, such as the extractive nature of global trade and the ongoing influence of multinational corporations in shaping environments across scales. Status quo propositions toward resilience often fall short of addressing geopolitical power structures.

Place-based and planetary approaches

Future policies and indicators should adopt a multiscalar view that takes into account the unique local context to which they are applied, as well as the transformative potential and influence interventions may leverage across scales (e.g. throughout the value chain). Contextual specificity is associated with direct impact in regenerative efforts, but these efforts must be connected to transformative change that fundamentally alters the properties and functions of systems.

Living systems approach

Actions should help to shift thinking towards more holistic and ecocentric worldviews, in which non-capitalistic, nature-centred systems of values are given primacy. This layer considers interventions as part of dynamic social-ecological systems rather than isolated components. It is crucial to see these social-ecological systems for their complex adaptive qualities, in which people and nature are inextricably linked.

A living systems approach supports biogenerative thinking, in which processes, systems, or designs that actively promote, support, and regenerate life — both biological and ecological — create conditions for continuous growth, renewal, and self-sustaining ecosystems.

Co-evolutionary and community-led

Interventions should structurally empower communities to act and evolve in line with their ecosystems. Structural empowerment means building systems and resources to make communities stronger and self-sufficient and allowing nature to flourish in tandem. This approach foregrounds the utility of feedback mechanisms from nature, like soil health indicators, phenological changes, and biodiversity and species presence, to support the co-evolution and improvement of social-ecological systems.

Supporting holistic value creation

A regenerative built environment should operate on the basis of a broad definition of value, from economic, to ecological and social. As the theoretical approaches discussed previously indicate, the built environment is a hybrid of natural and social processes occurring in the constraints of systems that thrive on extraction and inequality. A holistic approach that combines material, interpersonal and spatial integrators to consider what is regenerative generates cascading value across multiple scales.

“Measuring the impact of regenerative practices on living systems must therefore recognise entangled systemic value flows. Current economic approaches fail to account for this complexity.”
— Dark Matter Labs, A New Economy for Europe’s Built Environment, white paper, 2024
Conclusion

In the context of the polycrisis, we need to move beyond notions of sustainability, toward, as Bill Reed’s diagram suggests, creating healthy, counter-extractive communities and bioregions that can scale from exceptions to define new norms.

Embracing a broadened definition of regenerative practice — one which is informed by the historical and contemporary context of such practices — will evidence the potential contradictions and tensions in the current system. Deploying multimodal metrics and indicators, of the type that the principles introduced in this piece imply, will enable new thinking for net-regenerative outcomes in our cities. Without redirecting our points of orientation toward these six principles, even motivated actors will be limited by today’s system, which allows only for shifting of blame and incremental, localised improvements in the status quo. We will never reach a regenerative built environment without transformational change.

Further pieces in this series will explore in more detail the systemic shifts we envision, pathways toward regenerative practice, and possible indicators for recognising progress.

This publication is part of the project ReBuilt “Transformation Pathways Toward a Regenerative Built Environment — Übergangspfade zu einer regenerativen gebauten Umwelt” and is funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) on the basis of a resolution of the German Bundestag.

This piece represents the views of its authors, including, from Dark Matter Labs, Emma Pfeiffer, Aleksander Nowak, and Ivana Stancic, and from Bauhaus Earth, Gediminas Lesutis and Georg Hubmann.

We extend our thanks to additional collaborators within and beyond our organisations who informed this discussion.

Additional links: Built By Nature, Material Cultures, Ecococon, LUMA Arles / Le Magasin Électrique, HouseEurope!, Rotor, Gleis 21, Home Silk Road, Kalkbreite, La Borda, Living for Future, Habitat for Humanity Poland

What’s guiding our Regenerative Futures? was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


uqudo

How Businesses Can Detect Crypto Fraud and Protect Digital Assets

The post How Businesses Can Detect Crypto Fraud and Protect Digital Assets appeared first on uqudo.

ComplyCube

Online Safety Act 2023 vs. EU DSA: What You Need to Know

Discover how the UK Online Safety Act 2023 and the EU Digital Services Act differ on age verification, compliance, and platform accountability to protect children online.

The post Online Safety Act 2023 vs. EU DSA: What You Need to Know first appeared on ComplyCube.


IDnow

Why eID will be key in Germany’s digital future – Docusign’s Kai Stuebane on trust, timing and transformation.

We spoke with Kai Stuebane, Managing Director for DACH at Docusign, to explore how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape.

From navigating increasing compliance demands to delivering seamless user experiences, we discussed why eID (Electronic Identification) is becoming a strategic priority for faster, more secure, and legally compliant digital signatures – and how Docusign’s partnership with IDnow is empowering enterprises to stay ahead with secure, scalable and user-centric digital workflows.

Why now: Perfect conditions for eID to scale

In today’s rapidly evolving regulatory landscape, particularly in Germany but also across Europe, digital identity is becoming increasingly significant. From Docusign’s perspective, what factors are driving the growing importance of secure digital identity solutions in the enterprise environment?

First, regulatory compliance is a major driver. Regional laws such as eIDAS, and the impending eIDAS 2.0 in the EU, are increasing the need for digital authentication solutions across the region by introducing initiatives such as European Digital Identity (EUDI) Wallets. In Germany, the focus on digital trust services, enforced by institutions such as BaFin and regulations like the GwG, demands robust, verifiable digital identity solutions. Enterprises must meet strict requirements for customer identification and authentication when signing or executing agreements electronically.

Second, security concerns and fraud prevention are top priorities. According to a recent Docusign global survey into the identity verification landscape, 70% of organisations agree that identity fraud attempts are on the rise, as remote and hybrid work models become the norm and businesses continue digitising their operations. As a result, companies require robust authentication solutions that ensure document integrity and signer identity across borders and devices.

A third major driver is that user expectations have shifted. Both customers and employees now expect seamless, secure digital experiences, with 50% of organisations actually prioritising customer experience over fraud prevention, given its perceived importance. Organisations like Docusign enable enterprises to deliver this through a frictionless signing experience while maintaining high standards of security and trust. Grenke, for example, which already offered IDnow’s VideoIdent process through Docusign, decided to add the new eID capability in order to offer more convenience to its customers.

Finally, digital transformation continues to accelerate. Enterprises are modernising legacy workflows at an exponential rate, and secure digital identity is foundational to automating agreement processes end-to-end. Digital-first solutions empower businesses to operate faster, more efficiently, and with greater legal certainty – particularly in highly regulated markets like Germany.

As Germany advances its digital transformation initiatives, how do you anticipate electronic identification (eID) solutions will reshape document signing processes for both enterprises and consumers in the German market?

There is an overall shift within the identity verification and authentication landscape, where organisations are actively seeking out solutions that enable them to maintain security and compliance without impacting the user experience.

For enterprises, eID solutions will help streamline identity verification, enabling faster onboarding, contract execution, and compliance with stringent regulatory requirements such as eIDAS and Germany’s Trust Services Act. Again, take Grenke as an example: the ability to integrate German eID schemes into its existing signing workflow, especially for digital signatures, means it can ensure the highest level of legal validity while reducing manual processes and streamlining the customer experience.

For consumers, eID will offer a more seamless and familiar experience whilst maintaining security – something we pride ourselves on delivering here at Docusign. With familiar national identity methods integrated into digital transactions, users will be able to verify their identity and complete agreements with confidence and ease. This not only enhances trust but also accelerates adoption in regulated sectors like finance, insurance, and real estate.

Through our partnership with IDnow, Docusign is committed to supporting the German market by leaning into evolving regulations and integrating eID solutions into its portfolio, meeting local regulatory needs while delivering the trusted experience that users expect.

The eID advantage: Seamless UX meets compliance

How can Germany unlock and accelerate the full potential of eID?

Based on our experience, accelerating eID adoption in Germany hinges on three key factors: user experience, awareness, and interoperability. 

First, simplifying the user experience is critical. For individuals to embrace eID for digital agreement completion, the process must be intuitive, fast, and secure. Reducing friction, such as removing lengthy registration steps or complex verification methods, can significantly increase user adoption. Leveraging familiar eID methods will streamline this experience while maintaining high levels of identity assurance.

Second, education and awareness are essential. Many individuals are unaware that their national eID can be used as part of the digital agreement process. Promoting the benefits, such as legal validity, security, and convenience, will help build trust and drive usage across different age and user groups.

Third, ensuring broad interoperability with public and private identity schemes is key. Businesses need confidence that the eID solutions they implement will work across sectors and meet local (GwG) and regional (eIDAS) regulatory standards.

In what ways has Docusign enhanced its signing workflows by incorporating eID with other IDnow-powered verification solutions?

Docusign has a long-standing partnership with IDnow. The evolution of this partnership to now include IDnow’s eID capabilities enhances the security and user experience of its joint offering in the following ways:

Automation: Customers can make the most of an identification method that relies simply on the electronic identification (eID) function of the German national identity card.
Security: Two factors of authentication provide additional security: PIN entry, and scanning of the near-field communication (NFC) chip contained within German eIDs.
Familiarity and ease of use: Not only are eIDs increasingly adopted across Germany, but leveraging technology such as NFC provides an additional element of ease of use.

Real-world application: GRENKE’s eID-first transformation

For businesses that already use Docusign but haven’t yet implemented eID-based signing, what are the key benefits they might be missing out on?

Ultimately, we can distill the key benefits to: 

Increased completion rates, driven through familiarity: enable customers to use their German eID for straightforward, intuitive identity verification that supports compliance obligations.
Secure, simplified signing: built-in security enhancements (i.e. use of PIN, scanning of NFC, etc.) mean that SMS re-authentication and live video interactions are no longer required, resulting in an even faster identification process for signers.
Storage and centralisation of key identity information: continue to download or easily access required signer identity information through Docusign and IDnow, to demonstrate compliance with BaFin GwG requirements.

Can you share a real-world example of how a customer of Docusign is using eID to improve efficiency and achieve measurable business outcomes?

A strong example is our long-standing collaboration with Grenke, a leading provider of leasing and financing services. For several years, Grenke has enabled customers and dealers to digitally sign contracts using Docusign eSignature, with IDnow’s VideoIdent solution supporting identity verification.

Recently, Grenke enhanced this process by integrating IDnow’s eID solution as an alternative verification method. The impact has been clear: the introduction of eID has helped Grenke accelerate contract turnaround times, reduce reliance on physical materials, and improve the overall user experience. This has translated into greater operational efficiency, enhanced customer satisfaction, and measurable progress toward the company’s digital and sustainability goals.
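To illustrate the two-factor pattern described earlier, possession of the card (read over NFC) plus knowledge of the PIN, here is a simulated sketch. Every class and check is invented for illustration; this is not IDnow’s or Docusign’s API, and real German eID cards verify the PIN on-chip via the PACE protocol rather than a hash comparison.

# Simulated sketch of a two-factor eID check: possession (the card, read
# over NFC) plus knowledge (the PIN). All names are hypothetical.
import hashlib
import hmac
from dataclasses import dataclass

@dataclass
class SimulatedEidCard:
    """Stand-in for the NFC chip on a German identity card."""
    pin_hash: bytes   # the real chip verifies the PIN internally (PACE)
    attributes: dict  # identity attributes released after a successful check

    def release_attributes(self, pin: str) -> dict | None:
        # Knowledge factor: the chip only releases data after a PIN check.
        candidate = hashlib.sha256(pin.encode()).digest()
        if hmac.compare_digest(candidate, self.pin_hash):
            return self.attributes
        return None

def verify_signer(card: SimulatedEidCard, pin: str) -> dict:
    """Possession plus knowledge: holding the card alone is not enough."""
    attrs = card.release_attributes(pin)
    if attrs is None:
        raise PermissionError("PIN check failed; attributes not released")
    return attrs

card = SimulatedEidCard(
    pin_hash=hashlib.sha256(b"123456").digest(),
    attributes={"given_name": "Erika", "family_name": "Mustermann"},
)
print(verify_signer(card, "123456"))  # attributes released only after both factors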

What’s next: Looking beyond legal requirements

As we anticipate the implementation of eIDAS 2.0 and the European Digital Identity framework in the coming months, how do you envision these regulatory advancements shaping the evolution of electronic identification and digital signature solutions across Germany and the broader European market?

These regulatory advancements will establish a unified, interoperable framework for digital identity across EU member states, enabling individuals and businesses to authenticate and complete digital agreements securely and seamlessly across borders. For Germany, this means greater alignment with a pan-European standard that facilitates trust, legal certainty, and smoother cross-border transactions.

eIDAS 2.0 introduces the concept of the European Digital Identity Wallet (EUDI), which empowers citizens to manage, store and share verified identity attributes as they wish. This will significantly enhance user control, reduce onboarding friction, and boost adoption of high-assurance digital signatures, particularly Qualified Electronic Signatures (QES). At Docusign, our stated ambition is to become a federator of identities, where all EUDI wallets are available through our platform.

For businesses, these changes will reduce complexity in managing multiple identity systems while improving compliance and scalability. 

We’re excited for what’s to come. 

Interested in more from our customer conversations? Check out our interview with Holvi’s Chief Risk Officer, René Hofer, who sat down with us to discuss fraud, compliance, and the strategies needed to stay ahead in an evolving financial landscape.

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn

Sunday, 14. September 2025

Innopay

Mariane ter Veen to speak on responsible AI adoption at MyData 2025


We’re excited to announce that Mariane ter Veen, INNOPAY’s Director Data Sharing, will speak at the MyData 2025 conference, taking place in Helsinki from 24–26 September 2025.

MyData 2025 is one of the world’s leading conferences on human-centric data sharing, bringing together innovators, policymakers, and experts from across the globe. This year’s programme highlights the growing importance of digital sustainability, with a dedicated track exploring how organizations can innovate responsibly in the age of AI.

In her session, Mariane will introduce INNOPAY’s Triple AI framework (Access, Integrity & Intelligence): a practical approach to adopting artificial intelligence effectively, responsibly, and sustainably. She’ll share insights on how organizations can:

Align digital innovation with societal values while safeguarding trust and inclusivity
Gain control over AI strategies to unlock responsible innovation at scale
Create long-term value by linking environmental, social, and economic sustainability goals

Drawing on INNOPAY’s expertise in creating trusted digital ecosystems, Mariane will explore how AI, data, and governance can work together to deliver innovation with purpose.

Event details

Date: 24–26 September 2025
Location: Helsinki, Finland
More information — MyData 2025 programme