Last Update 1:15 PM September 05, 2025 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Friday, 05. September 2025

Dock

How to Create Government-Issued Digital Identities using Truvera [Demo]

Most interactions with local government still rely on paper forms, manual checks, and brittle PDFs. In this demo, Richard Esplin (Head of Product at Truvera) shows how a city can issue a digital residency credential and then use it to verify eligibility across departments — from getting a library card to scheduling trash pickup — using verifiable credentials.

The front end for this proof-of-concept was spun up in an afternoon with an AI code generator, while Truvera handled issuance, verification, selective disclosure, revocation, and ecosystem governance. 

Watch the video above to see how easily digital IDs can slot into existing workflows.
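To make the idea concrete, here is a minimal sketch of what such a residency credential and a selective-disclosure check might look like, loosely following the W3C Verifiable Credentials data model. All field names and identifiers are illustrative; Truvera's actual issuance and verification APIs may differ.

```python
# Illustrative shape of a city-issued residency credential, loosely
# following the W3C Verifiable Credentials data model. Values and DIDs
# are hypothetical, not Truvera's actual API.
residency_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "ResidencyCredential"],
    "issuer": "did:example:city-of-springfield",
    "issuanceDate": "2025-09-05T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:resident-123",
        "residentOf": "Springfield",
        "residentSince": "2020-01-15",
    },
}

def disclosed_claims(credential: dict, requested: list[str]) -> dict:
    """Selective disclosure: reveal only the claims a verifier asks for."""
    subject = credential["credentialSubject"]
    return {k: subject[k] for k in requested if k in subject}

# A library-card check needs only proof of residency, not the full record.
print(disclosed_claims(residency_credential, ["residentOf"]))
```

In a real deployment the disclosed claims would be accompanied by a cryptographic proof and a revocation-status check, which is the part the platform handles.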


How to Create Digital Verifiable Certificates with Truvera [Demo]

Managing and verifying professional certificates is still stuck in the paper era: paper documents are slow, insecure, and easy to fake. Digital PDFs aren’t much better — they can be forged, misplaced, or become outdated as soon as someone changes jobs.

That’s where verifiable credentials come in. In this demo, Richard Esplin (Head of Product at Truvera) shows how fast and simple it is to build a credential issuance and verification solution using the Truvera platform. In just an afternoon, our team put together a proof of concept for issuing and verifying safety training certificates.


auth0

Fine-Grained Authorization in ASP.NET Core with Auth0 FGA

Learn how to implement fine-grained, relationship-based authorization in an ASP.NET Core minimal API using Auth0 FGA.
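For readers unfamiliar with the model behind Auth0 FGA, here is a toy sketch of relationship-based authorization (ReBAC): access is decided by checking tuples of (user, relation, object), with some relations implied by others. This is a conceptual illustration in Python, not the FGA SDK or its actual API.

```python
# Minimal sketch of relationship-based authorization (the model behind
# Auth0 FGA / OpenFGA). Tuples and relation names are illustrative.
relationship_tuples = {
    ("anne", "owner", "document:budget-2025"),
    ("bob", "viewer", "document:budget-2025"),
}

# Owners implicitly get every permission a viewer has.
IMPLIED = {"viewer": {"owner"}}

def check(user: str, relation: str, obj: str) -> bool:
    """Return True if the user has the relation, directly or via implication."""
    if (user, relation, obj) in relationship_tuples:
        return True
    return any((user, r, obj) in relationship_tuples
               for r in IMPLIED.get(relation, ()))
```

In an API endpoint, a handler would call something like `check(current_user, "viewer", doc_id)` before returning the resource; FGA externalizes exactly this decision to a dedicated service.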

iComply Investor Services Inc.

AML and KYB for Commercial Lenders: Enabling Compliance Across Borders

Lenders face growing AML demands for business onboarding and UBO checks. This guide shows how iComply helps automate compliance and accelerate decision-making across jurisdictions.

Commercial lenders face heightened global AML expectations, especially around KYB, UBO verification, and ongoing monitoring. This article outlines key obligations across the U.S., UK, Canada, EU, and Australia—and how iComply helps automate compliance for business loan onboarding and risk management.

Commercial lenders – from banks to fintech platforms to leasing companies – are under increasing pressure to validate the legitimacy of the businesses they serve. Regulators worldwide now expect lenders to implement robust know-your-business (KYB) procedures, identify beneficial owners (UBOs), and monitor ongoing risk across their business lending portfolios.

With varying standards across borders and complex corporate structures at play, automation is no longer optional – it’s essential.

AML and KYB Expectations for Lenders

United States
Regulators: FinCEN, OCC, FDIC, state banking departments
Requirements: BOI reporting under the Corporate Transparency Act, CDD Rule compliance, SAR filings, and sanctions screening

United Kingdom
Regulators: FCA, PRA
Requirements: KYB, UBO verification, transaction monitoring, and enhanced due diligence (EDD) for high-risk entities

Canada
Regulator: FINTRAC
Requirements: Business client verification, beneficial ownership discovery, ongoing monitoring, and STRs for suspicious transactions

European Union
Regulators: National regulators under the AMLD6 framework
Requirements: KYB and UBO collection, EDD for complex structures, and real-time transaction tracking

Australia
Regulator: AUSTRAC
Requirements: AML/CTF compliance for non-bank lenders, UBO transparency, and reporting obligations for high-value transactions

Lending-Specific Risk Factors

1. Opaque Business Structures
LLCs, trusts, and holding companies often obscure real ownership.

2. High Application Volume
Manual KYB checks don’t scale with demand.

3. Evolving Regulatory Standards
CTA in the U.S., EU AMLA rollout, and FATF alignment create shifting expectations.

4. Loan Fraud and Misuse of Funds
Inadequate checks can lead to reputational damage, defaults, and penalties.

How iComply Supports AML in Lending

iComply provides a configurable platform that simplifies KYB, UBO discovery, and AML monitoring for commercial lenders.

1. Streamlined KYB Onboarding
Verify legal entities through registry and document checks
Identify directors, shareholders, and authorized signatories
Localized workflows and multilingual support

2. Beneficial Ownership Mapping
Visual UBO trees across jurisdictions
Automated detection of nominee owners and shell structures
Configurable thresholds for deeper review

3. AML and Sanctions Screening
Real-time screening of businesses and individuals against global watchlists
Continuous monitoring with refresh cycles and trigger-based reviews
Risk scoring by industry, geography, and transaction patterns

4. Case Management and Reporting
Unified dashboard for all onboarding and screening activity
Audit-ready logs and regulatory export templates (FinCEN, FCA, AUSTRAC, etc.)
Track escalations, reviews, and resolution timelines

Case Insight: SME Lender in the UK

A UK-based lender adopted iComply to digitize business borrower onboarding. Within 6 weeks:

Cut average application processing time by 45%
Flagged 3 UBO anomalies across high-value applicants
Passed an FCA review of UBO verification procedures and audit trails

Final Word

Commercial lenders must scale responsibly. Those who embrace KYB automation now can:

Reduce onboarding friction
Improve risk visibility
Meet cross-border AML expectations with confidence

Talk to iComply to see how we help lenders automate 90% of compliance tasks—so your team can focus on building relationships, not chasing paperwork.


Herond Browser

Herond Browser: August 2025 Report


This August, we’re combining meaningful product updates with exciting new Engage Quest to keep the momentum going. From platform enhancements to interactive campaigns, here’s a look at what’s shaping the month ahead through Herond Browser’s report.

Product Updates: Login & Onboarding Improvements

We’re excited to announce two major updates designed to make your experience with Herond more seamless than ever.

Easier Login Options: You can now log in using one-time passcodes sent to your email. We’ve also added social login options, so you can sign in quickly with your Facebook, Google, or Apple accounts.

Simplified Onboarding: Our new browser onboarding flow will now automatically generate a Herond ID for every new user, making it faster and easier to get started.

Community and Events

Engage Quest

This new season of quests, formerly known as Action Surge, comes with a bold new look, bigger rewards, revamped game rules, and even more thrilling updates to get you engaged.

We are thrilled with the results of our latest community campaign! Throughout the season, we saw incredible engagement, with 355 registered users and 335 active participants. On average, each post received engagement from about 45 users, and the campaign generated nearly 3,500 interactions, including likes, retweets, and comments. Looking ahead, our Engage Quests are leveling up with even bigger rewards, and we have an exciting new Herond Browser update on the way that promises an exceptional experience.

SIHUB – The New Home of Innovation and Startups in Ho Chi Minh City

Herond Browser is proud to have partnered in the inauguration of Saigon Innovation Hub as a Vietnamese tech startup, offering a browser that delivers a safe, private, and optimized experience for Web 2.0 & Web 3.0 users. The event welcomed over 200 VIP guests and more than 2,000 attendees, marking an impressive scale and strong community interest. In addition, 248 participants downloaded and registered for the app, reflecting positive engagement and adoption.

The value brought by Herond Browser’s presence includes:

Reinforcing the Vietnamese brand in the technology and software development sector.
Accompanying users with a leading product for the new era of web browsing.

Thank you for being a part of the Herond adventure. We can’t wait to see you next month.

About Herond Browser

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Herond Browser: August 2025 Report appeared first on Herond Blog.


Thursday, 04. September 2025

SC Media - Identity and Access

Sitecore deployments exploited with reused sample machine key

Google researchers found attackers achieved RCE through ViewState deserialization.



Updated Tycoon phishing kit emerges

Operators of the Tycoon phishing-as-a-service platform have enhanced the phishing kit's ability to conceal illicit links in emails amid the growing effectiveness of email security tools at detecting such links, reports Infosecurity Magazine.


Apitor subjected to $500K FTC fine over unconsented children's data collection

China-based toy manufacturer Apitor Technology faces a proposed $500,000 Federal Trade Commission penalty after allegedly permitting a Chinese third party to obtain children's geolocation data without parental consent, in violation of the Children's Online Privacy Protection Rule, reports The Record, a news site by cybersecurity firm Recorded Future.


Keeping AI under control: What to expect at Oktane 2025

Okta's annual conference next month will focus on how identity security can help manage AI agents, what they can access, and who has access to them.



liminal (was OWI)

Why Ransomware Prevention Needs Intelligence, Not Just Defense


Ransomware prevention is no longer about defense alone. It’s a Monday morning at a global consumer bank. Customers logging into online banking suddenly can’t access their accounts. Behind the scenes, ransomware has encrypted core systems and stolen millions of customer records. The attackers aren’t only demanding payment to restore access, they’re also threatening to release personally identifiable information (PII), exposing customers to fraud and the bank to severe regulatory penalties. This isn’t a nightmare scenario, but the reality that many financial institutions are already facing. According to the Link Index for Ransomware Prevention (2025), ransomware incidents are rising year-over-year in the financial services sector, with projected damages exceeding $30 billion annually by 2026. The Link Index echoes findings from Cybersecurity Ventures, which identify ransomware as one of the fastest-growing forms of cybercrime worldwide, with a new attack occurring every two seconds as perpetrators refine their malware payloads and extortion tactics.

What is Ransomware?

The Link Index defines ransomware as malicious software that encrypts or steals an organization’s data and demands payment for its return or release. Once considered a technical nuisance, ransomware has become a systemic cyber risk impacting industries from financial services to healthcare.

Types of Ransomware Attacks

Encryption-based ransomware: Locks critical systems until a ransom is paid.
Double extortion: Combines encryption with data theft, threatening to publish sensitive data if payment is refused.
AI-enabled ransomware: Accelerates the threat further, mutating payloads faster than defenders can respond.

Why Traditional Defenses Fail

The Link Index highlights a persistent reliance on backups, endpoint detection (EDR), and extended detection and response (XDR) that are proving inadequate:

Backups no longer guarantee resilience, since stolen data can still be weaponized for extortion.
EDR/XDR tools overwhelm analysts, with over 40% of ransomware alerts flagged as false positives in some enterprises.

These findings are reinforced by IBM and Ponemon Institute, which identify alert fatigue as one of the costliest inefficiencies for enterprise security teams. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) echoes this challenge, noting in its #StopRansomware Guide that traditional defenses often fail against modern double extortion and data destruction tactics. Perhaps most concerning: defenses can't keep up with the speed of ransomware evolution. By the time a signature is written, AI-enabled ransomware variants like LockBit 3.0 and BlackCat have already mutated, leaving enterprises one step behind.

The Stakes Are Rising

According to Liminal’s research, the top drivers of enterprise adoption for ransomware prevention solutions are regulatory pressure, insurance mandates, and operational continuity. These forces are intensifying across global markets.

Top buyer adoption drivers for ransomware prevention solutions

Regulatory Pressure: In the U.S., the SEC now requires public companies to disclose material cyber incidents on Form 8-K. In Europe, the EU NIS2 Directive enforces similarly strict resilience standards.
Insurance Mandates: The Link Index found that insurance mandates rank among the top three adoption drivers, with industry leaders like Marsh confirming stricter underwriting standards.
Operational Continuity: Downtime remains one of the most critical financial risks. Studies show a single day of ransomware downtime can cost enterprises $1M per day (ITIC via ransomware.org).

For broader strategies around managing supplier and insurer demands, see the Link Index for Cybersecurity Third-party Risk Management.

“We’re seeing ransomware shift from being an IT headache to a full-blown business crisis. The data shows damages climbing past $30B by 2026, and the old playbook of backups and detection just isn’t enough anymore. Enterprises need intelligence-first prevention to stay ahead.” — Jonathan Gergis, Insights Team Lead, Liminal

The Solution: How Intelligence-Driven Ransomware Prevention Works

The Link Index identifies a decisive shift toward intelligence-driven prevention as the new enterprise standard. Rather than waiting for alerts, enterprises are adopting solutions that:

Correlate weak signals across endpoints, cloud, and networks.
Apply behavioral analytics to detect credential abuse and lateral movement.
Provide real-time business context to analysts for decisive action.
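The first two points above can be illustrated with a toy example: no single event is conclusive, but the same account touching many hosts in a short window is a classic lateral-movement pattern. This is a conceptual sketch with invented events and thresholds, not any vendor's detection logic.

```python
# Toy sketch of "correlating weak signals" for lateral-movement detection.
# Events, accounts, hosts, and thresholds are all hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

events = [  # (timestamp, account, host)
    (datetime(2025, 9, 4, 9, 0), "svc-backup", "host-01"),
    (datetime(2025, 9, 4, 9, 2), "svc-backup", "host-02"),
    (datetime(2025, 9, 4, 9, 3), "svc-backup", "host-03"),
    (datetime(2025, 9, 4, 9, 4), "alice", "host-01"),
]

def flag_lateral_movement(events, window=timedelta(minutes=10), min_hosts=3):
    """Flag accounts that reach min_hosts distinct hosts within one window."""
    by_account = defaultdict(list)
    for ts, account, host in events:
        by_account[account].append((ts, host))
    flagged = set()
    for account, hits in by_account.items():
        hits.sort()
        for i, (start, _) in enumerate(hits):
            hosts = {h for t, h in hits[i:] if t - start <= window}
            if len(hosts) >= min_hosts:
                flagged.add(account)
    return flagged

print(flag_lateral_movement(events))  # the service account fans out; alice does not
```

Real platforms do this at scale across endpoint, identity, and network telemetry, and enrich the flag with business context before surfacing it to an analyst.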

This shift is visible in the market. Vendors are retooling product roadmaps to deliver ransomware-specific intelligence capable of detecting advanced variants like LockBit and BlackCat. Importantly, 63% of CISOs surveyed in the Link Index now rank intelligence-first ransomware prevention above legacy tool upgrades. This trend is echoed by Gartner, which emphasizes that behavioral detection and intelligence-driven strategies must replace signature-based tools.

Leading security vendors are already pivoting toward this model:

Microsoft has embedded ransomware-specific intelligence into its Defender platform. CrowdStrike has expanded its Falcon platform to correlate signals across endpoints and cloud. Palo Alto Networks is retooling its Cortex suite to emphasize prevention through behavioral analytics and automated response.

These shifts reflect a broader industry recognition that traditional defenses cannot keep pace with AI-enabled ransomware variants.

For broader strategies around managing AI Data Governance, see the AI Data Governance Link Index.

What CISOs Should Do Now

CISOs looking to strengthen resilience against ransomware should prioritize intelligence-first strategies. Key actions include:

Build cross-platform intelligence pipelines to unify data across endpoints, cloud, and network environments.
Validate vendor claims by demanding proof of real-time ransomware variant detection, not just signature-based defenses.
Update incident response playbooks to address modern double extortion scenarios.
Align prevention strategies with regulations like the SEC's cyber disclosure rules and the EU's NIS2 Directive, ensuring compliance and insurer coverage.
Invest across five prevention categories from the Ransomware Prevention Link Index: endpoint protection, backup and recovery, identity security, detection and response, and email/web security.

By embedding these practices into a unified, intelligence-driven prevention framework, enterprises can reduce reliance on reactive defenses and build resilience that meets both regulatory scrutiny and insurance mandates.

Key Takeaways

$30B in annual ransomware damages by 2026 (Link Index).
Traditional defenses fail against AI-enabled ransomware like LockBit 3.0 and BlackCat; false positives drain analyst resources.
Intelligence-driven prevention is the new enterprise standard: signal correlation, behavioral analytics, and real-time context.
Regulatory, insurance, and financial pressures (the SEC, EU NIS2, and leaders like Marsh) are accelerating adoption.
CISOs must act now: align strategies with regulations and insurance standards while investing in intelligence-led prevention.

For deeper insights and data, access the full Link Index for Ransomware Prevention (2025) via Link.

The post Why Ransomware Prevention Needs Intelligence, Not Just Defense appeared first on Liminal.co.


Thales Group

Powering Defence Agility: from innovation to impact – together


At DSEI 2025, Thales is showing how partnerships with UK SMEs are turning innovation into operational impact — faster, more affordably, and at the pace of the threat.

SMEs at the heart of UK Defence innovation

Small and medium enterprises are the lifeblood of Defence innovation. They bring pace, agility and fresh thinking — often generating the breakthroughs in areas such as AI, autonomy, digital trust and identity, cyber, quantum and advanced sensors.

For Thales, SMEs are not just suppliers. They are partners in capability and sovereignty — and our role as a prime is to help them scale, industrialise, and deliver into critical programmes.

What we’re already doing — in action

Thales employs more than 7,000 people across the UK and invests around £575 million annually with UK suppliers. One in four pounds we spend already goes to SMEs — bringing agility and disruptive thinking into major programmes.

Across 2024, we engaged ~1,700 suppliers, including ~1,000 UK SMEs delivering on 1,345 projects — showing the depth and breadth of our SME ecosystem.

This isn’t theory — it’s already happening today:

MindFoundry — working together on Sonar 2087 to apply AI to detection and classification, speeding decision-making for Royal Navy operators.
Unsung Ltd — collaborating on digital trust and identity solutions, supporting both MOD and central government programmes.
And across the country we are partnering with a wide range of specialist SMEs — from Faculty.ai and Montvieux in behavioural analytics, to Nightball in resilient systems, and Hippo Digital in identity management.

The role of a prime

Our job as a prime is to give SMEs the runway:

Opening access to our global purchasing and R&D networks.
Supporting them with standards, security and integration so their innovations can enter service quickly and safely.
Connecting them with export routes and international markets.

This is underpinned by Thales’ €1bn+ annual Group R&D — giving us the pedigree and experience to help SMEs industrialise their concepts and bring them into service.

It’s about making sure great ideas don’t stay in the lab, but become deployed capability that strengthens the UK’s resilience.

Strategic SME Partner Programme

Building on what already works, Thales is now formalising its approach with a new Strategic SME Partner Programme.

The programme will:

Focus on a smaller, strategic cohort of SMEs, working more deeply to help them scale.
Provide access to Thales’ global networks, R&D, and digital transformation services.
Deliver targeted support on integration, security assurance and open interfaces.
Include networking and community-building to connect SMEs with government and industry.
Take a UK-wide lens, building regional ecosystems across all four nations.

Aligning with Government ambitions

This approach directly supports the UK Government’s Small Business Plan and the MOD’s SME Action Plan — both of which aim to speed innovation cycles, strengthen supply chain resilience, and spread prosperity across the UK.

By backing SMEs and helping them grow, we’re not just supporting the Defence sector — we’re strengthening national sovereignty and fuelling wider economic growth.

Making connections at DSEI

At DSEI 2025, Thales is proud to be part of the Tech Zone SME programme and to showcase our UK ecosystem. Alongside our UK CEO, Phil Siveter, we are hosting SME networking sessions and working with senior MOD leadership to highlight the importance of SMEs in powering Defence agility.

Partnership is the multiplier. SMEs bring the agility; our job is to help them scale — safely, at speed, and into service.

Phil Siveter, CEO Thales UK

Interested in partnering with us?

Find out more about the Strategic SME Partner Programme and co-innovation opportunities with Thales.

Scan the QR code or contact the Thales UK team to get involved.


Thales ready to deliver high performance mission and combat systems to the Royal Navy for the Type 31 Frigate programme

Thales has successfully completed Factory Acceptance Tests (FATs) for both the Mission System and the Combat System on the Royal Navy’s new Type 31 Inspiration-class frigates – marking major milestones in one of the UK’s most significant naval programmes. Working as part of an integrated international team, and in close partnership with Babcock and the Royal Navy, Thales has now completed all core factory-based activity for the programme – further reinforcing its role as a trusted partner in the delivery of complex naval systems for the UK.

Thales’s Combat Management System (CMS), TACTICOS, functions as the operational heart of the UK’s Type 31 frigates. It will be the central command and decision-making part of these frigates’ combat systems. Its function and performance – supporting sensor control, picture compilation, situation assessment, action support and weapon control – are critical to the operational effectiveness of the naval vessel.
HMS Venturer, the first of five Type 31 frigates during her first entry into the water. (c)Babcock

The Mission System FATs were completed at the end of April 2025. Delivered to a high standard by Thales’s international team, in close collaboration with industry partners, this achievement showcases the quality, openness and technical expertise that have defined Thales’s approach to the Type 31 delivery – earning praise from the Royal Navy.

The Combat System FATs followed at the end of June 2025. Built around Thales’s latest version of its TACTICOS Combat Management System – delivered by Thales in the Netherlands – it includes the latest software release that enables Type 31’s operational capabilities, reinforcing TACTICOS's position as a leading CMS for next-generation naval platforms. With all Factory Acceptance Tests now complete, the programme will move onto land-based testing at the Shore Integration Facility, before being installed on board the HMS Venturer, first of the five Type 31 Inspiration-class frigates, the construction of which is underway at Babcock’s Rosyth facility.

Paul Watson, Arrowhead Managing Director at Babcock said: “The successful completion of the Mission and Combat Systems FATs marks another significant step forward for the Type 31 programme and reflects the strength of our collaboration with Thales and our wider industry partners. Together, we are delivering a world-class capability for the Royal Navy and creating a strong foundation for the future of the Inspiration Class frigates.”

Andy Laing, Managing Director, Above Water Systems UK, Thales, added: “Working closely with our Royal Navy and Babcock colleagues, we are delighted to have successfully completed this critical stage in the development of the Royal Navy’s new Type 31 frigates. It represents another demonstration of Thales's proven ability to deliver integrated naval mission systems to the highest standards.”

The Type 31 programme underscores Thales's long-standing commitment to support the Royal Navy with world-leading maritime technologies, while strengthening UK and European defence capability and industrial resilience.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Thales in the UK

Thales in the UK is proud to design and manufacture world class, battle proven technology, equipment and training solutions for the British Army and export customers around the world. It has been operating for over 140 years with our partners across the supply chain to ensure the operational effectiveness of our customers.

Building on this heritage, it is committed to nurturing a highly skilled workforce and supply chain, through the development and delivery of UK sovereign products.

Leveraging Thales’ global expertise and portfolio, it helps our customers to modernise their capabilities and deter adversaries of today and tomorrow.

MEDIA RELATIONS CONTACTS

Camille Heck, Thales, Media Relations Land & Naval Defence UK
Press Contact, Naval: Adrian Rondel

ComplyCube

The CryptoCubed Newsletter: August Edition

Sit tight as we welcome you back to the latest edition of CryptoCubed. From Ripple Labs’ high-stakes lawsuit to President Trump’s executive orders, the crypto scene is buzzing with drama. Read on to catch up on the latest crypto news.

The post The CryptoCubed Newsletter: August Edition first appeared on ComplyCube.


Herond Browser

Herond Browser at SIHUB Inauguration: A New Icon of Vietnamese Technology

Introduction

On August 23, 2025, the Saigon Innovation Hub (SIHUB) building was officially inaugurated and began operations at 123 Truong Dinh, District 3. This event marks a significant step in realizing Resolution No. 57 of the Politburo, focusing on breakthrough development in science, technology, innovation, and digital transformation for Ho Chi Minh City.

SIHUB – The New Home of Innovation and Startups in Ho Chi Minh City

Spanning an area of 17,000m², SIHUB serves as a “shared home” that connects resources within the startup ecosystem and acts as a testing ground for new policies. With its impressive scale, SIHUB is not only a support center but also a nurturing hub for the startup community in Ho Chi Minh City, where creative ideas can be rapidly and effectively brought to life.

Herond Browser – A Highlight of Tech Innovation at the Event

Herond Browser is proud to have partnered with the inauguration of Saigon Innovation Hub as a Vietnamese tech startup, offering a browser that delivers a safe, private, and optimized experience for Web 2.0 & Web 3.0 users. The value brought by Herond Browser’s presence includes:

Reinforcing the Vietnamese brand in the technology and software development sector.
Accompanying users with a leading product for the new era of web browsing.

Exhibition Booth – Where Big Ideas Converge

The Herond Browser exhibition booth not only showcased its browser products and advanced technological solutions, but also provided an opportunity for the community to experience them firsthand. This initiative helped visitors learn more about Herond Browser and gain a deeper understanding of our commitment to delivering a safe, private, and efficient web browsing environment. The event thus fostered meaningful connections, enhancing interaction and support between Herond Browser and the tech community.

Benefits for the Startup Community

Herond Browser’s presence at SIHUB has delivered tangible benefits to the community:

Access to cutting-edge technology: Startups can explore modern solutions that meet the demand for fast and secure web browsing.
Comprehensive ad-blocking solutions: Herond Browser offers a specialized browser for Web 2.0 & 3.0 users, protecting privacy during website access.
On-site support: The exhibition booth enabled startups to experience solutions firsthand and receive immediate tech consultations.

Thanks to these contributions, Herond Browser is recognized not only as a tech brand but also as a trusted partner in the startup journey.

Positive Feedback from the Event

Throughout the event, the Herond Browser booth attracted a large number of visitors, including startups, students, and investors. Many praised its standout features, such as web browsing speeds three times faster, enhanced privacy protection, and user safety.


Several visitors shared that their experience at the booth helped them discover a browser tailored to their needs, boosting their confidence when using the internet.

Future Expectations

The event at SIHUB marks the beginning of a sustainable development journey for Herond Browser. We are committed to continuing our support for the startup community through diverse activities, including building networking platforms, collaborating, and partnering with other tech startups.

Conclusion – A New Leap Forward for Herond Browser and the Startup Community

The inauguration of the Ho Chi Minh City Innovation and Startup Center not only signifies a pivotal moment for the startup ecosystem but also opens new growth opportunities for tech brands. Herond Browser’s prominent participation, highlighted by its impressive exhibition booth, has solidified its position as a leading Vietnamese enterprise in the innovation journey. With a long-term strategic vision, Herond Browser pledges to stand alongside Vietnamese startups and businesses, contributing to the creation of a sustainable and promising future.

The post Herond Browser at SIHUB Inauguration: A New Icon of Vietnamese Technology appeared first on Herond Blog.



PingTalk

Trust at the Speed of Innovation: How Digital Identity Is Transforming Financial Services in ASEAN and ANZ

Discover how digital identity is reshaping financial services across ASEAN and ANZ. Learn how banks are fighting fraud, enabling seamless payments, and driving inclusion—at scale, with trust.

Across the Association of Southeast Asian Nations (ASEAN), including mature economies like Singapore and Malaysia, and high-growth markets like Indonesia, Vietnam, and the Philippines, as well as in Australia and New Zealand (ANZ), the future of finance is being written in real time. From Jakarta to Sydney, Bangkok to Wellington, financial institutions are embracing rapid digitization. Regulatory reform, fintech competition, rising fraud threats, and shifting consumer expectations are all pushing the industry to evolve rapidly.

 

But in this rush to innovate, success is no longer just about launching the next digital wallet, cashless payment option, or open banking Application Programming Interfaces (API). It’s about trust.

 

To grow and scale in today’s connected economy, financial services organizations must continuously prove who a customer is, whether a transaction is safe, and how data should be shared. That’s where digital identity, or identity and access management (IAM), comes in. Digital identity is not simply a technical enabler: it’s the connective tissue between trust, innovation, security, and scale. And across ASEAN and ANZ, it’s increasingly being recognized as the foundational capability that determines how, and how fast, financial services can evolve.

 

Wednesday, 03. September 2025

Extrimian

How to Protect Students Data: Digital Diplomas & Credentials

Your diploma, on your phone: a student-first guide to secure digital credentials Who this is for: students (and anyone helping students—career services, program leads, registrars) Promise: zero paper chase, faster opportunities, more privacy—without you learning any tech. TL;DR (read this if you’re between classes) Show proof in seconds. Instead of digging for PDFs or waiting […]
Your diploma, on your phone: a student-first guide to secure digital credentials

Who this is for: students (and anyone helping students—career services, program leads, registrars)
Promise: zero paper chase, faster opportunities, more privacy—without you learning any tech.

TL;DR (read this if you’re between classes)

Show proof in seconds. Instead of digging for PDFs or waiting on office emails, you share a secure link or QR from your phone. Employers, scholarships, other schools—everyone gets a clear yes/no instantly.
You control your info. Share only what’s needed (e.g., “enrolled this term” or “degree awarded”). No oversharing, no surprises.
Built for real life. Lost your phone? Credentials can be re-issued. Name spelled wrong? They can revoke and fix fast, and verifiers always see the latest version.
AI-first for safety. Extrimian uses AI to protect your identity and speed up university workflows—not to snoop on you or cut corners.

Why should you care (today, not “someday”)?

Scholarships and benefits need status now, not next week. Many proofs are simple: “Is this student currently enrolled?” Your enrollment credential answers that without dumping your full transcript. Fewer forms; faster yeses.

Study abroad and transfers are smoother. Another school can confirm a course completion or degree without emailing five offices. You share once; they verify independently; your application keeps moving.

Privacy actually improves. You don’t have to forward ancient PDFs that reveal way too much. With a digital credential, you show the minimum required—and only when you choose.

You always have it with you. Your phone already holds tickets, payments, and boarding passes. Your diploma and key proofs belong there too—secure, portable, ready when opportunity calls.

How it works—without the nerd talk

Think of each credential (your diploma, enrollment status, course badge) as a sealed envelope with your university’s unique stamp.

If someone opens it and changes even a line, the stamp breaks, and the checker immediately says Not valid. Extrimian provides the stamp (digital signature), the envelope (the credential in your wallet), and the counter window (the university’s one-page verification site) where anyone can check it—no emails, no guessing.

You don’t manage keys, blockchains, or any of that. You just receive, store, and share—and it works.
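The sealed-envelope idea can be illustrated with a tiny sketch. This is not Extrimian's actual implementation (which uses digital signatures and decentralized identifiers); it uses Python's standard hmac module as a stand-in "stamp" just to show why changing even one line makes the check fail:

```python
import hmac, hashlib, json

# Stand-in for the university's signing key (real systems use asymmetric keys).
UNIVERSITY_KEY = b"demo-only-secret"

def issue(credential: dict) -> dict:
    """Seal the credential: attach a 'stamp' computed over its exact contents."""
    payload = json.dumps(credential, sort_keys=True).encode()
    stamp = hmac.new(UNIVERSITY_KEY, payload, hashlib.sha256).hexdigest()
    return {"credential": credential, "stamp": stamp}

def verify(envelope: dict) -> bool:
    """Recompute the stamp; any change to the contents breaks the match."""
    payload = json.dumps(envelope["credential"], sort_keys=True).encode()
    expected = hmac.new(UNIVERSITY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["stamp"])

envelope = issue({"name": "Bob Tables", "degree": "BSc", "year": 2025})
assert verify(envelope)                   # untouched envelope: Valid
envelope["credential"]["degree"] = "PhD"  # tamper with one field
assert not verify(envelope)               # stamp breaks: Not valid
```

The real system adds wallets, revocation, and a public verification page, but the core guarantee is the same: the checker never has to trust the document, only the stamp.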

 

Real moments you’ll use it (and how it feels)

1) “Can you prove you actually graduated?”

You tap Share diploma, send a link or show a QR. The recruiter scans and sees: Valid — Degree: [Your Degree], Issuer: [Your University], Date: [Month/Year]. Done. No PDF edits, no “I’ll pass it to my manager,” no waiting.

2) “We need proof you’re enrolled for this semester.”

You share Enrollment: Current Term. It shows exactly that—and nothing else. If your status changes, the old credential is revoked and anyone who checks it sees that it’s no longer valid.

3) “Upload a course completion for credit transfer.”

You share a verifiable course credential that confirms you passed the class. The other school verifies it themselves and moves on to the next step. Less paperwork, fewer delays.

4) “Student discount—show ID?”

You present a student ID credential. The vendor or campus service scans and gets a simple Yes without seeing your grades, address, or anything personal they don’t need.

5) “Oh no, I lost my phone…”

If you lose your device, you tell the university. They revoke the old credentials and re-issue to your new device after confirming it’s you. The verification page always shows the latest truth, so you’re covered.

6) “There’s a typo on my diploma.”

It happens. The registrar revokes the old one and re-issues a corrected credential. Anyone who checks the old link sees “Revoked,” and the new link shows your accurate details. No awkward explanations.

Your data, your call (how privacy works in plain language)

Share the minimum. Many checks only need a yes/no on a specific fact (enrolled, degree awarded, course completed). Your credential can provide just that.
You choose when to share. Nothing leaves your wallet until you decide to present it. You’re in the driver’s seat.
It’s obvious if someone tampers. If a file is altered, the verification fails immediately. You don’t have to argue; the page tells the truth.
Clean history for you. When a credential is revoked and re-issued (e.g., to fix a typo), everyone sees the updated version at the same link. No “which PDF is the latest?” chaos.

Get started in 3 easy steps (what you’ll actually do)

1. Receive your credential. When your university turns this on, you’ll get instructions to add a wallet (mobile or web) and receive your diploma/enrollment credentials securely.
2. Keep it safe. Set a PIN or biometric lock for the digital identity wallet (Face ID, fingerprint). If you change phones, you’ll have a simple way to recover or re-issue with university support.
3. Share when needed. For scholarships or transfers: open the wallet → Share → send link or show QR. The other side gets a clear answer in seconds, and you keep control.

See a live demo here from UAGRO, one of our successful case studies: UAGRO – Students Credentials & Digital ID Wallet Demo


FAQ about verified digital diplomas

Can I still get a paper diploma?

Yes, if your university offers it. The digital version is the official way to prove authenticity online—and you can even print a QR on the paper diploma that points back to it.

Do I need the internet to show it?

You can open your wallet and show the QR; the verifier needs a connection to check the status. If you’re somewhere with poor signal, you can share the link later. Many events now have scanners or staff with connectivity.

What if I don’t want to share my grades?

You don’t have to. Most checks only need a degree or enrollment. Share the minimum required for the situation.

What if something’s wrong on my credential?

Ask the registrar to revoke and re-issue. You’ll get the corrected one quickly, and anyone using the old link will see it’s no longer valid. No awkward “ignore my last attachment” moments.


What this tech offers students & clubs

Club badges and event passes: your university may issue digital badges for roles or events. They’re easy to share with sponsors or include in portfolios.
Volunteering & labs: log verified hours or lab competencies as mini-credentials you can show to research programs or NGOs.
Community trust: a simple Valid check reduces ticket fraud and line headaches at big events.

(Availability depends on what your university enables—ask your student affairs office what’s planned.)

What this means for your university

With Extrimian, the university issues tamper-proof digital credentials, offers one official page to verify them in seconds, and uses AI internally to spot risk and speed corrections. Students get control and privacy; employers get instant answers; staff spend less time on inbox ping-pong. It’s security and simplicity, together.

Ready when you are

When your university enables Extrimian credentials, you’ll receive a message with simple steps to get your wallet and your first credentials. Until then, save this page, tell your career office what you’d love to see first (diploma, enrollment, course badges), and get ready to retire the messy PDF folder.

Extrimian: AI-first for safety, student-first for experience.

Contact us

Further reading & internal links

Fundamentals of SSI (plain-English intro): https://academy.extrimian.io/fundamentals-of-ssi/
Integrate Solution (connect issuer/verifier to SIS/LMS): https://academy.extrimian.io/integrate-solution/
Masterclass (training for registrar & IT/security): https://academy.extrimian.io/masterclass/

Contact Extrimian (book a 30-minute review): https://extrimian.io/contact-us

The post How to Protect Students Data: Digital Diplomas & Credentials first appeared on Extrimian.


SC Media - Identity and Access

NBMiner cryptojacking on e-commerce company opens up identity issues

Experts say once an endpoint gets cryptojacked, attackers can follow up by stealing credentials, secrets, and sessions.


Elliptic

OFAC Sanctions Guangzhou Tengyue Chemical Co., Ltd. – a China-based chemical manufacturing company – and two individuals for trafficking drugs into the United States

On September 3, 2025, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) sanctioned Guangzhou Tengyue Chemical Co., Ltd., a Chinese company, along with two company representatives, Huang Xiaojun and Huang Zhanpeng. The only cryptocurrency address sanctioned today was associated with Huang Xiaojun.

According to OFAC’s press release, Guangzhou Tengyue Chemical is “a chemical company operating in China that is involved in the manufacture and sale of synthetic opioids to Americans. In addition to opioids, Guangzhou Tengyue has also sold dangerous analgesic chemicals often used as cutting agents that are mixed with synthetic opioids and other illicit drugs.” Both individuals mentioned above are “representatives of Guangzhou Tengyue” who OFAC says “were directly involved in coordinating the shipments of these illicit drugs and cutting agents to the United States.”

Huang Zhanpeng is the executive director and 50 percent shareholder of Guangzhou Tengyue. He also is listed as the company’s legal representative. Huang Xiaojun is the owner of the bitcoin account the company used to sell controlled substances in 2023 to a U.S. buyer.

The designations reflect wider US efforts against the drug trade within and into the United States, particularly where Chinese fentanyl producers and cartels, namely from Mexico and Colombia, import and sell illicit drugs in US cities and launder their funds in the United States using local professional money laundering organizations (PMLOs). In this context, in addition to OFAC’s designations, the Federal Bureau of Investigation (FBI) is also announcing a federal criminal indictment against the abovementioned and other individuals and companies “for their roles in facilitating the flow of illicit drugs… The charged defendants include three individuals in the United States and approximately 22 individuals and businesses based in China. The indictment is based upon a joint investigation by [the] FBI and DEA, which commenced in January 2024.”

For context, “Opioid overdose remains the leading cause of death for Americans aged 18 to 45. Since 2021, more than 70 percent of all reported drug overdose deaths have involved synthetic opioids, with fentanyl being the primary synthetic opioid driving this crisis. China-based chemical manufacturing companies remain the primary source of fentanyl precursor chemicals and other illicit opioids entering the United States,” the press release states. Elliptic has conducted extensive research on the use of crypto in the trafficking of fentanyl and related synthetic opioids.

As noted, OFAC listed one crypto address associated with Huang Xiaojun. Elliptic’s data shows that this address has received funds directly from multiple known fentanyl precursor vendors we have labelled in our dataset. It has also sent funds directly to multiple known stolen credit card data vendors, thieves, and scams.


Indicio

Indicio to advance trusted digital identity with APTITUDE, Europe’s newest Large Scale Project for digital wallet travel and payments

Through its partnership with SITA, Indicio will advance government-issued digital travel credentials in this two-year digital identity wallet trial, building on its success as the first to implement biometric-enabled credentials for international travel and border crossing.

By James Schulte

APTITUDE, one of the newest Large Scale Pilots backed by the European Commission, has officially launched, marking a major milestone in the EU’s drive to equip 80% of residents with a digital identity wallet by 2026 and setting out to prove how digital wallets can transform travel and payments across Europe.

What is APTITUDE?

APTITUDE is a groundbreaking €20 million cross-border initiative coordinated by the French government that brings together 118 partners from 11 EU Member States and Ukraine to analyze, integrate, and pilot real-world use cases for travel and payment within the European Digital Identity Framework (EUDI).

Backed by funding under the €8.1 billion Digital Europe Programme, APTITUDE is part of the EU’s broader effort to drive digital transformation and operationalize the European Digital Identity Wallet by demonstrating its value across critical industries.

The scale of APTITUDE reflects the importance of digital identity in enabling secure, efficient, and interoperable services across borders. The project is a milestone in the global movement to make digital identity secure, interoperable, and practical at scale. Uniting governments, technology providers, and industry leaders, APTITUDE will test and validate solutions that meet EU standards and deliver practical benefits to citizens and businesses.

The digital transformation of travel

Travel and payments are critical touchpoints between people, governments, and businesses. As more governments and businesses worldwide build new digital identity ecosystems, Indicio’s leadership in decentralized identity, its expertise in combining biometrics with Verifiable Credentials, and its focus on interoperability based on open standards make digital identity in travel work seamlessly, simply, and cost-effectively across ecosystems while delivering real value.

With our partner SITA, a recognized leader in aviation technology, Indicio is helping to create the infrastructure and software solutions that allow digital identity to be securely verified and reused across airlines, airports, border control checkpoints, and payment channels.

Indicio’s technology streamlines the traveler’s journey by reducing repeated identity checks, reliance on paper documents, manual data entry, and visual inspections. It also increases airport capacity and enables governments to control their borders with the highest level of identity assurance.

The result is measurable value:

Airlines and airports reduce bottlenecks and improve operational flow without adding resources or costs.
Governments gain secure, interoperable systems that strengthen compliance and protect against identity fraud and document abuse.
Travelers and citizens enjoy a faster, seamless experience that safeguards their privacy and personal data.

Global leadership, local impact

Indicio’s contributions to APTITUDE are part of our broader leadership in building solutions that deliver digital trust worldwide. From Africa, the Middle East, and Asia to Europe, the Caribbean and the Americas, Indicio is connecting industries, governments, and citizens in ways that are fast, secure, private, and valuable.

APTITUDE shows what is possible when expertise and collaboration come together. By contributing to this large-scale pilot, Indicio is helping shape the future of travel and payments in Europe and the global framework for trusted digital identity.

If you are an organization preparing for the shift to digital identity, now is the time to act. Connect with Indicio to stay up-to-date with this project and to book a call with one of our experts to discuss how our solutions can rapidly deliver the benefits of trusted digital identity and data.

###

The post Indicio to advance trusted digital identity with APTITUDE, Europe’s newest Large Scale Project for digital wallet travel and payments appeared first on Indicio.


SC Media - Identity and Access

ICE contract with Paragon spyware revived

TechCrunch reports that the U.S. Immigration and Customs Enforcement has reactivated a $2 million contract with Israeli spyware vendor Paragon, lifting a Biden-era stop work order meant to evaluate the agreement's adherence to a commercial spyware-focused executive order.

TechCrunch reports that the U.S. Immigration and Customs Enforcement has reactivated a $2 million contract with Israeli spyware vendor Paragon, lifting a Biden-era stop work order meant to evaluate the agreement's adherence to a commercial spyware-focused executive order.


Disney to pay $10M to resolve unlawful child data collection claims

Disney has agreed to a $10 million settlement for a Federal Trade Commission complaint alleging its unwarranted collection of personal information from children watching its videos on YouTube, which violates the Children's Online Privacy Protection Rule, according to The Record, a news site by cybersecurity firm Recorded Future.

Disney has agreed to a $10 million settlement for a Federal Trade Commission complaint alleging its unwarranted collection of personal information from children watching its videos on YouTube, which violates the Children's Online Privacy Protection Rule, according to The Record, a news site by cybersecurity firm Recorded Future.


Azure AD credentials exposed by unsecured JSON config file

GBHackers News reports that threat actors could leverage Azure Active Directory credentials leaked by a misconfigured ASP.NET Core appsettings.json file to compromise organizations' cloud environments.

GBHackers News reports that threat actors could leverage Azure Active Directory credentials leaked by a misconfigured ASP.NET Core appsettings.json file to compromise organizations' cloud environments.


Innopay

FiDA Data Studios: Shaping The Future Of Financial Data

29 September 2025, EintrachtLab, Deutsche Bank Park, Frankfurt am Main, Germany

What if regulation wasn’t a brake on innovation, but your biggest opportunity?

On 29 September 2025, innovators, strategists, regulators and product leads will gather at the EintrachtLab in Deutsche Bank Park, Frankfurt, for the FiDA Data Studios event. Together, we’ll explore how the Financial Data Access (FiDA) regulation could reshape Europe’s financial services industry.

Through scenario planning, deep-dive sessions and real-world use cases, the event goes beyond compliance to focus on new business and operating models, AI-powered products and how banks can strategically position in the data economy.

Join our expert roundtable:
INNOPAY’s Mounaim Cortet, Vice President, will host an expert roundtable on how financial institutions can strategically position to leverage the opportunities of FiDA to drive innovation and value creation.

Event highlights

5 expert sessions on AI, API infrastructure, strategy & more
Executive briefings & high-level networking
Shuttle service from TechQuartier
Co-hosted by Deutsche Bank and TechQuartier

 

Program


Daytime sessions: 10:00 – 17:00 (expert sessions)
Evening program: from 17:00 onwards (presentation & networking)
Location: EintrachtLab, Deutsche Bank Park, Frankfurt am Main, Germany
Date: 29 September 2025

⚠️ Limited seats available - registration is on a first-come, first-served basis.

Register now through the event website.


Okta

Build Secure Agent-to-App Connections with Cross App Access (XAA)

Secure access with enterprise IT oversight between independent applications that communicate with each other is a recognized gap in OAuth 2.0. Enterprises can’t effectively regulate cross-app communication, as OAuth 2.0 consent screens rely on users granting access to their individual accounts. Now, with the advent of AI agents that communicate across systems, the need to solve the gap is even greater – especially given the growing importance of enterprise AI security in protecting sensitive data flows.

What is Cross App Access (XAA)?

Cross App Access (XAA) is a new protocol that lets integrators enable secure agent-to-app and app-to-app access. Instead of scattered integrations and repeated logins, enterprise IT admins gain centralized control: they can decide what connects, enforce security policies, and see exactly what’s being accessed. This unlocks seamless, scalable integrations across apps — whether it’s just two like Google Calendar and Zoom, or hundreds across the enterprise. Read more about Cross App Access in these posts:

Integrate Your Enterprise AI Tools with Cross-App Access

Manage user and non-human identities, including AI in the enterprise with Cross App Access

Semona Igama

Or watch the video about Cross App Access:

In this post, we’ll go hands-on with Cross App Access. Using Todo0 (the Resource App) and Agent0 (the Requesting App) as our sample applications, and Okta as the enterprise Identity Provider (IdP), we’ll show you how to set up trust, exchange tokens, and enable secure API calls between apps, all with enterprise IT oversight. By the end, you’ll not only understand how the protocol works but also have a working example you can adapt to your own integrations.
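Before clicking through the console, it helps to see the shape of the protocol the sample apps implement. The sketch below is illustrative only: it builds the two request bodies that the Identity Assertion Authorization Grant flow is based on — a token-exchange request to the IdP for an identity assertion (ID-JAG), then a JWT-bearer request to the resource app's token endpoint. Parameter names follow the RFC 8693/RFC 7523 patterns the draft builds on; the exact values are handled for you by the sample applications.

```python
from urllib.parse import urlencode

# Illustrative sketch of the two requests in the XAA flow; the sample
# apps (Agent0/Todo0) perform these steps for you.

def build_id_jag_request(id_token: str, resource_audience: str) -> dict:
    """Step 1: the requesting app (Agent0) asks the enterprise IdP for an
    identity assertion (ID-JAG) scoped to the resource app (Todo0)."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "requested_token_type": "urn:ietf:params:oauth:token-type:id-jag",
        "subject_token": id_token,      # the user's ID token from sign-in
        "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
        "audience": resource_audience,  # identifies Todo0's authorization server
    }

def build_access_token_request(id_jag: str) -> dict:
    """Step 2: Agent0 presents the ID-JAG at Todo0's token endpoint to obtain
    an access token -- no per-user consent screen; policy lives at the IdP."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": id_jag,
    }

# Both requests are sent as form-encoded POST bodies to the token endpoints.
body = urlencode(build_id_jag_request("<user-id-token>", "https://todo0.example.com"))
```

The key point: the IdP sits in the middle of every agent-to-app connection, which is what gives enterprise IT the centralized visibility and control described above.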

If you’d rather watch a video of the setup and how XAA works, check this one out.

Prerequisites to set up the AI agent to app connections using Cross App Access (XAA)

To set up secure agent-to-app connections with Cross App Access (XAA), you’ll need the following:

Okta Developer Account (Integrator Free Plan): You’ll need an Okta Developer Account with the Integrator Free Plan. This account will act as your Identity Provider (IdP) for setting up Cross App Access.
  If you don’t already have an account, sign up for a new one here: Okta Integrator Free Plan
  Once created, sign in to your new org
AWS Credentials: You’ll need an AWS Access Key ID and AWS Secret Access Key.
  The IAM user or role associated with these credentials must have access to Amazon Bedrock, specifically the Claude 3.7 Sonnet model, enabled.
  If you don’t know how to obtain the credentials, follow this guide
Developer Tools: These tools are essential for cloning, editing, building, and running your demo applications:
  Git – to clone and manage the repository
  VS Code – for reading and modifying the sample source code
  Dev Containers Extension (VS Code) – recommended, as it automatically configures dependencies and environments when you open the project
  Docker – required by the Dev Container to build and run the sample applications in isolated environments

Table of Contents

What is Cross App Access (XAA)?
Prerequisites to set up the AI agent to app connections using Cross App Access (XAA)
Use Okta to secure AI applications with OAuth 2.0 and OpenID Connect (OIDC)
Enable Cross App Access in your Okta org
Create the resource app (Todo0)
Create the requesting app (Agent0)
Establishing connections between Todo0 & AI agent (Agent0)
Set up a test user in Okta org
Create the test user
Assign the Okta applications to the test user
Configure the Node.js Cross App Access project
The Cross App Access MCP project at a glance
Configure OAuth 2.0 and AI foundation models environment files
Generate OIDC configuration and access token files
Configure AI and resource application connection values
Register OAuth 2.0 redirect URIs for both apps
Initialize the database and run the project
Bootstrap the project
Run and access the apps in your browser
Testing the XAA flow: From Bob to Agent0 to Todo0
Interact with Todo0, the XAA resource app, by creating tasks
Let the AI agent, the requesting app, access your todos
Behind the scenes: the OAuth 2.0 Identity Assertion Authorization Grant
Need help setting up secure cross-domain enterprise AI application access?
Learn more about Cross App Access, OAuth 2.0, and securing your applications

Use Okta to secure AI applications with OAuth 2.0 and OpenID Connect (OIDC)

Before we dive into the code, we need to register our apps with Okta. In this demo:

Agent0: the AI agent requesting app (makes the API call on behalf of the user)
Todo0: the resource app (owns the protected API)
Managed connection: the trust relationship between the two apps, created in Okta

We’ll create both apps in your Okta Integrator Free Plan account, grab their client credentials, and then connect them.

Enable Cross App Access in your Okta org

⚠️ Note: Cross App Access is currently a self-service Early Access (EA) feature. It must be enabled through the Admin Console before the apps appear in the catalog. If you don’t see the option right away, refresh and confirm you have the necessary admin permissions. Learn more in the Okta documentation on managing EA and beta features.

1. Sign in to your Okta Integrator Free Plan account
2. In the Okta Admin Console, select Settings > Features
3. Navigate to Early access
4. Find Cross App Access and select Turn on (enable the toggle)
5. Refresh the Admin Console

Create the resource app (Todo0)

1. In the Okta Admin Console, navigate to Applications > Applications
2. Select Browse App Catalog
3. Search for Todo0 - Cross App Access (XAA) Sample Resource App, and select it
4. Select Add Integration
5. Enter “Todo0” in the Application label field and click Done
6. Click the Sign On tab to view the Client ID and Client secret. You’ll need these for your .env.todo file

Create the requesting app (Agent0)

1. Go back to Applications > Applications
2. Select Browse App Catalog
3. Search for Agent0 - Cross App Access (XAA) Sample Requesting App, and select it
4. Select Add Integration
5. Enter “Agent0” in the Application label field and click Done
6. Click the Sign On tab to view the Client ID and Client secret. You’ll need these for your .env.agent file

Establishing connections between Todo0 & AI agent (Agent0)

1. From the Applications page, select the Agent0 app
2. Go to the Manage Connections tab
3. Under App granted consent, select Add requesting apps, select Todo0, then Save
4. Under Apps providing consent, select Add resource apps, select Todo0, then Save

Now Agent0 and Todo0 are connected. If you check the Manage Connections tab for either app, you’ll see that the connection has been established.

Set up a test user in Okta org

Now that the apps are in place, we need a test user who will sign in and trigger the Cross App Access flow.

Create the test user

1. In the Okta Admin Console, go to Directory > People
2. Select Add Person
3. Fill in the details:
   - First name: Bob
   - Last name: Tables
   - Username / Email: bob@tables.fake
4. Under Activations, select Activate now, mark ☑️ I will set password, and create a temporary password
5. Optional: mark ☑️ User must change password on first login
6. Select Save (if you don’t see the new user right away, refresh the page)

Assign the Okta applications to the test user

1. Open the Bob Tables user profile
2. Select Assign Applications
3. Assign both Agent0 (requesting app) and Todo0 (resource app) to Bob

This ensures Bob can sign in to Agent0, and Agent0 can securely request access to Todo0 on his behalf.

⚠️ Note: Bob will be the identity we use throughout this guide to demonstrate how Agent0 accesses Todo0’s API through Cross App Access.

Configure the Node.js Cross App Access project

With your Okta environment (apps and user) ready, let’s set up the local project. Before we dive into configs, here’s a quick look at what you’ll be working with.

Clone the repository:

git clone https://github.com/oktadev/okta-cross-app-access-mcp

Change into the project directory:

cd okta-cross-app-access-mcp

Open the VS Code Command Palette and run “Dev Containers: Open Folder in Container”.
To open the Command Palette, select View > Command Palette…, or use the keyboard shortcut Cmd+Shift+P (macOS) or Ctrl+Shift+P (Windows)

⚠️ Note: This sets up all dependencies, including Node, Redis, Prisma ORM, and Yarn.

The Cross App Access MCP project at a glance

okta-cross-app-access-mcp/
├─ packages/
│  ├─ agent0/                        # Requesting app (UI + service) – runs on :3000
│  │  └─ .env                        # Agent0 env (AWS creds)
│  ├─ todo0/                         # Resource app (API/UI) – runs on :3001
│  ├─ authorization-server/          # Local auth server for ID-JAG + token exchange
│  │  ├─ .env.agent                  # IdP creds (Agent0 side)
│  │  └─ .env.todo                   # IdP creds (Todo0 side)
│  └─ id-assert-authz-grant-client/  # Implements Identity Assertion Authorization Grant client logic
├─ .devcontainer/                    # VS Code Dev Containers setup
├─ guide/                            # Docs used by the README
├─ images/                           # Diagrams/screens used in README
├─ scripts/                          # Helper scripts
├─ package.json
└─ tsconfig.json

Configure OAuth 2.0 and AI foundation models environment files

At this point, you have:

Client IDs and Client Secrets for both Agent0 and Todo0 (from the Okta Admin Console)

Your Okta org URL, visible in the profile menu of the Admin Console. It usually looks like

https://integrator-123456.okta.com

This URL will be your IdP issuer URL and is shared across both apps.

Generate OIDC configuration and access token files

From the project root, run:

yarn setup:env

This scaffolds the following files:

packages/authorization-server/.env.todo
packages/authorization-server/.env.agent
packages/agent0/.env

Configure AI and resource application connection values

Open each file and update the placeholders with your org-specific values:

authorization-server/.env.todo

CUSTOMER1_EMAIL_DOMAIN=tables.fake
CUSTOMER1_AUTH_ISSUER=<Your integrator account URL>
CUSTOMER1_CLIENT_ID=<Todo0 client id>
CUSTOMER1_CLIENT_SECRET=<Todo0 client secret>

authorization-server/.env.agent

CUSTOMER1_EMAIL_DOMAIN=tables.fake
CUSTOMER1_AUTH_ISSUER=<Your integrator account URL>
CUSTOMER1_CLIENT_ID=<Agent0 client id>
CUSTOMER1_CLIENT_SECRET=<Agent0 client secret>

agent0/.env

AWS_ACCESS_KEY_ID=<your AWS access key id>
AWS_SECRET_ACCESS_KEY=<your AWS secret access key>

⚠️ Note:

- The issuer URL (CUSTOMER1_AUTH_ISSUER) is the same in both .env.todo and .env.agent
- The Client ID/Client secret values differ because they come from the respective apps you created
- AWS credentials are required only for Agent0 (requesting app)
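Because the servers will fail in confusing ways if a placeholder is left unfilled, a quick sanity check before starting can help. Here is a minimal sketch; this helper is hypothetical and not part of the sample project, though the variable names come from the env files above:

```javascript
// Hypothetical startup guard, not part of the sample project.
// Flags env vars that are absent or still contain an unfilled <placeholder>.
function findMissing(env, requiredKeys) {
  return requiredKeys.filter(
    (key) => !env[key] || env[key].startsWith('<')
  );
}

const required = [
  'CUSTOMER1_EMAIL_DOMAIN',
  'CUSTOMER1_AUTH_ISSUER',
  'CUSTOMER1_CLIENT_ID',
  'CUSTOMER1_CLIENT_SECRET',
];

// Example: a config where the client secret placeholder was never replaced.
const example = {
  CUSTOMER1_EMAIL_DOMAIN: 'tables.fake',
  CUSTOMER1_AUTH_ISSUER: 'https://integrator-123456.okta.com',
  CUSTOMER1_CLIENT_ID: 'abc123',
  CUSTOMER1_CLIENT_SECRET: '<Todo0 client secret>',
};
console.log(findMissing(example, required)); // → [ 'CUSTOMER1_CLIENT_SECRET' ]
```

Running the same check against process.env in each package before startup would surface configuration mistakes before any OAuth redirect fails.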
Register OAuth 2.0 redirect URIs for both apps

Finally, we need to tell Okta where to send the authentication response for each app.

For Agent0:

1. From your Okta Admin Console, navigate to Applications > Applications
2. Open the Agent0 app
3. Navigate to the Sign On tab
4. In the Settings section, select Edit

In the Redirect URIs field, add

http://localhost:5000/openid/callback/customer1

Select Save

Repeat the same steps for Todo0:

1. Open the Todo0 app
2. Go to the Sign On tab > Settings > Edit

In the Redirect URIs field, add:

http://localhost:5001/openid/callback/customer1

Select Save

Now both apps know where to redirect after authentication.

Initialize the database and run the project

With the apps and environment configuration in place, the next step is to prepare the local project, set up its databases, and bring both applications online.

Bootstrap the project

From the root of the repo, install all workspaces and initialize the databases:

yarn bootstrap

Since this is your first run, you’ll be asked whether to reset the database. Type “y” for both Todo0 and Agent0.

Run and access the apps in your browser

Once the bootstrap is complete, start both apps (and their authorization servers) with:

yarn start

Open the following URLs in your browser:

Todo0 (Resource App): http://localhost:3001
Agent0 (Requesting App): http://localhost:3000

At this point, both apps should be live and connected through Okta. 🎉

Testing the XAA flow: From Bob to Agent0 to Todo0

With everything configured, it’s time to see Cross App Access in action.

Interact with Todo0, the XAA resource app, by creating tasks

1. In the Work Email field, enter bob@tables.fake, and select Continue
2. You’ll be redirected to the Okta login page. Sign in with the test user credentials:
   - Username: bob@tables.fake
   - Password: the temporary password you created earlier
3. The first time you sign in, you’ll be prompted to set a new password and enroll in Okta Verify for MFA
4. Once logged in, add several tasks to your to-do list
5. Select one of the tasks and mark it as complete to verify that the application updates the status accurately

Let the AI agent, the requesting app, access your todos

1. Open the Agent0 app in your browser
2. Select Initialize to set up the AWS Bedrock client. Once connected, you’ll see the following message:
   ✅ Successfully connected to AWS Bedrock! You can now start chatting.
3. Select the Connect to IdP button. Behind the scenes, Agent0 requests an identity assertion from Okta and exchanges it for an access token to Todo0
4. If everything is configured correctly, you’ll see the following message:
   Authentication completed successfully! Welcome back.
5. To confirm that Agent0 is actually receiving tokens from Okta, open a new browser tab and navigate to http://localhost:3000/api/tokens. You should see a JSON payload containing accessToken, jagToken, and idToken. This verifies that Agent0 successfully authenticated through Okta and obtained the tokens needed to call Todo0
6. Now interact with Agent0 using natural prompts. For example, write this prompt: What's on my plate in my to-do list?

⚠️ Note: Agent0 will call the Todo0 API using the access token and return your pending tasks
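The /api/tokens check described above can also be scripted. Below is a minimal sketch; the helper function is hypothetical, and only the URL and the three field names come from the walkthrough:

```javascript
// Hypothetical helper that validates the shape of the /api/tokens response
// described above; only the URL and field names come from the walkthrough.
function hasAllTokens(payload) {
  return ['accessToken', 'jagToken', 'idToken'].every(
    (key) => typeof payload[key] === 'string' && payload[key].length > 0
  );
}

// Against a running Agent0 (after `yarn start`), you could do:
//   const payload = await (await fetch('http://localhost:3000/api/tokens')).json();
//   console.log(hasAllTokens(payload));

// Offline example with a stand-in payload:
const sample = { accessToken: 'eyJ...', jagToken: 'eyJ...', idToken: 'eyJ...' };
console.log(hasAllTokens(sample)); // → true
```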

Let’s try some more prompts:

- Ask Agent0 to add a new task
- Ask it to mark an existing task complete
- Refresh the Todo0 app — you’ll see the changes reflected instantly

Behind the scenes: the OAuth 2.0 Identity Assertion Authorization Grant

✅ Bob Tables logs in once with Okta
⏩ Agent0 (requesting app) gets an identity assertion from Okta
🔄 Okta vouches for Bob and exchanges that assertion for an access token
👋 Agent0 uses that token to securely call the Todo0 (resource app) API
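For the curious, the two token requests behind this flow can be sketched roughly as follows. This is an illustration based on the IETF Identity Assertion Authorization Grant draft, not the sample repo's actual code; the parameter names are assumptions and may change as the draft evolves:

```javascript
// Illustrative sketch of the Identity Assertion Authorization Grant, per the
// IETF draft; parameter names are assumptions and may differ from the sample
// repo's authorization-server implementation.

// Step 1: the requesting app (Agent0) asks its IdP (Okta) to exchange the
// user's ID token for an identity assertion (ID-JAG) scoped to the resource app.
function buildJagExchange(idToken, resourceAudience) {
  return new URLSearchParams({
    grant_type: 'urn:ietf:params:oauth:grant-type:token-exchange',
    requested_token_type: 'urn:ietf:params:oauth:token-type:id-jag',
    subject_token: idToken,
    subject_token_type: 'urn:ietf:params:oauth:token-type:id_token',
    audience: resourceAudience,
  });
}

// Step 2: Agent0 presents the ID-JAG to the resource app's authorization
// server, which returns the access token used to call the Todo0 API.
function buildJwtBearerGrant(jagToken) {
  return new URLSearchParams({
    grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    assertion: jagToken,
  });
}

console.log(buildJagExchange('eyJ...', 'http://localhost:5001').get('audience'));
// → http://localhost:5001
```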

🎉 Congratulations! You’ve successfully configured and run the Cross App Access project.

Need help setting up secure cross-domain enterprise AI application access?

If you run into any issues while setting up or testing this project, feel free to post your queries to the forum: 👉 Okta Developer Forum

If you’re interested in implementing Cross App Access (XAA) in your own application — whether as a requesting app or a resource app — and want to explore how Okta can support your use case, reach out to us at: 📩 xaa@okta.com

Learn more about Cross App Access, OAuth 2.0, and securing your applications

If this walkthrough helped you understand how Cross App Access works in practice, you might enjoy diving deeper into the standards and conversations shaping it. Here are some resources to continue your journey:

- 📘 Cross App Access Documentation – official guides and admin docs to configure and manage Cross App Access in production
- 🎙️ Developer Podcast on MCP and Cross App Access – hear the backstory, use cases, and why this matters for developers
- 📄 OAuth Identity Assertion Authorization Grant (IETF Draft) – the emerging standard that powers this flow

If you’re new to OAuth or want to understand the basics behind secure delegated access, check out these resources:

- What the Heck is OAuth?
- What’s the Difference Between OAuth, OpenID Connect, and SAML?
- Secure Your Express App with OAuth 2.0, OIDC, and PKCE
- Why You Should Migrate to OAuth 2.0 From Static API Tokens
- How to Get Going with the On-Demand SaaS Apps Workshops

Follow us on LinkedIn, Twitter, and subscribe to our YouTube channel for more developer content. If you have any questions, please leave a comment below!


SC Media - Identity and Access

Cloudflare, Palo Alto Networks affected by Salesloft Drift attack campaign

Attackers gained access to customer contact and support case information in Salesforce.



auth0

An Accessible Guide to WCAG 3.3.8: Authentication Without Frustration

Logging in can be tough for users with cognitive disabilities. WCAG's Success Criterion 3.3.8, "Accessible Authentication," provides guidance.

Tuesday, 02. September 2025

SC Media - Identity and Access

Rinoa Poison, Scambaiter Extraordinaire - Rinoa Poison - SWN #508


ComplyCube

How to Comply with Failure to Prevent Fraud

The UK’s Failure to Prevent Fraud offence holds large firms liable for fraud by employees or agents unless “reasonable procedures” are in place. Finance and fintech face early scrutiny, with the SFO leading enforcement. The post How to Comply with Failure to Prevent Fraud first appeared on ComplyCube.



auth0

API Keys and AI Agents: Four Common Risks

Building powerful AI agents means connecting them to third-party APIs, but just pasting in API keys is a recipe for disaster. Dive into four unique security risks and how to solve for them with Auth0's Token Vault.

Thales Group

Securing the Backbone: Defending the UK’s Critical Infrastructure in an Era of Hybrid Threats


From hostile state actors probing our networks to drones targeting physical sites, the threats facing the UK’s critical national infrastructure (CNI) are escalating in scale, sophistication, and persistence. In the defence and security community, the urgency is clear: the defence systems we need to have in place are more than assets - they are the backbone of our national resilience and military capability.

The Strategic Defence Review (SDR) 2025 makes this point explicitly. Energy distribution, transportation, communications, and defence supply chains form an interconnected web. A disruption in one domain can ripple rapidly across others - compromising not just economic stability, but operational readiness in defence.

At Thales, we believe the UK’s approach to CNI protection must evolve in step with the threat landscape, combining policy innovation, public–private collaboration, and the latest cyber-physical security technologies.

From energy to defence: a shared threat landscape

While much recent debate has centred on the resilience of energy systems, the principles and technologies required to protect them are equally applicable to defence infrastructure. In fact, energy security is a direct enabler of military operations - from powering bases and command centres to sustaining defence industrial bases.

Three overlapping categories of risk demand urgent attention:

•    Systemic threats - such as the lack of redundancy in critical systems, reliance on aging infrastructure, and exposure to extreme weather events that can degrade operational availability.
•    Cyber threats - including state-sponsored campaigns designed to undermine national defence capability, ransomware operations intended to cause economic and operational disruption, and supply chain compromises that plant dormant vulnerabilities in hardware and software.
•    Physical threats - ranging from sabotage and insider action to the increasing weaponisation of drones for surveillance and direct attack.

These threats are not hypothetical. They are present, active, and increasingly integrated - part of a hybrid warfare model that blurs the lines between civilian and military targets.

Policy and the Defence–CNI nexus

Recent legislative and regulatory moves, including the Cyber Security and Resilience Bill and SDR 2025, point towards a “whole-of-nation” model for resilience. They mandate improved incident reporting, greater protections for critical suppliers, and a more integrated approach between government, industry and academia.

For the defence sector, this shift is pivotal. Base security, command-and-control resilience, and logistics protection all rely on the same secure energy, communications, and transportation systems used by civilian operators. Strengthening these systems strengthens the UK’s defensive position.

Strategic principles for defence-ready CNI protection

Thales’ experience in securing high-consequence systems worldwide suggests eight strategic pillars for enhancing CNI resilience, each of which applies equally to defence and energy infrastructure:

1. Secure and resilient by design: Build resilience into systems from inception, using Digital Twin modelling to simulate failures and interdependencies. For defence applications, this can stress-test mission-critical infrastructure under cyber-attack and kinetic strike scenarios.
2. Tailored threat intelligence: Defence and CNI operators need intelligence that fuses public, private, and classified data sources, tracking adversary tactics across open, deep, and dark web channels. This enables early detection of hostile reconnaissance and emerging attack vectors.
3. Zero trust architecture: Move beyond perimeter security to verification-based access control. For military networks and base systems, this approach minimises the blast radius of any breach.
4. Regular cyber audits: Continuously assess systems against frameworks such as the Cyber Assessment Framework (CAF) and DEFSTAN standards, ensuring compliance and proactive risk mitigation.
5. Cyber-physical systems protection: Defence facilities operate complex operational technology (OT) - from airfield lighting and radar systems to weapons storage environments. Securing these systems requires specialised approaches that integrate safety and security.
6. Rapid detection and response: Use AI-enabled monitoring for real-time containment of known threats, while maintaining trained human teams to handle novel or adaptive attacks.
7. Physical protection measures: Enhance perimeter defence with integrated surveillance, private 4G/5G IoT networks for secure communications, and counter-drone systems - including advanced RF-Directed Energy Weapons (RF-DEW) where policy allows.
8. Training and exercising: Build readiness through regular cyber and physical incident simulations, testing both technology and decision-making under pressure.

Hybrid threats demand hybrid solutions

The reality is that adversaries are increasingly combining cyber and physical tactics. A drone swarm might be used as a diversion while a cyber intrusion targets base logistics systems. Or, a ransomware attack on civilian power distribution could degrade a defence installation’s operational capability.

Countering this requires integrated solutions that merge cyber defence, physical security, intelligence, and operational resilience. Thales supports operators through managed service models that deliver these capabilities “as-a-service,” ensuring agility without the need for constant capital investment.

Key capabilities include:

•    Digital Twins and simulation to anticipate vulnerabilities and automate corrective actions.
•    End-to-end cyber security, from audit and governance through to Zero Trust implementation.
•    Counter-drone protection aligned to site criticality and live threat data.
•    Threat-specific training and exercises to harden both technical systems and human responses.

The Call for Defence–Industry Partnership

No single organisation can defend CNI in isolation. SDR 2025 rightly calls for enhanced collaboration between the Ministry of Defence, government agencies, industry, and academia. 

This includes:
•    Information sharing to accelerate detection and response.
•    Joint investment in testbeds, training facilities, and advanced technology trials.
•    Shared services to make cutting-edge capabilities available to both large and small operators.

Thales’ role in this ecosystem is as a trusted partner - bringing decades of experience in securing military systems, delivering sovereign technology, and integrating cyber and physical defence.

Resilience as a Force Multiplier

In a contested and uncertain world, resilience is not simply about preventing disruption - it is about ensuring operational continuity in the face of disruption. For defence, this translates directly into readiness, deterrence, and the ability to project force.

By embedding secure-by-design principles, leveraging AI and advanced modelling, and integrating cyber-physical protection into every layer of infrastructure, the UK can stay ahead of evolving threats.

See Us at DSEI 2025

The UK’s defence and CNI operators face a common challenge: ensuring that the backbone of national capability - from energy to defence bases, from logistics hubs to communications networks - remains secure, resilient, and mission-ready.

At DSEI 2025, Thales will showcase our latest integrated solutions for cyber and physical security, Digital Twin resilience modelling, and advanced counter-drone technologies.

Visit the Thales stand to explore how we can work together to secure your mission-critical infrastructure and give the UK a decisive resilience advantage.

You can also download our full whitepaper on Critical National Infrastructure protection below.


AI-powered mission-critical systems


Across Defence, mixed fleets, unstructured data and constrained comms are today’s reality—while the threat cycle keeps accelerating. At DSEI, demonstrations across our stand show how AI-powered mission-critical systems deliver trusted, explainable AI from sense to effect—keeping humans in control, moving at pace, and scaling outcomes with partners worldwide.

See more. Decide faster. Act smarter – with humans in control.

Mission context: pressures and priorities

Commanders need uplift now: faster, clearer decisions in contested environments, without losing accountability. Defence organisations require primes and SMEs to collaborate with agility, iterate safely, and deliver measurable advantage today—not in a decade.

From principle to practice

cortAIx accelerates adoption of AI-powered mission-critical systems, bringing trusted, explainable AI into sensors and platforms—designed for constrained, safety-critical environments and with meaningful human control.

Globally, cortAIx brings together best-in-class capability—either developed directly by Thales or by our SME, big tech and academic partners—under open, interoperable architectures. This reduces time-to-value, speeds integration, and optimises investment across coalitions and domains.

Security and governance are built-in: model assurance, zero-trust digital backbones, and operator-centred interfaces that explain machine rationale—ensuring trust and accountability at tempo.

What’s new at DSEI

•    DigitalCrew computer vision: reducing crew burden and enhancing situational awareness.
•    LLM-powered maintenance assistant: exploring generative AI for faster availability.
•    Behaviour alerting: deep-learning anomaly detection to shorten sense-to-decide loops.
•    AI for sonar (Type 2087/CAPTAS): improving detection and classification.

Proof of momentum

•    Autonomous mine countermeasures (MMCM): an AI-enabled, unmanned “system of systems” now in service with navies.
•    DigitalCrew operational demonstrations: fused sensing and AI that improve situational awareness and decision tempo.
•    AI applied across domains: from undersea to land and air, with trusted autonomy, decision support and predictive maintenance patterns fielded worldwide.

Why it matters

Trusted AI is now central to availability, tempo and survivability. Defence organisations globally are calling for partners who can deliver faster—while keeping humans firmly in control. cortAIx turns that ambition into reality: AI-powered systems that measurably improve sensing, deciding and acting, today.

See it at DSEI (Stand S8-110)

•    Start at the AI wall for the overview.
•    Discuss DigitalCrew, MMCM or HMT with our subject matter experts.
•    Explore demonstrations across the stand showing AI in action.

Related reads

•    Enabling the Future Force
•    DigitalCrew: fight faster and smarter
•    Autonomous mine countermeasures (MMCM)
•    cortAIx: AI-powered mission-critical systems

liminal (was OWI)

Fighting Third-Party Fraud

The post Fighting Third-Party Fraud appeared first on Liminal.co.



SC Media - Identity and Access

The hidden costs of fraud: Beyond financial loss

By shifting fraud prevention earlier in the digital journey—during account creation and login—organizations can cut off fraud before it escalates.



ScreenConnect super admins targeted in spearphishing campaign

Attackers use the EvilGinx framework to harvest credentials and MFA tokens.



The ultimate guide to online fraud prevention

With global cybercrime costs projected to exceed $8 trillion, companies must recognize that fraud prevention is no longer optional—it’s a business-critical function.



The new rules of fraud prevention: Keeping out fraudsters, not customers

The old rulebook—focused on stopping fraud only at the point of transaction—is no longer enough.



Dock

5 Identity Gaps That Put AI Agents at Risk


AI agents will soon be booking travel, managing workflows, and making purchases on our behalf. By next year, non-human agents may outnumber human users online. 

The problem is, our identity systems were built for people, not for autonomous software.

During our recent “Know Your Agent” live session with Peter Horadan, CEO of Vouched, we went through the five critical identity problems we need to solve before agents become the default way we interact online:


Elliptic

Crypto regulatory affairs: From China to Russia to South Korea to the EU - Stablecoin and digital payments work accelerates following US GENIUS Act

The passage of major stablecoin legislation in the United States this summer is prompting countries around the world to reassess their strategies and timelines around digital asset-linked payments - demonstrating that innovation in the stablecoins and digital payments space has geopolitical implications. 



Spherical Cow Consulting

Roads, Robots, and Responsibility: Why Agentic AI Needs Identity Infrastructure


“We don’t spend much time thinking about the roads we drive on—until one cracks, collapses, or dumps us somewhere we didn’t mean to be.”

Identity in the age of agentic AI? Same deal. It’s infrastructure. And just like a good road system, it needs to be engineered with care, built on solid standards, and ready for traffic we can’t even imagine yet.

Right now, autonomous agents are already taking actions on behalf of people and businesses—booking meetings, writing and summarizing emails, pushing code, moving money. Which means we should probably stop and ask: how are those identity and access decisions getting made? Are they secure? Reviewed? Built to best practices? Or are we flooring it across an uninspected bridge, hoping the potholes aren’t too deep?

The protocols making this possible—things like the Model Context Protocol (MCP) and Google’s Agent2Agent (A2A)—are still wet cement. If we want to go from today’s cow paths (cow poop included) to tomorrow’s superhighways, we can’t just slap on more lanes later. We need a strong identity layer poured in from the start.

This post is based on a keynote I gave recently at a large corporate event, where the audience was asking the right questions. If you’re building or maintaining systems that will eventually include autonomous agents, or you’re already there, this is for you.

A Digital Identity Digest podcast episode: Roads, Robots, and Responsibility: Why Agentic AI Needs Identity Infrastructure (12:22).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

What I mean by identity, identity infrastructure, and agentic AI

“Identity” and “identity infrastructure” can mean different things depending on who you ask. (Get a hundred IAM professionals in a room and you’ll get a thousand definitions.) Since this is my blog post, here’s how I’m using the terms:

Identity – a persistent, verifiable representation of an entity—human or non-human—that other systems can use to decide what it can do, when, for what purpose, and under what conditions.

Identity infrastructure – the shared, stable, and standards-based systems, protocols, and governance that make those identities usable across teams, organizations, and technologies, securely, interoperably, and at scale.

Agentic AI – borrowing NVIDIA’s phrasing, an AI system (often powered by large language models) with sophisticated reasoning and iterative planning that can autonomously solve complex, multi-step problems. The key word here is autonomous. Generative AI creates content; agentic AI takes action.

Without grounding in these definitions, it’s easy to talk past each other. With them, we can focus on the real issue: building identity infrastructure that works across both human and non-human actors, especially when those non-humans are making decisions at machine speed.

AI’s upside is real, but it’s missing a foundation

When most people talk about AI, we talk about the upside:

Faster iteration cycles
Smart automation
Real productivity gains
Code generation
Helpful chatbots that can field questions at scale

GitHub’s Octoverse report showed a 59% surge in contributions to generative AI projects and a 98% increase in the number of projects overall. Many contributions came from India, Germany, Japan, and Singapore. Interestingly, they also reported that AI hasn’t flooded open source with low-quality junk—if anything, it’s drawing more people into development. (I’m not sure I believe their assertion about the junk. That doesn’t match what I’m hearing anecdotally, but then again, that’s why there are actual studies to balance perception with facts.)

That’s all impressive, even when the results aren’t perfect. These tools are still young, evolving fast, and unlocking new creativity across the stack.

But there’s a missing question in all this excitement: who is acting? On whose behalf? And with what authority?

That’s the identity layer. Without it, all this innovation becomes harder to govern, harder to scale, and harder to trust.

Agents are already in your systems

This isn’t hypothetical. Agents are in your tools, updating dependencies, answering tickets, creating calendar invites, summarizing documents, pushing code, and talking to customers.

Microsoft’s 2025 Work Trend Index reports that global leaders rank customer service, marketing, and product development as the top three areas for accelerated AI investment in the next 12–18 months. Among leading-edge companies, 73% will use AI for marketing and 66% for customer success; even internal communications sees 68% adoption.

That’s a lot of automation acting in our name. Without clear identity controls, there is also a lot of potential for AI “marketing fails” or, worse, high-stakes errors.

A few examples:

A rogue AI coding assistant wiped out a startup’s production database.
AI-powered recruiting software rejected qualified applicants based purely on age and gender, landing the company in court with the EEOC.

These tools are powerful and fast—but oversight around identity and accountability hasn’t kept up.

Identity isn’t just a login box

Identity is infrastructure. And infrastructure is more than a username and password. When humans act, we typically have an audit trail: who did what, when, and why. We rely on login sessions, logs, access controls, and behavioral patterns.

But when AI agents act, especially ones with high autonomy, we need something more durable. We need fine-grained delegation models, audit trails tied to machine-driven decisions, and identity primitives that work across humans and non-humans alike.

Identity systems that recognize both human and non-human actors
Delegation models that can express “who can do what, for whom, under what conditions”
Clear provenance: who authorized the action, and is it appropriate in this context?
Verifiability, so we can prove what happened after the fact

Without that infrastructure, the entire agentic AI ecosystem risks becoming a black box. And for security teams, DevOps leads, and auditors, that’s a non-starter.

The right questions lead to better systems

If an agent makes a change, you should be able to answer: Was it authorized? Who delegated the authority? What policy applied?

Microsoft’s report hints at this by asking leaders: how many agents are needed for which roles and tasks, and how many humans to guide them? Those are good but very surface-level questions.

We can push further:

Do you have enough data to clearly scope the role for an AI?
Can you give it only the access it needs, when it needs it, for the specific task at hand?

These questions aren’t just risk management. They’re a chance to improve system hygiene and clarity across the board.

Protocols are evolving but identity hasn’t caught up

You might be thinking: okay, so what’s out there to support this?

Protocols like the Model Context Protocol (MCP) and Agent2Agent (A2A) messaging are early candidates. They enable agents to communicate and coordinate in powerful ways. But they were designed to simplify agent-to-agent communication; they weren’t designed with identity in mind.

Even folks who helped shape OAuth are wrestling with how traditional delegation models fit—or don’t fit—into this space. The communication protocols aren’t broken; they’re just early. Identity hasn’t caught up yet.
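One place where delegation semantics do already exist is OAuth 2.0 Token Exchange (RFC 8693), whose `act` (actor) claim records who is acting on whose behalf inside a token. A sketch of what an agent’s token claims might look like; the issuer URL and subject identifiers are invented for illustration:

```python
import json
import time

# Hypothetical claims for a token issued to an AI agent acting for a user.
# RFC 8693's "act" claim nests the acting party inside the token, so
# resource servers and logs can see both the subject and the actor.
claims = {
    "iss": "https://idp.example.com",
    "sub": "user:alice",                  # the delegating human
    "act": {"sub": "agent:triage-bot"},   # the agent actually acting
    "scope": "tickets:read tickets:comment",
    "exp": int(time.time()) + 3600,       # time-boxed, like any delegation
}

def acting_party(claims: dict) -> str:
    """Return the actor if the token carries delegation, else the subject."""
    return claims.get("act", {}).get("sub", claims["sub"])

print(acting_party(claims))  # agent:triage-bot
print(json.dumps(claims, indent=2))
```

This doesn’t solve agent identity on its own, but it shows the shape of the answer: the “who is acting, for whom” question becomes a first-class, auditable field rather than an assumption.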

And if we don’t make faster progress on these issues, we’ll be forever retrofitting trust into systems that were never built to handle it.

Why this can’t be proprietary

You might be tempted to solve this in-house. Build your own delegation model, your own trust chain, your own method for agentic AI authorization. This scenario freaks me out. If every organization invents its own approach to agent identity, we’ll end up right back where we started, in a world of fragile integrations, inconsistent assumptions, and big gaps in accountability.

We’ve ALL seen this before, and the result is always the same:

Fragile integrations
Misaligned assumptions between systems
Gaps in visibility and accountability
Security holes you can drive a nation-state through

That’s why open standards matter, not as a checkbox, but as the only viable way to scale trust across systems, companies, and industries.

And to be clear, “open” doesn’t just mean “you can download the spec.” It means:

Shared governance
Transparent development
Real-world applicability
Participation from a broad mix of stakeholders, including security, product, legal, and compliance

This isn’t easy work. But it’s the work that makes the rest possible. And when it works, we get something better than “compliant.” We get trustworthy infrastructure that scales.

What to do now—before the collapse

So where does that leave us?

If you’re building agentic AI capabilities into your platform, or even just experimenting with automation, you’re already laying infrastructure. The question is whether that infrastructure will support accountability, or collapse under the weight of delegation you can’t verify. Either we bolt identity onto agentic systems after the fact, or we treat identity like the infrastructure it is, and build it into the foundation.

You don’t need to have all the answers today. But you do need to start asking better questions:

Is identity part of the design, or bolted on later?
Are we modeling trust relationships clearly, or making assumptions?
Will our logs stand up in an audit, or are we relying on magic?

Start there.
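On the logging question specifically: one way to make agent audit trails more than “magic” is a hash chain, where each entry commits to the previous one, so any after-the-fact edit is detectable. A minimal stdlib sketch, not a substitute for a real audit system:

```python
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "genesis"
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        expect = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expect:
            return False
        prev = rec["hash"]
    return True

log: list = []
append_entry(log, {"actor": "agent:triage-bot", "action": "tickets:comment"})
append_entry(log, {"actor": "agent:triage-bot", "action": "tickets:close"})
print(verify_chain(log))                       # True
log[0]["entry"]["action"] = "tickets:delete"   # tamper with history
print(verify_chain(log))                       # False
```

The point isn’t this particular scheme; it’s that verifiability has to be designed in, so the log can prove what happened rather than merely assert it.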

And if you’re in a position to influence the broader direction of the industry, join a standards group. Challenge assumptions in product reviews. Push for interoperability, not lock-in. Make identity part of the foundation, not just a feature.

We don’t have to wait for things to fall apart. We can build roads we actually want to drive on.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Roads as a Metaphor

[00:00:29] Welcome back to A Digital Identity Digest. I’m Heather Flanagan, and today we’re going to talk about roads. Yes, roads. They’re an amazing metaphor, and I’m just going to drive this one all night long.

[00:00:42] We usually don’t think about the roads we drive on—until one cracks, collapses, or leaves us stranded somewhere we never meant to be.

[00:00:49] Identity in the age of agentic AI works the same way. It is infrastructure. And like any good road system, it must be:

Engineered with care
Built on solid standards
Ready for traffic we can’t even imagine yet

The Rise of Autonomous Agents

[00:01:04] Autonomous agents are already taking actions on behalf of people and businesses. They’re:

Booking meetings
Writing and summarizing emails
Pushing code
Moving money

[00:01:14] Which raises the key question: how are identity and access management decisions being made for those actions?

Are they secure? Reviewed? Designed according to best practices? Or are we flooring it across an uninspected bridge, hoping the potholes aren’t too deep?

Protocols in Wet Cement

[00:01:34] Many of the protocols enabling this—such as Model Context Protocol (MCP) and Google’s Agent-to-Agent (A2A)—are still wet cement.

[00:01:44] If we want to move from today’s cow paths (cow poop included) to tomorrow’s superhighways, we can’t just slap on more lanes later. We need a strong identity layer poured in from the start.

Defining Identity and Agentic AI

[00:02:19] Let’s pause and define a few key terms. Because “identity” can mean wildly different things depending on who you ask.

Identity → A persistent, verifiable representation of an entity (person or machine) that other systems use to decide what it can do, when, and under what conditions.

Identity Infrastructure → Shared, stable, standards-based systems and governance that make identity portable, interoperable, and reliable at scale.

Agentic AI → Borrowing from Nvidia: AI, usually powered by large language models, that doesn’t just generate code but plans and reasons through complex multi-step problems on its own.

[00:03:46] Generative AI writes things.
[00:03:52] Agentic AI acts on things.

And that difference matters.

Productivity Gains vs. Identity Risks

[00:04:11] Conversations around agentic AI often emphasize upsides:

Faster iteration cycles
Smarter automation
Productivity gains
Code generation
Scalable chatbots

[00:04:25] GitHub’s Octoverse report shows:

59% surge in contributions to generative AI projects
98% increase in overall projects
Growth driven by developers in India, Germany, Japan, Singapore, and Latin America

[00:05:15] But what’s often missing is the question: who or what is acting on whose behalf, and with what authority? Without identity, this innovation becomes harder to govern, scale, and trust.

Real-World Consequences

[00:06:19] Consider these examples:

An AI coding assistant that wiped out a startup’s production database.
AI-powered recruiting software that rejected qualified applicants based on age and gender, resulting in lawsuits.

[00:06:47] These tools are fast and powerful—but oversight around identity and accountability has not caught up.

Why Identity Infrastructure Matters

[00:06:59] Infrastructure is more than usernames and passwords. When humans act, we leave audit trails.

[00:07:15] But when AI agents act at machine speed, we need more durable systems:

Identity recognition for both human and non-human actors
Delegation models clarifying who can do what for whom
Provenance signals to confirm authorization
Verifiability to prove what happened

[00:07:42] Without this infrastructure, agentic AI becomes a black box—and that’s a nonstarter for security teams, DevOps leads, and auditors.

Open Standards, Not DIY

[00:09:34] You may be tempted to build your own delegation models and trust chains.

[00:09:42] Please don’t.

Doing so leads to:

Fragile integrations
Misaligned assumptions
Gaps in visibility and accountability
Security holes you could drive a nation-state through

[00:09:56] That’s why open standards matter—not as a compliance checkbox, but as the only viable way to create scalable trust across companies and industries.

Building Roads That Last

[00:10:27] If you’re building agentic AI capabilities, you’re already laying down infrastructure. The question is:

Will your road support accountability? Or will it collapse under unverifiable delegation?

[00:10:49] Ask yourself:

Is identity part of the design—or bolted on later?
Are trust relationships clearly modeled—or just assumed?
Will logs stand up in an audit—or are you relying on magic?

[00:11:03] If you want to shape the standards of the future, join standards groups, challenge assumptions in product reviews, and push for interoperability—not lock-in.

[00:11:21] We don’t need to wait for the bridge to collapse. We can build roads we actually want to drive on.

Closing Thoughts

[00:11:28] Thanks for listening to A Digital Identity Digest. If this sparked questions or gave you something to debate, share it with your colleagues—the more voices in this conversation, the stronger our identity infrastructure can be.

[00:11:46] If you enjoyed this episode:

Share it with a friend or colleague
Connect with me on LinkedIn
Subscribe and leave a rating or review on Apple Podcasts or wherever you listen
Read the full post at sphericalcowconsulting.com

Stay curious, stay engaged, and let’s build identity systems that last.

The post Roads, Robots, and Responsibility: Why Agentic AI Needs Identity Infrastructure appeared first on Spherical Cow Consulting.


Thales Group

A tank commander’s take on Challenger 3: delivering overmatch to a leaner, meaner army

Tue, 09/02/2025

By Syd, Sales Manager, Thales in the UK

Lethality starts with what you can see. This is as true now as it was back when I was stationed far outside Basra’s city walls. The success of my unit’s mission was defined not by the size of our shells but by the power of our sighting systems.

I joined the British Army and would go on to serve 25 years with armoured fighting vehicles at the heart of it all: first within the Heavy Tank Regiment working on Chieftain through to commanding Challenger 2 as part of the Queen’s Royal Lancers.

Later, I crewed smaller, more agile Combat Reconnaissance Vehicles (CRV) like the Jackal and Husky. I learned which vehicles could pack a punch and which ones helped them punch harder (and first).

Beyond that, I came to understand how sensors, sights and stabilisation systems could mean the difference between mission success and stasis. I’ve worked with many different combinations of such systems, fitted to successive incarnations of armoured fighting vehicles, each adapting and evolving in response to burgeoning threats and operational demands. I recall the strain of operating manual gunnery over rough terrain, guided only by tired eyes and cold glass – a challenge gradually offset by innovations like thermal sights, laser rangefinders and basic stabilisation.

Soon, the British Army will operate its “most lethal and survivable tank”. Fitted with sensors and kitted with advanced sighting systems, Challenger 3 represents a mighty leap in armoured capability. With monumental change comes proportional challenge, however, and this next generation of Main Battle Tank (MBT) is emerging against a backdrop of resource and force structure constraints.

“We’re not ready for what is coming our way in four or five years.”

Mark Rutte’s recent warning comes amidst a swirl of strategies and sentiment increasingly focused on UK resilience. We must, according to the 2025 Strategic Defence Review, “move to warfighting readiness”. Yet questions persist around the UK’s ability to defend against threats that come from everywhere, all the time. Concerns around capability gaps are oft-repeated, and Challenger 3 has not been exempt from such headlines.

The Challenger 3 programme’s commitment to 148 tanks by 2030, for instance, is below the threshold that some deem typical for a combat division (170+). Timescales, too, present issues. While trials for Challenger 3 are progressing at pace – with basic firing and structural strength already validated – the imperative now is to complete trials and deliver these advanced systems to the Army. 

Such strategic implications have operational ramifications. In an era where the UK will field fewer MBTs than its potential adversaries, every Challenger 3 must punch above its weight.

Challenger 3: Outnumbered but never outgunned

In planning to triple its lethality by the end of the decade, the British Army has defined the strategic end state for a force that must increasingly do more with what it already has. The ways and means to achieving these ends are writ large in initiatives, projects and programmes like ASGARD and Land ISTAR with their focus on integrated sensors, systems and effectors for a better-connected, better-protected and perpetually prepared force.

Challenger 3 stands out as a heavy-metal example of what this approach looks like in practice, where stabilised, multi-sensor sighting systems amplify the value and impact of every single vehicle. So, commanders can see first. Operators can react sooner. Gunners can shoot faster, armed with Thales’ periscopic stabilised TrueHunter Gunner sight which helps them fire with precision when on the move, even in the most hostile of environments. 

Thales is also integrating sophisticated algorithms into its next-generation sighting systems to assist the operator at the sharp end of the fight – enabling faster, more accurate target detection, acquisition and tracking, and ultimately complementing the human crew with a new DigitalCrew®.

More broadly, decision makers can achieve overmatch not through more tanks but better intel. Intuitive interfaces and panoramic sighting systems like the TrueHunter Commander Sight converge to streamline and accelerate long-range surveillance, threat detection and target engagement.

The relieving effect these have on an operator’s headspace cannot be overstated. The more straightforward questions I had when serving – Have I stopped moving? Is the graticule cutting through to the target? – have been replaced by those more reflective of today’s complex operating environment: How many targets can I engage? Do I have the drone in my sight? When operators must now ask and answer these in less time and under more pressure, any capability that shoulders their stress, fatigue and mental load becomes ever more essential to mission success.

Recce-strike, reinvented

Perhaps the most important shift is the easiest to overlook, though I’ve the benefit of my time in the commander’s seat across strike and reconnaissance vehicles to bring it into sharp relief.

In the past, MBTs would wait for recon units to find the enemy, feedback target data and provide battlefield context. As MBTs have evolved, so too has their role. Modern platforms like Challenger 3 are no longer just recipients of reconnaissance; rather, they extend and strengthen the recce-strike kill chain. In this way – and in multi-domain operations, where coordination between recon and strike assets is critical – Challenger 3’s integrated capabilities force multiply the value of both MBTs and more agile reconnaissance forces.

Stabilised, long-range sights, for instance, help Challenger 3 to exploit recon data at the pace of relevance and point of need. Its operators, armed with this intel, can act swiftly and with precision, minimising collateral damage and maximising efficiency.

Posturing for the future

Despite swapping the commander’s seat in defence for a desk chair within industry, I can still imagine the sense of anticipation that service personnel must feel on reading any new Challenger 3 headline. I share it too. More than a tank, this highly integrated battlefield platform will transform how they Observe, Orient, Decide and Act.

As NATO allies rally around the most potent, important threat to Western democratic values in decades, further proposed advancements to Challenger 3’s capabilities – such as AI-assisted target recognition and deeper integration with real-time ISR feeds – promise to sharpen a competitive edge that the UK so desperately needs.

It’s a need that can’t be met with endless investment into shinier kit or new, exquisite capabilities. We must instead shore up what we have. Whitehall knows this; so too does the British Army. Programmes and platforms like Challenger 3 come in response to this complex, enduring problem – one that industry suppliers, both large and small, stand ready to solve alongside UK MoD. 

04 Sep 2025 | United Kingdom

iComply Investor Services Inc.

Legal KYC and AML: What Global Law Firms Need to Know About Client Verification

Law firms face growing AML pressure worldwide. This guide shows how to streamline compliance workflows without compromising client confidentiality or jurisdictional privacy laws.

Law firms face rising global AML expectations, especially for client onboarding, source of funds checks, and beneficial ownership verification. This article explores evolving KYC and KYB rules across Canada, the UK, the U.S., Australia, and the EU – and how iComply automates compliance without compromising client confidentiality.

For legal professionals, client trust is everything. But across key jurisdictions, law firms are being asked to do more: verify client identity, trace beneficial ownership, and flag suspicious behaviour—all while protecting solicitor-client privilege and meeting strict privacy laws.

In Canada, the U.S., UK, and beyond, anti-money laundering regulations are evolving quickly. Firms must now demonstrate that they not only follow procedures – but that their systems can withstand audits and adapt to new threats.

AML Obligations for Law Firms by Jurisdiction

Canada
Regulators: Law societies, FINTRAC
Requirements: Client Identification Procedures (CIP), ongoing monitoring, beneficial ownership checks, privacy compliance (PIPEDA)

United Kingdom
Regulator: SRA (Solicitors Regulation Authority)
Requirements: AML risk assessment, KYC for clients, source of funds/source of wealth checks, SARs, and recordkeeping under MLR 2017

United States
Regulators: ABA model rules, BOI reporting (Corporate Transparency Act)
Expectations: Evolving best practices for law firm AML controls, especially in real estate and corporate formation

Australia
Regulator: Legal Services Commissions, AUSTRAC guidance
Requirements: Identification and verification for clients in regulated transactions; alignment with AML/CTF Act for high-risk sectors

European Union
Regulators: National bar associations, 6AMLD
Requirements: Client due diligence, UBO transparency, suspicious transaction reporting, GDPR compliance

Common Challenges in Legal Compliance

1. Confidentiality vs. Transparency
Law firms must balance their duty to clients with the obligation to detect and report suspicious activity.

2. Manual and Fragmented Workflows
Paper forms, email, and disconnected tools result in audit gaps and inefficiencies.

3. Complex Entity Structures
Client organizations often involve trusts, layers of ownership, or offshore nominees.

4. Jurisdictional Conflicts
Global clients mean law firms must harmonize privacy, AML, and risk obligations across borders.
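The layered-ownership problem in challenge 3 above can be sketched as a simple traversal: multiply ownership fractions down each chain of entities until you reach natural persons, then flag anyone at or above a threshold (25% is a common, though not universal, UBO rule). The company names and ownership data below are invented for illustration:

```python
# Ownership edges: entity -> list of (owner, fraction), fractions in [0, 1].
OWNERSHIP = {
    "ClientCo": [("HoldCo", 0.60), ("Bob", 0.40)],
    "HoldCo": [("Alice", 0.50), ("OffshoreTrust", 0.50)],
    "OffshoreTrust": [("Carol", 1.00)],
}

def effective_owners(entity: str, stake: float = 1.0, acc=None) -> dict:
    """Multiply fractions down each chain to find each person's total stake."""
    acc = {} if acc is None else acc
    for owner, frac in OWNERSHIP.get(entity, []):
        if owner in OWNERSHIP:                 # another entity: recurse
            effective_owners(owner, stake * frac, acc)
        else:                                  # a natural person
            acc[owner] = acc.get(owner, 0.0) + stake * frac
    return acc

owners = effective_owners("ClientCo")
ubos = {p: s for p, s in owners.items() if s >= 0.25}
print(owners)  # Alice 0.3, Carol 0.3, Bob 0.4
print(ubos)    # all three meet the 25% threshold here
```

Real UBO mapping also has to handle circular holdings, nominee arrangements, and cross-jurisdiction registries, which is exactly why firms automate it rather than trace chains by hand.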

iComply: Legal-Grade KYC and AML for Modern Firms

iComply offers a configurable platform designed to help law firms automate AML compliance while preserving client confidentiality.

1. Secure Client Onboarding (KYC/KYB)
Edge-based identity and document verification
No raw PII leaves the client device unencrypted
Supports Canadian, U.S., UK, EU, and Australian standards

2. Beneficial Ownership Mapping
Automatically uncover UBOs across jurisdictions
Flag nominee structures and offshore shell patterns
Enable configurable thresholds for review and escalation

3. Risk-Based Screening and Case Management
Sanctions, PEP, and adverse media checks
Centralized dashboard for audits, escalations, and decision documentation
Secure retention policies to meet legal recordkeeping duties

4. Privacy and Privilege Safeguards
Local hosting or on-prem options for law firm control
Full audit logs without exposing client communications
Compliance with GDPR, PIPEDA, and solicitor-client privilege standards

Case Insight: Canadian Corporate Law Firm

A Toronto-based firm specializing in incorporations and M&A deals implemented iComply to digitize its CIP and UBO review processes. Results:

Reduced due diligence time by 70%
Flagged two nominee structures with high-risk SOEs in a single case
Expanded ability to engage directors, officers, and key stakeholders anywhere in the world

Final Word

Legal compliance is evolving fast. Law firms that modernize with purpose-built, privacy-first tools can stay ahead of audits, reduce admin burden, and build deeper client trust.

Schedule a walkthrough with iComply to see how we help law firms automate AML obligations – without sacrificing discretion or control.

Monday, 01. September 2025

Ontology

After the Banking Data Leak Scandal

Ontology’s DIDs as a Solution for Global Financial Security

The recent massive leaks of banking data have highlighted the vulnerability of centralized financial systems and the urgent need to rethink the security of personal information. In an increasingly digital world, where cyberattacks are commonplace, users’ trust in financial institutions has been shaken. In the face of this crisis of confidence, Decentralized Identities (DIDs), offered by platforms like Ontology, are emerging as a promising solution to strengthen global financial security. This article will explore how Ontology’s DIDs, by returning control of data to users, can transform the financial security landscape and prevent future scandals.

The Achilles Heel of Centralized Systems: Banking Data Leaks

Traditional banking systems rely on a centralized model where clients’ personal and financial information is stored in vast databases managed by institutions. Although these systems are protected by sophisticated security measures, they remain prime targets for cybercriminals. Each year, millions if not billions of customer records are compromised in data breaches, leading to significant financial losses, identity theft, and the erosion of public trust.

These incidents underline a fundamental weakness: the concentration of data creates a single point of failure. Once an attacker breaches an institution’s defenses, they potentially gain access to a goldmine of sensitive information. Moreover, the fact that data is managed by third parties means users have little to no control over how their information is stored, used, or shared. This lack of sovereignty over data is at the heart of today’s security issues.

ONT ID: Ontology’s Decentralized Identity Solution

Ontology offers a radically different approach to identity and data management through its decentralized identity framework, ONT ID. Based on W3C recommendations for Decentralized Identifiers (DID) and Verifiable Credentials (VC), ONT ID enables individuals and organizations to create and control their own digital identities. Unlike centralized systems, where data is held by third parties, ONT ID restores data sovereignty to the user.

With ONT ID, users can generate unique, self-sovereign identifiers that are not tied to any central entity. They can then collect verifiable credentials (e.g., diplomas, driver’s licenses, proof of residence) from trusted issuers and store them securely in their digital wallet. The crucial aspect is that the user decides when and with whom to share this information, and only the necessary data. For instance, to prove their age, a user would not need to reveal their exact date of birth, but only a verifiable proof that they are over 18. This approach minimizes the exposure of sensitive data and drastically reduces the attack surface for cybercriminals.
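The over-18 example can be sketched with a salted hash commitment: the issuer commits to a derived boolean claim rather than the birth date itself, and the holder later reveals only that claim. This is a deliberately simplified stand-in for the signed-credential and zero-knowledge schemes real DID/VC stacks use; the function names are illustrative:

```python
import hashlib
import json
import os

def commit(claim: dict, salt: bytes) -> str:
    """Hash a claim with a salt; the commitment alone reveals nothing."""
    body = json.dumps(claim, sort_keys=True).encode()
    return hashlib.sha256(salt + body).hexdigest()

# The issuer verifies the birth date privately, derives a boolean claim,
# and publishes only the commitment (in practice, signed into a credential).
salt = os.urandom(16)
claim = {"over_18": True}          # the date of birth never leaves the issuer
commitment = commit(claim, salt)

# Later, the holder discloses claim + salt to a verifier of their choosing;
# the verifier checks the pair against the issuer's commitment.
assert commit(claim, salt) == commitment
print("age check passed without revealing a birth date")
```

Production systems replace the bare hash with signature schemes that support selective disclosure (for example BBS+-style signatures), but the privacy principle is the same: share a proof about the data, not the data.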

How DIDs Improve Global Financial Security

The integration of DIDs, and specifically ONT ID, into the financial sector provides several key advantages for security:

Reduced risk of massive data leaks: By decentralizing the storage of identity data and allowing users to control their information, DIDs eliminate the single point of failure represented by centralized databases. Even if a system is compromised, attackers would only access pseudonymous identifiers rather than full personal data.

Stronger, passwordless authentication: DIDs enable more robust authentication methods than traditional passwords, which are frequent targets of hacking. DID-based authentication can leverage cryptographic keys, making phishing and identity theft attempts far more difficult.

Improved regulatory compliance with privacy protection: DIDs allow for more effective and privacy-preserving KYC/AML compliance. Financial institutions can verify necessary credentials without storing full copies of clients’ documents. Zero-Knowledge Proofs (ZKP), often associated with DIDs, enable proving possession of an attribute (e.g., being of legal age) without disclosing the attribute itself.

Fraud and identity theft prevention: By ensuring the authenticity of digital identities and making it harder to create fake ones, DIDs can significantly reduce financial fraud and identity theft. Each transaction or interaction can be tied to a verifiable identity, without revealing the real identity to all parties.

Enhanced auditability and traceability: DID-related transactions are recorded on a blockchain, providing an immutable and transparent audit trail. This facilitates the detection of suspicious activities and the tracing of funds in case of fraud, while protecting legitimate users’ privacy through pseudonymization.

Challenges and Future Outlook

Despite their revolutionary potential, widespread adoption of DIDs in the financial sector is not without challenges. Interoperability between different DID implementations, raising awareness and educating both users and institutions, and achieving global regulatory harmonization are all crucial steps. Governments and regulators will need to collaborate with decentralized technology developers to create frameworks that foster innovation while ensuring consumer protection and financial stability.

Ontology, with its commitment to W3C standards and its growing ecosystem, is well-positioned to play a leading role in this transition. By continuing to develop user-friendly tools and forging strategic partnerships, Ontology can help bridge the gap between blockchain technology and the needs of the traditional financial sector, paving the way for a future where banking data security is inherently tied to digital identity sovereignty.

Conclusion

Banking data leak scandals are a stark reminder of the fragility of centralized systems and the urgent need to adopt more resilient solutions. Decentralized Identities (DIDs), and especially Ontology’s ONT ID, offer a promising path to redefining financial security. By empowering individuals to regain control of their data, strengthening authentication, enabling privacy-preserving compliance, and reducing fraud, DIDs can radically transform how we interact with our finances.

The future of global financial security no longer lies in reinforcing centralized fortresses but in distributing the power and responsibility of digital identity to the users themselves. Ontology is at the forefront of this revolution, providing the necessary tools to build a safer, fairer, and more resilient financial ecosystem.

After the Banking Data Leak Scandal was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

SkyBridge Tokenises $300m Hedge Funds with Tokeny and Apex Group

August 2025

Last month, together with Apex Group, we introduced Apex Digital 3.0, the first truly global single-source infrastructure designed to handle the full lifecycle of tokenised products. That includes fund creation, issuance, administration, custody, connectivity to multiple distribution channels, as well as the broader DeFi ecosystem.

$300m of hedge funds go on-chain for always-on services

Less than a month after the launch, SkyBridge Capital, founded by Anthony Scaramucci, a bitcoin believer and former White House Communications Director, is moving $300m of its flagship hedge funds on-chain through Apex Digital 3.0.

Hedge funds are now open to investing in cryptocurrencies, assets designed to settle instantly and without friction. Yet investors in those funds face the opposite reality, because the funds themselves are distributed on traditional rails.

This causes slow subscriptions, redemptions, and transfers due to fragmented settlement, as a transfer often has to pass through multiple layers of middlemen. The result is high transaction costs and delays, which in turn limit liquidity.

That’s why SkyBridge is moving on-chain to eliminate fragmentation and deliver real-time services. By tokenising its hedge funds, subscriptions, redemptions, and transfers can run 24/7 with full transparency. The result is lower costs, faster operations, and an investor experience that finally matches the always-on expectations of today’s markets.

Tokenisation market challenges

For years, tokenisation struggled to take off. Most early projects weren’t true tokenisation but digitalisation experiments. The problem wasn’t legal, but operational.

The key actors, including transfer agents, custodians, and asset managers, simply weren’t ready. They could put the asset on-chain, but struggled to manage subscriptions, redemptions, and custody on-chain. As a result, many institutional projects ended up with assets merely represented on the blockchain, while the servicing processes remained off-chain.
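Permissioned-token standards such as ERC-3643 close this gap by gating every transfer with an identity check, so servicing rules live on-chain alongside the asset. The idea can be sketched in a few lines — an illustrative Python model with made-up class names, not the actual Solidity interface of the standard:

```python
class IdentityRegistry:
    """Stand-in for an on-chain registry of verified investors."""
    def __init__(self):
        self._verified = set()

    def add(self, investor: str) -> None:
        self._verified.add(investor)

    def is_verified(self, investor: str) -> bool:
        return investor in self._verified


class PermissionedFundToken:
    """Fund units whose subscriptions and transfers are gated by the registry."""
    def __init__(self, registry: IdentityRegistry):
        self.registry = registry
        self.balances = {}

    def subscribe(self, investor: str, units: int) -> None:
        if not self.registry.is_verified(investor):
            raise PermissionError(f"{investor} is not a verified investor")
        self.balances[investor] = self.balances.get(investor, 0) + units

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        # Both sides must pass the compliance check before settlement.
        if not (self.registry.is_verified(sender)
                and self.registry.is_verified(receiver)):
            raise PermissionError("both parties must be verified")
        if self.balances.get(sender, 0) < units:
            raise ValueError("insufficient balance")
        self.balances[sender] -= units
        self.balances[receiver] = self.balances.get(receiver, 0) + units
```

Because the eligibility rule travels with the token, subscriptions and redemptions can settle at any hour without an off-chain transfer agent re-checking each party.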

The market has been maturing: service providers have built the capabilities, custodians can hold tokenised assets, more people are equipped with and accepting of self-custody wallets, and regulators have set clearer frameworks. But integration remains critical, and without it tokenisation risks becoming another silo.

On-chain finance adoption accelerates for real

That’s why we built Apex Digital 3.0. For too long, firms were promised “end-to-end” tokenisation, only to discover critical gaps. No legal structuring, no compliance support, no custody of the underlying assets, and no real distribution. The result was complexity, with issuers forced to juggle multiple providers and still falling short of scale.

Apex Digital 3.0 changes that. It brings everything together: legal setup, compliance advisory, issuance, custody, servicing, and cross-platform distribution in one infrastructure. Clients who want a complete 0-to-1 solution rely on us without the headache of managing separate partners. And, for those who already have preferred tools, our open architecture makes integration seamless.

With 22 years of proven trust and $3.5 trillion of assets under administration, Apex Group is the trusted bridge to on-chain finance, giving institutions the confidence to make the move at scale.

SkyBridge’s $300m project is a live example, with more in the pipeline. This time, institutional adoption at scale is real.

Tokeny Spotlight

Press Release

SkyBridge Capital is tokenising $300m of hedge funds with Tokeny via Apex Digital 3.0.

Read More

SEC mentions ERC-3643

SEC Chairman Paul S. Atkins mentioned ERC-3643 in his speech launching Project Crypto.

Read More

Welcome to the Team

Meghavi Raval joins Tokeny. Learn why she is a great fit for the team.

Read More

Exclusive Interview

Our CCO and Global Head of Digital Assets at Apex Group, Daniel Coheur, talks about Apex Digital 3.0

Read More

Apex Digital 3.0 is Live

Tokenisation is full of promise. But in reality, it’s still hard to execute. That is what Apex Digital 3.0 solves.

Read More

DAW NY Panel

“In 10 years’ time there won’t be any fiat left” – Peter Hughes, Founder and CEO of Apex Group.

Read More

Tokeny Events

Spark 25 by Fireblocks
September 8th-10th, 2025 | 🇪🇸 Spain

Register Now

Apex Invest Global Event Series 2025
September 22nd-23rd, 2025 | 🇨🇭 Switzerland

Register Now

Sibos 2025
September 29th-October 2nd, 2025 | 🇩🇪 Germany

Register Now

Tokeny Team Building
September 17th-19th, 2025 | 🇪🇸 Spain

Learn More

KCMC 2025
September 29th-30th, 2025 | 🇰🇷 South Korea

Register Now

ERC3643 Association Recap

Stellar Development Foundation Joins ERC3643 Association

The Stellar Development Foundation (SDF), a non-profit organisation supporting the development and growth of the Stellar network, today announced it has joined the ERC3643 Association.

Learn more here

The U.S. White House has highlighted the growing impact of tokenisation in its newly released report.

On page 40, a market sizing chart for RWAs, provided by our member Plume, includes a small but meaningful footnote: the chart begins in September 2021, the month the Ethereum community recognised the ERC-3643 tokenisation protocol as an official standard for permissioned tokens.

Learn more here

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.


The post SkyBridge Tokenises $300m Hedge Funds with Tokeny and Apex Group appeared first on Tokeny.


uquodo

Deepfakes

The post Deepfakes appeared first on uqudo.


Saturday, 30. August 2025

Aergo

Cut the Noise, Find Conviction: Crypto’s Next Chapter with DeFAI and ArenAI


As markets oscillate between euphoria and despair, investors are left asking the timeless question: What do I own, and what do I trade? Yet in today’s world of TradingView charts, Twitter threads, Telegram calls, Medium deep-dives, and endless newsletters, the real challenge is not just deciding between BTC, ETH, or the next altcoin. It is cutting through the noise. Everyone has an opinion, and consuming them all takes enormous energy. Finding a strategy that actually fits your needs is harder than ever.

And even if you manage to find the right answer, that is only half the battle. Implementing it in your portfolio means identifying the best yield models, deciding between staking and re-staking, and continually rebalancing across different chains and platforms. For most, this becomes a full-time job.

That reality is about to change. DeFAI (Decentralized Finance + AI) is poised to unlock possibilities that many investors were previously unaware of. Intelligent systems can filter the noise, craft strategies tailored to your goals, and execute them automatically across chains. Instead of waking up at 3 AM to react to volatility, DeFAI agents will monitor, rebalance, and compound for you while you sleep.

ArenAI: The Investor’s Edge

For investors, ArenAI offers a straightforward way to access sophisticated strategies without requiring coding or tracking every market tick. You can browse models created by experts, select those that fit your goals, and let them work on your behalf in real time. Whether you want a conservative ETH staking strategy, an aggressive momentum trader, or a balanced multi-chain allocator, ArenAI lets you plug into ready-made intelligence that adapts as conditions change.

From Consumer to Creator

But ArenAI does not stop at consumption. If you have an investment thesis or a trading style that works, you can turn it into a model and offer it to others on a subscription basis. Your trading perspective becomes more than just a personal experiment. It becomes a product that others can use and pay for, creating an entirely new revenue stream. Instead of being lost in a sea of opinions, you can build, implement, and monetize your own edge.

Ready to cut through the noise and find your edge? Start the journey at hpp.io

Cut the Noise, Find Conviction: Crypto’s Next Chapter with DeFAI and ArenAI was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 12. May 2025

Radiant Logic

Modernizing Healthcare IAM: From Legacy Pain Points to Unified Identity

Explore how modernizing healthcare IAM with RadiantOne transforms legacy pain points into unified identity solutions, enabling faster provisioning, improved security, and seamless access for caregivers across all systems.

The post Modernizing Healthcare IAM: From Legacy Pain Points to Unified Identity appeared first on Radiant Logic.

SC Media - Identity and Access

Immediate remediation of severe Passwordstate flaw recommended

Organizations using Passwordstate have been urged by its developer, Click Studios, to promptly implement the latest version of the enterprise-grade password manager to address a high-severity authentication bypass vulnerability, according to BleepingComputer.



Malevolent extensions threaten passkeys, study shows

Enterprise software-as-a-service, banking, and e-commerce apps could be compromised through malicious browser extensions exploiting a critical vulnerability concerning passkeys' dependence on browser integrity, reports SiliconANGLE.



Fake ID market VerifTools disrupted by joint US, Dutch operation

VerifTools, an international market for counterfeit identity documents, had its operations disrupted by the FBI and the Politie, the Netherlands' national police, in a law enforcement operation that resulted in the sequestration of nearly two dozen physical and virtual servers, as well as its domains, reports BleepingComputer.



Recognito Vision

How Facial Recognition Attendance System Is Changing Attendance Management


Tracking attendance has evolved significantly from the old methods of paper registers and punch cards. Today, organizations are increasingly adopting facial recognition attendance systems to streamline tracking, improve security, and save time. These systems combine advanced face detection attendance system technology with AI to ensure accurate and hassle-free employee management.

With workplaces getting more tech-driven, integrating an AI attendance system not only improves efficiency but also reduces human errors associated with traditional attendance methods. This technology ensures that attendance is accurate, instantaneous, and tamper-proof, offering a significant upgrade over legacy systems.

 

What is a Facial Recognition System for Attendance

A facial recognition system for attendance uses biometric technology to identify individuals based on their unique facial features. Unlike cards or fingerprint scanners, face recognition offers touch-free verification, making it quicker and more sanitary.

These systems capture the user’s face using cameras and match it against a stored database. The system analyzes unique facial features, such as eye spacing, nose structure, and jaw contours, to confirm a person’s identity. Modern AI-powered systems even adapt to changes in lighting, angle, and facial accessories like glasses or masks.
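Conceptually, that matching step reduces each face to a numeric feature vector (an "embedding") and compares vectors with a similarity metric. A simplified sketch of the comparison (the vectors and threshold are illustrative; real systems derive embeddings from trained neural networks):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.8):
    """Return the enrolled identity whose template best matches the probe
    embedding, or None if no similarity clears the threshold."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The threshold is the tuning knob behind the "false positives and negatives" trade-off mentioned above: raising it rejects more impostors but also more genuine users.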

 

Key Features of a Facial Recognition System for Attendance

Contactless Verification: Reduces physical touchpoints, improving hygiene.

High Accuracy: Advanced algorithms minimize false positives and negatives.

Real-Time Tracking: Attendance logs update instantly.

Integration with Payroll: Automatically syncs attendance data with payroll systems.

Multi-Device Support: Works on cameras, smartphones, and tablets.

A study by NIST FRVT highlights the high accuracy rates of modern facial recognition algorithms, proving their reliability in real-world applications. For detailed technical insights, you can check their Face Recognition Technology Evaluation.

 

Benefits of Using a Face Detection Attendance System

Switching to a face detection attendance system offers several benefits for organizations, large or small:

Time Efficiency: Employees no longer wait in lines to clock in. Attendance is recorded within seconds.

Reduced Buddy Punching: Eliminates the risk of proxy attendance since the system identifies each individual uniquely.

Enhanced Security: Only authorized personnel can gain access to the premises.

Data Analytics: Offers detailed insights into attendance trends, overtime, and staff punctuality.

Cost Savings: Cuts down administrative work and prevents errors in manual attendance management.

 

AI-Based Face Recognition Attendance System in Modern Workplaces

The rise of AI has transformed traditional face recognition systems into AI-based face recognition attendance systems. These solutions not only identify faces but also analyze patterns, detect anomalies, and prevent fraud.

For instance, AI algorithms can detect if someone is trying to spoof the system using a photo or video. This adds an extra layer of security that older systems lacked. Additionally, AI models continuously learn from new data, improving their accuracy over time.
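One intuition behind such spoof detection: a printed photo barely changes from one video frame to the next, while a live face does. A deliberately naive sketch of that idea, using toy grayscale frames as nested lists (production liveness checks are far more sophisticated):

```python
def motion_score(frame_a, frame_b):
    """Mean absolute pixel difference between two same-sized grayscale frames."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(frame_a, frame_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

def looks_live(frames, min_motion=1.0):
    """Flag a capture as live if any pair of consecutive frames shows
    enough motion; a static photo held to the camera yields near-identical
    frames and fails the check."""
    scores = [motion_score(a, b) for a, b in zip(frames, frames[1:])]
    return max(scores, default=0.0) >= min_motion
```

Real anti-spoofing models combine many such cues (texture, depth, blink detection) rather than raw frame differences, but the principle of testing for signals a replayed image cannot produce is the same.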

Companies implementing AI attendance systems have reported up to a 30% reduction in payroll errors and significant improvements in attendance management efficiency.

 

Face Recognition Time Attendance vs Traditional Methods

| Feature | Traditional Methods | Face Recognition Time Attendance |
| --- | --- | --- |
| Verification Speed | Slow (manual punch cards) | Instant (seconds per employee) |
| Contactless | No | Yes |
| Accuracy | Prone to errors | High accuracy with AI algorithms |
| Security | Vulnerable to proxy | Secure and tamper-proof |
| Integration | Manual processing | Automated with payroll and HR tools |

Switching to a face recognition time attendance system modernizes workplace management while improving employee experience and operational efficiency.

 

Implementing a Facial Recognition Attendance System

Setting up a facial recognition attendance system needs thoughtful preparation:

Assess Needs: Identify the number of employees, office layout, and data management requirements.

Choose the Right Software: Select software that is AI-driven, scalable, and compatible with your current HR systems.

Hardware Setup: High-quality cameras and controlled lighting improve recognition accuracy.

Training & Onboarding: Educate staff about system usage and privacy measures.

Regular Audits: Monitor accuracy and performance regularly to ensure reliability.

Organizations that implement these steps experience smoother adoption and minimal disruptions.
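The workflow those steps produce can be sketched as a minimal check-in log that deduplicates repeated scans and exports payroll-ready records (the class and method names here are illustrative, not any particular product's API):

```python
from datetime import datetime, timezone

class AttendanceLog:
    """Records one check-in per employee per day, ready for payroll export."""

    def __init__(self):
        self.records = []  # list of (employee_id, ISO-8601 timestamp)

    def check_in(self, employee_id, when=None):
        """Record a check-in; return False for a duplicate same-day scan."""
        when = when or datetime.now(timezone.utc)
        day = when.date().isoformat()
        # Ignore repeated scans on the same day (e.g. someone walking past
        # the camera twice) so payroll sees exactly one entry per day.
        if any(e == employee_id and t.startswith(day) for e, t in self.records):
            return False
        self.records.append((employee_id, when.isoformat()))
        return True

    def export_for_payroll(self):
        """Rows suitable for handing off to a payroll or HR system."""
        return [{"employee": e, "checked_in_at": t} for e, t in self.records]
```

In a deployment, `check_in` would be called with the identity returned by the face-matching step, which is where the recognition accuracy discussed above directly determines payroll accuracy.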

 

Use of Facial Recognition Attendance System in Different Industries

Facial recognition attendance systems are versatile and can benefit a wide range of industries, from education to healthcare. This is how various industries are putting this technology to use:

 

Education: Colleges and Schools

Schools and colleges implement facial recognition systems for attendance to automate class attendance, reduce buddy punching, and maintain secure campuses.

Real-time tracking helps identify latecomers and monitor classroom occupancy efficiently.

Corporate Offices

Large companies use AI attendance systems to streamline employee check-ins and integrate with payroll.

Face recognition time attendance ensures only authorized personnel access sensitive areas.

Small Businesses

Small businesses benefit from reduced administrative overhead with face detection attendance systems.

These systems require minimal hardware, making them cost-effective while still accurate.

Hospitals

Medical facilities adopt AI-based face recognition attendance systems to track doctors, nurses, and staff shifts accurately.

Contactless verification also reduces infection risks in sensitive environments.

Banks and Financial Institutions

Banks use facial recognition attendance systems to secure entry points and monitor staff presence efficiently.

Integrating attendance data with HR systems ensures compliance and improves operational reporting.

Across all these industries, facial recognition attendance systems provide a reliable, secure, and efficient method for managing workforce attendance while reducing errors and administrative work.

Case Study: Efficiency Gains from AI Attendance Systems

A mid-sized tech company in the US replaced their fingerprint-based attendance system with an AI attendance system. Within three months:

Attendance errors dropped by 85%.

Payroll processing time reduced by 50%.

Employee satisfaction improved due to reduced waiting times and contactless check-ins.

This shows how integrating a facial recognition system for attendance directly impacts operational efficiency.

 

Common Challenges and Solutions

Even with advanced technology, some challenges may arise:

Lighting Variations: Use cameras with wide dynamic range or adjust indoor lighting.

Mask or Accessories: AI algorithms trained with partial face data can still recognize employees.

Data Privacy: Store data securely, follow local regulations, and inform employees about usage.

By addressing these challenges proactively, organizations can fully leverage the benefits of a modern AI-based face recognition attendance system.

 

Conclusion

A facial recognition attendance system is more than a convenience; it’s a strategic investment that improves accuracy, efficiency, and security in workforce management. From face detection attendance systems to face recognition time attendance, integrating AI transforms how organizations track attendance.

For organizations looking to adopt cutting-edge solutions, learning from NIST FRVT evaluations can guide technology selection. Recognito offers solutions that combine advanced AI with user-friendly implementation, making it easier for businesses to adopt modern facial recognition attendance systems without hassle.

Explore more at Recognito GitHub for tools and resources related to AI attendance management.

 

Frequently Asked Questions

 

1. How accurate are facial recognition attendance systems?

Modern AI-based systems can achieve over 99% accuracy. Accuracy may vary slightly depending on lighting, camera quality, and employee positioning.

2. Can facial recognition systems work with masks or glasses?

Yes. Advanced algorithms recognize partial faces, so verification works even when employees wear masks, glasses, or hats.

3. Are facial recognition attendance systems safe and secure?

Yes. Data is encrypted, access is restricted to authorized personnel, and AI prevents spoofing or fraud attempts.

4. How do facial recognition systems compare to traditional methods?

They are faster, contactless, more accurate, and prevent buddy punching. Integration with payroll and HR systems also saves time and reduces errors.

5. Which industries benefit most from facial recognition attendance systems?

Schools, colleges, corporate offices, banks, hospitals, and small businesses use these systems to improve attendance tracking, security, and operational efficiency.


Thales Group

The new playbook for effective multilayered air defence: adaptation, not escalation


By Ivor, Thales in the UK

The arms race was forged in an era of binary threats, when overmatch was measured in mass and tonnage. Today’s battlespace is asymmetric, accelerated and unbounded. As faster, more fragmented, more unpredictable threats take to the skies, the idea of outpacing them simply with larger numbers of heavier munitions grows increasingly obsolete.

The concept of an arms race assumes a finish line, but in an environment of evolving, multi-vector threats, that assumption no longer holds.

Relevance, not dominance, is the new measure of air superiority

On today’s frontlines, a tactical advantage can expire in a matter of weeks. In Ukraine, where countermeasure cycles move fast and systems evolve in near real time, a capability that takes too long in transit may already be outdated on arrival. What matters isn’t the volume of the capability, but how fast it arrives, how easily it adapts, and how well it integrates.  

This is a fast-moving operational reality with far-reaching implications. Modern threats – from hypersonic glide vehicles and autonomous drone swarms, to smaller, faster loitering munitions – have shifted the ground under traditional air defence, prompting nations to invest in multilayered systems that promise comprehensive protection – a dome guarding troops from above as well as in front. In the UK’s case, and as set out in its 2025 Strategic Defence Review, £1bn is being earmarked for Integrated Air and Missile Defence (IAMD).

Whether this investment turns promise into real deterrence depends on how ready, relevant and integrated those capabilities truly are. This often means adapting what we have rather than racing to replace existing capability with the shiniest kit, which could be outmanoeuvred and outgunned before it leaves the production line.

Technical integration as a strategic weapon

By adapting, I mean layering resilience into existing systems and platforms by ensuring they can communicate, adapt and respond as one – helping operators do the same.

While I don’t want to repeat myself, I do want to emphasise a point made in my previous article: just as no single service, government or nation can fend off the array of threats they face alone, no single product, capability or solution can hope to arm these entities with everything they need. Myriad threats require integrated, multilayered solutions which work seamlessly not only within but between Front Line Commands, across allies, domains, borders and time zones.

A NATO ally that goes beyond co-operation, to be truly integrated – capable, for instance, of firing another’s missiles – is one that can respond faster, adapt on the fly, and turn interoperability into a real-time strategic advantage. In this way, agile, agnostic technical integration fosters the kind of Integrated Force outlined in the SDR’s vision for 2035 – one that “deters, fights, and wins through constant innovation at wartime pace.”

Better connected and better protected, allied militaries can gain a competitive edge that’s out of reach of autocratic adversaries characterised by top-down, centralised control. They can exploit the imagination and experience of decision makers at every level by giving them the agility, ability and authority they need to move fast and strike first, augmented by integrated capabilities that can flex to meet any mission and be updated as the threat evolves. 

Integration in action: the ACE advantage

In partnership with L3Harris Technologies, Thales is developing an integrated short-range air defence (SHORAD) Command and Control (C2) capability.

The new capability integrates L3Harris’ Target Orientated Tracking System (TOTS) into Thales’ Agile C4I @ Edge (ACE) system to enhance C2 capabilities. This collects, fuses and correlates data from sensors and effectors across the battlespace, providing a common operating picture and accelerating decision making.
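Correlating sensor reports into tracks can be illustrated in its simplest form: a new report is assigned to the nearest existing track within a distance gate, otherwise it opens a new track. This is a toy nearest-neighbour sketch of the general technique, not the actual ACE/TOTS logic:

```python
import math

def correlate(tracks, report, gate=5.0):
    """Assign an (x, y) report to the nearest existing track within `gate`
    units, updating that track's position; otherwise open a new track.
    Returns the track id the report was fused into."""
    best_id, best_dist = None, gate
    for track_id, (x, y) in tracks.items():
        d = math.hypot(report[0] - x, report[1] - y)
        if d <= best_dist:
            best_id, best_dist = track_id, d
    if best_id is None:
        best_id = max(tracks, default=0) + 1  # no track in gate: start one
    tracks[best_id] = report
    return best_id
```

Operational systems replace the fixed gate with statistical gating over each track's predicted position and uncertainty, but the core loop of gate, associate, update is what turns raw sensor reports into the common operating picture described above.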

Cultural adaptation as an enduring, evolving edge

Such capabilities are only as valuable as their perceived utility. Without a clear idea of their role within the wider kill chain, decision makers may struggle to deploy them effectively. Without the requisite training and support, end users likely won’t advocate for their adoption, instead leaving them to gather dust on the shelf in favour of what they’re comfortable and familiar using.

It’s why talk of effective technical integration must begin with a foreword that addresses the necessary cultural transformation: the mindsets, habits, incentives and partnerships that matter as much as the technology.

To start with, suppliers and customers must work hand in glove to foster a deeper understanding – on the part of industry – of what’s needed to drive change in the right direction, at the relevant pace. The palpable sense of trust and shared intent at joint hubs like Thales’ facility at Thorney Island is testament to what’s possible when one closes the distance between industry and defence. Here, our engineers, including some ex-service personnel, work side by side with MoD teams: training new air defenders, refining systems and gathering feedback from live experiences to ensure every solution is grounded in operational realities.

As service wrappers go, it extends from system design and kit delivery all the way through to mission support. More broadly, it’s the mindset and trust that makes a difference: availability, willingness, reliability and responsiveness on the part of industry fosters competence at the front line, continuity across operations, and a culture with backbone: confident, composed, and agile enough to evolve with the threat.

The arms race that never ends

New and emerging threats are relentlessly lapping procurement cycles. Traditional air defences are struggling to keep up with a landscape where threats multiply and adapt faster than they can be contained – where eliminating one threat leads to two more appearing in its place. Amidst it all, the imperative for the UK’s Armed Forces to stay ready, responsive and relevant is both an unignorable challenge and an unambiguous aim.

It is not, thankfully, an unattainable one. What’s needed is a mindset shift from periodic reinvention to continuous evolution: spiral development over wholesale replacement, cultural adaptability over rigid process and integration over isolation. The UK must not just be ready to fight tonight, but also able to adapt tomorrow with the systems and skills we already possess.

29 Aug 2025 | United Kingdom

Herond Browser

Top 5 Doodle Football Games You Can Play Online Now

If you're looking for the best places to play, you've come to the right place. We've compiled a list of the top doodle football games you can start playing right now, all from the convenience of your browser. The post Top 5 Doodle Football Games You Can Play Online Now appeared first on Herond Blog.

Playing doodle football is a fantastic way to pass the time with a simple and fun browser game. With its charming style and straightforward gameplay, it’s no wonder that countless versions have popped up online since the original became a viral sensation. If you’re looking for the best places to play, you’ve come to the right place. We’ve compiled a list of the top doodle football games you can start playing right now, all from the convenience of your browser.

What Are Doodle Football Games?

Doodle football games are a popular genre of browser-based soccer games, often recognized for their charming, cartoon-style graphics and simple mechanics that are easy for anyone to pick up and play. Inspired by the viral success of Google’s own interactive doodles, their main appeal lies in their incredible accessibility. Because they require no downloads or installations, these games are perfect for casual gamers looking for a quick, fun session with simple controls on any device.

Top 5 Doodle Football Games to Play Now

Google Doodle Soccer 2012 (sites.google.com)

The Google Doodle Soccer 2012 is a classic example, originally released for the UEFA Euro 2012 tournament. This simple yet addictive game, often found on sites.google.com and other archives, challenges players to control a team of stick-figure characters. The controls are incredibly straightforward: players use simple mouse clicks to move their team and score goals, making it a perfect, nostalgic browser game for a quick session.

Doodle Soccer (doodlecricket.github.io)

Found at doodlecricket.github.io, this version of doodle football offers a unique twist on the classic game: you play as a goalkeeper. With charming hand-drawn graphics, the game’s objective is to block incoming shots from the AI using only your arrow keys. This simple but engaging challenge makes for a fun, quick-play experience that focuses on reflexes and defense rather than offense, perfect for a short break.

Doodle Football (arcadespot.com)

The appeal of Doodle Football comes from its mix of strategy and creativity. Instead of traditional football gameplay, you use your imagination to outsmart tricky level designs and keep the ball rolling toward victory. Whether you’re a casual gamer or puzzle fan, Doodle Football offers a relaxing yet challenging experience. Its unique drawing-based gameplay makes it stand out among online arcade and puzzle games.

Soccer Random (soccerrandom.io)

The game Soccer Random at soccerrandom.io fits right into the doodle football genre with its fun, pixel-art style and simple controls. Designed as a two-player game, it challenges you and a friend to a chaotic physics-based match. The goal is straightforward: be the first to score five goals to win the game, but with its unpredictable movements and simple one-button controls, every match is a hilarious and action-packed surprise.

Football Legends (googledoodlegames.net)

Football Legends, often found on sites like googledoodlegames.net, is a prime example of a game that fits the doodle football genre, even with its more advanced features. This popular multiplayer game challenges you to compete globally using realistic physics and simple controls. Players can customize their teams with unique characters before facing off against opponents from around the world, making it a competitive and highly engaging online experience.

How to Play Doodle Football Games Safely

Avoid Unverified Websites

The first rule of thumb is to avoid unverified gaming sites. Many unofficial versions of doodle football are hosted on websites that harbor malware and intrusive pop-up ads. Sticking to well-known or trusted platforms is the easiest way to keep your personal information and device secure.

Use a Secure Browser

To truly protect yourself, use a browser with built-in security features. Herond Browser is designed to keep you safe with its powerful ad-blocker and anti-phishing features. This is especially important on sites like doodlecricket.github.io or arcadespot.com where you can play doodle football without worrying about security risks.

Trust the Community

If you’re looking for new or unblocked versions of the game, check community reviews on platforms like X. Players often share links to safe and trusted versions of doodle football and quickly flag any malicious content or scams. It’s a great way to find the game you want while minimizing risk.

Tips and Tricks for Mastering Doodle Football Games

Master Your Timing

Success in doodle football hinges on perfect timing. In games like the classic Google Doodle Soccer 2012, learning when to take a precise shot or make a well-timed save is crucial. Instead of just pressing a button, focus on the moment the ball is in the perfect position to ensure your moves are accurate.

Use Simple Controls Effectively

Don’t let simple controls fool you; they can be used to outsmart any opponent. In doodle football, mastering the basics, like using your arrow keys and the spacebar effectively, can give you a huge advantage. Practice quick, strategic movements to fake out opponents, whether you’re facing a challenging AI or a friend.

Plan Your Path in Puzzle Games

If you’re playing a puzzle-based version of doodle football, your strategy should shift from reflexes to planning. These games often require you to draw a path or use the environment to guide the ball to the goal. Take a moment to analyze the obstacles and plan a precise route to ensure a successful shot.

Conclusion

So there you have it – a guide to some of the best doodle football games available right now. Each game offers its own unique twist on the simple, fun genre, whether you prefer classic arcade action, physics-based chaos, or multiplayer competition. No matter your style, these browser-based gems are the perfect way to enjoy a quick, fun game without any downloads. All you have to do is pick your favorite, and start playing!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Top 5 Doodle Football Games You Can Play Online Now appeared first on Herond Blog.



Thales Group

New Sovereignty Challenges in Europe and Worldwide: Resilient Space Solutions for a Changing World

29 August 2025

In a world defined by shifting geopolitics and the resurgence of intense conflicts in Europe and across the world, nations are facing unprecedented sovereignty challenges. The urgency of secure, sovereign capabilities is driving new requirements for defence communications, Earth observation, and satellite-based navigation. Today, Thales Alenia Space provides the advanced capabilities needed for this new era of space-enabled defense and autonomy.

The Sovereignty Imperative: Why Secure Space Infrastructure Matters

Space is the ultimate high ground in military operations. Forces across all domains (Air, Navy and Army) face increasing challenges when operating with very limited or no space-based infrastructure. Such infrastructure provides critical and robust Position, Navigation and Timing (PNT), highly secure and reliable communications, plus Intelligence, Surveillance, and Reconnaissance (ISR).

Recent geopolitical developments have revealed a profound shift: countries are no longer content to rely solely on existing alliances for their critical defense and space needs. Instead, even where allied assets can be complemented by less robust, lower-performance alternatives, they are seeking national control—true sovereignty—over their data, communications and surveillance capacities. This trend is not limited to Europe; across Africa, Oceania, Asia, the Middle East and South America, governments are increasingly turning to European space manufacturers to build independence and resilience in their space activities.

Thales Alenia Space provides trustable, true end-to-end cyber security for all data, communications and surveillance capacities.

Space to Observe, Secure and Defend

At the forefront of this global transformation, Thales Alenia Space partners with countries to design military and dual-use communications systems that support troops in the field and protect critical national interests. Our advanced optical and radar payloads deliver very-high-resolution imaging for Earth observation—empowering governments with rich, near-real-time data for surveillance, detection and identification of critical areas of interest. Coupled with innovative constellation solutions and multifunction ground segments, we enable both persistent vigilance and operational flexibility.
And our technology goes beyond Observation. The Sicral 3 system, for example, represents the next generation of secure communications, supporting strategic and tactical operations at home and abroad. Built for cybersecurity, these systems put data control back into national hands—no compromises.

High Revisit Frequency: Enhancing Information Intelligence, Security, and Resilience

In today’s fast-paced and unpredictable threat landscape, alongside very high performance systems, the ability to rapidly access accurate, actionable intelligence for information dominance is pivotal for national security and resilience. Thales Alenia Space’s satellite constellations, known as ALL-IN-ONE, deliver frequent, near-real-time Earth observation. This system combines high revisit rates and precise control, enabling 24-hour, all-weather capability, and includes a dedicated ground segment to ensure powerful system reactivity. By integrating our sophisticated Intelligence, Surveillance, and Reconnaissance (ISR) capabilities, leveraging both optical and radar technologies, we provide secure, multi-layered data streams—enabling decision-makers to act decisively, safeguard critical assets, and build greater national resilience in the face of ever-evolving security challenges. The multi-mission ground segment will leverage artificial intelligence to optimize constellation operations, facilitate intelligent tasking, and enable efficient data handling. It will also be integrated with digital service platforms to provide data analytics and value-added services for multiple users.

Connectivity through Multi-Orbit Innovation

As the appetite for connectivity grows, operators are embracing multi-orbit models, harnessing the unique benefits of GEO, MEO and LEO satellites. Thales Alenia Space leads this revolution, with three major operational constellations already in service and industry firsts such as digital payload processing for terabit-class communications. Our satellites feature powerful fifth- and sixth-generation digital processors, ready to meet evolving operational needs and the ever-increasing demand for resilience, bandwidth and flexibility.

Building a Resilient Space Future

Space is already congested and contested, and the threat from both adversaries and space debris is rapidly increasing. Sovereign space assets must be sustainable and enduring, and it is essential that we ensure the long-term orbital safety of those assets.

Our pioneering work in on-orbit servicing vehicles stands testament to this principle. These highly versatile robotic spacecraft will soon conduct repairs, refueling, inspection, and active debris removal—directly in orbit—extending satellite lifespans and mitigating space debris to safeguard the space environment for future generations.

In this dynamic landscape, sovereignty is a necessity. Whether it is secure communications, advanced observation, information intelligence, resilient connectivity, or responsible stewardship of space, Thales Alenia Space stands as a trusted partner for our customers across Europe and around the world. Together, we are advancing intelligence, strengthening security, and building resilience — shaping a more secure, autonomous, and sustainable future, both on Earth and beyond.

Visit Thales Alenia Space at DSEI 2025

Visit us at DSEI 2025 (Thales stand S8-110) to discover more about how Thales Alenia Space’s secure connectivity solutions, high-precision observation capabilities, and advanced, flexible satellite platforms enable our customers to safeguard critical assets and operate with confidence.


Herond Browser

The Ultimate Guide to Doodle Soccer: Tips, Tricks & How to Play

Get ready to dive into the fun, fast-paced world of Doodle Soccer! This ultimate 2025 guide covers everything you need to know. The post The Ultimate Guide to Doodle Soccer: Tips, Tricks & How to Play appeared first on Herond Blog.

Get ready to dive into the fun, fast-paced world of Doodle Soccer! This ultimate 2025 guide covers everything you need to know about this browser-based, soccer-inspired game. From simple controls to winning strategies, we’ll share tips, tricks, and how to play like a pro.

What Is Doodle Soccer?

Doodle soccer is a free, browser-based game known for its charming, cartoonish style and simple controls, making it easy for anyone to pick up and play. While the most famous version is from the Google Doodle archives, its appeal lies in its incredible accessibility. Because it requires no downloads or installations, you can enjoy a quick, fun session of doodle soccer instantly on any device, whether you’re on a computer, tablet, or smartphone.

How to Play Doodle Soccer

Step 1: Access the game via a browser

The first step to playing doodle soccer is to find the game. Because it’s a browser-based game, you can simply access it on platforms like the original Google Doodle archives or various other gaming sites. Simply open your browser, search for the game, and you’re ready to play instantly.

Step 2: Learn controls

Next, take a moment to understand the controls. Doodle soccer is known for its simplicity, so the controls are often intuitive. Most versions use the arrow keys for movement and the spacebar for kicking the ball. A few quick practice rounds will help you get the hang of it in no time.

Step 3: Understand objectives

The objective of doodle soccer is straightforward: score goals and beat your opponent. Whether you’re playing against the computer or a friend, the goal is always the same. Focus on controlling the ball and timing your kicks to send it past the goalie and into the net.

Using Herond Browser for smooth, secure gameplay

For the best experience, consider playing with a browser built for speed and security. A browser like Herond ensures you get smooth, lag-free gameplay and protects you from pop-up ads and malicious scripts that often appear on gaming sites. Enjoy a more secure and uninterrupted game of doodle soccer.

Top Tips and Tricks for Winning

Master timing for accurate shots and defensive moves

Success in doodle soccer hinges on perfect timing. Whether you’re taking a shot on goal or blocking an incoming ball, practice your timing to ensure your moves are accurate. Learning when to strike the ball with the right force is key to scoring and defending effectively.

Practice quick reflexes to counter AI or multiplayer opponents

To counter fast-moving opponents, you need quick reflexes. The more you play doodle soccer, the better you’ll become at anticipating your opponent’s moves. Spend time practicing quick directional changes and rapid-fire kicks to give yourself an edge in both single-player and multiplayer matches.

Use power-ups strategically to gain an edge.

Many versions of doodle soccer include special power-ups that can change the game. Don’t just use them as soon as you get them. Instead, save your power-ups for critical moments, such as when you need to make a game-winning shot or a crucial defensive block. Using them at the right time can be the difference between winning and losing.

Play on a fast, secure browser like Herond for lag-free performance

Your browser choice can significantly impact your performance. To ensure a lag-free experience, it’s crucial to play doodle soccer on a fast and secure browser. Using a tool like Herond can provide the performance you need, while also protecting you from pop-up ads and malicious software that often disrupt gameplay.

Safe and Secure Gameplay in 2025

Play Smart, Stay Safe

Ready to jump into the fun world of doodle soccer? While it’s a blast to play, it’s crucial to be smart about where you find the game. Many unverified gaming sites are full of annoying pop-up ads and even malicious software that can harm your device. Sticking to trusted platforms is the easiest way to keep your gameplay fun and secure.

A Better Way to Browse

To truly protect yourself, use a browser built with security in mind. Herond Browser is equipped with Herond Shield, a powerful tool that automatically blocks intrusive ads and warns you about dangerous websites. This means you can play doodle soccer without worrying about interruptions or security risks, giving you a smooth and safe gaming experience.

Trust the Community

When searching for new or unblocked versions of the game, get your links from the community. Platforms like Reddit and X often have dedicated forums where players share links to safe and verified versions of doodle soccer. Relying on these communities is a great way to find the game you want while avoiding potential online threats.

Common Mistakes to Avoid

Avoid Over-Controlling

When playing doodle soccer, less is often more. Overusing your controls by pressing keys too quickly can lead to missed shots, fouls, or poor defensive positioning. Instead, focus on precise movements and a simple, well-timed kick. Patience and accuracy are the keys to outsmarting your opponent.

Don’t Play on Unreliable Sites

Nothing ruins a game of doodle soccer faster than a laggy connection or an unstable website. Playing on unsecure sites can not only disrupt your gameplay with constant ads and pop-ups but also put your device at risk of malware. Always choose a reliable platform to ensure a smooth and uninterrupted experience.

Use Herond’s Advanced Security

To keep your gameplay safe, consider using a browser with advanced security features. Herond’s Advanced Security Alert System (ASAS) is specifically designed to detect and warn you about risky websites before you even open them. This powerful tool helps you avoid potential malware and phishing scams, so you can focus on mastering your game of doodle soccer with peace of mind.

Conclusion

Now that you have all the tips, tricks, and strategies you need, you’re ready to become a doodle soccer champion. Remember to focus on your timing, stay calm under pressure, and choose a secure browser to guarantee a smooth, lag-free experience. Whether you’re playing for a few minutes or challenging a friend, these simple secrets will help you dominate the field. It’s time to put your skills to the test and enjoy the game!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post The Ultimate Guide to Doodle Soccer: Tips, Tricks & How to Play appeared first on Herond Blog.



iComply Investor Services Inc.

AML for Insurers: Global Regulatory Pressures and Smart Automation Solutions

Insurers face growing global AML scrutiny. This guide shows how to simplify compliance, monitor brokers, and meet multi-jurisdictional requirements using iComply.

Insurance firms face increasing AML scrutiny across jurisdictions—from onboarding to broker due diligence. This article explores key KYB, KYC, and AML obligations in Australia, Canada, the U.S., UK, and Singapore—and how iComply simplifies compliance workflows with edge-secure automation.

Insurers are no longer flying under the AML radar. Regulatory bodies from AUSTRAC to the FCA are sharpening expectations for identity verification, beneficial ownership checks, transaction monitoring, and third-party oversight—particularly for insurers operating across regions or managing delegated broker networks.

In this increasingly complex environment, manual compliance approaches can’t scale. The solution? Intelligent, flexible, and automated AML tools tailored to insurance workflows.

Global AML Standards for Insurers

Australia
Regulator: AUSTRAC
Requirements: AML/CTF program, CDD/EDD on policyholders and beneficiaries, broker monitoring, and suspicious matter reporting

Canada
Regulator: FINTRAC + OSFI
Requirements: Identification of policyholders, UBO checks for corporate accounts, source of funds verification, and transaction monitoring

United States
Regulators: State DOIs, FinCEN, NAIC guidance
Requirements: Customer identification programs (CIP), sanctions/PEP screening, and STRs for high-value or suspicious policies

United Kingdom
Regulator: FCA
Requirements: CDD for life insurance clients, ongoing monitoring of brokers, sanctions screening, and AML risk assessments under MLR 2017

Singapore
Regulator: MAS
Requirements: AML/CFT policyholder and intermediary due diligence, transaction reviews, and suspicious transaction reporting (STR)

Unique Insurance-Specific Risks

1. Broker and MGA Delegation
Insurers rely on brokers and MGAs to onboard and service clients—creating compliance gaps without centralized oversight.

2. Long-Term Policies and Beneficiaries
Life insurance, annuities, and trusts require deeper due diligence due to multiple parties and beneficiary changes over time.

3. Geographic Expansion
Insurers expanding across jurisdictions must manage overlapping and conflicting compliance frameworks.

4. High-Value Transactions
Single-premium life insurance or corporate policies may attract financial crime risk, especially when funded through offshore accounts or third parties.

How iComply Helps Insurance Firms Stay Ahead

iComply provides modular tools designed for real-world insurance compliance—covering policyholder, broker, and partner workflows with full auditability.

1. KYC + KYB for Policyholders and Brokers

Onboard individuals and legal entities via branded portals
Edge-based identity checks support secure document and biometric verification
Automate UBO discovery and documentation

2. AML Monitoring + Screening

Screen policyholders, brokers, and payees against sanctions, PEP, and adverse media
Monitor payments and claim patterns using configurable risk models
Trigger alerts based on policy type, geography, or source of funds

3. Broker Oversight Tools

Centralized broker verification and periodic review cycles
Assign compliance ownership and flag issues within shared dashboards

4. Privacy-First Architecture

Deploy on-prem or in region to support data residency needs
Encrypt personal data before transit; manage user consent

5. Audit-Ready Case Management

Maintain logs of onboarding decisions, escalations, and communications
Generate compliance reports for internal audits or regulator reviews

Case Insight: Commercial Insurer in Australia
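As a rough illustration of the kind of "configurable risk model" described above, a toy rules engine might score a policy on premium size, geography, and funding source. Everything here — the thresholds, country codes, and field names — is a hypothetical sketch, not iComply's actual API or scoring logic:

```python
# Toy AML risk-scoring sketch for insurance policies.
# All thresholds, lists, and field names are illustrative assumptions.
HIGH_RISK_COUNTRIES = {"XX", "YY"}      # placeholder country codes
SINGLE_PREMIUM_THRESHOLD = 100_000      # illustrative high-value cutoff

def risk_score(policy: dict) -> int:
    """Sum simple risk points from policy attributes."""
    score = 0
    if policy.get("single_premium", 0) >= SINGLE_PREMIUM_THRESHOLD:
        score += 2                      # high-value single-premium policy
    if policy.get("country") in HIGH_RISK_COUNTRIES:
        score += 2                      # high-risk geography
    if policy.get("third_party_funded"):
        score += 1                      # funded by a third party
    if policy.get("offshore_account"):
        score += 1                      # offshore source of funds
    return score

def needs_edd(policy: dict, threshold: int = 3) -> bool:
    """Escalate to enhanced due diligence when the score meets the threshold."""
    return risk_score(policy) >= threshold

# A large single-premium policy from a high-risk geography trips the alert
flagged = needs_edd({"single_premium": 250_000, "country": "XX"})
```

In a production system these rules would be driven by configuration rather than hard-coded, so compliance teams can tune them per policy type and jurisdiction without code changes.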

A national property and casualty insurer used iComply to centralize onboarding and screening for commercial policyholders and their brokers. Key results:

50% reduction in business client onboarding time
Improved detection of shell companies and nominee directors
Passed AUSTRAC inspection with full audit traceability and no findings

Final Take

Insurers that rely on outdated compliance processes are exposed—not just to enforcement, but to inefficiencies and missed risk signals.

Connect with iComply to learn how our platform helps insurance providers simplify AML tasks, reduce broker risk, and stay compliant—across borders and business lines.


Herond Browser

How to Choose the Right Movie Streaming Sites

Finding the perfect movie streaming sites in 2025 can be overwhelming with countless options available. This guide simplifies your choice. The post How to Choose the Right Movie Streaming Sites appeared first on Herond Blog.

Finding the perfect movie streaming sites in 2025 can be overwhelming with countless options available. This guide simplifies your choice, highlighting top platforms based on content, cost, and safety. Whether you’re after Netflix’s originals or free services like Tubi, we’ve got you covered.

Key Factors to Consider When Choosing Movie Streaming Sites

Seek Diverse Genres and Exclusive Titles

When exploring movie streaming sites, prioritize platforms offering varied genres, exclusive titles, and new releases to suit all tastes. In 2025, a robust content library ensures endless entertainment options for movie fans.

Compare Subscription Costs

Netflix ranges from $7.99-$24.99/month, while Hulu offers plans starting at $9.99/month with ads. Evaluating these costs ensures you select the best value.

Explore Free Trials and Ad-Supported Options

Maximize savings on movie streaming sites by seeking free trials or ad-supported plans like Tubi, which is free, or Hulu’s ad-supported tier at $9.99/month in 2025. Free trials let you test platforms like Disney+ before committing.

Prioritize 4K and Reliable Playback for Streaming

For the best experience, choose streaming sites like Netflix or Amazon Prime, offering 4K/HD streaming with crisp visuals in 2025. Ensure your internet speed hits 25 Mbps for smooth, uninterrupted playback. Reliable streaming quality enhances every movie night.
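The 25 Mbps rule of thumb can be expressed as a small check. The per-tier figures below are the commonly cited ballpark requirements, not any platform's official numbers:

```python
# Rough bandwidth check for streaming tiers.
# Mbps requirements are ballpark assumptions, not official platform figures.
REQUIRED_MBPS = {"SD": 3, "HD": 5, "FHD": 10, "4K": 25}

def best_supported_tier(measured_mbps: float) -> str:
    """Return the highest tier the measured downstream speed supports."""
    supported = [tier for tier, need in REQUIRED_MBPS.items()
                 if measured_mbps >= need]
    # REQUIRED_MBPS is ordered lowest to highest, so the last entry wins
    return supported[-1] if supported else "insufficient"
```

For example, a 30 Mbps connection clears the 4K bar, while 12 Mbps tops out at full HD.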

Choose Device-Compatible Platforms

Select movie streaming sites like Disney+ or Hulu that support smart TVs, mobiles, and desktops for seamless viewing in 2025. Compatibility with iOS, Android, and Roku ensures you can watch anywhere.

Safety and Security When Streaming

Avoid Unverified Movie Streaming Sites

Steer clear of unverified movie streaming sites to protect against malware and data theft risks in 2025. These platforms can compromise your device’s security. Prioritize trusted services for safe viewing.

Stick to Trusted Platforms Like Netflix and Hulu

For safe streaming, choose trusted streaming sites like Netflix or Hulu in 2025. These platforms offer reliable, high-quality content without the risks of unverified sites. Enjoy peace of mind with secure access.

Top Movie Streaming Sites for 2025

Netflix

Vast Library and Original Content

Netflix leads streaming sites with a vast library and originals like Squid Game. Offers diverse, high-quality content for all viewers in 2025.

Flexible Plans with 4K and Offline Downloads

Netflix’s plans (~$7.99-$24.99/month) include 4K streaming and offline downloads for flexible viewing in 2025.

Disney+

Disney+ Shines with Family-Friendly Content

Disney+ excels among streaming sites with Marvel, Star Wars, Pixar titles. Features series like The Mandalorian for all ages in 2025.

Affordable Ad-Free Streaming with Disney+

Disney+ offers ad-free streaming at ~$7.99-$14.99/month in 2025, delivering high-quality content across devices for budget-friendly viewing.

Hulu

Hulu’s Affordable Plans with Live TV

Hulu offers plans at ~$9.99-$19.99/month with ads or live TV in 2025. Affordable pricing suits budget-conscious viewers seeking quality content.

Diverse Shows and Movies on Hulu

Hulu provides a diverse streaming library in 2025, including current series and classic films for varied tastes.

Tips for a Better Streaming Experience

Ensure Fast Internet for 4K Streaming

Maintain 25 Mbps for 4K streaming on sites like Netflix in 2025. A stable connection prevents buffering for seamless movie viewing. Test your speed with tools like Speedtest to optimize your setup.

Use Speedtest to Avoid Buffering

Avoid buffering with regular Speedtest checks in 2025. Ensure your connection supports HD/4K playback on platforms like Netflix.

Stick to Trusted Movie Streaming Platforms

Choose verified movie streaming sites like Disney+ or Hulu, and avoid pirated platforms. Trusted sites ensure quality and safety from malware in 2025.

Conclusion

Choose trusted platforms like Netflix or Disney+ for premium, ad-free viewing, or try free options like Tubi with caution. Ensure a smooth experience by checking internet speed and user reviews. Dive into your favorite movies with confidence and enjoy seamless streaming today!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post How to Choose the Right Movie Streaming Sites appeared first on Herond Blog.



How Long Is a Football Game, Including Halftime & Overtime?

Wondering how long is a football game, including halftime and overtime? This guide breaks down game durations across leagues, helping you plan your game day. The post How Long Is a Football Game, Including Halftime & Overtime? appeared first on Herond Blog.

Wondering how long is a football game, including halftime and overtime? In 2025, NFL and college games typically run 3-3.5 hours, while high school games take 2-2.5 hours, factoring in 12-15 minute halftimes and potential overtime. This guide breaks down game durations across leagues, helping you plan your game day.

Standard Football Game Duration

NFL and College Football Game Duration

How long is a football game at the NFL or college level? Regulation play lasts 60 minutes across four 15-minute quarters, but the full game runs ~3-3.5 hours in 2025, including ~12-15 minute halftimes, commercial breaks, and stoppages like timeouts or reviews.

High School Football Game Length

For high school, regulation spans 48 minutes across four 12-minute quarters, with games typically lasting ~2-2.5 hours in 2025. Halftimes (~10-12 minutes) and fewer stoppages make it shorter than NFL games. Fans can enjoy a compact yet exciting game day.

Youth and Other League Game Times

Youth and other leagues feature shorter quarters, often 8-10 minutes, totaling ~1.5-2 hours in 2025. With brief halftimes and minimal stoppages, they’re perfect for quick, fun matches.
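The rough arithmetic behind these estimates can be sketched in a few lines of Python. The quarter lengths and overhead figures below are illustrative midpoints taken from the ranges above, not official timing rules:

```python
def estimate_game_hours(quarter_min: int, halftime_min: int,
                        overhead_min: int) -> float:
    """Estimate total elapsed time in hours:
    four quarters + halftime + stoppage/commercial overhead."""
    total_min = 4 * quarter_min + halftime_min + overhead_min
    return total_min / 60

# Illustrative midpoint figures for each level (minutes)
nfl = estimate_game_hours(quarter_min=15, halftime_min=13, overhead_min=110)
high_school = estimate_game_hours(quarter_min=12, halftime_min=11, overhead_min=70)
youth = estimate_game_hours(quarter_min=9, halftime_min=8, overhead_min=55)

print(f"NFL: ~{nfl:.1f} h, high school: ~{high_school:.1f} h, youth: ~{youth:.1f} h")
```

Plugging in the numbers shows why a 60-minute NFL game still fills a three-hour broadcast: the overhead dwarfs the regulation clock.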

Factors Affecting Game Duration

Stoppages, Overtime, and TV Breaks in Football

Timeouts, injuries, and reviews, plus TV commercial breaks, extend NFL and college games to ~3-3.5 hours in 2025. NFL overtime follows a 10-minute sudden-death format, adding ~10-15 minutes if needed. These interruptions prolong the game beyond its 60-minute playtime.

Real-Time vs. Playtime Differences

NFL games, though 60 minutes in regulation, feature only ~11 minutes of real action due to stoppages, per 2025 data. The rest includes timeouts, huddles, and commercials, stretching games to 3+ hours. Understanding this gap helps fans plan better.

Special Cases: Super Bowl Duration

In 2025, it lasts ~4 hours due to an extended halftime (~25-30 minutes) with star-studded performances, plus extra commercials and ceremonies. Unlike regular NFL games, this spectacle demands more viewing time.

Halftime Duration and Impact

NFL and College Halftime Duration

In NFL and college games, halftime lasts ~12-15 minutes, but the Super Bowl extends to ~25-30 minutes due to high-profile performances in 2025. These breaks allow for player rest and fan entertainment, adding to the 3-3.5-hour total game time.

High School and Youth League Halftime

High school halftimes run ~10-12 minutes, while youth leagues have shorter breaks of ~5-10 minutes in 2025. These compact pauses keep games at ~2-2.5 hours for high school and ~1.5-2 hours for youth.

Activities Extending Halftime

Performances, ceremonies, or band shows, especially in college or the Super Bowl, can stretch halftimes to 15-30 minutes in 2025. These events enhance the fan experience but extend overall game time.

Overtime Rules and Duration

NFL Overtime Duration

In 2025, NFL overtime follows a 10-minute sudden-death format, adding ~10-15 minutes if needed. This applies to regular season games, with playoffs potentially extending further. Overtime ensures thrilling resolutions but impacts total game time.

College Football Overtime Length

College football uses unlimited overtime periods, each lasting ~5-10 minutes, potentially adding 15-30+ minutes in 2025. Close games may significantly extend the ~3-3.5-hour duration. Fans should prepare for longer matches.

High School Overtime Variations

In 2025, high school overtime varies by state, typically using 8-10 minute periods, adding ~10-20 minutes. These shorter overtimes keep games around 2-2.5 hours.

Rare Cases of Extended Overtimes

Close games, especially in college or NFL playoffs, can see multiple overtime periods, adding 30+ minutes in 2025. These nail-biting matches create memorable moments but require extra planning. Stay updated via X to anticipate longer run times in such cases.

Securely Stream and Research Football Games

To plan around a game's length, stream or research it on trusted platforms like ESPN or NFL.com in 2025. Avoid unverified sites that risk malware. Check X for reliable game schedules and updates, and use a secure browser to access streaming services for a smooth, safe experience on every game day.

Stream Safely with Herond Browser

Wondering how long a game will run while you stream it? Use Herond Browser’s ad-blocker and anti-phishing features to safely access ESPN, NFL.com, or check schedules on X in 2025. Its ASAS (Advanced Security Alert System) blocks intrusive ads, ensuring uninterrupted viewing of 3-4 hour NFL games. Download it at herond.org to stream and plan your game day securely.

Protect Against Scams for Tickets and Streams

When planning your game day, avoid scams when buying tickets or accessing live streams. Unverified sites may steal data or deliver malware in 2025. Stick to trusted platforms like ESPN or NFL.com for schedules and streams, and use Herond Browser’s anti-phishing protection (herond.org) to safely navigate ticket purchases and streaming.

Download Herond Browser for Secure Access

To stream and research games securely, download Herond Browser at herond.org. Its ad-blocker and tracker protection safeguard against malicious ads on sites like ESPN or X, giving you peace of mind while checking schedules or streaming 2-4 hour football games in 2025.

Conclusion

Understanding how long a football game is (2-4 hours, with halftimes of ~12-30 minutes and potential overtime) helps you plan the perfect 2025 game day. From the NFL’s 3-3.5 hours to high school’s 2-2.5 hours, our guide covers it all. Stay updated via X and stream safely on trusted platforms like ESPN. Get ready for every thrilling moment with confidence!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:
Technical support topic on https://community.herond.org
On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser

The post How Long Is a Football Game, Including Halftime & Overtime? appeared first on Herond Blog.



auth0

Implementing Asynchronous Human-in-the-Loop Authorization in Python with LangGraph and Auth0

This tutorial demonstrates how to implement asynchronous authorization in a LangGraph application using Auth0 and the CIBA flow for secure, human-in-the-loop actions

Friday, 29. August 2025

SC Media - Identity and Access

TransUnion says 4.4 million customers affected by third-party breach

Stolen information included names, SSNs and dates of birth, but not credit information.



HYPR

The CBUAE's SMS and OTP Ban is a Golden Opportunity


The Central Bank of the UAE has drawn a line in the sand. By March 2026, the era of the SMS and One-Time Passwords will be over for the nation's financial institutions.

This is not a minor policy tweak. It's a seismic shift.

For years, the SMS/OTP has been the default security blanket for digital banking. A familiar, but flawed, solution. But the CBUAE's directive acknowledges a harsh reality: in the face of sophisticated phishing, SIM-swapping, and social engineering attacks, this legacy method has become a critical liability. It creates unacceptable financial and reputational risk.

For the C-suite in the UAE's banking sector, it's easy to view this as another compliance burden. Another costly, complex project to manage. But that’s a limited view. The leaders who will win the next decade of digital banking will see this mandate for what it truly is: a strategic inflection point. This is your opportunity to leapfrog the competition by building a digital experience that is not only radically more secure, but also profoundly simpler for your customers.

Phishing-Resistant Passkeys: The Secure Alternative to SMS OTP

The CBUAE recommends a move toward robust, risk-based authentication. The golden standard that unequivocally answers this call is passkeys.

Passkeys are not just an incremental improvement. They represent a fundamental change in authentication technology, offering a rare combination of superior security and a user experience that is genuinely effortless. Built on FIDO standards, passkeys replace passwords and OTPs entirely. They use the biometrics already built into your customers' devices, like Face ID or a fingerprint, to create a login experience that is fast, familiar, and frictionless.
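The phishing resistance comes from how the FIDO/WebAuthn protocol binds each passkey to its relying party: the authenticator signs over a SHA-256 hash of the relying party ID, so a credential created for a look-alike domain simply fails to verify. A minimal sketch of that one check (domain names are hypothetical, and real verification also validates the challenge and signature):

```python
import hashlib

def rp_id_matches(authenticator_data: bytes, expected_rp_id: str) -> bool:
    """The first 32 bytes of WebAuthn authenticator data are SHA-256(rp_id)."""
    rp_id_hash = authenticator_data[:32]
    return rp_id_hash == hashlib.sha256(expected_rp_id.encode()).digest()

# A credential created for the real bank's domain...
auth_data = hashlib.sha256(b"bank.example").digest() + b"\x05\x00\x00\x00\x01"

# ...verifies only against that RP ID, never against a phishing look-alike.
```

Because there is no shared secret that can be typed into the wrong site, the attack the CBUAE is worried about never gets off the ground.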

So, why are passkeys the definitive solution to the CBUAE mandate?

They are Inherently Phishing-Resistant. A passkey is cryptographically bound to your bank's specific website or app. There is no password to steal, no code to intercept. The primary attack vector for financial fraud is neutralized at its source, directly protecting your customers and your firm’s bottom line.

They Create a World-Class Customer Experience. No more waiting for delayed SMS messages. No more frustrated calls to the help desk. A frictionless, biometric login increases digital channel adoption, boosts customer satisfaction, and builds loyalty in a fiercely competitive market.

They Lower Your Operational Costs. The business case is undeniable. You can immediately eradicate the significant and rising costs of SMS delivery. More importantly, passwordless authentication slashes password-related help desk inquiries, lowering your total cost of ownership (TCO) and freeing up valuable IT resources to focus on innovation, not resets.

From Onboarding to Transactions: A CIAM Approach to Customer Identity

True digital leadership isn't just about a secure login. It’s about securing the entire customer relationship. This is where HYPR’s Customer Identity and Access Management (CIAM) solution extends the power of passkeys across the entire user journey.

Our unified framework allows you to:

Onboard Customers with Trust: Securely register new customers and establish confidence from the very first interaction, accelerating their transition into high-value digital clients.

Deliver Effortless Authentication: Provide a consistent, best-in-class login experience across all your digital properties, reinforcing your brand’s commitment to innovation and security.

Protect High-Value Transactions: Implement seamless, biometric step-up authentication for sensitive actions, preventing fraud without adding frustrating friction for your legitimate customers.

The HYPR Advantage: Proven Results and Accelerated Time-to-Market

Navigating this transition requires more than just new technology; it requires a proven, globally-deployed partner.

HYPR is not a startup testing a new theory. We are the trusted identity partner to the world's most demanding financial institutions, including two of the four largest US banks. Our FIDO-certified solutions are architected for the scale, reliability, and security your institution demands. And with our flexible SDKs and APIs, we enable rapid integration with your existing infrastructure, ensuring you lead the market in this transition, not follow it.

Conclusion

The CBUAE’s SMS OTP ban is far more than a compliance requirement — it’s a turning point for the UAE’s financial sector. Institutions that treat it as a checkbox exercise will fall behind, while those that embrace phishing-resistant passkeys will gain a lasting competitive edge.

Now is the time to act. With the March 2026 deadline fast approaching, early movers will be the ones to set the standard for secure, passwordless digital banking in the region.

Related Resources

Preventing Social Engineering Attacks on the Helpdesk
Best Practices for Identity Proofing in the Workplace
NIST SP 800-63-3 Review: Digital Identity Guidelines Overview
Passwordless MFA Security Evaluation Guide


1Kosmos BlockID

Addressing AI-Enabled Hiring Fraud: The Remote Work Identity Challenge


Hiring fraud is in the news. Google recently announced it’s bringing back in-person job interviews, citing concerns about AI cheating during technical assessments. But there’s a bigger issue lurking beneath the surface: how do companies verify that the identity of the person who applied, interviewed remotely, and got hired is actually the same and is who they claim to be?

This identity verification challenge has created an opening that sophisticated fraud networks and state-sponsored actors are actively exploiting. The US government has recognized the serious issue of North Korean operatives successfully impersonating American tech workers, securing remote positions and gaining access to sensitive corporate systems.

The Scale of the Problem

According to the Federal Trade Commission, financial losses from job and employment scams have exploded from $90 million in 2020 to more than $501 million in 2024, a staggering 456% increase that signals the emergence of hiring fraud as a major profit center for organized criminal networks.

These financial losses, while significant, represent only the measurable impact. The broader concern is operational disruption, intellectual property theft, and potential access to sensitive systems that could compromise business operations or customer data.

How Modern Hiring Fraud Works

Today’s hiring fraud has evolved beyond simple resume padding. We’re witnessing the emergence of “synthetic identities,” completely fabricated personas backed by AI-generated credentials, deepfake technology, and sophisticated social engineering.

AI-Powered Identity Fabrication

Modern fraud networks deploy AI tools that can generate convincing resumes and cover letters in minutes. More concerning, they’re using deepfake technology to mask fraudsters’ appearances and voices during video interviews, creating personas that can pass human scrutiny while bypassing traditional verification methods.

These aren’t isolated incidents. According to Google’s Mandiant threat intelligence team, one American facilitator working with North Korean IT workers “compromised more than 60 identities of U.S. persons, impacted more than 300 U.S. companies, and resulted in at least $6.8 million of revenue” over just three years. The report notes it’s “not uncommon for a DPRK IT worker to be working multiple jobs at once, pulling in multiple salaries on a monthly basis.”

The Challenges with Current Hiring Processes

The very technologies designed to streamline hiring (automated applicant screening, virtual interviews, and rapid onboarding) have become tools that sophisticated adversaries exploit at unprecedented scale.

The traditional hiring trust model, built on static documents, phone interviews, and the assumption that remote workers are who they claim to be, has proven insufficient in an era of AI-enabled deception.

Moving Beyond Point Solutions: The Identity Assurance Approach

Most cybersecurity vendors are approaching hiring fraud with the same mindset they apply to email phishing or malware detection as a point-in-time problem requiring better filters. But hiring fraud isn’t just a detection problem; it’s an identity assurance challenge.

Current solutions fall into two categories: applicant filters that optimize recruitment by culling suspicious applications, and breach prevention tools that try to catch infiltrators before they access sensitive systems. Both approaches treat symptoms while ignoring the root cause: the absence of a verifiable, persistent digital identity foundation.

How 1Kosmos Addresses Identity Assurance

1Kosmos has developed a different approach to this challenge, one that establishes verified identity proofing at the very first interaction and maintains that assurance throughout the entire employee lifecycle.

Our platform’s LiveID technology performs real-time liveness detection while cross-referencing live biometrics with verified government-issued credentials from issuing authorities. This creates a triangulation of identity claims that is exponentially more difficult for synthetic identities or deepfakes to spoof than traditional document-plus-selfie verification methods.
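This isn't 1Kosmos's actual pipeline, but the triangulation described above can be sketched as a policy that only passes when three independent signals all clear (the field names and thresholds here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class IdentityEvidence:
    liveness_score: float   # real-time liveness detection, 0..1
    doc_face_match: float   # live biometric vs. photo on the government ID, 0..1
    issuer_verified: bool   # credential checked against the issuing authority

def identity_assured(e: IdentityEvidence,
                     liveness_min: float = 0.9,
                     match_min: float = 0.85) -> bool:
    """All three signals must pass; a deepfake has to defeat each one at once."""
    return (e.liveness_score >= liveness_min
            and e.doc_face_match >= match_min
            and e.issuer_verified)
```

The point of the conjunction is that spoofing any single check (a replayed video, a forged document image) is not enough to be admitted.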

Continuous Identity Assurance Beyond Hiring

The value of true identity assurance extends beyond initial hiring decisions. Once an employee’s digital identity is established through the 1Kosmos platform, it becomes the foundation for every subsequent authentication, access request, and sensitive transaction throughout their tenure.

While point solutions focus exclusively on the hiring moment, 1Kosmos provides continuous identity-backed security. This addresses an important gap in most security strategies: the reality that threats don’t end once someone is hired. Account takeovers, insider threats, and credential compromise can still occur unless there’s a persistent, biometrically-backed identity foundation preventing them.

Building the Identity Foundation for Modern Workforce Security

The hiring fraud challenge represents more than a cybersecurity issue. It’s a trust challenge that requires organizations to rethink how they establish and maintain confidence in their workforce’s identity.

Companies can no longer afford to treat identity as a point-in-time checkbox in their security strategy. In an environment where sophisticated adversaries can manufacture convincing digital personas and nation-state actors are actively infiltrating American businesses through fraudulent hiring, identity assurance must become the foundational layer upon which all other security measures are built.

1Kosmos addresses hiring fraud by:

Establishing Trust on First Use: Securely onboarding new hires with high-assurance, government-verified identity proofing

Maintaining Trust Continuously: Providing continuous monitoring and persistent identity assurance for every login, access request, and sensitive operation

Empowering User Control: Making employees partners in their own security by giving them control over their identity data

Future-Proofing the Enterprise: Creating a zero-trust foundation that protects against the full spectrum of identity-based threats

Looking Forward: Evolution in Identity Security

As AI-powered deception capabilities continue to advance and organized fraud networks become increasingly sophisticated, companies face an important choice: evolve their identity assurance strategies or remain vulnerable to an escalating threat.

The companies that recognize identity as the new security perimeter and invest in platforms that provide verified identity assurance rather than point-in-time fraud detection will gain a significant advantage in both security and talent acquisition.
The question isn’t whether your organization will encounter hiring fraud. The question is whether you’ll detect it before it impacts your business, or better yet, prevent it entirely by building your workforce on a foundation of verified, persistent digital identity.

Is your organization prepared for the next wave of AI-enabled hiring fraud? Discover how 1Kosmos provides the identity foundation your workforce security strategy needs. Watch a Demo

The post Addressing AI-Enabled Hiring Fraud: The Remote Work Identity Challenge appeared first on 1Kosmos.


Indicio

Answering IATA’s call: how Indicio solves the bottlenecks in air travel

Combining Verifiable Credentials with biometric authentication, Indicio Proven underpins secure, privacy-preserving, and scalable travel tech solutions for governments, airlines, and airports.

By Trevor Butterworth

The International Air Transport Association (IATA) recently released a new white paper, Unlocking the Future – The Passenger’s Journey Toward a Seamless and Contactless Experience which highlights the pressing challenges facing aviation: growing passenger volumes, repeated identity checks, and the burden of manual document verification at every stage of the journey. 

From baggage drop to security, lounge access to boarding, and crossing international borders, travelers are asked to present documents again and again. This creates bottlenecks, frustrates passengers, consumes staff resources, and creates long queues that overwhelm airport infrastructure.

With air travel numbers growing each year and expected to reach eight billion by 2041, IATA argues that it is imperative to shift identity and admissibility checks as quickly as possible into a seamless, contactless system. 

This is what Indicio’s decentralized identity technology makes possible.

Indicio was the first to successfully develop and deploy Digital Passport Credentials following the International Civil Aviation Organization’s (ICAO) specification for Digital Travel Credentials (DTC) to allow travelers to cross borders in seconds.

Indicio was also the first to implement IATA’s OneID for seamless check in, lounge access, and boarding — and show how it could be combined with a Digital Passport Credential for international travel in a simple, single workflow for a traveler.

And Indicio will be the first to deploy a Digital Passport Credential issued by governments by the end of the year — and will use its expertise to help develop these credentials for issuance by European governments as part of the APTITUDE Large Scale Pilot project.

Streamline the traveler experience with reusable digital identity

Airlines, airports, and border agencies all need to confirm passenger identity. This means repeating the same check multiple times. 

Indicio Proven® turns these manual checks into automated, seamless experiences by creating “government-grade” digital identities that combine authenticated biometrics with Verifiable Credentials.

Verifiable Credentials are tamper-proof digital credentials held in a digital wallet on a mobile device. They can be cryptographically verified — which is instantaneous. 

With Digital Passport Credentials, travelers scan the electronic chips in their passports. Indicio’s software ingests the data and matches the facial image stored on the chip against a live selfie, using a liveness check to confirm that the person doing the scan is the person the passport belongs to. The passport data is then cryptographically verified to confirm it was issued by a legitimate passport office, after which the traveler receives a Digital Passport Credential. This credential follows the ICAO specification for a DTC-1.

Alternatively, the Digital Passport Credential is issued directly by the passport office as a counterpart to a traveler’s physical passport. This credential follows the ICAO specifications for DTC-2.
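In either path, the "cryptographically verified" step follows ICAO's passive authentication model: each data group read from the chip is hashed and compared against the signed hashes in the chip's Document Security Object (SOD). A toy sketch of the hash-comparison half (real implementations also validate the SOD's signature chain back to the issuing state, and the chip contents below are invented):

```python
import hashlib

def data_groups_intact(data_groups: dict[int, bytes],
                       sod_hashes: dict[int, bytes]) -> bool:
    """Compare SHA-256 of each chip data group to the signed hash in the SOD."""
    return all(
        hashlib.sha256(content).digest() == sod_hashes.get(dg_number)
        for dg_number, content in data_groups.items()
    )

# Toy chip contents: DG1 holds the MRZ data, DG2 the facial image.
chip = {1: b"P<UTOTRAVELER<<JANE", 2: b"<face image bytes>"}
signed = {n: hashlib.sha256(c).digest() for n, c in chip.items()}
```

Because the hashes are signed by the issuing authority, swapping the photo or any other data group breaks verification instantly.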

The result is that travelers have digital identities that can be seamlessly authenticated, either through a face scan or a contactless corridor.

In effect, the traveler’s face becomes their boarding pass. Airlines, airports, and border authorities can trust that the person cleared at the start is the same person moving through each step, without requiring the passenger to repeatedly present paper documents or plastic cards.

Meeting immigration and transit requirements

IATA’s white paper also notes the strain on airlines from needing to confirm that every international traveler has been approved for entry. Manual checks slow down processing and errors can lead to costly delays and fines. 

Indicio solves this by making it possible to verify a passenger’s immigration status before they even arrive at the airport. 

Credentials that prove visa status or travel authorization can be issued and shared as Verifiable Credentials, and instantly authenticated by the airline and immigration control. 

Airlines now have a secure way to stop inadmissible travelers from boarding and ensure smooth operations for carriers.

Why Indicio is the right partner

IATA’s call for an interoperable, end-to-end solution that transforms the passenger journey isn’t something in the distant future; it’s here today at Indicio, with technology already in use by airports and governments worldwide.

Our technology gives travelers a consistent, reliable experience anywhere, while providing authorities and airlines the assurance they need everywhere.

The future of aviation depends on removing bottlenecks and maintaining security. Indicio Proven® makes that future possible today.

Get your free travel architecture consultation from one of our experts here.  

 

###

The post Answering IATA’s call: how Indicio solves the bottlenecks in air travel appeared first on Indicio.


SC Media - Identity and Access

Palo Alto's deal with CyberArk proves identity has become the center of the cybersecurity universe

The $25 billion Palo Alto-CyberArk deal signals that all eyes in cybersecurity are now focused on identity.



Securing the perimeter-free enterprise

This article examines how organizations can secure today’s borderless networks by embracing zero trust, secure access service edge (SASE), and identity-centric access controls.



Elliptic

Ruble-backed stablecoins: the importance of identifying indirect sanctions exposure

In a recent blog post, the Elliptic Research Team revealed how sanctioned Russian actors are relying increasingly on ruble-backed stablecoins in an effort to bypass international financial restrictions imposed on Russia since its 2022 invasion of Ukraine. 



auth0

How to Build a Python MCP Server to Consult a Knowledge Base

Turn a blog into a searchable knowledge base for your AI assistant. Follow this step-by-step guide to build a local MCP server in Python, giving an MCP client the ability to interact with a blog without you ever having to leave the chat

Wednesday, 27. August 2025

Anonym

Rethinking Identity Insurance: From Payouts to Prevention  


For many insurers, identity insurance is still framed as a safety net. It’s only there if something goes wrong. For customers, that means help only arrives after fraud has already caused real problems. For insurers, it means bigger payouts. 

This old way of doing things is expensive, and it’s quickly losing relevance. 

The better way: Add proactive protection to your offerings.  

What is a proactive identity protection suite?  

A proactive identity protection suite has privacy and security tools that actively prevent fraud before it happens. These solutions work continuously, blocking threats in the background and seamlessly integrating safer tools into your customers’ daily lives.  

Key capabilities include: 

Credit freezes prevent criminals from opening fraudulent accounts.
Dark-web monitoring scans hacker forums for exposed data such as SSNs and credit card details.
Personal data removal deletes sensitive information from people-search and data broker sites.
Phishing and malware defense protects users with secure email and browser tools that block scams at the source.
Password manager creates and stores strong, unique passwords for every account, preventing credential theft and account takeover.
Private browser blocks ads, trackers, and cookies while keeping browsing history fully encrypted, ensuring no one can follow users online.
VPN (Virtual Private Network) encrypts internet connections on any network, protecting data and activity from hackers, snoops, or unsafe Wi-Fi.
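Of these, the password manager's core job is simple enough to sketch: generate a long, random, unique secret per account. A minimal illustration (not any vendor's implementation):

```python
import secrets
import string

# Character pool for generated passwords (illustrative choice of symbols).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    """Cryptographically random password; a fresh one per site defeats reuse attacks."""
    if length < 12:
        raise ValueError("refusing to generate a weak password")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Using `secrets` rather than `random` matters here: the former draws from a CSPRNG, so generated passwords are not predictable from earlier outputs.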

Why prevention works 

Credit monitoring, real-time alerts, and dark-web scanning allow insurers and customers to spot threats early. Stopping fraud at the source not only saves customers from stress but also reduces costly claims for insurers. The result is greater peace of mind for policyholders and stronger retention for providers. 

By catching suspicious activity quickly, these solutions prevent minor issues from turning into significant losses. Customers see the benefit every day, which keeps them engaged and feeling secure. And for insurers, that proactive approach means fewer claims to cover, lower costs, and stronger long-term relationships with policyholders. 

The dual benefit of proactive insurance 

Proactive identity protection turns a one-time claim process into a continuous service that builds trust and engagement. Better yet, this trust becomes a business advantage:  

Customer retention matters. Improving retention by just 5% can boost insurer profits by 25% to 95%. 

In the insurance sector, offering seamless digital engagement and proactive protection can significantly increase customer loyalty, resulting in more renewals and upsell opportunities. 

Prevention pays (literally) 

For insurers, prevention benefits customers and helps the bottom line. Every fraudulent incident avoided is a claim you don’t have to pay. At scale, that translates into substantial savings. Add in the fact that subscription-based identity protection creates recurring, high-margin revenue, and the business case becomes clear.  

A proactive model lowers overall risk exposure while generating consistent income, all while positioning your brand as a leader in trust and innovation. The end result is stronger financial performance and stronger customer relationships. 

Protect your customers and your bottom line  

The old way of handling identity theft, waiting until fraud happens and then covering the losses, isn’t enough anymore. Customers don’t just want reimbursement; they want protection. Proactive tools turn identity insurance into something customers use every day, not just in a crisis. 

The benefits flow both ways: 

Customers avoid the stress and frustration of fraud before it ever impacts their lives.
Insurers cut down on claim payouts, reduce operating costs, and strengthen long-term loyalty.

With Anonyome Labs’ Digital Identity Protection suite, insurers can deliver these proactive tools under their own brand. It’s a simple way to move from reactive payouts to daily, trust-building protection by creating a win for both policyholders and your bottom line.  

Get a demo today!  

The post Rethinking Identity Insurance: From Payouts to Prevention   appeared first on Anonyome Labs.


Elliptic

OFAC sanctions a fraud network for assisting the North Korean (DPRK) regime


On August 27, 2025, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) sanctioned Russian national Vitaliy Sergeyevich Andreyev (along with a North Korean individual and two entities), for assisting the DPRK in targeting “American businesses through fraud schemes involving its overseas IT workers, who steal data and demand ransom,” according to OFAC’s press release. These designations also build on several other actions OFAC has taken in the last several months to stop the DPRK’s IT worker schemes, including sanctions on July 8 and July 24.

Andreyev is the only designated person who was associated with a crypto address in today’s action. However, OFAC also designated Kim Ung Sun, who is believed to have facilitated “multiple financial transfers worth a total of nearly $600,000, by converting cryptocurrency to cash in U.S. dollars.” 

As noted in the press release, North Korea continues to embed IT workers in overseas companies, especially crypto and Web3 companies. These workers use curated fake identities, many of which are reused, and take advantage of the predominantly remote working culture among these companies. The workers provide legitimate services for the companies, sending their pay back to North Korea to support the regime’s weapons of mass destruction and ballistic missile programs. At the same time, the IT workers remain on the lookout for ways their employer could be exploited in the future, either for financial gain or to steal sensitive data.

Andreyev is said to be linked to Chinyong Information Technology Cooperation Company, a North Korean employer of IT workers that operate in Russia and Laos. Two other companies that were sanctioned today are Shenyang Geumpungri Network Technology, “a Chinese front company for Chinyong” and Korea Sinjin Trading Corporation. OFAC assesses that “since 2021, Shenyang Geumpungri’s delegation of DPRK IT workers has earned over $1 million in profits for Chinyong and Korea Sinjin Trading Corporation”. Korea Sinjin Trading Corporation operates under the Ministry of People’s Armed Forces General Political Bureau, which was sanctioned by OFAC in June 2017.

OFAC only listed a single address associated with today’s designations. However, Elliptic has several additional addresses labelled in our dataset associated with DPRK IT workers.

Elliptic’s data shows that today’s sanctioned Bitcoin address has received over $600,000 of payments and has source exposure back to the Atomic Wallet exploit of June 2023, itself attributed to the DPRK (Lazarus Group). Other addresses along the path of funds take part in cross-chain bridging, which is very much in keeping with favoured North Korean actor obfuscation tactics.






SC Media - Identity and Access

Aggressive user data collection mostly done by Chinese-based apps

HackRead reports that more than half of the most downloaded foreign mobile apps in the U.S. with highly aggressive data gathering and sharing practices were based in China.


Extensive Salesforce data theft campaign fueled by stolen Salesloft Drift OAuth tokens

Third-party artificial intelligence chat agent Salesloft Drift had its OAuth tokens pilfered by the UNC6395 threat operation to exfiltrate troves of information from over 700 organizations using Salesforce systems as part of an attack campaign that ran from August 8 to 18, according to CyberScoop.


Elliptic

Introducing Issuer Due Diligence: The first stablecoin compliance solution for banks, custodians, and asset managers

Stablecoins are rapidly reshaping the future of money. As regulatory frameworks begin to align with digital asset innovation, financial institutions have a unique opportunity to play a foundational role in facilitating the growth of this market, not just by issuing or investing, but by holding reserve assets on behalf of stablecoin issuers.

But with opportunity comes risk and uncertainty.

How can institutions confidently hold reserve assets - whether that’s fiat in a bank account, short-term gilts in custody, or a money market fund - without visibility into the on-chain activity surrounding the issuer? How can compliance teams meet evolving expectations if they can’t assess the behavior of the wallets they’re indirectly supporting?

Elliptic is solving that challenge, and today, we are proud to introduce Issuer Due Diligence: the first fit-for-purpose solution enabling banks and other financial institutions to assess issuer wallet risk before holding stablecoin reserves.


ComplyCube

ComplyCube wins 2025 Tech Cares Award for Third Consecutive Year

In recognition of the IDV leader’s consistent efforts and innovative approach in the tech sector, ComplyCube has been presented with the Tech Cares Award by TrustRadius for Corporate Social Responsibility.

The post ComplyCube wins 2025 Tech Cares Award for Third Consecutive Year first appeared on ComplyCube.


Aergo

House Party Protocol (HPP) Integrates with Orbiter Finance: Accelerating the AI-Native Future

The AI-native era is not a distant vision. It’s here today. A new class of infrastructure is emerging, designed to power real-time autonomous agents, verifiable off-chain inference, and multi-chain economies. In this movement, House Party Protocol (HPP) is excited to announce its integration with Orbiter Finance.

Through this partnership, Orbiter becomes HPP’s official cross-chain bridge partner, enabling fast, low-cost, and secure transfers of $ETH, $USDC, and $HPP directly to the HPP Mainnet. This integration does more than connect assets. It opens a new gateway for developers, enterprises, and communities to build, deploy, and scale in the AI-native economy.

The HPP-Orbiter Vision

The HPP-Orbiter partnership goes beyond bridging — it lays the foundation for a new wave of possibilities in decentralized AI. By channeling liquidity into HPP, Orbiter enables AI-native DeFi through platforms like ArenAI, where autonomous agents can power next-generation trading and yield strategies.

Its interoperability also opens the door for cross-chain AI markets, allowing tokens, data, and intelligent agents from other ecosystems to seamlessly participate in HPP’s verifiable AI economy. As HPP expands its partner stack, Orbiter serves as the key connectivity hub, ensuring that liquidity, agents, and innovation flow freely across the broader Web3 and AI landscape.

Orbiter’s Role: Fueling Cross-Chain Agility

As HPP’s official bridge partner, Orbiter Finance plays a pivotal role in expanding accessibility to HPP’s ecosystem. By providing low-fee and near-instant bridging, Orbiter ensures that liquidity, developers, and users can flow seamlessly into HPP.

With Orbiter, participants gain:

Effortless onboarding from major chains into HPP.
Trusted security with decentralized liquidity pathways.
Frictionless access to HPP-native dApps, AI agents, DeFi protocols, and enterprise-grade integrations.

About HPP and Orbiter Finance

About House Party Protocol (HPP)

House Party Protocol (HPP) is an AI-native Layer 2 network designed to power decentralized intelligence. Evolving from Aergo’s enterprise-grade legacy, HPP serves as the AI-native reactor for decentralized systems, transforming intelligence into energy and providing a scalable foundation for autonomous agents, verifiable off-chain inference, and multi-chain economies.

About Orbiter Finance

Orbiter Finance is a decentralized cross-rollup bridge that offers secure, low-cost, and near-instant transfers. It has supported asset transactions across over 70 networks, including Ethereum, Arbitrum, Optimism, Base, Sonic, Starknet, Berachain, Solana, Sui, Movement, and other ETH L2s & BTC L2s.

House Party Protocol (HPP) Integrates with Orbiter Finance: Accelerating the AI-Native Future was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

Level Up Your Security Posture with the Auth0 Security Detection Catalog

Discover the Auth0 Security Detection Catalog, a new open-source resource on GitHub that provides actionable intelligence and proactive threat updates to boost your security efforts.

FastID

Vibe Shift? Senior Developers Ship nearly 2.5x more AI Code than Junior Counterparts

Fastly’s survey shows senior developers trust gen AI tools enough to ship 2.5x more AI code, while juniors stick to traditional coding and caution.

Tuesday, 26. August 2025

Radiant Logic

California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic

Discover how Radiant Logic’s SCIMv2 support simplifies identity management, enabling seamless automation, governance, and Zero Trust alignment across hybrid environments. The post California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic appeared first on Radiant Logic.

Indicio

Build a better mDL with biometrics using Indicio Proven®

Don’t settle for basic when you can get portable, fraud-proof mDLs biometrically bound to the holder.

By: Helen Garneau

Mobile driver’s licenses (mDLs) are digital credentials that replace physical, plastic driver’s licenses. You hold them on your phone and present them for instant identity authentication. Many states in the U.S. have begun issuing them, but they don’t always make them easy to verify, which inhibits adoption. The European Union has made mDL adoption a goal of its digital identity and digital wallet rollout.

Still, the promise is strong. What makes an mDL useful is the ability to verify it and confirm that the person presenting it is the rightful holder — along with other useful information, such as their age.

But in an age of increasing biometric fraud, mDLs already need an upgrade — authenticated biometrics.

By authenticating the image of the person on the license as the person holding the license and then binding both to their device, an mDL becomes a much more powerful form of digital identity — capable of defense against biometric identity fraud.  

With Indicio Proven®, you have the power to authenticate biometrics using facemapping and liveness checks, validate official documents and the information they contain, and bind them to their rightful owner by issuing a Verifiable Credential.

This is the process used to create Digital Travel Credentials (DTCs) — ”government-grade” digital identities for crossing a border seamlessly — and a similar process can be used to create mDLs with authenticated biometric benefits.

Advanced mDL verification

With authenticated biometrics, mDLs are much more powerful. For businesses, this level of assurance opens the door to workflows that were previously too risky or too expensive to conduct remotely. Businesses and government services can confirm a person’s identity at a distance with confidence. This saves money and reduces fraud by eliminating duplicate verification steps and cutting back on time-consuming, error-prone manual processes.

For example, a bank can onboard a customer remotely, knowing that the individual on the other side of the screen is the same person to whom the mDL was issued. This cuts the cost of in-branch visits, prevents fraud, and accelerates account openings. 

A healthcare provider can verify a patient’s identity before a telemedicine appointment, reducing the risk of errors or misuse of insurance. 

Employers can complete I-9 or age-verification processes remotely, without waiting for physical documents.

In travel, an mDL with authenticated biometrics can streamline every stage of domestic travel. Airlines can verify identity at booking, security, and boarding without requiring multiple checks of physical documents. 

The TSA can process travelers more efficiently, confident that the person presenting the mDL matches the credential issued to them. The result is a smoother passenger experience and stronger security at the same time.

Every successful mDL verification builds trust and adoption, creating an ecosystem where identity works as seamlessly as swiping a card, but with stronger security and privacy.

The bigger mDL picture

The long-term value is clear. An mDL that is biometrically bound to its holder moves across use cases and industries for faster, simpler, and more secure experiences. For the governments that have invested in issuing mDLs, adoption means their investment in mDLs is paying off.

Contact us to see how Proven can help you turn mDLs into the foundation for a trusted digital identity infrastructure that your business can rely on.

###

The post Build a better mDL with biometrics using Indicio Proven® appeared first on Indicio.


SC Media - Identity and Access

TheTruthSpy spyware app impacted by critical data-exposing issue

TechCrunch reports that Vietnamese mobile spyware app TheTruthSpy, which has rebranded to PhoneParental, is impacted by a critical security flaw, which could be exploited to facilitate user account hijacking and the subsequent theft of victims' information.


Kin AI

The Kinside Scoop 👀 #12

A peek at release week

Hey folks 👋

Two weeks have gone by in a flash for us - and we’ve got a lot to show for it.

So much, in fact, that this email is a day late!

Before all of that, though, we have a quick favour to ask:

Turning up the pressure 🔥

We’ve started digging into a big question:
How does Kin serve people in high-pressure environments?

Founders. Entrepreneurs. Elite athletes.

People who operate under constant stakes and stress.

We’re listening, learning, and seeing how Kin can evolve into a real edge for how they prepare, recover, and perform.

If that sounds like you, or someone you know, we’d love to hear your story.

Get in contact with us at hello@mykin.ai, or reach out to us on our Discord, to start a conversation.

Anyway, now on to the changes…

What’s new with Kin 🚀 Advisors enter the chat 🧑‍🏫

Starting this week, we’re introducing Advisors into Kin - a collection of independent specialist personas designed to give you sharper, more tailored support.

At launch, you’ll meet:

Thinking Partner → to bounce around ideas

Strategic Assistant → to cut through the noise and plan

Relationship Coach → to help you navigate the messy stuff between people

We did this because, instead of a one-size-fits-all model, Advisors allow Kin to feel more like the roundtables and podcasts that inspire so many of us.

It’s a cleaner way to consider conflicting viewpoints, and to provide you with a range of dedicated fresh perspectives on whatever choice you decide to take.

Hands-free is free to roam 🎙️🚴

If you’ve been trying to use Kin while biking to work (very Danish of us), driving around, or squeezing in a jog, we have some good news.

Our new, updated hands-free experience will roll out this week, so you can properly chat with Kin on the move.

This mode lets Kin enter those short, in-between moments when you don’t have time to sit down - but still want to clear your head or prep for the day ahead.

Memory that actually remembers 🧠✨

We’ve been talking about our memory upgrades for a while, and this week, one of the first is going out.

This update makes Kin’s Memory clearer and more comprehensive, so that it’s easier to control, easier to review, and easier to give feedback on. That means more accurate recall, fewer frustrations, and better conversations over time.

It also means we’ll get more feedback from you, so we can make it better faster.

Come chat with us 🔊

You can always reach out to the KIN team at hello@mykin.ai with anything, from feature feedback to a bit of AI discussion (though support queries will be better helped over at support@mykin.ai).

For something more interactive, the official Kin Discord is still the best place to talk to the Kin development team (as well as other users) about anything AI.

We regularly run three casual weekly calls, and you’re invited:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

Our current reads 📚

Article - OpenAI want future ChatGPT models to be more user-customisable
READ - CNBC

Article - Cloudflare begins their AI Week
READ - Cloudflare

Article - AI ‘Immune’ system for tech Phoebe lands Google backing
READ - Sky News

Article - Why the future of AI is collaboration, not automation
READ - The Atlantic

This week’s super prompt 🤖

This week’s super prompt is:
“How do I tend to approach situations?”

If you have Kin installed and up to date, you can tap the link below (on mobile!) to immediately jump into discussing how you personally approach difficult situations, and gain insight into which of our three new Advisors you most closely identify with.

As a reminder, you can do this on both iOS and Android.

Open prompt in Kin

Keep talking 🗣

We’re gearing up for some big changes and big releases, both this week and beyond.

More importantly, what we’re bringing to you isn’t fully fleshed out - we’re building this plane while we’re flying it.

Which means, your voices are as needed as ever.

So please - reply to this email, chat in our Discord, or even just shake the app to reach out to us.

Without knowing how you feel, we can’t make Kin the best app it can be for you.

With love,

The KIN Team


1Kosmos BlockID

The Silent Payroll Heist Hitting Universities

As campuses gear up for another academic year, a quieter — but equally damaging — threat is draining university budgets: direct deposit fraud.

This isn’t ransomware that makes headlines by shutting down networks. Instead, it slips through unnoticed. Fraudsters steal credentials, log in like a legitimate user, and quietly reroute paychecks, stipends, and refunds to their own accounts. By the time faculty or students realize a payment is missing, the money is long gone.

Why Universities Are Prime Targets

Universities process millions in payments every semester:

Faculty and staff payroll
Student worker wages
Research and graduate stipends
Tuition refunds and financial aid

The attack surface is huge. Thousands of new students and employees join each term, many with limited cybersecurity awareness. Add in multiple disconnected systems (HR, payroll, bursar) and self-service portals that let users update bank info with little verification, and it’s a fraudster’s dream.

Anatomy of a Campus Heist

The playbook is simple:

Compromise credentials – via phishing or stolen logins.
Access payroll/portal – log in as the user.
Change direct deposit info – update bank details to a mule account.
Wait for payday – the next paycheck or refund flows to the fraudster.

No malware. No alarms. Just stolen wages.

The True Cost

Beyond the missing funds, universities are left scrambling:

Covering replacement paychecks
Hours of admin and IT investigation
Damaged credit and financial stress for victims
Reputational hits that erode trust with faculty and students

Worse, once a fraudster succeeds at one campus, the same playbook spreads quickly to others.

The Identity Gap

The weakness isn’t the technology — it’s the assumption. Most systems trust that if you know the password, you must be the rightful owner. In today’s world of credential compromise, that assumption is broken.

How to Stop It: Three Layers of Protection

Universities can close the gap by verifying more than just passwords:

Verify the person – Step-up identity checks at the moment of a bank account change — government ID scan + selfie match, or biometric re-authentication.
Verify the account – Use services like Plaid to confirm the bank account is actually owned by the verified user, not a money mule.
Verify the risk – Apply risk-based rules: if the request comes from a new device or unusual location, enforce stronger checks before approving changes.
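A minimal sketch of how a portal might gate a sensitive change behind these three layers. The check names, signal fields, and thresholds are illustrative assumptions, not 1Kosmos's actual API:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    user_id: str
    device_known: bool   # device previously seen for this user
    geo_usual: bool      # request came from a typical location
    field: str           # e.g. "direct_deposit_account" (hypothetical field name)

# Fields that always trigger person + account verification when changed.
SENSITIVE_FIELDS = {"direct_deposit_account"}

def required_checks(req: ChangeRequest) -> list[str]:
    """Return the verification steps to require before applying the change."""
    checks = []
    if req.field in SENSITIVE_FIELDS:
        # Layer 1: re-verify the person at the moment of a bank-detail change.
        checks.append("id_scan_plus_selfie")
        # Layer 2: confirm the new account is owned by the verified user.
        checks.append("bank_account_ownership")
    # Layer 3: escalate on risk signals such as a new device or unusual location.
    if not req.device_known or not req.geo_usual:
        checks.append("biometric_reauthentication")
    return checks

# A direct-deposit change from an unrecognized device triggers all three layers.
req = ChangeRequest("u123", device_known=False, geo_usual=True,
                    field="direct_deposit_account")
print(required_checks(req))
```

The point of the sketch is that stolen credentials alone no longer suffice: the extra checks attach to the action (changing bank details), not the login.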

Together, these controls stop fraudsters cold, even if they’ve stolen valid credentials.

The Path Forward

Direct deposit fraud may not make headlines, but it’s quietly siphoning millions from universities. The fix is both available and practical: add identity verification at the exact point where sensitive changes happen.

For faculty and students, it’s 30 seconds of extra security. For universities, it’s the difference between a secure paycheck and a stolen one.

Contact us to learn how to implement identity verification on your campus.

The post The Silent Payroll Heist Hitting Universities appeared first on 1Kosmos.


Thales Group

Enabling the Future Force: trusted AI, resilient C2 and human–machine teaming at the pace of the threat

Decision advantage that scales with the fight, with humans firmly in control

Modern operations are dynamic, integrated and more data-driven, placing greater cognitive burden on operators and commanders. To out-decide adversaries, forces need trusted AI, interoperable C2, resilient sensing and effect orchestration — all designed around the human. At DSEI, Thales shows how “Enabling the Future Force” turns these needs into outcomes: see more, think faster, decide faster and act smarter.

Why this matters now

Peer competitors, rapid technology cycles and coalition operations are rewriting concepts of operations. The goal isn’t “more tech”; it’s mission outcomes at the pace of the threat — with accountability and meaningful human control built in. Human–machine teaming (HMT) blends human judgement with machine speed across human-in/on/out-of-the-loop models to reduce cognitive burden without losing oversight.

“Trusted AI should help commanders and operators decide faster — and explain why.”

See more: clearer, wider, earlier situational awareness

At DSEI we will be demonstrating sensing and ISR integration capabilities that expand coverage, filter noise and cue action. ELIX-IR integrated into a Helmet-Mounted Display provides rapid threat/friendly awareness. PAAG/TrueHunter delivers long-range precision cueing. Undersea vignettes show MCUBE & ASW Hub fusing data for earlier detection. The outcome is clear: getting the right picture to the right decider, at speed.

On-stand to look for: ELIX-IR, PAAG/TrueHunter briefings, MCUBE/Blue MMS & ASW Hub, and ISTAR Node.

Think faster: decision tools at the edge

Edge-ready C2 and operator-centred UIs compress the time between sensing and effect. ACE (Agile C4I at the Edge) and the DigitalCrew® interface patterns standardise workflows, surface machine rationale and cut through information overload — improving cross-domain coordination while keeping humans as the ultimate decision-makers.

On-stand to look for: ACE (screen), Helmet-Mounted Display, ISTAR Node (screen), Alt-NAV module, DigitalCrew® UI patterns.

Act smarter: orchestrating proportional effects

Current and future operations demand a broader set of options — both kinetic and non-kinetic — aligned to the threat and the rules of engagement. Our DSEI vignettes show machine-assisted weapons optimisation (ACE > IWO > Multi-Mission Fire Control) with humans in the loop. Aviator HMT threads link helmet sighting with Peregrine and LMM concepts for faster, explainable engagements when seconds count.

On-stand to look for: IWO (screen), Multi-Mission Fire Control model, aviator HMT vignette, Next Generation and Remote Weapons Systems.

Built for coalition operations: secure, interoperable, resilient

Agility without resilience is not an operational benefit. “Enabling the Future Force” is about open interfaces, disaggregated C2, and multi-domain networks that keep working under both cyber and EMS threats. Zero-trust by design is essential, as are sovereign capability where required and NATO/partner interoperability to deliver on NATO commitments.

What to see at DSEI (quick guide)

Decision tools: ACE C4I, ISTAR Node
Operator systems: Helmet-Mounted Display, Alt-NAV
Effect orchestration: IWO, Multi-Mission Fire Control
Undersea ops: MCUBE/Blue MMS & ASW Hub

Find us: Thales stand S8-110, ExCeL London.

Outcomes that matter

When programmes pair trusted AI with operator-centred design and interoperable C2, they see measurable gains: faster target identification, reduced cognitive load, improved cross-domain coordination, and smoother coalition integration. That’s how forces see more, think faster and act smarter — with humans setting the tempo.

Join us on the stand [S8-110] to walk through land, air, maritime, or strategic communications mission threads. Attend a capability briefing on applied AI, HMT and multi-domain vignettes. Download our Human–Machine Teaming insight for governance, ethics and adoption roadmaps.

liminal (was OWI)

5 Takeaways from Our IAM Demo Day

IAM Demo Day 2025: What Buyers Need to Know

When we set out to host the IAM Demo Day, the goal was not just to showcase products. It was to answer a bigger question: what does the future of identity access management actually look like in practice? On August 20, 12 leading vendors gave us their answer. From adaptive authentication and orchestration layers to access governance and Zero Trust enforcement, what became clear is that IAM is no longer a background IT function. It has become the connective tissue between security, compliance, and user experience.

For buyers such as CISOs under pressure to reduce risk, product leaders balancing login friction with conversion, and compliance officers navigating new regulatory regimes, the demos were a reminder that vendor selection in IAM is not about comparing feature checklists, but choosing the architecture your organization will be living with for the next decade.

1. Adaptive Authentication Is the New Baseline

The age of static passwords and “MFA everywhere” is over. Nearly 90% of enterprises experienced an account takeover attempt last year, and the vendors on stage were unanimous: authentication must adapt dynamically to context. Device, location, behavioral signals — these are the new inputs to trust.

As Joe Palmer, CIO at iProov, explained: “Not all biometrics are equal. Device biometrics like Face ID prioritize convenience over security. Cloud biometrics, tied to a trusted ID, are what stop deepfakes.”

He then added, “You don’t need to force a face scan on every login. Zero Trust means you always authenticate, but when risk is high, that’s when biometrics shine.”

The takeaway is clear: if your authentication system treats a suspicious login attempt the same way it treats a low-risk returning user, you are already behind. And for product leaders, the stakes are even higher. Every unnecessary prompt is a drop in conversion, every extra click a lost customer. Adaptive authentication is moving beyond being just a security control; it’s becoming a growth strategy.
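To make the idea concrete, here is a toy risk scorer that maps context signals (device, location, behavior) to a proportionate challenge. The signal names, weights, and tiers are illustrative assumptions, not any vendor's actual model:

```python
def risk_score(signals: dict) -> int:
    """Toy risk score from login-context signals; weights are illustrative."""
    score = 0
    if signals.get("new_device"):
        score += 40
    if signals.get("unusual_location"):
        score += 30
    if signals.get("impossible_travel"):
        score += 50
    if signals.get("failed_attempts", 0) > 2:
        score += 20
    return score

def auth_requirement(signals: dict) -> str:
    """Map risk to a challenge, keeping low-risk logins frictionless."""
    score = risk_score(signals)
    if score >= 70:
        # High assurance only when risk warrants it, e.g. a cloud biometric
        # with liveness detection rather than a device-local check.
        return "cloud_biometric"
    if score >= 30:
        return "mfa_prompt"
    # Returning user on a known device: no extra friction, no lost conversion.
    return "none"

print(auth_requirement({"new_device": False, "unusual_location": False}))  # low risk
print(auth_requirement({"new_device": True, "unusual_location": True}))    # step up
```

The design choice mirrors the quote above: always authenticate, but reserve the strongest (and highest-friction) factor for the moments when risk is actually high.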

2. Identity Orchestration Is Becoming the Control Layer

If authentication is the frontline, orchestration is the command center. Time and again, vendors showcased orchestration layers that knit together logins, consent flows, and user data across disparate systems. Why? Because 74% of organizations still cite fragmented identity data as their biggest challenge.

Think about that. For all the talk of digital transformation, most enterprises are still piecing together identity flows that do not talk to each other. Orchestration platforms promise to end that.

As David Mahdi, CIO at Transmit Security, explained: “You bring in all these third-party solutions… it adds to the complexity. And frankly, attackers love this. That’s where orchestration comes in as the baseline — to unify identity and give you a single confident view of the user.”

Orchestration becomes the difference between weeks of custom integration and a few clicks. It ensures that consent and access policies are enforced consistently, no matter whether a customer logs in through an app, a website, or a third-party service.
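A rough sketch of that consistency guarantee, with hypothetical policy and session fields: one centrally defined policy, evaluated the same way no matter which channel the request arrives through:

```python
# Hypothetical orchestration policy: defined once, enforced for every channel,
# instead of consent and access logic re-implemented per integration.
POLICY = {
    "consent_required": ["marketing_emails"],
    "min_auth_level": {"view_profile": 1, "update_payment": 2},
}

def authorize(channel: str, action: str, session: dict) -> bool:
    """Apply the same policy whether the login came via app, web, or partner API.

    The channel is logged but deliberately does not change the decision:
    that uniformity is the point of an orchestration layer.
    """
    needed = POLICY["min_auth_level"].get(action, 1)
    if session["auth_level"] < needed:
        return False  # requires step-up authentication first
    # Consent is evaluated centrally, not re-checked differently per channel.
    if action in POLICY["consent_required"] and action not in session["consents"]:
        return False
    return True

session = {"auth_level": 1, "consents": []}
print(authorize("web", "view_profile", session))    # allowed
print(authorize("app", "update_payment", session))  # denied: needs step-up
```

Because every channel funnels through the same `authorize` call, a policy change lands everywhere at once, which is the "few clicks versus weeks of custom integration" difference described above.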

But as orchestration stitches systems together, it also shines a light on the next pressing question: who has access, and can you prove it?

3. Access Governance Moves to the Forefront

Once upon a time, access governance was the unglamorous corner of IAM, more a compliance checkbox than a competitive differentiator. Not anymore. Workforce IAM demos leaned heavily on role-based access, privileged account management, and Zero Trust enforcement. The subtext was clear: governance is now make-or-break.

As Filip Verley, our CIO, reminded the audience: “Zero Trust isn’t just about denying access, it’s about proving the right access. And proving it again and again, in ways that regulators can see.”

Least privilege is not a nice-to-have; it is the only defensible posture in a regulatory audit. The burden of proof is now squarely on the enterprise, and as governance rises in importance for the workforce, a parallel trend is reshaping the customer side of IAM. 

If IAM is splintering into so many dimensions, then buyers themselves must become more specific about what success looks like.

4. Customer IAM Is Becoming the Source of Truth

If there was one forward-looking theme cutting across the demos, it was the push for customer IAM to become the authoritative record of identity. Liminal research shows that 91% of businesses want CIAM solutions that integrate with MDM, and while vendors didn’t use the term “MDM,” they clearly pointed in that direction. Several positioned themselves not just as login providers, but as the backbone for a unified, trustworthy customer record.

As Brook Lovatt, CPO at SecureAuth, explained: “Identity is more than just people; it’s about agents, APIs, and systems that act on behalf of people. If you don’t extend IAM guardrails to them, you’re blind to half your attack surface.”

Why does this matter? Because fragmented identity data is more than a compliance risk; it is a drag on the business. Personalization fails when profiles are scattered, silos expand the attack surface, and regulators will not accept “we couldn’t reconcile the data” as an excuse. The takeaway is clear: IAM is evolving into the system of record for identity, spanning both human and non-human users.

5. Buyer Priorities Are Becoming Role-Specific

The final, and perhaps most important, takeaway is that IAM is no longer one market with one buyer. The demos underscored how fragmented the buyer landscape has become. CISOs want measurable threat reduction. Product leaders demand orchestration that accelerates, not slows, development. Compliance executives expect governance to be mapped cleanly to regulatory frameworks.

In other words, IAM vendors cannot win with generic pitches anymore. They have to prove value to each buyer persona on their terms. And for buyers, the lesson is just as stark: do not settle for “good enough across the board.” Choose the vendor that solves your highest-stakes problem, because IAM is now a competitive differentiator, not a background system.

Watch the Recording

Did you miss IAM Demo Day 2025? You can still catch the full replay of vendor demos and expert insights:
Watch the IAM Demo Day recording here

The post 5 Takeaways from Our IAM Demo Day appeared first on Liminal.co.


Spherical Cow Consulting

Bot or Not? Why Incentives Matter More Than Identity


“Let’s start with a confession: I love bots. Or at least, I love the idea of them.”

They’re efficient, tireless, and, if designed well, can be downright helpful. (They can also be downright unhelpful, but that’s a topic for a different blog post.) But the incentives around bot traffic are completely out of balance, and that makes things messy.

Not all bots are bad, but they all cost someone something. Until we fix the incentives for identifying and managing automated traffic, we’ll keep having the same tired fight: block all bots and break useful functionality, or leave the gates open and watch our content and services get overrun.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

What do we mean by “bot”?

Let’s clarify the terminology. “Bot” is a term that covers everything from benign automation to outright criminal activity. For the purposes of this post, we’re talking about non-human actors who interact with web services, some with permission and some without.

That includes:

- Helpful bots: search crawlers, accessibility checkers, uptime monitors
- AI agents: tools that retrieve or generate content, often hitting your site through a third-party interface
- Enterprise automation: scripts and services performing integrations across APIs
- Malicious bots: scrapers, credential-stuffers, spam engines, DDoS zombies

Some are essential, some annoying, and some outright hostile. According to the 2025 Imperva Bad Bot Report, automated traffic now makes up 51% of all web traffic, with 37% of that classified as malicious. Cloudflare Radar puts the figure somewhat lower, indicating that bots account for approximately 30% of global web traffic.

Regardless of the type of bot, they all generate load at a rate faster than humans can manage on their own. And that’s where things get tricky.

Identity is only the first step

There’s been a lot of work recently on figuring out how bots can identify themselves in a standardized, trustworthy way. The Web Bot Authentication discussion at the IETF is a good example. More and more content and service providers are demanding the ability to identify and differentiate bot traffic from human traffic. Fewer (but not zero) bot developers are eager to support that goal. A handful want to be good actors, to say clearly, “Hey, I’m not a human, but I’m not here to cause trouble either.”

It probably goes without saying (but I’m going to say it anyway): If you’re building a polite, well-behaved bot, the last thing you want is to be lumped in with attackers. But the other side of the equation is the cost to the site your bot is connecting to. Knowing a bot’s identity doesn’t change the fact that other organizations’ infrastructures are paying the price; they may want to block you to protect themselves.

Even a verified, well-meaning AI agent scraping a site to summarize its content for someone’s personalized feed still hits that site’s CDN, database, and cloud compute budget.

And if they’re not charging for that access—if there’s no business model that connects bot traffic to revenue—then the only thing that providing some form of identity to a bot does is to give that polite visitor a name tag before they raid the pantry.

Why incentives matter

The developers building these bots often say, “We just want access. Don’t block us.” And the site operators reply, “We just want you not to break our infrastructure.”

That’s not a disagreement. That’s a misaligned incentive.

From the bot developer’s perspective:

- Self-identifying should reduce the risk of being blocked (though, for the moment, it doesn’t)
- A clear spec helps them integrate in good faith (if they can figure out where the spec is being developed)
- They’d rather focus on product, not evasion tactics (who wouldn’t rather make real progress than jump through hoops?)

From the service provider’s perspective:

- Every request has a cost
- Authentication doesn’t offset bandwidth
- Good behavior still eats resources

Even well-behaved bots can DDoS you by accident. You can’t fix that with certificates or signatures.

Emerging ideas from the Web Bot Auth conversation

The Web Bot Auth mailing list had some smart commentary recently on what incentives actually look like:

- Reputation and differentiation: Bot operators don’t want impostors ruining their good name. Self-identification helps create reputational trust.
- Better treatment through transparency: Authenticated bots could be treated as “allowed by default” rather than punished by default, which would flip the current anti-abuse script.
- Load management: Many sites are being overloaded, not just attacked. Infrastructure strain is forcing even friendly sites to take defensive measures. This opens the door to load-based incentives: service operators could offer higher rate limits or more reliable access to bots that self-identify and follow documented behavior guidelines. Rather than treating all automation as abusive, a tiered system could encourage cooperative bots to behave responsibly in exchange for stable access.

All of that leads to an observation: identity is useful, but it doesn’t answer the real question. Who decides if the bot is worth the load it brings? That’s a value judgment that falls outside the scope of identity systems. What it does highlight is that services can’t ignore that automated traffic is hitting their infrastructure, and they need tools, not just blind faith, to manage it.

What could a better system look like?

Imagine a world where bots:

- Register and authenticate using open standards
- Earn a reputation score over time
- Get tiered access based on usage patterns and benefit to the service
- Pay, or pass value back, in proportion to their impact

This isn’t a fantasy. We already do this for humans via OAuth scopes, rate limiting, and usage tiers. The challenge is applying it to non-human actors in a way that scales.
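As a rough sketch of how those human-user mechanisms could carry over to bots, here is a minimal tiered token-bucket rate limiter. The tier names and per-minute limits are illustrative assumptions, not part of any standard:

```python
import time
from dataclasses import dataclass, field

# Hypothetical access tiers; the names and per-minute limits are
# illustrative assumptions, not any published standard.
TIER_LIMITS = {"anonymous": 10, "verified": 60, "trusted": 600}

@dataclass
class Bucket:
    capacity: int                 # requests allowed per minute
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill continuously at capacity-per-minute, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.capacity / 60.0)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, Bucket] = {}

def allow_request(client_id: str, tier: str) -> bool:
    """Admit or reject a request based on the caller's identity tier."""
    if client_id not in buckets:
        cap = TIER_LIMITS[tier]
        buckets[client_id] = Bucket(capacity=cap, tokens=cap)  # start full
    return buckets[client_id].allow()
```

An anonymous bot in this sketch exhausts its bucket after ten rapid requests, while a trusted one keeps going; the interesting policy question, as above, is who assigns the tier.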

(As an aside here, there are two people I recommend you follow if you’d like to dig into the gory, gory details of NHI taxonomy and the practical realities of NHI: Erik Wahlström and Pieter Kasselman.)

What you can do today

If you’re a product manager or DevOps lead, this doesn’t have to wait on a new IETF spec. You can start with:

- Separate metrics for bot vs. human traffic: Understand where your resources are going and whether that automation is helping or hurting. This won’t be perfect. If it were, we wouldn’t need to figure out how to differentiate the traffic in the first place. But you can get a gross approximation to start, using things like user-agent parsing, request behavior patterns, or identity-aware proxies. That can help you make smarter decisions about rate limits, caching strategies, or whether to even allow certain types of traffic at all.
- Bot policy transparency: If you expect bots to authenticate, say so. If you want them to throttle, document it, ideally in machine-readable formats. That could include published API docs, robots.txt extensions, or structured metadata in your OpenAPI spec. You could also express bot policies via HTTP headers, usage dashboards, or identity-aware gateways. Don’t hide the rules in your EULA; bots don’t read fine print, but their developers might parse structured access guidance.
- Selective encouragement: Are there bots that drive value? Give them the green light, but with boundaries. Tools like API gateways (e.g., Kong, AWS API Gateway, Apigee) already support rate limiting and tiered access policies that can help enforce those boundaries. Standards such as OAuth 2.0 and mutual TLS (mTLS) can be used to verify identity and scope access. Emerging efforts like the Web Bot Authentication discussions and SPIFFE/SPIRE for workload identity also offer structured ways to manage and audit bot and automation access without resorting to total denial or blanket approval.
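For the traffic-measurement step above, a crude first pass can be done with user-agent heuristics. This sketch is deliberately simplistic (the substrings are illustrative, not exhaustive) and will misclassify traffic, which is exactly why behavioral and identity signals matter too:

```python
import re
from collections import Counter

# Crude heuristic; these substrings are illustrative, not exhaustive, and a
# real pipeline would also use behavioral and identity signals.
BOT_PATTERN = re.compile(r"bot|crawler|spider|curl|python-requests|headless", re.I)

def classify(user_agent: str) -> str:
    if not user_agent:
        return "bot"  # a missing user-agent is itself a strong automation signal
    return "bot" if BOT_PATTERN.search(user_agent) else "human"

def traffic_summary(user_agents: list[str]) -> Counter:
    """Rough bot/human split over a batch of request user-agent strings."""
    return Counter(classify(ua) for ua in user_agents)

sample = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/126.0",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "python-requests/2.32",
    "",
]
print(traffic_summary(sample))  # Counter({'bot': 3, 'human': 1})
```

Even a gross split like this is enough to start tracking what share of your compute and bandwidth budget is going to automation.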

And if you’re building a bot:

- Respect the load you place on services.
- Identify yourself if you want a long-term relationship.
- Assume you’re not entitled to the same treatment as a human user unless you bring similar value.

Final thought

This isn’t about punishing bots or yelling at them to get off your lawn. Automation is here to stay. But if we want to coexist, we have to stop pretending that identification alone is the solution.

Identity without incentives is just surveillance.

Incentives without constraints are just spam.

Let’s aim for something better than either.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript: Bots, Incentives, and Identity

Hi, welcome back to A Digital Identity Digest. I’m Heather Flanagan, and today we’re talking about bots.

[00:00:36] Speaker A: Not the horror stories, not the buzzwords, but the real, practical tension that comes up when your system starts to feel the weight of automated traffic.

If you’re a product manager, DevOps lead, or identity architect managing automated requests—or even a bot developer or AI agent creator—this episode has insights for you.

Because here’s the challenge:

Not all bots are bad. But all bots cost someone something.

What We Mean When We Say Bot

Bots cover a wide spectrum of activity. They can mean:

- Helpful automation: search engine crawlers, uptime checkers, accessibility tools
- AI agents: fetching, generating, or summarizing content
- Enterprise scripts: internal automation and integrations
- Malicious actors: scrapers, spam bots, credential stuffers

Some are essential, some are annoying, and some are harmful. But regardless of intent, they all place a burden on infrastructure—and that cost usually lands on the target system.

Identity Is Only the First Step

Identity is one of my favorite topics, but identity alone doesn’t solve the bot challenge.

There is growing interest in creating standardized, trustworthy ways for bots to identify themselves. For example:

- Efforts like the Internet Engineering Task Force (IETF) are exploring bot authentication.
- Bots could say, “I’m not human, but I’m not here to cause trouble either.”
- Some providers welcome this; they want visibility into what’s hitting their sites.

However, many bot developers aren’t eager to adopt these practices because:

- Self-identification doesn’t currently bring them benefits.
- Even when they do identify, site operators still pay the cost in bandwidth, compute, and storage.

So the tension remains: without incentives for both sides, we’re stuck in the cycle of block everything or get overrun.

Why Incentives Matter

Bot developers want access without being mistaken for abusers.
Site operators want reliable service for their human users.

This isn’t pure conflict—it’s misalignment.

Developers want recognition as ecosystem contributors. Operators need to judge whether a bot’s value is worth its load.

Identity management can’t make that judgment. It requires a value framework.

Emerging Ideas for Bot Incentives

Public discussions around bot authentication highlight some promising concepts:

- Legitimate bots want protection from imposters who could damage their reputation.
- Self-identification and adherence to published guidelines could be rewarded.
- Smaller site operators are overwhelmed and often default to blocking, even when bots aren’t malicious.

This points toward load-aware incentives. Imagine:

- Anonymous bots: limited access
- Known bots: more access
- Trusted, valuable bots: highest access

In other words, an API-style approach with access tiers.

Designing a Smarter System

We already use systems like OAuth scopes, rate limits, and usage quotas for humans. Why not for bots?

A smarter system could include:

- Open standards for bot registration and authentication
- Tiered access based on reputation, utility, and resource use
- Machine-readable policies that clearly define what’s allowed
- Monetization or resource-sharing models for high-impact bots

Companies like Cloudflare are already experimenting in this space, and it’s worth tracking their efforts.

What You Can Do Today

You don’t have to wait for global standards. There are steps you can implement right now:

- Measure traffic: Separate bot and human activity with user agent parsing, behavioral analysis, or identity-aware proxies.
- Be transparent: Publish expectations in API specifications, robots.txt, or OpenAPI metadata. Avoid hiding them in user agreements.
- Encourage good bots: Support bots that drive value (like discoverability or user support) through API gateways, OAuth scopes, or mTLS.

By doing this, you’re not punishing automation—you’re designing for it, while keeping your infrastructure sustainable.

Closing Thoughts

At the end of the day:

Identity without incentives is just surveillance. Incentives without constraints are just spam.

The goal is something more useful—balanced systems where automation and infrastructure coexist productively.

Thank you for listening. You can find the full blog post with links and further reading at sphericalcowconsulting.com. Please share this with colleagues, encourage them to subscribe, and stay tuned for next week’s episode.

[00:08:59] Speaker B: And that’s it for this week’s Digital Identity Digest.

[00:09:03] Speaker A: If it made things a little clearer—or at least more interesting—share it with a friend or colleague.

[00:09:04] Speaker B: Connect with me on LinkedIn @hlflanagan. If you enjoy the show, subscribe and leave a rating wherever you listen.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Bot or Not? Why Incentives Matter More Than Identity appeared first on Spherical Cow Consulting.


iComply Investor Services Inc.

Fintech and AML: How to Stay Fast, Compliant, and Scalable Across Markets

Fast-growing fintechs face rising AML obligations. This article shows how to build scalable, API-first compliance infrastructure with iComply across key regulatory jurisdictions.

Fintechs are reshaping finance—but AML expectations are intensifying. This article covers KYB, KYC, KYT, and AML requirements across the U.S., UK, EU, Australia, and Singapore, and shows how iComply helps automate compliance without sacrificing speed, security, or user experience.

Speed, scale, and seamless UX have defined the fintech revolution. But in 2024 and beyond, compliance is just as critical. Regulators worldwide are tightening scrutiny of digital finance—from embedded lending to neobanking, payments, crypto apps, and B2B platforms.

For fintechs serving global users, managing AML obligations across jurisdictions can become a scaling bottleneck—unless you have the right tools.

Changing AML Expectations for Fintechs by Jurisdiction

United States
- Regulators: FinCEN, CFPB, OCC, state authorities
- Requirements: MSB licensing, BOI reporting, CDD rule compliance, SAR filing, and sanctions/PEP screening

United Kingdom
- Regulator: FCA
- Requirements: AML registration, customer due diligence, transaction monitoring, and data protection (UK GDPR)

European Union
- Regulators: National authorities + EU-wide AMLA
- Requirements: 6AMLD, MiCA (for tokenization), data privacy (GDPR), UBO transparency, and secure onboarding

Australia
- Regulator: AUSTRAC
- Requirements: AML/CTF program, customer ID checks, PEP/sanctions screening, SMR reporting, and risk-based onboarding

Singapore
- Regulator: MAS
- Requirements: AML risk assessments, transaction monitoring, UBO identification, and Travel Rule compliance for crypto

Compliance Challenges for Fintechs

1. Velocity vs. Verification
Users expect real-time onboarding—regulators require thorough checks.

2. Multi-jurisdictional Complexity
Serving global clients means navigating overlapping, sometimes conflicting compliance rules.

3. Developer Disruption
Fragmented vendor stacks burden product teams and delay launches.

4. Trust and Brand Risk
Poor compliance not only invites fines but erodes customer confidence.

iComply: AML Infrastructure for Fast-Moving Fintechs

iComply offers a modular, developer-friendly platform that gives fintechs the power to build, scale, and prove compliance without slowing down.
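One common pattern behind this kind of API-first platform is delivering screening results to the fintech via signed webhooks. The sketch below shows the generic HMAC signing-and-verification pattern; the secret, event names, and payload fields are hypothetical illustrations, not iComply’s actual API:

```python
import hashlib
import hmac
import json

# Generic signed-webhook pattern for API-first compliance platforms.
# The secret, event names, and payload fields below are hypothetical
# illustrations, not iComply's actual API.
WEBHOOK_SECRET = b"shared-secret-from-dashboard"

def sign_payload(payload: dict) -> tuple[bytes, str]:
    """Serialize a screening-result payload and compute its HMAC-SHA256 signature."""
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    signature = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return body, signature

def verify_webhook(body: bytes, signature: str) -> bool:
    """Constant-time check that a received body matches its claimed signature."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body, sig = sign_payload({"event": "kyc.completed", "case_id": "abc123", "status": "approved"})
assert verify_webhook(body, sig)             # authentic delivery verifies
assert not verify_webhook(body + b" ", sig)  # tampered body is rejected
```

Verifying signatures on every inbound event is what lets a compliance pipeline trust automated status updates without a human in the loop.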

1. KYC + KYB with Edge Security
- On-device ID and biometric checks for individuals
- KYB and UBO verification with registry and document data
- Reduce friction while protecting user privacy (PIPEDA, GDPR, etc.)

2. AML + KYT for Risk Monitoring
- Real-time transaction scoring, behaviour detection, and alerting
- Sanctions, PEP, and adverse media screening
- Automated SAR/STR triggers with full case traceability

3. Localization and Data Governance
- Support for 140+ languages and 14,000+ global ID types
- Localized workflows and data residency for U.S., UK, EU, AUS, and SG

4. API-First Integration
- REST APIs and developer docs
- SDKs and white-label options for fintech UX teams
- Webhooks and cloud/on-prem deployment options

5. Audit-Ready Case Management
- Centralized review, escalation, and reporting interface
- Export logs for regulators, banks, or investors
- Satisfy compliance diligence during fundraising or partnerships

Case Insight: Embedded Finance Startup

A U.S.-based embedded payments app integrated iComply’s KYC and AML stack. In 90 days:

- Onboarding speed improved by 40%
- KYC verification success rate increased to 93%
- Passed SOC2 and FinCEN diligence with full audit traceability

Final Take

Compliance doesn’t need to compete with UX or product speed. Fintechs that embed smart AML tools can:

- Scale faster across regulated markets
- Build trust with users and partners
- Avoid fines, audits, and reputational harm

Schedule a call with iComply to learn how we help fintechs move fast and stay compliant – without the trade-offs.


Herond Browser

A Quick Guide to Find Your Solana Address Instantly

Need to locate your Solana address fast? Our quick guide simplifies the process, helping you access your SOL wallet address in seconds. The post A Quick Guide to Find Your Solana Address Instantly appeared first on Herond Blog.

Need to locate your Solana address fast? Our quick guide simplifies the process, helping you access your SOL wallet address in seconds for seamless transactions in DeFi or meme coin trading. Whether you’re using Phantom or Solflare, we’ve got you covered with easy steps. Navigate securely and dive into Solana’s vibrant ecosystem with confidence!

What Is a Solana Address?

A Solana address is a unique base58-encoded string, typically 32 to 44 characters long, that identifies your wallet on the Solana blockchain. Used for sending, receiving, or storing SOL and tokens like $BONK, it ensures secure transactions in DeFi and meme coin trading. Find yours easily in wallets like Phantom or Solflare.
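For the technically curious, an address’s format can be sanity-checked offline: a Solana address is the base58 encoding of a 32-byte public key. A minimal, dependency-free check (format only; it cannot confirm the account exists on-chain) might look like this:

```python
# A Solana address is the base58 encoding of a 32-byte public key. This
# checks format only; it cannot confirm the account exists on-chain.
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def looks_like_solana_address(address: str) -> bool:
    value = 0
    for ch in address:
        idx = B58_ALPHABET.find(ch)
        if idx < 0:
            return False  # character outside the base58 alphabet
        value = value * 58 + idx
    # Leading '1' characters encode leading zero bytes.
    leading_zeros = len(address) - len(address.lstrip("1"))
    body = value.to_bytes((value.bit_length() + 7) // 8, "big")
    return leading_zeros + len(body) == 32

# Wrapped SOL mint, a well-known on-chain constant, passes the check:
assert looks_like_solana_address("So11111111111111111111111111111111111111112")
# An Ethereum-style address does not:
assert not looks_like_solana_address("0x52908400098527886E0F7030069857D2E4169EE7")
```

A check like this catches pasted Ethereum addresses or typo’d characters before funds are sent, which complements the double-checking advice below.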

Step-by-Step Guide to Find Your Solana Address Instantly

Step 1: Download and Set Up Herond Wallet

Install Herond Wallet on PC, tablet, or mobile with a quick social login. Enjoy Herond’s user-friendly setup for fast access to Solana.

Step 2: Select Solana Network

Navigate to wallet settings and select Solana from multi-chain options. Easily switch to Solana’s high-speed blockchain for DeFi or meme coins.

Step 3: Locate Your Solana Address

Go to the “Receive” section in Herond Wallet to copy your Solana address. Capture a screenshot or GIF for easy reference and sharing.

Step 4: Verify and Use Safely

Double-check your Solana address to ensure accuracy before transactions. Use Herond Shield’s anti-phishing features to protect against scams.

Common Mistakes to Avoid

Protect your crypto assets by never sharing private keys, copying incorrect Solana addresses, or using unverified platforms. These errors can lead to scams or lost funds, especially in the fast-paced 2025 crypto market. Stick to trusted wallets like Herond or Phantom and verify platforms before transacting. Stay secure with Herond Browser’s anti-phishing protection (herond.org) to safeguard your Solana trading experience.

Double-Check Addresses and Use Herond Browser

Always double-check your Solana address before sending or receiving funds to avoid costly errors. A single wrong character can result in lost assets. Use Herond Browser’s tracker protection (herond.org) to browse safely and block malicious links while managing your wallet. This ensures secure DeFi or meme coin transactions in 2025, keeping your Solana investments safe and sound.

Best Practices for Safe Solana Transactions

Secure your Solana transactions in 2025 by never sharing private keys and always verifying wallet addresses before sending or receiving funds. A single mistake can lead to lost assets in the fast-moving crypto market. Use trusted wallets like Herond or Phantom and double-check recipient addresses to avoid scams. Trade confidently with Herond Browser’s anti-phishing protection to ensure safe Solana DeFi and meme coin transactions.

Use Herond Shield for Scam and Malware Protection

Herond Shield, integrated with Herond Browser, offers robust scam and malware detection for secure Solana transactions in 2025. It blocks phishing links and malicious sites, protecting your wallet from threats while trading tokens like $BONK. Stay safe in the volatile crypto space by downloading Herond Browser for a seamless, scam-free experience in Solana’s vibrant ecosystem.

Conclusion

Finding your Solana address is simple with our quick guide, empowering you to trade SOL or meme coins like $BONK with ease. Use trusted wallets like Herond or Phantom, verify addresses, and stay secure. Dive into Solana’s vibrant 2025 ecosystem confidently and unlock seamless DeFi and crypto opportunities today!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

- Herond Shield: A robust adblocker and privacy protection suite.
- Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

- Technical support topic on https://community.herond.org
- On Telegram https://t.me/herond_browser
- DM our official X @HerondBrowser

The post A Quick Guide to Find Your Solana Address Instantly appeared first on Herond Blog.



Dogen Crypto Price Prediction: What’s Next for DOGEN?

Whether you're a meme coin enthusiast or a savvy investor, discover what's next for DOGEN Crypto and seize opportunities in this dynamic market! The post Dogen Crypto Price Prediction: What’s Next for DOGEN? appeared first on Herond Blog.

As the 2025 crypto bull run heats up, DOGEN, a Solana-based meme coin, is capturing attention with its viral hype and explosive growth potential. Our price prediction guide analyzes DOGEN’s Crypto market trends, community momentum, and key drivers to forecast its future. Whether you’re a meme coin enthusiast or a savvy investor, discover what’s next for DOGEN Crypto and seize opportunities in this dynamic market!

What Is DOGEN?

DOGEN: Solana-Based Meme Coin with Swole Doge Branding

DOGEN, a Solana-based meme coin launched in late 2024, leverages the bold “Swole Doge” branding, featuring a muscular Shiba Inu symbolizing strength and ambition. Built on Solana’s high-speed, low-cost blockchain, DOGEN blends viral meme culture with DeFi potential, appealing to risk-tolerant investors. Its unique identity and fast transactions make it a standout in the 2025 meme coin market, poised for explosive growth.

Presale Success and Ambitious Roadmap for Dogen Crypto

DOGEN’s presale raised $3.19M, crossing its midpoint with a 300% price surge from $0.0003 to $0.0011, targeting $5.5M by Stage 13. Its vibrant community of over 15,000 holders drives momentum, while the roadmap includes DogenTap, a Telegram-integrated app, and staking for passive income. Audited by SpyWolf and SmartState, DOGEN’s transparent strategy positions it for long-term success in 2025’s crypto boom.

Comparison to DOGE and SHIB: Solana’s Edge

Unlike Dogecoin ($66.39B market cap) and Shiba Inu ($18.87B), DOGEN Crypto leverages Solana’s high-performance blockchain for faster, cheaper transactions than Ethereum-based SHIB or DOGE’s slower network. While DOGE relies on payment integrations and SHIB expands into DeFi, DOGEN’s “Swole Doge” appeal and Solana’s scalability attract bold investors seeking 2025 gains, potentially outpacing its competitors with innovative features like DogenTap and staking.

DOGEN Crypto Price Performance and Current Market Status

DOGEN’s Current Price and Trends in August 2025

As of August 2025, DOGEN Crypto trades at ~$0.0001376, reflecting a modest 0.19% increase over the past 24 hours. Despite its low price, the Solana-based meme coin shows signs of stabilization after a volatile year. With a market cap of ~$1.37M and $43,045 in daily trading volume, DOGEN remains a speculative favorite for 2025 investors seeking high-risk, high-reward opportunities in the meme coin market.

2024 Peak and Ongoing Volatility

DOGEN Crypto reached an all-time high of $0.005077 in February 2024 but has since declined over 90%, trading at ~$0.0001376 in August 2025. This sharp drop underscores its extreme volatility, typical of meme coins. Despite the decline, DOGEN’s low entry point attracts investors anticipating a rebound, especially in Solana’s thriving ecosystem. However, price swings remain a key challenge for holders.
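The decline quoted above follows directly from the two price points:

```python
# Drawdown from the February 2024 all-time high to the August 2025 price,
# using the figures quoted above.
ath = 0.005077
current = 0.0001376

drawdown_pct = (1 - current / ath) * 100
print(f"{drawdown_pct:.1f}% below the all-time high")  # 97.3%, consistent with "over 90%"
```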

Bullish Sentiment and Presale Momentum

DOGEN enjoys bullish sentiment in August 2025, with the Fear and Greed Index signaling optimism among investors. Its presale raised $3.19M, with a 300% price surge from $0.0003 to $0.0011, fueled by a 15,000-strong community and hype around DogenTap and staking. This momentum, paired with Solana’s 2025 bull run, positions DOGEN for potential short-term gains, drawing risk-tolerant traders.

Risks: Volatility, Limited Utility, and Competition

DOGEN’s high volatility, with potential 20-50% daily swings, poses significant risks for investors. Its limited utility as a meme coin, lacking the robust DeFi features of competitors like SHIB, and fierce competition from tokens like WIF and TRUMP could hinder growth. Investors should exercise caution, conduct thorough research, and diversify to mitigate risks in the unpredictable 2025 crypto market.

DOGEN Crypto Price Predictions: Short-Term and Long-Term Outlook

Short-Term (2025-2026)

Bullish Scenario: DOGEN’s Potential Surge to $0.004-$0.01

In a bullish 2025, DOGEN could soar to $0.004-$0.01, driven by major exchange listings like Binance.US and viral social media buzz on platforms like X. With Solana’s ecosystem thriving and a 15,000-strong community fueling hype, DOGEN’s “Swole Doge” branding could spark a rally. Investors eyeing 2025’s altcoin season see high upside, but caution is needed due to meme coin volatility.

Bearish Scenario: Stagnation at $0.0001-$0.0005

Without new utility or sustained hype, DOGEN risks stagnating at $0.0001-$0.0005 in 2025. Its current $0.0001376 price reflects a 90%+ drop from its 2024 peak, and limited DeFi features compared to SHIB could hinder growth. If community momentum fades or competitors like WIF dominate, DOGEN may struggle. Diversification is key to mitigate losses in this bearish outlook.

Key Factors: DogenTap, Staking, and Altcoin Season

DOGEN’s price trajectory in 2025 hinges on DogenTap, a Telegram-integrated app boosting engagement, and staking with 30-50% APY for passive income. The anticipated altcoin season, fueled by Bitcoin’s ~$119K price and Solana’s DeFi growth, could amplify DOGEN’s gains. These catalysts, paired with its $3.19M presale success, position DOGEN as a high-risk, high-reward meme coin for savvy investors.

Long-Term (2030+)

Optimistic Price Scenarios for DOGEN Crypto in 2025

In an optimistic 2025, DOGEN could surge to $0.05-$0.15 in a bullish scenario, driven by major exchange listings, DogenTap’s viral adoption, and Solana’s altcoin season momentum. A conservative estimate predicts $0.000465-$0.00175, fueled by steady community growth and staking rewards. With a current price of ~$0.0001376, DOGEN’s potential makes it a top meme coin pick for risk-tolerant investors in the dynamic 2025 crypto market.

Risks: Obscurity Without Utility

DOGEN faces risks of obscurity in 2025 if it fails to develop meaningful utility beyond its “Swole Doge” branding. With limited DeFi features compared to competitors like SHIB or WIF, and a 90%+ drop from its 2024 peak, DOGEN could stagnate if community hype fades. Investors should diversify and research thoroughly to mitigate losses in the volatile meme coin landscape of 2025.

Factors Influencing DOGEN’s Future Price

Positive Drivers for DOGEN’s Growth in 2025

DOGEN’s potential in 2025 is fueled by its vibrant 15,000-strong community, Solana’s high-speed scalability, and anticipated CEX listings like Binance.US, which could boost visibility and price. DogenTap, a Telegram-integrated app, enhances user engagement, while staking offers 30-50% APY, attracting investors. With Solana’s DeFi ecosystem thriving, DOGEN’s “Swole Doge” branding positions it as a top meme coin for explosive gains in the 2025 bull run.

Challenges Facing DOGEN Crypto in 2025

DOGEN’s path to success in 2025 faces hurdles like extreme volatility, with potential 20-50% daily swings, and regulatory risks impacting meme coins. Its limited utility compared to DeFi-heavy tokens like SHIB, coupled with competition from WIF and TRUMP, could lead to obscurity if hype fades. Investors must navigate these challenges by diversifying and researching thoroughly to mitigate risks in the volatile 2025 crypto market.

Conclusion

DOGEN’s bold “Swole Doge” branding, Solana’s scalability, and features like DogenTap position it for potential gains in the 2025 crypto bull run. With a $3.19M presale and optimistic forecasts of $0.004-$0.15, it’s a high-risk, high-reward pick. However, volatility and limited utility pose challenges. Diversify, monitor X for hype, and research thoroughly to seize DOGEN’s potential in the dynamic meme coin market!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

Technical support topic on https://community.herond.org
On Telegram https://t.me/herond_browser
DM our official X @HerondBrowser

The post Dogen Crypto Price Prediction: What’s Next for DOGEN? appeared first on Herond Blog.



Easy Ways to Buy USDT with PayPal in 2025


Want to buy USDT with PayPal in 2025? Our guide simplifies the process, offering quick, secure methods to purchase this leading stablecoin. Ideal for DeFi investments or trading meme coins, USDT’s stability makes it a top crypto choice. Follow our easy steps to navigate trusted platforms and avoid scams, ensuring a seamless buying experience in the dynamic 2025 crypto market!

Why Buy USDT with PayPal in 2025?

USDT’s Role as a Stablecoin for Trading and DeFi

USDT, a leading stablecoin pegged to the US dollar, is a cornerstone for trading and DeFi in 2025. With a ~$139B market cap, it offers stability in volatile crypto markets, making it ideal for trading pairs or DeFi protocols on platforms like Solana. Its reliability ensures seamless transactions, perfect for investors seeking low-risk crypto exposure in the booming 2025 market.

PayPal’s Accessibility and Security for Fiat-to-Crypto

PayPal’s widespread use and robust security make it a top choice for buying USDT in 2025. Its user-friendly interface allows easy fiat-to-crypto conversions, appealing to beginners and seasoned traders alike. With built-in fraud protection, PayPal ensures secure transactions, bridging traditional finance and crypto markets effortlessly for a smooth USDT purchase experience.

2025 Trends: DeFi Growth and PayPal’s Crypto Policies

The 2025 crypto bull run, driven by DeFi’s expansion and Bitcoin’s ~$119K price, boosts USDT’s demand for stable transactions. PayPal’s pro-crypto policies, including expanded coin support and wallet integrations, streamline USDT purchases. These trends make buying USDT with PayPal a smart move to capitalize on DeFi and market growth in 2025.

Easy Ways to Buy USDT with PayPal in 2025

Method 1: Buy USDT with PayPal using Herond Wallet

Download Herond Wallet, link PayPal, and buy USDT directly.
Non-custodial, multi-chain platform with a secure, user-friendly interface.
Start at herond.org for safe 2025 USDT purchases.

Method 2: Purchase USDT via Third-Party Exchanges

Sign up on an exchange like Coinbase or Paxful (if PayPal is supported).
Link PayPal and buy USDT with ease; Coinbase charges a 3.99% fee.
Reliable platforms for secure USDT trading in 2025.
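To see what a percentage purchase fee means in practice, here is a small sketch using the 3.99% Coinbase figure cited above (actual fees vary by region and payment method, so treat the rate as illustrative):

```python
# Sketch: USDT received after a percentage purchase fee, assuming a 1:1
# USD peg and using the 3.99% fee cited above as an illustrative rate.

def usdt_received(fiat_amount: float, fee_rate: float) -> float:
    """Stablecoin received after the fee is deducted from the fiat amount."""
    return fiat_amount * (1 - fee_rate)

# A $500 purchase at a 3.99% fee.
print(f"$500.00 buys about {usdt_received(500.0, 0.0399):.2f} USDT")
```

Comparing this number across platforms (including any spread, not just the headline fee) is the simplest way to pick the cheapest route.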

Method 3: Buy USDT on P2P Platforms

Use LocalBitcoins to find trusted sellers for USDT via PayPal.
Negotiate terms, use escrow, and verify seller ratings to avoid scams.
Flexible option for USDT purchases in the 2025 crypto market.

Step-by-Step Guide to Buy USDT with PayPal

Step 1: Set Up Your Wallet

Install Herond Wallet on PC, tablet, or mobile with social login. Quick, user-friendly setup for secure USDT purchases in 2025.

Step 2: Access the On & Off Ramp

Select PayPal in the wallet’s fiat-to-crypto section. Herond’s on-and-off ramp ensures a secure, streamlined process.

Step 3: Purchase USDT

Enter amount, confirm via PayPal, and receive USDT in your wallet. Take a screenshot or GIF for transaction reference.

Step 4: Trade or Store USDT Securely

Trade USDT on DeFi platforms like Raydium or store it safely. USDT’s stability suits 2025’s volatile market.

Tips for Safe USDT Purchases with PayPal

Verify Platforms with Herond Shield for Scam Detection

Before buying USDT with PayPal in 2025, verify platforms using Herond Shield’s advanced scam detection. Its ASAS (Advanced Security Alert System) flags malicious sites, protecting you from phishing and fraud. Ensure you’re using trusted exchanges like Coinbase or Herond Wallet to avoid risks. Browse securely with Herond Browser (herond.org) to safely purchase USDT in the booming crypto market.

Check PayPal’s Crypto Limits and Regional Rules

PayPal’s crypto policies in 2025 vary by region, with limits on purchase amounts and supported coins like USDT. For example, US users face a $100,000 annual crypto purchase cap. Check PayPal’s guidelines at paypal.com to ensure compliance and avoid transaction issues. Stay informed to seamlessly buy USDT and navigate the crypto space with confidence.

Store USDT in Herond’s Non-Custodial Wallet

After buying USDT, store it securely in Herond’s non-custodial wallet, giving you full control over your assets. Supporting multi-chain networks like Solana, it ensures safe storage for DeFi or trading. With social login and robust security, Herond Wallet simplifies USDT management in 2025. Download at herond.org for a scam-free, secure crypto experience.

Conclusion

Buying USDT with PayPal in 2025 is simple with trusted methods like Herond Wallet, exchanges like Coinbase, or P2P platforms. Verify platforms, check PayPal’s regional limits, and store USDT safely in a non-custodial wallet. With USDT’s stability fueling DeFi and trading, seize this opportunity to invest wisely.


The post Easy Ways to Buy USDT with PayPal in 2025 appeared first on Herond Blog.



Top Relaxing Music for Sleep: Fall Asleep Fast & Deeply


In our fast-paced world, finding a moment of calm can feel impossible, especially when it’s time to sleep. If you’ve ever spent hours tossing and turning, you know the power of a quiet mind. The right kind of music can be a powerful tool to help you unwind, ease stress, and prepare your body for a night of deep, restorative sleep. This Top Relaxing Music for Sleep is curated with the most soothing sounds and gentle melodies, specifically designed to guide you into a state of total relaxation. Get ready to leave your worries behind and discover the secret to falling asleep fast and sleeping deeply.

Why Relaxing Music Helps You Sleep

The Science Behind Relaxing Music

Slows Heart Rate & Lowers Cortisol: The gentle, predictable rhythm of relaxing music signals your nervous system to calm down. This helps to slow your heart rate and reduce the stress hormone cortisol, preparing your body for rest.

Key Benefits for a Restful Night

Promotes Relaxation & Reduces Anxiety: Soothing sounds are a powerful tool for releasing daily tension and quieting the racing thoughts that often keep you awake.
Improves Sleep Quality: By creating a peaceful state of mind, calming music can lead to deeper, more restorative sleep.

Best Genres for Unwinding

Ambient & Classical Music: Look for pieces with a slow, consistent tempo and minimal dynamic changes.
Natural Sounds & Lo-fi: Sounds like rain, waves, or gentle lo-fi beats can create a peaceful atmosphere, helping you drift off peacefully.

Top Relaxing Music for Sleep in 2025

Top Relaxing Music for Sleep – Ambient Sleep Sounds

This playlist is designed to create a calm sonic landscape that helps your mind slow down.

Music type: Soothing drones and white noise (e.g., “Deep Sleep Ambient”).
Why it works: The slow, consistent tempos (60-80 BPM) reduce mental chatter and prepare your brain for rest.

Top Relaxing Music for Sleep – Classical Music for Sleep

For those who find comfort in classic melodies, this playlist offers gentle, timeless pieces to help you drift off.

Music type: Gentle piano and orchestral compositions (e.g., “Clair de Lune”).
Why it works: The melodic simplicity and familiar patterns create a sense of peace and security.

Top Relaxing Music for Sleep – Lo-Fi Sleep Vibes

This collection of lo-fi beats provides a modern, low-key rhythm that’s perfect for unwinding.

Music type: Chill lo-fi beats with a gentle rhythm (e.g., “Lo-Fi Sleep Chill”).
Why it works: The repetitive, calming rhythms help quiet the mind and block out distracting external noises.

Top Relaxing Music for Sleep – Nature Sounds for Sleep

Connect with the calming power of the natural world through this playlist.

Music type: Sounds like rain and ocean waves (e.g., “Rainy Night Serenity”).
Why it works: These natural sounds have a built-in calming effect that can lower stress and create a tranquil atmosphere for sleep.

Tips for Better Sleep with Top Relaxing Music for Sleep

Set the Scene

Creating the right environment is just as important as the music itself.

Use quality headphones or a speaker to ensure the sound is clear and consistent.
Set a sleep timer on your device so the music fades out after you’ve fallen asleep, preventing any disturbance later in the night.

Choose the Right Music

The type of music you listen to is crucial for a successful night’s rest.

Opt for playlists with a tempo of 60-80 BPM (beats per minute). This rhythm mimics a resting heart rate and helps your body relax.
Look for genres like ambient, classical, or natural sounds for the best results.

Protect Your Peace of Mind

In an online world full of risks, your peace of mind is paramount.

Use Herond Shield to protect yourself from scams and phishing attempts while streaming music online. This built-in security feature blocks malicious pop-ups and ads that can disrupt your relaxation and compromise your safety.

Integrate with Your Bedtime Routine

Music works best when it’s part of a larger, calming ritual.

Pair your music with a bedtime routine, such as light reading or gentle stretching.
Dim the lights and avoid screens to signal to your brain that it’s time to prepare for sleep.

Conclusion

So there you have it: a collection of playlists and tips to help you transform your nightly routine. The right music is more than just background noise; it’s a powerful, science-backed tool for calming your mind and preparing your body for rest. Whether you prefer the gentle ebb of classical music or the subtle hum of ambient drones, the key is to find the soundscape that works for you. With these curated selections, you can finally leave the day’s stress behind and fall into the deep, restful sleep you deserve.


The post Top Relaxing Music for Sleep: Fall Asleep Fast & Deeply appeared first on Herond Blog.



auth0

A Step-by-Step Guide to Securing Amazon Bedrock Agents with Auth0

Learn how to enhance Amazon Bedrock security with Auth0 for GenAI. This guide provides a complete walkthrough for implementing robust AI agent authentication and authorization, enabling agents to securely act on a user's behalf without static credentials.

FastID

Fastly DDoS Protection wins SiliconANGLE TechForward Cloud Security Award

Fastly DDoS Protection wins SiliconANGLE TechForward Cloud Security Award after rigorous analysis by 32 industry peers.

Monday, 25. August 2025

Ontology

THE ONTOLOGY NETWORK

The Blockchain That Could Reshape Global Finance

The global financial system is at a turning point. For decades, centralized banking, legacy infrastructure, and opaque financial intermediaries have defined how money flows across borders. But the rise of blockchain technology is rewriting the rules, and among the networks leading this transformation is Ontology (ONT).

Ontology is more than just another blockchain project: it is an ecosystem designed to bring trust, identity, and data solutions to real-world financial systems. Its potential to reshape the way we interact with money, identity, and institutions cannot be overstated.

1. Trust Without Middlemen

Traditional finance relies heavily on intermediaries like banks, brokers, and clearinghouses. These entities provide trust, but at a cost: slow transaction speeds, high fees, and limited access for billions of people worldwide.

Ontology introduces decentralized identity (DID) and data attestation, enabling users to prove their identity and ownership without relying on centralized authorities. In practice, this means that financial transactions can be conducted directly between individuals or institutions, reducing friction and costs while maintaining security.

2. Financial Inclusion for the Underserved

According to the World Bank, nearly 1.4 billion adults remain unbanked. Traditional banking systems often exclude people due to lack of documentation, geographical barriers, or high account maintenance fees.

Ontology’s DID system empowers individuals to create and control their digital identity, opening the doors to financial services such as lending, insurance, and cross-border payments. By cutting down on paperwork and enabling trustless verification, Ontology can help bring billions of people into the global financial ecosystem.

3. Secure Data Sharing in Finance

Data is the new oil, and in finance, it determines everything from creditworthiness to fraud detection. Unfortunately, the way data is currently managed is fragmented and prone to abuse. Customers often have little control over their personal financial information.

Ontology enables secure, decentralized data sharing where users maintain ownership of their data and grant access selectively. This ensures both compliance with privacy laws (like GDPR) and better efficiency for financial institutions. Imagine a world where you can share your verified credit history instantly with a lender without exposing sensitive details unnecessarily.
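To make selective data sharing concrete, here is a toy sketch built on salted hash commitments. It illustrates the general idea only; it is not Ontology’s actual DID or attestation protocol, and every name and value below is invented:

```python
# Toy selective-disclosure sketch (illustrative only; NOT Ontology's
# actual protocol). The issuer publishes a salted hash commitment per
# credential field; the holder reveals only chosen fields plus their
# salts, and the verifier recomputes the commitments to check them.

import hashlib

def commit(field: str, value: str, salt: str) -> str:
    """Salted SHA-256 commitment to a single credential field."""
    return hashlib.sha256(f"{field}:{value}:{salt}".encode()).hexdigest()

credential = {"name": "Alice", "credit_score": "720", "income": "85000"}
salts = {k: f"demo-salt-{k}" for k in credential}  # random in a real system

# Commitments for every field (these would be signed by the issuer).
commitments = {k: commit(k, v, salts[k]) for k, v in credential.items()}

# Holder discloses only the credit score, together with its salt.
disclosure = {"credit_score": ("720", salts["credit_score"])}

# Verifier checks the disclosed field without seeing name or income.
value, salt = disclosure["credit_score"]
assert commit("credit_score", value, salt) == commitments["credit_score"]
print("credit_score verified; name and income stay private")
```

Real systems add issuer signatures over the commitments and use cryptographically random salts; the sketch only shows why a verifier can check one field without ever seeing the rest.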

4. Cross Border Transactions and DeFi Growth

Global remittances are projected to surpass $700 billion annually, yet fees remain painfully high, averaging 6-8% per transaction. Ontology’s blockchain can cut these costs drastically while ensuring instant settlement.
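As a back-of-the-envelope comparison, the sketch below contrasts the 6-8% average fee cited above with a hypothetical 0.1% on-chain transfer cost (the 0.1% figure is an assumption for illustration, not a measured Ontology fee):

```python
# Back-of-the-envelope remittance cost comparison. The 6-8% range is
# the average cited above; the 0.1% on-chain rate is a hypothetical
# illustration, not a measured network fee.

def fee_paid(amount: float, rate: float) -> float:
    """Fee taken from a transfer of `amount` at a percentage `rate`."""
    return amount * rate

amount = 300.0  # a typical single remittance
low, high = fee_paid(amount, 0.06), fee_paid(amount, 0.08)
onchain = fee_paid(amount, 0.001)
print(f"Traditional: ${low:.2f}-${high:.2f}; on-chain (assumed 0.1%): ${onchain:.2f}")
```

Even with generous assumptions about network costs, the gap of roughly an order of magnitude is what drives the remittance argument for blockchain rails.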

Moreover, Ontology supports DeFi (Decentralized Finance) applications that provide alternatives to traditional banking products: loans, yield farming, staking, and more. This opens up new opportunities for individuals and businesses to grow wealth without being tied to centralized institutions.

5. A More Transparent Financial System

Trust in financial institutions has been eroded by crises, scandals, and mismanagement. Ontology’s transparent, tamper-proof ledger ensures accountability, auditability, and fairness. Regulators and institutions alike can benefit from greater visibility into transactions without compromising user privacy.

Conclusion: Ontology’s Financial Revolution

The Ontology Network is positioning itself as a cornerstone of the new financial era: one that is inclusive, secure, transparent, and globally accessible. By integrating decentralized identity, trusted data, and efficient transaction systems, Ontology could fundamentally reshape how money and trust move across the world.

In the next five years, we may see Ontology not just as a blockchain project, but as a key infrastructure layer for global finance, bridging the gap between traditional banking and the decentralized economy.

The world is moving toward a trustless, borderless, and more equitable financial system, and Ontology is paving the way.

THE ONTOLOGY NETWORK was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


FinTech4Good

Certified Responsible AI Leaders Course- Oct 1- Dec 10, 2025


Program Overview:

The Certified Responsible AI Leader Program is an immersive, high-energy program designed for AI professionals who are eager to lead in the rapidly evolving AI landscape. This program, developed by AI 2030, combines cutting-edge insights, hands-on activities, and interactive discussions with industry leaders to equip participants with the strategies and tools needed to drive responsible AI innovation across industries.

Specifically tailored for senior AI leaders, the program offers the knowledge and practical tools required to address critical AI challenges such as privacy, transparency, fairness, and accountability. Participants will learn how to operationalize responsible AI practices while fostering innovation and ensuring ethical outcomes in their organizations.

Program Highlights:

Interactive Sessions: Engage in dynamic discussions, live simulations, and collaborative problem-solving exercises.
Real-World Applications: Learn from real-world case studies and cutting-edge AI technologies.
Expert-Led Workshops: Gain practical knowledge from industry leaders and AI regulators.
Collaborative Networking: Build connections with top AI professionals, innovators, and thought leaders.
Actionable Takeaways: Leave with a certification and actionable strategies to implement responsible AI in your organization.

 

Program Curriculum:

Module 1: Enterprise AI Strategy and Responsible AI Framework

Description:

This module focuses on the opportunities presented by AI, particularly Generative AI, along with global and industry trends and practical use cases. Participants will explore how to align AI strategies with business objectives while ensuring that responsible AI practices are deeply embedded into their organization’s broader strategy. The module covers key foundational principles of responsible AI, including transparency, accountability, fairness, privacy-preservation, safety, security, and sustainability.

 

Module 2: AI Risk Management

Description:

This module delves into the various risks associated with AI, from ethical and operational challenges to legal and regulatory concerns. Rather than just providing a generic framework, it dives deep into different AI risk management frameworks. Participants will explore global AI regulatory evolution, including trends in compliance and governance. The module will also share best practices for managing these risks in a rapidly evolving AI landscape, helping participants implement effective risk management strategies.

 

Module 3: Operationalizing Responsible AI

Description:

This module prepares senior leaders to operationalize responsible AI across their organizations, emphasizing responsible AI design, procurement, and implementation. Participants will learn how to embed responsible AI principles into AI systems, acquire AI solutions that meet ethical and regulatory standards, and deploy scalable, secure, and sustainable AI initiatives.

 

Module 4: Responsible AI Leaders Talk

Description:

In this module, participants will engage with thought leaders and industry experts to explore the challenges and opportunities of leading responsible AI initiatives. Through interactive discussions and real-world examples, they will gain valuable insights into effective AI leadership. Participants will also complete a capstone project, applying responsible AI principles to real-world challenges, ensuring practical learning and readiness to lead responsible AI efforts within their organizations.

 

Program Structure:

Duration: 8 Weeks (Oct 1 – Dec 10, 2025)
Delivery: Weekly virtual sessions, with an optional one-day in-person conference.

Certification: Participants will receive the AI 2030 Certified Responsible AI Leader credential upon completion.

Featured Guest Lecturers

Marianna B. Ganapini, Associate Professor in Philosophy and Data Science, University of North Carolina at Charlotte

Usha Jagannathan, Director of AI Products, IEEE Standards Association

Soheil Feizi, Founder & CEO, RELAI.ai | CS Prof, UMD

James Gatto, AI Team Leader, Sheppard Mullin Richter & Hampton LLP

Phaedra Boinodiris, IBM Consulting’s Global Leader for Trustworthy AI

Nick Schmidt, Chief Technology and Innovation Officer, SolasAI

Arjun Ravi Kannan, Director, Data Science Research, Discover Financial Services

 

Program Benefits:

Cutting-Edge Knowledge: Stay ahead with insights into the latest trends, regulations, and practices in responsible AI.
Practical Skills: Gain hands-on experience with tools and frameworks that you can apply directly to your projects.
Professional Recognition: Earn a certification that highlights your expertise in responsible AI.
Networking Opportunities: Build relationships with industry leaders, experts, and fellow AI professionals.

Click here to join the program: https://ai2030.circle.so/checkout/certified-responsible-ai-leadership-program-

AI 2030 Catalyst Portfolio Program


Welcome to the AI 2030 Catalyst Portfolio Application!


We’re excited to learn more about your startup. This program supports early-stage and growth-stage responsible AI ventures with mentorship, product validation, GTM support, and strategic investor/partner activation.
AI 2030 Catalyst Portfolio is a 4-month accelerator-style program supporting startups that are committed to building and scaling responsible AI solutions.

APPLY HERE

Program Duration: September 15 – December 31, 2025
Format: Hybrid (Virtual + Select In-Person Events)
Global Applicants Welcome
Eligibility Criteria

To be eligible, startups must:

Be building an AI-powered solution aligned with Responsible AI principles (e.g., transparency, fairness, accountability, privacy, security and safety, and sustainability) or AI for Good / Impact (e.g., climate, health, education, equity, global development) or AI for All, promoting inclusivity, accessibility, and equity in AI development or use
Have at least a Minimum Viable Product (MVP) or prototype.
Preferably have 5+ paying enterprise customers or 100 active users, or a comparable level of traction
Be committed to actively engaging in the program through mentorship sessions, workshops, and partner meetings.
Be open to global exposure through AI 2030 member portal, events, media, and partnerships.

AWS Credit Benefit: 

All of our portfolio startups will receive $25,000 in AWS credits, valid for use over the next two years.

Key Dates

September 15: Program Launch & Onboarding
Focus: Product Validation + Mentor Matching
September 24: AI 2030 Summit – NYC
October: Go-to-Market (GTM) Acceleration Month
October 15: DC Responsible AI Policy & GTM Summit
November: Investment & Capital Activation Month
November 5: Silicon Valley Investment Summit
December: Global Reach & Partnerships

 

APPLY HERE

 


myLaminin

Breaking Down the Essentials of HIPAA Compliance

Managing and protecting health information is both a legal and ethical obligation for healthcare and research institutions. HIPAA, the Health Insurance Portability and Accountability Act, sets federal standards to safeguard Protected Health Information (PHI). It gives individuals rights over their data, restricts disclosure, and requires safeguards. Covered entities and business associates, such as research platforms, must comply through privacy, security, and breach notification rules.

FinTech4Good

AI 2030 Global Fellow Program

AI 2030 Global Fellow Program: Application & Selection Timeline

Application Period: Opens June 17, 2025, at the Chicago AI Week, and closes September 1, 2025
Selection Process: Evaluation by the AI 2030 selection committee
Global Fellow Announcement: September 24, 2025, at the AI 2030 Summit in New York

APPLY HERE

Background

As AI continues to reshape industries, societies, economies, and the environment, there is an urgent need for ethical, responsible, and inclusive AI leadership. AI 2030 is committed to mainstreaming Responsible AI by fostering a global network of AI leaders who drive ethical innovation, industry best practices, and policy engagement. To achieve this mission, AI 2030 launched the Global Fellow Program, a highly selective initiative designed to recognize and empower top AI 2030 members who are shaping the future of AI.

In 2024, 50+ exceptional fellows from 10 countries were selected, representing diverse expertise across AI policy, research, corporate leadership, and entrepreneurship. These fellows played a pivotal role in high-impact initiatives, shaping global AI frameworks, driving responsible AI adoption in enterprises, and engaging in high-level dialogues with policymakers, business leaders, and investors at the world’s most influential forums, such as the G20 Tech Sprint, the World Bank Group (WBG) and International Monetary Fund (IMF) annual meetings, the Dialogue of Continents, and AWS re:Invent. They were featured at major AI events, including Chicago AI Week, the AI Governance Forum, and the AI 2030 Responsible AI Leaders Forum, amplifying their thought leadership and shaping industry conversations. Additionally, fellows contributed to some of AI 2030’s most transformative initiatives, such as the Responsible AI Marketplace, the Global AI Regulation Index, and the Responsible AI Design Labs, fostering practical solutions for ethical AI implementation. These outcomes demonstrate the power of global collaboration in advancing Responsible AI, setting a strong foundation for the 2025 cohort.

Program Objectives

AI 2030 Global Fellows will participate in a prestigious one-year leadership and engagement program, representing AI 2030’s highest-tier membership dedicated to shaping the future of Responsible AI. The Program aims to:

Recognize and elevate leaders and innovators in Responsible AI. Build a Trusted Network of AI Trailblazers – Create a high-impact community of visionary leaders, fostering collaboration, mentorship, and global influence to drive groundbreaking advancements in Responsible AI. Expand High-Impact Initiatives – Empower fellows to lead and collaborate on transformative AI 2030 initiatives Equip fellows with exclusive opportunities, resources, and networks to amplify their impact. Position AI 2030 Global Fellows as influential thought leaders driving the advancement of AI technology, governance, policy, ethical adoption, and industry best practices. AI 2030 Global Fellow Program Tiers

The program consists of three prestigious categories, recognizing individuals at different stages of leadership and influence in Responsible AI. To sustain the prestige, exclusivity, and high-impact opportunities of the AI 2030 Global Fellow Program, an annual membership fee will be introduced for each tier starting in 2025. However, as part of AI 2030’s commitment to fostering an inclusive, high-impact global community, all 2024 Fellows and Senior Fellows will receive a full membership fee waiver for 2025—but they must reapply to maintain their status in the program.

AI 2030 Fellow (Emerging Leaders & Rising Experts)

Annual Membership Fee: $1,000 per year
Who Qualifies?

Early to mid-career professionals making significant contributions to Responsible AI.
AI researchers, policymakers, and industry professionals driving responsible AI innovation.
Entrepreneurs and corporate leaders advancing ethical AI solutions.

Key Benefits:
Recognition as an AI 2030 Global Fellow on our website, member portal, and social media channels.
Access to AI 2030 exclusive events, working groups, and leadership forums.
Opportunities to contribute and advance AI thought leadership (panel discussions, roundtables, and publications).
Collaboration with fellow experts and AI 2030 partners on Responsible AI initiatives.

AI 2030 Senior Fellow (Proven Leaders & Industry Influencers)

Annual Membership Fee: $2,000 per year
Who Qualifies?

Established professionals with 10+ years of experience in AI, policy, ethics, or industry.
Senior executives, researchers, or policymakers shaping AI governance and standards.
Influencers in AI strategy, innovation, or regulation.

Key Benefits:
Recognition as an AI 2030 Senior Fellow on our website, member portal, AI 2030 Show, and social media channels.
Opportunities to mentor AI 2030 Fellows.
Exclusive invitations to closed-door policy roundtables and strategy meetings.
Eligibility to serve on the AI 2030 Advisory Board & Selection Committee.
High-profile speaking opportunities at AI 2030 Summits and industry events.

AI 2030 Distinguished Fellow (Global Visionaries & AI Trailblazers)

Annual Membership Fee: $5,000 per year
Who Qualifies?

By invitation only; not open for applications.
Visionary leaders who have demonstrated outstanding contributions to Responsible AI.
Global AI policymakers, C-suite executives, leading academics, and industry pioneers.
Individuals shaping the future of AI through policy, governance, research, or large-scale innovation.

Key Benefits:
Lifetime designation as an AI 2030 Distinguished Fellow.
Invitation to AI 2030 Leadership Council for high-impact strategy development.
Direct engagement with global policymakers and CEOs to shape AI’s future.
VIP access to AI 2030’s top-tier networking events, policy summits, and innovation forums.
AI 2030 Spotlight Feature – high-visibility thought leadership across AI 2030 platforms.

Selection Criteria

The selection committee will evaluate applicants based on:

Expertise & Impact – Proven contributions to AI research, industry, or policy.
Leadership & Community Empowerment – Demonstrates strong leadership in AI while actively engaging and empowering communities through AI initiatives.
Collaboration & Influence – Ability to drive cross-sector collaboration and industry-wide change.
Innovation & Vision – Unique approaches to addressing AI’s ethical, regulatory, and technological challenges.

A selection committee of AI 2030 executive team members and esteemed AI leaders will evaluate applications and select fellows based on merit.

Conclusion

The AI 2030 Global Fellow Program is a game-changer in Responsible AI leadership. By recognizing, empowering, and mobilizing top AI professionals, this program will accelerate AI’s positive impact on society while addressing its risks.

Join us in shaping the future of Responsible AI. Apply now and become part of this exclusive leadership network.


1Kosmos BlockID

1Kosmos Ranked #1 in Workforce Product Score by Gartner

We’re proud to share that 1Kosmos has been recognized as a Challenger in the 2025 Gartner Magic Quadrant for Identity Verification (IDV) — and we’ve earned the #1 Product Score for Workforce based on the Critical Capabilities matrix.

This recognition reinforces what we’ve believed from the beginning: workforce identity isn’t just a productivity concern. It’s a frontline security imperative.

The Reality of Workforce Identity Today

Workforce identity isn’t just inefficient — it’s under active attack.

Employees still wrestle with password resets. Security teams drown in manual reviews. HR leaders lose valuable time onboarding talent. But the stakes have escalated far beyond productivity.

Groups like Scattered Spider and state-sponsored impersonators from North Korea are exploiting weak workforce identity systems with alarming success. According to recent reports, 1 in 5 organizations has already raised concerns about deepfake-driven fraud, synthetic identities, and credential-based attacks targeting their workforce.

This isn’t merely an optimization issue anymore. It’s a frontline security challenge. And without robust identity verification at the core of the workforce experience, enterprises remain exposed to attackers who can impersonate employees as easily as they once stole passwords.

We’ve seen the impact firsthand. A large Fortune 100 organization recently shared that they were struggling with impersonation attempts during onboarding, where fraudulent applicants were slipping past legacy checks. In one case, attackers even used synthetic identities to try and gain employee-level access.

After deploying 1Kosmos, the organization was able to verify identities in real time, flagging suspicious patterns and blocking bad actors before they entered the workforce. At the same time, legitimate new hires were onboarded in under 30 minutes instead of days, giving HR confidence in security without sacrificing speed.
That’s the impact we’re most proud of.

Proof Points from the Field

What sets 1Kosmos apart is that we don’t just verify identity once — we make that verification persistent and reusable through LiveID.

LiveID is our biometric credential recovery solution. Once an employee has been verified, they never need to go through repeated document checks or manual identity proofing again. If they forget a password, lose a device, or need to reset credentials, they can simply look into any camera — laptop, mobile, or kiosk — and recover access instantly. No helpdesk calls. No new onboarding cycles. And critically, no opportunity for attackers to impersonate them with stolen credentials.

Our customers are already seeing this difference:
• Eliminated the need for repeat ID verification during credential resets at a global enterprise, saving IT teams thousands of hours per year.

• Cut helpdesk password reset requests by over 60%, thanks to employees recovering access through LiveID without intervention.

• Prevented synthetic identity attacks during onboarding at a financial services customer — LiveID established a trusted baseline and stopped fraudulent applicants before they got inside.

• Blocked impersonation attempts from advanced threat groups, including Scattered Spider-style social engineering, because attackers couldn’t replicate a live biometric identity check.

With LiveID, identity proofing becomes an always-on assurance mechanism. Employees are empowered with seamless recovery, and enterprises gain a durable defense against the most sophisticated impersonation threats.

Where We’re Heading Next

Recognition in Gartner’s Magic Quadrant is an important milestone. But our roadmap makes clear: we’re just getting started.

• Smarter fraud detection – We’re piloting AI-driven pattern recognition with select customers today, helping them spot anomalies before they turn into incidents.
• Defending against deepfakes and synthetic IDs – Our liveness and biometric verification is evolving with AI to stay ahead of attackers who are weaponizing generative media.
• Natural language queries for identity data – Imagine asking, “Show me anomalies in remote access logins over the last 24 hours” and getting real-time insights. Early prototypes are already in testing.
• Developer-first flexibility – Our newest SDK release enables customers to customize verification workflows within minutes, making it easier to adapt identity flows to their unique business processes.

These aren’t aspirations — they’re real initiatives already underway with customers who are shaping the next generation of workforce identity with us.

Industry Context: Why Now

The stakes for workforce identity have never been higher. Attackers are outpacing legacy solutions, weaponizing AI to impersonate employees and create synthetic identities at scale. Passwords, static credentials, and bolt-on MFA simply don’t stand a chance.

Enterprises need more than incremental fixes. They need a frontline security control that delivers continuous verification, privacy by design, and frictionless usability at enterprise scale.

That’s where 1Kosmos stands apart. With innovations like LiveID, we’ve redefined workforce identity proofing into a durable, reusable assurance mechanism that blocks impersonation threats while empowering employees with effortless access.

That’s why Gartner’s recognition matters. Ranking 1Kosmos #1 in Workforce Product Score validates what the market is already demanding: a shift away from fragmented, outdated tools to an integrated identity platform that makes workforce verification both secure and seamless.

Our recognition in the Gartner Magic Quadrant isn’t just about us. It’s about the customers and partners who have trusted us to protect their workforce identities. Together, we are proving that identity can be more than a checkpoint. It can be a foundation of trust, resilience, and innovation across the enterprise.

The post 1Kosmos Ranked #1 in Workforce Product Score by Gartner appeared first on 1Kosmos.


Dock

mDLs, Privacy, and User Tracking: What You Need to Know [Video and Takeaways]

Mobile driver’s licenses (mDLs) and mobile identity documents (mDocs) are rapidly moving from pilot projects to mainstream adoption. With more than five million mDLs already in circulation and half of U.S. states announcing plans to issue them, the identity community is asking an important question: what do these standards really mean for privacy, interoperability, and real-world implementation?

To explore these issues, we hosted a live podcast featuring two leading experts. Andrew Hughes, VP of Global Standards at FaceTec, has spent more than a decade shaping international ISO standards for digital identity, credentials, and biometrics. Ryan Williams, Program Manager of Digital Credentialing at the American Association of Motor Vehicle Administrators (AAMVA), leads the subcommittee responsible for translating ISO standards into North American implementation guidelines.

Moderated by Richard Esplin, Head of Product at Dock Labs, the conversation offered a rare opportunity to connect the dots between how the ISO standards are written, how they are being interpreted in practice, and what identity practitioners need to know as mDLs roll out worldwide.


1Kosmos BlockID

From CISO to Startup Founder: The 1Kosmos Journey

The Early Days: A Security Obsession

I’ve always been a security geek. Back before Information Security was a thing, I was figuring out ways to get into systems or keep people out. This goes all the way back to the days of dial-up modems, bulletin boards, and online services like CompuServe and AOL.

A large portion of my security career was spent building the Information Security program at Lehman Brothers. During that 12-year run, the focus was on perimeter security, endpoint protection, and network monitoring – the first forms of intrusion detection/prevention. We wrote our own tools to do what Splunk and CyberArk do today.

The Convergence Vision

I was not only engrossed in information security but also in physical security. I spent the last few years of my career at Lehman Brothers, before their bankruptcy, managing physical security technology. My vision was to someday position myself as a CISO who would manage both worlds, as there was considerable discussion back then about the unification of those two disciplines.

What I didn’t realize at the time was that the missing piece in my security toolkit wasn’t physical security but a verifiable digital identity. The issue was that it didn’t exist yet. Of course, we had usernames and passwords, which don’t confirm someone’s identity but only offer a guess or hope about who they are. I deployed the company’s first SecurID server with hardware tokens sometime in the late ’90s, adding more layers but not necessarily increasing the certainty of identity. We also had PKI, PGP, and other acronyms.

The Pivot to Startups

My aspirations of becoming a leader in physical and information security shifted after Lehman’s bankruptcy, prompting me to explore the venture-backed startup world. I partnered with Chris Rouland (former ISS, EndGame, Bastille, Phosphorous) on a journey at Bastille Networks. After Bastille’s successful launch and securing a total of $100 million in VC funding, I saw an opportunity to begin the process of founding 1Kosmos. While raising VC money isn’t a guarantee of ultimate success, it indicates a certain level of traction and confidence in our value proposition.

The Genesis Moment – 1Kosmos

But I’m here today to talk about digital identity and the genesis of 1Kosmos, and what led us down the path of creating the world’s first unified digital identity platform. For starters, there is the name: 1Kosmos. Kosmos means “universe” in Greek. I partnered with the serial entrepreneur Hemen Vimadalal (Vaau, Simeio, Brinqa, Securonix, Saviynt, etc.) to launch the company. The idea we were kicking around was that someday you would own your own identity and be able to use it anywhere on the internet (or in the Cosmos!). Imagine a digital wallet that doesn’t just hold your credit cards, but your key identity information.

After early traction, we partnered with ForgePoint Capital for a Series A, and again with ForgePoint and now Oquirrh Ventures in our recent $57 million Series B.

I got really excited about identity when we first started 1Kosmos. Our CTO and fellow co-founder, Rohan Pinto, showed me how decentralized identity could be a real game-changer back in 2018.

I quickly realized, after only a few months of trying to tell this story, that the world wasn’t ready for this approach because of the classic chicken-and-egg problem with digital wallets: you need widespread adoption for it to be useful, but you also need it to be helpful to get widespread adoption. Without a major platform provider like Google, Apple, or perhaps a government player pushing it into the market, you won’t see broad-scale adoption from or for individual users.

The Strategic Pivot

So, we pivoted. The core principles of the product and decentralized identity stayed the same, but our go-to-market strategy changed. We became the first to combine verified digital identity with phishing-resistant, passwordless access, using the same proof that defines a digital wallet. When paired with biometrics, it provides a great user experience and significantly boosts security.

We didn’t realize at the time that this would become a key aspect of zero trust: knowing exactly who is accessing the data or service.

The Power of Decentralized Architecture

Because we are built on a decentralized identity model (and still operate on it), the user always controls their own identity and authentication. This allows us to offer employers, businesses, and governments a much better way to verify and demonstrate their users’ identities.

Market Validation and Evolution

As I refined the story in the early days of the company, I tested the solution with my friends in the industry, who are now CISOs of Fortune 100 companies. In those early days, it wasn’t a top priority because there was so much else to focus on, with everyone concentrating on cloud and other hot topics of the moment.

But one thing they all agreed on was that passwords had to be eliminated. The methods to accomplish this would evolve over the years, but the core principles remained the same. They also agreed that verifying a user’s identity was vital for key access points into their organizations, such as calls to the service desk and confirming the identity of new hires.

The Perfect Storm

When the Scattered Spider attacks began, we were well positioned to capitalize on the increased focus on digital identity. Our competitors in the industry were only concentrating on passwordless solutions without verified identities, leaving them with ineffective, patchwork solutions. The surge in security incidents and breaches motivated us to go to market and test the waters for a Series B raise. This belief was shared by our entire team.

Betting on Our Vision: The Series B

When we secured our $57 million Series B funding, my leadership team and I invested a substantial portion of our own personal wealth. As I mentioned at the time, “We’re not just confident in our pitch deck and customer base. We’re betting our personal wealth on our vision.”

We are addressing the core flaw in traditional identity and access management. By linking biometrics to a verified identity, we are re-confirming a user’s identity at every login, not just verifying a credential. We are truly transforming authentication from being about “something you have” to “who you are.”

The AI Challenge and Opportunity

As we look ahead, the threat landscape continues to evolve. The next major challenge for every CISO is how AI will change business operations, attacks, and defenses. We’re observing AI being weaponized, but also leveraged for defensive opportunities.

Once again, we were lucky to be in the right place at the right time regarding how we verify human identities. We’ve been using deepfake mitigation tools for years and continuously improving them. Once again, we are years ahead of our competitors, and this will be our key to winning the AI arms race. I am confident in our ability to deliver this high level of assurance, which will be our main differentiator between leading and lagging identity platforms.

Coming Full Circle: The Decentralized Future

But returning to what Rohan showed me in 2018 with decentralized identity and verifiable credentials: I see this not as a competing technology because it’s been integrated into the platform from the start. Instead, I view it as the ultimate realization of the 1Kosmos vision—getting this form of identity into the hands of every person.

The original idea might have been years too early, but having this capability in the platform will be the fourth “right time at the right place” moment for 1Kosmos.

To recap, our four key timing moments have been:

1. Verified identity

2. Passwordless access

3. Unifying those two principles by linking them to a biometric

4. And now, decentralized identity is coming of age

Reflection

My journey from CISO to company founder has been truly remarkable. I’m very fortunate to be surrounded by great fellow founders, and I couldn’t be more excited about our journey and the path we’re creating for our customers.

The post From CISO to Startup Founder: The 1Kosmos Journey appeared first on 1Kosmos.


auth0

Protect Your Access Tokens with DPoP (Demonstrating Proof of Possession)

Learn what DPoP is and how it works under the hood to enhance your application security and mitigate the effects of access token theft.
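The teaser above doesn’t show the mechanics, so here is a minimal sketch of what a DPoP proof looks like per RFC 9449. This is illustrative code, not material from the auth0 article: a DPoP proof is a JWT whose header carries the client’s public key and whose payload binds the proof to one HTTP method, URI, and access token. The asymmetric signature step (e.g. ES256 with the key in the header) is deliberately elided here.

```python
import base64
import hashlib
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url without padding, as used throughout JOSE."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_dpop_proof(htm: str, htu: str, access_token: str, jwk: dict) -> str:
    # Header carries typ "dpop+jwt" and the client's public JWK (RFC 9449).
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": jwk}
    payload = {
        "jti": str(uuid.uuid4()),  # unique ID so servers can reject replays
        "htm": htm,                # HTTP method the proof is bound to
        "htu": htu,                # target URI, without query or fragment
        "iat": int(time.time()),
        # "ath" binds this proof to one specific access token.
        "ath": b64url(hashlib.sha256(access_token.encode()).digest()),
    }
    signing_input = (
        b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    )
    # A real proof is signed with the private key matching the header JWK;
    # the third (signature) segment is left empty in this sketch.
    return signing_input + "."
```

Because the server can recompute `ath` from the presented access token and verify the signature against the key in the header, a stolen token alone is useless to an attacker who lacks the client’s private key.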

Sunday, 24. August 2025

Dock

How the Philippines Hit 73% Digital ID Adoption

73% of Filipinos now have a national digital ID. 🇵🇭

That’s 84 million registrations in a country of 115 million people. One of the highest adoption rates of digital ID systems globally.

These credentials, issued through the Philippine Identification System (PhilSys), have already been used in over 100 million transactions across both public and private services, from national and local government agencies to banks and other financial institutions.

Citizens can now verify their identity more quickly, securely, and conveniently.

One of the most impactful measures to boost adoption was linking ID issuance with birth registration. By assigning PhilSys numbers at birth, the government eliminated friction early and laid the foundation for lifelong identity coverage. 

The rollout also took a digital-first approach by providing citizens with an ePhilID, a digital version of the ID that can be stored on a phone. 

Saturday, 23. August 2025

Lockstep

Making cyberspace healthier

I was delighted and honoured to be invited by Professor Katina Michael to provide input to the Social Cyber Institute Australia-India consultation on Technology Impact Assessment (TIA).

Katina and I had a wide-ranging discussion about technology, data protection and digital transformation. A video recording is posted on YouTube and I am writing a few blogs to consolidate some of the topics we had fun traversing.

This first blog concerns the untapped potential of applying public health principles to cyber security.

In praise of public health

I have come to understand a little and appreciate a lot about public health through my extraordinary life partner, Dr Elizabeth (Lizzie) Denney-Wilson, a leading researcher in preventive health and Professor of Nursing at the University of Sydney. Through a bit of home office serendipity, Lizzie happened to meet Katina as we were warming up to record the TIA interview. This prompted me to share a few reflections on the differences I’ve observed between public health and cyber security professionals when it comes to human factors.

The thing is, people make bad decisions. People smoke and gamble; they eat too much but don’t exercise enough.

Human error is notoriously blamed for most cyber security problems. But in contrast to epidemiologists, information technologists have little sympathy for regular people and their bad decisions. We can’t fathom why users clicked on links and got phished. Or why they reused the same password across multiple sites. Or why people choose such stupid passwords to begin with!

In contrast, public health professionals long ago stopped blaming people for making harmful choices. “Bad decisions” isn’t even part of their frame of reference. Instead, preventive health researchers focus on human behaviour and working out the pathways to changing behaviour.

We need to stop the victim-blaming in cyber security. Regular folks are lumbered with complex, brittle, unforgiving Internet systems, designed by engineers, most often for engineers.

Security need not be difficult by design

Lizzie taught me the public health policy maxim, Make the best choice the easy choice.

Think about passwords. It’s not the users’ fault that they need passwords!

The password is a relic of 1960s computing, where it suited highly technical network administrators. In the good old days before global public networks, computers were only accessible from inside secure buildings, so single-factor passwords were perfectly adequate.

The password must be the only piece of IT where effectiveness is inversely proportional to ease of use: the harder a password is to use, the better it is. Technicians in data centres can deal with that, but the general public cannot, even as they have come to rely on modern pocket-sized supercomputers for everything from home security to grocery shopping.

It wasn’t until the FIDO Alliance launched Passkeys that regular users’ easy choice of authenticator became the best choice.

Photo: The Pickle Guys, NYC, https://pickleguys.com. Image Copyright (c) Stephen Wilson 2022.

The post Making cyberspace healthier appeared first on Lockstep.

Friday, 22. August 2025

Spruce Systems

How VDCs Are Transforming Customer Experience

Verifiable digital credentials (VDCs) are reshaping how businesses interact with customers by reducing friction, building trust, and creating seamless digital experiences.

Every business strives to provide its customers with the best possible experience. A significant challenge, however, is identity friction. This issue does not stem from a lack of effort but rather from the fact that traditional identity systems were not designed for today’s digital-first environment.

There is encouraging news. Verifiable digital credentials (VDCs) are already helping organizations create seamless customer experiences. Below is an overview of how VDCs are addressing long-standing challenges and enabling business success.

Understanding the Customer Experience Challenge

Research consistently shows that identity friction is one of the largest barriers to customer satisfaction in the digital age. Roughly 70% of global shoppers abandon their online carts, and 26% of U.S. shoppers drop off specifically when forced to create an account. When businesses fail to reduce barriers at critical moments like onboarding, checkout, or account recovery, they risk not only losing immediate sales but also eroding long-term customer trust and loyalty. This is where verifiable digital credentials can make a measurable difference.

How VDCs Are Creating Better Customer Experiences

Verifiable digital credentials provide a fundamentally different approach to identity management. Instead of requiring users to repeatedly prove who they are, VDCs enable a single credential that can be used across services. This allows for instant verification without friction while giving users complete control over the information they share.

The technology behind VDCs offers cryptographic security for assurance, real-time verification capabilities, and interoperability across systems. Most importantly, VDCs are designed with user-friendly interfaces that make advanced security processes seamless and unobtrusive for end users.

How are VDCs Being Used in the Wild?

Age Verification at Checkout
When a shopper adds an age-restricted product to their cart, the system can instantly request proof of age via a verifiable digital credential. Instead of uploading an ID or typing in sensitive details, the customer shares only a simple confirmation: “Over 18” or “Over 21.” The purchase moves forward without unnecessary exposure of birthdate or address, keeping checkout fast and private.
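As a toy illustration of this selective-disclosure idea (the function and field names are invented for this sketch, not SpruceID’s API), the wallet can evaluate an age predicate locally and disclose only the resulting boolean:

```python
from datetime import date

def age_over(birthdate: date, threshold: int, today: date) -> bool:
    # Subtract one year if this year's birthday hasn't happened yet.
    years = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return years >= threshold

def make_disclosure(birthdate: date, threshold: int, today: date) -> dict:
    # The wallet reveals only the boolean predicate, never the birthdate.
    return {
        "claim": f"age_over_{threshold}",
        "value": age_over(birthdate, threshold, today),
    }
```

In a real VDC flow the issuer’s signature over the underlying claim is what lets the verifier trust the disclosed predicate; that cryptographic layer is omitted here.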

Ticket and Booking Confirmation
During checkout for flights, concerts, or sporting events, customers can present a digital credential linked to their booking. At entry, that same credential confirms validity in seconds, no paper tickets or manual lookups needed. This reduces fraud, shortens wait times, and makes the overall customer flow smoother.

Insurance or Payment Eligibility
In a healthcare checkout or appointment booking flow, a patient can present a digital credential that proves both identity and insurance eligibility. This replaces lengthy form-filling and card uploads, allowing check-in and payment confirmation to happen in one click. The result: less administrative friction, faster processing, and a better overall experience.

The Customer Experience Transformation

These use cases highlight key areas where VDCs generate significant business value.

First, they eliminate friction through one-click authentication processes that replace password reset frustrations with instant verification. Second, VDCs build trust by ensuring security and empowering users with control over their data. Third, they increase conversion by reducing abandonment and accelerating onboarding, leading to higher completion rates and improved engagement. Finally, VDCs lower operational costs by reducing support overhead and automating verification, freeing resources for company growth initiatives.

Explore VDCs for Your Brand

The customer experience opportunity is not only about fixing inefficiencies, but about unlocking new possibilities. VDCs improve security while enabling businesses to deliver seamless, trustworthy experiences that meet modern customer expectations.

The key question is not whether businesses can afford to implement VDCs, but how VDCs can strengthen customer experience and drive long-term success. Organizations that adopt VDCs are not simply solving technical challenges. They are building competitive advantages that will define the future of digital engagement.

If your organization is ready to explore how VDCs can transform customer experience and create opportunities for growth, SpruceID can help design and deploy systems that enhance customers' interactions with your brand.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


liminal (was OWI)

Link Index for Ransomware Prevention 2025

The post Link Index for Ransomware Prevention 2025 appeared first on Liminal.co.

Thales Group

Data Superiority: Securing the Digital Backbone of Defence


In modern defence, data is as decisive as any platform or weapon system. The British Army’s Digital & Data Plan puts it plainly: “Data is our most valuable asset after our people and underpins our competitiveness.” This makes data security not just a technical requirement but a strategic imperative for resilience and mission success.

At DSEI 2025, this theme is front and centre in our “Protecting Our Freedoms” zone. We’ll show how trusted, secure and resilient data ecosystems—built on proven COTS where appropriate and sovereign encryption where required—can transform effectiveness and deliver operational advantage.

Data as the cornerstone of operational advantage

Modern warfare is increasingly data-centric: success is defined by how fast you sense, process, decide and act, underpinned by resilient, secure communications networks and an assured data fabric. Consequently, data must be treated as a strategic asset—protected by secure computing and data infrastructure, with assured end-to-end data flows.

The threat: when data becomes the target

State-linked actors now pair cyber with kinetic operations to undermine C2 and decision-making. Supply-chain exposure (including semiconductors) and adversarial AI expand the attack surface—lessons reinforced in recent conflicts.

What works: layered, evidence-based security

Defence organisations are combining proven controls with forward-leaning innovation:

•    Quantum-safe cryptography to prepare for a post-quantum world.
•    Zero Trust architectures (continuous verification of users, devices and apps).
•    Edge security to keep mission-critical processing resilient at the tactical edge.
•    AI security-by-design (including AI red-teaming for adversarial robustness).
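The Zero Trust item above — continuous verification of users, devices and apps — can be sketched as a policy check that re-evaluates every request. The field names and policy store below are illustrative, not any specific Thales product:

```python
from dataclasses import dataclass

# Minimal sketch of Zero Trust "continuous verification": identity,
# device posture, and entitlement are all re-checked on every request.
# Policy store and field names are hypothetical.

@dataclass
class Request:
    user: str
    mfa_verified: bool       # user identity re-verified for this request
    device_compliant: bool   # device posture attested
    resource: str            # what is being accessed

ENTITLEMENTS = {"alice": {"mission-data"}}  # hypothetical policy store

def authorize(req: Request) -> bool:
    # Never trust by default: all three checks must pass, every time.
    return (
        req.mfa_verified
        and req.device_compliant
        and req.resource in ENTITLEMENTS.get(req.user, set())
    )
```

A request that fails any single check (stale MFA, non-compliant device, or missing entitlement) is denied, which is the essential difference from perimeter-based models that authenticate once and trust thereafter.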

Thales’ approach

Thales secures data at rest, in transit and in use across civil and defence sectors. At enterprise scale in finance and other critical industries, we secure high volumes of transactions, protect large fleets of endpoints and operate multiple SOCs—experience that translates directly to defence.

In defence, we protect sensitive data end-to-end—on secure tactical networks, in hybrid cloud and across multi-domain operations—while aligning to MOD doctrine and cloud posture:

•    Secure multi-cloud to the edge. Designs spanning Secret and below, resilient under DIL conditions, aligned to cloud-first guidance (e.g., MoDCloud) and the Cloud Strategic Roadmap.
•    Policy-based access control. Authenticate every person and entity based on digital identity; authorise by policy so users and services see only what they are entitled to and need.
•    Anomaly detection. Monitor for abnormal behaviour and automatically contain or block suspicious activity.
•    Protect data at rest, in transit and in use. Ensure data is useless to anyone not entitled to access it, through strong encryption and robust key management.
•    Fast, low-friction deployment. COTS solutions configured for high-assurance protection on-prem or in cloud; sovereign crypto where the highest link assurance is mandated.
•    Assured AI. Through cortAIx, we focus on explainability, safety and mission-fitness—including testing against adversarial techniques and contested EM/cyber conditions.

Lessons from the digital battlespace

Trusted, real-time data flows backed by resilient connectivity are decisive. MOD’s digital doctrine calls for a system-of-systems model in which data integration and protection are as critical as platforms.

Working with partners and suppliers

Protecting national data is a collective effort. We work with government, primes and SMEs to raise resilience via the existing DCPP/SAQ approach and the new Defence Cyber Certification (DCC) scheme developed with IASME in 2025.

Collaboration across programmes

From maritime mine countermeasures to collaborative air, we enable interoperable, secure data-sharing that connects platforms, allies and industry—supporting UK, European and NATO priorities.
Securing the future digital battlespace

Emerging tech—post-quantum cryptography, AI assurance/red-teaming and resilient tactical networks—will define the next wave of challenges. Our vision is simple: treat data as a strategic asset and secure it across the full operational lifecycle.

Visit Thales at DSEI 2025 — Protecting Our Freedoms

Explore how we:

•    Protect data & applications with Zero Trust-aligned controls and high-assurance encryption.
•    Secure identity & policy-based access control across physical/digital domains.
•    Detect and respond to threats on warships and submarines.
•    Strengthen CNI, borders & digital infrastructure against hybrid threats.
•    Deliver sovereign cloud & comms so sensitive and critical data remains available—even in ever-changing operational environments demanding agility.

Elliptic

Beyond Zelle: A joint stablecoin is banking's next interoperability frontier

The GENIUS Act has given American banks something they didn’t have before: clear rules for issuing stablecoins. The legislation requires stablecoins to maintain one-to-one backing with high-quality assets like cash and Treasury securities, allows issuers to choose between state or federal regulatory oversight, and crucially excludes stablecoins from securities regulations that would oth

The GENIUS Act has given American banks something they didn’t have before: clear rules for issuing stablecoins. The legislation requires stablecoins to maintain one-to-one backing with high-quality assets like cash and Treasury securities, allows issuers to choose between state or federal regulatory oversight, and crucially excludes stablecoins from securities regulations that would otherwise complicate their use as payment instruments. Now that President Trump has signed the Bill into law, banks can move forward with confidence.


uquodo

Ensuring Compliance: Optimizing PEP Screening Processes

The post Ensuring Compliance: Optimizing PEP Screening Processes appeared first on uqudo.

Ontology

Who Owns Web3’s Data? 7 Questions for the Community


Inspired by an article from Geoffrey Richards (Ontology’s Head of Community), let’s pressure-test our assumptions about data, identity, and reputation in Web3 👉 LinkedIn

Geoff’s EthCC reflections spotlight a creeping habit: treating user data as a private moat. If Web3 is about user ownership, we need to design like we mean it — starting with decentralized identity and consented, privacy-preserving reputation.

7 questions for the community

1. Moats vs. Markets: If your competitive edge depends on locking in user data, are you building Web3 — or rebuilding Web2 with tokens?
2. Consent by Design: Where — and how — do users grant, view, and revoke consent for every data use?
3. Portability: Can users take their identity and reputation to another app today without losing status or access?
4. Proofs, Not Dumps: Which flows can switch from raw data sharing to zero-knowledge proofs (prove X without revealing Y)?
5. Agent-Age Identity: As AI agents arrive, what’s your plan for agent identity that’s transparently tied to a real user’s intent and permissions?
6. Value Share: If data creates value (better matching, lower fraud), how do users capture a fair share?
7. Exit Rights: What’s the one-click path for users to export, delete, or re-permission their footprint?

If we wouldn’t be proud to explain our data model to users, it’s the wrong model. Read Geoff’s original article and tell us how you’d implement user-owned identity and reputation in your corner of Web3. 👉LinkedIn

Who Owns Web3’s Data? 7 Questions for the Community was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


iComply Investor Services Inc.

Crypto Compliance in a Connected World: Aligning KYT, KYC, and AML Across Jurisdictions

Global regulators are tightening AML expectations for crypto firms. This guide explains how VASPs can streamline compliance using iComply’s edge-based KYC and blockchain-native KYT tools.

Crypto platforms must comply with tightening AML laws worldwide—from MiCA in the EU to Travel Rule enforcement in the U.S., UK, Singapore, and UAE. This article explores global KYT, KYC, and AML expectations for VASPs and how iComply helps automate screening, verification, and cross-chain compliance.

The crypto industry has grown from fringe innovation to a core component of global finance – but with that growth comes regulation. In every major market, Virtual Asset Service Providers (VASPs) are now expected to meet traditional financial crime standards. For crypto exchanges, custodians, token issuers, and wallets, this means embracing full-spectrum AML compliance: from real-time identity verification to transaction monitoring and data sharing protocols.

The Global AML Landscape for Crypto

European Union
Frameworks: MiCA, AMLD6, and Travel Rule compliance
Expectations: KYC for all users, KYB for corporate clients, transaction monitoring (KYT), and cross-border data sharing via TRP (Travel Rule Protocol)

United States
Regulators: FinCEN, SEC, CFTC, state regulators
Requirements: MSB licensing, Travel Rule compliance, sanctions screening (OFAC), suspicious activity reporting (SARs), and BOI reporting for corporate accounts

United Kingdom
Regulator: FCA
Requirements: Registration, AML risk assessment, PEP and sanctions screening, transaction monitoring, and Travel Rule data transfer

Singapore
Regulator: MAS
Requirements: VASP licensing, CDD/EDD, KYT, and secure data transfer of originator/beneficiary details under the Travel Rule

United Arab Emirates
Regulators: VARA (Dubai), SCA (federal)
Requirements: KYC, transaction monitoring, UBO reporting, and Travel Rule compliance for all virtual asset transfers

Core Compliance Responsibilities for Crypto Firms

KYC/KYB: Identity verification of users and business clients
KYT: Monitoring of blockchain transactions for anomalies, structuring, and prohibited counterparties
Sanctions + PEP Screening: Ongoing checks of users, addresses, and counterparties
Travel Rule: Transmitting originator and beneficiary information securely and in real time
Audit-Ready Documentation: Logging all decisions, escalations, and screening events

Why Compliance Is Harder in Crypto

1. Pseudonymity: Wallet addresses lack inherent identity linkage

2. Cross-border complexity: Differing enforcement timelines and data localization laws

3. Fragmented tooling: Most tools only cover part of the AML process

4. User drop-off risk: Friction-heavy verification drives away users if poorly implemented

How iComply Helps VASPs Stay Compliant and Competitive

iComply delivers a modular, API-friendly platform tailored to VASPs across jurisdictions:

1. Edge-Based KYC + KYB
Verify individuals and businesses using local devices before encryption
Avoid transmitting raw PII or breaching GDPR or UAE data rules
Supports 14,000+ global ID types in 140+ languages

2. KYT: Smart Blockchain Monitoring
Monitor wallet behaviour and transaction patterns
Score and escalate suspicious flows (e.g., tumblers, DEX swaps, sanctions exposure)
Correlate blockchain data with user risk profiles

3. Travel Rule Compliance
Integrate with TRISA, OpenVASP, or TRP
Securely send and receive originator/beneficiary info
Log data sharing and counterparty responses for audits

4. Sanctions + PEP Screening
Screen individuals, addresses, and corporate entities
Configure alerting thresholds and refresh cycles

5. Unified Case Management
Assign investigators, log decisions, and export regulatory reports
Full traceability across onboarding, transactions, and disposition

Case Insight: US Crypto Exchange

A mid-sized US exchange adopted iComply’s full-stack compliance suite. Results:

Reduced onboarding drop-off by 35%
Achieved KYB, KYC and Travel Rule readiness in under 60 days
Improved screening accuracy and reduced processing time

Crypto compliance isn’t just about checking a box – it’s about building trust, enabling scale, and staying ahead of regulators. VASPs that embed KYT, KYC, and AML at the infrastructure level are best positioned for global growth.
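The KYT screening flow described above — score wallet behaviour, escalate suspicious flows, log the reasons — can be sketched as a simple rule engine. The rules, weights, and thresholds below are hypothetical illustrations, not iComply's actual scoring logic:

```python
# Illustrative KYT-style rule engine. Rule names, weights, and
# thresholds are hypothetical; real engines use far richer signals.

def score_transaction(tx: dict, sanctioned: set, mixers: set) -> tuple[int, list]:
    """Score one blockchain transaction against simple risk rules."""
    score, reasons = 0, []
    if tx["from"] in sanctioned or tx["to"] in sanctioned:
        score += 100
        reasons.append("sanctions exposure")
    if tx["to"] in mixers:
        score += 60
        reasons.append("mixer counterparty")
    if tx["amount_usd"] > 10_000:
        score += 20
        reasons.append("large transfer")
    return score, reasons

def disposition(score: int) -> str:
    """Map a risk score to a case action (thresholds illustrative)."""
    if score >= 100:
        return "block"
    if score >= 50:
        return "escalate"
    return "allow"

tx = {"from": "0xUSER", "to": "0xMIXER_EXAMPLE", "amount_usd": 20_000}
score, why = score_transaction(tx, sanctioned=set(), mixers={"0xMIXER_EXAMPLE"})
# score == 80 → "escalate", reasons: ["mixer counterparty", "large transfer"]
```

Keeping the triggered reasons alongside the score is what makes the output audit-ready: every escalation carries the evidence that produced it.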

Book a call with iComply to learn how our platform helps crypto firms stay secure, compliant, and customer-friendly – across jurisdictions and chains.


Aergo

Official Path to HPP: Portal and Bridge Opening Soon


The soon-to-be-released HPP Migration Portal and the Bridge will serve as the official gateways for transitioning from legacy tokens (AERGO and AQT) into the new HPP economy. Through the portal, holders will be able to seamlessly convert their tokens into the unified HPP Token, while the bridge guarantees integrity and security across chains.

Both the portal and bridge will be rolled out progressively as the migration advances, ensuring a smooth and reliable transition for all participants.

The following is a preview of how the migration will be carried out, outlining the two-step process required to complete the transition into the HPP ecosystem.

https://portal.hpp.io/

Migration Steps

Step 1: Swap AERGO / AQT → HPP(Ethereum)

AERGO (both native and ERC-20) and AQT (ERC-20) are converted into HPP (Ethereum).

Ratios:
1 AERGO = 1 HPP
1 AQT = 7.43026 HPP (only the whole number will be converted; decimal remainders will not carry over)
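Assuming the "whole number only" rule means the resulting HPP amount is floored (the announcement does not spell out the exact rounding), the conversion arithmetic can be sketched as:

```python
from decimal import Decimal, ROUND_FLOOR

# Published migration ratios from the announcement.
RATIOS = {"AERGO": Decimal("1"), "AQT": Decimal("7.43026")}

def to_hpp(token: str, amount: Decimal) -> Decimal:
    """Convert a legacy-token amount to HPP (Ethereum).

    Assumes "only the whole number will be converted" means the HPP
    result is floored and the decimal remainder is discarded.
    """
    raw = amount * RATIOS[token]
    return raw.to_integral_value(rounding=ROUND_FLOOR)

print(to_hpp("AERGO", Decimal("100")))  # 100
print(to_hpp("AQT", Decimal("10")))     # 74 (10 × 7.43026 = 74.3026, floored)
```

Decimal arithmetic is used rather than floats so the published ratio is applied exactly, with no binary rounding error.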

Why this step is required

Ethereum is the most liquid and interoperable base layer. Converting legacy tokens into HPP (Ethereum) ensures:

A unified token supply across chains
Compatibility with existing exchange infrastructure
Secure accounting and custody before moving into the new Layer 2

Step 2: Finalize HPP(Ethereum) → HPP(Mainnet)

Convert your HPP (Ethereum) to HPP (Mainnet) through the official portal. HPP (Mainnet) is used for exchange listings, DAO governance and rewards, and ecosystem utilities.

Why this step is required

HPP(Mainnet) is the execution layer optimized for AI-native workloads. To participate in governance, earn rewards, and access dApps, tokens must reside on the HPP Mainnet. Keeping Ethereum as an intermediate layer ensures smooth bridging, liquidity routing, and compliance.

Key Notes

Both the portal and bridge will be rolled out as the migration advances.
A detailed, step-by-step migration guide will be released concurrently with the launch of the portal.
To fully participate in the HPP ecosystem, including governance, staking, rewards, and trading, it is essential to complete both steps of the migration.

Official Path to HPP: Portal and Bridge Opening Soon was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

Secure Third-Party Tool Calling: A Guide to LangGraph Tool Calling and Secure AI Integration in Python

Learn how to use LangGraph to integrate with external APIs with tool calling in Python, FastAPI, and LangChain.

BlueSky

Our Response to Mississippi’s Age Assurance Law

A new Mississippi law requires us to block full access to Bluesky unless all users complete age checks. We have concerns about this law’s implementation.

Keeping children safe online is a core priority for Bluesky. We’ve invested a lot of time and resources building moderation tools and other infrastructure to protect the youngest members of our community. We’re also aware of the tradeoffs that come with managing an online platform. Our mission is to build an open and decentralized protocol for public conversation, and we believe in empowering users with more choices and control over their experience. We work with regulators around the world on child safety—for example, Bluesky follows the UK's Online Safety Act, where age checks are required only for specific content and features.

Mississippi's approach would fundamentally change how users access Bluesky. The Supreme Court’s recent decision leaves us facing a hard reality: comply with Mississippi’s age assurance law—and make every Mississippi Bluesky user hand over sensitive personal information and undergo age checks to access the site—or risk massive fines. The law would also require us to identify and track which users are children, unlike our approach in other regions. We think this law creates challenges that go beyond its child safety goals, and creates significant barriers that limit free speech and disproportionately harm smaller platforms and emerging technologies.

Unlike tech giants with vast resources, we’re a small team focused on building decentralized social technology that puts users in control. Age verification systems require substantial infrastructure and developer time investments, complex privacy protections, and ongoing compliance monitoring — costs that can easily overwhelm smaller providers. This dynamic entrenches existing big tech platforms while stifling the innovation and competition that benefits users.

We believe effective child safety policies should be carefully tailored to address real harms, without creating huge obstacles for smaller providers and resulting in negative consequences for free expression. That’s why until legal challenges to this law are resolved, we’ve made the difficult decision to block access from Mississippi IP addresses. We know this is disappointing for our users in Mississippi, but we believe this is a necessary measure while the courts review the legal arguments.

Here’s more on our decision and what comes next.

Why We’re Doing This

Mississippi’s HB1126 requires platforms to implement age verification for all users before they can access services like Bluesky. That means, under the law, we would need to verify every user’s age and obtain parental consent for anyone under 18. The potential penalties for non-compliance are substantial — up to $10,000 per user. Building the required verification systems, parental consent workflows, and compliance infrastructure would require significant resources that our small team is currently unable to spare as we invest in developing safety tools and features for our global community, particularly given the law's broad scope and privacy implications.

Our Concerns About Mississippi’s Approach

While we share the goal of protecting young people online, we have concerns about this law’s implementation:

Broad scope: The law requires age verification for all users, not just those accessing age-restricted content, which affects the ability of everyone in Mississippi to use Bluesky.

Barriers to innovation: The compliance requirements disadvantage newer and smaller platforms like Bluesky, which do not have the luxury of big teams to build the necessary tooling. The law makes it harder for people to engage in free expression and chills the opportunity to communicate in new ways.

Privacy implications: The law requires collecting and storing sensitive personal information from all users, including detailed tracking of minors.

What We’re Doing

Starting today, if you access Bluesky from a Mississippi IP address, you’ll see a message explaining why the app isn’t available. This block will remain in place while the courts decide whether the law will stand.
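An IP-region gate of the kind described can be sketched as follows. This is a generic illustration, not Bluesky's actual implementation; geolocate() is a stand-in for a real GeoIP service, and the lookup table uses reserved documentation addresses:

```python
# Generic sketch of gating access by IP-derived region (NOT Bluesky's
# actual code). A production system would call a real GeoIP database.

BLOCKED_REGIONS = {"US-MS"}  # ISO 3166-2 code for Mississippi

def geolocate(ip: str) -> str:
    """Stand-in for a real GeoIP lookup; returns an ISO 3166-2 region."""
    demo_table = {                      # documentation-range IPs only
        "203.0.113.7": "US-MS",
        "198.51.100.2": "US-CA",
    }
    return demo_table.get(ip, "UNKNOWN")

def handle_request(ip: str) -> str:
    if geolocate(ip) in BLOCKED_REGIONS:
        return "blocked: service unavailable in your region pending legal review"
    return "ok"
```

Note that IP geolocation is approximate by nature (VPNs, mobile carriers, and shared egress points all blur it), which is one practical limitation of this kind of compliance measure.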

How This Differs From Our Approach in Other Places

Mississippi’s new law and the UK’s Online Safety Act (OSA) are very different. Bluesky follows the OSA in the UK. There, Bluesky is still accessible for everyone, age checks are required only for accessing certain content and features, and Bluesky does not know and does not track which UK users are under 18. Mississippi’s law, by contrast, would block everyone from accessing the site—teens and adults—unless they hand over sensitive information, and once they do, the law in Mississippi requires Bluesky to keep track of which users are children.

Other Apps on the Protocol

This decision applies only to the Bluesky app, which is one service built on the AT Protocol. Other apps and services may choose to respond differently. We believe this flexibility is one of the strengths of decentralized systems—different providers can make decisions that align with their values and capabilities, especially during periods of regulatory uncertainty. We remain committed to building a protocol that enables openness and choice.

What’s Next

We do not take this decision lightly. Child safety is a core priority, and in this evolving regulatory landscape, we remain committed to building an open social ecosystem that protects users while preserving choice and innovation. We’ll keep you updated as this situation develops.

Thursday, 21. August 2025

Indicio

Make mobile driver’s licenses work everywhere, easily, with Indicio Proven®

The post Make mobile driver’s licenses work everywhere, easily, with Indicio Proven® appeared first on Indicio.
Indicio Proven® makes mobile driver’s licenses (mDLs) practical and interoperable, delivering secure, privacy-preserving identity verification across borders, platforms, and industries.

By Helen Garneau

Mobile driver’s licenses (mDLs) are gaining traction worldwide as governments look for secure, digital alternatives to physical identity documents. The promise is clear: residents and citizens can carry a government-issued license on their phone, verify their identity, keep their privacy, and reduce reliance on physical cards.

But there’s a challenge to adoption. While it’s easy to issue an mDL, verifying one across different systems, borders, and industries is not. Without an easy way to verify an mDL, its usefulness is limited.

This is what Indicio Proven® solves. Proven makes mDL verification simple, mobile, and cost-effective. With Proven, an mDL can be used and trusted at banks, airports, and businesses.

Closing the mDL verification gap

Proven bridges the verification gap in two ways.

For organizations that want integration at the system level, Proven provides APIs that can be embedded into existing workflows.

And for those who want to verify quickly, without system integrations, Proven provides a mobile verifier, downloadable as an app that is simple to use and cost-effective to adopt.

This makes it easy to verify an mDL: at a government agency, at a retailer for proof of age — anywhere anyone needs to prove who they are.

Interoperability by design

With Proven, you have a system for digital identity that scales to your needs and meets the requirements for verifiable digital identity around the world. Proven enables you to issue and verify Digital Travel Credentials (DTC), W3C Verifiable Credentials, SD-JWT VCs, and AnonCreds, and to interoperate with the European Digital Identity Wallet (EUDI).

Proven also allows you to combine authenticated biometrics with Verifiable Credentials to provide the highest level of digital identity assurance and mitigate the threats of biometric identity fraud and AI-generated deepfakes.

Through our partnership with Regula, Proven now provides access to document validation for 250-plus countries and territories — all of which can be combined with biometric authentication.

This breadth of options in Proven provides enterprises and governments with a range of innovative, easy-to-implement solutions that meet current and future document and identity authentication needs — all with the assurance of interoperability, privacy-by-design, and open-standard robustness.

Lower cost, increase revenue

Indicio Proven isn’t just about cutting identity verification, fraud, and security costs; it also provides a way to monetize verification. Very small fees per verification are imperceptible to users but soon add up as your solution scales. It’s a boon for government infrastructure — rapid deployment, user simplicity, immediate revenue.

No other digital identity technology offers this benefit trifecta.

Indicio Proven — your mDL solution

Talk to us about the emerging verification economy, and how Proven can power your mDL program and meet your document and identity authentication needs.

Learn about how our customers are using Proven to manage everything from account access to border crossing and what that means for your business or agency.

Or you can just deploy today. We’re here to drive your success — contact us.



1Kosmos BlockID

A New Approach to Identity


Identity has become one of the most vulnerable parts of the digital world. Every week, we see headlines about new attacks: deepfakes tricking people into wiring money, social engineering scams bypassing help desks, and stolen credentials fueling large-scale breaches. Groups like Scattered Spider and North Korea’s “shadow IT” workers have shown just how easily attackers can manipulate outdated processes. These incidents reveal a simple truth: the old ways of handling identity are no longer enough.

Why Traditional Identity Falls Short

Most identity systems were built decades ago, around usernames, passwords, and static credentials stored in centralized databases. At the time, it made sense. But in today’s environment, this approach creates more risk than protection. Centralized stores are tempting targets, and when they fall, millions of records go with them. Meanwhile, adversaries are using AI to generate fake voices and faces, harvest credentials in real time, and run scams that outpace static security controls.

The human impact is also clear. People are asked to hand over personal information to countless organizations without knowing how it is stored or shared. They have almost no control, and when regulations tighten, enterprises end up scrambling to catch up.

Putting People Back in Control

At 1Kosmos, we believe the answer is not to add more layers to a broken model, but to rethink identity from the ground up. That is why we are building privacy-first, decentralized identity solutions that shift control back to the user.

The heart of this approach is the digital wallet. Instead of credentials living in a central database, they are stored securely on a person’s device. That means an individual can carry verified credentials, like a driver’s license, proof of employment, or a biometric factor, and choose exactly when and with whom to share them. Organizations, in turn, can instantly verify authenticity without holding sensitive data themselves.
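The wallet model above — the holder chooses exactly which claims to reveal, and the verifier checks authenticity without holding the data — can be sketched with salted hash commitments. This is a toy model in the spirit of selective disclosure (as in SD-JWT), not 1Kosmos's actual protocol; real systems also cryptographically sign the commitment set:

```python
import hashlib
import os

# Toy selective disclosure: the issuer commits to a salted hash of each
# claim; the holder later reveals only a chosen claim plus its salt, and
# the verifier recomputes the hash against the issuer's commitment.
# Real credential formats additionally sign the commitments.

def commit(claims: dict) -> tuple[dict, dict]:
    """Issuer side: return (per-claim commitments, per-claim salts)."""
    salts = {k: os.urandom(16).hex() for k in claims}
    commitments = {
        k: hashlib.sha256(f"{salts[k]}|{k}|{v}".encode()).hexdigest()
        for k, v in claims.items()
    }
    return commitments, salts

def verify(key: str, value: str, salt: str, commitments: dict) -> bool:
    """Verifier side: check one disclosed claim against the commitment."""
    digest = hashlib.sha256(f"{salt}|{key}|{value}".encode()).hexdigest()
    return commitments.get(key) == digest

claims = {"name": "A. Person", "over_18": "true", "employer": "Acme"}
commitments, salts = commit(claims)  # issued alongside the credential
# Holder discloses only the age claim; name and employer stay private.
ok = verify("over_18", "true", salts["over_18"], commitments)
```

The per-claim random salt matters: without it, a verifier could brute-force low-entropy claims (like a birth year) directly from the hashes.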

The result is stronger privacy, reduced risk of mass breaches, and a much better experience. No more resets. No more risks. No more roadblocks. We’re already seeing the impact of this approach: enterprises using 1Kosmos have reduced fraud losses by 90% and eliminated millions of password resets. For employees and customers, it means faster access, fewer frustrations, and stronger trust in every interaction.

Fueling the Shift with $57 Million

We’re already seeing strong traction with enterprises and governments embracing a privacy-first identity model. Our recent $57 million Series B funding is the catalyst to move even faster—expanding innovation, integrations, and global reach. The investment validates the market’s urgent need for identity modernization and accelerates our ability to deliver it.

With this funding, we are advancing AI-powered defenses against deepfakes and impersonation attempts, expanding enterprise-ready digital wallets, and strengthening integrations with IAM, CIAM, PAM, and zero-trust platforms. We are also growing our reach into new markets across North America, EMEA, and APAC.

These steps build on the momentum we have already achieved, including becoming the only full-service Kantara-certified credential service provider with FedRAMP High authorization, and winning a 10-year, $194.5 million Login.gov agreement to supply next-generation identity proofing.

Where We Go From Here

For too long, identity has been the weakest link. We see it as the foundation of trust—and the key to better digital experiences when built on privacy and user control.

That is the vision we are pursuing at 1Kosmos. By combining verified identity proofing, passwordless authentication, and blockchain-based privacy, we are giving organizations the tools to stop impersonation attacks before they start, while giving individuals more control over their digital lives.

The stakes have never been higher, but neither have the opportunities. With our Series B investment, we are moving faster toward a world where identity is no longer the first step in the kill chain, but the first line of defense.

The post A New Approach to Identity appeared first on 1Kosmos.


Elliptic

Bybit exploit six months on: Novel laundering tactics, techniques and procedures and the looming threat of DPRK

August 21st marks six months since the infamous Bybit exploit. Here we discuss some of the laundering methodologies and tactics observed, including use of refund addresses, cross-chain laundering, mixers, and the creation of new, worthless tokens.

On February 21st 2025, Dubai-based exchange Bybit fell victim to the largest confirmed crypto theft in history. Across just two transactions, approximately $1.46 billion in Ether (ETH) and ERC-20 tokens were transferred to a single attacker-controlled address. Elliptic was one of the first to publicly attribute the exploit to North Korea.

In our February blog we explained how initial stolen assets were distributed across multiple addresses for the first stage of laundering. In this article we’ll discuss some of the other techniques and methods employed to launder the funds to eventual endpoints, with a particular focus on those which differed from North Korea’s usual laundering tactics, techniques and procedures.

zeroShadow’s recent report indicates that over $1 billion of the stolen funds have now been laundered. It is unlikely that funds remained in the control of DPRK operatives at all stages. Professional ‘laundering as a service’ operations are thought to have been employed from early stages of the laundering, with ‘North Koreans receiving the face value of the funds to be laundered, minus their fee, at the point of exchange’. This theory is bolstered by multiple reports of ‘user’ complaints being raised on occasions when Bybit stolen funds have been frozen by services; i.e., launderers seeking to maximise their own, personal profits as opposed to recovering a loss for their client.


Datarella

Centralized and Decentralized Systems: A Symbiosis for Greater Prosperity – Insights from Organization Theory

Decentralized systems have been in vogue at least since the rise of Web3, particularly in Europe. Unlike in the USA or China, where centralized structures prevail, Europe consists of many […] The post Centralized and Decentralized Systems: A Symbiosis for Greater Prosperity – Insights from Organization Theory appeared first on DATARELLA.

Decentralized systems have been in vogue at least since the rise of Web3, particularly in Europe. Unlike in the USA or China, where centralized structures prevail, Europe consists of many comparatively small democratic nations that must coordinate in all areas of life to provide their citizens with a high quality of life.

Similar to participants in decentralized Web3 networks, individual citizens in Europe enjoy a high degree of autonomy, freedom, and self-determination. While this autonomy is inherently embedded in the software code of Web3, in Europe, national governments create the legal frameworks. Examples like eIDAS and Self-sovereign Identities (SSI) establish EU-wide standards that enable secure cross-border digital transactions.

At Datarella, we have actively participated in decentralized systems through our projects, most recently in the GAIA-X funding project moveID. The experiences gained lead to two key conclusions: The values and benefits of decentralized systems are recognizable and measurable, offering flexibility and innovation in dynamic environments. However, decentralized systems are not feasible or value-creating without a direct connection to centralized elements. This may sound contradictory at first, but it is not.

The Necessity of Centralized Elements in Decentralized Systems

Decentralized systems do not develop from within themselves; they always require a central idea or organization as the initial spark. Furthermore, a central entity must permanently handle tasks in governance, administration, and management. Without this, decentralized systems tend toward apathy or inactivity, as current incentive models do not ensure long-term constructive activity. A decentralized system remains active only as long as central functions provide the necessary incentives. Additionally, basic infrastructure must be created and operated – a task typically handled centrally, with costs distributed among participants.

From the perspective of organization theory, this aligns with contingency theory: There is no universally best structure; the choice between central and decentralized depends on the environment. In stable contexts, centralized systems provide efficiency and control, while decentralized ones promote agility in volatile markets. Henry Mintzberg describes in his organizational models that centralized structures (e.g., Machine Bureaucracy) are suitable for standardization, whereas decentralized ones (e.g., Adhocracy) foster innovations. Disadvantages of centralized systems include the lack of flexibility, while decentralized systems can lead to coordination issues.

Symbiosis as the Path to Success

In short, decentralized and centralized systems can form a beneficial symbiosis that compensates for the drawbacks of monolithic approaches and generates more prosperity for all participants. Hybrid models, as recommended in organization theory, combine stability with agility and are exceptionally sensitive in complex environments.

A necessary prerequisite for this symbiotic interplay is the ability and willingness of participants to understand the advantages and limitations of each system, along with the commitment to contribute to governance constructively. Only then do the positive outcomes emerge. Participants who see only the benefits of a monolithic structure should be excluded to maintain integrity.

At Datarella, we apply these insights in our data-driven solutions for health and sustainability, developing hybrid systems that link autonomy with reliable governance.

Do you have experience with such structures? Please share them in the comments!

The post Centralized and Decentralized Systems: A Symbiosis for Greater Prosperity – Insights from Organization Theory appeared first on DATARELLA.


Ontology

The Age of Digital Distrust

How ONT ID Can Restore Trust in the Face of Manipulative AI

The rise of generative artificial intelligence has opened unprecedented technological horizons, but it has also raised a fundamental question: how can we distinguish truth from falsehood in a world where content (text, images, videos, audio) can be manipulated or entirely created by algorithms? AI-generated deepfakes and fake news threaten to undermine trust in the media, institutions, and even human interactions. In the face of this digital truth crisis, innovative solutions are needed. This article explores how ONT ID, Ontology’s decentralized identity solution, can serve as a robust verification mechanism to authenticate content and restore trust in the digital ecosystem.

The Trust Challenge in the Age of Generative AI

Generative AI, capable of producing text, images, sounds, and videos with striking realism, has transformed the content creation landscape. However, this capability comes with a major downside: the ease with which it can be used to generate misleading or outright false information. Deepfakes of public figures, AI-generated news articles spreading misinformation, and automated online comments manipulating public opinion have become tangible threats.

The problem goes beyond the technical detection of manipulated content, which is an endless race between creators and AI detectors. The real issue is the erosion of trust. If users can no longer distinguish authentic content from synthetic content, the value of information itself diminishes. This has profound implications for democracy, commerce, education, and social relationships. Developing mechanisms to prove the origin and integrity of content and restore trust in the digital ecosystem has become imperative.

ONT ID: An Anchor of Trust in a Sea of Content

This is where ONT ID, Ontology’s decentralized identity solution, comes into play. Built on blockchain technology and adhering to W3C standards for Decentralized Identifiers (DID) and Verifiable Credentials (VC), ONT ID provides a robust framework to establish the provenance and authenticity of digital content. Instead of relying on centralized platforms that can be compromised or manipulated, ONT ID allows content creators to digitally sign their work with their decentralized identity.

Imagine a content creator whether a journalist, artist, or researcher using their ONT ID to cryptographically link their identity to each piece of content they produce. This digital signature, recorded on the blockchain, becomes immutable proof of the content’s origin. Any consumer can then verify this signature using an ONT ID-compatible tool, confirming that the content genuinely comes from the claimed source and has not been altered since its creation.

Moreover, ONT ID can be used to attest to the nature of the content. For example, a creator could label content as “AI-generated” or “human-verified.” This transparency allows users to make informed decisions about the credibility and nature of the content they consume. By providing a verifiable and decentralized anchor of trust, ONT ID offers a powerful means to combat misinformation and restore confidence in the digital ecosystem.
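To make the signing idea concrete, here is a minimal, self-contained sketch. It is not Ontology’s actual API: the types, the Curve25519 key scheme, and the function names are hypothetical stand-ins chosen only to illustrate how a creator could sign a content digest and how a consumer could verify origin and integrity.

```swift
import CryptoKit
import Foundation

// Hypothetical illustration, not Ontology's API: a creator signs the SHA-256
// digest of their content; anyone with the public key can verify it.
struct SignedContent {
    let content: Data
    let signature: Data
}

func sign(_ content: Data, with key: Curve25519.Signing.PrivateKey) throws -> SignedContent {
    // Sign the digest of the content rather than the raw bytes.
    let digest = SHA256.hash(data: content)
    let signature = try key.signature(for: Data(digest))
    return SignedContent(content: content, signature: signature)
}

func verify(_ signed: SignedContent, against publicKey: Curve25519.Signing.PublicKey) -> Bool {
    let digest = SHA256.hash(data: signed.content)
    return publicKey.isValidSignature(signed.signature, for: Data(digest))
}

let key = Curve25519.Signing.PrivateKey()
let article = Data("AI-generated summary, labeled as such.".utf8)
let signed = try sign(article, with: key)
print(verify(signed, against: key.publicKey))  // authentic content verifies

var tampered = signed.content
tampered.append(0x21)  // any alteration invalidates the signature
print(verify(SignedContent(content: tampered, signature: signed.signature),
             against: key.publicKey))
```

In the ONT ID model, the verifier would resolve the creator’s public key from their DID document on chain rather than receiving it out of band.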

Practical Implementation and Benefits

Applying ONT ID for content verification can take several forms:

- Digital content signatures: Content creation platforms (news outlets, press agencies, digital art studios) could integrate ONT ID to allow authors and artists to digitally sign their work. These signatures would be visible and verifiable by the public, adding a layer of trust.
- Marking AI-generated content: Generative AI tools could be required to integrate ONT ID to indelibly mark the content they produce as AI-generated. This allows consumers to immediately know whether they are interacting with human or synthetic content.
- Source identity verification: In journalism and research, ONT ID could be used to verify the identity of information sources, ensuring that the information comes from legitimate people or organizations rather than bots or malicious entities.
- Digital reputation and credibility: Over time, content creators who regularly sign their work with ONT ID and are recognized for the reliability of their information could build a verifiable digital reputation on the blockchain. This would encourage the production of authentic, high-quality content.

The advantages of this approach are manifold: a significant reduction in misinformation, restoration of public trust in media and online information, increased protection against fraud and identity theft, and a more transparent and accountable digital ecosystem. By shifting the burden from detection to proof of authenticity, ONT ID offers a proactive and sustainable solution to the trust challenge in the AI era.

Conclusion

The rise of generative AI has created a digital landscape where truth is increasingly difficult to discern. In the face of proliferating manipulated content and deepfakes, restoring trust has become a top priority. Ontology’s ONT ID provides a powerful, decentralized solution to this challenge. By enabling creators to digitally sign their content and providing consumers with tools to verify authenticity and provenance, ONT ID can serve as an essential anchor of trust in the digital ecosystem.

By adopting decentralized identity technologies like ONT ID, we can not only fight misinformation but also build a more transparent, secure, and trustworthy digital future, one in which truth can once again flourish.

The Age of Digital Distrust was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


paray

Banking’s Unwise Genius Act

For years now, stablecoins have quietly led the DeFi assault on the banking industry. Given that standard trading markets dwarf the minuscule volumes posted by stablecoins, the banking industry never openly feared them. Indeed, Big Banks chose to spin stablecoin buzz into PR by launching their own projects. For example, J.P. Morgan launched its inter-bank JPM …

auth0

AI in Financial Services Demands a New Trust Layer: Why Identity Security Is the Answer

As financial services adopt AI, a new risk emerges: autonomous AI agents accessing sensitive data. Learn why traditional identity security is no longer enough and how identity and access management for GenAI is crucial for secure innovation.

Wednesday, 20. August 2025

Dark Matter Labs

Many-to-Many: From Abstract Ideas to a Living System


Welcome back to our series on building the Many-to-Many System. In our first two posts, we explored the project’s origins, the challenge of structuring our complex knowledge, and the human pace required to do it well. We left off discussing the need to create a digital “guide” to help people navigate the deep and interconnected learnings from our work.

Over the past three months, that abstract idea has become a tangible reality. We have been working in parallel on two major outputs: a linear, narrative-driven Field Guide and a modular, interactive website. These two pieces have been in constant conversation, shaping each other as they evolve. In this post, we, Arianna, Gurden, and Michelle, share our reflections on bringing this part of the system to life, the power of a good design process, and what it feels like to see emergence in action.

Arianna: Maybe I can start. The last three months have been a back-and-forth conversation between our two major outputs: the Field Guide and the website. We were working on them in parallel, so every new page or piece of content for the Field Guide would influence the website, and the website would influence the Field Guide. A really interesting part was categorising all the tools. For you too, Michelle, I imagine writing in the Field Guide and then seeing the first draft of the website that Gurden built really helped clarify what should remain in a linear format and what could become an interactive element.

Working on copy and storytelling for the website is not easy: you have to go from the Field Guide, which has 80+ pages, to a format of three paragraphs maximum.

This double narration is key. The Field Guide is linear, so people can follow page by page, and we’ve put a lot of effort into diagrams that synthesise and distinguish each section clearly. On the website, we’re trying to simplify the experience with shortcuts and modular recalls so that everything is interconnected. That has been our core challenge and focus these last months.

Our two ways to discover and learn about the Many-to-Many System.

Gurden: Yeah, listening to you, Arianna, it’s really cool to watch this flow of content between the Field Guide and the website. We made these structural decisions months ago, sitting in a park during our workshop, and it’s a great feeling to see them validated now. We were a bit unsure at the time because we’re dealing with so much complexity: many, many, many things, as the name suggests! But we made a conscious decision to have both a linear and an interactive flow, and the process has proven that was the right call.

The Field Guide is not simply a PDF report, it has guiding elements, small interactive buttons, guiding diagrams, and visual elements to help navigate complexity.

As the Field Guide grew, the website structure grew with it. To make sure the website is structurally sound, I set up a skeleton database in Notion for the main content. To be honest, my expectations were a bit low when I asked Michelle and Annette to fill it, but big shoutout to Annette, her mind works just so quickly. She immediately got the object-oriented structure and filled it up, making the connections brilliantly. That gave me a distilled version of the content to populate our website via the Content Management System.

Screenshot from Sanity, our content management system, showing linkages between data.

I’m glad we didn’t try to perfect everything at once. We moved fast, built a rough first version, and brought it to life, which is now live internally. We’ve already done a few quick user tests. If we had just stayed in Figma, we’d still be there six months from now.

Michelle: I have so little to add because you’ve both covered most of it! My main addition is to highlight how everything has informed everything else. You’ve talked about the Field Guide and website, but they, in turn, provided enough structure for the database so Annette could go in and finish it. That process gave all of us a deeper understanding of our tools, examples, case studies and other assets we needed to create.

It’s a good example of real emergence. A lot of people talk about emergence when they’re really just describing chaos without enough boundaries. But here, we had pieces that genuinely formed the next piece, that formed the next piece. What was supposed to come out and what would be useful for other people was illuminated by going through this process. It gave birth to key assets we hadn’t yet imagined, like Angela’s “Experimenter’s Logbook”, which will be available soon on the website. I’m not sure that would have been conceived in the same way without this interplay. It’s a testament to what a good design process does, and even though we didn’t invent the design process, it was nice to be part of one that was so fruitful.

Index preview of the Field Guide, which will be available soon.

Gurden: I agree. The process exists, but I’ve seen so many teams not follow it well. And credit to you, Arianna, the interconnected diagrams you created are now coming to life. When you navigate the website, you see a problem and the related tools linked directly to it. These interlinkages are what make it a living system, not just a static page.

Of course, that’s also our next challenge: making sure the user experience works, that people don’t get lost. A website is a living organism, and new ideas will constantly come in. The hard part now is making conscious decisions about what we need to fix before launch versus what can wait for the next version.

Arianna: That brings me to another point: how we are all holding many hats. We aren’t a typical product team where each person has one defined role. The core team is tiny, and each of us holds three or four different roles. This has positives, we can communicate rapidly, and as designers and coders, we are deeply embedded in the content, thanks to the time Michelle and Annette took to teach us. But on the other side, by holding many roles, we have to compromise. We can’t excel at everything. So, for this first version, we might focus less on perfecting accessibility, for example, because our goal is to launch an alpha or beta version. When we have more time to focus, we can scale it and do it better.

Michelle: That’s a super good reflection on the human side of the process. So, to wrap up, we’ve now asked a set of close collaborators to give us feedback over the next month. Our hope is that the website, the core tools, and the Field Guide will be ready to share more widely in late September or mid-October. Then we’ll put it out into the world, get a wider set of feedback, and see what people think.

Our next step is to incorporate feedback from our close network before sharing it with all of you.

Thanks for following our journey. You can find our previous posts here and here and stay updated by joining the Beyond the Rules newsletter here.

And a big thanks, as always, to the other members of our team — Annette and Angela — who are key stewards of this work.

Many-to-Many: From Abstract Ideas to a Living System was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

IDnow becomes one of the first providers in Europe to achieve ETSI certification for eIDAS 2.0 compliance.

The certification reinforces IDnow’s role as a trusted, future-proof partner for regulated businesses and ensures that IDnow’s core solutions are fully aligned with current and upcoming EU compliance requirements, including EUDI Wallet readiness by 2027. 

IDnow has become one of the first companies in Europe to receive certification under the latest ETSI standard for remote identity proofing, which is a key requirement for eIDAS 2.0 compliance and EUDI Wallet readiness by 2027. 

This milestone cements IDnow’s position as a trusted and future-proof partner for regulated industries and ensures its solutions are aligned with both current and upcoming EU compliance frameworks. 

Certified for the future of European Digital Identity 

The certification, developed by the European Telecommunications Standards Institute (ETSI) and officially endorsed by the European Commission, confirms that several of IDnow’s core products meet the strict biometric integrity, security, and assurance levels required for digital onboarding in highly regulated sectors. These solutions are now certified as compliant with eIDAS 2.0 at the Extended Level of Identity Proofing (LoIP) – the highest level defined under the revised regulation. 

The certified identity verification methods include: 

- Expert-led video verification
- Automated identity verification
- NFC-based ID verification
- eID (electronic ID) verification
- EU Digital Identity (EUDI) Wallet verification

Why ETSI standards are so important for remote identity verification 

In achieving certification, IDnow demonstrated compliance with the following ETSI standards: 

- ETSI TS 119 461 V2.1.1 – Identity proofing component requirements
- ETSI EN 319 401 V3.1.1 – General policy requirements for Trust Service Providers
- ETSI EN 319 411-1 V1.5.0 – Requirements for TSPs issuing certificates
- ETSI EN 319 411-2 V2.6.0 – Requirements for TSPs issuing qualified certificates
- ETSI EN 319 412-2 V2.3.1 – Certificate profiles for legal persons
- ETSI EN 319 412-5 V2.4.1 – Certificate profiles for identity proofing

Addressing rising fraud with certified security 

With AI-powered fraud tactics like deepfakes and injection attacks growing rapidly in both sophistication and scale, IDnow’s certification arrives at a critical moment for digital security. According to a 2025 report by Pindrop, deepfake fraud attempts surged by over 1,300% in 2024, escalating from an average of one per month to seven per day. This alarming increase underscores the growing sophistication and frequency of AI-driven fraud, particularly targeting financial services and contact centers. The report also forecasts a 162% rise in deepfake-related fraud in 2025, highlighting the urgent need for robust verification solutions. 

The ETSI TS 119 461 standard provides essential safeguards through its strict requirements for features such as presentation attack detection (PAD), injection attack detection (IAD), biometric integrity assurance, and real-time fraud prevention – offering businesses and users protection at the highest level of assurance. 

Built for Europe’s regulatory future 

This certification reinforces IDnow’s commitment to providing secure, flexible and scalable identity verification solutions across Europe. The platform is uniquely positioned to support evolving compliance needs, including: 

- eIDAS 2.0
- Sixth Anti-Money Laundering Directive (AMLD6)
- EU Digital Identity (EUDI) Wallet readiness

“This latest certification confirms IDnow’s position as a trusted and future-proof technology partner for regulated businesses across Europe,” says Armin Berghaus, founder and Managing Director at IDnow. “It represents our intention for IDnow to continue to provide the most flexible and future-proof identity verification and fraud prevention platform for businesses navigating complex European compliance and customer experience demands.”  

By 2027, all banks operating in the EU will be required to work with providers certified under ETSI TS 119 461. This gives IDnow customers the confidence that their identity verification processes are not only secure and compliant today, but also ready for what’s next. 

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow


Okta

How to Build a Secure iOS App with MFA


Modern mobile applications require robust security solutions, especially when handling sensitive user data or enterprise-level access. Okta offers a powerful identity platform, and with the BrowserSignin module from its Swift SDK, adding secure login to your iOS app becomes scalable and straightforward.

In this post, you’ll learn how to:

- Set up your Okta developer account
- Configure your iOS app for authentication using best practices
- Customize the authentication experience with MFA policies
- Create an AuthService testable protocol
- Showcase a SwiftUI example on how to integrate the AuthService

Note: This guide assumes you’re comfortable working in Xcode with Swift.

If you want to skip the tutorial and run the project, you can follow the instructions in the project’s README.

Table of Contents

- Use Okta for OAuth 2.0 and OpenID Connect (OIDC)
- Prefer phishing-resistant authentication factors
- Create an iOS project with Okta’s mobile libraries for authentication
- Creating your Xcode project
- Authenticate your iOS app using OpenID Connect (OIDC) and OAuth 2.0 with Okta
- Add the OIDC configuration to your iOS app
- Manage authentication actions for your iOS app using the Okta Swift SDK
- Add handling for OAuth 2.0 and OIDC tokens and the authenticated session
- Use the auth service in your Swift app
- Add backend authorization using a custom resource server
- Set up a custom resource server for your mobile app
- Make authorized API requests from your iOS app
- Check out these resources about iOS, building secure mobile apps, and Okta mobile SDKs

Use Okta for OAuth 2.0 and OpenID Connect (OIDC)

The first step is registering your app in Okta as an OpenID Connect (OIDC) client using Authorization Code Flow with Proof Key for Code Exchange (PKCE), the most secure and mobile-friendly OAuth 2.0 flow. PKCE is a best practice for mobile apps to prevent authorization code interception attacks.
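The Okta SDK handles PKCE for you, but it helps to see what the flow does under the hood. The sketch below is illustrative only; the parameter names come from RFC 7636, not from the Okta SDK.

```swift
import CryptoKit
import Foundation

// Base64url encoding as required by RFC 7636: URL-safe alphabet, no padding.
func base64URLEncode(_ data: Data) -> String {
    data.base64EncodedString()
        .replacingOccurrences(of: "+", with: "-")
        .replacingOccurrences(of: "/", with: "_")
        .replacingOccurrences(of: "=", with: "")
}

// 1. The app generates a random code_verifier and keeps it on the device.
let verifierBytes = (0..<32).map { _ in UInt8.random(in: .min ... .max) }
let codeVerifier = base64URLEncode(Data(verifierBytes))

// 2. Only the SHA-256 hash (the code_challenge) is sent in the authorize request.
let codeChallenge = base64URLEncode(Data(SHA256.hash(data: Data(codeVerifier.utf8))))

// 3. When exchanging the authorization code for tokens, the app presents the
//    original code_verifier. An attacker who intercepts the authorization code
//    cannot complete the exchange without it.
print("code_challenge=\(codeChallenge)&code_challenge_method=S256")
```

This is why PKCE suits mobile apps: no client secret is stored in the binary, and an intercepted authorization code is useless on its own.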

Before you begin, you’ll need an Okta Integrator Free Plan account. To get one, sign up for an Integrator account. Once you have an account, sign in to your Integrator account. Next, in the Admin Console:

1. Go to Applications > Applications
2. Click Create App Integration
3. Select OIDC - OpenID Connect as the sign-in method
4. Select Native Application as the application type, then click Next
5. Enter an app integration name
6. Configure the redirect URIs:
   - Redirect URI: com.okta.{yourOktaDomain}:/callback
   - Post Logout Redirect URI: com.okta.{yourOktaDomain}:/
   (where {yourOktaDomain}.okta.com is your Okta domain name; your domain name is reversed to provide a unique scheme to open your app on a device)
7. In the Controlled access section, select the appropriate access level
8. Click Save

NOTE: When using a custom authorization server, you need to set up authorization policies. Complete these additional steps:

1. In the Admin Console, go to Security > API > Authorization Servers
2. Select your custom authorization server (default)
3. On the Access Policies tab, ensure you have at least one policy. If no policies exist:
   - Click Add New Access Policy
   - Give it a name like “Default Policy”
   - Set Assign to to “All clients”
   - Click Create Policy
4. For your policy, ensure you have at least one rule. If no rules exist:
   - Click Add Rule
   - Give it a name like “Default Rule”
   - Set Grant type is to “Authorization Code”
   - Set User is to “Any user assigned the app”
   - Set Scopes requested to “Any scopes”
   - Click Create Rule

For more details, see the Custom Authorization Server documentation.

Where are my new app's credentials?

Creating an OIDC Native App manually in the Admin Console configures your Okta Org with the application settings.

After creating the app, you can find the configuration details on the app’s General tab:

- Client ID: Found in the Client Credentials section
- Issuer: Found in the Issuer URI field for the authorization server that appears by selecting Security > API from the navigation pane

For example:

- Issuer: https://dev-133337.okta.com/oauth2/default
- Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta CLI Client or Okta PowerShell Module to automate this process. See this guide for more information about setting up your app.

Replace {yourOktaDomain} with your Okta domain.

Prefer phishing-resistant authentication factors

Every new Integrator Free Plan admin account must use the Okta Verify app by default to set up MFA (multi-factor authentication). We’ll retain the default settings for this project, but you can tailor the authentication policy for your organization’s needs. We recommend phishing-resistant factors, such as Okta Verify with biometrics and FIDO2 with WebAuthn. These configurations help defend against credential theft and phishing and align with Okta’s Secure Identity Commitment, standards like NIST SP 800-63, and industry regulations like SOC 2 or HIPAA.

- Prefer MFA or phishing-resistant factors for real users
- Tailor policies based on risk level, environment (dev vs prod), and user behavior

Thoughtfully configuring your authentication policies protects your users while maintaining a seamless login experience.

Create an iOS project with Okta’s mobile libraries for authentication

Before diving into integration, ensure you have the following prerequisites:

- Xcode version 15.0 or later. This guide assumes you’re comfortable working in Xcode and building iOS apps in Swift.
- Swift - This guide uses Swift 5+ features.
- Swift Package Manager (SPM) - We’ll use Swift Package Manager for managing dependencies. Ensure it’s available in Xcode.
- Node and npm installed locally to run the backend server

Creating your Xcode project

If you are starting from scratch, create a new iOS app:

1. Open Xcode
2. Go to File -> New -> Project
3. Select iOS App and select Next
4. Enter the name of the project
5. Set the Interface to SwiftUI or UIKit, depending on your preference. In this post, we will be using SwiftUI.
6. Select Next and save your project locally

You’re now ready to add Okta’s SDK into your project.

Authenticate your iOS app using OpenID Connect (OIDC) and OAuth 2.0 with Okta

To integrate the Okta SDK into your iOS app, follow these detailed steps using Swift Package Manager (SPM), the recommended and modern way to manage dependencies in Xcode.

Follow these steps:

1. Open the project if it’s not already open
2. Select File → Add Package Dependencies
3. In the search bar at the top right of the window that appears, enter the https://github.com/okta/okta-mobile-swift repository URL and press Enter. Xcode will fetch the package details.
4. Choose the latest version available (recommended) or the version you prefer
5. When prompted to choose the products to add, make sure to select your project next to BrowserSignin in the Add to Target column
6. Select Add Package

This package provides the full login UI experience and token handling utilities for OAuth 2.0 with PKCE. It’s the core component for authentication in your iOS app.

Once added, you’ll see the Okta SDK listed under your project’s Package Dependencies.

Add the OIDC configuration to your iOS app

To use the OktaBrowserSignin flow, initialize the shared client with your specific app credentials.

The cleanest and most scalable way to manage configuration is to use a property list file for Okta stored in your app bundle.

Create the property list for your OIDC and app config by following these steps:

1. Right-click on the root folder of the project
2. Select New File from Template (New File in legacy Xcode versions)
3. Ensure you have iOS selected in the top picker
4. Select the Property List template and select Next
5. Name the file Okta and select Create to create an Okta.plist file

You can edit the file in XML format by right-clicking and selecting Open As -> Source Code. Copy and paste the following code into the file.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>scopes</key>
    <string>openid profile offline_access</string>
    <key>redirectUri</key>
    <string>com.okta.{yourOktaDomain}:/callback</string>
    <key>clientId</key>
    <string>{yourClientID}</string>
    <key>issuer</key>
    <string>https://{yourOktaDomain}/oauth2/default</string>
    <key>logoutRedirectUri</key>
    <string>com.okta.{yourOktaDomain}:/</string>
</dict>
</plist>

Replace {yourOktaDomain} and {yourClientID} with the values from your Okta org.

With the configuration file in place, you can directly access the BrowserSignin shared object in code; it will already be allocated and ready for use.

Manage authentication actions for your iOS app using the Okta Swift SDK

We’ll build the core authentication layer for our app, the AuthService. This service handles login, logout, token refresh, and user info retrieval using the OktaBrowserSignin module.

Create a new folder named Auth under your project’s folder structure. We’ll use this folder to organize our authentication code. Inside that folder, create a new Swift file named AuthService.swift and define the protocol and class:

import BrowserSignin

protocol AuthServiceProtocol {
    var isAuthenticated: Bool { get }
    var idToken: String? { get }
    func tokenInfo() -> TokenInfo?
    func userInfo() async throws -> UserInfo?
    func signIn() async throws
    func signOut() async throws
    func refreshTokenIfNeeded() async throws
}

final class AuthService: AuthServiceProtocol {
    // Implementation will go here
}

After doing this, you will get an error message saying that the AuthService does not conform to protocol AuthServiceProtocol because we haven’t implemented the functions yet. We will implement the functions as we progress.

Create a folder named Models inside the Auth folder. Within the Models folder, create a new file named TokenInfo.swift, and add the code shown:

struct TokenInfo {
    // we will add properties in the next section
}

Next, we will add the signIn and signOut methods inside the AuthService class. With the Okta Swift SDK, handling user authentication is straightforward and secure – thanks to the built-in signIn and signOut methods in the BrowserSignin client. Let’s break down how to build these methods in your AuthService.

The signIn method

The signIn method redirects the user to authenticate using Okta, handles the PKCE flow, and retrieves the authentication tokens upon successful login. Open the AuthService class, find the comment Implementation will go here, replace the comment with the following code:

@MainActor
func signIn() async throws {
    BrowserSignin.shared?.ephemeralSession = true
    let tokens = try await BrowserSignin.shared?.signIn()
    if let tokens {
        _ = try? Credential.store(tokens)
    }
}

Let’s unpack this:

BrowserSignin.shared?.ephemeralSession = true

This property controls the type of browser session used for authentication:

If set to true, it forces an ephemeral browser session, meaning no cookies or session state will persist across authentication attempts. It’s like opening a private/incognito window for each login attempt.

If set to false, it shares the browser state with the system browser, allowing Okta to remember the user’s login state across sessions (for example, for single sign-on across apps).

In our demo, we set ephemeralSession = true to treat each login as a fresh authentication, which is ideal for testing.

signIn(from: window)

This function launches the Okta-hosted sign-in page. The window parameter provides context for where to present the login UI, typically your app’s current window if building in UIKit.

Credential.store(tokens)

After login, we store the tokens securely (e.g., access token, ID token, and refresh token) using Okta’s built-in Credential storage helper.

The signOut method

Signing out is also straightforward. We will proceed by adding it immediately below the signIn method in the AuthService class:

@MainActor
func signOut() async throws {
    guard let credential = Credential.default else { return }
    try await BrowserSignin.shared?.signOut(token: credential.token)
    try? credential.remove()
}

Here’s what happens:

We check if there’s a current credential by calling Credential.default.
We call signOut on the shared BrowserSignin instance, passing the current token for session revocation.
After a successful logout, we remove the credential from secure storage.

This ensures the user’s session is entirely revoked and cleared from the app and Okta’s backend.

Add handling for OAuth 2.0 and OIDC tokens and the authenticated session

Once we’ve set up authentication flows, we must handle token management and session state. This step ensures that your app knows when the user is authenticated, how to access their tokens, and how to refresh tokens when needed.

The protocol requires implementing two computed variables and three functions to help us manage the tokens and the session.

Add the following code in the implementation of the AuthService class right above the signIn method:

var isAuthenticated: Bool { return Credential.default != nil }

Let’s go through the code.

The isAuthenticated computed property checks whether there’s a valid token stored in the app:

It uses Credential.default, a singleton that securely stores the user’s tokens. If a valid token exists, the user is considered authenticated; otherwise, they are not.

Next, we’ll add the second helper computed property, which we will use to retrieve the user’s ID token. In the AuthService class, under the isAuthenticated property, add the following code:

var idToken: String? { return Credential.default?.token.idToken?.rawValue }

The idToken property retrieves the raw value of the ID token from the stored credential:

The ID token is a signed JSON Web Token (JWT) containing user identity information, such as the user’s email, name, and subject (sub).
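The SDK parses and validates the ID token for you; never roll your own verification. Purely to illustrate the format: a JWT is three base64url segments (header.payload.signature), and its payload decodes to plain JSON. A minimal sketch, using a helper name (decodeJWTPayload) we made up for this example:

```swift
import Foundation

// Illustration only: decodes a JWT's payload WITHOUT verifying the signature.
// Always rely on the SDK for real validation.
func decodeJWTPayload(_ jwt: String) -> [String: Any]? {
    let segments = jwt.split(separator: ".")
    guard segments.count == 3 else { return nil }

    // Convert base64url to standard Base64 and restore the '=' padding
    var base64 = String(segments[1])
        .replacingOccurrences(of: "-", with: "+")
        .replacingOccurrences(of: "_", with: "/")
    while base64.count % 4 != 0 { base64.append("=") }

    guard let data = Data(base64Encoded: base64),
          let object = try? JSONSerialization.jsonObject(with: data),
          let claims = object as? [String: Any]
    else { return nil }
    return claims
}
```

Running this on a real ID token would surface claims such as iss, sub, aud, exp, and preferred_username.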

We successfully implemented the computed properties required by the protocol. Next, we’ll add the implementation for the three helper functions.

Tokens always expire, which means that at some point they are no longer valid and we must refresh them. Luckily, Okta’s SDK provides a solution for this need: we can leverage the refresh function, which is part of the Credential object.

Inside the AuthService class, right after the signOut method, add the refreshTokenIfNeeded() function:

func refreshTokenIfNeeded() async throws {
    guard let credential = Credential.default else { return }
    try await credential.refresh()
}

The refreshTokenIfNeeded method ensures that tokens are up-to-date by attempting a token refresh when necessary:

It calls the Credential.refresh() method, which uses the refresh token (if available) to get a new access token and ID token. This helps avoid token expiration issues that could interrupt the user’s session.

At this point, we’ll add an empty implementation for the other two functions, which will later give us information about the token and the user so we can present it on screen. Add the following code after the refreshTokenIfNeeded() function:

func tokenInfo() -> TokenInfo? {
    return nil
}

func userInfo() async throws -> UserInfo? {
    return nil
}

With this added, we resolved the errors we saw in AuthService, and you’ll be able to build the project successfully.

Use the auth service in your Swift app

Now that we’ve built the AuthService to handle sign in, sign out, token management, and user info retrieval, let’s see how to integrate it into your app’s UI.

Use AuthService in your views

Since this app is about authentication, rename the auto-generated view ContentView to AuthView and rename the file to match. Don’t forget to rename all the existing and auto-generated references to ContentView and use AuthView instead.

Next, in the same folder as the AuthView, we will create the AuthViewModel. The AuthViewModel handles all user actions and authentication:

import Foundation
import Observation
import BrowserSignin

@Observable
final class AuthViewModel {
    // MARK: - Dependencies

    /// This is the service that handles all the sign-in, sign-out, token, and user info logic.
    private let authService: AuthServiceProtocol

    // MARK: - UI State Properties

    /// True if the user is currently logged in.
    var isAuthenticated: Bool = false

    /// The user's ID token (used for secure backend communication).
    var idToken: String?

    /// Shows a loading spinner while something is happening in the background.
    var isLoading: Bool = false

    /// If something goes wrong (e.g., login fails), the error message will show in the UI.
    var errorMessage: String?

    /// This holds a message returned from the resource server.
    var serverMessage: String?

    // MARK: - Initialization

    /// Create the view model and immediately update the UI with the current authentication status.
    init(authService: AuthServiceProtocol = AuthService()) {
        self.authService = authService
        updateUI()
    }

    // MARK: - UI State Management

    /// Updates the `isAuthenticated` and `idToken` values from the authentication service.
    func updateUI() {
        isAuthenticated = authService.isAuthenticated
        idToken = authService.idToken
    }

    // MARK: - Authentication

    /// Called when the user taps the "Sign In" or "Sign Out" button.
    /// Signs the user in or out, updates the UI, and handles any errors.
    @MainActor
    func handleAuthAction() async {
        setLoading(true)
        defer { setLoading(false) }
        do {
            if isAuthenticated {
                // User is signed in → sign them out
                try await authService.signOut()
            } else {
                // User is signed out → sign them in
                try await authService.signIn()
            }
            updateUI()
        } catch {
            errorMessage = error.localizedDescription
        }
    }

    // MARK: - Token Handling

    /// Refreshes the user's token if it's about to expire.
    /// Keeps the user logged in longer without needing to manually sign in again.
    @MainActor
    func refreshToken() async {
        setLoading(true)
        defer { setLoading(false) }
        do {
            try await authService.refreshTokenIfNeeded()
            updateUI()
        } catch {
            errorMessage = error.localizedDescription
        }
    }

    // MARK: - User Info

    /// Requests user information (like name, email, etc.) from the authentication service.
    @MainActor
    func fetchUserInfo() async -> UserInfo? {
        do {
            let userInfo = try await authService.userInfo()
            return userInfo
        } catch {
            errorMessage = error.localizedDescription
            return nil
        }
    }

    // MARK: - Token Info

    /// Retrieves token metadata like expiry time or claims.
    /// Returns nil if no token is available.
    func fetchTokenInfo() -> TokenInfo? {
        guard let tokenInfo = authService.tokenInfo() else { return nil }
        return tokenInfo
    }

    // MARK: - Helpers

    /// Sets the loading state (used to show/hide a spinner in the UI).
    private func setLoading(_ value: Bool) {
        isLoading = value
    }
}

Next, we must extend the AuthView to use the view model and all the properties and functions we added. This view will change depending on whether the user is authenticated and will incorporate displaying the ID token and a button to refresh the token. Open AuthView.swift and replace the code with the following.

import SwiftUI
import BrowserSignin

/// The main authentication screen that shows the current login state,
/// allows the user to sign in or out, and access token/user info and server message.
struct AuthView: View {
    // View model manages all auth logic and state
    @State private var viewModel = AuthViewModel()

    // Presentation control flags for full-screen modals
    @State private var showTokenInfo = false

    // Holds the fetched user info data when available
    // and presents the UserInfoView when assigned a value
    @State private var userInfo: UserInfoModel?

    var body: some View {
        VStack(spacing: 20) {
            statusSection
            tokenSection
            authButton
            if viewModel.isAuthenticated {
                refreshTokenButton
            }
            if viewModel.isLoading {
                ProgressView()
            }
        }
        .padding()
        .onAppear {
            // Sync UI state on view load
            viewModel.updateUI()
        }
        .alert("Error", isPresented: .constant(viewModel.errorMessage != nil)) {
            Button("OK", role: .cancel) {
                viewModel.errorMessage = nil
            }
        } message: {
            // Show error message if available
            if let message = viewModel.errorMessage {
                Text(message)
            }
        }
    }
}

private extension AuthView {
    /// Displays "Logged In" or "Logged Out" depending on current state.
    var statusSection: some View {
        Text(viewModel.isAuthenticated ? "✅ Logged In" : "🔒 Logged Out")
            .font(.system(size: 24, weight: .medium))
            .multilineTextAlignment(.center)
    }

    /// Shows the user's ID token in small text (only when authenticated).
    var tokenSection: some View {
        Group {
            if let token = viewModel.idToken, viewModel.isAuthenticated {
                Text("ID Token:\n\(token)")
                    .font(.system(size: 12))
                    .multilineTextAlignment(.center)
            }
        }
    }

    /// Main login/logout button. Text and action change based on login state.
    var authButton: some View {
        Button(viewModel.isAuthenticated ? "Sign Out" : "Sign In") {
            Task { await viewModel.handleAuthAction() }
        }
        .buttonStyle(.borderedProminent)
        .disabled(viewModel.isLoading)
    }

    /// Refreshes the token on demand.
    var refreshTokenButton: some View {
        Button("🔄 Refresh Token") {
            Task { await viewModel.refreshToken() }
        }
        .font(.system(size: 14))
        .disabled(viewModel.isLoading)
    }
}

struct UserInfoModel: Identifiable {
    let id = UUID()
    let user: UserInfo
}

With this in place, you can run the application and test the authentication. Currently, we are not using the TokenInfo and the UserInfo from the ViewModel because we will expand the view in the next section.

Read token info

After successfully authenticating a user, it’s helpful to extract meaningful details from the ID token and present them in a user-friendly format. For this purpose, we created a TokenInfo model in the previous sections. It will be initialized from the ID token and includes a toString() function to generate a nicely formatted description of the token data for display in the UI.

Open TokenInfo.swift and add the code shown.

import Foundation
import BrowserSignin

struct TokenInfo {
    var idToken: String
    var tokenIssuer: String
    var preferredUsername: String
    var authTime: String?
    var issuedAt: String?

    init?(idToken: JWT) {
        self.idToken = idToken.rawValue
        self.tokenIssuer = idToken.issuer ?? "No Issuer found"
        self.preferredUsername = idToken.preferredUsername ?? "No preferred_username found"

        let formatter = DateFormatter()
        formatter.dateStyle = .medium
        formatter.timeStyle = .medium

        if let authTime = idToken.authTime {
            self.authTime = formatter.string(from: authTime)
        }
        if let issuedAt = idToken.issuedAt {
            self.issuedAt = formatter.string(from: issuedAt)
        }
    }

    func toString() -> String {
        var result = ""
        result.append("ID Token: \(idToken)")
        result.append("\n")
        result.append("Preferred username: \(preferredUsername)")
        result.append("\n")
        result.append("Token Issuer: \(tokenIssuer)")
        result.append("\n")
        if let authTime {
            result.append("Auth time: \(authTime)")
            result.append("\n")
        }
        if let issuedAt {
            result.append("Issued at: \(issuedAt)")
            result.append("\n")
        }
        return result
    }
}

In the previous sections, we introduced two methods for fetching information about the token and the authenticated user. However, we left their implementation empty. It’s now time to implement those functions, and we will start by implementing the tokenInfo() function.

Navigate to your AuthService class and in there find the tokenInfo() function, which should look something like this:

func tokenInfo() -> TokenInfo? { return nil }

To initialize our TokenInfo model, we need the ID token. Okta’s Swift SDK lets us fetch the ID token directly from Credential.default. Remove the return nil from the function implementation and add the following code:

func tokenInfo() -> TokenInfo? { guard let idToken = Credential.default?.token.idToken else { return nil } return TokenInfo(idToken: idToken) }

This implementation extracts the ID token from the default Credential and tries to instantiate the TokenInfo object.

Next, we must implement the second empty function introduced in the previous sections, userInfo(). We’ll use the SDK’s UserInfo model to pass the data around. Replace the existing implementation of userInfo() with the code shown.

func userInfo() async throws -> UserInfo? {
    if let userInfo = Credential.default?.userInfo {
        return userInfo
    } else {
        do {
            guard let userInfo = try await Credential.default?.userInfo() else {
                return nil
            }
            return userInfo
        } catch {
            return nil
        }
    }
}

If your Okta setup includes them, you could extend this method to extract more claims, such as email, given name, family name, or custom claims.

With this code in place, we need to display the information to the user somehow in the UI. First, we’ll create a TokenInfoView to display all the information we fetched previously. Create a new Swift file in the root folder of your application and name it TokenInfoView.swift. After creating the file, add the following code:

import SwiftUI

struct TokenInfoView: View {
    let tokenInfo: TokenInfo
    @Environment(\.dismiss) var dismiss

    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 20) {
                Button {
                    dismiss()
                } label: {
                    Image(systemName: "xmark.circle.fill")
                        .resizable()
                        .foregroundStyle(.black)
                        .frame(width: 40, height: 40)
                        .padding(.leading, 10)
                }
                Text(tokenInfo.toString())
                    .font(.system(.body, design: .monospaced))
                    .padding()
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
        }
        .background(Color(.systemGroupedBackground))
        .navigationTitle("Token Info")
        .navigationBarTitleDisplayMode(.inline)
    }
}

Proceed with adding one more Swift file named UserInfoView.swift. This view displays previously fetched information about the User. In your newly created file, add the following code:

import SwiftUI
import BrowserSignin

struct UserInfoView: View {
    let userInfo: UserInfo
    @Environment(\.dismiss) var dismiss

    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 20) {
                Button {
                    dismiss()
                } label: {
                    Image(systemName: "xmark.circle.fill")
                        .resizable()
                        .foregroundStyle(.black)
                        .frame(width: 40, height: 40)
                        .padding(.leading, 10)
                }
                Text(formattedData)
                    .font(.system(size: 14))
                    .frame(maxWidth: .infinity, alignment: .leading)
                    .padding()
            }
        }
        .background(Color(.systemBackground))
        .navigationTitle("User Info")
        .navigationBarTitleDisplayMode(.inline)
    }

    private var formattedData: String {
        var result = ""
        result.append("Name: " + (userInfo.name ?? "No Name set"))
        result.append("\n")
        result.append("Username: " + (userInfo.preferredUsername ?? "No Username set"))
        result.append("\n")
        if let updatedAt = userInfo.updatedAt {
            let dateFormatter = DateFormatter()
            dateFormatter.dateStyle = .medium
            dateFormatter.timeStyle = .short
            let date = dateFormatter.string(for: updatedAt)
            result.append("Updated at: " + (date ?? ""))
        }
        return result
    }
}

Finally, we need to add some actions to the AuthView to see the views we just created. In the AuthView class at the end of the file, you will find the private extension that we previously defined. After the refreshTokenButton in the private extension of AuthView, add the following buttons:

/// Opens the full-screen view showing token info.
var tokenInfoButton: some View {
    Button {
        showTokenInfo = true
    } label: {
        Image(systemName: "info.circle")
            .foregroundColor(.blue)
    }
    .disabled(viewModel.isLoading)
}

/// Loads user info and presents it full screen.
var userInfoButton: some View {
    Button("👤 User Info") {
        Task {
            if let user = await viewModel.fetchUserInfo() {
                await MainActor.run {
                    userInfo = UserInfoModel(user: user)
                }
            }
        }
    }
    .font(.system(size: 14))
    .disabled(viewModel.isLoading)
}

Now that we have the buttons implemented, we need to add them to the body of AuthView so that the user can see them and click them. Scroll to the top of the file and find struct AuthView:View. Add both buttons right after refreshTokenButton, and then the VStack in your body should look like this:

VStack(spacing: 20) {
    statusSection
    tokenSection
    authButton
    if viewModel.isAuthenticated {
        refreshTokenButton
        tokenInfoButton // tokenInfoButton added here
        userInfoButton  // userInfoButton added here
    }
    if viewModel.isLoading {
        ProgressView()
    }
}

Within the definition of body, we need to add two view modifiers to be able to see the TokenInfoView and UserInfoView. Add the following code right after the closing brace of the message: {} closure:

// Show Token Info full screen
.fullScreenCover(isPresented: $showTokenInfo) {
    if let tokenInfo = viewModel.fetchTokenInfo() {
        TokenInfoView(tokenInfo: tokenInfo)
    }
}
// Show User Info full screen
.fullScreenCover(item: $userInfo) { info in
    UserInfoView(userInfo: info.user)
}

Now, if you run the application, you should be able to click on the Info button to get the token information and the User Info button to get the user information.

And there you have it! 🎉 We built a sample app from scratch using Okta’s new Swift SDK and the BrowserSignin module to show the authenticated user’s ID claims. By following these steps, you’ve learned how to:

✅ Configure Okta and set up your application
✅ Implement a robust AuthService to handle login, logout, and token management
✅ Build a SwiftUI interface that displays user info and handles authentication flows seamlessly

With just a few lines of code, you have a fully functional, secure login flow integrated into your iOS app – no more OAuth headaches or token handling nightmares.

Authentication is the first step in an app, but we want to display data from a backend resource securely.

Add backend authorization using a custom resource server

If you want to go beyond authentication and add authorization checks for your APIs, we can experiment using Okta’s Node.js Resource Server example as a starting point.

Here’s how to connect your iOS app to a backend that validates access tokens:

Set up a custom resource server for your mobile app

Clone the example Node.js resource server:

git clone https://github.com/okta/samples-nodejs-express-4.git
cd samples-nodejs-express-4
npm ci

Open the project in an IDE such as Visual Studio Code. I like Visual Studio Code because it has a built-in terminal, but any editor works for making the required code changes. Open resource-server/server.js and look for the configuration block where oktaJwtVerifier is initialized. Update it like this:

const oktaJwtVerifier = new OktaJwtVerifier({
  issuer: 'https://{yourOktaDomain}/oauth2/default',
  clientId: '{yourClientID}',
});

Replace the {yourOktaDomain} with your Okta org domain, and replace the {yourClientID} with the client ID of your iOS project.

Serve the resource server by running the following command in the terminal.

npm run resource-server

You should see your server running locally at: http://localhost:8000/

This server will validate incoming access tokens and respond with two messages if the token is valid:

{ "messages":[ { "date":"2025-07-03T19:06:59.799Z", "text":"I am a robot." }, { "date":"2025-07-03T18:06:59.799Z", "text":"Hello, world!" } ] }

Let’s create the model conforming to this payload.

Create a file named MessagesResponse.swift in the Auth/Models folder and add the code.

import Foundation

struct MessageResponse: Codable {
    let messages: [Message]
}

struct Message: Codable {
    let date: String
    let text: String
}

Make authorized API requests from your iOS app

To call the resource server API from our iOS code, we must first implement a function inside our AuthService to fetch messages.

Open the AuthService file and add one more function at the end of the AuthServiceProtocol:

protocol AuthServiceProtocol {
    var isAuthenticated: Bool { get }
    var idToken: String? { get }
    func tokenInfo() -> TokenInfo?
    func userInfo() async throws -> UserInfo?
    func signIn() async throws
    func signOut() async throws
    func refreshTokenIfNeeded() async throws
    func fetchMessageFromBackend() async throws -> String // added
}

Because we introduced a new function to the protocol, it requires an implementation. In the AuthService class, immediately after the implementation of userInfo(), add the following code:

@MainActor
func fetchMessageFromBackend() async throws -> String {
    guard let credential = Credential.default else {
        return "Not authenticated."
    }
    var request = URLRequest(url: URL(string: "http://localhost:8000/api/messages")!)
    request.httpMethod = "GET"
    await credential.authorize(&request)

    let (data, _) = try await URLSession.shared.data(for: request)
    let decoder = JSONDecoder()
    let response = try decoder.decode(MessageResponse.self, from: data)

    if let randomMessage = response.messages.randomElement() {
        return "\(randomMessage.text)"
    } else {
        return "No messages found."
    }
}

With this, you will get an error that some types aren’t found. That’s because we must import Foundation into our AuthService.swift file, just below import BrowserSignin.

import BrowserSignin
import Foundation // added

Okta’s iOS SDK provides a handy method for automatically adding your access token as an Authorization header on a URL request.
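Under the hood, this amounts to attaching a standard OAuth 2.0 Bearer header (RFC 6750). As a rough sketch (the helper below is ours, not part of the SDK), the manual equivalent is:

```swift
import Foundation

// Hypothetical manual equivalent of credential.authorize(&request):
// attach the access token as an RFC 6750 Bearer header.
func addBearerToken(_ accessToken: String, to request: inout URLRequest) {
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
}
```

In app code, prefer the SDK’s authorize method so the correct stored token is always used.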

We need to go back to the AuthViewModel and add a function to call fetchMessageFromBackend() and set the server message to our serverMessage property of the viewModel. Add the following code right after fetchTokenInfo():

// MARK: - Server Messages

/// Asks the backend for a message and saves it for display in the UI.
@MainActor
func fetchMessage() async {
    setLoading(true)
    defer { setLoading(false) }
    do {
        let message = try await authService.fetchMessageFromBackend()
        serverMessage = message
    } catch {
        errorMessage = error.localizedDescription
    }
}

We need to extend the AuthView to use this function and show the fetched server message as an alert to the user. For this purpose, go to the AuthView file and in the extension just below the userInfoButton, we will add one more button like this:

/// Requests a message from the backend and shows it in the UI.
var getMessageButton: some View {
    Button("🎁 Get Message") {
        Task { await viewModel.fetchMessage() }
    }
    .font(.system(size: 14))
    .disabled(viewModel.isLoading)
}

Next, we need to present this button in the view. In the body of AuthView, let’s add getMessageButton, and the body will look like this:

VStack(spacing: 20) {
    statusSection
    tokenSection
    authButton
    if viewModel.isAuthenticated {
        refreshTokenButton
        tokenInfoButton
        userInfoButton
        getMessageButton // getMessageButton added here
    }
    if viewModel.isLoading {
        ProgressView()
    }
}

Lastly, we’ll alert the user with the message contents received from our backend if the authentication is successful. To do so, we need to add the .alert view modifier to the body of AuthView after the final fullScreenCover closing bracket, like this:

// Show Alert with the fetched message
.alert("Message Response", isPresented: .constant(viewModel.serverMessage != nil)) {
    Button("OK", role: .cancel) {
        viewModel.serverMessage = nil
    }
} message: {
    // Show message if available
    if let message = viewModel.serverMessage {
        Text(message)
    }
}

With all this in place, you’ll see a message alert when pressing the Get Message button.

This is the recommended approach for securely connecting your mobile app to backend APIs using OAuth 2.0 and JWT validation. You can find the completed project in a GitHub repo.

🎉 And that’s it! Your iOS app now has authentication and calls a backend API with the access token for fully integrated authorization verification.

Check out these resources about iOS, building secure mobile apps, and Okta mobile SDKs

If you found this post interesting, you may want to check out these resources:

Introducing the New Okta Mobile SDKs A History of the Mobile SSO (Single Sign-On) Experience in iOS

Follow OktaDev on Twitter and subscribe to our YouTube channel to learn about secure authentication and other exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!


Recognito Vision

Understanding the Advantages and Disadvantages of Facial Recognition Technology


Facial recognition technology has become increasingly common in our daily lives. From unlocking phones to airport security, it is changing the way we identify and verify people. This technology uses advanced software to analyze facial features and match them with existing databases, making many tasks faster and more secure.

While facial recognition offers convenience and improved safety, it also comes with challenges. Privacy concerns, accuracy issues, and ethical questions are important factors to consider before adopting this technology. Understanding the advantages and disadvantages of facial recognition can help businesses and individuals make informed choices.


What Is Facial Recognition Technology?

Facial recognition technology is a type of software that identifies or verifies a person by analyzing their facial features. Cameras with facial recognition capture images or video of faces. The software then compares these images to a database to identify a match.

It is widely used in security, banking, law enforcement, and even marketing. While it offers convenience, it also raises concerns about privacy and accuracy.


Advantages of Facial Recognition Technology

Facial recognition technology offers several important benefits. Here are some of the key advantages:

1. Enhanced Security

One of the main advantages of facial recognition is improved security. It can identify individuals in crowded places, detect unauthorized access, and prevent fraud. For businesses, this technology can help protect sensitive areas.

Banks use facial recognition to verify clients for online transactions. Airports and government buildings use it to control access and ensure safety.

By providing a layer of security beyond passwords or ID cards, facial recognition technology reduces the risk of theft and unauthorized entry.


2. Convenience and Speed

Facial recognition can make daily tasks faster and easier. Unlike traditional authentication methods, it does not require remembering passwords or carrying cards.

Phones and laptops can be unlocked instantly. Airports can speed up boarding with automated facial scans. Offices can track attendance without manual checks.

This convenience saves time for both individuals and organizations.


3. Contactless Identification

In the era of health concerns and pandemics, contactless systems have become essential. Facial recognition is a non-intrusive, touch-free technology. It reduces the need for physical contact, which is safer in public spaces and healthcare environments.

This feature makes it ideal for hospitals, banks, airports, and retail stores.


4. Law Enforcement and Public Safety

Facial recognition technology can assist law enforcement agencies in tracking and identifying suspects. It is useful for finding missing persons, preventing crimes, and investigating incidents.

Cameras with facial recognition can scan crowds in real time, alerting authorities to potential threats.


5. Integration with Other Systems

Facial recognition can work with other technologies, such as security cameras, mobile apps, and access control systems. This integration allows for smarter solutions. For example, smart home systems can use facial recognition to unlock doors for family members but restrict access to strangers.

Platforms like Recognito make it easy to integrate facial recognition into existing systems efficiently.


Disadvantages of Facial Recognition Technology

Despite its advantages, facial recognition technology has limitations and potential risks. Here are some of the main disadvantages:

1. Privacy Concerns

One of the biggest disadvantages of facial recognition is the potential invasion of privacy. Constant surveillance can make people feel watched and uncomfortable.

Data collected through facial recognition can be misused if not properly secured. Some governments and companies have faced criticism for using this technology without consent.


2. Risk of Misidentification

Facial recognition is not 100% accurate. Lighting, camera quality, facial expressions, and changes in appearance can affect results. Misidentification can lead to wrongful accusations or denied access.

This is a serious concern, especially in law enforcement and security applications.

 

3. High Costs

Facial recognition requires good cameras, software, and secure storage, which can be expensive. Maintenance and upgrades also add to the cost. Tools like Recognito offer cost-effective solutions, making it easier for businesses to adopt without overspending.

 

4. Ethical and Legal Issues

Facial recognition raises ethical questions. How and where should the technology be used? What limits should be set? Different countries have different laws regarding the use of facial recognition, making compliance challenging.

Improper use could lead to legal penalties or public backlash.

 

5. Potential for Bias

Some facial recognition systems have shown bias against certain ethnic groups or genders. This can result in unfair treatment, especially in hiring processes, law enforcement, or financial services.

Developers are working to reduce bias, but it remains a concern.

 

Balancing the Pros and Cons

When considering this technology, it is essential to carefully weigh the advantages and disadvantages of facial recognition. For example, businesses may benefit from enhanced security and convenience. However, they must also address privacy, accuracy, and ethical concerns.

Public awareness, transparency, and regulations play a key role in ensuring facial recognition is used responsibly.

 

The Future of Facial Recognition

Facial recognition technology continues to evolve. Advances in artificial intelligence and machine learning are improving accuracy and reducing bias. We are likely to see more industries adopt this technology for secure, efficient, and contactless operations.

However, its future will also depend on legal frameworks and public acceptance. Companies must use it responsibly and prioritize the protection of user data.

 

Conclusion

The advantages and disadvantages of facial recognition highlight both its strengths and limitations. Facial recognition technology offers many advantages, including better security, convenience, and integration with modern systems. At the same time, it comes with risks such as privacy concerns, misidentification, and ethical challenges.

For businesses or developers looking for facial recognition solutions, platforms like Recognito provide advanced facial recognition SDKs. These tools allow companies to integrate secure and accurate facial recognition into their applications while focusing on privacy and compliance.

By understanding the pros and cons of facial recognition, individuals and organizations can make smarter decisions about adopting this technology in a safe and responsible way.

 

Frequently Asked Questions

 

1. Can facial recognition technology invade personal privacy?

Yes, facial recognition can raise privacy concerns since it involves surveillance and data collection. If misused, it may track people without consent. Strong regulations and secure data storage are vital to protect privacy.

 

2. Is facial recognition technology 100% accurate?

No, it isn’t fully accurate. Lighting, camera quality, or changes in appearance can cause errors. AI has improved accuracy, but misidentifications still occur, especially in law enforcement.

 

3. Can facial recognition be integrated with smart home devices?

Yes, facial recognition integrates well with smart homes. It can unlock doors for family members, restrict strangers, and personalize settings.

 

4. Can facial recognition technology be fooled by photos or masks?

Sometimes. Older systems may be tricked by photos or masks. Modern solutions use liveness detection and 3D imaging, making it much harder to bypass.

 

5. How much does it cost to implement a facial recognition system?

Costs vary by scale. Small setups may cost a few thousand dollars, while large projects can reach hundreds of thousands. Hardware, software, and maintenance all influence the total price.


FastID

Why Paying Copyright Holders for AI Training is Essential

AI and creator rights don’t need to clash. A fair, consent-based model can drive innovation without exploiting creative work.

Tuesday, 19. August 2025

Indicio

Governments can now directly issue Digital Passport Credentials using Indicio Proven

The post Governments can now directly issue Digital Passport Credentials using Indicio Proven appeared first on Indicio.
Indicio Proven adds Digital Passport Credentials aligned to ICAO DTC-2 Specifications

SEATTLE / AUGUST 19, 2025 — Governments are now able to issue Digital Passport Credentials directly to their citizens using Indicio Proven. These credentials align with the International Civil Aviation Organization (ICAO) specifications for government-issued Digital Travel Credentials (DTC-2).

Indicio developed and successfully deployed the world’s first Digital Passport Credential, which involved travelers deriving a Verifiable Credential from the electronic chip in their passport and combining it with face mapping, a liveness check, and document validation. This Digital Passport Credential aligned with ICAO DTC-1 specifications and allowed for preauthorized travel and seamless border crossing.

By combining authenticated biometrics with Verifiable Credentials, Indicio transformed portable digital identity and provided a simple way to mitigate the growing risks of biometric and AI-driven identity fraud. In recognition, Acuity Market Intelligence’s 2024 Prism Report described Indicio’s solution as “masterful.”

Indicio has now developed the next step in “government-grade” digital identity technology — Digital Passport Credentials that are issued directly by governments to accompany physical passports. These credentials follow the DTC-2 specifications outlined by ICAO.

A traveler is now able to carry a cryptographically secure digital passport on their smartphone, smartwatch, or fob linked to their existing physical passport. This can be instantly verified with simple software when crossing borders or presented anywhere across the travel ecosystem, including bookings, check-in, boarding, and hotel arrival.

“Our mission is to create digital identities that make life easier, safer, and more streamlined for everyone,” said Heather Dahl, CEO of Indicio. “We’ve  shown how easy it is to create privacy-preserving, government-grade digital identity. We’ve shown the kinds of transformation this can achieve — the benefits to airlines, airports, and travelers that follow from having a fast and reliable way to streamline identity authentication. Now, we’re making it easy for governments to issue interoperable Digital Passport Credentials, secure borders, and welcome tourists.” 

Indicio Proven is the world’s most advanced system for creating and deploying decentralized identity and Verifiable Credentials in interoperable workflows. It lets users choose among SD-JWT VC, AnonCreds, and mdoc/mDL credential types, and provides customizable schemas, mobile SDKs for Android and iOS, a white-label digital wallet, field-leading mediation to handle issuance and verification at any scale, and a global ledger network to support deployments.

For Digital Passport Credentials, Indicio Proven’s Issuer software is easy to implement into existing systems and to combine with biometric infrastructure and vendors. Verifier software is available in server or mobile configurations. Implementation is rapid, scaling is simple, and data-protection compliance is straightforward. Above all, Indicio Proven provides highly cost-effective ways to create both enterprise and public sector digital infrastructure that meets current and evolving business, consumer, and traveler needs.

To see a demonstration of Indicio Proven’s Digital Passport Credentials and to learn more about our government and border solutions, contact our team here.

 

###



1Kosmos BlockID

Why I’m More Bullish Than Ever on 1Kosmos

As CMO of 1Kosmos, I’ve had a front-row seat to watch this company evolve over the past five years. When we founded the company, the cybersecurity world was still obsessed with building bigger walls around the perimeter. Password managers were the hot solution. Multi-factor authentication was cutting edge. We saw something different coming. Identity would … Continued

As CMO of 1Kosmos, I’ve had a front-row seat to watch this company evolve over the past five years. When we founded the company, the cybersecurity world was still obsessed with building bigger walls around the perimeter. Password managers were the hot solution. Multi-factor authentication was cutting edge.

We saw something different coming. Identity would become the new perimeter. Passwords would become the liability, not the solution. And users would demand authentication that was both more secure and more convenient than anything available.

The market needed time to catch up to our vision. But it has, in a big way.

The Market Caught Up

Our recent $57 million Series B brings our total funding to over $72 million. This isn’t just validation of our technology. It’s proof that the market has fundamentally shifted. Our recent $194.5 million agreement for Login.gov through Carahsoft shows that enterprise buyers are ready for what we’ve been building.

The timing makes sense. Every week brings news of another devastating breach, another ransomware attack, another company targeted by AI-generated deepfakes. Traditional passwords and even SMS-based two-factor authentication aren’t cutting it anymore. We’re seeing North Korean operatives deepfake their way into US companies through remote interviews, and social engineers extracting hundreds of millions of dollars in losses through the IT service desk.

What We Built

Instead of adding another layer to broken systems, we rebuilt the foundation with a platform that verifies you are who you say you are, every single time you log in. No more checking passwords – we verify actual identity.

We use biometrics tied to verified credentials on a private, permissioned ledger. The technology is sophisticated, but the user experience is intuitive: biometric authentication replaces passwords entirely. Users authenticate once with their identity, then access digital and in-person services seamlessly.

The results speak for themselves – millions of daily users, zero successful account takeovers on our platform, and deployment times measured in hours instead of months. Our customers routinely tell us their users prefer our authentication to anything they’ve used before. Business efficiency improves.

The Certifications That Matter

We’re the only platform with full-service NIST 800-63-3, FIDO2, and iBeta certifications – and that’s more than alphabet soup. These certifications prove our solution has the highest level of interoperability and works at the highest levels of government security requirements, which is why we received FedRAMP High Authorization for national security applications.

Where We Go From Here

I’ve been in this industry long enough to know that technology alone doesn’t win markets, but when your technology solves a real problem that’s only getting worse and major enterprises are making significant commitments to deploy it, you know you’re onto something.

The identity verification market is exploding because every organization needs to know their users are who they claim to be – not just once during onboarding, but every time they log in.

Seven years ago, we bet that the future of cybersecurity would be identity-first, passwordless, and privacy-preserving. Today, that future is here, and we’re leading it.

Ready to see what passwordless looks like in practice? Let’s talk.

The post Why I’m More Bullish Than Ever on 1Kosmos appeared first on 1Kosmos.


Tokeny Solutions

SkyBridge Capital Partners with Tokeny to Tokenize $300M in Hedge Funds on Avalanche

The post SkyBridge Capital Partners with Tokeny to Tokenize $300M in Hedge Funds on Avalanche appeared first on Tokeny.
Tokeny, recently acquired by leading global financial services provider Apex Group, is set to tokenize two of SkyBridge’s funds on the Avalanche blockchain network.

NEW YORK, 19th August 2025 – SkyBridge Capital today announced it will tokenize $300 million of its flagship hedge funds on the Avalanche blockchain network. This landmark initiative represents a collaboration with enterprise-grade tokenization leader Tokeny and its parent company, Apex Group Ltd., a global financial services provider servicing over $3.5 trillion in assets.

“Tokenizing our funds on Avalanche, supported by the technology and operational infrastructure of Tokeny and Apex Group, represents a significant step forward in modernizing the alternative investment landscape. We look forward to bringing our hedge funds into the digital, on-chain era, improving transparency, liquidity, and accessibility for our investors, and demonstrating how traditional finance and blockchain can work together to create smarter, more efficient investment solutions.”

Anthony Scaramucci, Founder & CEO of SkyBridge Capital

A former Goldman Sachs executive and White House Communications Director, Scaramucci has long been a prominent voice in alternatives and digital assets, with deep networks across pensions, sovereign wealth funds, and family offices.

Under the agreement, SkyBridge will tokenize its Digital Macro Master Fund Ltd and Legion Strategies Ltd leveraging the proven ERC-3643 standard with operational infrastructure delivered through Apex Group’s Digital 3.0 platform. The platform offers a single-source solution for the entire investment lifecycle, enabling institutional clients to seamlessly transition their funds to blockchain-based rails with integrated capabilities for creation, issuance, administration, and distribution.

“This milestone shows how Apex Group and Tokeny are breaking down the operational and technology barriers that have historically slowed institutional tokenization. SkyBridge’s tokenization on Avalanche proves that with the right technology, trusted operators, and regulatory clarity, tokenization at scale is not just possible, it’s happening.”

Daniel Coheur, Global Head of Digital Assets at Apex Group and Co-Founder of Tokeny

Avalanche was selected for its institutional-grade architecture, offering the transaction speed and near-instant finality required for large-scale tokenization. As a leading blockchain for real-world assets (RWAs), Avalanche’s rapidly expanding institutional ecosystem already hosts regulated offerings in tokenized money market funds, private credit, and more. The network’s EVM compatibility and scalability make it an ideal foundation for bringing traditional assets on chain to unlock new distribution channels, utility, and blockchain-native products and services.

“Our work with Tokeny, Apex Group, and SkyBridge marks a pivotal moment for institutional adoption and serves as a powerful market signal that tokenization has entered the mainstream. SkyBridge Capital’s leadership and network within the allocator community makes this a strong validation of Avalanche’s position as the premier platform for connecting capital.”

John Wu, President of Ava Labs

This collaboration brings together next-generation technology, enterprise-grade infrastructure, and institutional credibility, a critical combination for accelerating the adoption of RWAs across hedge funds, private credit, and multi-strategy vehicles.

About Apex Group

Apex Group is dedicated to driving positive change in financial services while supporting the growth and ambitions of asset managers, allocators, financial institutions, and family offices. Established in Bermuda in 2003, the Group has continually disrupted the industry through its investment in innovation and talent. Today, Apex Group sets the pace in fund and asset servicing and stands out for its unique single-source solution and unified cross asset-class platform which supports the entire value chain, harnesses leading innovative technology, and benefits from cross-jurisdictional expertise delivered by a long-standing management team and over 13,000 highly integrated professionals.

Apex Group leads the industry with a broad and unmatched range of services, including capital raising, business and corporate management, fund and investor administration, portfolio and investment administration, ESG, capital markets and transactions support. These services are tailored to each client and are delivered both at the Group level and via specialist subsidiary brands. The Apex Foundation, a not-for-profit entity, is the Group’s passionate commitment to empower sustainable change.

Website

About Tokeny

The award-winning fintech provides compliant tokenization with the open-source ERC-3643 token standard and advanced white-label software solutions for financial institutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter

About SkyBridge Capital

SkyBridge Capital is a global alternative investment firm specializing in financial technology, digital assets, venture capital, and multi-manager solutions. The firm, founded by Anthony Scaramucci in 2005, has allocated over half of its assets under management to digital assets, an emerging asset class that is reshaping the future of finance.

About Avalanche

Avalanche is an ultra-fast, low-latency blockchain platform designed for builders who need high performance at scale. The network’s architecture allows for the creation of sovereign, efficient and fully interoperable public and private layer 1 (L1) blockchains which leverage the Avalanche Consensus Mechanism to achieve high throughput and near-instant transaction finality. The ease and speed of launching an L1, and the breadth of architectural customization choices, make Avalanche the perfect environment for a composable multi-chain future. Supported by a global community of developers and validators, Avalanche offers a fast, low-cost environment for building decentralized applications (dApps). With its combination of speed, flexibility, and scalability, Avalanche is the platform of choice for innovators pushing the boundaries of blockchain technology.



Elliptic

Crypto regulatory affairs: Private sector in US and Hong Kong push for changes in new stablecoin rules

With the ink barely dry on new stablecoin rules, the private sector in both Hong Kong and the United States is seeking clarification of, and changes to, certain requirements, demonstrating that the journey to regulate stablecoins is still evolving.



Spherical Cow Consulting

Working Group Chair Skills: Standards Work Isn’t Just for Coders

This one’s for everyone who’s ever said, "I’m not technical enough to participate in standards development." If you’ve wondered what working group chair skills actually matter, I have news for you: you don’t need to be a spec-writing wizard to be effective. I do get it, though. The post Working Group Chair Skills: Standards Work Isn’t Just for Coders appeared first on Spherical Cow Consulting.

“This one’s for everyone who’s ever said, ‘I’m not technical enough to participate in standards development.'”

If you’ve wondered what working group chair skills actually matter, I have news for you: you don’t need to be a spec-writing wizard to be effective.

I do get it, though. I chair working groups, and I still can’t read specs the way implementers do. Half the time I open a technical specification, my eyes glaze over after the abstract. I couldn’t code myself out of a wet paper bag. (Unless that wet paper bag happens to include m4 and sendmail rulesets. Those, I can do.)

That said, if you take nothing else away from this post, take this and embed it in your brain: you don’t have to be a spec-writing wizard to be an effective contributor, or even to chair a working group.

In fact, some of the most valuable skills in standards work have nothing to do with writing code.

A Digital Identity Digest podcast episode accompanies this post: “Working Group Chair Skills: Standards Work Isn’t Just for Coders” (13:35).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

What a chair actually does (and doesn’t do)

There’s a misconception that working group chairs must be the ultimate subject-matter experts, the kind of person who can answer any question off the cuff. I used to believe that too. *buzzer sound* Wrong, try again! The job is about facilitation and neutrality, not encyclopedic knowledge.

A good chair brings the right working group chair skills to the table:

- Manages time, the agenda, and the meeting queue.
- Keeps the room neutral and separates their “chair hat” from their personal opinions.
- Tests for consensus and documents objections.
- Guides the group back to its charter when things start to sprawl.

What a chair doesn’t do:

- Decide the technical design.
- “Win” arguments on the mic.
- Gatekeep new contributors.

Yes, a baseline of knowledge helps, mainly to keep the group on track and ask the right questions. But if you’re the loudest voice in the room or the person with the most opinions? That’s a red flag.

The Madrid moment

This post was inspired by a moment at the recent IETF meeting in Madrid.

I was sitting in a session (not one I was running) feeling wildly inadequate. The chair of that meeting seemed to know everything: answering questions without pause, bouncing ideas around, and generally radiating expertise. I thought, I can’t do that. I’ll never be that person.

Later, I mentioned my self-doubt to a friend who’s been in standards work since Gandalf was a baby (i.e., a very long time). They said something that stuck:

“That kind of deep subject-matter expertise isn’t what makes a good chair. The job is to be neutral, help the group come to consensus, and keep the process fair. If you’re too invested in the outcome, you can’t do that well.”

And they were right. In a session I chaired the next day, I reminded myself of that. Instead of getting pulled into the details, I paused, restated the questions we were debating, and asked for commitments to review from experts in the room. We left the meeting with clear decisions and action owners. Go, team!

No wizard-level technical knowledge required.

Where non‑coders shine

Even if you’re not a chair, these are the same skills that make you a great contributor and help you develop working group chair skills if you want to take that step later. Non‑coders are often the ones who:

- Turn pain points into crisp user stories. “We need X because Y breaks if we don’t.”
- Write readable summaries for product managers, lawyers, and execs.
- Triage issues on GitHub—label them, close duplicates, line up proposals for meetings.
- Spot interoperability gaps.
- Help set up test plans, track what passes/fails, and document blockers.
- Herd the cats. Draft pre-reads, take notes, and make sure decisions actually get written down.

These are the unglamorous but critical pieces that keep work moving forward. Without them, groups stall, meetings get repetitive, and good ideas die in the noise. Groups well and truly need someone brave enough to ask “stupid” questions (that are never actually stupid) about how it all works.

How to start if you’re “not technical enough”

If you want to get involved but don’t know where to start, try this:

- Read the charter of the group and two or three recent GitHub issues.
- Introduce yourself on the mailing list with a sentence about what you can help with (e.g., “I’m a PM and can help with use cases or meeting notes”).
- Volunteer once. Scribe a meeting, write a summary, or draft a use-case doc.
- Ask the chairs: “What would unblock you this week that doesn’t require coding?”
- Shadow a consensus call and notice how the chair phrases questions and records outcomes.

These small steps get you known as someone who adds value quickly. That reputation goes a long way.

For managers: send the right people

If you lead a team, the worst thing you can do is assume only senior engineers belong in standards groups. You should be sending:

- PMs, solution architects, and analysts.
- Tech writers who can make decisions and docs accessible.
- Support or operations leads who know what customers actually need.

Give them specific assignments: draft use cases, clean up issues, track interop progress. And measure their impact by outcomes that matter: fewer rehashed meetings, clearer issues, faster consensus.

Common traps (and how to avoid them)

Even experienced chairs and contributors stumble into these:

- Wearing the company hat too heavily. Remember: your job is to move the group’s work forward.
- Turning meetings into tutorials. Save deep dives for dedicated docs or issues.
- Letting the loudest voice set direction. Ask for explicit “can live with it” signals from the room.
- Treating silence as consent. It usually isn’t.

A simple chair’s checklist helps: agenda posted in advance (see the Important Dates page for IETF meetings) → pre‑reads linked → scribe/timekeeper assigned → decisions written down → minutes posted within 48 hours.

Why this matters

Standards work touches every part of the identity world—federation, digital credentials, browsers, payments. These decisions shape products and policies your teams will live with for years. I mentioned this a while back in Standards Versus Reality with how even well‑intentioned technical choices can clash with deployment realities, which is why diverse voices are so important.

But the process isn’t self-sustaining. If only engineers and spec authors show up, important perspectives get lost. And that hurts interoperability, usability, and deployment.

You don’t need to be the smartest engineer in the room to develop strong working group chair skills. You need to listen, ask questions, and help the group get to a decision.

Your turn

If you’ve thought, standards aren’t for me, try showing up once. Scribe a meeting. Draft a use-case doc. Or if you lead a team, send someone who can listen, write, and keep things moving.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Introduction

[00:00:26] Let’s get into it.

[00:00:29] Hi everyone, and welcome back to the Digital Identity Digest.

[00:00:33] Today I want to talk about something I hear often from people who are curious about standards work but never take the leap to participate.

[00:00:42] It’s that feeling of “I’m not technical enough. Standards are for engineers, so I’ll just sit this one out.”

[00:00:50] If that’s you, this episode is for you.

[00:00:54] The truth is, standards work is not just for coders. Some of the most important roles in working groups don’t involve writing a single line of code.

*** Why Standards Work Isn’t Just for Engineers

[00:01:04] I want to pull back the curtain on my own experience as a working group chair. I’ll share:

- The skills that actually make standards work successful
- How non-coders can contribute
- How to build the confidence to jump in

[00:01:24] Here’s my confession: I chair groups in the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C).

[00:01:33] But if you handed me a technical specification to implement — I couldn’t do it. I can’t code my way out of a wet paper bag.

[00:01:42] I can read the text of a specification.
[00:01:48] I can follow the arguments and track design decisions.
[00:01:53] But I don’t read specs like implementers do.

And that’s okay. Because being a chair is not about being the ultimate expert — it’s about facilitating process, guiding discussion, and keeping the group focused.

*** What a Working Group Chair Actually Does

[00:02:16] So what does a chair really do?

[00:02:24] On a good day, you’re:

- Setting the agenda and keeping discussions on track
- Managing the queue so everyone knows when it’s their turn
- Listening carefully and testing for consensus
- Making sure objections are heard and documented
- Guiding the group back to its agreed charter

[00:03:06] Importantly, chairs do not decide technical design. They don’t win arguments or gatekeep contributors.

[00:03:14] Yes, technical knowledge helps — but being the loudest, most opinionated person in the room? That’s a liability for a chair.

*** Lessons from Experience

[00:03:30] This all came to mind recently at an IETF meeting in Madrid.

[00:03:36] I was sitting in a session I wasn’t chairing and felt completely inadequate. The chair was brilliant — answering every question instantly and recalling years of history.

[00:04:06] I shared my self-doubt with a long-time standards veteran. Their advice stuck: A good chair isn’t about deep expertise. It’s about neutrality and guiding consensus.

[00:04:28] The very next day, while chairing my own session, I leaned on that advice. Instead of diving into debate, I paused, restated the question, and asked the group to continue the discussion asynchronously.

[00:05:09] The result? We left the room with clear decisions and owners for next steps.

*** Why Non-Coders Are Essential

[00:05:19] Here’s the truth: non-coders bring incredible value to standards development.

[00:05:31] Examples include:

- Turning pain points into crisp user stories
- Writing clear summaries for product managers, executives, and legal teams
- Triage work: labeling GitHub issues, closing duplicates, organizing proposals
- Spotting interoperability gaps between specifications
- Setting up test plans and documenting blockers
- “Herding cats” by taking notes and tracking decisions

[00:06:44] None of these require writing code — but all are vital to progress.

*** How to Get Started in Standards Work

[00:06:58] If you’re wondering where to begin, here are practical steps:

- Read the group’s charter and skim recent issues to understand scope
- Introduce yourself on the mailing list — even a short message offering to scribe is welcome
- Ask the chairs directly what one thing you could do this week to help
- Observe consensus calls to learn how experienced chairs guide groups

[00:08:20] These small steps build confidence and help you learn core chairing skills: listening, documenting, and facilitating decisions.

*** Advice for Managers

[00:08:31] If you lead a team, don’t assume only senior engineers should join standards groups.

Send your:

- Product managers
- Solution architects
- Analysts
- Technical writers

[00:08:56] Give them specific assignments like drafting use cases, cleaning up issues, or tracking interoperability progress. Measure outcomes such as:

- Fewer rehash meetings
- Closed issues and pull requests
- Documented progress

[00:09:23] This builds institutional knowledge in your organization while helping the group succeed.

*** Common Traps to Avoid

[00:09:31] Even experienced chairs fall into these traps:

Wearing your company hat too heavily
Turning meetings into tutorials that eat time
Letting the loudest voice set direction instead of true consensus
Treating silence as consent instead of seeking explicit signals

[00:11:04] A simple checklist can help avoid these pitfalls:

Post agendas in advance
Share pre-reads
Assign a scribe and timekeeper

It’s basic, but it works.

*** Why This Matters

[00:11:30] Standards work touches every part of digital identity — from federation to credentials, browsers to payments.

[00:11:46] But if only engineers show up, critical perspectives are lost. That hurts usability, interoperability, and adoption.

[00:11:57] You don’t need to be the smartest engineer in the room to contribute. You just need to listen, ask questions, and help the group reach decisions.

*** Final Thoughts

[00:12:10] If you’ve thought “standards development isn’t for me,” I challenge you to reconsider.

Show up once.
Scribe a meeting.
Draft a use case.
Or, if you lead a team, send someone who can write, listen, and help keep things moving.

[00:12:26] These are the skills that keep standards work healthy.

[00:12:30] So ask yourself: What non-coding skill could you bring into a working group this month?

[00:12:48] Thank you for listening to the Digital Identity Digest.

[00:12:59] If this episode helped make standards clearer — or at least more interesting — share it with a friend. You can connect with me on LinkedIn @hlflanagan.

And don’t forget to subscribe, leave a rating, and find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Working Group Chair Skills: Standards Work Isn’t Just for Coders appeared first on Spherical Cow Consulting.


Ontology

The Brutal Truth About Stablecoin Adoption: Speed is Solved. Identity Isn’t.

Trust Crisis in Stablecoins

The biggest challenge in scaling stablecoin payments isn’t speed. It’s trust.

Stablecoins are everywhere. They’re powering remittances, cross-border commerce, crypto payroll, and even merchant checkout systems. From Stripe to Shopify to major exchanges, stablecoin adoption is accelerating.

But while blockchains have solved the problem of speed and cost, they’ve quietly ignored the biggest bottleneck in real-world use: identity.

If stablecoins are going to scale globally, across regions, merchants and users, they need more than fast rails. They need a decentralized identity layer.

The Problem With Stablecoin Compliance Today

Most stablecoins weren’t built with compliance in mind. And now that they’re being used in payments, cracks are showing.

Merchants don’t know who they’re accepting money from
Platforms are duct-taping KYC providers into apps
Users go through verification again and again
There’s no standard for crypto KYC that works across wallets, bridges, and dApps

The result? A fragile trust layer built on centralized data silos and repetitive identity checks. The exact same problems crypto was supposed to solve.

Stablecoin Payments Need Verifiable Identity, Not Just Wallets.

Verifiable Identity is the missing layer in the stablecoin stack.

As governments push for stablecoin regulation such as MiCA in the EU and the GENIUS Act in the US, platforms are scrambling to become compliant.

But compliance doesn’t have to mean surveillance.

With decentralized identity, users can hold their own credentials, verify once, and move between apps and services without repeating KYC. This is the foundation for self-sovereign identity, where users control their data and platforms remain compliant without storing sensitive information.

What’s Needed: A Portable, Privacy-First Trust Layer

Stablecoin adoption at scale will only happen if three things become possible:

Users can prove who they are without exposing everything
Merchants can verify transactions without handling private data
Developers can plug into identity infrastructure that works cross-chain
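The first two requirements can be illustrated with a toy selective-disclosure scheme: an issuer commits to each claim with a salted hash and signs the commitments, and the holder later reveals only one claim. This is a hypothetical sketch, not Ontology's actual protocol; a real deployment would use DIDs, issuer public-key signatures, and zero-knowledge proofs rather than an HMAC demo key.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real issuer signing key

def salted_hash(value: str, salt: str) -> str:
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()

def issue_credential(claims: dict) -> dict:
    """Issuer: commit to each claim with a salted hash, then sign the commitments."""
    salts = {k: secrets.token_hex(8) for k in claims}
    commitments = {k: salted_hash(v, salts[k]) for k, v in claims.items()}
    payload = json.dumps(commitments, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"commitments": commitments, "signature": signature, "salts": salts}

def present(credential: dict, field: str, value: str) -> dict:
    """Holder: disclose a single field plus its salt; nothing else leaves the wallet."""
    return {"field": field, "value": value,
            "salt": credential["salts"][field],
            "commitments": credential["commitments"],
            "signature": credential["signature"]}

def verify(presentation: dict) -> bool:
    """Verifier: check the issuer signature, then the one disclosed claim."""
    payload = json.dumps(presentation["commitments"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return presentation["commitments"][presentation["field"]] == \
        salted_hash(presentation["value"], presentation["salt"])

cred = issue_credential({"kyc_passed": "true", "country": "DE", "dob": "1990-01-01"})
proof = present(cred, "kyc_passed", "true")
print(verify(proof))  # True: KYC status proven without revealing country or DOB
```

Because the credential is held by the user, the same proof can be presented to any number of apps without repeating KYC, which is the "verify once" property described above.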

That’s what Ontology is building. A modular identity and privacy framework that makes stablecoin payments secure, compliant, and user-controlled.

Ontology: The Identity Infrastructure for Stablecoin Adoption

Unlike issuers, Ontology isn’t creating another dollar-pegged token. We’re building the trust infrastructure that makes stablecoins usable in the real world.

Here’s what that looks like:

DID-based KYC that users control
Zero Knowledge Proofs to verify facts without revealing data
Reusable identity credentials for wallets, dApps, and fiat on-ramps
Cross-border compliance without centralized trust

This infrastructure goes beyond payments. The next era of Web3 relies on a rebuilding of the crypto identity infrastructure.

The Future of Stablecoin Compliance is User-Controlled

If stablecoins want to compete with traditional infrastructure, they can’t just be faster. They have to be trusted. And that trust can’t be outsourced to centralized APIs or third party data silos.

It has to be built into the protocol layer and embedded in how users verify themselves, how dApps authorize transactions, and how compliance gets done in a decentralized world.

Speed is solved. Identity isn’t. Ontology is solving it.

About Ontology

Ontology is a high-performance, open-source blockchain specializing in decentralized identity and data infrastructure. Built to power the next generation of Web3 applications, Ontology provides developers with the tools to build secure, privacy-preserving systems through Decentralized Identifiers (DIDs) and Verifiable Credentials. With a focus on self-sovereign identity, compliance-ready infrastructure, and cross-chain interoperability, Ontology enables trust in every transaction, without sacrificing user control. Whether you’re building for payments, DeFi, or real-world digital identity, Ontology offers the modular trust layer Web3 has been missing.

Connect with Us

Stay up to date on decentralized identity, privacy infrastructure, and everything Ontology is building:

LinkedIn
X (Twitter)
Telegram

Have questions or want to collaborate? Drop us a message, we’re always open to building with developers, creators, and partners shaping the future of Web3.

The Brutal Truth About Stablecoin Adoption: Speed is Solved. Identity Isn’t. was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


iComply Investor Services Inc.

AML in Capital Markets: Global Controls in a Borderless Sector

Capital markets face complex cross-border AML demands. This article explores regulatory expectations and shows how iComply streamlines compliance from onboarding to audit.

Capital markets firms face unique AML challenges across jurisdictions due to their cross-border activity and high-risk products. This article outlines key KYB, KYC, KYT, and AML expectations in the U.S., UK, EU, and other financial centres, and how iComply helps automate compliance workflows with speed and precision.

Global capital markets are fast, fluid, and increasingly regulated. Broker-dealers, custodians, exchanges, and asset managers operate across jurisdictions where expectations for AML, sanctions screening, and beneficial ownership verification continue to grow.

In high-risk sectors like trading, custody, private placements, and tokenization, regulators want more than just client onboarding—they expect continuous monitoring, automated escalation, and clear audit trails.

AML Frameworks Shaping the Sector

United States
Regulators: SEC, FINRA, FinCEN
Requirements: CDD Rule, ongoing customer due diligence, transaction monitoring, suspicious activity reporting, sanctions screening (OFAC)

United Kingdom
Regulator: FCA
Requirements: CDD/EDD, transaction monitoring, PEP screening, audit logs, and AML controls under MLR 2017

European Union
Regulators: ESMA, local NCAs
Requirements: 6AMLD, MiCA (for tokenized assets), UBO verification, and harmonized AML rules under AMLA (in progress)

Switzerland & Luxembourg
Regulators: FINMA, CSSF
Requirements: KYC/AML for securities and fund transactions, strong data protection, and beneficial ownership transparency

Key Compliance Tasks

Capital markets participants must:

Verify legal entities and individuals across onboarding and lifecycle events
Monitor transactions for anomalies or regulatory breaches
Screen all clients against sanctions, PEP, and adverse media lists
Capture beneficial ownership for institutional and private placements
Log decisions and escalate based on internal risk policies

Industry-Specific Challenges

1. Cross-border account flows → Require localized data handling and multilingual tools

2. Institutional onboarding → Often slow due to document-heavy workflows and complex UBO structures

3. Layered due diligence → Multiple parties, custodians, and intermediaries complicate audit trails

4. Tokenized and digital assets → Face rapidly evolving rules under MiCA, AMLD, and SEC guidance

How iComply Accelerates AML for Capital Markets

iComply provides a secure, modular platform that streamlines compliance from onboarding to monitoring:

1. KYB + UBO Automation
Validate entities using public and commercial registries
Map complex ownership and nominee structures
Generate audit-ready UBO reports

2. Edge-Based Identity Verification
Fast, private KYC flows for individuals across global jurisdictions
On-device processing for secure and compliant identity checks

3. Transaction Monitoring (KYT)
Score trades and transactions by geography, frequency, value, and behavioural anomalies
Custom rules for escalations and risk segmentation

4. Centralized Case Management
Combine onboarding, AML, and due diligence into a unified audit trail
Assign reviews, manage escalations, and export regulatory reports

5. Flexible Deployment
On-premise, private cloud, or hybrid environments
Data localization and language support for global operations

Case Insight: Cross-Border Broker-Dealer

A multinational brokerage integrated iComply across its onboarding and compliance ops. Key results:

Cut entity onboarding time by 60%
Streamlined UBO discovery for global accounts
Improved internal SAR processing and response tracking

The Takeaway

Capital markets compliance is high-stakes and high-volume. Firms that embrace AML automation can:

Reduce onboarding friction
Catch risk signals faster
Satisfy multi-jurisdictional requirements from day one
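Catching risk signals faster rests on KYT-style scoring of the kind described earlier (geography, frequency, value, behavioural anomalies). The sketch below is purely illustrative and is not iComply's actual engine; the thresholds, weights, and escalation tiers are invented for the example.

```python
# Hypothetical rule-based transaction risk scoring over the four KYT factors.
HIGH_RISK_COUNTRIES = {"IR", "KP"}  # illustrative watch list

def score_transaction(tx: dict, history: list) -> int:
    score = 0
    if tx["country"] in HIGH_RISK_COUNTRIES:          # geography
        score += 40
    if tx["amount"] > 10_000:                         # value
        score += 25
    recent = [h for h in history if tx["timestamp"] - h["timestamp"] < 3600]
    if len(recent) > 5:                               # frequency: burst within an hour
        score += 20
    avg = sum(h["amount"] for h in history) / len(history) if history else 0
    if history and tx["amount"] > 10 * avg:           # behavioural outlier
        score += 15
    return score

def escalate(score: int) -> str:
    """Map a risk score to an internal escalation policy."""
    if score >= 60:
        return "block-and-review"
    if score >= 30:
        return "enhanced-due-diligence"
    return "clear"

tx = {"country": "DE", "amount": 50_000, "timestamp": 7200}
history = [{"amount": 1_000, "timestamp": 100}]
s = score_transaction(tx, history)
print(s, escalate(s))  # 40 enhanced-due-diligence
```

In a production system the rules would be configurable per jurisdiction and risk policy, which is what "custom rules for escalations and risk segmentation" refers to.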

Talk to iComply today to learn how we help capital markets firms eliminate compliance bottlenecks and stay ahead of global regulations.


Aergo

[Aergo Talks #21] Public Mainnet, buybacks, and ArenAI

1. Why are AergoTalks in English and not Korean?

English is used as the international community language. While Aergo has its roots in Korea, the project has expanded globally and is listed on major international exchanges, including Coinbase. Korean subtitles are provided as a nod to Aergo’s origins, but not for other languages.


2. When is the Public Mainnet? Will we meet the Q3 deadline?

“Hell yeah, we are.” The team is on track for Q3, with strong progress. No specific date is provided — as with all digital products, launch occurs when everything is ready.

The public mainnet is now live! Please check the full article for more details: https://medium.com/aergo/house-party-protocol-public-mainnet-is-live-29be91574da4


3. Why don’t we give a specific schedule/date?

Digital releases don’t require fixed launch dates the way physical products (e.g., movies, retail) do.
Avoiding deadline pressure reduces the risk of mistakes; the Columbia Shuttle disaster is cited as an example of deadline-driven risk.
Launch will be announced when ready, ensuring fairness and transparency to all participants simultaneously.


4. Are we going to do buybacks?

Buybacks are usually done by companies with revenue, or by exchanges through mechanisms tied to trading volume. HPP is performing strongly compared to the CMC100, even in a tough market.

Additional Context

Buybacks are more of a market signal than a sustainable growth strategy. They are often tied to short-term campaigns or exchange-driven burn mechanisms, which can create only temporary price support without strengthening the fundamentals. The reality is that their impact tends to be limited and short-lived, making them less effective for long-term ecosystem growth. Our approach is different. We aim to ensure that value stems from genuine fundamentals and usage, rather than from market engineering.


5. Why doesn’t HPP pump with other tokens?

HPP has outperformed the CMC100 over 12 months and the past 5 months. Movement doesn’t always align with BTC/ETH, which are influenced by ETFs and macro flows.


6. What are cryptocurrency investment agents (on the roadmap)?

The specific product details are not disclosed yet. However, the vision and target direction have already been introduced under ArenAI. ArenAI will serve as an intelligent, AI-driven trading and asset management layer, enabling users to interact with DeFi and exchange platforms through natural language, agent automation, and portfolio optimization.

For more background and context, please refer to our previous article on ArenAI, which outlines the concept and future potential of this product: https://medium.com/aergo/arenai-the-ai-powered-command-center-for-intelligent-asset-management-d6910742bad2

구체적인 제품 내용은 아직 공개되지 않았습니다. 하지만 방향성은 이미 ArenAI를 통해 제시되었습니다. ArenAI는 AI 기반의 지능형 자산 관리 및 트레이딩 레이어로, 사용자들이 자연어 인터페이스, 자동화된 에이전트, 포트폴리오 최적화를 통해 DeFi 및 거래소와 상호작용할 수 있도록 설계됩니다.

자세한 내용은 ArenAI에 관한 이전 아티클을 참고해 주세요: https://medium.com/aergo/arenai-the-ai-powered-command-center-for-intelligent-asset-management-d6910742bad2

7. Updates on Upleat collaboration and stablecoin?

Aergo (HPP) provides tech infrastructure to Blocko and enterprise clients. It has no direct connection to the referenced stablecoin announcement.

Additional Context

Stablecoins are more than just blockchain-based technology. They interface directly with monetary policy tools, such as M1 (currency in circulation plus demand deposits) and M2, meaning their issuance may affect the real-world money supply and economic dynamics. That is why any project launching a stablecoin without operating within the bounds of formal governmental frameworks, such as Korea’s K-BTF (Korea-Blockchain Trust Framework), runs the risk of being inconsequential. These frameworks are crucial for establishing trust, gaining legal recognition, and integrating with national payment systems.

Without adherence to such regulatory and policy frameworks, a stablecoin project may technically function, but it will lack the necessary legitimacy, operational resilience, and broader institutional acceptance. To be meaningful, stablecoins must not only run on code but also align with economic infrastructure and compliance standards.


Closing

The session closed with reminders that no universal solution exists to “make a token pump.” The team continues to deliver milestones on time, with the imminent launch of the Public Mainnet.


[Aergo Talks #21] Public Mainnet, buybacks, and ArenAI was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

AI Bots in Q2 2025: Trends from Fastly's Threat Insights Report

Fastly's Q2 2025 Threat Insights Report uncovers how Meta, OpenAI, and others are shaping web traffic and what organizations need to do to stay in control.

The Truth About Blocking AI, And How Publishers Can Still Win

Many AI crawlers aren’t following the rules, and robots.txt can’t stop them. Blocking Google’s AI means killing your SEO, but publishers aren’t completely out of options. Edge control is becoming their last real defense.
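The reason robots.txt can't stop non-compliant crawlers is that it is purely advisory: enforcement happens only if the crawler chooses to run the check. Python's standard urllib.robotparser shows what a compliant crawler does with a rule blocking GPTBot (used here as a representative AI user agent); a non-compliant crawler simply never executes this step, which is why enforcement has to move to the edge.

```python
from urllib import robotparser

# A robots.txt that tries to block an AI crawler while allowing everyone else.
rules = [
    "User-agent: GPTBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A compliant crawler checks before fetching; a non-compliant one skips this entirely.
print(rp.can_fetch("GPTBot", "https://example.com/article"))   # False
print(rp.can_fetch("OtherBot", "https://example.com/article")) # True
```

Nothing in this mechanism prevents a crawler from fetching the page anyway, which is the gap edge-level controls are meant to close.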

Monday, 18. August 2025

Dock

Will The Future of Cross-Company Access Be Federated?

One of the most insightful moments in our recent live session with Tim Cappalli (Okta) and Henrique Teixeira (Saviynt) came from a discussion about one of IAM’s most persistent pain points: cross-company access. For decades, the standard approach has been federation: establishing direct, trusted connections between

One of the most insightful moments in our recent live session with Tim Cappalli (Okta) and Henrique Teixeira (Saviynt) came from a discussion about one of IAM’s most persistent pain points: cross-company access.

For decades, the standard approach has been federation: establishing direct, trusted connections between organizations so users can securely access each other’s systems.

But here’s the truth: federation, as it is, doesn’t scale.

Setting it up is time-consuming and rigid:

You need technical integrations for each partner.
You need legal agreements.
You need alignment between IT teams on both sides.
And you need to repeat the process for every new identity provider.

That works if you're collaborating with a few long-term vendors. But it completely breaks down when you’re dealing with hundreds of external users—freelancers, contractors, suppliers, or ecosystem partners—who need access today, not after a three-week firewall security review.


Safle Wallet

Safle x Concordium

Bringing Privacy First Identity and CCD to Your Wallet

In a space where trust is often promised but rarely proven, Concordium has been quietly solving one of Web3’s hardest problems: proving who is behind a transaction without exposing everything about them.

Now, through our latest integration, you can create and manage Concordium accounts natively in the Safle Mobile Wallet, bringing verifiable on-chain identity and CCD transactions to your fingertips, without compromising privacy.

Whether you’re an everyday crypto user or a developer building the next generation of decentralized apps, this partnership opens the door to secure, compliant, and user-friendly blockchain interactions.

1. What is Concordium?

Think of Concordium as a blockchain with a built-in ID card system, except your “ID card” is cryptographically secure and only revealed when absolutely necessary.

Privacy-focused — Your personal details aren’t public, but can be verified when required by law or regulation.
Compliance-ready — Businesses and regulated platforms can operate on-chain without fear of violating KYC/AML rules.
On-chain identity — Every wallet is linked to an identity verified by trusted providers, ensuring accountability without sacrificing decentralization.
Native token (CCD) — Used for transactions, delegation, and interacting with Concordium-based apps.

In short: Concordium bridges the gap between Web3 privacy and Web2 accountability.

2. Why This Partnership Matters

Identity in Web3 is usually either:

Fully anonymous (great for privacy, bad for regulation)
Fully public (great for compliance, bad for privacy)

Concordium finds the sweet spot. And now, with Safle’s secure, non-custodial wallet and Safle Vault SDK, both users and developers can:

Store their identity-linked accounts securely
Access Concordium features with the same ease as other chains
Build applications that require trust without reinventing the identity wheel

3. New Features in Safle Wallet

With our latest update on iOS and Android, you can:

Create Concordium accounts directly inside the Safle app
Send & receive CCD instantly
View balances and transaction history for complete visibility
Delegate to validators and earn rewards
Enjoy native Concordium functionality without needing a separate wallet

All powered by the Safle Vault SDK, now updated to handle Concordium interactions for developers.

4. Benefits for Everyday Users

One wallet, more chains — Manage your Concordium accounts alongside your existing assets
Privacy without compromise — Verified identity stays private unless disclosure is required
Easy setup — Create an account in just a few taps
Earn rewards — Delegate your CCD and participate in the network

5. Benefits for Developers

SDK-ready — The Safle Vault SDK now supports Concordium, so you can integrate CCD transactions and account creation directly into your apps
Identity assurance — Build dApps that require verifiable users without handling KYC data yourself
Cross-chain experience — Tap into Safle’s multi-chain capabilities while leveraging Concordium’s compliance-first architecture

6. How to Get Started

For users:

Update your Safle Wallet on iOS or Android
Open the app and select Create Concordium Account
Start sending, receiving, and delegating CCD instantly

For developers:

Request the updated Safle Vault SDK documentation here
Integrate Concordium account creation and CCD transfers into your app
Build privacy-first, compliance-ready Web3 experiences

Final Word

This isn’t just another chain integration; it’s a step toward a Web3 where privacy and accountability can co-exist.

With Concordium inside Safle, you can trust your wallet to keep you secure, compliant, and ready for whatever the decentralized future holds.

Update your Safle Wallet today and start exploring Concordium.

Best,

Team Safle ✨


Aergo

House Party Protocol Public Mainnet Is Live


On the Starting Grid of the AI-Native Era, Ready to Go Full Throttle.

Today marks a historic milestone: the official launch of the HPP Public Mainnet, the evolution of Aergo. This AI-native network fuses over a decade of enterprise-grade blockchain expertise with AI-native Layer 2 infrastructure, purpose-built for the AI era to power real-time autonomous agents, verifiable off-chain inference, and a thriving multi-chain economy.

Note: Updates regarding token-related matters, including the TGE, exchange listings, and other migration announcements, will be released progressively as we move forward.

Key Features

1. AI-Native Infrastructure
Purpose-built for agent economies, modular AI services, and verifiable off-chain inference:

ArenAI: Intelligent, autonomous DeFi trading and portfolio management portal.
Noösphere: Secure, verifiable off-chain inference for heavy AI tasks, simulations, and multi-source aggregation.

Together, these components transform HPP into an AI operating system for Web3.

2. Multi-Chain Utility
A single, unified token economy operating seamlessly across HPP Mainnet, Ethereum, and the legacy Aergo Mainnet with no supply fragmentation.

3. Security First
Institutional-grade protections with BitGo custody, Fraud Detection Systems, and a canonical bridge architecture ensuring supply integrity across all chains.

A Connected, Verifiable Ecosystem

Execution Layer: HPP Mainnet (Arbitrum Orbit-based L2) — Primary home for AI agents, dApps, and DAO governance.
Settlement Layer: HPP Ethereum (L1) — High-security finality and deep liquidity.
Legacy Layer: HPP (AERGO Mainnet) — Preserves enterprise and public sector deployments while connecting them to the AI-native economy.

Canonical bridges ensure frictionless, secure movement between all layers without compromising integrity.

HPP Partners

HPP unites with a network of strategic partners whose infrastructure and domain expertise form the backbone of our AI-native ecosystem. These collaborators bring battle-tested technology, proven real-world deployments, and deep vertical specialization.

Foundational Partners

Aergo: The enterprise-grade blockchain backbone powering compliance-ready smart contracts, secure data exchange, and verified pipelines for mission-critical deployments in both public and private sectors.
AQT (Alpha Quark): An asset intelligence platform delivering blockchain-based RWA and NFT valuation, price discovery, and on-chain analytics to bring transparency and trust to digital asset markets.
Booost: A human and synthetic data layer offering personhood verification, Sybil-resistant identity tools, and curated datasets that underpin trustworthy agent economies.
W3DB: A decentralized trust layer providing model and dataset certification through Verification-as-a-Service (VaaS), enabling AI agents to operate on verifiably accurate and authenticated data.

Ecosystem Partners

BitGo: Institutional-grade custody securing HPP treasury and reserves with multi-sig control, insured protection, and regulatory compliance.
Arbitrum: High-performance Layer 2 rollup powering HPP’s scalable, low-cost infrastructure for verifiable AI and smart contract execution.
Conduit: Infrastructure platform for deploying and scaling rollups, enabling HPP’s Arbitrum Orbit Layer 2 environment to achieve high throughput, security, and reliability without sacrificing flexibility.
EigenLayer: Ethereum’s leading restaking and data availability protocol, providing HPP with decentralized data availability through EigenDA and enhancing cross-chain security guarantees for verifiable off-chain inference.
Orbiter Finance: Cross-chain bridge for HPP, enabling low-fee, fast asset transfers to major blockchains.

Together, these partners extend HPP’s capabilities far beyond what a single network could achieve, creating a unified AI-blockchain infrastructure that seamlessly connects real-world data, verifiable off-chain inference, and autonomous agent execution.

What’s Next

Migration Portal: Swap your AERGO tokens to HPP (HPP Mainnet ERC-20) at a 1:1 ratio, or swap your AQT tokens to HPP at a ratio of 1 AQT = 7.43026 HPP.
Exchange Integrations: HPP will prioritize securing listings on all exchanges that currently support AERGO. This effort is essential to maintaining liquidity continuity, minimizing migration friction, and ensuring institutional-grade market accessibility.
MVP Rollout: Launch of the Noösphere SDK, ArenAI portal, and partner integrations to equip developers with the core tools for building the AI-native economy.
Governance Transition: Migration of DAO governance to the HPP Mainnet with optimized voting mechanics, lower gas fees, and community participation incentives.
Ecosystem Expansion: Incentive programs for builders, early adopters, and community contributors to accelerate adoption across DeFi, DeSci, RWA, and AI-native applications.
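The stated swap ratios (1 AERGO to 1 HPP; 1 AQT to 7.43026 HPP) can be sanity-checked with a few lines. This is illustrative arithmetic only; actual amounts are computed by the Migration Portal.

```python
# Migration ratios as stated in the announcement.
AERGO_TO_HPP = 1.0
AQT_TO_HPP = 7.43026

def hpp_from_aergo(amount: float) -> float:
    """HPP received for a given AERGO balance (1:1)."""
    return amount * AERGO_TO_HPP

def hpp_from_aqt(amount: float) -> float:
    """HPP received for a given AQT balance (1 AQT = 7.43026 HPP)."""
    return amount * AQT_TO_HPP

print(hpp_from_aergo(500))          # 500.0
print(round(hpp_from_aqt(100), 3))  # 743.026
```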

HPP Public Mainnet marks not the end, but the moment the lights go out and the race begins. The entire team is now going full throttle, building the foundations for a decentralized AI economy.

Please visit our newly renovated official website for the latest updates, key resources, and the soon-to-be-released migration guides: https://www.hpp.io/

House Party Protocol Public Mainnet Is Live was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 17. August 2025

Ockam

What Two Years of Bootstrapping an AI Startup in India Taught Us

Lessons from building YourGPT with a small team and a vision. Back in 2023 it was just us (Rohit and Sahil) starting.

When we started in 2023, our mission was simple: help businesses build with AI.

Our first product was a fine-tuning tool for businesses to customise AI models. At the time, fine-tuning was resource-heavy & challenging.

Fine-tuning required preparing datasets (and even synthetic data generation was far less feasible than it is today), running heavy compute, and testing multiple iterations to avoid issues like overfitting or underfitting. In practice, this meant months of work and high costs—something only big tech firms could manage.

We saw this gap. Instead of making everyone rebuild models, we introduced advanced RAG-based AI chatbots that could train on a company’s own data while using existing models. RAG allowed companies to use powerful existing models while still grounding answers in their own data—giving them customisation without the cost of fine-tuning.

As we worked with more customers it became clear that most businesses did not want multiple tools. They wanted one solution that could handle conversations, support teams, help them grow, and actually grow with them.

We realised many teams face the same challenge: managing multiple disconnected systems slows them down. We combined our fine-tuning capabilities with the YourGPT Chatbot for enterprise users who still need customised models, while also building a single platform for conversations, training, and automation so teams no longer have to juggle separate tools.

We are a small, fast-moving team bootstrapped from day one. We learn by shipping, watching, and listening. Over time every feature we built, from the action-oriented Copilots builder to AI Agents, AI Studio, Helpdesk, and Voice Agents, was brought together in one product: YourGPT.

It’s been a pleasure building our product from Mohali, India.

Mohali, where our story began

With a lean, customer-focused team, we help businesses unlock value and maximize innovation. One big lesson: hire for mindset. India has incredible talent, and we now prioritize curiosity, passion, and ownership over resumes—skills can be taught, but hunger to solve problems can’t.

We also learned that partnerships matter as much as technology. As developers we love to build, but growing a business is more than code. Relationships with customers, vendors, and other startups help keep the momentum going. If someone has ideas or wants to discuss potential collaborations, they can reach us at pr@yourgpt.ai.

For other bootstrapped builders, here is one thing I wish I had known earlier, shared in case it helps you too: focus on one strong product. Do not get distracted by vibe coding, which scatters your effort across too many directions. Grow one vertical well and then expand it horizontally. This creates more value for your users and makes your product journey clearer.

Two years in, our focus is clear: help businesses automate support, sales, and operations, and scale with AI.

Real progress comes from building, shipping, and learning.

Bootstrapping taught us this: momentum matters more than money. Keep building.

These are lessons we continue to learn every day. If you are building something now, what is the one challenge slowing you down the most?


Ontology

THE ONTOLOGY NETWORK

Unlocking Africa’s Digital Identity and Web3 Future

Africa is standing on the brink of a digital revolution. With a youthful population, rising smartphone adoption, and a fast-growing blockchain ecosystem, the continent is well positioned to leapfrog traditional systems into the decentralized future. Yet a major set of challenges persists: trust, identity, and access.

This is where Ontology Network (ONT) steps in.

Ontology is a high-performance, open-source blockchain specializing in decentralized identity (DID) and data management solutions. Unlike many blockchains that focus only on transactions, Ontology is designed to empower individuals and businesses with ownership and control over their data, all while enabling trust across borders.

Why the Ontology Network is important for Africa

1. Solving the Digital Identity Gap

Across Africa, millions remain unbanked or underserved due to the lack of reliable identity systems. Traditional ID infrastructures are often fragmented, slow, or inaccessible in rural areas. Ontology’s ONT ID solution provides a blockchain-based identity that is secure, verifiable, and user-controlled.

This means a young entrepreneur in Nigeria, a farmer in Kenya, or a freelancer in Ghana can create a trusted digital identity without relying on centralized institutions. With ONT ID, they can access banking, healthcare, education, and even global job opportunities.

2. Empowering Financial Inclusion

Blockchain has long been seen as a gateway to financial freedom in Africa. Ontology takes this further by enabling cross-border payments and DeFi (Decentralized Finance) applications with lower costs and faster processing compared to traditional systems.

By combining digital identity with financial tools, Ontology makes it easier for Africans to build credit histories, secure micro-loans, and engage with the global digital economy without being excluded by legacy systems.

3. Data Ownership in the Web3 Era

In the Web2 world, users give up their data for free while tech giants profit. Ontology flips this model. With its self-sovereign data framework, Africans can own, control, and monetize their data.

Imagine a student in South Africa who shares academic records with universities abroad, or a healthcare worker in Uganda who securely exchanges medical credentials across borders, all on their own terms, without third-party exploitation.

4. Building Trust in Governance and Trade

Trust remains a key challenge in African governance, business, and cross-border trade. Ontology’s blockchain infrastructure makes it possible to verify supply chains, authenticate documents, and increase transparency in governance.

For example, farmers can prove the authenticity of their produce in export markets, while governments can use tamper-proof systems to reduce fraud and corruption.

5. A Bridge Between Web2 and Web3

Ontology is not just about the future; it is building bridges to the present. Its technology integrates easily with existing systems, meaning African startups, SMEs, and governments can adopt Web3 without completely abandoning current tools. This makes the transition smoother, faster, and more inclusive.

Final Thoughts

Africa’s future is digital, and the Ontology Network provides the infrastructure to make that future more inclusive, trustworthy, and empowering. By addressing challenges like identity, financial exclusion, data ownership, and trust, Ontology positions itself as a game changer for the continent’s growth in the Web3 era.

THE ONTOLOGY NETWORK was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 15. August 2025

iComply Investor Services Inc.

AML Made Scalable: How Community Banks Can Simplify Compliance

Community banks face rising AML expectations worldwide. This guide explains how to simplify compliance and scale operations using iComply’s integrated platform.

As AML enforcement expands globally, community banks must modernize their compliance operations to remain efficient, accurate, and audit-ready. This article outlines KYB, KYC, KYT, and AML expectations in key jurisdictions—and shows how iComply helps automate up to 90% of the compliance workload.


Community banks play a crucial role in local economies, offering relationship-based financial services that foster small business growth and household stability. But in 2025, global AML regulators are raising the bar—and community banks, no matter how small, are expected to meet the same compliance standards as national institutions.

Whether you operate in the U.S., UK, Canada, or Australia, your bank must now prove it can detect, deter, and report financial crime with the same rigour as the biggest players.

Global AML Standards for Community Banks

United States
- Regulators: OCC, FDIC, Federal Reserve, FinCEN
- Requirements: CDD Rule, BOI reporting (Corporate Transparency Act), SARs, sanctions screening (OFAC), and ongoing AML program testing

United Kingdom
- Regulators: FCA, PRA
- Requirements: Customer due diligence (CDD), enhanced due diligence (EDD) for high-risk clients, transaction monitoring, suspicious activity reporting, and PEP/sanctions screening

Canada
- Regulator: FINTRAC
- Requirements: Identity verification, beneficial ownership discovery, recordkeeping, and mandatory STR reporting. Provincial oversight may add regional layers.

Australia
- Regulator: AUSTRAC
- Requirements: AML/CTF program, member verification, source of funds checks, transaction monitoring, and ongoing risk assessments

What Community Banks Must Implement
- KYB for Business Accounts: Verify legal status, beneficial owners, and operating legitimacy
- KYC for Individuals: Confirm identity, address, and biometric match if applicable
- KYT: Monitor transactions for structuring, velocity, or sanctioned entities
- AML: Risk-based programs, SAR/STR filing, audit trails, staff training

The Pain Points

1. Manual Compliance Workflows → Slows onboarding, increases error rates

2. Fragmented Vendor Stack → No single view of client risk or activity

3. Limited IT and Compliance Staff → Resource constraints delay implementation of controls

4. Regulatory Complexity → Different reporting formats, rules, and thresholds by country or region

iComply: Built for Community Banking

iComply enables community banks to meet modern AML obligations with a single, modular platform that integrates with your core systems and scales to your needs.

1. Seamless KYB + KYC
- Natural person and business verification
- Real-time UBO discovery and registry validation
- Edge-based identity checks (data processed locally on device)

2. Automated KYT and Risk Monitoring
- Transaction scoring based on behaviour, geography, and value
- Alerts for unusual activity, layering, or sanctioned exposure
- Dynamic refresh cycles for high-risk accounts

3. Case Management and Reporting
- Built-in workflows for escalation, review, and SAR filing
- Preformatted exports for U.S. (FinCEN), UK (FCA), Canada (FINTRAC), Australia (AUSTRAC)
- Timestamped audit logs for every action taken

4. Compliance Without Complexity
- No-code policy configuration
- White-labeled portals for customer onboarding
- Multilingual and localization support across jurisdictions
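At its core, the KYT monitoring described above is a rules engine over transaction attributes. A minimal sketch follows; the thresholds, country codes, and weights are made up for illustration and are not iComply's actual scoring logic:

```python
# Illustrative KYT-style transaction scoring with a simple rules model.
# All thresholds and weights are placeholders, not a real risk policy.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder country codes

def score_transaction(amount: float, country: str, txns_last_hour: int) -> int:
    """Return a risk score; higher means more suspicious."""
    score = 0
    if amount > 10_000:            # large value
        score += 40
    if 9_000 <= amount < 10_000:   # possible structuring just under a threshold
        score += 30
    if country in HIGH_RISK_COUNTRIES:  # geography risk
        score += 25
    if txns_last_hour > 5:         # unusual velocity
        score += 20
    return score

def needs_review(amount: float, country: str, txns_last_hour: int,
                 threshold: int = 50) -> bool:
    """Flag the transaction for an analyst if the score crosses a threshold."""
    return score_transaction(amount, country, txns_last_hour) >= threshold

print(needs_review(9_500, "XX", 1))  # structuring pattern + high-risk geography
print(needs_review(120, "US", 2))    # routine payment
```

Real platforms learn behavioural baselines per account rather than using fixed rules, but the output is the same: a score that routes a transaction either to straight-through processing or to a review queue.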


The Bottom Line

AML compliance doesn’t need to be a burden. Community banks that automate early gain:

- Faster customer onboarding
- Reduced regulatory risk
- Scalable operations without hiring more compliance staff

Let iComply show you how to automate up to 90% of AML tasks—so your team can focus on serving your community, not battling spreadsheets.


FastID

DDoS in July

July 2025 DDoS attack trends: Fastly's report reveals infrequent but massive enterprise attacks & insights on attack volume, industries targeted, and company size.

Thursday, 14. August 2025

HYPR

The Cost of NYDFS Cybersecurity Noncompliance: What You Need to Know in 2025

The New York State Department of Financial Services (NYDFS) has long been a leader in setting cybersecurity standards for the financial services and insurance sectors. Under 23 NYCRR Part 500, regulated entities are required to implement a comprehensive cybersecurity program that addresses governance, access controls, incident response, and ongoing risk management.


As we move through 2025, NYDFS has signaled that enforcement is accelerating. The recent $2 million settlement with Healthplex, Inc., announced on August 14, 2025, underscores the steep cost of falling short. This case serves as a timely reminder for all covered entities: compliance is not a once-a-year paperwork exercise; it is a continuous obligation with real financial stakes.

What you need to know about NYDFS Cybersecurity Regulations

Part 500 applies to most banks, insurers, and financial service providers operating in New York. At its core, the regulation mandates that each covered entity maintain a written cybersecurity policy approved by the board, conduct periodic risk assessments, limit access to sensitive systems and data, and implement robust security measures such as phishing-resistant multi-factor authentication (MFA).

Equally important is the incident reporting requirement, which mandates that breaches meeting certain criteria must be reported to NYDFS within 72 hours of determination. In addition, every covered entity must file an annual certification of compliance, or acknowledgment of noncompliance, by April 15 each year.

What are the Key Requirements & Upcoming Deadlines?

In 2025, several deadlines and requirements should be top-of-mind for compliance teams. The annual compliance certification for the 2024 calendar year must be submitted by April 15, 2025. Before that filing, organizations must ensure their risk assessment is current and documented.

MFA enforcement is also a major focus for NYDFS this year. Covered entities are expected to have phishing-resistant MFA in place not only for remote network access but also for certain internal systems that handle sensitive information. The expectation is clear: email-only MFA or weaker second factors like SMS one-time codes no longer meet the standard.

Finally, the 72-hour breach reporting requirement remains one of the most critical obligations. Delays in reporting can lead to enforcement actions - even if the breach itself could not have been prevented.
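The fixed 72-hour window lends itself to automation: compute the deadline the moment a breach determination is logged and alert before it passes. A minimal sketch (the function names are illustrative, not NYDFS or HYPR tooling):

```python
# Sketch of an automated check for the NYDFS 72-hour reporting window.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def report_deadline(determined_at: datetime) -> datetime:
    """Part 500 requires notice within 72 hours of determination."""
    return determined_at + REPORTING_WINDOW

def is_overdue(determined_at: datetime, now: datetime) -> bool:
    """True once the reporting window has elapsed without a filing."""
    return now > report_deadline(determined_at)

determined = datetime(2025, 8, 1, 9, 0, tzinfo=timezone.utc)
print(report_deadline(determined))   # 2025-08-04 09:00:00+00:00
print(is_overdue(determined, determined + timedelta(hours=80)))   # True
```

Wiring a check like this into the incident-response workflow removes the human memory failure that, in cases like Healthplex's, turned a breach into a months-late filing.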

Healthplex Case Study - A $2 Million Lesson

The Healthplex enforcement action provides a clear example of what can happen when these requirements are not met. In this case, a service representative at Healthplex clicked on a phishing email, giving an attacker access to sensitive consumer data stored in the employee’s Outlook 365 account.

Several compliance failures compounded the incident. First, Healthplex had not deployed MFA for its email system, leaving it vulnerable to credential-based attacks. Second, the company lacked an email retention policy, meaning that sensitive data remained in mailboxes far longer than necessary, increasing exposure. Finally, Healthplex failed to notify NYDFS of the breach until more than four months after discovery – well beyond the mandated 72-hour reporting window.

The result was a $2 million penalty, mandatory remediation measures, and a requirement for independent cybersecurity audits focused on MFA deployment. The costs extended far beyond the fine itself, including reputational damage and the operational burden of implementing corrective actions under regulatory scrutiny.

The True Cost of Noncompliance

While the $2 million fine is headline-grabbing, the broader impact of NYDFS noncompliance is often far greater. Legal costs, remediation expenses, internal resource strain, and lost customer trust can quickly escalate. Regulatory investigations can also distract leadership and IT teams from strategic priorities, creating a sustained operational drag.

For regulated entities, noncompliance can also lead to increased cyber liability insurance premiums - or difficulty obtaining coverage at all. And reputational harm, especially in the financial and insurance sectors, can have lasting effects on customer acquisition and retention.

How to Stay Ahead of NYDFS

Proactive compliance requires more than simply meeting the bare minimum. Covered entities should:

Implement phishing-resistant MFA such as FIDO2 hardware keys or device-bound passkeys across all systems that store or process sensitive information. Automate breach detection and reporting to ensure the 72-hour notification rule is met without exception. Establish clear data retention policies to limit the amount of information that could be exposed in the event of a breach. Conduct annual independent audits to validate that cybersecurity controls meet or exceed NYDFS expectations.

By integrating these measures into their cybersecurity programs, organizations not only reduce enforcement risk but also strengthen overall resilience against evolving threats.

Conclusion

NYDFS has made one thing clear in 2025: compliance with 23 NYCRR Part 500 is not optional, and the cost of failure is steep. The Healthplex settlement illustrates how a single phishing email, combined with gaps in MFA, data retention, and reporting, can spiral into a multi-million-dollar regulatory penalty.

For financial and insurance organizations, the message is simple – treat NYDFS compliance as an ongoing operational imperative. Investing in phishing-resistant authentication, robust governance, and disciplined reporting processes can save millions and protect hard-earned reputations.

Learn how HYPR helps financial and insurance organizations exceed NYDFS requirements with passwordless, phishing-resistant MFA. 


Key Takeaways
- NYDFS is aggressively enforcing 23 NYCRR Part 500, and penalties are climbing.
- Annual compliance certification is due April 15, 2025; phishing-resistant MFA and timely breach reporting are top priorities.
- Healthplex’s $2 million fine shows the financial and reputational risks of noncompliance.
- Proactive, continuous compliance strengthens both security posture and business trust.

Extrimian

The Future of University Credentials: Secure, Transparent, and User-Friendly


For university leaders, registrars, student services, IT/security, and anyone who wants fewer emails, faster checks, and clearer truth—backed by cryptographic credentials.

TL;DR: What do you actually get, and how does it run on campus?

- Issuance that’s controlled and auditable: admins authenticate, prepare credential templates, and issue individually or in bulk from CSV, with a two-person approval policy you define. The portal logs who did what and when, so corrections are clean and traceable.
- A single verification page you can host or embed: verifiers scan a QR or use a public URL and get a clear result (Valid / Revoked / Incorrect/Unknown), with the option to copy and embed the verifier HTML in your own site.
- Admin UX that matches real registrar work: the backoffice lists admins and entities with statuses (Sent, Active, Revoked), supports invitation resend and disable/revoke with a reason, and offers a guided “Alta de Administrador y Entidad” flow that emails the invite with wallet links, QR, and deep link.
- Extrimian is an AI-first company: even though our AI agent Micelya is internal-only, it helps us build and deliver a more accurate product for our clients and end users. It improves our delivery speed and consistency (knowledge hub, handoffs, SOPs), which you experience as better support and a better product.

How can a university stop fake diplomas and identity theft?

What problem are we solving, right now, in your inbox? Stop fake diplomas with tamper-proof, cryptographically signed digital credentials and a one-page verifier. Students scan a QR; employers get answers in seconds.

PDFs and screenshots look official but aren’t proof. Every week, employers and partner schools ask for confirmations; staff forward attachments; someone “just checks quickly,” and doubt lingers. Extrimian moves trust from appearance to cryptographic proof: students share a link/QR; your public verifier (hosted or embedded) returns Valid / Revoked / Unknown issuer in seconds—no inbox ping-pong, no guesswork.

What’s the solution in plain words (no acronym soup)?

Give each important proof a tamper-proof, shareable version—and give the world one official place to check it.

1) Issue credentials with control and clarity

- Backoffice setup for admins & entities: list admins with status (Sent, Active, Revoked), resend invites, and revoke/disable with a reason. This keeps who-can-issue under tight control.
- Admin invites by email: your portal sends a welcome email with wallet links (iOS/Android), a QR to issue the admin credential, and a deep link for mobile. This is the clean onramp to wallet-based admin auth.
- ID Wallet sign-in for admins (login with credential): admins authenticate by presenting the admin credential’s QR, then land in their Home.
- Projects and points: the Admin Home shows your default Project and two panels, Issuing Points and Verification Points, with quick actions to create or edit each. This mirrors how registrars think about “where we issue” and “where we verify.”

2) Design and issue the right credential (one-off or at scale)

- Types & templates: for each Point of Emission, you manage Credential Types with an editable name, description, hero, icon, and background, plus attributes you can enable, disable, or add. You also get a live preview before saving, which reduces surprises at graduation time.
- Individual issuance: pick a template, fill in the recipient’s name and email, complete the dynamic attributes required by that template, preview, and Issue when everything’s correct.
- Bulk issuance (CSV): select a template, upload a .CSV, preview, and Issue Credentials when all fields validate. This is designed for large cohorts and reduces manual entry risk.
- Safe edit/copy flows: you can only edit a type before issuing; you can also duplicate a type to iterate safely without touching live cohorts.

3) Give the world a one-page verifier (hosted or embedded)

- Create a Verification Point: name it, choose the credential type, optionally set the issuer DID, add a webhook for events, and generate it.
- Publish & embed: copy the verifier HTML for your site and/or the public verifier URL hosted by Extrimian; both are provided at creation.
- What verifiers see: your public verifier shows your university name, verifier name, and QR; when a presentation arrives, it updates the status to Successful Verification or Incorrect Verification, plain language for third parties.
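The bulk-issuance step above implies a pre-flight validation pass over the uploaded CSV: every row must carry the attributes the template requires before anything is issued. A sketch of that check, with hypothetical column names rather than Extrimian's actual schema:

```python
# Pre-issuance CSV validation: collect valid rows and per-row errors,
# and only bulk-issue when the error list is empty.
import csv, io

REQUIRED = ["name", "email", "degree"]  # hypothetical template attributes

def validate_rows(csv_text: str) -> tuple[list[dict], list[str]]:
    """Return (valid rows, error messages)."""
    rows, errors = [], []
    # start=2: row 1 of the file is the header line
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        missing = [f for f in REQUIRED if not (row.get(f) or "").strip()]
        if missing:
            errors.append(f"row {i}: missing {', '.join(missing)}")
        else:
            rows.append(row)
    return rows, errors

sample = "name,email,degree\nAna,ana@uni.edu,BSc\nLuis,,MSc\n"
valid, errors = validate_rows(sample)
print(len(valid), errors)   # 1 ['row 3: missing email']
```

Surfacing errors with row numbers before issuing is what makes the "preview, then Issue Credentials" flow safe for large cohorts.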

Important clarity: Verification uses digital signatures and status (cryptography). We don’t run AI to “guess” authenticity. Your result is deterministic and transparent.
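The deterministic verify-then-check-status pattern can be sketched as follows. Real verifiable credentials use asymmetric signatures (e.g., Ed25519) and a published status list; this stdlib-only sketch substitutes HMAC for the signature so the example stays self-contained, and the key and revocation list are placeholders:

```python
# Deterministic verification: check the signature, then the status list.
# No AI is involved; the same inputs always give the same result.
import hmac, hashlib, json

SECRET = b"issuer-demo-key"   # placeholder; real issuers hold a key pair
REVOKED = {"cred-042"}        # illustrative revocation list

def sign(payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "Incorrect verification"   # payload was tampered with
    if payload.get("id") in REVOKED:
        return "Revoked"                  # signature valid, but superseded
    return "Successful verification"

diploma = {"id": "cred-001", "holder": "Ana", "degree": "BSc"}
sig = sign(diploma)
print(verify(diploma, sig))                       # Successful verification
print(verify({**diploma, "degree": "PhD"}, sig))  # Incorrect verification
```

Note the ordering: integrity first, then status. A tampered credential fails outright, while a genuine-but-revoked one returns a distinct outcome, matching the Valid / Revoked / Incorrect statuses the verifier page shows.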

How does this look in real university life (concrete, day-to-day use)?

- Diplomas without drama: students receive a digitally signed diploma they can share as a link/QR. Employers use your verifier once and get a clear result, not a long thread. If a typo slips through, the registrar follows revoke → re-issue → notify; the public link always shows the latest truth. (Flows supported by individual/bulk issuance, revocation controls, and the public verifier.)
- Enrollment status that respects privacy: most checks just need “enrolled this term.” You issue a minimal credential and point verifiers to your page. If status changes, the old one is Revoked, the new one is Valid, and external parties naturally see the right answer at the right time. (Backed by the verifier status and revocation model.)
- Transfers & course recognition without email chains: shareable course-completion credentials replace scans that go stale. The link stays constant while the truth stays current. (Template attributes + preview reduce errors before they happen.)
- Career fairs & outreach with momentum: students show a QR; recruiters scan at your public verifier URL and see Successful verification right there. (Easy to host or embed.)
- Alumni support that actually helps: years later, alumni can request a re-issue; you revoke the old and issue the new. Anyone using the old link sees Incorrect/Revoked verification and requests the updated proof. (Admin list and actions maintain control.)

Who does what so this runs smoothly (roles mapped to the portal)

- Registry & Academic Records: designs templates (name/description/hero/icon/background), sets attributes (on/off or new), and previews before saving; runs individual or bulk issuance; performs revocations and re-issues when needed; documents reasons for changes.
- IT & Security: controls admin authentication (credential-based login via QR), configures and verifies Verification Points (webhook, optional issuer DID), and embeds the verifier HTML or publishes the public verifier URL; ensures backups and uptime.
- Student Services & Comms: educates students to share a link/QR using the digital wallet on their phones instead of PDFs, guides employers and partners to the official verifier URL, and keeps a short FAQ aligned with the page’s Valid/Revoked/Incorrect outcomes.
- Leadership (Provost, CIO/CTO, Risk & Compliance): endorses the one-page policy, “Verify here; PDFs aren’t official proof,” and monitors adoption (usage of the public verifier vs. email requests). (Policy and messaging are supported by the portal’s embed/public URL model.)

See a live demo from UAGRO, one of our successful case studies: UAGRO – Students Credentials & Digital ID Wallet Demo

How do we handle privacy, consent, and accessibility—without slowing anyone down?

Minimum disclosure by design: verifiers see exactly what’s needed to trust a result—no more. The public verifier returns a status and human-readable guidance, not full records. (This is inherent to the verifier’s status model.)

Consent that makes sense: students control when to present their credential (via link/QR). Because the verification lives on your official page, the experience is consistent and auditable across departments and partners.

Clear language and supportability: outcomes are stated plainly (e.g., Successful verification), and you can embed the verifier into familiar web contexts to reduce friction for external parties.

Where does “AI-first” fit—and why should you care if it isn’t inside verification?

We keep AI out of the verification path. Your truth is based on cryptography and status, not AI guesses. Where AI helps is inside Extrimian, through our agent Micelya:

Shared Knowledge Hub: policies, templates, integration notes, and client context live in a role-based, searchable space so our teams respond with consistent, up-to-date answers. Faster handoffs and fewer do-overs: Micelya suggests next steps for our internal tasks (who approves, which template, what changed), so corrections move faster and communication is aligned. Continuous improvement that sticks: when we learn a better placement for the verifier link or a clearer outcome message, it enters our playbook and stays there—even as teams change.

You feel Micelya in response times, consistency, and smoother rollouts—not in your verifier stack.

FAQs about Extrimian Identity Verification Solution for Universities

Do you use AI to verify credentials?
No. Verification uses digital signatures and status checks only. The public verifier returns a deterministic result (Successful/unsuccessful verification) that doesn’t depend on AI.

Can we embed the verifier into our website?
Yes. When you create a Verification Point, the portal gives you HTML to embed and the public verifier URL. You can copy either—or both—depending on your deployment.

How do admins authenticate?
Through credential presentation (QR) on the Admin Login, which takes them to their Home. The initial admin credential is issued via the invitation email with wallet links, QR, and deep link.

How do we issue a whole cohort?
Use Mass Issuance: pick a template, upload a CSV, preview, and click Issue (Emitir) when checks pass. For single cases, use Individual Issue with dynamic attributes and preview.

How do partners verify?
They scan your QR or open your public verifier URL. The page shows university name, verifier name, and a status (Successful/unsuccessful verification) when a presentation arrives.

Contact us to avoid diploma and data fraud

Let’s map your flow and harden it, without slowing anyone down.
In one session, we’ll demo how you can issue diplomas and how the verification process works, identify quick wins, and hand you a short, clear plan: which credentials to start with, how your verification page should look, the approval steps to lock in, and the four KPIs you’ll track.

No jargon. No heavy lifting. Just a safer, calmer way to run credentials in the AI era, with Extrimian as your AI-first partner for security and trust.

Further reading & internal links Fundamentals of SSI (plain-English intro): https://academy.extrimian.io/fundamentals-of-ssi/
Integrate Solution (connect issuer/verifier to SIS/LMS): https://academy.extrimian.io/integrate-solution/
Masterclass (training for registrar & IT/security): https://academy.extrimian.io/masterclass/

Contact Extrimian (book a 30-minute review): https://extrimian.io/contact-us


The post The Future of University Credentials: Secure, Transparent, and User-Friendly first appeared on Extrimian.


Indicio

Portable Authenticated Biometrics 101

Portable Authenticated Biometrics are the future of Biometric Authentication, one where you can use your biometric information across platforms and services. Learn how this powerful new technology allows users to hold their sensitive biometric information securely on their mobile device and what benefits it can offer your organization.

By: Tim Spring

What are Portable Authenticated Biometrics?

Many people are familiar with the term Biometric Authentication. It refers to the use of unique physical characteristics to verify a person’s identity, such as their fingerprint, voice, or face scan, and many people use it daily to access their phones or other technologies.

Portable Authenticated Biometrics take these characteristics out of a siloed database and store them inside a Verifiable Credential on the user’s smartphone.

The problem they solve

Current methods of biometric authentication rely on databases of authenticated biometrics that have been verifiably tied to their users. The main benefits are that biometrics are harder to impersonate than traditional passwords, are more convenient for the user, and cannot be forgotten.

Unfortunately, as technology has advanced, we have realized a few major problems with this system: 

Large databases will always represent a lucrative target for bad actors. There is no amount of security that can guarantee that these large collections of personal information stay safe.

Current systems rely on a connection to the database to function. No internet or service means that you cannot share your biometric data or prove your identity, representing another point of failure.

Storing your biometrics with a third party means that they control that information and can use it as they see fit, including using it to track your digital or physical actions or sharing it without your consent.

How do Portable Authenticated Biometrics work?

Every time a new person is added to a biometric authentication system, a template of that person’s biometric data is created; the system learns what they look like. This template is what the system compares a new scan against when you try to access your phone or documents. In the system, these authenticated biometrics are tied to you and enable you alone to be granted access.

Portable Authenticated Biometrics are a method Indicio has created of allowing users to hold their biometric data securely on their mobile device. Because of the way it is stored (inside a Verifiable Credential) the data inside cannot be manipulated once the credential is created. 
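The template comparison described above can be sketched as a similarity check over feature vectors. The vectors and threshold below are made-up stand-ins for a real biometric engine's output, not Indicio's implementation:

```python
# Illustrative biometric template match: compare the enrolled template
# (a feature vector, here carried in the credential) against a fresh scan.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches(template: list[float], scan: list[float],
            threshold: float = 0.95) -> bool:
    """Accept the scan if it is close enough to the enrolled template."""
    return cosine_similarity(template, scan) >= threshold

enrolled = [0.1, 0.8, 0.3, 0.5]      # template stored in the credential
fresh    = [0.11, 0.79, 0.31, 0.48]  # new scan at authentication time
other    = [0.9, 0.1, 0.7, 0.2]      # a different person

print(matches(enrolled, fresh))   # True
print(matches(enrolled, other))   # False
```

The key shift is where the template lives: instead of the verifier querying a central database for it, the holder presents the template inside a tamper-evident credential alongside the fresh scan, and the comparison happens at the point of verification.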

The Benefits

The biggest benefit of this system is that a large database is no longer required to use biometric authentication. This reduces costs and liability for the organization, and offers a huge increase in security for the user’s data.

The second advantage is that by having each user submit their biometric template alongside the new scan, we get multi-factor authentication built in, without any additional effort from the user. All the increased security, with no need to check your email or text messages for a one-time code.

The third, and maybe most game-changing feature is the portability inherent in this system. If an organization that you trust — for example the government — issues someone a biometric credential, you can set your systems to accept that biometric information without needing the user to create a new biometric template with your organization. Think of the ease of “login with Google” but even more secure, and backed by any organization that you trust.

Benefits in Context: A Banking Call Center

Let’s walk through a quick example.

When calling your bank to fix an issue — such as a declined payment — you will need to prove your identity. Currently, most banks will ask for information like your name, account number, social security number, or answers to security questions to try to positively identify you before sharing any personal information. This process is not typically long, but neither is it particularly secure: a bad actor who has collected this information from a data breach can pretend to be you, increasing the risk of fraud and leaving it to the bank representative to catch the impersonation.

With Portable Authenticated Biometrics, the bank representative can digitally request your biometric scan and authenticated biometric to identify you instantly, in a way that cannot be impersonated. Once identification has been achieved, you can move on with the purpose of the call, without having to jump through any additional hoops, saving both the user and the call center time while reducing the chance of fraud.

Getting Started

The technology behind Portable Authenticated Biometrics is built to easily integrate into existing systems to create a faster, more secure experience for users. 

If you would like to learn more about Indicio’s system for streamlined user authentication and access management you can read about Indicio Proven here. If you are ready to have a more specific conversation about how to implement this system for your organization you can reach out to Indicio’s team of industry experts for a free consultation here.

The post Portable Authenticated Biometrics 101 appeared first on Indicio.


Holochain

Holochain Horizon: Foundation Forward

Blog

We recently hosted what will be the first of many livestream events for everyone in our community, a series we’re calling Holochain Horizon (for those who want to go to the primary source, here’s a link to that conversation).

Here, I want to do three things:

1. Provide both context for, and a summary of, this first conversation – especially as it gave an opportunity for many of you to hear directly from Madelynn Martiniere for the first time, who recently joined our board and is providing direct support to the leadership team and broad community as we move forward and build out our ecosystem.

2. Identify where we are right now as an organization – we’ve been working hard, for years, on an incredibly ambitious project: finding a path to building out open-source tech in a way that's actually viable in the world. While significant challenges remain, we benefit from having a clear picture of the often hard choices we have to make in order to make good on the promises we’ve made to both our community, and ourselves.

3. And finally, having first written directly about the organizational shifts needed to provide our developers and community with the time, and space, necessary to deliver on our commitments (here) in November 2024, I want to offer some specifics about what lies ahead – which, while I am admittedly biased, I am genuinely excited about.

ONE: Foundation Forward 

I began the call by characterizing myself – accurately – as a little nervous, partially because it was our first such event as the Foundation, and partially because I was and remain genuinely excited about the direction these key decisions have led us to.

At the highest level, as I’ve said, it means operationalizing the Holochain Foundation itself – a shift from IP stewardship to active and direct involvement and management. This allows the Foundation to hold coherence for all our stakeholders, internal and external, and to benefit from a strategic allocation of resources so that we can accelerate toward appropriately phased delivery.

Back in November I wrote that “part of our coming of age is realizing that we can’t do everything we might like. Focus matters.” From a technology infrastructure perspective, that means strategically advancing the capability, and durability, of Holochain. 

As Madelynn and I discussed on the call, we clearly recognize the need to engage with our community and like-minded partners via formalized processes that will migrate one-off engagements to defined projects that benefit everyone by advancing the infrastructure itself. 

Madelynn has a lifetime of experience in building healthy and robust technology ecosystems, and practically speaking, that means much of her role is to continuously iterate on improving surface areas of engagement for all types of folks adopting Holochain, from individual developers to enterprises and organizations looking for robust decentralized solutions to tough problems. A big part of that is engagement and communication, and as Madelynn herself said, her role is to ask, “how do we create processes and pathways for the community to be in deeper dialogue with us about what it is that they're building? How can we best support that? How do we engage them in actual development?”

So far, the concrete steps we’ve taken in bringing the Foundation forward include strengthening the technical team, along with a corresponding improvement in release structure and quality, and enhanced transparency, as embodied by our operational roadmap (which you can see here) to provide the community with a clearly delineated roadmap showing the scoping, planning, and evolution of our ongoing work.

For our community, the takeaway should be clear: the Foundation’s leadership, and the organization itself, are orienting around proactive engagement to move us – all of us – from the strategic to the tactical. 

TWO: Where Are We Now?

On the call, I made a plain but accurate observation: “to build out open-source tech in a way that's actually viable in the world… is a hard problem.”

From an organizational perspective, we’re evolving to meet the world as it is becoming. As I previously wrote, this means operationalizing the Foundation to ensure that while we’re always mindful of our ambitions, we remain connected and committed to action. In turn, that means constant and deliberate self-interrogation, making sure we have the right resources delivered to solve for the most important problems.

We talked about it at length in the livestream, but a clear example and core initiative for us is the continued build-out of our “Wind Tunnel” performance testing framework. One of the conundrums of technology is that while it sometimes appears there is unlimited capital to build out certain ideas (it is hard to observe without jealousy the trillions of dollars that have been dumped into AI), there is proportionately much less patience. Distributed technologies, by their nature, demand economic patience: they are a half step slower to commercialize because the very decentralization creates different economic incentive structures. From a performance perspective, decentralized systems also have a different profile due to their architecture.

This is what makes Wind Tunnel so important: we want (and developers need) to be able to verify that Holochain’s operating envelope will meet the demands of Holochain applications. And that's what Wind Tunnel can do. It allows developers to create a scenario to drive a network of Holochain nodes, see what happens, record the rates at which data is synchronized (or any other parameter they want to measure, like DHT synchronization speeds, CPU usage, bandwidth usage across different nodes, etc.) and have the metrics reported. 
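As a rough illustration of the measure-and-report pattern described above (a scenario drives a set of nodes, records a chosen metric, and reports aggregates), here is a hypothetical Python sketch. Wind Tunnel itself is a separate framework with its own API; the names and the simulated sync delay below are invented for illustration only.

```python
import statistics
import time

class ScenarioMetrics:
    """Collects named metric samples and reports simple aggregates."""
    def __init__(self):
        self.samples = {}  # metric name -> list of recorded values

    def record(self, name: str, value: float):
        self.samples.setdefault(name, []).append(value)

    def report(self) -> dict:
        return {
            name: {"min": min(vals), "mean": statistics.mean(vals), "max": max(vals)}
            for name, vals in self.samples.items()
        }

def run_scenario(num_nodes: int, writes_per_node: int) -> dict:
    """Drive a (simulated) network of nodes and record how long each write takes to sync."""
    metrics = ScenarioMetrics()
    for node in range(num_nodes):
        for _ in range(writes_per_node):
            start = time.perf_counter()
            time.sleep(0.001 * (node + 1))  # stand-in for "write, then wait for peers to sync"
            metrics.record("sync_seconds", time.perf_counter() - start)
    return metrics.report()

report = run_scenario(num_nodes=3, writes_per_node=2)
assert report["sync_seconds"]["min"] <= report["sync_seconds"]["mean"] <= report["sync_seconds"]["max"]
```

In the real framework the recorded parameters would be things like DHT synchronization speed, CPU usage, or bandwidth per node, as listed above.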

THREE: Where We’re Going

Having shifted the structure of our organization, and as we continue to evolve and direct our resources at our highest-priority opportunities, you can expect to see some exciting developments in the near term. 

In particular, in our past configuration we spent a significant amount of time developing the Holochain app and infrastructure necessary to support HoloFuel. Effecting the conversion of HOT into HoloFuel – a mutual credit currency anchored in the value created by Holo hosting – has, from the beginning, been a stated goal.

Though we already knew the concept of HoloFuel had a much broader application as a pattern, we also realized that we could implement a generalized version for mutual credit currencies that other decentralized infrastructure projects could use, while also creating more value for current HOT holders. 

Accounting for value flow, and creating a fabric for establishing rules and systems to support and govern these flows, enables networks, communities, economies, and cultures to grow and thrive. Recognizing this opportunity led to our strategic decision to create Unyt, a separate subsidiary organization dedicated to building exactly that generalized mutual credit framework.

While Unyt is in its early days, they’re getting close to being able to launch beta versions of their multi-unit accounting framework and open them to our community for testing via a scavenger hunt. I won’t say too much more here about Unyt, but expect to hear more from them soon. 

More broadly, we’re working hard on supporting this and other key initiatives at the Foundation that we believe will not only represent significant milestones, but genuinely put us on a path to delivering on the vision we’ve had, and shared, since Holochain’s inception.

Thanks to everyone for your continued support, and confidence.

Eric 


Elliptic

OFAC targets use of stablecoins for Russian sanctions evasion

OFAC has today targeted a number of businesses and individuals linked to the use of stablecoins for Russian sanctions evasion. The following entities involved in this activity were added to the Specially Designated Nationals list:



Radiant Logic

Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity

Discover how Radiant Logic’s SCIMv2 support simplifies identity management, enabling seamless automation, governance, and Zero Trust alignment across hybrid environments. The post Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity appeared first on Radiant Logic.

Aergo

Stablecoins built the bridge for money. Noosphere builds the bridge for AI.

TL;DR
Stablecoins bridge traditional finance and the crypto-native world, enabling payments, global liquidity, and Web3 growth. In the AI economy, that bridge is native off-chain computation and verifiable inference, directly linking AI workloads to blockchain trust. HPP’s Noosphere delivers this at the protocol level, unlocking scalable, trustworthy AI integration.

Today, many of Korea’s largest companies, including Naver, Toss, and Kakao, are preparing to launch their own stablecoins.

Why? Because stablecoins are the essential entry point for entering and expanding into the crypto-native ecosystem.

In traditional finance, value transfer is limited by banking rails, operating hours, and jurisdictional boundaries. Stablecoins remove those barriers, enabling:

- Frictionless on/off-ramps between fiat and crypto
- 24/7, borderless settlement for payments, remittances, and commerce
- Direct integration into DeFi, GameFi, NFT, and RWA markets without requiring volatile assets
- Programmable money that can be embedded into smart contracts, loyalty programs, and digital marketplaces

Beyond domestic use, these tokens also position Korean tech giants for global Web3 expansion, enabling them to directly integrate into international crypto liquidity, DeFi protocols, and cross-chain payment networks.

In short, stablecoins are not just a payment tool; they serve as a strategic bridge from Web2 scale to Web3 opportunities.

If so, what is the essential gateway to the AI economy?

In the same sense, off-chain computation and inference are a must for AI-native infrastructures. Just as stablecoins serve as a bridge between traditional finance and the crypto-native ecosystem, off-chain computing acts as a bridge between AI workloads and blockchain trust.

If a blockchain project claims to be “AI-powered” but lacks AI-native infrastructure, such as native off-chain computation, verifiable inference, governance over AI agents, and protocol-level integration, it is merely a marketing label and not a genuine AI platform.

On-chain environments are excellent for verification, consensus, and transparency, but they are not optimized for heavy computation or real-time AI inference. That’s why it must be natively implemented at the protocol level, not added later through an oracle.

This is the design principle behind HPP’s Noosphere:

- Protocol-native off-chain AI execution for inference, data aggregation, and simulation
- On-chain verification to ensure results are correct and tamper-proof
- Scalability for enterprise-grade and consumer-facing applications without congesting the main chain
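The split described above (heavy computation off-chain, cheap verification on-chain) can be sketched in miniature. This toy example shows only a hash commitment binding inputs to a claimed result; Noosphere's actual verifiable-inference machinery is far more involved, and a real on-chain verifier would check a succinct proof rather than recompute anything.

```python
import hashlib
import json

def off_chain_compute(inputs: list) -> dict:
    """Heavy work happens off-chain; only the result and a commitment go on-chain."""
    result = sum(x * x for x in inputs)  # stand-in for an expensive AI inference
    payload = json.dumps({"inputs": inputs, "result": result}, sort_keys=True)
    return {"result": result, "commitment": hashlib.sha256(payload.encode()).hexdigest()}

def on_chain_verify(inputs: list, claimed: dict) -> bool:
    """The chain cheaply checks the commitment instead of redoing the work
    (a real system would verify a succinct proof, not rebuild the payload)."""
    payload = json.dumps({"inputs": inputs, "result": claimed["result"]}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest() == claimed["commitment"]

job = off_chain_compute([1.0, 2.0, 3.0])
assert on_chain_verify([1.0, 2.0, 3.0], job)       # honest result accepted

tampered = {**job, "result": 999.0}
assert not on_chain_verify([1.0, 2.0, 3.0], tampered)  # altered result rejected
```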

Potential Use Cases:

- Enterprise: Fraud detection in financial services, medical diagnostics in healthcare, real-time logistics optimization in supply chain networks
- DeFi: AI-driven trading strategies, dynamic risk assessment, predictive yield optimization
- Identity & Security: Instant biometric verification, decentralized KYC/AML checks
- RWA & NFTs: Dynamic NFTs that change with market or environmental data, real-time asset valuation for tokenized real-world assets

By embedding Noosphere directly into HPP, developers gain a built-in, verifiable AI execution layer, not a fragile add-on dependent on external services.

Just as stablecoins open the door to Web3 adoption, Noosphere unlocks scalable, trustworthy AI integration for blockchain ecosystems.

Stablecoins built the bridge for money. Noosphere builds the bridge for AI. was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Updated Terms and Policies

We’re updating the language in our terms and policies to better explain our approach and provide more detail.

Since launching Bluesky two years ago, we’ve grown tremendously. As our community has expanded, feedback on our terms of service, community guidelines, copyright, and privacy policies has surfaced opportunities to improve clarity. With more experience under our belt and an evolving regulatory landscape, we’re updating the language in our terms and policies to better explain our approach and provide more detail.

For our Community Guidelines, we’re asking for input from the community. The proposed guidelines enhance clarity, add user safety details, and provide more transparency around moderation. You’ll have until August 28th to submit comments, and then they’ll go into effect on October 15th. You can view our updated Community Guidelines on our Support Page.

Our Terms of Service have been updated to reflect new legal requirements and give users more control in case of disagreements. Changes include clarifying eligibility and age assurance to comply with new regional regulations, as well as introducing a formal appeals process. We’ve also expanded our dispute resolution section to prioritize informal resolution.

The new Terms of Service, Privacy Policy, and Copyright Policy will go into effect on September 15th. You can view these updated policies on our Support Page.

Below is an overview of what’s being updated:

1. Community Guidelines

We believe the best policies are created in partnership with the people they serve. Our draft Guidelines incorporate lessons from our community's growth and feedback, and your input will help us ensure they're ready to support Bluesky's future. We started by creating a draft that we think improves on our current Guidelines. Below are some of the updates we’ve proposed; here is the form to use for feedback.

- Clearer Structure: Organized around four key principles—Safety First, Respect Others, Be Authentic, and Follow the Rules—with specific examples of allowed and prohibited content under each.
- Harm Categories Clarified: Updated with more examples, to align with the UK Online Safety Act (OSA), the EU Digital Services Act (DSA), and the US TAKE IT DOWN Act.
- Enforcement Procedures: Introduced progressive enforcement model and clarified content moderation logic as required by the OSA and DSA.
- Appeals & Redress: Aligned with the DSA and now includes out-of-court and judicial remedy guidance for EEA users.

2. Terms of Service (ToS)

- Eligibility and Age Assurance Section: Updated to clarify compliance with online safety laws and regulations (e.g., US Children's Online Privacy Protection Act [COPPA], OSA, DSA) and to require age assurance where necessary.
- Moderation and Illegal Content: Added detail on proactive moderation and illegal content responses to comply with the DSA.
- Complaints and Appeals: Introduced detailed appeals process to comply with the DSA.
- Dispute process: We have provided users with greater control in the event that we have a disagreement. We’ve added details about using the informal dispute resolution process, including an agreement that we will talk on the phone before proceeding to any formal dispute process. This is because we think most disputes can be resolved informally. We’ve added a specific detail around liability: if a user makes a claim that we did something wrong that caused certain types of harm, they can choose to resolve that claim in court rather than through arbitration. We are giving users a choice when it comes to any dispute we must resolve through arbitration. That means users choose one arbitrator, Bluesky chooses one arbitrator, and those two arbitrators choose a third.

3. Privacy Policy

- User Rights: Enhanced transparency around data subject rights under the EU and UK General Data Protection Regulation (GDPR) and other global privacy laws.
- International Data Transfers: Updated to explain safeguards used for transfers outside the EU and UK.
- Retention and Deletion: Strengthened clarity on deletion limitations due to decentralized architecture; consistent with DSA and data minimization requirements.
- Jurisdiction-Specific Sections: Added information that applies to some of our users based on the jurisdiction where they live.

4. Copyright Policy

- Streamlined Takedown Procedure: Ensures compliance with the US Digital Millennium Copyright Act (DMCA), DSA, and similar laws.
- Abusive Reporting Clause: Added mechanisms to deter fraudulent takedown misuse, compliant with the DSA.
- Transparency Reporting: Aligned with the DSA by clarifying that we will include required information in transparency reports.

FastID

Request Collapsing Demystified

Boost website performance with request collapsing! Learn how it improves efficiency, reduces origin load, and optimizes caching for a snappy user experience.

Wednesday, 13. August 2025

1Kosmos BlockID

How 1Kosmos Became the Reference Architecture for Modern Digital Identity


On the cusp of our series B funding, and as I look ahead to many big and bright developments for our company and for Identity and Access Management at large, I can’t help but look back at the vision and design intent behind the 1Kosmos platform and how it became reality.

Eight years ago, we had a simple but audacious goal: fix digital identity once and for all. Not with incremental improvements, but by rebuilding the entire foundation from scratch. Looking back now, our early design decisions have become the blueprint that the entire industry follows.

The vision was clear. People were drowning in passwords, companies were struggling with ransomware and breaches, and personal data was being treated as a corporate asset rather than a fundamental right. We knew that any real solution would need to solve identity verification and passwordless authentication simultaneously, not as separate problems but as two sides of the same coin.

What we didn’t anticipate was how quickly our architectural approach would become the standard. Today, when industry analysts discuss “best practices” in digital identity, they’re describing the principles we built into 1Kosmos from day one.

Identity Verification: Inclusion Was Always the Goal

When we designed our verification system, we made a controversial decision. While everyone else bet everything on smartphone-first experiences, we insisted on building multiple pathways that would work for everyone. The mobile experience had to be exceptional, but it couldn’t be the only option.

That decision proved prescient. Organizations discovered that when you make identity verification truly accessible, adoption rates soar. Our verification architecture combines real document authentication with live biometric matching across multiple platforms.

Looking ahead, this foundation is proving essential for the digital wallet revolution. As governments and enterprises begin issuing verified credentials, the ability to verify identity anytime, anywhere and with or without a mobile phone proves to be a core requirement, not just a nice-to-have feature.

Authentication: The Distributed Biometric Breakthrough

The breakthrough that really set us apart came from our approach to biometric authentication. Everyone else was building bigger central databases. We asked a fundamental question: what if we could authenticate users without ever storing their biometric data centrally?

The answer was distributed biometric verification across a private blockchain. Your face or fingerprint gets distributed and encrypted in ways that neutralize threats from centralized breaches. You can authenticate anywhere in our network, but your biometric data never leaves your control.

This architecture has become what security experts call the gold standard for biometric authentication. The user experience is seamless—your face becomes your password—but there’s simply no central target for attackers because there’s no central database to breach.

Privacy: Building What We Couldn’t See

Perhaps our most important early decision was to architect the entire platform around data we couldn’t access ourselves. This wasn’t just about compliance—it was a fundamental design constraint that shaped every technical choice we made.

We built zero-knowledge proof capabilities into the core platform. We could verify that someone was over 21 without knowing their exact birthdate or confirm employment status without accessing salary information.
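As a toy illustration of the disclosure property (the verifier learns only the predicate, never the underlying attribute), consider the sketch below. It is not a zero-knowledge proof and not 1Kosmos's implementation: the issuer simply signs a derived boolean claim, and an HMAC stands in for a real signature scheme.

```python
import hashlib
import hmac
import json
from datetime import date

ISSUER_KEY = b"demo-issuer-key"  # illustrative; real credentials use asymmetric signatures

def sign(claims: dict) -> str:
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def issue_age_predicate(birthdate: date, today: date) -> dict:
    """Issuer derives and signs only the predicate, never the birthdate itself."""
    age = (today - birthdate).days // 365  # approximate age; fine for a toy example
    claims = {"over_21": age >= 21}        # the verifier sees this boolean, nothing more
    return {"claims": claims, "signature": sign(claims)}

def verifier_accepts(credential: dict) -> bool:
    return (
        hmac.compare_digest(sign(credential["claims"]), credential["signature"])
        and credential["claims"]["over_21"]
    )

cred = issue_age_predicate(date(1990, 6, 1), today=date(2025, 8, 13))
assert "birthdate" not in cred["claims"]  # the sensitive attribute is never disclosed
assert verifier_accepts(cred)
```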

This privacy-first architecture seemed radical when we first deployed it. Now it’s becoming a requirement. As verified credentials become mainstream, the ability to selectively disclose information through zero-knowledge proofs will transform everything from border crossings to loan applications.

Verified Credentials: The Network Effect We Envisioned

When we started building support for verified credentials, we were betting on something that didn’t quite exist yet. The standards were emerging, use cases were theoretical, and most organizations had never heard the term. But we could see where things were heading.

Today, verified credentials are transforming how organizations think about identity and access. The employee badge, professional license, customer verification—all can now exist as cryptographically signed digital credentials that work across platforms and organizations.

Our early investment is paying dividends! Organizations using 1Kosmos can issue credentials to employees that work seamlessly with partners’ systems. Customers get verified once and use that verification across multiple services. The network effects we envisioned are becoming reality.

Decentralized Identity: The Vision Realized

The most ambitious part of our original vision was true decentralized identity—putting users in complete control while maintaining the security that organizations require. Users own their identity information completely. Organizations get stronger security and easier compliance. The system becomes more resilient as it grows.

The decentralized approach has proven essential as digital wallets evolve from concept to reality. When your identity isn’t locked in corporate databases, you can present it anywhere, anytime, for any purpose you authorize. The wallet becomes truly portable because the identity itself is truly yours.

As we look ahead, this foundation supports use cases we’re only beginning to explore. International travel with digital passports. Seamless access across different countries. Professional credentials that work globally. Age verification that protects privacy completely.

The Reference Platform: Looking Forward

What started as our vision for fixing digital identity has become the reference architecture that defines how modern identity platforms should work. When analysts evaluate new solutions, they measure them against capabilities we pioneered. When enterprises set requirements, they’re describing features we built years ago.

Looking forward, the most exciting applications are just beginning. Digital wallets that work across borders and platforms. Verified credentials that enable instant, private verification of any attribute. Zero-knowledge proofs that let you prove exactly what you need without revealing anything else.

The next chapter is already being written. Your digital wallet will soon hold not just payment cards but professional licenses, educational credentials, government documents, and membership cards—all cryptographically verified and completely under your control.

The 1Kosmos architecture is ready for this future because we built it into the foundation from the beginning.

The infrastructure is built. The standards are emerging. The future of digital identity isn’t coming—it’s here, working, and ready to transform how we interact with the digital world.

Ready to see how the 1Kosmos reference architecture can transform your organization’s approach to identity and access? Let’s talk about what’s possible.

The post How 1Kosmos Became the Reference Architecture for Modern Digital Identity appeared first on 1Kosmos.


liminal (was OWI)

Link Index for AI Data Governance 2025

The post Link Index for AI Data Governance 2025 appeared first on Liminal.co.

Elliptic

Eight areas where crypto may already be in your banking ecosystem

Where do cryptoassets intersect with your banking operations? It sounds like a simple question, but it isn’t. Digital assets often have many more touchpoints than financial institutions realize. It’s important to be aware of these touchpoints, so your coordinated teams can develop comprehensive risk management practices and better serve evolving customer expectations.



Okta

Find the intersection of security, AI, IAM, and fun at Oktane


AI is taking over the world by storm! This year, AI is our focus at Oktane. We want to ensure you have the tools, the know-how, and solutions to keep your software systems secure, from traditional user apps to AI agents.

We can’t wait to meet you and hear about your application needs and challenges. Join us at Caesars Forum in Las Vegas, NV, on September 24-26, 2025, for Oktane, and let’s nerd out on security, AI, and identity. Throw in a dash of fun for good measure!

We planned engaging events to help you navigate the evolving world of AI and identity. Stop by and chat with us at these activities:

Stop by the Oktane Developer Lounge

Find our lounge in the Oktane Expo Hall, where you’ll discover the ways Okta can help you create secure applications with human and non-human identities. 🤖

Learn more about securing AI with Cross App Access, and hear lightning talks about on-point security and identity topics!

You’ll also have a chance to connect with identity experts from our friends on the Developer Support teams. Do you have a question about your Okta implementation? The Developer Support team is here to help!

We want your feedback on our documentation! Visit our booth’s interactive games and tell us how you learn about identity concepts. Your input will help us organize and present our docs in a clearer, more intuitive way.

We’ll also have more going on in the Developer Lounge. You won’t want to miss out on the action.

Check out the Oktane hands-on labs for interactive learning opportunities

Roll up your sleeves and get your coding on. This is your chance to build code using Okta solutions and network with like-minded developers. Sign up for admin and developer labs and save your spot for great hands-on experiences such as:

Secure Your Enterprise AI with the new OAuth Extension Protocol Cross App Access (XAA) and Model Context Protocol (MCP)
Secure AI access to enterprise applications using the new OAuth Cross App Access (XAA) extension and Model Context Protocol (MCP)

Terraform 101: Automating Okta
Learn the basics of Terraform and get hands-on using it to manage an Okta tenant

Use Okta Identity Governance to Replace Standing Admin Access with Time-Bound Requests
Get more out of Okta Identity Governance and reduce your attack surface by leveraging Workflows to streamline least-privilege access for administrative permissions

Use Possession-proof Tokens to Protect Your Apps with Okta
Leverage the OAuth 2.0 Demonstrating Proof of Possession (DPoP) spec to add an extra protection mechanism to access tokens. This lab upgrades a Single Page Application (SPA) from using an OAuth 2.0 Bearer access token to a more secure DPoP-bound token.
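For a sense of what a DPoP proof contains, here is a sketch that assembles the header and claims of a DPoP proof JWT as defined in RFC 9449. The signing step is deliberately omitted (a real proof must be signed with the client's private key, e.g. ES256, whose public part goes in the `jwk` header), and the key values shown are placeholders.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JOSE requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_dpop_proof_parts(method: str, url: str) -> str:
    """Assemble the (unsigned) header and claims segments of a DPoP proof JWT."""
    header = {
        "typ": "dpop+jwt",
        "alg": "ES256",
        "jwk": {"kty": "EC", "crv": "P-256", "x": "<public-x>", "y": "<public-y>"},
    }
    claims = {
        "jti": str(uuid.uuid4()),  # unique ID so the server can reject replayed proofs
        "htm": method,             # HTTP method the proof is bound to
        "htu": url,                # target URI, without query or fragment
        "iat": int(time.time()),   # issued-at timestamp
    }
    # A real proof appends ".<signature>" computed over these two segments.
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

unsigned = build_dpop_proof_parts("POST", "https://server.example.com/token")
claims_b64 = unsigned.split(".")[1]
padding = "=" * (-len(claims_b64) % 4)
decoded = json.loads(base64.urlsafe_b64decode(claims_b64 + padding))
assert decoded["htm"] == "POST"
```

Because `htm` and `htu` are inside the signed proof, a stolen access token cannot be replayed against a different endpoint without the client's private key; that is the extra protection the lab builds on.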

Okta Workflows community meetup

Join the Okta Workflows community meetup during Oktane 2025 in Las Vegas. Meet Workflows community members, colleagues, and friends over drinks and delicious appetizers.

Find resources, solutions, and networking opportunities at Oktane

We’re excited to connect with you and learn about your application needs! Please find us at Oktane, and feel free to comment if you have any questions or requests in the meantime.

Remember to follow us on Twitter and subscribe to our YouTube channel for exciting content.


FastID

Maximizing Compute Performance with Log Explorer & Insights

Monitor and troubleshoot Fastly Compute services with Log Explorer & Insights. Gain granular insights, optimize performance, and debug faster for efficient applications.

Tuesday, 12. August 2025

Indicio

Virtual identity verification in Finance

In a recent Indicio Meetup, Elijah Levine, CEO of Black Mountain, Indicio CEO Heather Dahl, and CTO Ken Ebert discussed the digital transformation of banking and finance, driven not only by decentralization and digital assets, but by consumer preference for mobile interaction. An opportunity or a migraine? That all comes down to digital identity — and how banks and financial services manage customer authentication.

By Tim Spring

Banking and financial services are in the mobile age, whether they like it or not. With 55% of customers preferring mobile banking, adaptation to a world of seamless digital interaction isn’t a choice. Many customers — but especially those who have grown up digitally native — don’t see the point of going into a brick and mortar building any more; they want to open an account, manage their finances, and access services as they do everything else — through their phones.

Here’s the problem. The scale of this opportunity to reinvent banking is anchored to a pre-digital, show-up-in-person world. And it’s a drag.

“We’re working with the biggest and best bank in the world,” said Elijah Levine, CEO of Black Mountain at the Indicio Meetup, “and for them to verify our identity, we need to bring two physical forms of identity into an in-person branch for a physical person to do a touch and feel test… using their banker’s best judgement for if this paper document is real.”

How, then, do you onboard and authenticate a new customer remotely — especially when paper documents and pictures can be easily forged with ubiquitous AI tools?

Verifiable Credentials with biometrics: A revolution in identity verification

A Verifiable Credential makes information tamper-proof and cryptographically verifiable — meaning you don’t have to check the data in a credential against the same data stored somewhere in a database.

For a customer logging into an account this means no more passwords or usernames — by presenting an account credential from a digital wallet, they gain seamless access to their account. What this means for both the bank and the customer is that a key vulnerability — stolen login credentials — is removed as a security risk. You can’t phish for data that doesn’t exist.
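The tamper-evidence property can be illustrated with a toy sketch. Real Verifiable Credentials use asymmetric proofs (e.g. Ed25519 or ECDSA per the W3C VC Data Model), so a verifier never needs the issuer's secret; HMAC stands in here only to show that any change to the claims breaks the proof, with a made-up credential and key:

```python
import hashlib
import hmac
import json

# Illustrative only: production VCs use asymmetric signatures, so
# verification needs no shared secret. HMAC stands in here purely to
# demonstrate tamper evidence without third-party libraries.
ISSUER_KEY = b"demo-issuer-secret"

def sign_claims(claims: dict) -> str:
    """Canonicalize the claims and compute a proof value over them."""
    canonical = json.dumps(claims, sort_keys=True, separators=(",", ":"))
    return hmac.new(ISSUER_KEY, canonical.encode(), hashlib.sha256).hexdigest()

def verify(claims: dict, proof: str) -> bool:
    """Recompute the proof; no database lookup of the claims is needed."""
    return hmac.compare_digest(sign_claims(claims), proof)

credential = {"type": "AccountCredential", "accountId": "12345", "tier": "gold"}
proof = sign_claims(credential)

assert verify(credential, proof)                          # untouched credential verifies
assert not verify(dict(credential, tier="platinum"), proof)  # any edit breaks the proof
```

The point is the last two lines: the verifier checks the presented data against the proof itself, not against a copy of the data held elsewhere.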

Verifying the credential doesn’t just cover the customer: the bank’s identity is verified too, so the customer can be certain it is their bank. Again, this shuts down a common phishing tactic of luring customers with emails or SMS messages that purport to be from their bank.

But Verifiable Credentials reach peak usability when they combine verified biometrics with biographical data. By using Indicio software, a bank can enable a customer to turn the data in their passport into a credential, a process that includes taking the embedded image in the passport and performing a real-time face map of the passport holder to ensure both match.

“The beauty of combining a verifiable credential with a biometric,” said Ken Ebert, CTO of Indicio, “is that it can bind the data in the credential to the person who is presenting it and in a way that’s stronger than just having a set of data in a file… If you’ve bound the biometrics to opening the wallet as well, you can tell that the person is not only present, but they’re the one that the credential was issued to. And you can match that data to the data in the credential for purposes of ascertaining that you’re dealing with the right person.”

This functionality allows us to verify not only the data presented, but also that the correct user is present at the time of submission, making it essentially impossible to present someone else’s credentials without their consent.

It also provides a simple way to dodge a deepfake: simply ask the person to present an authenticated biometric. The verifying software will compare the live image to the biometric in the credential to see if it matches.

All this allows customers to verify their identity information from anywhere without the need to bring their paper documents to a building. And there’s no need for a bank or a relying party to have to store a person’s biometrics to verify them, simplifying the stringent privacy compliance around biometric data too.

What does this mean for KYC?

This technology is poised to change KYC, but it doesn’t need to be an all-or-nothing replacement of current infrastructure. “Although this is a new and cool technology,” said Ebert, “it doesn’t have to immediately supplant everything that’s already going on in KYC. It’s an add-on. It’s an enhancement. It’s a boost of confidence or a higher assurance level. It’s an immediate benefit, but it doesn’t have to be a rip and replace. It can integrate with existing systems. That’s part of the beauty of it — we’re not reinventing KYC.”

But KYC will be reinvented. Gradually, the collection of physical documents will become unnecessary, as everything can be turned into a Verifiable Credential and leverage portable digital trust.

You can start using Verifiable Credentials to transform IDV right now.

Want to start offering virtual identity verification? Indicio Proven contains all the components you need to quickly set up Verifiable Credentials for your users and manage accounts and access through your admin portal.  Just ask us for a demo and we’ll show you how easy it is to get started. Or, if you’d like a more customized solution our team would be happy to offer a free consultation and work with you to meet your exact requirements.

The post Virtual identity verification in Finance appeared first on Indicio.


Trinsic Podcast: Future of ID

Thomas Mayfield – Building Interoperable Web3 Identity with the Veridian Platform

In this episode of The Future of Identity Podcast, I’m joined by Thomas Mayfield, Head of Decentralized Trust & Identity Solutions at the Cardano Foundation. Thomas leads the development of the Veridian Wallet, an open-source digital identity platform built on the KERI (Key Event Receipt Infrastructure) protocol and funded by the Foundation.

Our conversation explores the rapidly evolving Web3 digital identity ecosystem—and how Veridian aims to bridge Web2 and Web3 with universal interoperable identifiers that cut through today’s fragmented identity landscape. We also dig into the growing urgency to rebuild digital trust as data breaches, ransomware, and AI-powered threats escalate.

In this episode we explore:

Why interoperability—across Web2, Web3, and beyond—is essential to breaking down identity “walled gardens.”
How the KERI protocol enables quantum-proof, tamper-evident, and recoverable identifiers for individuals, organizations, and AI agents.
Real-world adoption: how the United Nations is using Veridian for organizational identity and passwordless authentication.
The potential for verifiable IoT and AI agent identities to transform trust in machine-to-machine and human-to-machine interactions.
How developers can leverage Veridian’s open-source infrastructure, sandbox environments, and tooling to build secure, compliant identity solutions faster.
The role of regulation in driving adoption—and why future-proofing identity systems now could save billions in breach-related costs.

This episode is essential listening for anyone working on decentralized identity—whether you’re building infrastructure, integrating identity into products, or shaping policy. Thomas offers a rare, in-depth look at how to design for both future-proof security and real-world interoperability.

Enjoy the episode, and don’t forget to share it with others who are passionate about the future of identity!

Learn more about the Cardano Foundation.

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.

Listen to the full episode on Apple Podcasts or Spotify, or find all ways to listen at trinsic.id/podcast.


ComplyCube

How to Compare KYC Platforms: A Feature-by-Feature Checklist

Comparing leading KYC platforms can help firms effectively evaluate and decide the right provider. However, knowing how to compare KYC providers effectively may look different for each company, depending on their unique case. The post How to Compare KYC Platforms: A Feature-by-Feature Checklist first appeared on ComplyCube.



Dock

NetBr Partners With Dock Labs to Streamline Identity Across IAM and CIAM Systems

NetBr, a leading Brazilian cybersecurity company specializing in identity and access management for the largest companies in Latin America, announced the integration of Dock Labs’ verifiable credential technology to enhance and extend the capabilities of existing Identity and Access Management systems. This integration will enable NetBr’s clients to reuse and share verified ID data across departments, business units, and partner ecosystems, streamlining onboarding and access.


Spherical Cow Consulting

Agentic AI in the Open Standards Community: Standards Work or Just Hype?

“If you want to follow what’s happening in AI, it helps to know where the conversations are happening.”

That doesn’t just mean the headlines and white papers; it means the standards bodies, working groups, and protocol discussions shaping the infrastructure AI systems will have to live with (and live inside). Some of these efforts put “AI” right in the name. Others are quietly solving problems that have been around for a while, which AI has now made urgent.

At IETF 123 in Madrid, AI topics were everywhere, sometimes explicitly, sometimes not. Just like every other event I’ve been to this year, it’s clear that AI is no longer a side topic. But it’s also not one big monolith. A working group with “AI” in the title might be useful, or it might be entirely orthogonal to the problems you’re facing. And meanwhile, some of the most critical technical work is happening in groups that never mention AI at all.

This post is a snapshot of both: a look at where the “AI conversations” are happening in the standards world, and where the deeper technical groundwork is being laid, whether or not anyone’s calling it AI.

A Digital Identity Digest: Agentic AI in the Open Standards Community: Standards Work or Just Hype? (podcast episode, 12:37)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Where AI is the elephant in the room

Some of the most relevant work wasn’t framed as AI-specific at all… at least, not when it started.

Delegation chaining, for example, is a topic that’s been simmering in OAuth land for a while. The identity chaining draft defines a way to preserve identity and authorization information across trust domains. Useful for distributed architectures in general and now getting a lot more attention thanks to agentic AI models that need to act across domains, on behalf of users, and maybe other agents.

If you’re designing systems that involve third-party APIs, partner orchestration, or AI-driven workflows, this isn’t theoretical. It’s the difference between “this agent can complete a task” and “this agent just leaked PII across environments you can’t audit.” (This is often what’s happening right now; it’s a terrifying prospect, but I digress.)
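The identity chaining draft builds on OAuth 2.0 Token Exchange (RFC 8693). As a sketch of the moving parts, here is the form body a client in one trust domain would POST to its authorization server to get a token usable downstream; the token and audience values are placeholders:

```python
from urllib.parse import urlencode

def token_exchange_request(subject_token: str, audience: str) -> dict:
    """Build the form body for an OAuth 2.0 Token Exchange (RFC 8693)
    request, the building block the identity-chaining draft uses to
    preserve identity and authorization across trust domains."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The token proving who the agent is acting for:
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # The downstream trust domain the exchanged token is meant for:
        "audience": audience,
    }

body = token_exchange_request("eyJ...user-token", "https://partner-api.example.com")
encoded = urlencode(body)  # sent as application/x-www-form-urlencoded
```

The authorization server can then mint a new token scoped to that audience, instead of the agent replaying the user's original credential everywhere.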

Same story for WIMSE (Workload Identity in Multisystem Environments). AI doesn’t appear in the charter, but the group is wrestling with exactly the kinds of problems that show up when AI agents act like software workloads, make API calls, and need identity and trust across services.

These efforts weren’t built for AI, but they are shaping the environment in which AI agents will operate.

Where AI is the headline

There’s also a growing set of efforts waving the AI banner from the start. Here are a few places to watch if you want to keep a product roadmap aligned with emerging standards and activities.

AI Preferences (IETF AIPREF)

This working group is focused on standardizing how people (and systems acting on their behalf) express preferences about how their data is used in AI systems. Think training, inference, and deployment. Their charter is about giving users the power to say “yes,” “no,” or “only under these conditions.”

Why this matters: Consent banners and privacy policies are blunt instruments. If your app collects user content, you might soon need a finer-grained way to handle “don’t train on this” or “only use for personalization.” Product teams working on personalization, LLM features, or customer data ingestion should keep this on their radar.
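AIPREF's vocabulary is still being drafted, so the field names below are hypothetical, not from the spec; the sketch just shows what a finer-grained, deny-by-default preference check could look like in application code:

```python
# Hypothetical preference record: AIPREF's actual vocabulary is still
# in draft, so these keys are illustrative only.
user_prefs = {
    "train-ai": False,        # do not use this content to train models
    "inference": True,        # content may be processed at inference time
    "personalization": True,  # content may personalize this user's experience
}

def usage_allowed(prefs: dict, purpose: str) -> bool:
    """Deny by default: a purpose the user never addressed is not allowed."""
    return prefs.get(purpose, False)

assert usage_allowed(user_prefs, "inference")
assert not usage_allowed(user_prefs, "train-ai")
assert not usage_allowed(user_prefs, "biometric-analysis")  # unlisted, so denied
```

Whatever syntax the working group lands on, the product-side question is the same: every ingestion path needs a checkpoint like `usage_allowed` before data flows into training or personalization.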

Web Bot Authentication (BoF)

Born out of a hallway conversation, the Web Bot Authentication group is asking what it means to authenticate bots—especially AI-powered ones—when they interact with websites meant for humans.

Why this matters: If your web properties are being used (or abused) by AI scrapers, this work could define how to tell the difference between legitimate agents and free-riders. This could impact content licensing models, rate-limiting strategies, and even customer support bots.
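The bot-authentication discussion draws on HTTP Message Signatures (RFC 9421). As a sketch (the label and key ID are placeholders, and the actual signature bytes would go in a companion `Signature` header), here is how the `Signature-Input` value a well-behaved bot might send is structured:

```python
import time

def signature_input_header(label: str, components: list, keyid: str) -> str:
    """Build a Signature-Input header value in RFC 9421's structured
    syntax. The matching Signature header carries the signature bytes,
    produced with the bot's private key."""
    covered = " ".join(f'"{c}"' for c in components)
    params = f';created={int(time.time())};keyid="{keyid}"'
    return f"{label}=({covered}){params}"

# A crawler identifying itself when fetching a page:
hdr = signature_input_header(
    "sig1",
    ["@method", "@target-uri", "@authority"],  # request parts being signed
    "bot-key-2025",                            # placeholder key identifier
)
```

A site operator who can resolve `keyid` to a known crawler's public key gets a cryptographic basis for telling legitimate agents from free-riders, rather than guessing from User-Agent strings.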

AI Agent Protocol (side meeting)

This one hasn’t formalized into a working group yet, but a side meeting at IETF 123 kicked off discussions about protocols for AI agents to act autonomously online by invoking APIs, collaborating with each other, making decisions, etc.

Why this matters: If you’re building or integrating with AI agents—anything from internal copilots to customer-facing assistants—expect questions soon about how they authenticate, how their actions are logged, and what delegation looks like at runtime.

(Also, please don’t schedule the next AI Agent meeting opposite WIMSE again. Some of us have to clone ourselves as-is.)

Beyond the IETF

Other standards bodies are also entering the fray. Here’s a quick tour of where else things are heating up:

W3C AI Agent Protocol Community Group (CG) is developing protocols for AI agents to find each other, identify themselves, and collaborate across the web. It’s early days, but think of it as DNS and HTTP for agentic AI.

W3C AI KR CG is focused on knowledge representation, i.e., how to structure information so AI systems (and people) can reason over it consistently. It is relevant to anyone dealing with search, ontologies, or explainability.

OpenID Foundation AI Identity Management CG is mapping out how identity systems need to adapt to agentic AI. It’s not creating protocols (yet), but its members are watching government regulation closely.

Signals to watch

Standards are slow… until they’re not. You don’t need to read every draft, but here are some signs that these efforts are going mainstream:

MCP (Model Context Protocol), which lets AI agents act autonomously by invoking APIs or services, is not a standard, but it’s being adopted or piloted by major platforms like cloud providers and browsers. To function securely, it depends on underlying standards for identity chaining, authentication, and authorization—things like OAuth, delegation models, and token handling.
Vendor AI agent SDKs start referencing delegation models or bot authentication best practices.
Your compliance team starts asking about AI consent and model provenance.

When that happens, product managers will need to have answers or at least know where to look for them.

If you’re building anything touched by AI

This is just one slice of what’s happening in the standards space. No one—myself included—can keep up with it all. And if I try to AI-clone myself, who knows what hallucinations might creep in! But hopefully there’s enough cross-pollination between these (and other) efforts that we won’t be reinventing wheels or missing blind spots entirely.

If you’re an architect, engineer, or product leader, now’s a good time to:

Start mapping where AI agents (or their proxies) may interact with your system
Review your assumptions about trust, delegation, and human intent
Assign someone to monitor the relevant working groups or participate, if you can

Standards work isn’t glamorous, but it’s how the internet keeps functioning. And right now, the decisions being made will shape how agentic AI interacts with everything from your login flows to your support tools.

With luck—and a little planning—the next wave of automation won’t break the web. Or your roadmap.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]

Transcript

00:00:26 Welcome back to the Digital Identity Digest.

Today we’re diving into the latest round of AI buzz — but from the standards world. Specifically, we’ll unpack what happened at IETF 123 in Madrid and how it connects to a much bigger, messier, and louder story: the infrastructure needed to support AI.

00:00:51 If you feel like there’s way too much going on in AI and standards right now to keep track of, you’re absolutely correct.

00:01:00 One of my goals in this episode is to give you a map — not of every working group or proposal, but of the most relevant conversations shaping the AI systems your teams will build on, run into, or be regulated by.

Agentic AI and Why It Matters

00:01:22 Let’s talk about agentic AI, because it’s especially interesting.

00:01:28 The term refers to AI systems that can take autonomous action.

Large language models integrated into agents that can invoke APIs
Systems that make decisions and complete multi-step tasks
Agents that interact across systems — for a user, or even for another agent

00:01:53 This is a big shift. Like most computing shifts, it won’t work unless the plumbing underneath is solid — identity delegation, authentication, policy enforcement.

00:02:18 So where is that plumbing discussed? Some of it happens in AI-specific groups, but much of the critical work is in mature standards groups that don’t even mention AI in their charters.

Delegation Chaining

00:02:51 One example is delegation chaining in OAuth/authorization.

00:03:01 This draft defines a way to preserve identity and authorization across multiple trust domains.

Why it matters:

AI agents often act on behalf of users across multiple systems
Without it, product teams end up “duct taping” credentials to every interaction — not scalable
A scheduling agent booking travel crosses multiple trust boundaries

00:03:44 This work began before AI hype took off — but agentic AI makes it urgent.

Workload Identity in Multisystem Environments (WIMSE)

00:04:05 Another crucial effort is WIMSE — short for Workload Identity in Multisystem Environments.

00:04:21 It tackles how services, bots, APIs, and AI agents assert identity across environments.

Relevant for agentic AI because these identities aren’t tied to human sessions
Helps establish runtime identity for autonomous systems

00:04:52 Takeaway: If it doesn’t say AI, it can still be vital to AI infrastructure.

AI-Focused Standards Groups at IETF

00:05:01 Of course, there are groups with AI in their name and charter.

AI Preferences Working Group (AI-Pref)

00:05:08 This group is creating a standard way for users (or systems) to express data-use preferences for AI:

Training
Inference
Deployment

The aim is to move beyond vague privacy policies toward technical mechanisms for enforcing user preferences.

Web Bot Authentication (BoF)

00:05:58 A discussion about how bots — especially AI-powered ones — should identify themselves when accessing human-oriented websites.

Questions under debate:

Are bots allowed?
How should they authenticate?
How do we distinguish helpful agents from malicious scrapers?
Who’s accountable when things go wrong?

AI Agent Protocol (Side Meeting)

00:07:02 This informal discussion explored whether the IETF should standardize protocols for AI agents to discover, invoke, and communicate.

Connections to existing work:

MCP (Model Context Protocol) is emerging in pilots
Secure use depends on OAuth delegation chaining and other identity models

Beyond IETF: W3C and OpenID Foundation

00:08:14 Standards work isn’t just at the IETF.

W3C Community Groups:

AI Agent Protocol CG – protocols for how agents identify, collaborate, and operate on the web
AI Knowledge Representation CG – structuring domain knowledge so AI systems can reason and explain themselves

OpenID Foundation:

AI Identity Management CG – mapping use cases, identifying gaps, tracking regulations
Not building protocols, but providing a regulatory and technical landscape view

What Product Teams Should Do

00:09:27 For product managers and executives, here are the practical takeaways:

Understand where delegation fits in your systems
Define identity for non-human actors — avoid relying on user credentials
Implement technical enforcement of consent for AI agent actions
Track compliance triggers early to avoid future architectural rework

00:10:44 Watch for signals:

MCP or delegation models adopted by major vendors
New authentication guidance for bots and agents
Increased compliance chatter about AI-related access

Final Thoughts

00:11:10 This is just one slice of a fast-moving standards space.

If the right people connect across groups, we can avoid duplication, fill gaps, and lay the groundwork for agentic AI that’s safe, scalable, and standards-aligned.

00:11:39 Keep your eye on the standards — even if your platform isn’t “AI-first,” its infrastructure is being shaped right now.

00:12:01 Thanks for listening. If you found this helpful, share it, connect with me on LinkedIn, and subscribe for more conversations that matter.

The post Agentic AI in the Open Standards Community: Standards Work or Just Hype? appeared first on Spherical Cow Consulting.


1Kosmos BlockID

Founders and Team Members Are Investing Their Own Personal Capital in the Series B – We’re Backing Our Vision!

When a company’s own founders and team members dig deeper into their pockets to invest more of their personal money, you know something big is happening. That’s exactly what just occurred at 1Kosmos—and it’s sending ripples through the digital identity world.

Today we announced $57M in Series B funding, bringing our total funding to $72M. But here’s what makes this round extraordinary: it was led by Forgepoint Capital and Origami Capital’s Oquirrh Ventures, with every prior investor participating, and included new funding from some of our own team members.

This blows past the typical $38M Series B, but the real story isn’t the size—it’s who’s writing the checks.

Momentum Building to a Crescendo

This funding caps off a remarkable year of wins. We’ve achieved 3x revenue growth while becoming the only full-service Credential Service Provider with both FedRAMP High authorization and NIST 800-63-3 certification. KuppingerCole named us a leader across Product, Innovation, and Overall categories in their 2025 Leadership Compass for Identity Verification and passwordless authentication, and we scored “perfect” marks in Presentation Attack Detection level 2 (PAD 2) certification.

Our new Microsoft partnership for External Authentication Methods to Entra ID and the Carahsoft-awarded Login.gov blanket purchase agreement worth up to $194.5M prove the market is ready for what we’re building. Add our 1Kosmos biometric security key (aka 1Key) for frontline workers with LiveID (real biometrics), and it’s clear why everyone—including us—wanted in on this round.

Riding the Digital Identity Wave

Digital identity has become one of tech’s hottest sectors, and for good reason. On the heels of warnings from none other than Sam Altman about a coming “fraud crisis,” in which artificial intelligence enables bad actors to impersonate other people, major breaches and rising identity fraud have exposed fatal flaws in passwords and fragmented ID checks.

Recent news of Palo Alto Networks acquiring CyberArk provides further evidence of a turning point in identity. Cybersecurity stalwarts that traditionally considered identity verification peripheral to their mission now recognize it as the new security perimeter. It’s clear organizations worldwide are scrambling for solutions that deliver both security and user-friendly identity verification experiences.

The shift to zero trust security models puts verified digital identity at the center of access control. Cybersecurity investors are flocking to identity-focused startups because these solutions tackle the most pressing issues in our digital-first world.

What’s Next: Putting $57M to Work

This funding enables us to accelerate on three fronts: expanding our engineering team to push product innovation faster, scaling our go-to-market efforts globally, and building partnerships that bring passwordless multi-factor authentication to more organizations.

Our engineering team is already using powerful AI tools in areas such as prototyping UI/UX designs to reduce cycle times, and recommending which microservices to update, data models to use, and APIs to build to speed implementation planning. We also have site reliability engineering in our sights, using AI to handle complex operational functions independently, moving beyond simple automation to true autonomous decision-making. We’ll continue to work harder and smarter in engineering.

With the injection of fresh capital, Marketing and Sales will move beyond regional focus, tapping into global demand for improved identity and authentication services. Partnering will also benefit from the cash infusion. We plan to engage large global systems integrators, regional systems integrators, and various government entities to expand the 1Kosmos footprint into ready industries including Healthcare, FED/SLED, and Manufacturing.

The included $10M credit line also gives us financial flexibility to pursue growth opportunities without immediate dilution—letting us move fast when the right deals or talent emerge.

Our platform uniquely addresses both identity proofing and authentication, providing comprehensive protection against identity fraud and account takeover. Organizations need to stop bad actors without creating obstacles for legitimate users, and that’s exactly what 1Kosmos delivers.

The Bottom Line

When founders and executives invest their own money alongside institutional investors, it sends an unmistakable signal: we’re not just confident in our pitch deck—we’re betting our personal wealth on our vision.

With $57M in the bank and our team doubling down with their own money, we’re not just riding the digital identity wave—we’re creating it. The mission we founded 1Kosmos on—secure digital identity under user control, enabling ubiquitous access that can’t be stolen, borrowed or shared—is no longer just a vision.

It’s happening. And we’re just getting started.

The post Founders and Team Members Are Investing Their Own Personal Capital in the Series B – We’re Backing Our Vision! appeared first on 1Kosmos.


IDnow

How ETSI and CEN standards are shaping the future of digital identity in Europe, one regulation at a time.

Behind every digital interaction is a technical standard working quietly in the background. Here, we explore some of the most important in the digital identity space and explain how they’ll affect how you verify customers very soon.

Over the past few years, the European Union (EU) has garnered much attention in its goal to transform how people interact with governments, businesses, and each other online.  

Central to its mission is the rollout of the European Digital Identity (EUDI) Wallet, a secure, portable digital identity solution set to become the foundation for cross-border services, e-government, and AML-compliant onboarding.  

Two regulations in particular, eIDAS 2.0 and the Anti-Money Laundering Regulation (AMLR), are redefining how identity and trust are managed across the EU. 

At the core of the transformation lie technical standards developed by the European Telecommunications Standards Institute (ETSI), the European Committee for Standardization (CEN), and the International Organization for Standardization (ISO). These are legally binding through Commission Implementing Regulations (CIRs), which lay the groundwork for secure, interoperable, and compliant digital identity services.

For example, earlier this year, IDnow became one of the first companies in Europe to receive certification under the latest ETSI standard for remote identity proofing, signifying a crucial step toward meeting eIDAS 2.0 and AMLD 6 requirements. One of the most important technical standards we achieved was the ETSI 119 461 v2.1.1 certification, which is widely considered the ‘compliance benchmark’ for remote identity verification in Europe.

Technical standards ensure interoperability across borders, promote security by design, and create the certainty businesses need to scale.  

At IDnow, we’ve championed the EU’s vision for a more harmonized approach to digital identity from the start. Our priority has always been to promote open, secure standards that allow technical innovation to thrive. In a regulatory environment that began shifting from voluntary best practices to mandatory rules, we welcome the clarity and opportunity that harmonized standards bring to the market, to institutions, and above all, to users. 

For IDnow’s regulatory and standards department, it is important to keep looking ahead to what comes next and how it will affect people’s everyday lives. Standards play a crucial role in ensuring that impact is not only manageable, but truly beneficial. This remains the bedrock of our mission.

The Trust Playbook. Discover how top teams from your industry are turning digital identity and trust into growth strategies. Download now.

Why standards matter.

Behind every digital interaction, such as verifying your identity for bank account opening or digitally signing a contract, there are technical standards working quietly in the background. These standards ensure that systems speak the same language, that data remains secure, and that services can be trusted no matter where in the EU (or beyond) they are used. 

Organizations like ETSI, CEN, and ISO play a critical role in setting these benchmarks. They bring together experts from governments, tech companies, and the public sector to define how trust services, digital identity, and secure onboarding should work. 

As the EUDI Wallet and new trust services move from concept to implementation, these standards are not just technical specifications or guidelines; they are becoming the legal foundation for the way digital identity is handled across Europe.

Laying the foundation for trust.

The shift toward standards-driven digital identity and trust services offers better technology and builds a harmonized, secure, and legally enforceable ecosystem across borders. With the EUDI Wallet and the upcoming AMLR, Europe is setting a global precedent in how identity, privacy, and trust should be handled in the digital age. 

Standards from ETSI and CEN are already embedded in the first wave of the EU’s CIRs and will directly guide how identities are verified, wallets are certified, and trust services are recognized. These aren’t abstract technical documents anymore; they’re becoming the blueprint for how millions of people will access digital services, from banking to healthcare. 

What often goes unseen is the deep, ongoing work that makes this possible. At IDnow, our regulatory and standards team has been deeply engaged behind the scenes as we work alongside our colleagues at ETSI, CEN and ISO to shape these standards in a way that balances security, innovation, and practical deployment. It’s this kind of collaboration that enables digital services to scale with confidence and compliance. 

More specifically, IDnow has recently contributed to the following standards and technical reports:

ETSI TR 119 476-1: A feasibility study of how selective disclosure and zero-knowledge proofs can be implemented in the EUDI Wallet. IDnow was one of the main contributors to this technical report.
ETSI TS 119 461: Requirements that define how identity proofing must be performed when enrolling for a Qualified Certificate or a Qualified Electronic Attestation of Attributes. The standard can also be applied to Personal Identification Data (PID) enrollment to the EUDI Wallet and to AMLR guidelines. IDnow has been a co-editor of and contributor to both versions of this standard.
ETSI TS 119 431: Specifies how remote signing with Qualified Certificates should be deployed in a remote Qualified Signature Creation Device. IDnow designed the new approach to rely on identification only for one-time Qualified Electronic Signatures, which streamlines the remote signing process.
CEN TS 18098: Outlines how to onboard PID to EUDI Wallets. IDnow has been a co-editor of the chapters related to identity proofing during the onboarding process.

Regardless of whether you’re a wallet provider, financial institution, or public sector authority, now is the time to take a close look at how your internal systems align with these evolving standards. After all, these specifications are not optional; they are rapidly becoming the rulebook for trust in Europe.

Want to stay ahead of the regulatory game? Read our recent press release to discover how ‘IDnow sets new standard as one of Europe’s first identity verification providers to meet latest eIDAS 2.0 regulations.’

Interested in more insights from our subject matter experts? Click below!

Former INTERPOL Coordinator, and current Forensic Document Examiner at IDnow, Daniela Djidrovska explains why IDnow offers document fraud training to every customer, regardless of sector.
Research Scientist in the Biometrics Team at IDnow, Elmokhtar Mohamed Moussa explores the dangers of face verification bias and what steps must be taken to eradicate it.
Research Scientist at IDnow, Nathan Ramoly explores the dangers of deepfakes and explains how identity verification can help businesses stay one step ahead of the fraudsters and build real trust in a digital world.
One of the Heads of Product at IDnow, Jonathan Underwood shares his eight defining moments from the history of identity verification and ponders what’s coming next.

By

Sebastian Elfors
Senior Architect
Connect with Sebastian on LinkedIn


IDnow sets new standard as one of Europe’s first identity verification providers to meet latest eIDAS 2.0 regulations

Munich, August 12, 2025 – IDnow, a leading identity verification platform provider in Europe, today announced that several of its flagship products had achieved ETSI 119 461 v2.1.1 certification, the technical standard widely considered the ‘compliance benchmark’ for remote identity verification in Europe. 

Developed by the European Telecommunications Standards Institute (ETSI), ETSI 119 461 v2.1.1 was selected by the European Commission as the standard for AML-compliant identity verification for qualified trust services and the upcoming Anti-Money Laundering Regulation (AMLR).

The certification was awarded following rigorous testing by the accredited conformity assessment body, QSCert.  

Obtaining certification in ETSI 119 461 v2.1.1 establishes IDnow as one of the first providers in Europe to fulfill the stringent biometric and security standards necessary for compliant identity verification in line with evolving European regulations, such as eIDAS 2.0 and AMLD6.  

Why this matters for European businesses 

With rising threats from deepfakes and increasingly sophisticated types of online fraud, especially in the finance, mobility, and telecom sectors, the ETSI 119 461 v2.1.1 standard outlines a comprehensive European framework for compliant identity verification services, including requirements for presentation attack detection, injection attack detection, and biometric-integrity assurance. 

“This milestone is not only a technical and security achievement; it is a practical commitment to advancing digital onboarding in the most regulated industries. We’re proud to lead the way in enabling compliant, secure, and user-friendly identity verification across all major digital channels.”

Armin Berghaus, Founder and Managing Director at IDnow. 
What comes next 

As one of the first identity verification providers to be certified under the latest eIDAS 2.0 requirements and the Extended Level of Identity Proofing (LoIP) security standards, IDnow enables customers to adopt a variety of compliant identity verification solutions, each of which fulfills existing and upcoming European regulatory expectations and trust service requirements: 

Expert-led video identity verification
Automated identity verification
NFC (Near Field Communication) identity verification
Electronic ID card (eID) verification
EU Digital Identity (EUDI) Wallet verification (which all EU banks will need to accept by 2027)

All of the above options are supported by IDnow’s identity verification and fraud prevention platform, which combines certified biometric checks, real-time fraud prevention, and seamless orchestration. 

By 2027, all banks that operate in Europe will be required to work with providers certified to the ETSI 119 461 v2.1.1 standard, giving IDnow customers peace of mind today that its range of expert-led, automated, and wallet-based identity verification solutions meets existing and upcoming EU regulatory requirements. 

“This latest certification confirms IDnow’s position as a trusted and future-proof technology partner for regulated businesses across Europe,” added Berghaus. “It represents our intention for IDnow to continue to provide the most flexible and future-proof identity verification and fraud prevention platform for businesses navigating complex European compliance and customer experience demands.”


iComply Investor Services Inc.

AML Compliance for Credit Unions: Global Trends and Member-Centric Solutions

Facing rising AML expectations, credit unions must modernize compliance. This article explains global KYB and KYC standards—and how iComply helps automate and streamline the process.

Credit unions worldwide are facing increasing AML scrutiny, especially in Canada, the U.S., UK, and Australia. This article explores KYB, KYC, KYT, and AML expectations in these jurisdictions, and shows how iComply helps automate up to 90% of compliance tasks—while preserving member privacy and trust.

Credit unions are the lifeblood of community banking across many of the world’s leading economies. From rural Canada to urban Australia, they offer cooperative financial services rooted in trust, mutual benefit, and member care. But in 2025, those same institutions are being held to banking-grade compliance standards—particularly when it comes to anti-money laundering (AML) and counter-terrorist financing (CTF).

With national regulators ramping up inspections and issuing new guidance, credit unions must modernize their approach to KYB, KYC, AML, and even KYT – without alienating members or overwhelming staff.

Global AML Expectations for Credit Unions

Canada
Regulator: FINTRAC (federal), BCFSA or FSRA (provincial)
Requirements: Identity verification for members and beneficial owners, ongoing PEP/sanctions screening, transaction monitoring, and suspicious activity reporting

United States
Regulator: NCUA, FinCEN
Requirements: CDD rule compliance, beneficial ownership verification for legal entity accounts, SAR filing, and compliance with the Corporate Transparency Act (CTA)

United Kingdom
Regulator: FCA and PRA
Requirements: Customer due diligence, screening against the UK Sanctions List, ongoing monitoring, and robust AML/CTF controls under MLR 2017

Australia
Regulator: AUSTRAC
Requirements: Member identification, source of funds checks, transaction monitoring, suspicious matter reporting (SMRs), and annual AML program reviews

What Credit Unions Must Do

To comply across jurisdictions, credit unions typically must:

Verify identities of natural persons and business account holders
Conduct beneficial ownership checks for corporate members
Screen members and transactions for PEPs, sanctions, and suspicious patterns
Maintain audit-ready documentation and report to regulators

Why Compliance Is Especially Challenging for Credit Unions

Lean compliance teams and manual review processes
Multiple disconnected systems for ID, screening, and reporting
Tight budgets with little room for complex vendor integration
Member-first culture that resists high-friction onboarding

How iComply Helps

iComply is built for the unique needs of credit unions—offering modular, privacy-first compliance tools that work with your existing systems and workflows.

1. KYC + KYB with Edge Processing
Natural person and legal entity verification using edge computing
No raw PII leaves the member’s device unencrypted
Compliant with GDPR, PIPEDA, and local privacy laws

2. Automated Beneficial Ownership Checks
Visual mapping and verification of UBOs
Screening for nominees and shell structures
Risk-based logic for escalation or enhanced due diligence

3. Continuous AML Monitoring
Sanctions, PEP, and adverse media screening
Configurable triggers for transaction behaviour or geographic risk
Integrated case management with audit trail

4. Simplified Workflows for Staff and Members
White-labeled member portals
No-code policy editor for compliance teams
Instant alerts, reports, and regulatory-ready exports

Real-World Efficiency Gains

Credit unions using iComply have:

Reduced onboarding time from 30–60 minutes to under 10 minutes per member
Cut AML false positives by over 40%
Passed regulator audits with zero material findings

The Bottom Line

AML compliance isn’t optional, and the expectations are only rising. But for credit unions, the right technology makes it possible to:

Comply confidently across Canada, the U.S., UK, and Australia
Protect member trust with private, secure onboarding
Automate 90% of compliance tasks while scaling membership

Talk to iComply today to explore how we can help your credit union stay compliant, efficient, and member-focused—wherever you operate.


PingTalk

Gain a Competitive Edge with Unified Customer & Identity Profiles

Unify customer identity with CRM and CDP systems to power real-time personalization, boost trust, and drive ROI with a modern CIAM strategy.

Consumer expectations have reached an all-time high, and they’re demanding more than just personalized marketing. They expect every interaction to reflect who they are, what they want, and where they are in their journey. For digital leaders and marketing teams alike, this creates both urgency and opportunity: how to deliver unified, real-time experiences across channels while ensuring data privacy and trust.


Historically, this has been a complex and manual effort. Integrating identity provider (IdP) data with customer relationship management (CRM) systems often required custom development, while building user flows demanded coordination across multiple siloed teams. The result was disjointed customer journeys, delayed launches, and limited visibility into the customer identity lifecycle.


Fortunately, today’s modern customer identity and access management (CIAM) systems have changed the game. Digital leaders now achieve true end-to-end integration far beyond just single sign-on (SSO), using out-of-the-box (OOTB) connectors and no-code orchestration tools. Identity data can flow directly into marketing, analytics, and customer experience (CX) platforms, unlocking seamless omnichannel engagement with speed, accuracy, and confidence.


Turing Space

Taiwanese blockchain startup Turing Certs enters European campus, students can expect to receive blockchain diplomas

Commercial Times: Hsinchu Senior High School partners with Turing Certs to issue bilingual Chinese-English digital diplomas to its graduates, bringing an innovative upgrade to the traditional paper diploma.
Turing Newsroom


2025/08/12





Media contact | marketing@turingspace.co


Data Verification Blockchain Technology Expected to Solve Problems Derived from Digital Transformation


2025/08/12





Embarking on a New Overseas Journey: G Camp Startup Team Showcases Taiwan’s Technological Innovation Strength at Web Summit


2025/08/12





Turing Space’s CEO Jeff Hu and CTO Henry Hang Named to Forbes 30 Under 30 Asia


2025/08/12





FastID

Trust at Scale with Fastly Image Optimizer and C2PA

Fastly Image Optimizer now supports C2PA, enabling verifiable content authenticity. Combat misinformation and build trust with secure image provenance at scale.

Demystifying Fastly’s Defense Against HTTP Desynchronization Attacks

Learn how Fastly's robust architecture and strict protocol parsing defend against HTTP desynchronization attacks, ensuring your web applications are secure.

Monday, 11. August 2025

Spruce Systems

Revolutionizing Supply Chain Transparency

Verifiable digital credentials (VDCs) are transforming how companies verify suppliers, authenticate products, and prove ethical sourcing.
The Supply Chain Transparency Crisis

Global supply chains are under more pressure than ever. Delays in verifying suppliers, counterfeit goods slipping through the cracks, and unproven ethical sourcing claims have left retailers and manufacturers scrambling to protect trust. Traditional verification systems (slow, manual, and vulnerable to manipulation) can’t keep up.

Without instant proof of business registration, certifications, or compliance status, production stalls. Counterfeit goods remain a multi-billion-dollar problem, undermining brand reputations and putting consumers at risk. Ethical sourcing claims often rely on unverified labels, leaving “fair trade” or “sustainably sourced” promises open to doubt.

How VDCs Transform Supply Chain Operations

Verifiable digital credentials (VDCs) are cryptographically signed records that can be instantly checked for authenticity. In supply chains, they replace the patchwork of PDFs, phone calls, and manual forms with secure, real-time verification.

They work across global trade systems, integrating with logistics software, ERP platforms, and customs processes. Whether it’s confirming a supplier overseas or verifying a product’s compliance with safety regulations, VDCs allow companies to act in seconds instead of weeks.
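The issue-then-verify flow described above can be sketched in a few lines. This is a toy illustration only, not SpruceID's API: real verifiable credentials use public-key signatures (e.g. Ed25519 under the W3C VC data model), whereas this sketch substitutes an HMAC with a shared demo key so it stays standard-library-only, and all names and claims are hypothetical.

```python
import hmac, hashlib, json

SECRET = b"issuer-demo-key"  # placeholder shared key; real issuers sign with a private key

def issue_credential(claims: dict) -> dict:
    # Serialize claims deterministically, then attach a tamper-evident proof.
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": sig}

def verify_credential(cred: dict) -> bool:
    # Recompute the proof over the presented claims and compare in constant time.
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["proof"])

cred = issue_credential({"supplier": "Acme Textiles", "cert": "Fair Trade"})
assert verify_credential(cred)        # untampered credential verifies
cred["claims"]["cert"] = "Organic"    # alter a claim after issuance...
assert not verify_credential(cred)    # ...and verification fails
```

A production verifier would additionally check the credential's revocation status and resolve the issuer's public key through a trust registry.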

Real-World Examples of VDC Impact

Walmart

Faced with delays, counterfeits, and unverifiable ethical claims, Walmart implemented VDCs to:

Verify suppliers in seconds with cryptographic certainty.
Authenticate products from source to shelf.
Prove ethical sourcing through tamper-proof records.

The result: faster time to market, stronger consumer trust, and reduced risk.

Amazon

Amazon is exploring VDCs to streamline marketplace seller onboarding, reducing the time from application to listing, while keeping counterfeit goods out of its ecosystem. This protects both customers and legitimate sellers.

Target

Target safeguards its quality standards by verifying supplier credentials and tracking product origins. VDCs provide data-backed proof that meets consumer expectations for safety and integrity.

Tesla

Tesla is testing VDC-based tracking for raw materials, ensuring that minerals like cobalt meet strict sustainability and human rights criteria before they enter the supply chain.

Across industries, VDC adoption is building supply chains that are faster, more resilient, and more transparent.

The Business Case for VDC Adoption in Supply Chain

Companies are embracing VDCs for three main reasons:

Speed: Supplier verification shrinks from months to days, accelerating launches.
Fraud prevention: Cryptographic verification blocks fake credentials and counterfeit products.
Transparency: Real-time, verifiable records meet regulator and consumer demands for proof.

In competitive markets, proving your supply chain story isn’t just compliance, it’s a differentiator.

What’s Next for Supply Chain Credentials

Emerging trends promise to make VDCs even more powerful:

Blockchain integration for immutable supply chain records.
IoT tracking to connect physical goods to digital credentials in real time.
AI-driven verification for smarter fraud detection.

With regulators moving toward requiring verifiable transparency, adoption will soon shift from competitive advantage to a baseline requirement.

The Bottom Line

Verifiable digital credentials are no longer futuristic; they’re here and already transforming global commerce. From Walmart’s operational gains to Tesla’s sustainability tracking, the results show that verification delays can be eliminated, counterfeits shut out, and ethical sourcing backed by proof.

If your organization is ready to explore VDCs for supplier verification, product authentication, and ethical sourcing, SpruceID can help design and deploy systems that meet global standards and build consumer trust.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Kin AI

The Kinside Scoop 👀 #11

We’ve been heads down… here’s what’s up

Hey folks 👋

We know. It’s been a while.

The last time you heard from us in a Scoop, it was June 23rd… summer was just getting started, the days were long, and your inbox was quieter.

But while we’ve been quiet here, Kin hasn’t been sleeping. Behind the scenes, we’ve been tinkering, fixing, polishing — and yes, plotting a few surprises.

So. Let’s catch you up.

What’s new with Kin 🚀

We’ve got tabs on for you 📖

We’ve replaced our old left-hand menu with a snappy three-tab navigation system, so you can find your way around the app more easily.

As an added bonus, navigating to the ‘You’ tab automatically brings up a Knowledge Map of the most recent memories your Kin has learnt about you.

All still private - just faster to use.

Field work complete 💬

Android users - a few of you have noticed Kin’s input field finding its way to odd places. It should stay put where it’s meant to now!

Stability boosts all around 😌

We’ve also cleaned up a whole slew of bugs in general. You might not notice the changes, but your Kin experience will feel calmer - just like it should.

Under the hood 🛠

Since the last Kinside Scoop, we’ve been putting serious hours into:

Memory enhancements - Yes, work’s still going on to make Kin’s Memory one of the most accurate systems around today. We’re getting closer: stay tuned.

Multi-language support - starting with the languages requested from our questionnaire, we’ve been laying the groundwork for your Kin to become a polyglot.

Voice mode stabilisation - we know a lot of people are still experiencing issues when talking in real-time to Kin. We think we’ve nailed down the beginning of a solution - keep your phones at the ready.

A secret project… 🤫

We can’t say much yet, but there’s something new in the works - and it’s coming soon.

It’s unlike anything we’ve done before, but we think it makes Kin even better at being the part of your support system that lives in your pocket.

As soon as we can share more, we will. We’re excited.

Come chat with us 🔊

The official Kin Discord is still the best place to talk to the Kin development team (as well as other users) about anything AI.

We regularly run three casual weekly calls, and you’re invited:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

Our current reads 📚

Tool - OpenAI released gpt-oss
READ - OpenAI

Article - OpenAI also releases GPT-5
READ - Reuters

Article - Google experiments with AI Web Guide summary mode
READ - Techcrunch

Tool - Mistral AI gives Le Chat (often hailed as one of the most private LLMs around) voice recognition and deep research tools
READ - AI News

This week’s Super Prompt 🤖

“How can I be more consistent?”

If you have Kin installed and up to date, you can tap the link below (on mobile!) to immediately jump into discussing how you personally can work to be more consistent, using habit-stacking and implementation intentions.

As a reminder, you can do this on both iOS and Android.

Open prompt in Kin

You’re still here 🧩

Thank you for that. We’ve been making a lot of changes to Kin based on feedback recently, and we appreciate all of you helping to guide us to making the best tool we can.

Kin is yours as much as it is ours, if not even more so.

So please - reply to this email, chat in our Discord, or even just shake the app to reach out to us.

Your voice is as valuable as ever, both within Kin and outside of it.

With love,

The KIN Team


FastID

Fastest Sites Run on Fastly

Make your site 25%+ faster with Fastly’s programmable edge. See why the fastest media sites — from Vox.com to Business Insider — run on Fastly.

Sunday, 10. August 2025

Recognito Vision

Everything You Need to Know About Facial Recognition Search in 2025

Everything You Need to Know About Facial Recognition Search in 2025

Facial recognition search has quickly shifted from science fiction to everyday reality. Whether it’s tracking down an online impostor, securing access to a building, or reconnecting with an old classmate, the technology is becoming a go-to tool for both professionals and curious individuals. The magic lies in matching a human face to an image in a database or on the internet. The process is fast, accurate, and surprisingly accessible to anyone who knows where to look.


Understanding Facial Recognition Search and Its Process

Facial recognition search works by analyzing a photo or video to find and match an individual’s face in a database or on the internet. Unlike traditional image search, which focuses on colors, shapes, or objects, this method analyzes unique facial features. Think of it as the digital equivalent of recognizing a friend in a crowd, only faster, and without the need to squint.

The process involves three main steps:

Detection – AI locates a human face in an image or video frame.
Mapping – The software pinpoints facial landmarks such as the eyes, nose, and jawline.
Matching – These mapped points are then compared with saved facial data to identify a possible match.

Facial recognition online services often combine large-scale public image databases with advanced AI. Tools like a facial recognition finder make it possible to track down a person from a single uploaded photo. Below is a simplified workflow example:

Step | Action | Technology Used
1 | Detect the face in the photo | AI + Computer Vision
2 | Map facial landmarks | Face recognition SDK
3 | Compare with database | Neural networks
4 | Return match results | Search engine integration
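The matching step of the workflow above can be sketched as a nearest-neighbor search over face embeddings by cosine similarity. This is a hedged illustration, not any vendor's SDK: the embeddings below are random stand-ins for the vectors a real face recognition SDK would produce after the detection and mapping stages, and the 0.5 threshold is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in 128-dimensional embeddings for three enrolled faces.
db = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, ~0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query, database, threshold=0.5):
    # Score the query against every enrolled embedding and keep the best.
    scores = {name: cosine(query, emb) for name, emb in database.items()}
    name = max(scores, key=scores.get)
    return (name, scores[name]) if scores[name] >= threshold else (None, scores[name])

# A query embedding close to alice's (same face, new photo) should match alice.
query = db["alice"] + rng.normal(scale=0.05, size=128)
name, score = best_match(query, db)
assert name == "alice"
```

Real systems add indexing structures (e.g. approximate nearest-neighbor search) so the comparison scales to millions of enrolled faces.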


Key Technologies Behind Facial Recognition Search

The engine that drives facial recognition search is built from a combination of AI, deep learning, and specialized software. At the core, a face recognition SDK allows developers to integrate recognition capabilities into apps or websites.

But accuracy alone isn’t enough. To ensure the search result is from a real, live person and not a printed photo or screen replay, security layers like a face liveness detection SDK or liveness detection SDK are used. These tools can detect blinking, slight movements, and even texture differences in skin to confirm authenticity.
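One of the liveness cues mentioned above, blinking, is often checked with the eye aspect ratio (EAR) heuristic: the ratio of vertical to horizontal eye-landmark distances dips sharply when the eye closes. The landmarks below are hand-made points, not the output of any particular liveness detection SDK, and the 0.2 threshold is a common rule of thumb, not a product parameter.

```python
import math

def ear(eye):
    # eye = six (x, y) landmarks p1..p6 around one eye;
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    p1, p2, p3, p4, p5, p6 = eye
    d = math.dist
    return (d(p2, p6) + d(p3, p5)) / (2 * d(p1, p4))

def blinked(ear_series, threshold=0.2):
    # A live subject's EAR dips below the threshold during a blink;
    # a printed photo or static replay never dips.
    return any(e < threshold for e in ear_series)

open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

frames = [ear(open_eye), ear(closed_eye), ear(open_eye)]
assert blinked(frames)                    # dip across frames suggests a blink
assert not blinked([ear(open_eye)] * 3)   # static input shows no dip
```

Production liveness checks combine several such signals (micro-movement, skin texture, screen-replay artifacts) rather than relying on any single heuristic.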

Here’s a quick comparison between traditional image search and facial recognition search:

Feature | Image Search | Facial Recognition Search
Basis | Color and object patterns | Unique facial features
Accuracy | Lower for human faces | High for human faces
Real-time Use | Limited | Yes
Security | None | Liveness detection available


Popular Uses of Facial Recognition Search


1. Security and Law Enforcement

Security agencies use internet facial recognition search tools to identify suspects, missing persons, or unauthorized access attempts. A facial recognition lookup can pull records from vast criminal and public databases within seconds.

2. Social Media and Online Networking

Ever spotted someone in a photo but couldn’t recall their name? Facial recognition online services help identify people from public social media images. A good facial recognition finder can track old friends or even verify profiles.

3. Business Applications

Banks and e-commerce platforms now use a face liveness detection SDK to confirm the identity of customers during transactions. This prevents fraud and speeds up onboarding for new accounts.

Free and Paid Facial Recognition Search Options

Many are tempted to try a free facial recognition search for quick results. While free tools can be useful for casual searches, they often come with limited accuracy and smaller databases. Paid services usually provide:

Larger and more up-to-date databases
Stronger privacy protections
Faster search times
Integration options with a face recognition SDK

Free vs Paid Tool Comparison

Feature | Free Tools | Paid Tools
Accuracy | Moderate | High
Privacy | Low to Medium | High
Database Size | Small to Medium | Large
Support | Limited | Full customer support

External resources worth checking out:

PimEyes – Facial recognition search engine
Clearview AI – Law enforcement facial recognition
FindClone – Social media face matching


How to Choose the Right Facial Recognition Finder

Selecting the best tool requires weighing its accuracy, privacy safeguards, speed, and cost. Here’s what to look for:

Database size: Larger databases mean better match chances.
Privacy policies: Ensure that any images you upload are not kept permanently.
Liveness detection: Prevents matches with fake images.
Integration: For businesses, make sure the tool is compatible with your face recognition SDK.

Example: A marketing agency verifying influencer identities could choose a service with fast searches, a liveness detection SDK, and an API for automation.

Ethical and Privacy Concerns in Facial Recognition Search

Although facial recognition technology is convenient, it brings up significant ethical concerns. Without proper safeguards, it could lead to mass surveillance, identity theft, or profiling.

To protect user privacy:

Limit who can access the technology
Always get consent before searching
Store data securely with encryption

For deeper insights, visit:

NIST Biometrics Standards – U.S. standards for biometric systems

 

The Future of Facial Recognition Search

Through 2025 and beyond, expect facial recognition search to become more accurate and even faster. Integration with other biometrics like fingerprints and voice recognition will make identity verification nearly seamless. AI will also play a bigger role in reducing false positives and increasing security.

 

Conclusion

Facial recognition search has grown from a niche innovation to a practical everyday tool for security, networking, and business. The key to using it effectively lies in choosing a reliable service, understanding privacy risks, and staying updated on the latest advancements. With responsible use, this technology can be a powerful ally, and Recognito is here to keep that future secure. You can explore the Recognito GitHub for more resources and tools.

Friday, 08. August 2025

iComply Investor Services Inc.

iComply Fall Release: Defending Against AI Threats to Biometrics and Data Sovereignty

The Fall 2025 iComply release takes aim at the new AI fraud threat with on-device randomized liveness and biometric checks, protecting both identity verification integrity and national data sovereignty.

Artificial intelligence is advancing at breakneck speed, and biometric authentication with liveness detection—once considered the gold standard in digital identity verification—is now under siege. Deepfakes, synthetic media, and AI-generated spoofing tools are more accessible and convincing than ever. Traditional systems relying on cloud-based analysis or static liveness checks are dangerously outdated.

Traditional facial recognition systems, especially those relying solely on cloud-based analysis or passive liveness checks, are effectively obsolete, despite their prevalence in fintech, DeFi, and digital banking worldwide. Threat actors no longer need sophisticated tools to bypass them: a free, anonymous email account, widely available AI video-generation software, and a still image or two from any social media account are now enough to fool most identity verification platforms, because those platforms do not process the data locally.

The Threat

AI-powered fraud now makes it possible to bypass many KYC onboarding processes with nothing more than a still image, a free email account, and widely available deepfake software.

Cloud-based verification platforms introduce additional risk—sending sensitive biometric data offshore, often to vendors with questionable ownership, opaque data handling, or ties to jurisdictions that undermine privacy and sovereignty.

Fintechs and DeFi companies face heightened exposure, especially when relying on providers in the UK, US, Canada, and EU that use offshore subprocessors or outdated verification models.

Most systems labeled as “liveness detection” perform only surface-level checks before sending the image to the cloud for advanced processing. This forces them to rely on outdated 2D image processing often provided by questionable offshore data processors, making them easy targets for presentation attacks using photos, deepfake videos, or even AI-generated avatars. Biometric systems that were once built to stop fraud are now frequently bypassed by it.

“AI-driven fraud is exploding across legal, real estate, and financial services. This is a technology arms race. The only way to win is to meet AI with better AI, backed by privacy-first architecture. With our edge-computing biometrics, your users’ most sensitive data never leaves their device, and fraud attempts never reach your systems,” said Matthew Unger, CEO at iComply.

The iComply Platform: Built for the Next Era of Threats

We’ve spent the last five years engineering and refining a Live Face Match biometric authentication system that can perform any type of check directly on the user’s device. This not only addresses these modern threats; it is a game changer for personal data privacy and national data sovereignty. Our latest release of the iComply platform delivers randomized, concurrent liveness and biometric testing, performed entirely on-device via our proprietary edge computing architecture to detect and neutralize generative AI spoofing before it can infiltrate your onboarding process.

 

Fall 2025 Release Highlights

1. Advanced Multi-Expression Live Face Match Testing: Enhanced performance and concurrent processing of both the biometric face match and liveness detection algorithms. Our platform doesn’t just check for motion and a face match; it challenges users to perform randomized facial expressions and micro-movements in 3D, making it nearly impossible for pre-recorded or deepfaked media to replicate. Each expression is evaluated independently alongside biometric confidence scores and device metadata to build your confidence threshold, which can be customized based on your risk tolerance.

Real-time 3D facial recognition combined with randomized micro-expression prompts.

Concurrent biometric and liveness analysis makes pre-recorded or AI-generated forgeries virtually impossible to pass.

Independent scoring for each challenge, combined with device metadata, allows for fully configurable pass/fail thresholds.

2. Edge Computing for Real-Time AI Fraud Detection: Unlike API-driven KYC or identity verification systems, our identity and biometric checks are performed directly on the user’s device through edge computing. Edge computing ensures your customer data is always processed locally, in the country where the user is at that moment, and validated before you touch it. This reduces exposure, accelerates processing, and ensures biometric data never leaves the device, drastically improving both privacy and security. With this release, Pro and Enterprise accounts can now leverage enhanced configurability and data localization controls for emerging regulations covering data privacy, security, and sovereignty.

All biometric processing happens locally, on the user’s device. This ensures that data never leaves the country of origin. Zero data leakage. Zero third-party processing.

No reliance on offshore cloud processors means significantly reduced attack surface, zero transmission risk, and compliance with emerging data sovereignty laws.

Enhanced configurability for Pro and Enterprise clients to meet national and sector-specific privacy mandates.

3. Enhanced Threshold Controls for Precision Matching: Manage biometric confidence score thresholds, adjust pass criteria, and set the number of facial expressions that must be completed successfully.

Dynamically set biometric confidence thresholds (e.g., 70%, 85%, 95%) based on your risk profile.

Adjust requirements based on the risk and use case of the biometric verification event.
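A minimal sketch of how configurable pass/fail thresholds might combine per-challenge scores. The function shape, field names, and all-must-pass rule are assumptions for illustration, not iComply's actual scoring model:

```python
# Illustrative pass/fail decision: each expression challenge is scored
# independently and compared against a configurable confidence threshold.
def verify_session(challenge_scores: list[float],
                   face_match_score: float,
                   threshold: float = 0.85,
                   min_challenges: int = 3) -> bool:
    """Pass only if enough challenges were attempted and every score,
    including the biometric face match, clears the threshold."""
    if len(challenge_scores) < min_challenges:
        return False
    return face_match_score >= threshold and all(
        s >= threshold for s in challenge_scores)

# A higher-risk profile simply raises the threshold (e.g. 0.95),
# with no change to the verification code itself.
ok = verify_session([0.91, 0.88, 0.94], face_match_score=0.90, threshold=0.85)
```

Keeping the threshold as a parameter is what lets the same pipeline serve both low-risk onboarding (a lower bar) and high-risk transactions (a stricter one).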

 

AI Isn’t Going Away, But Neither Are We
Organizations can no longer rely on “good enough” systems from five years ago to stop the threats of today. AI-generated fraud is evolving faster than most compliance teams can adapt, and without advanced, on-device defences, organizations risk onboarding bad actors, breaching data protection laws, and undermining user trust. By engaging iComply as their AML compliance technology partner, our clients reduce cost, manual operations, and fragmented systems while gaining clarity, consistency, and confidence in an AML compliance program built not just for today’s threats but for the coming wave of generative AI and offshore data processing risks.

About iComply
iComply is a global leader in modular compliance solutions for KYB, KYC, KYT, and AML. Founded in 2017 and headquartered in Vancouver, Canada, iComply helps regulated and emerging financial services providers operate with trust, accountability, security, and privacy. Our proprietary edge computing technology processes and encrypts sensitive identity data directly on the user’s device, enabling compliance without compromising privacy or data sovereignty. The iComply platform consolidates up to eight legacy vendors into one secure, configurable system—reducing compliance costs by up to 90%, improving customer satisfaction by over 25%, and ensuring readiness for evolving regulations in over 195 countries and 142 languages. Learn more at www.icomplyis.com.


iComply and CE Corner Launch Free CE-Accredited Training on AI Fraud

AI fraud is growing fast. Most legal and financial service teams aren’t prepared. iComply and CE Corner have launched a free CE-accredited course to help professionals spot and stop the latest scams.

August 2025, Vancouver, Canada: iComply, a global leader in digital compliance technology, has announced a new strategic partnership with CE Corner, Canada’s premier continuing education provider for legal, financial, and insurance professionals. Together, the two firms are launching the first in a series of accredited training programs designed to equip professionals with the awareness and tools needed to combat AI-driven fraud, cryptocurrency abuse, and rising AML compliance threats.

The inaugural course, titled “Protecting Clients from Emerging Fraud,” is now live and available free of charge. It provides CE credit in multiple jurisdictions and is tailored for legal, real estate, wealth management, and financial services professionals.

“AI-driven fraud is exploding among legal, real estate, and financial services providers,” said Matthew Unger, CEO of iComply. “This is a technology arms race that demands active engagement from every level of an organization.”

Technology is advancing faster than compliance teams can train. Sales teams, support reps, and client-facing staff are now the frontline defence against fraud. Yet most are ill-equipped to identify sophisticated attacks that use deepfakes, AI-generated documents, or blockchain obfuscation techniques. This new partnership aims to close that gap, giving frontline teams better tools and training to protect themselves, their clients, and our financial markets from AI-powered fraud.

Course Overview:

In just 1 hour, participants will learn:

How emerging fraud schemes are evolving through AI, spoofing, and social engineering

What frontline staff must know to detect threats before losses occur

Practical tactics for identifying red flags and protecting clients

Why CE training is no longer optional in a rapidly digitizing world

Access the course now at CE Corner.

iComply delivers end-to-end KYB, KYC, KYT, and AML compliance solutions for financial institutions, legal service providers, and fintech platforms worldwide. Built with a zero-trust security model and edge-computing architecture, iComply helps clients reduce compliance costs by up to 90%, while meeting or exceeding global standards such as SOC2, ISO27001, GDPR, and PIPEDA.

CE Corner is a trusted education platform for Canadian professionals across law, accounting, insurance, and financial services. It offers accredited, high-quality training programs to ensure professionals stay compliant, competent, and competitive in fast-changing regulatory environments.

Looking for more than awareness?

iComply also offers advanced AML compliance training programs for clients and partners. These 10-hour programs blend self-directed learning and live instruction to deliver actionable education that maps to your regulatory obligations.

Contact our team today to explore training options and technology solutions tailored to your business.


Dock

Know Your Agent: Solving Identity for AI Agents [Video and Takeaways]

The rise of AI agents is one of the most significant shifts unfolding across the internet today. From booking travel to managing work tasks, agents are quickly becoming powerful tools that act on behalf of users. In fact, by next year, there may be more non-human agents online than human

The rise of AI agents is one of the most significant shifts unfolding across the internet today. From booking travel to managing work tasks, agents are quickly becoming powerful tools that act on behalf of users. In fact, by next year, there may be more non-human agents online than human users. The promise is clear: automate the drudge work and reclaim your time.

But as the excitement grows, a critical piece of the conversation is being overlooked—identity. How do we know which agent is acting? Who authorized it? What is it allowed to do? And how do we prevent misuse when these agents gain access to sensitive systems or personal data?

In our latest live session, Dock Labs CEO Nick Lambert sat down with Peter Horadan, CEO of Vouched, to explore these questions in depth. Peter not only shared his perspective on the growing risks but also gave a live demo of a new identity and delegation framework that makes it possible to verify and control what agents can do on our behalf.

Here are the main takeaways:


uquodo

The Future of Digital Identity Verification: A Deep Dive into Passwordless Authentication

The post The Future of Digital Identity Verification: A Deep Dive into Passwordless Authentication appeared first on uqudo.

Aergo

[Aergo Talks #20] DeFAI, airdrop, and more

Q1: What is DeFAI? A term describing projects that merge Decentralized Finance (DeFi) with Artificial Intelligence (AI). Currently used to label this emerging segment in Web3. Similar to “DeSci” (Decentralized Science). Naming: Similar to “DeSci” (Decentralized Science). Q2: When will the HPP website and mainnet go live? Mainnet Launch: Originally planned for Q4, now targeted for
Q1: What is DeFAI?
A term describing projects that merge Decentralized Finance (DeFi) with Artificial Intelligence (AI). Currently used to label this emerging segment in Web3. Naming: similar to “DeSci” (Decentralized Science).

Q2: When will the HPP website and mainnet go live?
Mainnet Launch: Originally planned for Q4, now targeted for Q3 2025 (ahead of schedule). Progress is on track; nearing the end of the planned window.
Website Launch: Advanced stage — wireframes complete, style guide implemented, interactive elements reviewed. Branding draws from CRT distortion and terminal-style visuals, reflecting the “early era” feel of AI. Likely to launch before the mainnet.

Note: The hpp.io site is already live with content for testnets and the private mainnet.

Q3: When will AIP-21 rewards be distributed?
Airdrop Start: After mainnet launch.
Vesting: Rewards have a long vesting period.
Expected Timing: Likely this quarter, soon after mainnet goes live.

Q4: Are the AIP-21 swap ratios still accurate?
Yes. Ratios were set at the time of the AIP-21 vote based on agreed project valuations. These remain locked and will not change.

Q5: Do holders need to take action for the swap?
If on Exchanges: Swaps may be automatic if the exchange supports it.
If Off-Exchange: May require using a bridge or manual process.

General Guidance: Follow official channels for detailed instructions.

Expect a generous grace period for swapping. Token migrations are becoming more common — always track official updates for your holdings.

Q6: How is HPP an upgrade to the existing Layer 1?
AI Integration: Aergo L1 (current mainnet) will gain the AI technologies being developed for HPP (e.g., off-chain computation, verifiable AI interaction).
Enterprise Use: Aergo remains the enterprise/private chain option, with forks for specific deployments (e.g., public sector).
Dual-Layer Advantage: HPP Mainnet enables easier integration with exchanges, AI-native dApps, and multiple blockchains without L1 modifications.

Q7: Community concern about low trading volume and price movement only on news
Price changes are not always tied to news; many announcements have no lasting effect, and derivatives can even push prices down. Volume is important as a sign of utility, but isolated charts can mislead; always compare them with broader market trends.

Q8: Will HPP mainnet launch in late Q3 or early Q4?
Confirmed: Target remains end of Q3 2025.
Airdrop: Will start after mainnet launch, following the vesting schedule.

[Aergo Talks #20] DeFAI, airdrop, and more was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 07. August 2025

Indicio

Indicio, SmartSearch and Socure reach new deals in AML, KYC and KYB

Biometric Update The post Indicio, SmartSearch and Socure reach new deals in AML, KYC and KYB appeared first on Indicio.

Dock

The EU’s New Business ID Wallet Could Save SMEs €37B

The European Commission has announced plans for a new EU Business Wallet, a secure digital solution designed for companies to manage and share business credentials across the EU. If the EUDI Wallet is the European Union’s answer to digital ID for individuals, the

The European Commission has announced plans for a new EU Business Wallet, a secure digital solution designed for companies to manage and share business credentials across the EU.

If the EUDI Wallet is the European Union’s answer to digital ID for individuals, the EU Business Wallet is its counterpart for organizations. 

It will enable businesses to store digital credentials, licenses, certificates, and proof of registration — all in a secure, interoperable format recognized across borders.

This initiative is part of the Competitiveness Compass, and it’s being positioned as a major step forward in reducing administrative burden and boosting digital efficiency:


HYPR

8 Essential Questions for Your Workforce Identity Verification (IDV) Vendor

Choosing the right identity verification (IDV) partner is one of the most critical security decisions you'll make. As organizations fortify their defenses, it’s clear that verifying the identity of your workforce requires a fundamentally different approach than verifying customers. The stakes are simply higher. For customer verification, the primary goal is often a smooth, low-friction

Choosing the right identity verification (IDV) partner is one of the most critical security decisions you'll make. As organizations fortify their defenses, it’s clear that verifying the identity of your workforce requires a fundamentally different approach than verifying customers.

The stakes are simply higher. For customer verification, the primary goal is often a smooth, low-friction sign-up process. For your workforce, the goal is ironclad security to prevent a breach. The reality is that many IDV solutions on the market are repurposed customer onboarding tools, not purpose-built platforms designed to stop a skilled attacker from impersonating an employee.

This guide is designed to help you look beyond the surface-level features and assess whether a vendor can truly meet the security demands of a modern enterprise. Use these questions to find a genuine partner and a solution that is truly workforce-grade.

Core Capabilities and Security

The foundation of any IDV solution is its ability to accurately verify an identity while defending against advanced, modern attacks.

1. How do you protect against deepfakes and other advanced impersonation attacks?

To protect against modern threats, your first question should focus on a vendor's strategy for tackling sophisticated fraud. Threat actors now use AI to create deepfakes for both presentation attacks (showing a fake image to a camera) and injection attacks (bypassing the camera to feed a fake video stream directly into the system).

A workforce-grade solution should deliver:

Advanced Liveness Detection: The best solutions employ sophisticated liveness checks to distinguish between a live person and a spoof like a mask or recording.
Injection Attack Prevention: A vendor should offer technology that prevents attackers from bypassing on-device cameras, making it nearly impossible to inject a deepfake into the verification stream.

2. What verification methods do you offer beyond a simple document check?

While document verification is essential, a resilient IDV platform must offer a wide array of options to create a multi-layered defense and ensure all employees can be verified successfully.

A top-tier vendor should provide a flexible framework that includes:

Geolocation and IP Intelligence: A modern IDV solution should analyze passive risk signals like the user's IP address and device location.
Biometric Matching: Comparing a user's live selfie to the portrait on their government ID is a necessary feature for modern verification.
Workforce-Specific Workflows: The most innovative solutions provide methods uniquely suited for an enterprise environment. One such powerful, context-aware method is manager attestation, where a supervisor can digitally vouch for an employee's identity through secure chat or video call.

Deployment and Integration

A solution's value is directly tied to how well it integrates with your existing technology stack without causing major disruptions.

3. How does your solution integrate with our key workforce workflows and technology stack?

To avoid creating information silos and clunky workarounds, an IDV solution's value multiplies when it is deeply embedded into the systems where identity is most critical. For maximum efficiency and security, a vendor should offer:

IAM and IdP Integration: Out-of-the-box connectors for major Identity and Access Management (IAM) and Identity Provider (IdP) platforms like Okta, Microsoft Azure AD, and Ping are crucial for managing employee access and credential resets.
Applicant Tracking Systems (ATS): To combat candidate fraud early in the hiring process, integrations with ATS platforms to verify an applicant's identity are important, ensuring the person you interview is the person you hire.
Help Desk and Ticketing Systems: The ability to integrate into your existing help desk or ticketing platform is essential for securely handling high-risk workflows like password and MFA resets.
SIEM Integration: A vendor should be able to seamlessly integrate with your SIEM systems. This allows your security team to feed identity event logs into a centralized platform for auditing, threat analysis, and compliance monitoring.
Standards-Based Integration: Look for solutions built on open standards like OIDC and SAML, as this ensures broad compatibility and future-proofs your investment.

4. What is the deployment process like, and what resources are required from my team?

The deployment model should align with your organization's infrastructure and technical capabilities. A cloud-native platform offers superior scalability and easier integration. For organizations looking to address urgent threats, it's best to prioritize vendors that offer ready-to-use solutions rather than a lengthy, resource-intensive implementation project.

The User Experience

Security measures should empower productivity, not hinder it. The employee experience is paramount for adoption and success.

5. What is the end-user journey like for initial verification and future re-verifications?

The process should be fast, intuitive, and require minimal effort from the employee. The best user experience is achieved through:

App-less Workflows: Forcing users to download a separate application creates unnecessary friction. A vendor should offer app-less web experiences that allow users to complete verification on any device with a browser.
Seamless Re-verification: It is critical that a solution is designed to handle re-verification and the re-binding of an identity to a new device. Products that treat every verification as a one-time event are poorly suited for managing the employee lifecycle, where device changes are common.

6. Do you support flexible workflows for different risk levels and use cases, like help desk support?

A one-size-fits-all approach to identity verification is inefficient. A modern IDV platform must allow for fully customizable and configurable workflows that can be tailored to specific use cases and risk levels. For example, a vendor should be able to integrate with your call center operations. This allows help desk agents to securely trigger a verification flow before performing high-risk actions like a password reset, which is a common vector for attack.

Security, Compliance, and Data Privacy

Handling sensitive employee data requires the highest, non-negotiable standards of security and certified compliance.

7. What are your security certifications and how do you ensure compliance with data privacy regulations?

A reputable vendor must demonstrate its commitment to security through widely recognized, independent certifications. You should require proof of:

Security and Trust Certifications: SOC 2, ISO, and FIDO2 certifications are essential benchmarks.
Regulatory Compliance: The solution must support compliance with data privacy laws like GDPR and CCPA, as well as regulations like HIPAA.
Identity Standards Adherence: The platform should be compliant with identity standards from NIST, ideally up to Identity Assurance Level 3 (IAL3) for the highest-risk environments.

8. How do you store and protect our employees' personally identifiable information (PII)?

A vendor's data handling policies are a direct reflection of its security posture. The ideal approach is one that minimizes your organization's data exposure. You should look for a vendor that:

Employs Strong Encryption: All data, both at rest and in transit, must be encrypted using strong standards like AES-256.
Minimizes Data Retention: The best practice is to hold Personally Identifiable Information (PII) for the shortest time necessary. An attestation-only model, where the raw data is destroyed after a short period, significantly reduces your risk and is superior to models that store PII indefinitely.

Finding Your Workforce Identity Verification Partner

Choosing an IDV vendor is about more than buying a tool; it's about establishing a partnership to navigate evolving threats. By asking these questions, you can identify a provider who understands the unique challenges of workforce security and is committed to your long-term success.

At HYPR, we built our HYPR Affirm solution on these foundational principles. We believe that true workforce security demands purpose-built technology that is both highly secure and easy to use. It’s why leading global organizations, like two of the four largest U.S. banks, trust HYPR to protect their employees and data.


Ocean Protocol

DF153 Completes and DF154 Launches

Predictoor DF153 rewards available. DF154 runs August 7th — August 14th, 2025 1. Overview Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor. Data Farming Round 153 (DF153) has completed. DF154 is live today, August 7th. It concludes on August 14th. For this DF round, Predi
Predictoor DF153 rewards available. DF154 runs August 7th — August 14th, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor.

Data Farming Round 153 (DF153) has completed.

DF154 is live today, August 7th. It concludes on August 14th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF154 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF154

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and ASI Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF153 Completes and DF154 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

The Trust Equation: Why Customers Stay—or Leave

Explore how modern CIAM builds seamless, secure digital experiences that drive customer trust, loyalty, and growth—before they click away.

Every digital interaction is a chance to build trust or break it. Today’s customers are more privacy-conscious, security-aware, and experience-driven than ever, and they’ve never had more options. One broken login flow, one unnecessary verification step, one untrustworthy interface, and they’re gone—often for good.

 

So, why do some customers remain loyal to certain brands, while others abandon their carts, apps, or accounts with a single frustrating click?

 

The answer is trust. And increasingly, trust is powered by modern identity.


Aergo

How HPP Could Transform Public Blockchain Use

Upgrading Legacy Blockchain Systems with AI-Native Infrastructure Case Study: NHIS and Aergo. A Blueprint for Public Sector Blockchain Adoption Background The National Health Insurance Service (NHIS) of Korea pioneered blockchain adoption in the public sector by launching a high-throughput Timestamping Authority (TSA) system built on the Aergo Enterprise platform. This system verifies and recor
Upgrading Legacy Blockchain Systems with AI-Native Infrastructure

Case Study: NHIS and Aergo. A Blueprint for Public Sector Blockchain Adoption

Background

The National Health Insurance Service (NHIS) of Korea pioneered blockchain adoption in the public sector by launching a high-throughput Timestamping Authority (TSA) system built on the Aergo Enterprise platform. This system verifies and records the issuance of key documents, including insurance contracts, care applications, and official certifications, with over 400,000 transactions processed daily. The system is projected to handle over 1.8 million transactions per day once the upcoming services are fully deployed.

Key Features of the Aergo-Based TSA

Immutable Timestamping: Each document issuance is timestamped and anchored to the Aergo blockchain.
System Integration: Deployed with zero downtime, fully integrated into NHIS’s legacy systems.
Environmental Efficiency: Reduced reliance on paper documentation and physical verification processes.
Security & Auditability: Enhanced traceability and document verification for public trust and regulatory compliance.
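The core timestamping idea can be illustrated with a minimal sketch: hash the document and pair the digest with an issuance timestamp. The record format here is an assumption for illustration; Aergo's actual on-chain TSA anchoring is more involved:

```python
import hashlib
from datetime import datetime, timezone

def make_timestamp_record(document: bytes, issued_at: datetime) -> dict:
    """Digest the document and pair it with an issuance timestamp;
    anchoring this record on-chain makes the issuance tamper-evident."""
    digest = hashlib.sha256(document).hexdigest()
    return {"sha256": digest, "issued_at": issued_at.isoformat()}

def verify_document(document: bytes, record: dict) -> bool:
    """Re-hash and compare: any alteration to the document changes the digest."""
    return hashlib.sha256(document).hexdigest() == record["sha256"]

record = make_timestamp_record(b"insurance contract #123",
                               datetime(2025, 8, 7, tzinfo=timezone.utc))
```

Only the fixed-size digest needs to be anchored on-chain, which is what lets a system like this scale to hundreds of thousands of document issuances per day without storing document contents on the ledger.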

This Aergo-powered system is widely recognized as one of the most successful enterprise blockchain deployments in the public sector.

How HPP Could Evolve the NHIS TSA: From Timestamping to Intelligence

Although HPP is not currently implemented by NHIS, future upgrades of similar public systems could benefit significantly from integration with HPP’s AI-native infrastructure. The House Party Protocol is designed to enhance legacy blockchain systems by combining verifiable AI execution, decentralized governance, and modular scalability.

Here’s how HPP could enhance a use case like NHIS’s TSA system:

1. Real-Time Fraud Detection Using AI Agents

Current Limitation
Fraud detection in the current TSA is largely external or manual, relying on human audits or external tools to identify document forgery, duplicate claims, or anomalous patterns.

HPP Advantage
HPP integrates a Fraud Detection System (FDS) that uses intelligent agents to flag suspicious behaviors in real-time. For example:

Detecting attempts to submit forged care applications or duplicate insurance claims.
Flagging statistically abnormal combinations (e.g., elderly care requests submitted by unusually young applicants).

These agents run on ArenAI, HPP’s AI execution layer, and automatically initiate fraud reviews, reducing risk while accelerating operational trust.
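As a toy illustration of the kind of rule such an agent might apply (the rule, thresholds, and field names are hypothetical, not part of Aergo or HPP):

```python
def flag_claim(claim: dict) -> list[str]:
    """Return the reasons a claim should be routed to fraud review.
    Rules and thresholds are illustrative, not a real FDS policy."""
    reasons = []
    # Statistically abnormal combination: elderly-care request from a young applicant
    if claim.get("service") == "elderly_care" and claim.get("applicant_age", 0) < 40:
        reasons.append("age/service mismatch")
    # Duplicate claim: the same document digest has been seen before
    if claim.get("doc_digest") in claim.get("previously_seen_digests", set()):
        reasons.append("duplicate document")
    return reasons
```

A production FDS would learn such patterns rather than hard-code them; the point is that flagged claims trigger review automatically instead of waiting for a human audit.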

2. Document Intelligence Through Noösphere + SLM

Current Limitation
The Aergo TSA verifies the timestamp and issuance of documents, but not the content of documents. There’s no native understanding or validation of what is inside each form.

HPP Advantage
HPP’s Noösphere infrastructure powers off-chain SLM (Small Language Model) inference, enabling systems to:

Analyze document contents for consistency (e.g., checking for contradicting information across multiple submissions).
Classify and tag public documents automatically.
Feed results into smart contracts that enforce policy (e.g., deny requests that don’t meet minimum medical criteria).

This creates a hybrid system where off-chain AI logic is made on-chain verifiable through Proof-of-Inference, increasing transparency and auditability for automated decisions.

Final Thought

As public institutions pursue digital transformation, the NHIS case offers a proven foundation. However, the next generation of infrastructure will require more than timestamping. HPP demonstrates how AI-native Layer 2 blockchain architecture can turn public systems into intelligent, verifiable, and programmable digital services, moving beyond record-keeping to real-time decision-making and automation.

Note: The HPP enhancements described are exploratory and not affiliated with NHIS at the time of writing.

How HPP Could Transform Public Blockchain Use was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

Fastly's Resilience to HTTP/1.1 Desynchronization Attacks

Discover why Fastly's architecture protects against HTTP/1.1 desynchronization attacks, unlike other CDNs. Protect your applications with Fastly's secure platform.

Wednesday, 06. August 2025

HYPR

How to Prevent Helpdesk Social Engineering Attacks


Helpdesks are critical support hubs, but their central role makes them prime targets for sophisticated social engineering attacks. These attacks exploit human psychology, tricking helpdesk personnel into divulging sensitive information or compromising security, often by targeting credential resets. When attackers convince an agent to reset a legitimate user's password, they bypass security, gaining unauthorized access to sensitive systems and data. The devastating impact was demonstrated by the 2023 MGM attack, reportedly initiated via a helpdesk social engineering tactic, causing significant disruptions and financial losses. Understanding and preventing these threats is crucial for organizational strength.

Defining Helpdesk Social Engineering Attacks

Helpdesk social engineering attacks are sophisticated tactics where cybercriminals manipulate helpdesk personnel through deception. The core objective is unauthorized access, often via credential resets. Attackers impersonate legitimate users, perhaps an executive needing urgent access, using publicly available information to sound convincing. This circumvents technical defenses, allowing free movement within networks for data exfiltration, ransomware deployment, or further attacks. The 2023 MGM breach, costing over $100 million in reported damages, exemplifies the profound financial and reputational harm from such a successful helpdesk social engineering attack.

How Common Social Engineering Attacks Are Performed on Helpdesks

A typical helpdesk social engineering attack is a carefully orchestrated sequence:

1. Reconnaissance: Attackers gather employee details from public sources (social media, company websites, data breaches) to create a believable persona.
2. Impersonation: They contact the helpdesk, posing as a legitimate employee, often a high-authority figure or a distressed user, sometimes using caller ID spoofing or deepfake voice technology.
3. Exploiting Weak Verification: Attackers exploit flaws like knowledge-based authentication (KBA), finding answers through research or dark web data to bypass security questions.
4. Building Trust and Pressure: They use psychological tactics:
   Urgency: Creating immediate crises to rush the agent.
   Authority: Impersonating executives to imply repercussions for delays.
   Insider Knowledge: Using researched details to sound credible.
5. Credential Reset/Modification: Trust established, they convince the agent to reset a password or enroll a new MFA device.
6. Exploitation: With new credentials, they gain unauthorized access for data exfiltration, malware installation, or fraud.

These attacks are prevalent; reports from 2023 indicated that a significant percentage of organizations experienced credential compromises linked to social engineering, with an increasing shift to voice and video-based tactics.

Train your helpdesk staff to adopt a mindset of "verify, don't trust." This means questioning every request for credential changes or sensitive access, regardless of how urgent or authoritative the request seems. Always use established, out-of-band verification methods, such as calling the user back on a pre-registered, known phone number, rather than relying solely on information provided during the current interaction.

The Weakest Link: Flaws in Traditional Identity Verification

Traditional helpdesk identity verification methods often present critical vulnerabilities:

Reliance on Knowledge-Based Authentication (KBA): Easily compromised, as answers to security questions are often publicly available or found in data breaches.
Static Credentials (e.g., Passwords): Vulnerable to phishing and brute-force attacks; a compromised password grants persistent access.
Lack of Multi-Factor Verification Enforcement: Helpdesks may have weak processes allowing MFA bypass or re-enrollment without stringent identity proofing.
Human Error and Pressure: Agents, under pressure and manipulation (urgency, authority), may overlook red flags or deviate from protocols.
Inconsistent Procedures: Lack of standardized verification protocols allows attackers to "shop around" for a less vigilant agent.

The inherent limitations of static credentials, once compromised, give attackers sustained access, enabling extensive network exploration and damage before detection. 

Implementing Low-Friction Authentication

Low-friction authentication is crucial to combating helpdesk social engineering by making authentication seamless without compromising security. Complex, slow processes can inadvertently lead staff to bypass protocols or fall prey to quick-fix social engineering.

Passwordless authentication eliminates the primary target for phishing—passwords—and offers numerous benefits:

Enhanced Security: FIDO-based solutions use phishing-resistant public-key cryptography, making compromise significantly harder.
Superior Usability: Eliminates password memory burdens, frequent resets, and lockouts, providing a faster, intuitive login for users and reducing password-related helpdesk calls.
Reduced Attack Surface: No passwords to steal, crack, or breach, drastically shrinking potential attack vectors.
Cost Savings: Directly reduces helpdesk call volumes related to password issues, translating into significant operational savings.

Biometrics, for example, transforms login into a natural, quick action while providing a higher level of security assurance.
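The phishing resistance comes from challenge-response signatures with device-bound keys rather than shared secrets. A simplified sketch of that flow, using the Python cryptography library (real FIDO2/WebAuthn adds origin binding, attestation, and signature counters on top of this):

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the authenticator generates a key pair; the private key
# never leaves the device, and the server stores only the public key.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Login: the server issues a fresh random challenge...
challenge = os.urandom(32)
# ...and the device signs it. There is no password to phish or replay,
# and a signature over one challenge is useless for any other login.
response = device_key.sign(challenge)

# The server verifies the response against the stored public key.
try:
    server_stored_public_key.verify(response, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Because the secret never crosses the wire, an attacker who tricks a user or a helpdesk agent still captures nothing that can be reused.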

The Role of Generative AI in Helpdesk Social Engineering Attacks

Generative AI, including Large Language Models (LLMs) and deepfake technology, is rapidly enhancing the sophistication and scale of helpdesk social engineering attacks, making them harder to detect. For a deeper dive, read our blog on preventing generative AI attacks.

AI's role includes:

Advanced Pretexting: LLMs generate highly plausible, contextually aware scripts for calls, emails, or chats, mimicking corporate language and adapting tone for credibility.
Deepfake Voice Cloning: AI clones target voices from audio samples, enabling convincing "vishing" attacks where helpdesk agents believe they're speaking with the legitimate person. This was a key concern highlighted in HHS alerts.
Deepfake Video: While still evolving for real-time helpdesk use, deepfake video could enable visual impersonation during video calls, adding another layer of authenticity.
Automated and Scalable Attacks: AI automates reconnaissance, personalized message generation, and simultaneous social engineering attempts, allowing large-scale, targeted campaigns with less manual effort.
Adaptive Strategies: AI systems can learn and refine their deceptive approaches based on responses, increasing their agility and making them harder to defend against with static security measures.

As generative AI makes impersonation easier, organizations must move beyond knowledge-based authentication. Implement identity verification methods that are inherently resistant to AI-generated fakes, such as live liveness detection for biometrics or multi-factor verification that relies on device-bound cryptographic keys rather than shared secrets.

Real-World Examples of Helpdesk Social Engineering

The threat of helpdesk social engineering is not theoretical; it's a proven and ongoing attack vector. Here are some notable instances and warnings:

HHS Sector Alert

Helpdesk social engineering is a persistent threat. The Health Sector Cybersecurity Coordination Center (HC3) within the U.S. Department of Health & Human Services (HHS) has issued alerts detailing sophisticated tactics.

HC3 highlighted threat actors (e.g., "Scattered Spider") using advanced social engineering. These attackers call helpdesks, impersonating employees (often in financial roles), using sensitive, likely breached, information (e.g., last four SSN digits) to pass initial verification. They then claim a broken phone, persuading helpdesk staff to enroll a new, attacker-controlled MFA device. This grants access to corporate resources, exploited for payment fraud or ransomware. HHS specifically noted the potential for AI voice impersonation, making remote identity verification increasingly challenging.

How to Prevent the Helpdesk from Social Engineering

Preventing helpdesk social engineering requires a multi-faceted approach combining strong technology, comprehensive training, and robust policies.

Using Deterministic Controls to Stop Social Engineering Attacks

Stopping AI-fueled social engineering and deepfake attacks means adopting deterministic controls over probabilistic methods like passwords. Deterministic controls offer higher certainty about user identity, often involving multi-factor verification (MFV) that uses inherently secure and hard-to-spoof methods.

Recommended steps to harden the credential reset process:

1. Implement Phishing-Resistant MFA: Prioritize FIDO2-based authentication (e.g., hardware security keys, biometrics with device-bound keys), which uses public-key cryptography, making it resistant to phishing and man-in-the-middle attacks. This should be a baseline for sensitive access and helpdesk-initiated changes.
2. Introduce Dynamic Verification:
   Identity Proofing: Require strong identity proofing for account creation and high-risk operations like resets. This includes live liveness detection during video calls or leveraging trusted third-party services.
   Out-of-Band Verification: Always verify identity via a channel not controlled by the attacker, such as calling a pre-registered phone number or sending a code to a secure, verified email.
3. Limit Resets via Secure Channels & Enforce Stringent Escalation: Define strict protocols for resets. Require multi-layer approvals for high-risk requests and implement "cooling-off" periods for new device enrollments from unusual locations. Exceptions should involve supervisory review and additional robust identity proofing.
4. Emphasize Automation and Self-Service: Empower users with secure self-service password reset and account recovery using strong, phishing-resistant MFA. This reduces helpdesk burden and minimizes the attack surface.

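These steps can be reduced to a gate that refuses a reset until every deterministic check has passed. The names and checks below are hypothetical, a policy sketch rather than any product's API:

```python
from dataclasses import dataclass

@dataclass
class ResetRequest:
    user_id: str
    fido2_verified: bool = False          # phishing-resistant MFA completed
    out_of_band_confirmed: bool = False   # callback to a pre-registered number
    liveness_passed: bool = False         # live biometric identity proofing
    supervisor_approved: bool = False     # required only for escalations
    escalation: bool = False

def may_reset_credentials(req: ResetRequest) -> tuple[bool, list[str]]:
    """Allow a helpdesk credential reset only when all required
    deterministic checks have passed; returns (allowed, missing checks)."""
    missing = []
    if not req.fido2_verified:
        missing.append("phishing-resistant MFA")
    if not req.out_of_band_confirmed:
        missing.append("out-of-band confirmation")
    if not req.liveness_passed:
        missing.append("liveness-checked identity proofing")
    if req.escalation and not req.supervisor_approved:
        missing.append("supervisory approval")
    return (len(missing) == 0, missing)
```

Encoding the policy this way removes the agent's discretion that social engineers exploit: urgency or authority cannot talk a function out of its checklist.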
Strengthening Workplace Security with Robust Identity Proofing

Effective identity proofing is paramount for preventing unauthorized access. While authentication confirms credential possession, identity proofing confirms the claimant's true identity, crucial against social engineering where attackers have valid information but aren't the legitimate user.


Robust identity proofing practices are essential throughout the employee lifecycle:

Onboarding: Ensures only legitimate employees gain initial access through verified IDs, background checks, and biometric enrollment.
High-Risk Transactions/Requests: For actions like helpdesk password resets or sensitive data access, identity proofing should be re-applied or elevated. This includes biometric verification with liveness detection, document verification, or live video verification with a trained agent.
Continuous Monitoring: Integrating identity proofing with continuous monitoring detects anomalous behavior, triggering strong proofing protocols if a user attempts unusual actions (e.g., new device enrollment from a foreign IP).

Strengthening identity proofing builds a more resilient defense against social engineering, significantly hindering impersonation attempts.

How HYPR Affirm Thwarts Social Engineering Attacks

HYPR Affirm directly combats sophisticated social engineering attacks targeting helpdesks and identity systems, especially those amplified by generative AI and deepfakes. It shifts from vulnerable, probabilistic identity verification to a deterministic, phishing-resistant approach.

Here's how HYPR Affirm helps:

Eliminates Phishable Credentials: Built on FIDO standards, it enables strong, passwordless authentication, removing the primary target for phishing and credential-stuffing attacks.
Deterministic Identity Assurance: Provides comprehensive, adaptable identity verification using high-fidelity proofing, like live biometric verification with liveness detection, to confirm the user's true identity, not an impersonator.
Automates & Strengthens Workflows: Automates complex identity verification flows, reducing human error and ensuring consistent protocols. High-risk events trigger robust identity proofing automatically.
Adaptive Risk Analysis: Incorporates real-time identity risk analysis, leveraging dynamic signals to detect suspicious behavior (unusual logins, device changes), driving adaptive security measures.
Protects Fallback Mechanisms: Ensures even alternative authentication methods are secure and phishing-resistant, or require strong identity proofing for recovery actions.

By implementing HYPR Affirm, organizations can fortify their identity security, making it significantly harder for social engineers to trick helpdesk personnel and gain unauthorized access.

Key Takeaways

Social engineering is a growing threat: Attackers use sophisticated psychological tactics and AI-powered tools to target helpdesks for unauthorized access.
Vulnerable verification methods are the entry point: Traditional, static identity checks (like passwords and security questions) are easy for attackers to bypass.
Phishing-resistant authentication is key: Deploy FIDO-based passwordless solutions to eliminate the primary target of most social engineering attacks—the password itself.
Implement deterministic identity proofing: For high-risk actions like credential resets, use strong, modern methods like live biometric verification with liveness detection to ensure the user is who they claim to be.
Strengthen helpdesk procedures: Train staff to handle high-pressure situations and use secure, automated workflows to reduce human error and enforce consistent security policies.
Leverage purpose-built tools: Solutions like HYPR Affirm are designed to provide AI-resistant identity assurance, offering a crucial layer of defense against modern social engineering techniques.

Conclusion

Generative AI amplifies the evolving threat of helpdesk social engineering, which bypasses technical controls by exploiting human elements and outdated identity verification. Countering this requires deterministic controls and robust identity proofing, prioritizing phishing-resistant passwordless authentication and dynamic high-risk verification. HYPR Affirm offers essential tools for AI-resistant identity assurance, enabling organizations to prevent attacks and achieve comprehensive passwordless security.

FAQs

Q: Why Is Social Engineering Effective? A: Social engineering works by exploiting human psychology (trust, urgency, fear) to manipulate individuals into making mistakes or divulging information, often through convincing fabricated scenarios or impersonation.

Q: How are Helpdesks Targeted in AI Voice Cloning Attacks? A: Attackers use AI to mimic an employee's voice from audio samples, then call the helpdesk, posing as that individual. They request sensitive actions like password resets or new device enrollments, often claiming urgency or a broken device to bypass MFA.

Q: What is an Example of a Social Engineering Attack? A: A vishing attack where an attacker calls a helpdesk, impersonating an executive who "forgot" their password. Using publicly available details, they pressure the agent to bypass verification and reset credentials for a "critical project."

Related Resources

Webinar: Prevent Helpdesk Social Engineering with HYPR
Blog: Authentication in the Time of Generative AI: Strengthened Attacks
Guide: Passwordless MFA Security Evaluation Guide
Blog: Using Deterministic Security To Stop Generative AI Attacks
Blog: The Rise of Multi-Factor Verification
Blog: Best Practices for Identity Proofing in the Workplace


Anonym

How to Get 9 “Second Phone Numbers” on One Device


A second phone number is an additional phone number that you can use on your existing device, separate from your primary number. 

You use a second phone number to shield your personal phone number in situations where you don’t want to give out your private line.

Second phone numbers (or secondary phone numbers, as they’re sometimes called) can be either tied to a traditional SIM or operate on an internet connection (VoIP numbers). VoIP or “voice over IP” phone numbers are assigned to a user and not to a physical location. A VoIP second phone number is often called a virtual phone number.

Second phone numbers are usually kept permanently or long-term for things like separating work and personal life, signing up for services, travelling, shopping online, and interacting with people and organizations you don’t know or trust. They can help you organize your communications, manage and secure accounts and services, reduce the risk of scams to your private line, and protect against unwanted contact.

Considering a person’s personal phone number is the most valuable piece of data to advertisers, data brokers and criminals, shielding it with a second phone number is a smart privacy move. But what can be even smarter is protecting your personal phone number with a bunch of secondary phone numbers and using them for different purposes in your work and personal life.

MySudo offers 9 secondary phone numbers on one device

One of the most popular features of the MySudo all-in-one privacy app is that you can quickly commission 9 secondary phone numbers on one device—and do it without giving away your personal information (except for UK numbers, which require identity verification).

Second phone numbers on MySudo are VoIP numbers, so they’re private virtual phone numbers.

But where MySudo differs from other services is that each of the 9 phone numbers sits within its own digital identity or persona called a Sudo, so you’re effectively running 9 separate identities or personas for any purpose you choose.

Even better, each Sudo also has a dedicated secure email, optional virtual card, private browser, and a handle for free end-to-end encrypted messaging and calling without a phone number at all.

Sudos are useful for opening accounts, booking flights and hotel rooms, paying for food delivery and ride share, online dating, volunteering, and selling secondhand—any activity where you’re asked for your personal information but don’t want to give it away. 

What are the benefits of MySudo phone numbers?

MySudo numbers are real, unique, working phone numbers in area codes or geographies of your choice. Numbers are currently available in the United States, Canada, and the United Kingdom.
Each phone number has customizable voicemail, ringtones, and contacts list.
MySudo numbers are fully functional for messaging plus voice, video and group calling. Calls and messages with other MySudo users are end-to-end encrypted. Calls and messages out of network are standard.
MySudo phone numbers don’t expire. Your phone numbers will auto-renew so long as you maintain your paid plan.
You can use MySudo phone numbers for short-term or long-term activities. Follow the 4 steps to setting up MySudo to meet your real life privacy needs.
You can mute the notifications of, or delete, a number you no longer want.*
MySudo numbers are VoIP numbers, which means they work over the internet instead of traditional phone lines or cellular networks. VoIP numbers can’t become a unique identifier to all your other personal information like a personal cell number can, and can’t be tracked like a cell number that’s connected to cell towers through its SIM card.

MySudo numbers give you a second chance at digital privacy.

How much do MySudo phone numbers cost?

Phone numbers are available with a MySudo paid plan. The plans offer good value:

SudoGo – the budget plan with a phone number

1 phone number
3 Sudos
100 messages a month
30 mins talk time a month
3 GB space

SudoPro – the great value plan with more of everything

3 phone numbers
3 Sudos
300 messages a month
200 mins talk time a month
5 GB space

SudoMax – the most Sudos for the most options

9 phone numbers
9 Sudos
Unlimited messages
Unlimited calls
15 GB space

Getting set up with MySudo is easy:

1. Download MySudo for iOS or Android.
2. Choose your plan.
3. Get MySudo Desktop and browser extension for extra convenience.

Watch this video from Naomi Brockwell on why you shouldn’t give out your personal phone number. From 5:20 in the video you’ll see Naomi explain privacy expert Michael Bazzell’s “clean-up strategy,” in which you lock down your personal cell number and create multiple VoIP numbers to use instead of your private cell.

FAQs What’s the difference between a second phone number and a temporary phone number?

A second phone number is an additional number you keep long-term or permanently for a particular purpose alongside your main phone number. You use it to shield your personal phone number for privacy and security reasons. A temporary phone number is similar but is typically created for short-term or one-off use and may expire automatically or be intentionally discarded after a single use or short time. Another name for temporary phone numbers is disposable phone numbers.

Are second phone numbers safe?

Second numbers are safe when you use a reputable service like MySudo.

What is MySudo?

MySudo is an all‑in‑one privacy app that offers up to 9 virtual phone numbers, secure messaging, dedicated secure email, virtual cards, and built-in private browsers to protect your personal information and digital identity.

What’s included in a Sudo?

Each Sudo digital identity or persona includes:

1 email address – for end-to-end encrypted emails between app users, and standard email with everyone else
1 handle – for end-to-end encrypted messages and video, voice and group calls between app users
1 private browser – for searching the internet without ads and tracking
1 phone number (optional)* – for end-to-end encrypted messaging and video, voice and group calls between app users, and standard connections with everyone else; customizable and mutable
1 virtual card (optional)* – for protecting your personal info and your money, like a proxy for your credit or debit card or bank account

*Phone numbers and virtual cards are only available on a paid plan. Phone numbers are available for US, CA and UK only. Virtual cards for US only.

Download MySudo

Learn more:

4 Steps to Setting Up MySudo to Meet Your Real-Life Privacy Needs
From Yelp to Lyft: 6 Ways to “Do Life” Without Using Your Personal Details
6 Ways to RECLAIM Your Personal Info from Companies that Sell it

* Deleting a phone number or its Sudo does not refund your entitlement for that phone number. For example, the SudoMax plan provides nine phone numbers in total over the lifetime of the account, as opposed to always allowing up to nine phone numbers concurrently. Once a number is used, the only way to get another phone number is to purchase a line reset.
**Obtaining a UK phone number through MySudo requires identity verification.

The post How to Get 9 “Second Phone Numbers” on One Device appeared first on Anonyome Labs.


ComplyCube

UK Retail Bank Cost of KYC: What Financial Institutions Need to Know

With the cost of KYC rising, compliance obligations have become a significant operational dilemma. For retail banks in the UK, strict expectations from global regulatory bodies have created a greater need to reassess KYC processes.

The post UK Retail Bank Cost of KYC: What Financial Institutions Need to Know first appeared on ComplyCube.


Metadium

MCP Server for the Metadium Blockchain: Bridging AI and Decentralized Identity

Introduction

The convergence of AI and blockchain technology is no longer a futuristic concept — it’s happening now. Imagine large language models (LLMs) like Claude or ChatGPT directly interacting with blockchain networks. Developers could build decentralized applications in a far more intuitive and efficient manner.

We’ve developed the Model Context Protocol (MCP) server for the Metadium blockchain to make this vision a reality. This server empowers AI models to interact with the Metadium network using natural language, enabling seamless access to core blockchain functions.

(This technology is already integrated into MChat, which was recently launched.)

Metadium + MCP: A Perfect Match

Metadium is a next-generation blockchain platform optimized for decentralized identity (DID) management. It supports DID protocols, smart contracts, and token standards such as MRC20 and MRC721, allowing users to retain complete control over their digital identities.

The Model Context Protocol (MCP), developed by Anthropic, enables AI models to interact with external systems in a structured way. With MCP, AI can go beyond text generation and execute real-world tasks through direct system integration.

Key Features

1. Comprehensive Account Management

The Metadium MCP server supports a wide range of blockchain account operations:

Balance Check: Retrieve METADIUM token balances for up to 20 addresses simultaneously
Transaction History: Access both external and internal transaction logs
Token Tracking: View MRC20 and MRC721 token holdings and transfer activity
Mining Records: Trace mined blocks by a specific address

@mcp.tool()
async def get_metadium_balance(addresses: List[str]) -> Dict[str, Any]:
    """Get METADIUM Balance for one or more addresses (max 20)"""
    if len(addresses) == 0 or len(addresses) > 20:
        raise ValueError("addresses must contain 1–20 items")
    # … implementation details

2. Smart Contract Integration

Developers can seamlessly interact with smart contracts:

ABI Retrieval: Access verified contracts’ Application Binary Interface
Source Code: View the source code of verified contracts
Contract Verification: Automate source code verification for new contracts

3. DID Support

Specialized tools for managing Metadium’s decentralized identity system:

DID Stats: Track issuance by hour, day, or month
Total Issuance: Retrieve the real-time total number of issued DIDs

async def get_total_issued_dids() -> Dict[str, Any]:
    """Get total number of issued DIDs (Decentralized Identifiers)"""
    function_selector = "0xa1707e7b"  # Call the nextEIN() function
    call_data = {"to": MAINNET_DID_REGISTRY, "data": function_selector}
    # … Contract call via JSON-RPC

4. Full Ethereum JSON-RPC Compatibility

As an Ethereum-compatible blockchain, Metadium supports all standard JSON-RPC API functions:

Block Info: Retrieve detailed block data by hash or number
Transaction Handling: Send transactions, track status, and fetch receipts
Gas Estimation: Predict gas requirements for transactions
Event Logs: Filter and view smart contract event logs

Technical Architecture

Modular Design

The MCP server is organized into clearly separated modules:

├── api_client.py     # HTTP/JSON-RPC client
├── accounts/         # Account-related features
├── contracts/        # Smart contract features
├── eth_namespace/    # Ethereum-compatible API
├── statistics/       # Network statistics
└── others.py         # Utility functions

Asynchronous Processing

All blockchain interactions use Python’s async/await to ensure high performance and responsiveness through concurrent requests:

async def make_jsonrpc_request(
    method: str, params: List[Any] = None, request_id: int = 1
) -> Dict[str, Any]:
    async with httpx.AsyncClient() as client:
        response = await client.post(JSONRPC_API_BASE, headers=headers, json=payload)
        return response.json()

Human-Friendly Data Transformation

Hexadecimal blockchain data is automatically converted into readable decimal format for both AI models and end users:

def add_decimal_fields_to_block(block_data: Dict[str, Any]) -> Dict[str, Any]:
    hex_fields = ["baseFeePerGas", "difficulty", "gasLimit", "gasUsed", …]
    for field in hex_fields:
        if field in block_data and block_data[field].startswith("0x"):
            block_data[f"{field}_decimal"] = hex_to_decimal(block_data[field])
    return block_data

Real-World Use Cases

1. AI-Powered Blockchain Analytics

Developers can ask Claude natural-language queries like:

“Analyze DID issuance patterns over the past 7 days and list all MRC20 token transfers from this contract in chronological order.”

2. Smart Contract Development Support

Smart contract developers can speed up their workflows:

“Fetch the ABI of this contract, estimate gas costs for each function, and suggest optimization strategies.”

3. DeFi Application Monitoring

DeFi operators can monitor systems in real-time:

“Calculate the total value locked (TVL) in our protocol and summarize key events over the past 24 hours.”

Closing Thoughts

The Metadium MCP server is an innovative tool that bridges the gap between AI and blockchain. By enabling developers to interact with decentralized systems using natural language instead of complex APIs, it lowers the barrier to entry for building powerful Web3 applications.

The integration of Metadium — pioneering in decentralized identity — with cutting-edge AI capabilities opens up new possibilities across the blockchain ecosystem. We hope this tool empowers more developers to explore the world of Web3 and build truly transformative applications for the next digital era.

The Metadium Team


Website | https://metadium.com
Discord | https://discord.gg/ZnaCfYbXw2
Telegram(KR) | https://t.me/metadiumofficialkor
Twitter | https://twitter.com/MetadiumK
Medium | https://medium.com/metadium

MCP Server for the Metadium Blockchain: Bridging AI and Decentralized Identity was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

It's Time to Evolve Authentication Security


Identity-based attacks have become prevalent, and successful attacks are impactful. Attackers use increasingly sophisticated ways to breach privileged systems, so we must defend our accounts by elevating our identity security methods. Okta is committed to leading the industry in combating identity-based attacks through initiatives like the Secure Identity Commitment. Here are actionable steps you can take to protect your applications.

Table of Contents

Identity assurance is the goal
Demystifying authentication factors
Embrace phishing-resistant authentication
Avoid weak authentication methods
Elevate authentication security with Multi-factor Authentication (MFA)
Customize authentication requirements dynamically
Build secure apps by applying identity security concepts
Join the identity security evolution
Learn more about phishing-resistant authentication, identity security, and protecting your applications

Identity assurance is the goal

When we think about authentication, we think of gaining access to sensitive resources. We want some level of barrier so the data isn’t publicly available. It’s not enough to merely add a barrier, though. Wouldn’t it be more useful to have assurances that the user’s credentials are uniquely theirs and that no one can impersonate them? It’s more than a fence around the data; we also want assurance that the user accessing the data is who they say they are. It sounds great in theory.

We want to balance security requirements with our users’ comfort in an ideal world. Increased security requirements may increase user friction points. The more friction points a user encounters, the lower their satisfaction, engagement, and app usage – the balance point changes depending on the app user and the data sensitivity. For example, requirements may differ for public applications catering to consumers (B2C) versus internal applications used within an organization’s workforce.

Let’s navigate this balancing act together so you can find the right path for your needs.

Demystifying authentication factors

Before we dive into possible solutions, let’s review the three authentication factor categories:

Something you know
Knowledge factors include passwords and PINs

Something you have
Possession factors include devices such as smart cards, security keys, phones, and tablets

Something you are
Inherence factors include biometrics such as fingerprints and facial recognition

Authentication relies on one or more factor categories to establish identity assurances before granting users access to applications.
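As a rough illustration (the category names come from the list above; the helper itself is a hypothetical sketch, not an Okta API), true multi-factor authentication requires factors drawn from at least two distinct categories:

```python
from enum import Enum

class FactorCategory(Enum):
    KNOWLEDGE = "something you know"    # passwords, PINs
    POSSESSION = "something you have"   # smart cards, security keys, phones
    INHERENCE = "something you are"     # fingerprints, facial recognition

def is_multi_factor(presented: list[FactorCategory]) -> bool:
    # Two factors from the same category (e.g., password + PIN)
    # still count as single-factor authentication.
    return len(set(presented)) >= 2
```

For example, a password plus a security key spans two categories, while a password plus a PIN does not.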

Embrace phishing-resistant authentication

The best-in-class, more secure, and recommended authentication methods are phishing-resistant. Phishing-resistant authentication is more difficult to hack and mitigates unauthorized access due to intercepting PINs and sign-in links.
Phishing-resistant authentication relies on biometrics and specialized devices or equipment to prevent an attacker from accessing your application.

Phishing-resistant factors include the following forms.

Smart cards and PIV cards

Large enterprises, regulated industries, and government entities widely use smart cards and PIV cards. These organizations may issue smart cards for attaching personal profiles to shared workstation access, as seen in banks or hospitals. Organizations may also issue cards to their workforce as an extra security measure, even when employees use company-issued laptops.

Pros: Secure, can be uniquely tied to the user, and well utilized in industries

Cons: Requires a physical device that can be lost or stolen, not scalable to use for public and consumer security due to hardware requirements and convenience

Security keys and hardware devices

Hardware security keys are another elevated security mechanism organizations use for their workforce. Security keys offer differing levels of security, from the older and less secure Time-based One-Time Password (TOTP) keys and Near Field Communication (NFC) keys that require a secondary device such as a phone, to keys requiring biometrics. For the highest level of security, you’ll want to use keys and hardware with biometric capabilities. Security keys work by storing the credentials on hardware, which requires registering the key on each device you use. While keys that plug into computers may be familiar, biometric-capable hardware, such as a laptop, and capable software can also be a phishing-resistant authentication factor. Okta FastPass on a biometric-capable computer is an example of a phishing-resistant hardware device.

Pros: Biometric-based hardware devices are highly secure.

Cons: It may require a physical device, you need to register the key on each device you use, and it isn’t scalable for public and consumer security due to hardware requirements and convenience. Device manufacturers can make them small and lightweight for convenience, alleviating concerns about relying on bulky equipment. But what happens if the user loses or damages this device? How long would it take before they have access to the system again?

FIDO2 with WebAuthn and Passkeys

FIDO2 and WebAuthn combined are a strong authentication factor that utilizes biometrics on capable devices and new capabilities in web frameworks to increase user security reliably. This factor requires a biometric-capable device meeting FIDO standards, such as a phone or a laptop, and capable software. The World Wide Web Consortium spec for web authentication (WebAuthn) means JavaScript-based web apps can support phishing-resistant authentication right in the browser. The difference between phishing-resistant hardware factors, such as security keys or Okta FastPass on biometric devices, and Passkeys is discoverability and the ability to port credentials. Instead of storing credentials on the hardware, discoverable FIDO authentication stores credentials in software, such as the iCloud Keychain or Android Keystore. The credential storage makes authenticating on the same site across different devices within the same ecosystem possible without re-registering.

Pros: Biometric-based FIDO authentication is secure, scales for public and consumer users, and there is no need to carry a security key or card

Cons: Each app must support this authentication method, and consumers must own capable devices

For the highest levels of identity security, use phishing-resistant factors.

Phishing-resistant factors decision tree

We recommend phishing-resistant factors at Okta as they offer the best application protection. You have identity assurances built in, along with authentication security. Consider this decision tree for your authentication needs:

Avoid weak authentication methods

We no longer live in a world where passwords alone are good enough to secure sensitive resources. Studies have shown that over 80% of data breaches result from compromised credentials. We must elevate authentication methods by avoiding weak credentials and preferring more substantial forms. Look towards industry leaders in cybersecurity, including companies such as Okta, nonprofit foundations such as OWASP, and government standards such as NIST and NCSC, to guide you towards strong factors and away from weak ones. In particular, be wary of legacy factors.

Avoid security questions as a factor

Cybersecurity organizations do not recommend security questions, as they are neither secure nor reliable. Security questions are vulnerable to social engineering attacks. It’s best to avoid this method.

SMS one-time codes are unsafe

Attackers can access those messages through SIM-swapping and interception attacks. NIST proposes deprecating SMS as an authentication factor, so consider alternate authentication methods.

Email Time-based One-Time Passwords (TOTP) have similar security issues as SMS

Using email for TOTP presents similar security issues as SMS codes. Attackers can intercept email. Emails may mistakenly get flagged as spam. Email delivery delays can result in configuring longer time validity periods, causing lower security.

Avoid password antipatterns

Passwords must evolve by allowing longer character lengths and character variety. Avoid antipatterns such as complexity requirements and forced password resets. Enforce strong passwords by checking them against compromised password databases. Password managers can offset user risks by recommending unique, strong passwords for each site and applying the stored passwords. Still, password managers aren’t failproof, and users may use an insecure password for the password manager itself.
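One way to check passwords against compromised-password databases without disclosing them is a k-anonymity range query, as popularized by the Have I Been Pwned Pwned Passwords API. This sketch (the helper name is hypothetical) shows the client-side split — only the five-character hash prefix ever leaves your system:

```python
import hashlib

def breach_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest for a k-anonymity lookup.

    The 5-character prefix is sent to the breach-database API; the
    remaining 35 characters are matched locally against the suffixes
    the API returns, so the password itself never leaves the client.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]
```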

These factors do offer a weak barrier to sensitive resources, but a key element is missing: identity assurance. The weak authentication factors lack the safeguards to ensure the users making the authentication challenge are who they say they are.

Elevate authentication security with Multi-factor Authentication (MFA)

Passwords alone require caution, but a combination of passwords and other factors elevates identity security. A single legacy authentication factor is rarely secure enough to protect any resource, and it certainly isn’t safe enough to protect access to your users and your Okta configuration.

Adding factors such as authenticator apps supporting TOTP and push authentication increases the barriers to sensitive data. Raising the barriers helps protect your application by requiring more effort for impersonators trying to hack accounts. However, using the weakest authentication factors combined isn’t as strong as phishing-resistant.

Combine strong authentication factors

The best way to ensure authentication security and reasonable identity assurances is to combine moderate to high authentication factors. Doing so supports good security with secure fallback systems. For example, if you can’t use phishing-resistant authentication in a consumer scenario, layer a password with push authentication. Allow the consumer to opt into Passkeys while supporting MFA. For workforce scenarios, issue hardware keys as a backup factor in addition to Okta FastPass.

Okta’s authentication policy builder can help you create strong authentication requirements to access Okta services and applications protected by Okta’s sign-in while tailoring session lifetimes to your needs.

It’s time we evolve our application’s authentication security and favor phishing-resistant factors.

Customize authentication requirements dynamically

Identity security isn’t a one-size-fits-all solution. FIDO2 with WebAuthn factors such as Okta FastPass for workforce use cases and Passkeys for consumer use cases can be the standard methodology.

Consider Adaptive MFA for conditional authentication requirements

Complex use cases call for more tailoring. Your needs may change depending on use factors such as geographic location, IP addresses, device attributes, and threat detection. Identity Providers offer solutions that help you tailor authentication security. For example, Okta supports features such as Adaptive MFA, which adjusts authentication requirements depending on context, and Identity Threat Protection, which continuously monitors threats and can react by terminating authenticated sessions. If your industry requires the highest levels of identity security or your application contains highly sensitive resources, look to these options.

Revalidate identity for sensitive resource requests

Identity assurances don’t have to happen only at application entry. When sensitive actions and data require elevated authentication, consider using the Step Up Authentication Challenge to protect resources. The Step Up Authentication Challenge is an OAuth standard for requiring secure factors or recent authentication when performing actions within the application.
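As a sketch of the mechanism (the header format follows the OAuth step-up spec, RFC 9470; the helper itself is hypothetical), a resource server that needs stronger or fresher authentication rejects the request with a challenge the client can satisfy by re-authenticating:

```python
def step_up_challenge(acr_values: str, max_age: int) -> dict:
    """Build a 401 challenge asking the client to step up authentication.

    acr_values names the required authentication context (for example a
    phishing-resistant class) and max_age bounds how recently the user
    must have authenticated, per the OAuth Step Up Authentication spec.
    """
    return {
        "status": 401,
        "headers": {
            "WWW-Authenticate": (
                'Bearer error="insufficient_user_authentication", '
                f'acr_values="{acr_values}", max_age="{max_age}"'
            )
        },
    }
```

The client reads the challenge, sends the user back through authentication with the requested context, and retries with the new access token.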

Third-party interactions may require identity assurances. While we primarily think about authenticating as a solo activity, think about the case where someone calls into a help center for support. The help center agent needs to verify identity remotely, and we don’t want to rely only on weak methods such as passwords or pins. Consider using Client-Initiated Back-channel Authentication (CIBA) for your application in cases like this.

What do all these recommendations mean for developers working on these applications? How can we take advantage of identity security best practices?

Build secure apps by applying identity security concepts

We developers have a tough job. We must ensure our applications meet compliance requirements and guard against security threats, all while delivering product features. Authentication is foundational, but not your entire product line. It’s an expectation that doesn’t drive product innovation for your app but is detrimental when implemented incorrectly.

Use an Identity Provider (IDP) that supports OAuth 2.1 and OpenID Connect (OIDC)

To best protect your application and free yourself from getting into the weeds of implementing authentication, delegate it to your Identity Provider (IdP) whenever possible. When you delegate authentication to an IdP like Okta, you can access industry-recognized best practices, such as using OAuth 2.1 and OpenID Connect (OIDC) standards with user redirect for the authentication challenge. Redirecting the user to the Okta-hosted Sign-in Widget frees you from managing authentication methods manually. It allows you to leverage the Sign-in Widget user challenge with the Okta Identity Engine (OIE) for phishing-resistant authentication factors. Using the Okta Identity Engine means your app accesses the latest and greatest features for secure identity management.

Delegate authentication to your Identity Provider (IDP)

When you redirect the user to Okta for sign-in, you make authentication Okta’s problem. And that’s great because it provides you with the most security and the least amount of work. Your Okta administrator can configure authentication policies and add business rules to those authentication user challenges. You don’t have to worry about how to implement WebAuthn in your app, ensuring you have all the user controls to handle push notifications, or track sign-in context to adapt authentication factors. It’s all handled. All you need to know is whether the user completed authentication challenges, and then you can return to delivering features.

If you’re concerned a browser redirect for sign in degrades user experience or if your application’s use case demands a custom look and feel, you can customize the Okta-hosted Sign-In Widget’s styles. When you combine a custom-branded Sign-In Widget with a custom domain, your users may never know they leave your site. We’re continuing to build out capabilities in this area so you can deliver both secure identity and branding requirements. Be on the lookout for content about customizing sign in.

Use a vetted and well-maintained OIDC client library

A vetted, well-maintained OIDC client library increases implementation speed, lowers developer effort, and, most importantly, is crucial for authentication security. Because OAuth 2.1 and OIDC are open standards, writing your code to handle the required transactions is tempting. Resist the temptation for the sake of your application security and the efforts for the continued maintenance that good authentication libraries require. It’s too easy to introduce developer error in something like the Proof-Key for Code Exchange (PKCE) verification steps or to miss something in the token verification, for example. Many more subtle errors can adversely affect your application. Resist the temptation.

The standards can also change over time, such as adding new protection mechanisms or introducing breaking changes. Writing custom implementation means changes and maintenance become your responsibility, and you can’t presume prior spec knowledge is good enough, as specs can change. Resist the temptation and take this responsibility off your plate.
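To illustrate how easy these details are to get wrong, here is the S256 code-challenge derivation from PKCE (a minimal sketch of just this one step; a real client library does much more): both values must use base64url encoding with the padding stripped, and a single slip here silently weakens the exchange.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge."""
    # 32 random bytes -> 43-character base64url verifier (padding removed)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # challenge = BASE64URL(SHA-256(verifier)), again without padding
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode("ascii")
    return verifier, challenge
```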

Ideally, use a vetted, well-maintained OIDC client library that is OIDC-certified or the Okta SDKs. Okta’s SDKs not only securely handle the OAuth handshake and token storage for you, but you’ll also get built-in support for the latest advancements in OAuth specs, such as Step Up Authentication Challenge, CIBA, and more.

Join the identity security evolution

Protect your workforce and customers by elevating authentication factors using phishing-resistant factors. Allow Okta to work for you by configuring strong authentication policies. Enable dynamic authentication factors and threat detection in your Okta org to mitigate data breaches and strengthen your reputation.

In your software applications, leverage Okta SDKs to redirect users to the Okta-hosted Sign-in Widget and gain access to the more secure authentication factors efficiently and seamlessly. Then, build more safety into your apps by adding the Step Up Authentication Challenge to maintain identity security. Staying updated with the latest security best practices and thoughtfully integrating OAuth specs are essential to secure identity management.

Apply these key takeaways

Use phishing-resistant factors for authentication wherever possible, preferring Passkeys and Okta FastPass depending on use case and target audience
Offer strong MFA options as backup authentication methods
Delegate identity management and authentication to an Identity Provider (IdP) supporting OAuth 2.1 and OIDC
Use an OIDC client library to redirect users to sign in through an Okta-hosted sign-in page
Consider using OAuth extension specs to elevate identity assurances continuously throughout the lifetime of a user session
Learn more about phishing-resistant authentication, identity security, and protecting your applications

I hope you feel inspired to join the secure identity evolution. If you found this post interesting, you may enjoy the following:

How to Secure the SaaS Apps of the Future
Introducing CIBA for Secure Transaction Verification
Add Step-up Authentication Using Angular and NestJS
Why You Should Migrate to OAuth 2.0 From Static API Tokens

Remember to follow us on LinkedIn and subscribe to our YouTube for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!


Aergo

Noosphere: A Gateway to Verifiable Off-Chain Intelligence

TL;DR
Smart contracts are powerful but limited. They can’t think, adapt, or process complex real-world data. This becomes a major bottleneck as Web3 intersects with AI, RWA, and scientific computation. Noosphere introduces a verifiable off-chain intelligence layer, enabling smart contracts to securely delegate off-chain inference and computation.
Limitations of Smart Contracts

Smart contracts revolutionized the game by introducing decentralized, deterministic, and transparent automation. But as the use cases for Web3 evolve, touching AI, real-world assets, and scientific modeling, those same design principles begin to feel limiting.

Smart contracts were never meant to think. They can’t infer, predict, adapt, or process complexity the way humans (or AI) can. This bottleneck has become one of the biggest blockers to building intelligent dApps.

While this design ensures security and transparency, it severely limits the capabilities of Web3 applications. Smart contracts cannot:

Perform AI inference (e.g., LLM-based responses)
Aggregate multi-source or time-sensitive data
Execute heavy off-chain computations
Dynamically interact with complex, uncertain real-world conditions

This forces developers to either:

Build oversimplified logic directly on-chain, or
Depend on centralized APIs or external scripts, undermining decentralization and verifiability.

Without off-chain computation, smart contracts can’t process large datasets, verify model outputs, or manage economic incentives related to data generation and verification. Worse, the absence of verifiability creates a black box. As a result, decentralized applications across DeFi, DeSci, and RWA are often stuck between being too limited to be useful or too centralized to be trusted.

What we need is not just more data. We need a programmable, auditable, and privacy-preserving delegation layer that brings intelligent logic on-chain while preserving decentralization.

That’s where Noosphere comes in. It enables smart contracts to securely delegate off-chain computation to verifiable agents, bridging the gap between on-chain determinism and off-chain intelligence. With Noosphere, decentralized applications can reason, adapt, and act intelligently without sacrificing decentralization, privacy, or auditability.

What Noosphere Enables

With Noosphere, developers can:

Request off-chain computation directly from smart contracts, including LLM inference, risk assessments, or simulations. Receive verifiable responses and integrate them securely into on-chain workflows. Build privacy-preserving, intelligent dApps using a unified framework that combines compute infrastructure, oracles, and verification layers. Orchestrate AI agents that are programmable, auditable, and trustless.

By serving as a decentralized coordination and verification layer for off-chain logic, Noosphere upgrades the capabilities of smart contracts across all major sectors.

Real-World Applications

DeFAI Agents (via ArenAI): Agents powered by off-chain AI models that autonomously allocate assets, hedge risks, or rebalance portfolios across chains, integrated directly into DeFi.
DeSci Protocols: Scientific research platforms can outsource high-performance modeling (e.g. protein folding, climate simulation) to Noosphere agents.
On-Chain RWA Intelligence: Tokenized real-world assets (real estate, receivables) gain real-time valuations, credit scores, or logistics tracking via AI models verified through Noosphere.
Decentralized Compliance & KYC (with Booost): AI agents trained on regulatory data assess AML risk or compliance patterns and return auditable scores. When paired with Booost’s proof-of-humanity, it enables dynamic, compliant onboarding across ecosystems.
Inference Markets for Synthetic Datasets: Researchers can generate and verify AI-based interpretations in medical, legal, or financial contexts. Tokens are staked to incentivize validation, with outputs coordinated and verified through Noosphere.

As decentralized applications evolve beyond static logic and into intelligent, adaptive systems, the need for verifiable off-chain computation becomes urgent. Noosphere fills this gap, not by replacing smart contracts, but by extending their capabilities with off-chain AI reasoning, data coordination, and secure delegation. Whether you’re building in DeFi, DeSci, RWA, or beyond, Noosphere unlocks the infrastructure to make your dApps not just programmable, but truly intelligent.

Noosphere: A Gateway to Verifiable Off-Chain Intelligence was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 05. August 2025

Indicio

How to deploy mobile driver’s licenses (mDLs) with Indicio Proven®

The post How to deploy mobile driver’s licenses (mDLs) with Indicio Proven® appeared first on Indicio.
A mobile driver’s license (mDL) is a type of cryptographically verifiable digital credential that you hold in a digital wallet on a mobile device. You can now issue and verify mDLs in Indicio Proven — along with other popular credential formats and communications protocols. By Helen Garneau

A mobile driver’s license (mDL) is a digital credential built on the ISO/IEC 18013-5 standard. It is stored directly on a user’s device, can be verified cryptographically, and works in both online and offline settings using Bluetooth or NFC.

This means you can digitally verify someone’s identity without relying on real-time access to a central database or system. This also means you can carry a government-issued credential in your mobile wallet and present only the specific data needed for a given interaction, such as confirming your age or license status. These credentials are self-contained, shared with consent, and designed to protect privacy.

Indicio Proven is the easiest way for organizations to issue and verify mDLs with minimal friction. It helps you establish a foundation for portable digital identity, reducing fraud and supporting regulatory compliance while allowing individuals to manage their own identity information.

How to Issue and Verify mDLs with Indicio Proven

1. Select your deployment approach: Choose Indicio’s hosted option or deploy Indicio Proven within your own infrastructure. Both support quick integration with your existing systems.
2. Define your credential schema: Start with the ISO 18013-5 mDL schema or customize it with region-specific or sector-specific fields such as military service, donor status, or endorsement codes.
3. Add document verification: Use one of Indicio’s partners, like Regula, to validate documents and biometric data before issuing digital credentials.
4. Issue to a secure mobile wallet: Once the identity is verified, Proven issues the mDL directly to a wallet controlled by the user. The credential stays on their device and under their control.
5. Support flexible verification: Enable verification online through APIs or offline through NFC or Bluetooth. The system respects consent and limits data exposure by design.
6. Scale to meet future needs: The same infrastructure can issue and verify other credentials, including travel credentials, health records, and proof of residency. No additional systems are required to expand.

Why Choose Indicio Proven

Indicio Proven is designed to evolve with your use case, giving you the flexibility to grow without rebuilding your solution. It’s a complete, end-to-end solution for implementing interoperable Verifiable Credentials, their associated communications protocols, and digital wallets via a mobile SDK. And it comes with the support, training, and upgrades needed to ensure your implementation succeeds, and keeps succeeding.

Take your credentials across jurisdictions, industries, and verification scenarios. Protect privacy from the start, and build consent into every transaction.

Get in touch today for a free demo of Verifiable Credentials + mDL and see how Proven can power your digital identity strategy.

###

 

The post How to deploy mobile driver’s licenses (mDLs) with Indicio Proven® appeared first on Indicio.


liminal (was OWI)

Liminal Demo Day: Evolving Identity Access Management

The post Liminal Demo Day: Evolving Identity Access Management appeared first on Liminal.co.

Spherical Cow Consulting

Not Just a Technical Problem: Why Fighting Disinformation Needs Resilient Infrastructure

Disinformation. Misinformation. Malinformation. These terms get used interchangeably, but they’re not the same thing. That distinction matters when designing resilient infrastructure that supports trust. Most of our efforts to address these problems focus on content, activities like fact-checking, moderation, and takedown requests.

“Disinformation. Misinformation. Malinformation. These terms get used interchangeably, but they’re not the same thing.”

That distinction matters when designing resilient infrastructure that supports trust.

Misinformation is false or misleading information shared without intent to deceive. Disinformation is deliberately deceptive content, often politically or financially motivated. Malinformation is factual information used out of context to cause harm.

Most of our efforts to address these problems focus on content, activities like fact-checking, moderation, and takedown requests. And those are important. But after sitting through multiple sessions at WSIS+20 last month, I came away thinking about the architectures that enable or undermine digital trust in the first place. (Did you see my post last week on learnings from WSIS+20?)

Remember, trust doesn’t start with content. It actually starts with infrastructure.

The people in those WSIS+20 rooms weren’t talking about disinformation in the abstract. They were talking about humanitarian workers in the field, where timely, accurate, and secure information can be a matter of life and death. They talked about public health campaigns, peacekeeping missions, and journalists trying to survive in an environment where lies move faster than truth. And in almost every session, it became clear that the technical underpinnings of the Internet—especially in crisis and conflict settings—are being overlooked.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Identity is part of the equation

While identity wasn’t explicitly discussed in these sessions, it’s a critical part of establishing authenticity, which in turn helps build trust. IAM systems can’t prevent disinformation, but they can help validate source integrity and support accountability.

Verified senders can be identified without compromising privacy. Digital credentials can establish provenance for content or data. (Shout out to the C2PA work here!) Attribute-based access can help ensure information reaches the right people in the right roles.
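The "validate source integrity" idea can be sketched in a few lines. This is a minimal illustration using a shared-key MAC from the Python standard library; real deployments would use asymmetric signatures, verifiable credentials, or C2PA manifests, but the principle is the same: a recipient can confirm that a message came from a known sender and was not altered in transit.

```python
# Minimal sketch of source-integrity checking with an HMAC
# (stdlib only). The key name and message are invented examples.
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> str:
    """Produce an authentication tag the sender attaches to a message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(key, message), tag)

key = b"shared-sender-key"
msg = b"Evacuation route B is open until 18:00."
tag = sign(key, msg)
```

A crisis-response channel using even this simple scheme lets recipients reject messages whose tags don't verify, which is exactly the accountability layer the bullet points above describe.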

I’m not promoting centralized control or surveillance. What I want is to build confidence in the systems we rely on to make decisions, especially in high-stakes environments.

Disinformation and infrastructure resilience

Something I thought about as I settled down with my notes after the event, though it wasn’t phrased quite this way during any of the sessions: when infrastructure fails, it doesn’t just disrupt services; it disrupts the foundation of trust that identity and information systems rely on.

Several sessions at WSIS+20 focused on resilient digital infrastructure, especially in the context of sustainability and the UN’s 2030 Agenda. Speakers from IEEE, CERN, and disaster risk reduction agencies reminded us that resilience is more than just a technical property; it’s what enables everything else. Disinformation thrives when infrastructure fails. That includes failures of availability, integrity, and interoperability. When identity systems falter, the ability to authenticate sources, validate messages, and maintain digital trust during crisis response suffers, too.

Digital infrastructure often isn’t designed to serve people in remote or underserved areas. Technical standards don’t always account for multilingual or multi-platform accessibility. Short-term, market-driven decisions prioritize scalability over long-term resilience.

Standards developers and IAM professionals know this at a technical level. Heck, I wrote about this a few weeks ago in a post on resilience in standards. But what’s often missed is how infrastructure failure becomes a governance issue. When people lose trust in digital systems, they distrust more than just the failed platform. They also start to distrust institutions and even each other.

Resilience isn’t for other people

IAM systems face similar challenges: do we build for edge cases, or optimize for the majority? Whose threat model are we prioritizing? How do we balance user experience with verifiability?

Just to make it more complicated, there is the fact that technology designed to protect can also exclude.

Overly strict verification requirements can lock out vulnerable populations. Misapplied protections can be used to suppress journalism or advocacy. “Safety” features can become surveillance tools in the wrong hands.

Even well-intentioned systems can marginalize people when their design doesn’t include a wide range of needs and experiences.

If we want to fight disinformation at scale, we need to stop thinking of it as just a content problem. It’s an infrastructure problem. And digital identity experts and standards architects have a role to play.

Closing the loop: From resilience back to disinformation

The sections above touched on how resilient, inclusive infrastructure supports digital trust. But let’s not lose sight of the central theme: disinformation. It spreads most easily where infrastructure is brittle, trust is low, and identity signals are weak or absent. That’s why the work of IAM professionals and standards developers matters—not just for security or compliance, but for defending the conditions in which truth can survive.

So, what can identity professionals do?

I love it when a plan comes together, and the plan here is to think about fighting disinformation and improving the resilience of our systems.

Treat resilience as a design goal: Build IAM systems that account for low-connectivity, low-trust environments.

Make authenticity an architectural concern: Support verifiable claims, provenance metadata, and strong-but-private identifiers.

Engage in governance conversations: Push for feedback loops between standards bodies, policymakers, and civil society. Ask who is being served and who isn’t.

And what can standards architects do?

Define and document trust assumptions: Clearly state what the system assumes about message integrity, source authenticity, and the broader infrastructure. Make those assumptions visible and testable.

Design for degraded conditions: Create standards that support verifiability even when connectivity is intermittent, metadata is partial, or infrastructure is compromised.

Include threat models beyond fraud: Consider disinformation campaigns, information suppression, and adversarial use of identity signals in your threat models.

Build consultation into the process: Include journalists, humanitarian responders, civil society groups, and policy experts in standards development. Their use cases will expand your view of what “interoperable” and “resilient” really mean.

Building for trust means building for everyone

Trust isn’t just about whether users believe your system is secure. It’s about whether they believe the Internet is still a place where truth can be found and relied upon. That belief erodes when digital systems exclude marginalized, underserved, and underrepresented users, whose experiences and threat models are often left out of design decisions. And that erosion creates fertile ground for disinformation, misinformation, and malinformation to take root.

This connection wasn’t made explicitly in the WSIS+20 sessions, but it became clear to me: trust in digital systems isn’t separate from trust in public discourse. If we want to defend the truth, we have to build systems that serve the whole public, not just the easy parts of it.

If we want to fight disinformation at scale, we need to stop thinking of it as just a content problem. It’s an infrastructure problem, and identity has a role to play.

This work is messy. It spans disciplines, sectors, and priorities. But if we want trustworthy systems, we have to build them with and for the people who rely on them most. That starts with looking beyond our immediate use cases and asking harder questions about who benefits, who’s left out, and what it means to build for trust in a world where truth itself is contested.

Want to stay updated when a new post comes out? I write about digital identity and related standards—because someone has to keep track of all this! Subscribe to get a notification when new blog posts and their audioblog counterparts go live. No spam, just announcements of new posts. [Subscribe here

Transcript

Welcome to the Digital Identity Digest
[00:00:04]

Welcome to the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. I’m Heather Flanagan, and every week I break down interesting topics in the field of digital identity — from credentials and standards to browser weirdness and policy twists.

If you work with digital identity but don’t have time to follow every specification or hype cycle, you’re in the right place.

Let’s get into it.

What Is Disinformation, Really?

[00:00:29]

Disinformation. Misinformation. Malinformation.

They may sound similar, but these terms have crucial differences. And if we want to design digital systems that truly support trust and accountability, those differences matter.

This week, I’m sharing an unexpected takeaway from my time at WSIS+20 in Geneva. I left that event with a strong belief that disinformation isn’t just a content problem — it’s an infrastructure problem.

And that infrastructure includes identity.

Defining the Terms

[00:01:05]

Let’s start with some clear definitions — because words matter.

Misinformation is false or misleading information shared without intent to deceive. Think: hearing a rumor and passing it along without realizing it’s untrue. Disinformation is intentionally deceptive, crafted and spread to influence behavior or opinion, often politically or financially. Malinformation is true, but used maliciously — like doxing someone or leaking sensitive context to cause harm.

Most efforts to combat these focus on content — fact-checking, takedowns, moderation policies. And that work is vital.

But what I heard in the WSIS sessions wasn’t just about policies. It was about digital infrastructure.

Real-World Impact: Why Infrastructure Matters

[00:02:00]

Here are a few stories that helped this hit home:

Humanitarian workers struggling to communicate securely in conflict zones. Journalists fighting to survive and tell the truth amid algorithmic lies. Peacekeeping missions and public health campaigns racing to get accurate information out before disinformation spreads faster.

In all these cases, trust didn’t hinge on whether someone flagged a tweet. It depended on whether the underlying systems could support or sabotage the truth.

Technical Failures Become Governance Failures

[00:03:02]

If your network goes down, people will turn to unofficial channels.
If your logs are incomplete or timestamps unverifiable, message integrity falls apart.
If your system can’t authenticate a sender, how do you know whether or not to act?

That’s not just a technical failure — it’s a governance failure.

And when people lose trust in digital systems, the consequences ripple outward:

Trust in platforms erodes. Trust in institutions falters. Trust in each other breaks down.

Where Identity Comes Into Play

[00:03:45]

Interestingly, identity wasn’t a primary topic in most disinformation sessions. But it kept showing up — just at the edges.

Because when you ask:

Who sent this message? Has it been tampered with? Is this authentic?

You’re really asking identity questions.

Identity systems can help us answer those questions without sacrificing privacy, by:

Establishing provenance. Enabling verified senders to be trusted faster. Supporting credentials that show who said what, and when. Ensuring information flows to the right people at the right time.

While identity and access management alone can’t solve the disinformation crisis, they’re essential tools in restoring trust in the systems where that information travels.

Designing for True Resilience

[00:04:50]

Another recurring theme at WSIS+20 was resilience. Not just uptime and backups — real resilience.

How systems perform in messy, unpredictable, even dangerous environments.

Sessions on sustainability, infrastructure, and disaster response included speakers from IEEE, CERN, physicists, and others who manage risk daily.

One takeaway stuck with me:

“Resilience isn’t just technical — it’s a social contract.”

When resilience breaks down, we’re breaking that contract. We’re designing for the well-connected, the resourced, the mainstream — not for:

Remote communities. Multilingual populations. Low-trust or high-risk environments.

And identity systems? They struggle with this all the time.

Exclusion Creates Fertile Ground for Disinformation

[00:06:02]

Strict verification protects against fraud. But what if you’re a displaced person without documents?

In trying to protect, we often exclude. And where people are excluded, disinformation grows.

Because people turn to what’s available. If trustworthy systems aren’t available — or don’t work for them — they’ll turn to anything that is.

So bringing this back full circle, disinformation thrives where:

Systems can’t verify sources. Users don’t trust what they see. Infrastructure fails or excludes.

If your digital trust infrastructure — identity included — only works in ideal conditions, then you’ve built perfect conditions for disinformation.

Why Identity Standards Matter

[00:07:01]

Identity and access management (IAM) standards matter because they define the defaults.

They determine:

What’s interoperable. What can be verified. Whether truth can be seen, heard, and trusted.

So if you’re an identity professional, what can you actually do?

What Identity Professionals Can Do

[00:07:25]

Here are some tangible steps to start with:

1. Treat resilience as a design goal
Consider low-connectivity and low-trust environments. Build for those, too.

2. Make authenticity an architectural concern
Support verifiable claims, embed provenance, and use privacy-preserving identifiers.

3. Engage in governance conversations
Don’t outsource this to policymakers. Collaborate with standards groups, civil society, policymakers, and employers.

Ask hard questions:

Who’s being served? Who’s being left out?

For Standards Architects: You Are My People

[00:08:20]

If you work on protocols, specs, or standards, here’s your to-do list:

1. Define and document trust assumptions
Spell out what the system presumes about message integrity and infrastructure.

2. Design for degraded conditions
Don’t assume perfect metadata or nonstop uptime.

3. Think beyond fraud
Include disinformation, suppression, and misuse in your threat models.

4. Build consultation into the process
Bring in journalists, emergency responders, and civil society leaders.

Their use cases will expand your understanding and improve your solutions.

Closing Thoughts: Trust as a Design Mandate

[00:09:30]

Trust isn’t just about security. It’s about whether people believe in digital systems at all.

When systems exclude people — by design or by neglect — trust erodes.
And in that erosion, disinformation thrives.

That’s what stood out to me most at WSIS+20.

If we want to fight mis-, dis-, and malinformation, we can’t just treat it as a content problem.

We must treat it as an infrastructure problem.

And identity professionals and standards architects?
We’re part of the solution.

It’s messy work. Cross-disciplinary. Politically thorny. Often frustrating.
But if we want trustworthy systems, we must build them for everyone — not just the easy users.

So keep asking:

Who’s benefiting? Who’s being left out?

Make it explicit. Even if it’s uncomfortable.

What does it really mean to build for trust, in a world where truth itself is constantly contested?

Food for thought. And thank you for listening.

Final Notes

[00:10:00]

If this helped make the complex a little clearer — or at least more interesting — share it with a friend or colleague.

Connect with me on LinkedIn @hlflanagan.

And if you enjoyed the show, subscribe and leave a review on Apple Podcasts…

[00:10:16]
…or wherever you listen.

[00:10:19]

You can also find the full written post at sphericalcowconsulting.com.

Stay curious. Stay engaged.
Let’s keep these conversations going.

The post Not Just a Technical Problem: Why Fighting Disinformation Needs Resilient Infrastructure appeared first on Spherical Cow Consulting.

Monday, 04. August 2025

1Kosmos BlockID

The 15-Second Voice Sample That Could Empty Your Bank Account: How AI Voice Cloning is Rewriting the Scammer’s Playbook

Imagine I got a FaceTime call from my daughter right now, tears streaming down her face, desperately pleading for help. “Dada, I’m stuck somewhere. I need some money right now. I lost my wallet. Could you just send me an Apple gift card?” The voice is unmistakably hers. The face looks exactly right. My parental …

Imagine I got a FaceTime call from my daughter right now, tears streaming down her face, desperately pleading for help. “Dada, I’m stuck somewhere. I need some money right now. I lost my wallet. Could you just send me an Apple gift card?” The voice is unmistakably hers. The face looks exactly right. My parental instincts would kick in, and the probability of me actually getting taken by it and sending her money would be very, very high. Except my daughter is safely at home, completely unaware of what just happened. I would have just become the latest victim of an AI voice cloning attack that required nothing more than a 15-second voice sample to execute.

These kinds of attacks are happening every day, and cybersecurity experts are warning that we’re on the brink of an epidemic that will make traditional phone scams look like child’s play.

From Comedy Central to Criminal Enterprise: The Evolution of Voice Mimicry

Voice attacks aren’t new. For decades, skilled impressionists have made careers out of mimicking celebrities on Comedy Central and late-night television. Turn on your TV and watch any stand-up comedy where people mimic the voice of somebody else. Not everybody is good at mimicry; a few people are really good at it, it’s their skill set, and they make a living out of it.

So, what’s the difference? What once required rare talent and years of practice can now be accomplished by anyone with a smartphone and access to AI tools.

You can literally take a voice sample of 20 seconds, 15 seconds, and trust me, getting a voice sample of any user is a piece of cake. You can record them in a meeting, in webinars, at conferences. Taking a voice sample of a person, feeding it into an AI engine, and having AI generate paragraphs of text in your voice couldn’t be easier.

The technology combines voice cloning with face swapping capabilities, creating what security professionals call “deepfakes”: AI-generated content that can make anyone appear to say or do anything. Unlike the obvious robotic voices of yesterday’s scam calls, these new attacks are virtually indistinguishable from the real thing.

The Perfect Storm: Why Voice Cloning Attacks Are About to Explode

Currently, sophisticated voice cloning technology requires some technical expertise to deploy effectively. But that barrier is rapidly disappearing. But before you snap your fingers, trust me, this is going to be in the palms of every individual on this planet because they are building AI agents, voice bots, chatbots, and all of them are available as apps on your phone.

The democratization of AI tools means that what once required specialized knowledge will soon be as simple as downloading an app. Combined with the wealth of voice samples available through social media, video calls, and public speaking engagements, attackers will have unprecedented access to the raw materials needed for convincing impersonations.

Consider the attack surface: every Zoom meeting, every Instagram story, every TikTok video, every voicemail message becomes potential ammunition for cybercriminals. For public figures, executives, or anyone with an online presence, avoiding voice sample collection is virtually impossible.

From Spam Calls to Family Emergencies: The Human Cost of AI Deception

The implications extend far beyond individual financial losses. Traditional text-based scams already trick thousands of people daily with messages claiming, “I’m stuck at an airport. I need an Apple ID or gift card.” Now imagine those same scenarios playing out with the actual voice and face of a loved one making the plea.

Imagine what’s going to happen to all these spam calls that people have been receiving over time. Those text messages that you get, saying, “I’m stuck at an airport. I need an Apple ID or an Apple Card or a gift card,” and people fall for it. Imagine that happening in the age of AI. It’s going to be rampant.

The psychological impact cannot be overstated. When a scammer can perfectly replicate your child’s voice expressing genuine distress, the emotional manipulation becomes exponentially more powerful. Traditional security awareness training that teaches people to “verify before you trust” becomes significantly more challenging when the verification methods themselves can be compromised.

For organizations, the threat is equally severe. Help desk calls from “employees” requesting password resets, IT support requests from “executives” demanding immediate access, and vendor communications requesting urgent payment changes all become potential attack vectors when voice authentication can be spoofed with AI precision.

The $4.4 Million Question: Counting the Cost of Deepfake Breaches

While comprehensive data on AI voice cloning losses is still emerging, the broader cybersecurity landscape provides sobering context. According to IBM’s 2025 Cost of a Data Breach Report, the average cost of a data breach has reached a record high of $10.22 million for US companies, while the global average was $4.44 million.

The reputational damage may prove even more costly. Consumer trust, once lost, can take years to rebuild. According to recent research, 75% of consumers would stop shopping with a brand that suffered a security incident. For organizations that handle sensitive customer data or financial transactions, a successful deepfake-enabled breach could trigger regulatory investigations, class-action lawsuits, and permanent customer defection.

Beyond direct financial losses, there’s the operational disruption. Companies targeted by sophisticated social engineering attacks often must shut down systems, reset credentials enterprise-wide, and implement emergency security protocols that can paralyze operations for days or weeks.

Beyond Traditional Defenses: The Rise of Liveness-Based Authentication

Traditional security measures are proving inadequate against AI-powered impersonation attacks. Standard multifactor authentication, password policies, and even basic biometric systems can be circumvented when attackers can convincingly impersonate authorized users during help desk interactions.

At 1Kosmos, we’re addressing this challenge head-on. If somebody is using biometrics to authenticate into a system, be it face, be it voice, be it anything, if we have the ability to identify that it’s crossed a certain threshold of risk with relationship to it being a deepfake or fake or AI-generated content, we can raise those signals. Our systems then have the ability to determine the kind of access they need to provide or even prevent access altogether based on those signals.

The solution lies in what we call “liveness detection”: technology that can distinguish between live human interaction and AI-generated content. We’ve developed systems that combine multiple authentication factors, including live facial scanning compared against government-issued credentials, to create what I call a “risk threshold” that determines whether access should be granted.

We look at all the fraud signals from various factors to generate what we call a risk threshold that could tell our systems what that system should or should not do with that access request or authentication attempt. The way we have designed our platform is to ensure that all the signals that we get when a user authenticates into the system, be it video, be it live ID, be it selfie, be it a document scan, or be it voice, we analyze these signals comprehensively.
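The "risk threshold" idea described above can be sketched as a weighted aggregation of per-signal fraud scores. The signal names, weights, and cutoffs below are invented for illustration only; they are not 1Kosmos's actual model.

```python
# Toy sketch of a risk-threshold decision: combine per-signal
# fraud scores (0.0 = clean, 1.0 = certain fake) into one score,
# then map the score to an access decision. All numbers here are
# illustrative assumptions, not a real production policy.
SIGNAL_WEIGHTS = {
    "voice_liveness": 0.35,
    "face_liveness": 0.35,
    "document_scan": 0.20,
    "device_trust": 0.10,
}

def risk_score(signals: dict) -> float:
    """Weighted sum of the individual fraud signals."""
    return sum(SIGNAL_WEIGHTS[name] * score for name, score in signals.items())

def access_decision(signals: dict) -> str:
    """Map the aggregate score to allow / step-up / deny."""
    score = risk_score(signals)
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step-up"   # require an additional factor
    return "deny"

decision = access_decision({
    "voice_liveness": 0.9,   # e.g. likely synthetic voice
    "face_liveness": 0.8,
    "document_scan": 0.1,
    "device_trust": 0.2,
})
```

The point of aggregating signals this way is that no single spoofed factor decides the outcome: a convincing cloned voice still trips the threshold when the face-liveness and device signals disagree with it.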

This marks a shift away from reactive security measures that only respond after a breach has occurred. Instead, we focus on proactive security that works to stop threats before they happen.

Being proactive means putting systems in place that can detect voice attacks, deepfakes, and other forms of AI-generated impersonation early in the process. That kind of prevention is becoming essential as these attacks grow more advanced.

At 1Kosmos, we believe it’s our responsibility to help users and organizations recognize and block these threats before any damage is done. Our biometric authentication platform is built to detect signs of manipulation in real time and prevent unauthorized access based on those signals.

Building Deepfake-Resistant Organizations: The Path Forward

The window for preparation is rapidly closing. As AI voice cloning tools become more accessible and sophisticated, organizations must implement robust detection and prevention measures before they become targets.

The most effective defense combines technological solutions with updated security protocols. This includes implementing liveness-based biometric authentication for all system access, training staff to recognize potential deepfake scenarios, and establishing verification procedures that don’t rely solely on voice or video confirmation.

For individual protection, the advice is equally urgent: establish out-of-band verification methods with family members, be skeptical of urgent financial requests regardless of apparent source, and understand that if something seems emotionally manipulative, it very well might be.

The threat of AI voice cloning isn’t a distant future concern, it’s a present-day reality that’s about to become exponentially more dangerous. Organizations and individuals who take proactive steps now will be far better positioned to defend against the inevitable wave of sophisticated impersonation attacks heading our way.

We still have a long way to go, but companies are recognizing that threats like this are no longer a fairytale. They are very real. We believe that identity is the entry into any organization or into any IT assets. We need to be 100 times more careful and stringent about how we do deepfake checks.

Ready to protect your organization against AI voice cloning and deepfake attacks? Learn more about 1Kosmos’s liveness-based biometric authentication solutions and discover how proactive security measures can keep your business safe from the next generation of social engineering threats.

The post The 15-Second Voice Sample That Could Empty Your Bank Account: How AI Voice Cloning is Rewriting the Scammer’s Playbook appeared first on 1Kosmos.


IDnow

Chips in: Why it’s time to tap into NFC-enabled identity verification.

The European Union passes major regulation to allow the private sector to read eIDs via Near Field Communication. In a significant milestone in the evolution of Europe’s digital identity framework, the European Commission has adopted Regulation (EU) 2025/1208, which authorizes private sector companies to access key data stored in the chips of electronic identity documents […]
The European Union passes major regulation to allow the private sector to read eIDs via Near Field Communication.

In a significant milestone in the evolution of Europe’s digital identity framework, the European Commission has adopted Regulation (EU) 2025/1208, which authorizes private sector companies to access key data stored in the chips of electronic identity documents (eIDs), most notably the bearer’s portrait (DG2) via Near Field Communication (NFC) technology.

The regulation, which amends the original Commission Decision 2025/1218, came into force on July 10, 2025. 

With NFC, identity verification moves from passive image capture to active chip-based validation, fundamentally transforming how businesses can confirm identities across onboarding journeys.

What’s changed and why it matters.

Until recently, access to the chip on EU member states’ national ID cards and biometric passports was largely restricted to border authorities. With Regulation 2025/1208 now adopted, private entities can legally read the facial image stored in DG2, provided the holder consents, using NFC technology. 

This opens the door to more accurate, secure, fast, and automated identity proofing across industries like financial services, insurance, gaming, telecom, and mobility, without compromising on privacy or regulatory integrity.

Let’s break it down: What’s in the chip?

eMRTDs (electronic Machine-Readable Travel Documents), such as biometric passports and eID cards, store personal data in a secure chip embedded in the document. This structure follows the globally recognized ICAO 9303 standard, developed by the International Civil Aviation Organization (ICAO), which ensures interoperability and security across international borders. 

The chip’s contents are organized into Data Groups (DGs), each designed to hold specific types of information:

DG1: Machine-readable zone (MRZ) data (e.g. name, date of birth, document number) 

Private sector access? Yes.
Use case: Retrieve electronic data from the ID holder in a secure way.

DG2: Biometric facial image (high-resolution portrait of the bearer)

Private sector access? Yes.
Use case: Biometric match to selfie.

DG3: Fingerprint data (accessible only to border and police authorities)

Private sector access? No.
Use case: Reserved for border and police use only.
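A note on DG1: every MRZ field carries a check digit computed with ICAO 9303’s 7-3-1 weighting, which readers use to catch OCR and transmission errors before touching the chip. A minimal sketch of that computation (the example values in the usage note are specimen data from the standard, not a real document):

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7, 3, 1 repeat across the field;
    digits count at face value, letters A-Z as 10-35, filler '<' as 0."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field.upper()):
        if ch.isdigit():
            value = int(ch)
        elif "A" <= ch <= "Z":
            value = ord(ch) - ord("A") + 10
        elif ch == "<":
            value = 0
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]
    return total % 10
```

For the specimen document number "L898902C3" used in the standard, this yields check digit 6; for the specimen birth date "740812", it yields 2.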

By reading the encrypted data directly from the chip, IDnow’s NFC-enabled verification confirms the authenticity and integrity of the document and the portrait photo, ensuring the person presenting it is its rightful holder.

Meet strict regulatory frameworks with confidence.

With NFC-enabled verification, businesses can: 

Validate ID authenticity through cryptographic signatures
Verify the portrait from the chip (DG2) against a selfie or video for liveness detection
Read DG1 and DG2 simultaneously for cross-checking document and biometric consistency
Reduce false positives and manual review, accelerating onboarding
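The signature check works roughly like passive authentication in ICAO 9303: the chip’s Document Security Object (SOD) carries an issuer-signed hash for every data group, so a verifier can detect any tampering with DG1 or DG2. A simplified sketch, assuming the SOD hashes have already been extracted and the signature over the SOD itself has been verified:

```python
import hashlib

def verify_data_groups(dg_contents: dict[int, bytes],
                       sod_hashes: dict[int, bytes],
                       algorithm: str = "sha256") -> bool:
    """Recompute each data group's hash and compare it with the hash
    listed in the SOD. Assumes the signature over the SOD has already
    been verified against the issuing state's certificate chain."""
    for dg_number, content in dg_contents.items():
        expected = sod_hashes.get(dg_number)
        if expected is None:
            return False  # data group not covered by the SOD
        if hashlib.new(algorithm, content).digest() != expected:
            return False  # content altered after issuance
    return True
```

Because the hashes are signed by the issuing authority, a forger cannot change the portrait in DG2 without invalidating the check.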

NFC-based identity verification isn’t just technically superior, it’s also compliant with both existing and emerging regulations: 

National frameworks: Compliant with current schemes such as GWG (Germany), FMA (Austria), ANSSI (France), and others across the EU.
eIDAS 2.0: The EU’s revised digital identity regulation, requiring high-assurance identity proofing methods for Qualified Trust Service Providers (QTSPs), banks, and government services.
Anti-Money Laundering Regulation (AMLR): Recently approved and set to replace national laws by mid-2027. NFC supports fully automated, risk-based identity proofing in line with Know Your Customer (KYC) and Customer Due Diligence (CDD) expectations.

So, whether you’re serving regulated industries or managing onboarding at scale, NFC verification gives you a future-proof foundation.

The IDnow NFC advantage.

IDnow’s NFC identity verification is already live and battle-tested across Europe, with tens of thousands of verifications processed every month. 

Available globally: All new passports, residence permits, and national ID cards in the EU include NFC chips. Globally, most passports include this chip by default. In some regions, driving licences and other documents are also adopting chip-based formats.
Available for mobile app and mobile SDK integrations: Developer-friendly SDKs and APIs make it simple to integrate NFC into existing onboarding flows across platforms.
UX / speed: Onboarding can be completed in under one minute. Just tap the document. No manual data entry needed.
Conversion rate: NFC onboarding shows higher success rates compared to optical scanning, thanks to improved read accuracy and fewer drop-offs.
Automation rate: NFC verifications are fully automated, with no need for manual agent review. This accelerates onboarding and scales efficiently.
Security and data integrity: Cryptographic signature checks detect cloned or forged chips. Data is read with 100% accuracy, straight from the issuing authority.

NFC = no barriers to implementation.

IDnow’s NFC-enabled identity verification is available and fully compliant today. With Regulation (EU) 2025/1208 adopted, the private sector now has a clear legal foundation to access biometric data, such as the facial image stored on the chip of eID documents, with the user’s consent. This enables immediate integration of NFC verification into onboarding journeys.

Forward-thinking organizations that move early can gain a competitive edge by offering faster, more secure, and regulation-ready identity verification. Don’t wait for market-wide adoption; now is the time to optimize your onboarding flows and lead the way.

How it works: NFC-powered identity proofing, step by step.

So, what happens when a user taps their ID document with their phone? NFC-based identity verification transforms onboarding from a manual, error-prone process into a fast, secure, and fully automated journey. By reading encrypted data directly from the chip, rather than relying on photos or scans, IDnow ensures every verification is accurate, fraud-resistant, and compliant with the latest regulations. 

1. Static capture: Capture the document’s MRZ to generate the key and decrypt the chip.
2. NFC readout: Instantly read secure data from the chip.
3. Selfie check: Capture a selfie for biometric face comparison and liveness detection.
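The "generate the key" part works roughly via Basic Access Control (BAC) in ICAO 9303: the document number, date of birth, and expiry date from the MRZ (each with its check digit) seed the keys that unlock the chip, so only a reader that has seen the document can access it. A minimal sketch of that derivation, using specimen MRZ values consistent with the standard (the DES parity-bit adjustment and the newer PACE protocol are omitted for brevity):

```python
import hashlib

def bac_key_seed(doc_number: str, birth_date: str, expiry_date: str) -> bytes:
    """Derive the 16-byte BAC key seed from MRZ fields (ICAO 9303 Part 11).
    Each argument must already include its trailing check digit."""
    mrz_information = (doc_number + birth_date + expiry_date).encode("ascii")
    return hashlib.sha1(mrz_information).digest()[:16]

def derive_key(seed: bytes, counter: int) -> bytes:
    """SHA-1 based key derivation: counter 1 yields encryption key material,
    counter 2 yields MAC key material (DES parity adjustment omitted)."""
    return hashlib.sha1(seed + counter.to_bytes(4, "big")).digest()[:16]

# Specimen MRZ values (document number, birth date, expiry date,
# each with its check digit) -- not a real document.
seed = bac_key_seed("L898902C<3", "6908061", "9406236")
k_enc = derive_key(seed, 1)  # session encryption key material
k_mac = derive_key(seed, 2)  # session MAC key material
```

This is why the optical MRZ capture comes first: without those three fields, the chip simply refuses to release DG1 or DG2.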

Here’s how NFC-enabled verification with IDnow streamlines the entire process, from document tap to trusted onboarding. See just how seamless and user-friendly NFC onboarding can be.


Ready to activate NFC?

With NFC, identity verification becomes not only faster, but foundationally stronger. The future of onboarding is already in your customers’ pockets. Let IDnow help you unlock it securely, seamlessly, and in full compliance with Europe’s most advanced digital identity framework. 

Talk to your Account Manager, Customer Success Manager or contact us to learn how NFC-enabled identity verification can unlock valuable business opportunities.

By


Suzy Thomas
Customer and Product Marketing Lead
Connect with Suzy on LinkedIn


Dock

EUDI Wallet Hype vs. Reality

At our recent live event about the EUDI wallet, Esther Makaay (VP of Digital Identity at Signicat) shared an insightful slide about the gap between expectations and reality.



Ockto

Data-driven KYC automation: focus on the signals that matter


“There are banks where 2,500 people work on CDD alone.”

– Robby Philips, Deloitte

That statement from the Data Sharing Podcast illustrates how many financial institutions handle their KYC obligations today. Anti-money-laundering policy is stricter than ever, but the means used to carry it out (people, spreadsheets, outdated systems) have often not kept pace.


Data-driven KYC automation: fraud, customer data, and compliance

In this episode of the Data Sharing Podcast, host Caressa Kuk talks with Robby Philips (Deloitte) and Gert-Jan van Dijke (Ockto) about the transition from bulk KYC to smart, data-driven automation. How can banks and financial service providers comply with stricter regulation while focusing on what really matters?



FastID

Fastly is easier than ever to use with our Model Context Protocol (MCP) Server

Manage Fastly with ease using the new open-source Model Context Protocol (MCP) Server. Integrate with AI assistants for conversational control of your services.

Friday, 01. August 2025

Recognito Vision

How Passport Recognition Is Changing the Game in Digital Identity


Let’s imagine you’re checking into a hotel after a long flight. You’re tired, hungry, and just want your key. But instead of fumbling with paperwork, the receptionist simply scans your passport and boom, your details are verified, and you’re all set. No typing. No waiting. No drama.

That seamless experience? It’s powered by passport recognition, a rapidly evolving technology that’s rewriting how we verify identity in real time.

So what’s under the hood of passport recognition, and why does it matter for businesses, governments, and consumers alike? Let’s explore.

 

What Is Passport Recognition?

Passport recognition refers to the automated process of scanning and extracting information from a passport’s data page using computer vision and AI. It’s often part of a broader ID document recognition SDK, which can process various identity documents beyond passports, such as driver’s licenses or national IDs. It allows machines to identify and authenticate passports with high accuracy and often in seconds.

The core components include:

Optical Character Recognition (OCR) to extract name, passport number, expiry date, etc.

Machine-Readable Zone (MRZ) decoding, where standardized passport data is stored.

Document validation using holograms, microprint, and UV patterns.

Face matching, comparing the passport photo to a live image or selfie.

Think of it like giving computers the ability to read and verify your passport like a border officer, only faster and with fewer errors.

 

Why Traditional Passport Checks Don’t Cut It Anymore

Manual passport checks are vulnerable to:

Human error (typos, missed fakes)

Delays at airports, hotels, or secure facilities

Fraud through forged or tampered documents

In a world where speed and security matter equally, automated passport recognition offers a compelling alternative.

Quick stat: According to the International Air Transport Association (IATA), 73% of travelers prefer to use biometrics instead of passports for identity verification. That number is growing fast.

 

How Passport Recognition Technology Works

Let’s lift the hood and see how the magic happens.

Step-by-Step Passport Recognition Process:

1. Image Capture: A camera or mobile device captures the passport’s data page.
2. MRZ Detection: The system locates and isolates the machine-readable zone.
3. OCR Extraction: Characters from the MRZ and other fields are read via OCR.
4. Data Validation: The system checks for authenticity: font, format, expiry date, etc.
5. Face Match (Optional): Compares the passport photo with a live selfie or stored biometric.
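The MRZ detection and OCR steps ultimately come down to locating the two 44-character MRZ lines on the data page and slicing fixed-position fields out of them. A simplified sketch for the second TD3 line, with field positions per ICAO 9303 (real SDKs additionally validate fonts, check digits, composite checks, and image quality; the parsing function here is illustrative, not a specific vendor’s API):

```python
def parse_td3_line2(line: str) -> dict:
    """Parse the second MRZ line of a TD3 passport (exactly 44 chars).
    Field positions follow ICAO 9303; '<' is the filler character."""
    if len(line) != 44:
        raise ValueError("TD3 MRZ lines are exactly 44 characters")
    return {
        "document_number": line[0:9].rstrip("<"),
        "document_number_check": line[9],
        "nationality": line[10:13],
        "birth_date": line[13:19],        # YYMMDD
        "birth_date_check": line[19],
        "sex": line[20],
        "expiry_date": line[21:27],       # YYMMDD
        "expiry_date_check": line[27],
    }
```

Running it on the specimen MRZ line from the standard, "L898902C36UTO7408122F1204159ZE184226B<<<<<10", recovers document number "L898902C3", nationality "UTO", and birth date "740812".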

With today’s advanced AI, the whole recognition flow wraps up in under three seconds.

 

Passport Recognition vs Traditional OCR: What’s the Difference?

Accuracy: traditional OCR 85–90%; passport recognition 97–99%
Security features: traditional OCR has none; passport recognition validates holograms, UV, and microprinting
Facial biometrics: not supported by traditional OCR; supported by passport recognition
MRZ decoding: not supported by traditional OCR; supported by passport recognition
Use cases: traditional OCR for document scanning; passport recognition for identity verification

Passport recognition is not just OCR on steroids; it’s an entirely smarter approach built for identity assurance.

 

Industries Benefiting from Passport Recognition

From airports to Airbnb, industries across the board are embedding this technology into their platforms. Here’s where it’s making waves:

Travel & Border Control

Automated border gates (e-gates)

Self-check-in kiosks

Immigration pre-screening

“With passport recognition, airports can reduce processing time by up to 40% during peak hours.” SITA Air Transport IT Insights

 

Healthcare & Insurance

ID verification during telehealth appointments

Onboarding for digital health insurance

Prescription fraud prevention

Banking & Fintech

KYC during account registration

Cross-border remittance validation

Preventing identity fraud in loan applications

Hospitality & Rentals

Faster hotel check-ins

ID verification for short-term rental platforms (e.g., Airbnb)

VIP loyalty programs linked to passport data

Education & Exams

Verifying student ID for international admissions

Securing online proctoring systems

The bottom line? If you deal with real humans and legal identity documents, passport recognition can tighten your security and smooth your UX.

The Power of Passport Recognition + Face Verification

On its own, passport recognition is impressive. But pair it with face verification, and the security multiplies.

Here’s how it works:

User scans their passport using a smartphone or webcam.

System extracts data and photo from the passport.

User takes a selfie or a live video.

The system uses AI to verify the passport photo against a real-time selfie.

Verification result is returned within seconds.
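The face-comparison step above is typically implemented by extracting a fixed-length embedding vector from each face image and comparing the two with cosine similarity against a tuned threshold. A minimal sketch, assuming the embeddings come from some face-recognition model upstream (the 0.6 threshold and the plain-list vectors are illustrative assumptions, not any vendor’s API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(passport_embedding: list[float],
                selfie_embedding: list[float],
                threshold: float = 0.6) -> bool:
    """Accept the pair when similarity clears the threshold. In production
    the threshold is tuned against false-accept / false-reject targets
    on labelled data rather than fixed at an arbitrary value."""
    return cosine_similarity(passport_embedding, selfie_embedding) >= threshold
```

The threshold is the key design knob: raising it cuts impostor acceptance at the cost of rejecting more genuine users, which is why vendors calibrate it per use case.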

This combo stops impostors in their tracks, including those armed with stolen documents or digital fakes.

Bonus Tip: Want higher fraud resistance? Add ID document liveness detection SDK capabilities to confirm the person behind the document is physically present, not a still image or a video spoof.

 

Advantages of Using Passport Recognition in 2025

Let’s get specific about the benefits:

Speed

Verifies documents in under 3 seconds. Perfect for high-traffic systems and instant KYC.

Accuracy

Uses deep learning and AI to extract data with up to 99% accuracy even from worn or wrinkled documents.

Compliance

Helps meet global identity regulations like:

GDPR (Europe)

eIDAS (EU)

KYC/AML (Global finance)

HIPAA (U.S. Healthcare)

User Experience

No long forms. No typos. No delays.

Just scan, verify, done.

 

Real-World Case Study: How Banks Use Passport Recognition

A leading European neobank integrated passport recognition into its digital onboarding.

Before:

Manual ID verification took 6–12 hours

20% drop-off due to long wait times

After:

Verification in under 60 seconds

Conversion rate improved by 37%

Fraud attempts decreased by 45%

Talk about ROI.

 

Challenges to Watch Out For

While powerful, passport recognition isn’t without its hurdles.

Low-Quality Images

Crinkled pages, bad lighting, or glare can cause recognition failures. Always guide users to scan in good light.

Document Forgery

Some fake passports can bypass simple OCR-only systems. That’s why layered security with AI, facial matching, and liveness detection is a must.

Device Limitations

Older mobile devices may struggle with camera quality. Make sure your SDK supports fallback options or minimum device specs.

Choosing the Right Passport Recognition SDK

Not all tools are created equal. Here’s what to look for in a solid passport recognition solution:

MRZ extraction & validation

Face matching capability

Cross-platform support (iOS, Android, Web)

Real-time results (< 3 seconds)

Built-in compliance (GDPR, KYC, AML)

Developer documentation & SDK support

Some top players also include open-source demo tools, sample UIs, and REST APIs to make integration smooth.

 

Wrapping It All Up

Passport recognition is no longer futuristic tech; it’s a real solution already reshaping how we verify identity across apps, borders, and industries. Whether you’re streamlining travel check-ins, onboarding banking customers, or securing virtual healthcare visits, this technology helps eliminate friction and boost security. It ensures you’re not just looking at a document, you’re verifying the person behind it.

And if you’re looking for a trusted provider to help you integrate reliable passport recognition into your systems, Recognito, a top performer in NIST FRVT, is built for that mission. With AI-driven performance, lightning-fast processing, and support trusted across industries, it’s a solid step forward in the future of identity verification.

Curious how this can work for your platform? Try our passport verification feature and experience firsthand how secure and fast digital ID checks can be. You can also have a look at Recognito’s GitHub.


1Kosmos BlockID

Driving Change Together: 1Kosmos Sponsors Bell Canada’s Golf Tournament Supporting Kids Help Phone


At 1Kosmos, we believe that technology, when put to good use, has the power to transform lives and communities. That’s why, when Bell Canada, one of our valued customers and a longstanding champion of youth mental health, invited us to join their annual fundraising golf tournament for Kids Help Phone, we enthusiastically accepted.

More Than a Game: An Event with Impact

Every year, Bell Canada brings together the brightest minds from across IT, InfoSec, Fraud, and business leadership for an event that’s about so much more than golf. This gathering is the single most critical fundraising event for Kids Help Phone, the national organization at the forefront of supporting youth in crisis. As the only 24/7 e-mental health service for young people in Canada, Kids Help Phone provides an essential lifeline to children and teens, many of whom have no other support system.

A Staggering Statistic and a Shared Mission

A representative from Kids Help Phone shared a striking figure at this year’s event: 75% of children and youth reaching out to the helpline disclose things they’ve never told anyone else. For some, Kids Help Phone is their only resource in moments of vulnerability, fear, or confusion. This statistic underscores the vital importance of the organization’s work, and why sustained support matters.

Partnering for Good

For 1Kosmos, participation in this event went far beyond sponsorship. It represented an opportunity to walk alongside Bell Canada in championing cybersecurity, mental wellness, and community care, pillars that are deeply aligned with our own values. It was inspiring to connect with leaders and teams from across the technology spectrum, all united by the goal of uplifting youth and shaping a safer, more supportive future.

Thank You, Bell Canada and Kids Help Phone

We extend our deepest thanks to Bell Canada for their vision and leadership in supporting Kids Help Phone, and to the tireless staff and volunteers who make a real difference in the lives of young Canadians every day.

At 1Kosmos, we look forward to continuing our partnership and our shared commitment to protect, empower, and uplift, on and off the golf course.

Learn More & Keep Kids Safe Online

To discover more about the essential work of Kids Help Phone or to get involved, visit their website or reach out to their team. We’re grateful to all our partners and peers who joined us on the green in support of children in need.

At 1Kosmos, our commitment to protecting and empowering the next generation extends beyond secure identity solutions. That’s why we created our “1Kids” video series, a fun, educational program designed to teach kids the basics of online safety and cybersecurity. From spotting phishing scams to protecting personal information, these episodes help kids and families navigate the digital world confidently and securely.

Check out the 1Kids video series to help the young people in your life stay safe online, and join us as we continue building a safer, more supportive future for all children.

The post Driving Change Together: 1Kosmos Sponsors Bell Canada’s Golf Tournament Supporting Kids Help Phone appeared first on 1Kosmos.


Tokeny Solutions

Apex Digital 3.0 is Live – The Future of Finance Starts Now

The post Apex Digital 3.0 is Live – The Future of Finance Starts Now appeared first on Tokeny.
July 2025

To truly scale tokenisation, we need a global force at the heart of capital markets. A player with the reach, trust, and operational strength to be able to bring all stakeholders out of fragmented and manual systems into the future of on-chain finance.

That player is Apex Group.

Yesterday, Apex Group launched Apex Digital 3.0, the digital infrastructure that seamlessly bridges traditional finance with on-chain finance at scale.

This is a turning point for our industry. Apex Digital 3.0 is not a product, it’s a movement to transform global finance, redefine distribution, and unlock liquidity. For the first time, a global asset servicer now offers blockchain-powered infrastructure for tokenisation and stablecoins, covering everything from regulatory setup, structuring to issuance, compliance, servicing, and global distribution.

In today’s market, launching a tokenised product is slow and fragmented. Issuers must juggle multiple providers, face regulatory complexity, and often wait months, only to end up with limited liquidity and poor distribution. It’s impossible to scale that way.

With Apex Digital 3.0, we’ve changed the game. Everything is integrated. Tokenising existing assets or natively issuing ones on-chain can be done within a few weeks. What’s more, Apex Group clients, who already entrust them to operate over $3.5 trillion in assets, can now move on-chain seamlessly, without changing the tools or workflows they know.

To them, it simply feels like an upgrade, and they will gain new capabilities. It includes 24/7 subscriptions, redemptions, and transfers; access to multiple secondary trading venues and borrowing or lending in a real-time DeFi application. No disruption. Just a next-generation investor experience.

What truly sets Apex Digital 3.0 apart is its ability to bring all stakeholders together, including issuers, investors, allocators, and distributors. Tokenised assets can connect directly with existing investor pools, including through multiple distribution channels and physical events like Apex Invest. This dramatically enhances both liquidity and distribution, solving one of the most critical and long-missing pieces in the industry.

Tokenisation finally works, at scale. Tokeny’s technology is the foundation of this transformation. We’re delivering blockchain capability that integrates across the entire value chain and fund lifecycle, enhancing the experience and value for all of Apex Group’s clients.

Daniel and Luc are proud to be appointed to lead Apex Digital 3.0, powered by the full strength of the Tokeny team. Our mission becomes bigger: To transform financial markets and unlock access for all. Just as Microsoft put a computer on every desk, we’re building the digital infrastructure to put private assets in every portfolio, bringing the future of finance to everyone.

This marks a brand new chapter, and we’re proud to be writing it with you!

Tokeny Spotlight

Apex Digital 3.0

Apex Group announces the  launch of Apex Digital 3.0 to bridge traditional and onchain finance at scale.

Read More

DAW New York

There won’t be one stablecoin to rule them all. There will be a plethora of stablecoins.

Read More

GENIUS Act Passes

The GENIUS Act is now law, marking one of the most significant moments in the history of digital assets.

Read More

Tokeny Team

Learn about Héctor Castro Mateos, who has been at the forefront of Tokeny’s QA team.

Read More

Tokeny on ERC3643 Podcast

Our CEO joins the ERC3643 podcast to talk about the beginnings of Tokeny and the ERC-3643 standard.

Read More

RWA Summit Cannes

Our CCO joins the panel "Institutional strategies for scaling tokenised assets" alongside industry leaders.

Read More

Tokeny Events

Spark 25 by Fireblocks
September 8th-10th, 2025 | 🇪🇸 Spain

Register Now

Apex Invest Global Event Series 2025
September 22nd-23rd, 2025 | 🇨🇭 Switzerland

Register Now

Tokeny Team Building 
September 17th-19th, 2025 | 🇪🇸 Spain

ERC3643 Association Recap

ERC-3643 Presented at the SEC Crypto Task Force

The Association’s President, Dennis O’Connell, presented ERC-3643 to the SEC Crypto Task Force, alongside leaders from Chainlink Labs, Enterprise Ethereum Alliance, LF Decentralized Trust, and Etherealize.

Read what has been discussed here

Chainlink Launches Automated Compliance Engine in Collaboration With Apex Group, GLEIF, and ERC3643 Association

The ERC3643 Association, Chainlink Labs, Apex Group Ltd, and Global Legal Entity Identifier Foundation (GLEIF) collaborate to launch an automated compliance engine compatible with ERC-3643.

Read the full press release here

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.

Previous Newsletters

August 2025: SkyBridge Tokenises $300m Hedge Funds with Tokeny and Apex Group
July 2025: Apex Digital 3.0 is Live – The Future of Finance Starts Now
June 2025: Real Estate Tokenization Takes Off in Dubai
April 2025: Is the UAE Taking the Lead in the Tokenization Race?



uquodo

Securing Digital Identity: The Impact of Face Verification and Liveness Detection

The post Securing Digital Identity: The Impact of Face Verification and Liveness Detection appeared first on uqudo.

Aergo

[Aergo Talks #19] Token, Roadmap, and Exchange

1. Can I have confidence to hold Aergo?

Confidence should be grounded in understanding the current direction and internal activity of the project. Aergo has undergone major changes and is now transitioning into HPP (House Party Protocol), a more modern and AI-aligned infrastructure. A Living Roadmap is actively maintained, offering transparent updates about progress, team deliverables, and launch timelines. Investors are encouraged to review the roadmap and evaluate their belief in the foundation’s vision.

2. Why migrate from Aergo to HPP?

A major community governance vote in March 2025 (AIP-21) resulted in overwhelming support for migrating Aergo into HPP, representing a new direction. The goal of HPP is to modernize the infrastructure, introduce AI-native functionality, and align with current market needs, particularly in areas such as modular architecture, agent-based execution, and scalability. HPP maintains the legacy of Aergo while positioning the ecosystem for growth in the next 3–5 years. The community vote saw near-unanimous support and had higher participation than many larger market-cap projects.

3. How will the token migration from Aergo to HPP work on exchanges?

The migration process is exchange-dependent. Each exchange may handle the conversion differently (e.g., automatic swap, opt-in, withdrawal only). The team is working closely with major partners and will provide detailed guidance as each exchange finalizes its plan. Community members should stay tuned on official channels (Telegram, X) for updates related to their preferred platforms.

4. When is HPP mainnet launching?

The public mainnet is targeted for Q3 2025, roughly three months earlier than initially projected in the March AIP. The testnet has already been completed, and the private mainnet is now live, with developer tools being actively rolled out. The team is deliberately not announcing an exact date, to maintain flexibility and ensure quality: rushed deadlines can cause avoidable bugs, and precise dates can invite price manipulation through derivatives trading or "sell-the-news" behavior.

5. Why keep the launch date private?

HPP is a digital product, not constrained by logistics like physical goods or theater releases. A flexible launch date allows for rigorous QA and performance testing before going live. Avoiding a public date also prevents speculative volatility, as many traders use fixed timelines for leverage and price swings. The team prioritizes quality and stability over hype.

6. When is Booost launching on HPP?

Booost is already deployed on testnet and is currently running on the private mainnet. While technically live, its public launch is imminent, pending final readiness checks. Booost’s integration is a key milestone in showcasing HPP’s support for identity verification and anti-deepfake primitives.

7. What is VaasBlock building on HPP?

VaasBlock is building W3DB.io, a Web3-native intelligence and verification platform. Think of it as a cross between IMDB, Crunchbase, and CoinMarketCap, focusing on projects and individuals. Community members can contribute by verifying project and team data, training AI models, and tagging and labeling datasets; in return, they can earn token rewards. HPP was selected as the base layer due to its multi-chain design, existing exchange access, and built-in support for AI-integrated workflows.

8. What’s the "next big news" that could make Aergo pop?

The question assumes a correlation between announcements and price, but this is rare unless connected to major macro headlines or figures. That said, the HPP mainnet launch is arguably the most significant upcoming milestone in terms of long-term fundamentals. However, no one can guarantee whether it will result in a price "pop", and speculation shouldn’t drive strategic decisions.

9. Why is engagement low? What about marketing?

A community member voiced concerns about low social media engagement and marketing visibility. The team responded with the following key points:
1) HPP does not engage in artificial engagement tactics like bots or paid shill armies.
2) Most projects with unusually high engagement have manipulated metrics, which don’t reflect real community or product traction.
3) VaasBlock uses a marketing effectiveness score that evaluates campaigns based on their actual impact (price, volume, etc.) rather than vanity metrics. According to that model, Aergo/HPP ranks in the top 20% of current Web3 projects.

The team is focused on building lasting value, not temporary hype.

[Aergo Talks #19] Token, Roadmap, and Exchange was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


Safle Wallet

Meet Agentic AI


Imagine having a tireless assistant who never sleeps, never asks for a raise, and actually gets the complicated stuff done right, every time.
Meet Agentic AI, the new star in financial services that’s flipping the script on how banks, insurers, and investment firms run their show.

What’s Agentic AI Anyway?

Think of agentic AI as AI on steroids 💪: not just tools that follow commands, but smart agents that act autonomously, make decisions, learn on the fly, and adapt to complex environments. Unlike your typical chatbot that parrots scripted answers, agentic AI can juggle multiple tasks, solve problems, and even decide the next best step in a process.

It’s like upgrading from a calculator to a personal financial advisor who never takes a coffee break. 😎

Why Financial Services Are Hooked 🪝

The finance world is a labyrinth of rules, mountains of paperwork, and a 24/7 demand for speed and precision. Enter Agentic AI, which has quickly become the MVP by:

Supercharging customer service: NVIDIA reports a jump from 25% to 60% in businesses using generative AI chatbots over just one year. These agents handle everything from dispute resolutions to updating your “Know Your Customer” details. They reduce human error and free up employees to tackle the trickier stuff that needs a human touch.

Cracking down on fraud: AI agents don’t just watch transactions. They hunt for suspicious activity in real time, alert compliance teams, and can even freeze accounts instantly. With cybercrime on the rise, these digital watchdogs are indispensable.

Speeding up digital payments and banking: Whether it’s bill pay or cash flow management, agentic AI ensures everything ticks along smoothly, staying compliant with complex regulations and cutting costs with efficient audit trails.

Decoding mountains of data: Financial docs, market reports, customer feedback: it’s a jungle of unstructured text. Agentic AI digests all this mess, highlights insights, and even suggests smart investment moves.

🏋️‍♀️ Real-Life AI Agents Doing the Heavy Lifting 🏋️‍♂️

BlackRock’s Aladdin: This platform uses AI to optimize everything from risk management to trading. It’s the behind-the-scenes genius powering big money moves worldwide.

Bunq’s Finn: An in-app chatbot that handles over 90% of user support tickets. Imagine a super-efficient financial buddy who knows your preferences and sorts your queries instantly. 🫠

Capital One’s Chat Concierge: Makes buying a car smoother by offering real-time info and guidance. No pushy salesman, just smart, instant help.

The Numbers Don’t Lie 📰

According to NVIDIA’s State of AI in Financial Services report, over 90% of financial firms see a positive revenue impact after implementing AI. That’s not just hype; it’s proof that agentic AI is more than a shiny tech trend. It’s a powerful revenue booster and risk reducer.

Why Should You Care? 🤷‍♀️

Agentic AI isn’t just reshaping finance; it’s a window into the future of work, where human ingenuity teams up with relentless, data-crunching machines.

The question is: Are you ready to work with your AI co-pilot, or will you get left behind while the bots take over?

Stay curious, keep questioning, and remember: in the world of AI, staying ahead is the best strategy.

Catch you on the next byte of brilliance,
Team Safle 🌟


ComplyCube

The CryptoCubed Newsletter: July Edition

July has seen huge strides in the crypto world as regulators have clamped down on the sector. From Algeria's complete crypto ban to the USA's strategic plan to be the crypto capital of the world, read on to explore key changes.

The post The CryptoCubed Newsletter: July Edition first appeared on ComplyCube.

Tuesday, 22. April 2025

Radiant Logic

Modernizing Your Legacy Identity Infrastructure is Finally Possible

Unmanaged service accounts are a hidden threat to IT security, creating vulnerabilities that cybercriminals exploit—learn how to identify, clean up, and secure these overlooked accounts to protect your organization. The post Modernizing Your Legacy Identity Infrastructure is Finally Possible appeared first on Radiant Logic.

Indicio

Implement digital identity in Europe with SD-JWT VCs from Indicio Proven

Indicio’s flagship product supports the issuance, holding, and verification of SD-JWT VCs, Europe’s choice of Verifiable Credential for privacy-preserving digital identity in the European Union Digital Wallet (EUDI).

By: Trevor Butterworth

By 2026, the European Union has mandated that every member country provide its citizens, residents, and businesses with a secure, interoperable digital wallet so that they can use Verifiable Credentials to prove who they are and share information across borders, platforms, and services, online and offline, across all EU member states.

For its Verifiable Credential format, the EU will use SD-JWT VCs.

What is SD-JWT VC?

The acronym stands for Selective Disclosure JSON Web Token Verifiable Credential.

A JSON Web Token is a standardized way to share digitally-signed information between parties. Selective disclosure means that this information can be shared selectively, thereby enabling a party to restrict the data they share to the specifics needed for a given purpose. Verifiable credential means the JWT includes the specific data formats as well as validation and processing rules required to express Verifiable Credentials.

Why is this important?

Simply put, it enables data privacy and increases security.

The easiest way to understand this is by looking at how we share information to access resources using physical credentials, like a driver’s license.

To verify age for purchasing an age-restricted item, a person would present a physical ID that contained their date of birth, typically a driver’s license, passport, or national ID card. But in presenting that ID, the person verifying it would be able to see all the personal data on the ID.

In an age of identity theft, this is no longer tenable. But it is also unacceptable in terms of data privacy. Selective disclosure is non-negotiable for the European Union. EU data privacy law — GDPR — requires organizations to minimize the data they collect, process, and store to fulfill a specific purpose; selective disclosure makes minimization easy.
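To make the mechanism concrete, here is a minimal Python sketch of how SD-JWT-style selective disclosure works: each claim is salted and hashed, only the digests are embedded in the signed payload, and the holder later reveals just the disclosures they choose. This is purely illustrative (the claim names and values are invented, and real SD-JWTs add a signature over the payload); it is not Indicio Proven's implementation.

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    # Unpadded base64url, as used throughout SD-JWT
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim: str, value) -> str:
    # A disclosure is base64url(JSON [salt, claim name, claim value])
    salt = b64url(secrets.token_bytes(16))
    return b64url(json.dumps([salt, claim, value]).encode())

def digest(disclosure: str) -> str:
    return b64url(hashlib.sha256(disclosure.encode()).digest())

# Issuer: hash each disclosure and embed only the digests in the signed payload
claims = {"given_name": "Erika", "birthdate": "1984-01-26", "nationality": "DE"}
disclosures = {k: make_disclosure(k, v) for k, v in claims.items()}
signed_payload = {
    "_sd": sorted(digest(d) for d in disclosures.values()),
    "_sd_alg": "sha-256",
}

# Holder: reveal only the birthdate disclosure, e.g. to prove age
presented = [disclosures["birthdate"]]

# Verifier: recompute each digest and check it is covered by the signed payload
for d in presented:
    assert digest(d) in signed_payload["_sd"]
    padded = d + "=" * (-len(d) % 4)
    salt, name, value = json.loads(base64.urlsafe_b64decode(padded))
    print(name, "=", value)
```

The verifier never sees the undisclosed claims, only their salted digests, which is what lets a GDPR-minded relying party collect nothing beyond the claim it actually needs.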

How to issue and verify SD-JWT Verifiable Credentials with Indicio Proven

Access Indicio Proven, either using its interface or its API, and connect your existing IAM, CRM, HR software, identity provider, API, or database. Select the SD-JWT VC from the menu and start issuing credentials. 

Indicio Proven’s verifier software automatically accepts SD-JWT VCs, so you have immediate interoperability with all EU digital wallets.

Why go with Indicio Proven for your EU credential solution?

Indicio was the first to demonstrate interoperability between SD-JWT VC and AnonCreds credentials in a single workflow. 

We’ll shortly enable similar interoperability with mdoc/mDL and W3C VC credentials.  

Simply put, if you want to interoperate globally, Proven has proved itself in the real world.

There are two other important reasons for choosing Indicio Proven. 

It enables you to scale rapidly to any level of issuance or verification. 

It has the most powerful governance solution in the marketplace, allowing you to easily orchestrate credential roles, trust lists, and workflows in hierarchical ways. 

And because governance is cached as a machine-readable file for each credential issuer, holder, and verifier, offline verification is possible using BLE, NFC, or Wi-Fi Aware.

See how EUDI and SD-JWT can work in your business. Contact Indicio or book a demo to get started.

###

The post Implement digital identity in Europe with SD-JWT VCs from Indicio Proven appeared first on Indicio.


HYPR

How To Prevent Candidate Fraud with HR Identity Verification

The Rising Threat of Candidate Fraud

Remote work has drastically changed hiring, unintentionally creating new opportunities for fraud. Reports indicate a significant jump in fraudulent activity, with some analyses suggesting one in six applicants for remote roles show signs of fraud. Experts project that by 2028, AI-generated job applicant profiles could account for one in four candidates globally. This surge in remote onboarding fraud poses serious threats, including financial losses, security risks, and legal issues. To combat this, strong identity verification is now essential for modern fraud prevention.


Types of Candidate Fraud: Breaking Down the Tactics

Fraudsters use various clever methods to get into organizations. Understanding these tactics is key to building effective defenses.

Fake Qualifications Fraud: This involves candidates fabricating resumes, exaggerating qualifications, or providing false references to secure a job they're not fit for. This can include inventing work histories or boosting grades.

Synthetic Identity Fraud: A more advanced technique, where fraudsters combine real and made-up personal information to create a new, seemingly legitimate identity. This makes detection harder for standard background checks.

Deepfake-based Impersonation: Leveraging AI, fraudsters generate realistic video and voice simulations to impersonate real people during video interviews, making it very difficult to tell if the person is genuine.

Stolen Identity: Criminals steal legitimate job seekers' personal data and use it to apply for jobs and pass checks. This victimizes the individual whose identity is stolen and risks the hiring company onboarding a malicious person.

Understanding the Business Implications of Candidate Fraud

The fallout from candidate fraud goes far beyond just a bad hire, affecting many different parts of a business.

Financial Losses: Bringing on unqualified individuals can lead to lower productivity, mistakes, and potential financial liabilities due to misconduct or poor performance, directly hitting your company's bottom line.

Reputational Damage: If it comes out that your company hired people with fake credentials or if fraudulent activity becomes public, it can severely damage trust among clients, partners, and the general public, harming your brand's reputation.

Legal and Regulatory Noncompliance: Failing to properly check candidates can lead to not following industry-specific rules and data protection laws, resulting in significant legal penalties and hefty fines.

Security Breaches: Fraudulent employees, especially those with bad intentions, pose a direct risk of data theft, loss of intellectual property, or other critical security breaches that can have devastating effects on your organization.

Operational Disruption: Having unqualified or fraudulent staff can disrupt workflows, require expensive extra training, and lead to higher employee turnover, all of which hinder efficient business operations and how you use your resources.

Hiring Fraud Prevention Tactics for Human Resources

To effectively fight the growing problem of candidate fraud, HR departments need to adopt proactive and strong prevention strategies.

Implement a Top-Notch Identity Verification Solution That Follows Best Practices

Putting a comprehensive identity verification solution in place is absolutely crucial. Such a solution should simplify and automate identity-proofing processes, offer additional re-verification steps at important moments, and accurately verify identities while balancing strict security with ease of use for candidates. This includes verifying candidates during onboarding.

Stronger Interviewing Techniques

Go beyond basic interviews and add more advanced methods:

Conduct live technical assessments to truly verify claimed skills.
Make video interviews mandatory and use technology to check for deepfakes and AI-generated voice simulations.
Cross-check social media profiles for anything odd or inconsistent with the information provided.

Background Screening Enhancements

Boost your background screening processes:

Use reputable third-party background check providers that employ advanced fraud detection techniques.
Consider continuous employee monitoring even after the initial onboarding to spot any suspicious behavior or changes.

Use Zero Trust & Identity Security During Hiring

It's time to team up with your security crew and embed cutting-edge zero trust and identity security into your entire hiring pipeline. This powerful partnership creates a proactive defense against fraud right from the start. Ditch those old-school passwords. They're a major weak spot for phishing and brute-force attacks. By embracing passwordless methods like biometrics or FIDO passkeys, you slam the door on fraudsters trying to steal credentials to access your systems.

Then, supercharge your verification with zero trust principles and FIDO passkeys. The core idea is simple but revolutionary: "never trust, always verify". Every single access request is questioned until it's proven legitimate. This enables continuous authentication, so even if a bad actor slips through the cracks initially, their movements are instantly flagged and restricted. FIDO passkeys are the perfect tool for this, providing a highly secure and slick way to ensure only the verified individual can advance at every step of the hiring journey.

Candidate Fraud Real-World Cases

The threat of candidate fraud isn't just theoretical; it has real, tangible consequences for businesses worldwide. These aren't isolated incidents but a growing trend that highlights the critical need for advanced identity verification and constant vigilance.

The HYPR blog post, "HYPR Unmasks a Fake IT Worker: North Korea Isn't the Only Threat," shares a firsthand account of how HYPR successfully stopped a potential fraud attempt involving a highly sophisticated impersonator. This incident clearly demonstrates the effectiveness of robust identity verification in a real-world situation. The fraudster presented a convincing resume and tried to mimic legitimate behavior, but HYPR's system was able to detect the deception, preventing a potentially damaging security breach.

Beyond individual cases, broader patterns of hiring fraud illustrate the severity of the problem:

The North Korean IT Worker Scheme

Reports detail how fraudulent IT workers, sometimes linked to state-sponsored activities, infiltrate companies using fake identities. They pose as remote software engineers, using fake profiles and stolen identities to gain employment. These schemes often aim to steal data or funnel illicit wages back to their countries, posing both financial and national security risks.

Financial Sector Impersonation

The financial sector has seen cases where individuals with false credentials or stolen identities gained access to critical systems. Such incidents lead to significant financial losses, regulatory penalties, and reputational damage. Impersonators might use fake certifications to get roles in sensitive departments, then exploit their access for fraud or data theft.

The Rise of AI-Generated Resumes

With generative AI tools, companies now regularly encounter applicants submitting entirely computer-generated resumes and cover letters that are hard to distinguish from genuine ones. Additionally, deepfake technology enables interview fraud where candidates use AI to create convincing video and voice simulations for remote interviews, making it difficult to identify genuine candidates.

Protect Your Business Against Hiring Fraud with HYPR Affirm

Strong authentication and verification processes are no longer just an option; they’re an absolute necessity. Hiring fraud is constantly evolving, with new and more sophisticated tactics emerging daily. Traditional background checks and manual verification methods simply aren’t enough to keep pace with the cleverness of modern fraudsters.

This is precisely where a dedicated, robust identity verification solution like HYPR Affirm becomes indispensable. HYPR Affirm is specifically designed to prevent candidate fraud by confirming the true identity of every applicant, right from the initial application stage through to onboarding. By using advanced technologies such as:

Biometrics: Utilizing unique physical or behavioral characteristics like facial recognition to confirm a person's identity.

Document Verification: Authenticating government-issued IDs, passports, and other official documents for signs of tampering or forgery, ensuring their legitimacy.

Liveness Detection: Ensuring that the person presenting the identity is a real, live individual and not a spoof, a photograph, or a sophisticated deepfake.

Location: Verifying the candidate's physical location against expected or declared information to detect inconsistencies that may signal fraud.

Manager Attestation: Enabling a step-up escalation for complex cases by allowing a manager or help desk agent to conduct a live chat and video call for final verification.

HYPR Affirm verifies candidates at critical touchpoints throughout the hiring journey, creating a formidable barrier against deception. This multi-layered approach significantly reduces the risks associated with various types of fraud, including:

Synthetic identities: By cross-referencing multiple data points and using advanced algorithms to detect fabricated personal details.

Deepfake impersonations: By employing sophisticated liveness detection and forensic analysis during video interactions to ensure the person on screen is genuinely real.

Stolen credentials: By verifying that the individual applying is indeed the legitimate holder of the presented identity documents.

This comprehensive approach safeguards your entire hiring process, shielding your business from the profound consequences of financial loss, potentially devastating security breaches, and irreparable damage to your reputation. With HYPR Affirm, you can confidently build your team, knowing that you're bringing on genuine talent, not an elaborate deception.

Key Takeaways

Candidate fraud is a growing danger, made worse by remote hiring, leading to significant financial losses and security vulnerabilities.
Fraudsters use many different methods, including fake qualifications, synthetic identities, deepfake technology, and stolen personal data.
The business implications are severe, covering financial losses, harm to reputation, legal non-compliance, and security breaches.
Implementing a comprehensive identity verification solution, improving interviewing techniques, enhancing background checks, and adopting zero trust principles are all crucial for effective prevention.

Conclusion

Candidate fraud, increasingly fueled by remote hiring and sophisticated tactics like deepfakes, poses significant financial, reputational, and security risks to businesses. Combating this evolving threat requires a multi-faceted approach, including robust identity verification, improved interviewing techniques, enhanced background checks, and a strong zero-trust security posture.

This is where HYPR Affirm comes in, offering a powerful defense. It provides deterministic, high-fidelity identity proofing to ensure that only verified individuals join your team. By safeguarding your business from fraud, HYPR Affirm helps you secure your future. The company is even recognized as a sample vendor in this critical space.


myLaminin

Your Research, Connected: The Case for Commercial RDM Tools that Scale with Collaboration

Research is increasingly collaborative, complex, and global. Whether it’s a multi‑site clinical study, a cross‑university climate project, or a public‑private health data initiative, researchers now work across institutions and nations. That kind of collaboration depends on systems—not just goodwill. Platforms like myLaminin provide secure support for research operations and data exchange via robust repositories, role‑based access, metadata standards, FAIR compliance, and audit trails.

Ocean Protocol

DF152 Completes and DF153 Launches

Predictoor DF152 rewards available. DF153 runs July 31st — August 7th, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor.

Data Farming Round 152 (DF152) has completed.

DF153 is live today, July 31st. It concludes on August 7th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF153 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF153

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and ASI Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF152 Completes and DF153 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

Introducing CIBA for Secure Transaction Verification


Digital applications constantly deal with identities. It’s important to verify identity at the application’s front door through authentication. Several mature and sophisticated techniques and standards for user authentication, such as OpenID Connect (OIDC) and Security Assertion Markup Language (SAML), allow a trusted identity provider (IDP) to securely authenticate the user before allowing access to an application.

However, front door authentication is not the only context in which identities must be verified.

Consider the following scenarios:

Update your email address through a bank’s customer service
Recover the userID/password through the HelpDesk
Securely perform a transaction at a retail Point of Sale (POS) system
Authenticate with browser-constrained devices, such as smart speakers
Authenticate from a shared kiosk

In each of the above cases, though the identities need to be verified, it might not be possible or appropriate to have the user perform authentication through an interactive login interface such as a web browser.

Table of Contents

How applications handle identity verification today
Let’s step through identity verification scenarios
Update email address for bank accounts
Point of Sale (POS) payment
Use CIBA to verify identity transactions securely and consistently
CIBA enables a smooth authentication experience
CIBA builds upon OAuth 2.0 and OIDC
Security considerations when using CIBA
CIBA support in Okta
Use CIBA for secure identity verification in your apps
Learn more about CIBA, Okta, and identity verification

How applications handle identity verification today

While it’s popular to leverage a secure IDP and standards to provide initial authentication through a login interface, identity verification in scenarios like those above is built in an ad hoc manner. Depending on the design, some applications do this inefficiently with a terrible customer experience, while others are less secure and vulnerable.

Let’s step through identity verification scenarios

Consider identity verification needs where you can’t rely on a traditional user authentication process, such as when multiple parties or browser-constrained systems are involved. Let’s examine those cases and identify identity security pitfalls.

Update email address for bank accounts

Consider a user calling customer service to update the email address associated with their bank account. Often, the helpline personnel ask the user for certain personally identifiable information (PII) such as last name, date of birth, and the last four digits of their Social Security number. Upon verification, the helpline personnel update the email address through the customer care application, which performs privileged operations on the bank’s identity database to change the user record.

There are problems with this approach.

First, the customer experience is not great. The customer also needs to provide PII for verification, which an attacker can guess or obtain using social engineering. This can easily lead to an account takeover, where a fraudulent actor successfully passes the verifications and infiltrates the account with a new email ID.

The second issue is that the customer care application needs to change the user profile from the backend without authentication. The application would typically use powerful credentials to perform such a privileged operation. For example, an application can obtain and use a token with user management privileges to call the user update API. Such a token allows updating any user account in the banking system directory, and it can be misused if the token leaks.

Wouldn’t it be nice to get some form of token that provides the application with just enough privilege to update only the calling user profile? That way, it could adhere to the least privilege principle of security.
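The least-privilege idea can be sketched in a few lines of Python. This is a toy model, not any real banking or Okta API: a token carries a subject and a scope, and the update routine accepts a user-scoped token only for the matching record. The scope names (`users.manage`, `self.profile`) are invented for illustration.

```python
# Illustrative only: contrast an admin-scoped token with a user-scoped one.
def make_token(sub: str, scope: str) -> dict:
    return {"sub": sub, "scope": scope}

def update_email(token: dict, user_id: str, new_email: str, db: dict) -> bool:
    # An admin token may update anyone; a user-scoped token only its own record.
    allowed = token["scope"] == "users.manage" or (
        token["scope"] == "self.profile" and token["sub"] == user_id
    )
    if allowed:
        db[user_id]["email"] = new_email
    return allowed

db = {"alice": {"email": "old@example.com"}, "bob": {"email": "bob@example.com"}}
scoped = make_token("alice", "self.profile")

assert update_email(scoped, "alice", "new@example.com", db)     # allowed
assert not update_email(scoped, "bob", "evil@example.com", db)  # rejected
```

If the user-scoped token leaks, the blast radius is a single account rather than the whole directory, which is exactly what the least-privilege principle asks for.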

Point of Sale (POS) payment

This is another interesting scenario. When a user tries to pay in the retail Point of Sale (POS) system with a bank account, they won’t be comfortable signing in to their bank account on a shared device and providing their credentials.

Instead, it would be ideal if the POS system allowed secure payments with an alternative form of verification, where the user does not need to provide their credentials in a public system!

Can we do something to decouple user authentication from the application?

Use CIBA to verify identity transactions securely and consistently

The idea is to decouple authentication from the application so that it can be initiated on one device and verified on another. Client-Initiated Backchannel Authentication (CIBA) allows exactly that separation.

CIBA is a relatively new authentication flow based on OAuth 2.0 in which the client can initiate a flow to authenticate the user without any end-user interaction on the consumption device. The flow involves back-channel communication from the client to the OAuth 2.0 authorization provider, without redirecting through the user’s browser (the consumption device). The authentication is verified independently on a separate authentication device, such as a phone or smartwatch, that is in the user’s possession and securely enrolled with the provider.

CIBA enables a smooth authentication experience

Consider the following flow for our banking email address use case.

The customer care application initiates an authentication event for the user by sending a direct CIBA request to the authorization server.
Instead of redirecting the user to a login page, the authorization server sends a push notification to the user’s phone.
The authorization server is notified when the user accepts the notification on their phone or smartwatch.
The authorization server then issues a user token to the application.
The application uses the user-scoped token to complete the target operation, which is updating the email address.

A few benefits of this approach are:

The user experience becomes smoother during verification. It can also instill confidence in users that the system operates securely.
Push notifications offer higher security than other out-of-band user authentication methods, such as SMS one-time codes (OTP).
The application token can be narrowly scoped to the user, providing least-privileged access.

Here is a simplified flow of a transaction using CIBA.

CIBA builds upon OAuth 2.0 and OIDC

CIBA is an extension on top of OIDC, which itself is based on the OAuth 2.0 framework. It brings in a new OAuth 2.0 grant type in the family: urn:openid:params:grant-type:ciba. As customary with the OIDC discovery endpoint, CIBA introduces additional metadata parameters, such as backchannel_token_delivery_modes_supported and backchannel_authentication_endpoint. The discovery document payload looks like this:

{
  "issuer": "...",
  "authorization_endpoint": ".../authorize",
  "token_endpoint": ".../token",
  "userinfo_endpoint": ".../userinfo",
  "jwks_uri": "...",
  "grant_types_supported": [
    "authorization_code",
    "refresh_token",
    "password",
    "urn:openid:params:grant-type:ciba"
  ],
  "backchannel_token_delivery_modes_supported": ["poll", "ping", "push"],
  "backchannel_authentication_endpoint": "",
  ....
}

The backchannel_token_delivery_modes_supported parameter needs some additional commentary. The specification defines three different modes of notifying the client about the completion of authentication.

Poll: In this mode, the client keeps polling the authorization server until the authentication is complete or the event times out. In case of successful authentication, the final poll returns the tokens to the application. This mode is the simplest and easiest to implement.

Ping: When the authentication is complete, the authorization server calls back to a registered URL of the client, notifying it of the status. The client then makes a request to the authorization server for the tokens.

Push: When the authentication is complete, the authorization server calls back to the client’s registered URL with the tokens.

Ping and Push modes are more complex to implement and need additional metadata and implementation steps on the client side. However, they save the network trips caused by the polling cycle.

Since the CIBA request uses a back-channel, it must contain a parameter that the authorization server can use to identify the user. Typically, the parameter is supplied using the login_hint or id_token_hint parameter of the request.

Instead of the traditional authentication flow, where the client interacts with the authorization server sequentially, the authentication device performs out-of-band authentication. In practical implementations, this is a push notification to a device such as a phone or smartwatch. The device needs to be securely registered with the authorization server for the user so that the server knows where to send the authorization request. The push notification can be delivered by embedding the mechanism in the organization’s mobile application or by using a companion authenticator application.
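The end-to-end poll-mode flow can be sketched with a toy in-memory authorization server. This is purely illustrative (not Okta's API, and real CIBA requests are signed HTTPS calls): the client initiates with a login_hint, receives an auth_req_id, and polls the token endpoint with the CIBA grant until the user approves out of band.

```python
import time
import uuid

class StubAuthServer:
    """Toy in-memory authorization server simulating CIBA poll mode."""
    def __init__(self):
        self.requests = {}

    def backchannel_authenticate(self, login_hint: str) -> dict:
        # Client initiates; login_hint identifies the user to authenticate
        auth_req_id = str(uuid.uuid4())
        self.requests[auth_req_id] = {"user": login_hint, "approved": False}
        # "interval" tells the client how often it may poll (seconds)
        return {"auth_req_id": auth_req_id, "expires_in": 120, "interval": 1}

    def approve(self, auth_req_id: str):
        # Fired when the user accepts the push notification on their device
        self.requests[auth_req_id]["approved"] = True

    def token(self, auth_req_id: str) -> dict:
        # Token endpoint, polled with grant_type urn:openid:params:grant-type:ciba
        req = self.requests[auth_req_id]
        if not req["approved"]:
            return {"error": "authorization_pending"}
        return {"access_token": f"user-scoped-token-for-{req['user']}",
                "token_type": "Bearer"}

server = StubAuthServer()

# 1. Client initiates authentication with a login_hint identifying the user
resp = server.backchannel_authenticate(login_hint="alice@example.com")

# 2. Out of band, the user approves on their phone (simulated here)
server.approve(resp["auth_req_id"])

# 3. Client polls the token endpoint until tokens arrive or the request expires
while True:
    tokens = server.token(resp["auth_req_id"])
    if "error" not in tokens:
        break
    time.sleep(resp["interval"])

print(tokens["access_token"])
```

Note how the consumption device never collects credentials; it only learns the outcome, which is what makes the POS and helpdesk scenarios above workable.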

Security considerations when using CIBA

CIBA is vulnerable to attacks akin to an MFA fatigue attack. Consider the case where an attacker guesses a user ID or infiltrates a user account and repeatedly attempts to carry out a sensitive transaction implemented using CIBA authorization. The real user might get overwhelmed by repeated push notifications and accept one.

A related scenario is when the attacker has a list of user IDs and initiates transactions for each. While most users would ignore the push prompt, a small percentage could approve the request.

In summary, CIBA suffers from a weakness where an attacker can force-initiate an authorization event. In certain scenarios, a more secure alternative is the device code flow, where a user can actively initiate authorization on their device using a QR code or one-time code.

Also, CIBA should not be used in a same-device scenario where the consumption and authentication devices are the same.

CIBA support in Okta

CIBA is not yet widely implemented. Okta has been an early adopter of the CIBA standard.

CIBA is rapidly gaining traction in the banking industry. FAPI specifications, developed on the OAuth 2.0 token model, include a CIBA profile. CIBA, along with complementary specifications such as Demonstrating Proof of Possession (DPoP), makes up the key components required for highly regulated identity.

In Europe, CIBA can help implement the decoupled authentication flows outlined by PSD2 and UK Open Banking. The Consumer Data Right (CDR) in Australia is expected to include the specification soon. Beyond banking, CIBA promises enhanced security and user experience for helpdesk, customer service, retail point-of-sale (POS), interactive voice response (IVR), and shared-kiosk applications.

Okta supports CIBA in poll mode through a feature called Transactional Verification. The Okta authorization server includes the CIBA grant type as part of this support.

Authentication is supported through a mobile push authenticator built with the Okta device SDK. The SDK can be embedded in the organization’s mobile application or shipped as a separate companion application. Check out the iOS and Android guides on implementing a branded push authenticator with the SDK; they include sample applications to get you started quickly.

Use CIBA for secure identity verification in your apps

Digital applications are crucial for every business, and securing them is paramount. Protecting just the front door with authentication is not enough: applications must remain vigilant throughout their operation and follow a zero-trust model. CIBA is an important tool for enforcing continuous, secure authorization in appropriate contexts without compromising the user experience.

Learn more about CIBA, Okta, and identity verification

If you want to learn more about CIBA, Okta, and identity verification, check out these resources:

CIBA specification Configure CIBA with Okta

Follow us on Twitter and subscribe to our YouTube channel for more identity content. Feel free to leave us a comment below about the identity topics you want to learn more about.

Wednesday, 30. July 2025

Anonym

4 Ways MySudo Email is Better than Masked Email


A masked email address is a unique, automatically generated email address that you can use to shield your primary email from spam and email scams like phishing attacks.

Email masking adds a layer of protection between you and your inbox—and with data breaches and fraud at record highs, masked email, as well as email aliases, temporary email, and disposable email, are now popular privacy tools.

But if you’re searching for truly private and secure email, you can do much better than masked email. MySudo has loads of benefits over other email options and we’ll cover them all here—but first, let’s do the 101 on masked email.

Is a masked email the same as an email alias?

Email masking is sometimes confused with email aliasing, but they’re not the same.

An email alias is a secondary email address (usually a variation of your primary email address) that sits in your primary inbox—whereas a masked email is an entirely separate email account that gets forwarded to your primary email inbox.

But a masked email is not a temporary email or a disposable email, either.

A temporary email address is usually set up to receive one-off messages, has no sending capability, and will auto-expire after a set time or set number of uses.

A disposable email is like a temporary email but typically lasts longer. It may have limited sending capability for replies. You can throw away your disposable email whenever you’re done with it.

How does email masking work?

While your primary email provider can handle your email aliases, you’ll need a third-party provider to run your masked email service. Here’s what happens: 

You create a masked email address through a service like Apple’s iCloud Private Relay, Firefox Relay, or a custom domain. When someone sends an email to your masked address, the service forwards the email to your real inbox. You read the email in your real inbox without the sender ever knowing your real email address.

Depending on the service you choose, replies will go either:

Via the service: Some masked email services let you send email from the masked address, so the recipient sees the masked email address, not your real email address, or

From your real inbox: If you reply to an email that’s been forwarded from the masked email service, the service will rewrite the “From” address to keep your real email hidden.

You can disable or delete your masked email at any time.

What type of email is MySudo email?

MySudo email is a popular secure email service with full send and receive support. It’s entirely separate from your personal email account and intentionally protects your personal email from spam and email-based scams.

MySudo email is better than masked email in at least 4 ways:


1. MySudo is more than email; it’s a complete private identity management solution

Your MySudo email sits within a secure Sudo digital identity which you set up, manage and retain until you decide to delete it. While email is a popular feature of a Sudo, each Sudo contains so much more, to make it a complete private identity management solution. Here’s what’s in each Sudo:

1 email address – for end-to-end encrypted emails between app users, and standard email with everyone else

1 handle – for end-to-end encrypted messages and video, voice and group calls between app users

1 private browser – for searching the internet without ads and tracking

1 phone number (optional)* – for end-to-end encrypted messaging and video, voice and group calls between app users, and standard connections with everyone else; customizable and mutable

1 virtual card (optional)* – for protecting your personal info and your money, like a proxy for your credit or debit card or bank account

*Phone numbers and virtual cards only available on a paid plan. Phone numbers available for US, CA and UK only. Virtual cards for US only.

Masked email services usually provide only disposable or relay email addresses, not full identity separation like MySudo does.


2. You can have up to 9 separate email accounts with MySudo

The top plan on MySudo entitles you to up to 9 separate Sudo digital identities, which means you can actually maintain up to 9 separate email accounts at any one time (plus 9 secondary phone numbers, private browsers, handles, and optional virtual cards).

The real power comes when you apply those 9 separate Sudos to different purposes in your busy life: shopping, dating, classified selling, travel, social media accounts, and more. Anywhere you don’t want to use your personal email, phone, and credit card, use your Sudo details instead. You stay private, safe and organized. Read: 4 Steps to Setting Up MySudo to Meet Your Real Life Privacy Needs and From Yelp to Lyft: 6 Ways to “Do Life” Without Using Your Personal Details.


3. MySudo email is end-to-end encrypted between MySudo users

MySudo email is end-to-end encrypted between MySudo users. This gives you much greater privacy and control, especially for sensitive communications. You can easily invite friends and colleagues to the app, so you can email, call, and message each other securely.

Most masked email services only forward email and don’t secure or end-to-end encrypt it beyond basic TLS during transmission.


4. You can sign up to MySudo without giving away your personal email or any other info

Many masked email services tie the masked address to your real identity behind the scenes, but MySudo doesn’t. In fact, we can’t. That’s because MySudo won’t ask for any of your personally identifiable information, like primary email address or phone number, when you set up and log in to your account. Instead, your account is protected by an authentication and encryption key that never leaves your device.

We’ll only ask for personal information if you opt in to use the optional MySudo virtual cards feature and for UK phone numbers, because for these services we must do a one-time verification of your identity by law.

So, don’t waste your time setting up masked email or email aliases for privacy. Instead, go straight to MySudo, where you can have up to 9 secure email addresses that shield and protect your personal email—and loads of other privacy and security features besides.

Download MySudo for iOS or Android.

You might also like:

The Top 10 Ways Bad Actors Use Your Stolen Personal Information

The 5 Big Benefits of Using the Private Email in MySudo

14 Real-Life Examples of Personal Data You Want to Keep Private

The post 4 Ways MySudo Email is Better than Masked Email appeared first on Anonyome Labs.


LISNR

Signal to Sale: How Ultrasonic Tech is Solving Retail’s Attribution Problem


For all retailers, the physical store remains crucial for customer conversion, but the rapid expansion of digital channels has created a non-linear path from awareness to purchase. The exponential increase in consumer touchpoints makes it nearly impossible for industry leaders to draw a clear line across the whole customer journey. This disruption in the funnel universally impacts big box stores, CPGs, and small brick-and-mortar merchants alike.

How are companies supposed to invest resources into promotion campaigns when conversion metrics are basically a shot in the dark?

The Problem: In today’s retail ecosystem, merchants only truly know their customers at checkout, but by then, it’s too late. If businesses could identify their consumers throughout the entire shopping journey, they could present personalized offerings at critical stages of the buying process and understand full attribution of which promotions worked and which didn’t.

The Game Plan:

Recognize which customers viewed certain TV promotions at home

Identify customers when they enter the store

Present customers with personalized offerings during their shopping experience to increase spend or highlight higher-margin items

Cultivate unique shopping experiences that increase word-of-mouth marketing

Gamify the checkout experience to make consumers excited for their next visit

With this equation, your retail store doesn’t become an option; it becomes the destination.

The Solution: LISNR’s Radius, an ultrasonic SDK capable of sending data over inaudible sound using standard speakers and microphones. Radius transmits data using frequencies higher than what humans can hear, allowing these “tones” to be layered over TV broadcasts and in-store music and announcements. Our Zone66 tone profile can send data over 30 feet from standard speakers and can be used to identify a consumer and their relative location within the physical store. Once identified, merchants can send personalized offerings directly to the consumer’s device as they stand in front of specific products.
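Radius’s actual encoding is proprietary, but the underlying idea of carrying bits over near-ultrasonic audio can be sketched generically. The toy example below (all frequencies and parameters are illustrative, not LISNR’s) encodes each bit as one of two high-frequency tone bursts and recovers them with the Goertzel algorithm:

```python
import math

SAMPLE_RATE = 44100          # Hz; typical consumer audio hardware
F0, F1 = 18500.0, 19500.0    # illustrative near-ultrasonic frequencies for bits 0 and 1
SYMBOL_SECONDS = 0.01        # duration of one bit's tone burst


def encode(bits):
    """Turn a bit sequence into audio samples, one tone burst per bit (simple FSK)."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    samples = []
    for bit in bits:
        freq = F1 if bit else F0
        samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n))
    return samples


def goertzel_power(chunk, freq):
    """Signal power at a single frequency (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in chunk:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2


def decode(samples):
    """Recover bits by comparing the power at the two tone frequencies."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    return [
        1 if goertzel_power(samples[i:i + n], F1) > goertzel_power(samples[i:i + n], F0) else 0
        for i in range(0, len(samples), n)
    ]
```

A real system such as Radius would add synchronization, error correction, and robustness to room acoustics; the point here is only that ordinary speakers and microphones can carry data above the audible band.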

Furthermore, LISNR offers Quest, a loyalty management platform that enables tracking of consumers’ purchase history and incentivizes future purchases. Utilizing the gamified nature of Quest at checkout, consumers can track their progress toward rewards and redemption, keeping the store top of mind.

The Proliferation of LISNR-Enabled Digital Touchpoints

LISNR capitalizes on digital touchpoints by enabling the delivery of valuable and personalized offers directly to consumers at various touchpoints found in everyday shopping experiences. By integrating with LISNR, retailers can enable attribution within their existing infrastructure.

At-Home Television


Retail media, CTV, and linear TV ad spend is projected to be around $5 billion in 2025, according to MediaPost. Retailers clearly understand the value in advertising through television; however, realizing the ROI is much more difficult. How can businesses confidently attribute an advertisement to an uptick in purchases at checkout? The answer is Radius.

Radius’ ultrasonic tones can be played simultaneously with a normal TV audio broadcast. Image 1 shows an example of a Radius tone at a frequency far above the standard broadcast audio, which plays at human-audible levels. Retailers can utilize any audio stream to encourage consumers to open their app and engage or receive a promotion. What better way to embrace the ever-present second screen?

See a TV broadcast advertisement delivery in action

Retailers can either broadcast a single offer to all consumers or offer a personalized coupon based on their previous shopping behavior.

Image 1: Radius tone being broadcast at a frequency higher than human hearing

In-Store Audio

In-store audio, whether music or PA announcements, is commonplace in retailers and is often drowned out as white noise. As a result, in-store audible advertisements often fall short, with retailers relying on consumers to be actively listening. Again, Radius can break this paradigm.

By operating simultaneously with in-store audio, consumers are seamlessly presented with personalized offers via their device. The Radius SDK even allows data to be transmitted over three distinct audio frequencies or “channels” to prevent overlapping messages. Image 2 demonstrates how three distinct data streams can be played simultaneously over audio.

For this use case, retailers can broadcast “Zone IDs” in-store to locate consumers and present relevant messaging. Once the consumer’s device receives the Zone ID for the store, the retailer’s backend app logic is able to present the consumer with a personalized offering. App logic can also be set up to limit the amount of promotions a consumer receives in a given timeframe.

Image 2: Radius tones being broadcast over 3 different channels

At-Checkout

Many of today’s shopper cards or loyalty programs are either a black box for consumers, or a percentage discount disguised as loyalty points. This does nothing to create loyal or engaged customers. LISNR instead offers Quest, a gamified loyalty platform.

By creating gamified experiences, Quest taps into a different mindset of the consumer, building a connection to merchants while increasing transaction size and volume. Quest is not just about dollars spent; it empowers retailers to incentivize any customer action (visits, specific item purchases, referrals) that they deem impactful. Consumers can interact with time-based criteria (quests) or lifetime progression (achievements) to track their loyalty and redeem rewards.

Quest’s data builds a positive feedback loop for consumer preferences. As consumers shop more frequently, their preferences become more defined, allowing retailers to present more relevant offers. These offers in turn incentivize consumers to come back, spend more in-store, and look forward to their next visit.

Quest allows consumers to earn and collect achievement badges modeled after video games

Conclusion

By establishing a channel from awareness to conversion, retailers can capitalize on the power of targeted marketing and attribution. Retailers no longer need to guess at ROI versus cost per impression; LISNR delivers insight into cost per conversion. Add personalized loyalty into the mix and you begin to see the LISNR flywheel for retailers.

We’ve created an easily digestible overview of this process, highlighting the digital touchpoints for consumers in-store. Fill out your contact information below to download a digital copy for you and your team.

The post Signal to Sale: How Ultrasonic Tech is Solving Retail’s Attribution Problem appeared first on LISNR.


liminal (was OWI)

Link Index for Data Access Control

The post Link Index for Data Access Control appeared first on Liminal.co.



ComplyCube

Best AML Software in 2025: What to Look for in a Compliant Solution


With the removal of travel bans post-COVID-19, financial crimes have increased, with criminals using smarter, deceptive tactics. Selecting the right AML software can safeguard your business from these high-risk criminal activities.

The post Best AML Software in 2025: What to Look for in a Compliant Solution first appeared on ComplyCube.


The Top 5 AML Fines in 2025 Businesses Need to Know


Regulators worldwide have issued over $6 billion in AML fines this year. Yet, these fines are projected to grow as regulations worldwide undergo rapid changes to close out the significant money laundering and fraud gaps.

The post The Top 5 AML Fines in 2025 Businesses Need to Know first appeared on ComplyCube.

Tuesday, 29. July 2025

HYPR

NIST SP 800-63-3 Review: Digital Identity Guidelines Overview

Evolution from 800-63-2 to 800-63-3

The NIST SP 800-63 guidelines are dynamic, constantly adapting to evolving technological advancements and threats. The latest iteration, NIST SP 800-63-3, represents a crucial evolution from its predecessor, 800-63-2, incorporating significant improvements to address emerging vulnerabilities and provide stronger security measures.

A key update resides within NIST 800-63B, a core component of the 800-63-3 guidelines, which focuses intently on authentication methods. Notably, email one-time passwords (OTPs) have been explicitly placed in a limited scope. This decision directly acknowledges their inherent susceptibility to widespread phishing at the workplace, where email is easily compromised.

Similarly, SMS-based authentication has been formally downgraded as a viable authenticator for high-assurance scenarios. While SMS was initially considered a significant step forward for two-factor authentication, experience over the years has shown that mobile providers and even the SS7 network itself can be compromised.

These pivotal revisions in NIST 800-63-3 unequivocally signal a strategic shift towards prioritizing stronger, more phishing-resistant authentication protocols. NIST actively encourages organizations to adopt resilient authentication mechanisms that genuinely protect against unauthorized access and prevent identity fraud.

Key Concepts and Processes of Identity Proofing and Authentication

The guidelines introduce a significant shift by retiring the concept of a "level of assurance" (LOA) as a single, all-encompassing ordinal that dictates implementation-specific requirements. Instead, NIST 800-63-3 emphasizes that agencies (and by extension, organizations) should select IAL (Identity Assurance Level), AAL (Authenticator Assurance Level), and FAL (Federation Assurance Level) as distinct, independent options. This selection process is driven by appropriate business and privacy risk management considerations, alongside specific mission needs. While many systems might coincidentally have the same numerical level for each of IAL, AAL, and FAL, this is not a mandatory requirement, and agencies should avoid assuming they will always be identical within any given system.

The distinct components of identity assurance detailed in these guidelines are as follows:

IAL refers to the identity proofing process, which validates the real-world identity of the applicant.

AAL refers to the authentication process, which verifies the user's claimed identity during a transaction.

FAL refers to the strength of an assertion in a federated environment, specifically used to communicate authentication and attribute information (if applicable) to a relying party (RP).

This explicit separation of categories provides organizations with greater flexibility in choosing identity solutions and significantly enhances the ability to embed privacy-enhancing techniques as fundamental elements of identity systems, regardless of the chosen assurance level.

Beyond these foundational assurance concepts, the guidelines meticulously elaborate on the crucial roles played by various key actors within the sophisticated digital identity ecosystem:

Credential Service Providers (CSPs): These entities bear the significant responsibility for issuing and managing authenticators (digital credentials) for users. Their role ensures the secure storage of the unique digital representation of the individual and its secure use for authentication. Their careful handling of these credentials is vital for the entire chain of trust.

Relying Parties (Verifiers): These are the diverse services, applications, or systems that consume the authenticated identity to grant appropriate access to specific resources or services. They rely on the assertions provided by CSPs to verify the user's identity before extending trust or access, often by confirming that the user's authentication complies with specified Authenticator Assurance Levels (AALs).

The Digital Identity Model: NIST's Vision for Online Presence

NIST defines a sophisticated and nuanced concept of digital identity that extends far beyond the simplistic notion of a username and password. This comprehensive model fundamentally emphasizes the unique and verifiable nature of an individual's digital representation and its pivotal role in facilitating secure online transactions and interactions across diverse platforms.

The Digital Identity Model, as conceptualized by NIST, illustrates a clear and sequential flow for establishing and utilizing a secure digital identity, moving from an applicant's initial request to their engagement in online transactions. This model involves several interconnected key stages:

Applicant: This initial stage represents the individual requesting access or registration for a digital service. At this point, the applicant may submit personal data such as their name, email, or an ID photo to initiate the process.

Enrollment: Here, the identity is rigorously verified using various identity proofing methods. Once verification is successful, credentials or authenticators are issued to the individual for future use.

Digital Identity: Once the enrollment process is complete and verified, a unique digital representation of the individual is created. This digital identity is then stored securely and subsequently used for authentication purposes in various online contexts.

Online Transaction: In the final stage of the model, the user leverages their established digital identity to authenticate and gain access to a service. During this process, the system actively verifies the user's identity, ensuring compliance with predefined Authenticator Assurance Levels (AALs) to secure the transaction.

This model provides a clear visual and conceptual framework for understanding the lifecycle of a digital identity within the NIST guidelines, emphasizing the progression from initial proofing to ongoing authentication.

Key Processes in Digital Identity Management

NIST SP 800-63-3 breaks down digital identity management into three key, interconnected processes:

Identity Proofing: The foundational step of verifying an individual's identity, ensuring it exists and belongs to the claimant. This prevents fraudulent account creation and initial unauthorized access.

Digital Authentication: The ongoing process of verifying a user's claimed identity each time they attempt an online transaction or access a resource. It ensures the legitimate holder is performing the action.

Federated Identity Management: A mechanism for linking identities across different organizations, allowing users to authenticate once and gain access to multiple relying parties without repeated authentication.

Understanding NIST Assurance Levels (IALs, AALs, FALs)

NIST defines Identity Assurance Levels (IALs), indicating the certainty that a claimed identity corresponds to a real-world identity. These are part of NIST 800-63-3 and provide a tiered approach to evaluating identity proofing strength.

IAL1 (Low Assurance): No requirement to link the individual to a real-world identity; information is self-asserted.

IAL2 (Medium Assurance): Uses digital documents as evidence to support the claimed identity's real-world existence and verifies the person's association.

IAL3 (High Assurance): Requires an authorized and trained representative to verify the individual in person, often with biometrics, for the highest certainty.

IALs primarily measure assurance at a single point in time, during enrollment or initial identity proofing, and do not cover ongoing authentication.

Authenticator Assurance Levels (AALs) quantify the strength of the authentication mechanism used during login:

AAL1: Typically single-factor (e.g., username/password), generally discouraged for sensitive data.

AAL2: Requires at least two distinct authentication factors, designed to resist replay attacks, though SMS OTPs are now considered less secure.

AAL3: The highest level, requiring strong cryptographic device-based authentication (e.g., FIDO security key, device-bound passkeys), highly resistant to phishing and man-in-the-middle attacks.

Enrollment and Identity Proofing (SP 800-63-A)

NIST 800-63-A provides practical and prescriptive examples of proofing methods that can be judiciously utilized to meet these varying assurance levels. These methods are designed to collectively minimize the risk of fraudulent identity creation and unauthorized access:

Document Verification: This involves examining official documents (e.g., passport, driver’s license) either in person or digitally, with technology capable of detecting forgeries or alterations.

Facial Recognition with Liveness Detection: This method uses facial biometrics to confirm the person matches the claimed identity. Crucially, liveness detection is integrated to detect and thwart spoofing attempts using photos, videos, or masks.

Live Video Verification: This adds a layer of human-centric security by facilitating a face-to-face verification session over a secure video conference. An authorized agent engages directly with the individual to confirm liveness and detect signs of coercion.

Chat Verification: For lower-risk scenarios or as a preliminary step, chat verification can be employed, often combining AI and human interaction.

Location Detection: Verifying the geographical location of the individual during the proofing process can be important, though it must strictly adhere to all privacy regulations.

Attestation: A critical component providing an auditable trail, attestation involves a responsible party formally confirming and documenting the results of the identity proofing process, retaining results but not sensitive PII.

The strategic integration of these diverse methods, as meticulously outlined in NIST 800-63-A, culminates in a comprehensive, multi-layered identity proofing ecosystem.

Authentication and Lifecycle Management (SP 800-63-B)

NIST Special Publication 800-63-B delves into the critical area of Authentication and Lifecycle Management, placing significant emphasis on "verifier impersonation resistance," directly acknowledging the widespread and persistent threat of phishing attacks. This mandate means that authentication methods must be meticulously designed to prevent attackers from successfully impersonating legitimate relying parties (e.g., websites, applications) in order to trick unsuspecting users into revealing their credentials or authentication factors.

The decisive move to deprecate email OTP and significantly downgrade SMS-based authentication in NIST 800-63B directly reflects the understanding that these methods, while once considered helpful, are no longer sufficient to provide adequate assurance against modern, targeted threats.

Federation and Assertions (SP 800-63-C)

The core concept elaborated in 800-63-C is the precise definition of Federated Assurance Levels (FALs). FALs are designed to quantify the confidence that can be placed in the assertions or claims made by one identity provider (often acting as a Credential Service Provider or CSP) to a distinct relying party (or verifier) about a user's identity and their authentication event.

FAL1 (Low Assurance): Corresponds to the lowest level of confidence in the assertion, often linked to an AAL1 authentication event.

FAL2 (Medium Assurance): Reflects a moderate level of confidence, typically corresponding to an AAL2 authentication event.

FAL3 (High Assurance): Denotes the highest level of confidence in the assertion, corresponding to an AAL3 authentication event.

These levels are crucial because they enable a relying party to understand and trust the level of rigor and security that was applied by the identity provider in establishing and authenticating that user's identity. This allows for informed risk decisions when granting access based on a federated assertion.

The process of conveying authentication and attribute information in a federated environment typically involves several key elements:

Assertions: Cryptographically signed digital statements made by a trusted identity provider about a user's identity or authentication event.

Protocols: Standardized technical protocols (e.g., SAML, OAuth 2.0, OpenID Connect) used to securely exchange these assertions.

Trust Frameworks: Established frameworks defining policies, procedures, and technical agreements between participating entities to ensure interoperability and security.
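At its core, a federated assertion is a signed statement that the relying party verifies before trusting any claim in it. The toy sketch below illustrates that idea with a symmetric HMAC signature and made-up claim names; real deployments use the standardized formats above (SAML assertions, or JWT-based OIDC ID tokens) and typically asymmetric keys:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key agreed between IdP and RP under a trust framework.
SHARED_KEY = b"idp-and-rp-shared-secret"


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_assertion(subject: str, aal: int) -> str:
    # Identity provider: state who authenticated, how strongly, and when.
    claims = {"sub": subject, "aal": aal, "iat": int(time.time()), "iss": "https://idp.example"}
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"


def verify_assertion(token: str) -> dict:
    # Relying party: recompute the signature before trusting any claim.
    payload, sig = token.split(".")
    expected = b64url(hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("assertion signature invalid")
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))
```

The `aal` claim is what lets the relying party make a risk decision: it can refuse, say, any assertion below the AAL it requires for the requested resource.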

Federated identity management significantly improves user experience via single sign-on (SSO) and enhances security by centralizing identity management with trusted providers.

Implementation Guidelines for Identity and Authentication Assurance Levels

Implementing NIST 800-63-3 involves selecting suitable assurance levels and addressing challenges. A common error is only verifying identity at hire; a robust strategy must cover the entire employee lifecycle.
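The critical scenarios are enumerated below; as a purely hypothetical sketch, such lifecycle checks might be modeled as event-driven triggers rather than a one-time gate at hire:

```javascript
// Hypothetical sketch: lifecycle events that should trigger identity
// re-proofing, not just the initial hire. Event names are illustrative.
const REPROOF_TRIGGERS = new Set([
  'onboarding',
  'credential_reset',
  'role_change',
  'elevated_risk',
]);

function needsReproofing(event) {
  return REPROOF_TRIGGERS.has(event.type);
}

console.log(needsReproofing({ type: 'credential_reset' })); // true
console.log(needsReproofing({ type: 'routine_login' }));    // false
```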

Critical scenarios requiring secure identity proofing and strong authentication include:

Employee Onboarding: Ensuring new hires are legitimate before granting system access, preventing interview fraud.

Credential Resets: Protecting against social engineering that exploits reset processes, as seen in the MGM Resorts attack.

Changing Roles or Elevated Privileges: Re-verifying identity before granting new access levels.

Elevated Detected Risk: Prompting re-proofing when monitoring systems detect suspicious activity (e.g., unusual login locations).

Role of HYPR in Compliance and Assurance

HYPR's solutions are strategically engineered to not just meet but exceed NIST 800-63-3 requirements, significantly enhancing identity assurance. Our unique value proposition is a commitment to true passwordless security: eliminating passwords entirely, not merely offering passwordless as an option. This comprehensive approach integrates phishing-resistant passwordless authentication, continuous risk monitoring, and automated identity verification into a unified platform.

HYPR specifically contributes to NIST 800-63-3 compliance and security enhancement by:

Elevating AALs: HYPR is singularly focused on enabling organizations to meet and exceed AAL3 requirements. Our FIDO Certified passwordless authentication directly aligns with NIST's most stringent recommendations for AAL3. By eliminating passwords, HYPR removes the primary attack vector for phishing and credential theft, securing OS-level access and consumer interactions.

Strengthening IALs: HYPR Affirm is our comprehensive identity verification solution, tailored for workforce identity proofing throughout the full employee lifecycle. It helps achieve IAL2 and IAL3 compliance using chat, video, facial recognition with liveness detection, and document authentication, and it supports step-up re-proofing based on risk. This ensures continuous identity assurance beyond a single point-in-time check, aligning with the spirit of NIST.

By integrating NIST 800-63-3 with solutions like HYPR, organizations bridge business and security objectives. This approach can lead to reduced cyber liability insurance and operational cost savings from fewer password resets. Ultimately, it drastically minimizes the attack surface, creating a more resilient and secure digital environment.

Conclusion: Embracing a Secure Digital Identity Future

The NIST SP 800-63-3 Digital Identity Guidelines are crucial for modern digital identity management, emphasizing extensive identity proofing, strong phishing-resistant authentication, and secure federated identity practices. Their evolution highlights NIST's responsiveness to emerging threats like phishing, advocating for cryptographic authenticators.

Adhering to these guidelines is a critical strategic imperative, enhancing cybersecurity, reducing fraud, and improving user experience. NIST SP 800-63-3 remains vital for fostering trust in digital identities.

Organizations that proactively embrace and diligently implement these guidelines, especially by leveraging comprehensive identity assurance platforms like HYPR, are well positioned to protect their digital assets and empower their users securely in an increasingly digital future.

FAQs

Q: What is NIST SP 800-63-3? A: NIST SP 800-63-3 refers to the National Institute of Standards and Technology's Digital Identity Guidelines, which provide a comprehensive framework for digital identity management, including identity proofing, authentication, and federated identity management.

Q: What are Identity Assurance Levels (IALs)? A: IALs are a critical part of the NIST Digital Identity Guidelines that signify the degree of certainty that a claimed digital identity corresponds to a real-world identity, with levels ranging from IAL1 (self-asserted) to IAL3 (requiring in-person verification).

Q: How does HYPR help with NIST compliance? A: HYPR's solutions, such as its FIDO Certified passwordless authentication and comprehensive identity verification platform (HYPR Affirm), directly assist organizations in achieving compliance with NIST 800-63-3 guidelines by providing high assurance levels (specifically AAL3 and IAL2 capabilities) and eliminating vulnerable, password-based authentication methods.

Related Resources:
What Is Identity Assurance?
Best Practices for Identity Proofing in the Workplace
Understanding NIST 800-63: What is it? What to know

 


Spherical Cow Consulting

What WSIS+20 Taught Me About Digital Identity and Global Governance

I went to Geneva to understand what, if anything, people were saying regarding digital identity and standards in a governance-focused forum. My brain is now full. I adore the topic of identity and the standards development process; everything from the brilliant minds, the challenges, and the intense edge cases. The post What WSIS+20 Taught Me About Digital Identity and Global Governance appeared

“I went to Geneva to understand what, if anything, people were saying regarding digital identity and standards in a governance-focused forum. My brain is now full.”

I’m an identity and standards geek. I adore the topic of identity and the standards development process; everything from the brilliant minds, the challenges, and the intense edge cases. (Well, some of the challenges. I could do without a few.)

But I also recognize that both the identity industry and the standards process have serious issues, especially when it comes to the diversity of representation and issues of governance. Too often, we hear from the same people in the same rooms, solving the same problems.

So, when I knew I’d be in Geneva for the first Global Digital Collaboration Conference (which, by the way, exceeded my expectations), I applied for accreditation to attend the WSIS+20 High-Level Event, hosted by the ITU. The WSIS+20 is, according to their website, “an existing multistakeholder United Nations (UN) process on digital governance and cooperation with a vision of fostering people-centered, inclusive, and development-oriented information and knowledge societies.” My goal was pretty simple: listen to people I don’t usually get to hear from. Try to understand why the worlds of digital identity, governance, and Internet policy remain so siloed, despite everyone’s insistence that this is all interconnected.

I’ve learned a lot. I’m not sure what to do with all of it yet, but I’m glad I came.

A Digital Identity Digest podcast episode accompanies this post: "What WSIS+20 Taught Me About Digital Identity and Global Governance" (11:24).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Defending Our Voice: Participation ≠ Power

Let’s start with the first session I attended: “Defending Our Voice: Global South Participation in Digital Governance.” Kemly Camacho (Association for Progressive Communications) moderated the session, which included speakers from IT for Change, Derechos Digitales, the UN Human Rights Office, and the Brazilian Internet Steering Committee.

They were primarily discussing governance, not technical standards, but a lot of what they said seemed relevant for the standards world, too. Right out of the gate, the first speaker, Nandini Chamim, laid out a set of points I think everyone in identity and standards work should hear:

Presence ≠ Participation: Just being in the room doesn’t mean having influence. Real participation means being able to shape agendas and priorities.

Multistakeholder ≠ Public Policy: Equal status in dialogue doesn’t automatically create legitimate public policy. What matters is how different interests are mediated to reach consensus.

Legitimacy must be earned: The NETmundial 2014 process raised this issue: legitimacy in Internet governance must be earned, not assumed through default institutional formats.

Trust deficit: Historical data shows ongoing public skepticism. Governance spaces are often dominated by powerful countries and corporations, sidelining the issues important to less-represented groups.

Technical ≠ Apolitical: Since WSIS, we’ve seen how technical standards often embed political values. They shape not only systems, but also societal norms.

Openness must be questioned: When we talk about openness, we should ask: openness towards what (i.e., open to what kind of issues?), and for whom?

Even when civil society groups are invited, it’s often tokenistic; visa issues, language barriers, and lack of funding limit meaningful engagement. And when they do show up, there’s often no follow-up, no feedback loops, and no real seat at the agenda-setting table.

IAM Is a Multi-Tool. We’re Not the Whole Toolbox.

Another thing that became clear throughout the event was that many of the people here are tackling big problems, and digital identity isn’t exactly one of them.

This reminded me of a conversation I had with Andrew Hindle and Richard Bird at the bar during Identiverse 2025. (All the best conversations seem to happen at the bar.) We talked about how siloed the identity team is within most organizations, even though IAM touches every part of the business.

That conversation led me to understand that identity people often think in systems, but the rest of the organization thinks in functions. HR, finance, compliance, marketing—they’ve all got their own language and priorities. And understanding IAM isn’t in their job description.

WSIS+20 made me realize that the same kind of disconnect exists at the global level, too.

The people in these rooms work on global human rights, economic justice, equity, ethics, and environmental resilience. To them, identity is just one tool in a thousand-piece toolbox. Standards are just one governance process among many. Most of them have never heard of SAML, FIDO, or the W3C Digital Credential API, and don’t need to. They don’t use the terms CIAM, workforce identity, or even authentication and authorization. They have use cases that require things like identity verification, but it’s not on their radar.

What they do understand is that technology is not neutral. Technologists and policymakers must shape it around a platform of human rights. Where I’ve previously written that “the technology is ready, the governance is not,” what I saw here is that the governance work is happening, but it’s happening in a different room, with a very different vocabulary.

So what do we do about the gap?

It’s tempting to say we need to get technologists into policy rooms and policymakers into technical working groups. But I don’t think that’s the answer, at least not by itself. This isn’t a matter of “getting in the room.” It’s about figuring out how to share power, information, and influence across two very different systems.

The only halfway-satisfying idea I have right now is this: mandated consultation, in both directions.

Policy decisions should be reviewed by expert technical groups. Technical standards should be reviewed by policymakers and civil society.

Even that solution frustrates me. It’ll slow processes down further in a time when everything else is speeding up. It may leave even more room for de facto standards to take root via market dominance. But I can’t think of another way to bridge the growing legitimacy gap between how we make standards and how people expect governance to work. Consultation also won’t be enough if there’s no way to track what feedback was given, what was included, and, critically, what was left out and why. That’s not just a process problem. That’s a legitimacy problem.

If you have other suggestions, I’m listening.

Want to stay updated when a new post comes out? I write about digital identity and related standards—because someone has to keep track of all this! Subscribe to get a notification when new blog posts and their audioblog counterparts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:04]
Welcome to the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. I’m Heather Flanagan, and every week I break down interesting topics in the field of digital identity—from credentials and standards to browser weirdness and policy twists.

If you work with digital identity but don’t have time to follow every specification or hype cycle, you’re in the right place.

Let’s get into it.

Defining the Governance Gap

[00:00:30]
So I went to Geneva for the World Summit on the Information Society (WSIS) meeting. It’s a multistakeholder United Nations process, organized by the ITU, that focuses on digital governance and cooperation.

My goal? To understand what, if anything, people were saying about digital identity and standards in a governance-focused forum.

And it was… really interesting. I’m very glad I went.

Now, if you know me, you know I’m an identity and standards geek. I love the process—well, most of it. The brilliant minds, the weird edge cases, the moments where it feels like we’re collectively inching the Internet forward? I live for that.

But I also recognize the industry’s flaws:

Representation gaps
Governance issues
Repetition of the same voices and perspectives

And that’s not how progress works.

Finding WSIS and Listening Beyond the Norm

When I found out I’d already be in Geneva for the Global Digital Collaboration Conference (which far exceeded expectations), I applied for accreditation to attend the WSIS+20 High-Level Event.

According to its official description, WSIS is about:

“Fostering people-centered, inclusive, and development-oriented information and knowledge societies.”

In real terms? It’s the room where governments, civil society, and international organizations gather to talk about the future of digital governance on a global scale.

My objective was simple:
Listen to people I don’t normally hear from.
And try to understand why identity, governance, and Internet policy feel like separate worlds, even though they’re clearly interconnected.

The Session That Set the Tone

[00:02:43]
The first session I attended was titled:
Defending Our Voice: Global South Participation in Digital Governance.

It was moderated by Kemly Camacho from the Association for Progressive Communications, with panelists from:

IT for Change
Derechos Digitales
UN Human Rights Office
Brazilian Internet Steering Committee

They weren’t there to talk about standards, but what they said resonated powerfully.

A speaker named Nandini Chamim offered some compelling truths:

Presence ≠ Participation: Just being in the room doesn’t mean influencing the agenda. Real participation means shaping the priorities, not just reacting to them.

Multistakeholder ≠ Public Policy: Equal footing in a discussion doesn’t guarantee fair policy outcomes. What really matters is how interests are mediated and resolved.

Legitimacy Must Be Earned: She referenced the NETmundial 2014 process, which emphasized that governance legitimacy is earned, not assumed, just because a structure looks inclusive on paper.

There’s a Public Trust Deficit: Long-standing skepticism exists due to powerful countries and corporations dominating governance spaces, often sidelining less-represented voices.

And perhaps most importantly:
Being technical is not apolitical.
The design of standards reflects values. It shapes systems. It sets social norms.

Tokenism, Risk, and Real Costs

[00:04:39]
Even when civil society is invited, participation is often tokenistic:

Visa problems
Language barriers
No travel funding
No translation or agenda
No follow-up mechanism

As one speaker said:

“Participation is costly. And it’s risky.”

Sometimes, just showing up carries political risk. And even then, there’s no guarantee your voice will shape the outcome.

As a working group chair in a standards organization, this hit hard. I try to be inclusive—but I often rely on those who are willing to speak up.

Clearly, I need to do better.

Identity Isn’t the Toolbox—It’s Just a Tool

[00:06:01]
Here’s the realization:
Identity and access management (IAM) is like a multi-tool—it’s handy, flexible, and powerful.
But it’s not the whole toolbox.

At the WSIS event, most people weren’t even thinking about digital identity.

Instead, they were focused on:

Human rights
Economic justice
Environmental resilience

Digital identity was just one small piece of their broader challenges.

This reminded me of a conversation I had at Identiverse earlier this year—with Andrew Hindle and Richard Bird—about how siloed IAM projects tend to be inside organizations.

Even though identity touches everything (HR, finance, compliance, security), it often goes unacknowledged across domains.

Why?

IAM folks think in systems
Others think in functions

Different language. Different priorities. Different goals.

And at a global level, it’s the same. The WSIS crowd? They’ve never heard of SAML or FIDO. They’re not discussing consumer or workforce identity.

They’re talking about:

Data sovereignty
Rights
Accountability

They expect technologists to embed human rights into the systems they build.

Governance Is Happening… Somewhere Else

[00:07:42]
I used to say:

“Technology is ready; governance is not.”

Now, I’m not so sure. Governance is happening—just in other rooms, with very different vocabulary.

And that’s overwhelming.

We’ve got:

WSIS
IGF
Global Digital Collaboration
National frameworks
Regional strategies

Even experts struggle to keep up.
For civil society groups, trying to monitor all that—on tight budgets, in multiple languages—it’s nearly impossible.

That’s not just a risk. It’s already causing:

Fragmentation
Incompatibility
Conflicting outcomes

And no one wins.

The Gap Isn’t Just Presence—It’s Power

[00:08:45]
So how do we bridge that gap?

It’s tempting to say:
“Just bring technologists into policy rooms and policymakers into technical working groups.”

But that’s not enough.

Why?

Because it’s not about presence. It’s about power—and how to share it across two very different systems.

One half-formed idea:
Create mandated consultations in both directions.

Policy decisions get reviewed by expert technical groups
Technical standards get reviewed by policymakers and civil society

That should be baseline practice.

But even that frustrates me—because it’s going to slow us down. And we’re already struggling to keep up.

And the slower we go, the more room there is for de facto standards—those not built on consensus, but on market dominance.

Still, I can’t see another way to address the legitimacy gap.

And consultation alone isn’t enough without accountability.

What Happens Next?

[00:09:53]
One of the speakers noted that the WSIS Elements Paper—the framework guiding these discussions—barely included strong human rights language.

It was, in their words, “legally timid.”

So we need more than just listening.

We need:

Traceability
Transparency
Accountability

We need to track:

What feedback was given
What was included
What was excluded, and why

This isn’t just a process gap.
It’s a legitimacy problem.

If you’ve been in one of these policy rooms, or tried to bring identity work into broader governance conversations, I would truly love to hear from you.

Because honestly? I don’t see how we get from two parallel worlds to one where we can truly collaborate.

But I’m listening.

Final Thoughts

[00:10:48]
And that’s it for this week’s episode of the Digital Identity Digest.

If this made things a little clearer—or at least more interesting—share it with a friend or colleague. Connect with me on LinkedIn.

And if you enjoy the show, please subscribe and leave a rating or review on Apple Podcasts, or wherever you listen.

You can also find the full written post at sphericalcowconsulting.com.

Stay curious. Stay engaged.
Let’s get these conversations going.

The post What WSIS+20 Taught Me About Digital Identity and Global Governance appeared first on Spherical Cow Consulting.


Dock

Why Mastercard Is Betting on mDLs

During our recent podcast on how digital ID will transform payments, Leonard Botezatu, Director of Product & Service Design at Mastercard, shared a powerful idea:  A credit card issued in the U.S. works almost anywhere in the world. An mDL should work the same way. Mastercard

During our recent podcast on how digital ID will transform payments, Leonard Botezatu, Director of Product & Service Design at Mastercard, shared a powerful idea: 

A credit card issued in the U.S. works almost anywhere in the world. An mDL should work the same way.

Mastercard is backing mobile driver’s licenses (mDLs) not just as a convenience feature, but because they’re built on international identity standards. Just like EMV enabled global payments interoperability, these standards are now paving the way for global digital identity.

Why does that matter?

Because fragmented identity systems are costly. 

Today, businesses, especially those operating across borders, must deal with inconsistent ID formats, multiple onboarding flows, and incompatible verification tools. 

Mastercard’s bet is that digital ID can work everywhere, just like their payment rails do.


PingTalk

Challenges in Preparing Ecommerce Channels for the Peak Season Rush

The ecommerce peak season rush is around the corner. Here's how to prepare to improve conversions, wow your customers, and keep fraudsters at bay.

FastID

Make Attackers Cry: Outsmart Them With Deception

Fight cyber attackers with deception! Fastly's Next-Gen WAF introduces a new "Deception" action to outsmart and frustrate attackers, turning the tables on them.

Monday, 28. July 2025

Metadium

680,000 Tons of Greenhouse Gas Reduction in Cambodia, Powered by Metadium Technology

Hello from the Metadium team. Recently, the Cambodian government officially recognized Verywords for achieving a 680,000-ton reduction in greenhouse gas (GHG) emissions, marking Korea’s first successful case of an “Internationally Transferred Mitigation Outcome (ITMO).” This is more than a milestone in climate action — it demonstrates the real-world application of Metadium’s technology in a

Hello from the Metadium team.

Recently, the Cambodian government officially recognized Verywords for achieving a 680,000-ton reduction in greenhouse gas (GHG) emissions, marking Korea’s first successful case of an “Internationally Transferred Mitigation Outcome (ITMO).” This is more than a milestone in climate action — it demonstrates the real-world application of Metadium’s technology in a global climate cooperation initiative.

Why is this project significant?
Under the Paris Agreement, South Korea must reduce 291 million tons of carbon emissions by 2030, including 37.5 million tons through international reduction efforts.
Until now, no ITMO project had received formal approval, but Verywords broke new ground by earning recognition through the distribution of electric motorcycles in partnership with Cambodia.

What role did Metadium play in this project?
Metadium served as the technical backbone connecting climate action and data by implementing a system that uses Decentralized Identifiers (DID) and NFT technology to verify participation and emission reduction activities.

Public sector officials and private citizens who received electric motorcycles were issued Metadium DIDs. These DIDs enabled them to be reliably identified as participants in the reduction effort, and their activity history could be securely tracked and verified. The reduction data from each motorcycle was integrated with Metadium’s carbon-neutral eco platform and issued in the form of points, linking data with the participant’s history. NFT technology was used to verify each participant’s activity records uniquely.

Metadium’s contribution to a trusted data-driven ecosystem
For international reduction outcomes to be officially recognized, the key is to clearly and verifiably answer: Who reduced what, and how?
Metadium’s DID and NFT technologies enabled a secure and transparent connection between participants and their activity data, providing an infrastructure that made verification possible on the blockchain. This solidified the legitimacy and traceability necessary for international carbon credit transactions.

Beyond climate cooperation: Unlocking global ecosystem potential
The Verywords project goes beyond distributing electric motorcycles. It’s evolving into a sustainable ecosystem involving real-time driving data collection via IoT modules, a membership-based energy usage model, and a battery reuse system.
Within this structure, Metadium’s DID and NFT-based verification infrastructure demonstrated strong potential for expansion in Southeast Asia and other developing markets where climate technology is in demand.

Conclusion: Metadium as a foundation for the global ESG ecosystem
The Verywords project is a meaningful first step in proving that the international carbon reduction model can work in practice. Metadium played a core role by providing trust-based technology that connects participants and data.
Moving forward, Metadium will continue to contribute to the global ESG ecosystem by offering a scalable infrastructure rooted in DID and blockchain technology — supporting carbon reduction and sustainable development goals worldwide.

The Metadium Team


Website | https://metadium.com Discord | https://discord.gg/ZnaCfYbXw2 Telegram(EN) | http://t.me/metadiumofficial Twitter | https://twitter.com/MetadiumK Medium | https://medium.com/metadium

680,000 Tons of Greenhouse Gas Reduction in Cambodia, Powered by Metadium Technology was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ockam

Brave Journey To Freedom

The concept of an arranged marriage was never alien to me. I had been brought up to accept this as a normal aspect of my culture. Continue reading on Clubwritter »

The concept of an arranged marriage was never alien to me. I had been brought up to accept this as a normal aspect of my culture.

Continue reading on Clubwritter »


Okta

Secure Your Express App with OAuth 2.0, OIDC, and PKCE

Every web application needs authentication, but building it yourself is risky and time-consuming. Instead of starting from scratch, you can integrate Okta to manage user identity and pair Passport with the openid-client library in Express to simplify and secure the login flow. In this tutorial, you’ll build a secure, role-based expense dashboard where users can view their expenses tailored to thei

Every web application needs authentication, but building it yourself is risky and time-consuming. Instead of starting from scratch, you can integrate Okta to manage user identity and pair Passport with the openid-client library in Express to simplify and secure the login flow. In this tutorial, you’ll build a secure, role-based expense dashboard where users can view their expenses tailored to their team.

Check out the complete source code on GitHub and get started without setting it up from scratch.

Table of Contents

Why use Okta for authentication
Why use PKCE in OAuth 2.0
A secure web app using Express, OAuth 2.0, and PKCE
Create your Express project and install dependencies
Configure environment variables for OIDC authentication
Create the Okta OIDC web application
Build the Express app
Define team mapping and sample expenses
Create a file to handle authentication
Set up routing in Express
Add EJS views in Express
Run the Express app with authentication
Learn more about OAuth 2.0, OIDC, and PKCE

Why use Okta for authentication

Building your own authentication system, with all its credential, session, and token handling, is risky and can expose your application to serious vulnerabilities.

Okta provides a secure, scalable, and standards-based solution using OpenID Connect (OIDC) and OAuth 2.0. It also integrates seamlessly with OIDC client libraries for your favorite tech stack and lets your app obtain tokens through standard flows.

Why use PKCE in OAuth 2.0

To further strengthen security, this project uses PKCE (Proof Key for Code Exchange), defined in RFC 7636. PKCE is a security extension to the Authorization Code flow. Developers initially designed PKCE for mobile apps, but experts now recommend it for all OAuth clients, including web apps. It helps prevent CSRF and authorization code injection attacks, which makes it valuable for every type of OAuth client, even confidential clients such as web apps that use client secrets. As OAuth 2.0 has steadily evolved, security best practices have also advanced. RFC 9700: Best Current Practice for OAuth 2.0 Security captures the consensus on the most effective and secure implementation strategies. Additionally, the upcoming OAuth 2.1 draft requires PKCE for all authorization code flows, reinforcing it as a baseline security standard.

With Okta, you can implement modern authentication features and focus on your application logic without worrying about authentication infrastructure.

A secure web app using Express, OAuth 2.0, and PKCE

Let’s build an expense dashboard where users log in with Okta and view spending data based on their role. Whether they work in Finance, Marketing, or HR, each team views only its own expenses. To keep things minimal in this demo project, we’ll define roles and users directly in the app.

You’ll also use OpenID Connect (OIDC) through the openid-client library for authentication. Then, you’ll map each user’s email from the ID token to a team. The dashboard applies principles of least privilege and displays expenses by team, so each user sees only their department’s spending.
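As a hypothetical sketch of that mapping (the tutorial's actual data and helper names may differ), the email claim from the ID token can drive a simple team lookup and expense filter:

```javascript
// Illustrative only: map user emails (from the ID token) to teams.
const TEAM_BY_EMAIL = {
  'fin.user@example.com': 'Finance',
  'mkt.user@example.com': 'Marketing',
  'hr.user@example.com': 'HR',
};

// Sample in-app expense data, one row per team.
const EXPENSES = [
  { team: 'Finance', item: 'Audit software', amount: 1200 },
  { team: 'Marketing', item: 'Ad campaign', amount: 3400 },
  { team: 'HR', item: 'Job board postings', amount: 800 },
];

// Least privilege: return only the rows for the caller's team.
function expensesFor(email) {
  const team = TEAM_BY_EMAIL[email];
  return EXPENSES.filter((e) => e.team === team);
}

console.log(expensesFor('mkt.user@example.com'));
// [ { team: 'Marketing', item: 'Ad campaign', amount: 3400 } ]
```

An unknown email maps to no team and therefore sees no expenses at all, which is the safe default.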

Prerequisites

Node.js installed (v22+ recommended)

Okta Integrator Free Plan org

Create your Express project and install dependencies

Create a new project folder named express-project-okta, and open a terminal window in the project folder.

Initialize a new Node.js project:

npm init -y

Install the required packages:

npm install express@5.1 passport@0.7 openid-client@6.6 express-session@1.18 ejs@3.1 express-ejs-layouts@2.5 dotenv

Now, install the development dependencies:

npm install --save-dev nodemon

In the package.json file, update the scripts property with the following:

"scripts": { "start": "nodemon index.js" }

What do these dependencies do?

These installed packages become your Express project’s dependencies.

express: Handles routing and HTTP middleware for your web app

passport: Authentication middleware that persists the logged-in user across requests via the session

openid-client: Node.js OIDC library with PKCE support; handles the OAuth handshake and token exchange.

express-session: Manages user sessions on the server

dotenv: Loads environment variables from a .env file

ejs: Enables dynamic HTML rendering using embedded JavaScript templates

express-ejs-layouts: Adds layout support to EJS, helping manage common layout structures across views

Configure environment variables for OIDC authentication

Create a .env file in the root directory with placeholders for your Okta configuration.

OKTA_ISSUER=
OKTA_CLIENT_ID={yourClientId}
OKTA_CLIENT_SECRET={clientSecret}
APP_BASE_URL=http://localhost:3000
POST_LOGOUT_URL=http://localhost:3000

In the next step, you’ll get these values from your Okta Admin Console.

Create the Okta OIDC web application

Before you begin, you’ll need an Okta Integrator Free Plan account. To get one, sign up for an Integrator account. Once you have an account, sign in to your Integrator account. Next, in the Admin Console:

Go to Applications > Applications
Click Create App Integration
Select OIDC - OpenID Connect as the sign-in method
Select Web Application as the application type, then click Next

Enter an app integration name

Configure the redirect URIs:
Sign-in redirect URIs: http://localhost:3000/authorization-code/callback
Sign-out redirect URIs: http://localhost:3000
In the Controlled access section, select the appropriate access level
Click Save

Where are my new app's credentials?

Creating an OIDC Web App manually in the Admin Console configures your Okta Org with the application settings.

After creating the app, you can find the configuration details on the app’s General tab:

Client ID: Found in the Client Credentials section
Client Secret: Click Show in the Client Credentials section to reveal it
Issuer: Found in the Issuer URI field for the authorization server, which appears when you select Security > API from the navigation pane

You’ll need these values for your application configuration:

OKTA_ISSUER="https://dev-133337.okta.com/oauth2/default"
OKTA_CLIENT_ID="0oab8eb55Kb9jdMIr5d6"
OKTA_CLIENT_SECRET="NEVER-SHOW-SECRETS"

Your Okta domain is the first part of your issuer, before /oauth2/default.

NOTE: You can also use the Okta CLI Client or Okta PowerShell Module to automate this process. See this guide for more information about setting up your app.

Build the Express app

Create an index.js file in your project root. It serves as the main entry point for your application. Use it to initialize the Express app, set up the routes, and configure Passport to manage user sessions by serializing and deserializing users on each request.

import express from 'express';
import session from 'express-session';
import passport from 'passport';
import expressLayouts from 'express-ejs-layouts';
import routes from './routes.js';

const app = express();

app.set('view engine', 'ejs');
app.use(expressLayouts);
app.set('layout', 'layout');

app.use(express.urlencoded({ extended: false }));
app.use(
  session({
    // For production, load a strong secret from an environment variable
    secret: "your-hardcoded-secret",
    resave: false,
    saveUninitialized: true,
  })
);

app.use(passport.initialize());
app.use(passport.session());

passport.serializeUser(function (user, done) {
  done(null, user);
});
passport.deserializeUser(function (obj, done) {
  done(null, obj);
});

app.use('/', routes);

app.listen(3000, () => {
  console.log('Server listening on http://localhost:3000');
});

Define team mapping and sample expenses

Create a utils.js file to serve as a data module for your project. This file includes a user-to-team mapping and has dummy expense data for each team, covering all teams configured for testing in your web app.

The application determines the user’s team context from the email claim in the ID token and filters the expense list accordingly, so the dashboard displays only that team’s data.

To customize the data, open utils.js and update the following objects:

ALL_TEAMS_NAME - an array listing all teams in your organization

userTeamMap - maps each user’s email (or “admin” for full access) to a specific team

dummyExpenseData - contains sample expense data for each team

export const ALL_TEAMS_NAME = ["finance", "hr", "legal", "marketing", "dev advocacy"];

export const userTeamMap = {
  "hannah.smith@task-vantage.com": "admin",
  "grace.li@task-vantage.com": "legal",
  "frank.wilson+@task-vantage.com": "dev advocacy",
  "carol.lee@task-vantage.com": "finance",
  "alice.johnson@task-vantage.com": "marketing",
  "sarah.morgan@task-vantage.com": "hr",
};

export const dummyExpenseData = {
  finance: [
    { name: "Alice Johnson", item: "Product Launch Campaign", amount: 1200 },
    { name: "Bob Smith", item: "Promotional Material", amount: 450 },
    { name: "Carol Lee", item: "Team Lunch", amount: 180 },
    { name: "David Kim", item: "Event Booth", amount: 950 },
  ],
  hr: [
    { name: "Eve Martinez", item: "Internet", amount: 300 },
    { name: "Frank Wilson", item: "Compliance Training", amount: 600 },
    { name: "Grace Li", item: "Conference Travel", amount: 1500 },
    { name: "Henry Zhang", item: "Team Offsite", amount: 1000 },
  ],
  marketing: [
    { name: "Alice Johnson", item: "Payroll Processing", amount: 750 },
    { name: "Carol Lee", item: "Compliance Training", amount: 400 },
    { name: "Eve Martinez", item: "Team Lunch", amount: 200 },
    { name: "Frank Wilson", item: "Team Offsite", amount: 850 },
  ],
  legal: [
    { name: "Grace Li", item: "Event Booth", amount: 1100 },
    { name: "David Kim", item: "Product Launch Campaign", amount: 1300 },
    { name: "Bob Smith", item: "Conference Travel", amount: 1250 },
    { name: "Henry Zhang", item: "Team Lunch", amount: 170 },
  ],
  "dev-advocacy": [
    { name: "Eve Martinez", item: "Internet", amount: 280 },
    { name: "Frank Wilson", item: "Payroll Processing", amount: 720 },
    { name: "Grace Li", item: "Compliance Training", amount: 500 },
    { name: "Alice Johnson", item: "Team Offsite", amount: 950 },
  ],
};

export function getModifiedTeam(team) {
  if (!team?.trim()) return [];

  const toPascalCase = (str) =>
    str
      .trim()
      .split(/\s+/)
      .map((word) => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
      .join(' ');

  const toKebabCase = (str) => str.trim().toLowerCase().split(' ').join('-');

  if (team === 'admin') {
    return ALL_TEAMS_NAME.map((element) => ({
      id: toKebabCase(element),
      label: toPascalCase(element),
    }));
  }

  return [
    {
      id: toKebabCase(team),
      label: toPascalCase(team),
    },
  ];
}

The file also defines getModifiedTeam, a helper that converts a team name into an array of objects. Each object has an id and a label. If the team is admin, the function returns an object for every entry in ALL_TEAMS_NAME; otherwise, it returns a single object for the specified team. Later in the project, the app calls this function to transform the user’s team information.
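To see the helper in action, here is a standalone check of getModifiedTeam with its logic copied from the utils.js module above (the case-conversion helpers are hoisted out of the function only for brevity):

```javascript
// Standalone check of the getModifiedTeam helper from utils.js.
const ALL_TEAMS_NAME = ["finance", "hr", "legal", "marketing", "dev advocacy"];

// "dev advocacy" -> "Dev Advocacy"
const toPascalCase = (str) =>
  str.trim().split(/\s+/)
    .map((w) => w.charAt(0).toUpperCase() + w.slice(1).toLowerCase())
    .join(' ');

// "dev advocacy" -> "dev-advocacy" (matches the dummyExpenseData keys)
const toKebabCase = (str) => str.trim().toLowerCase().split(' ').join('-');

function getModifiedTeam(team) {
  if (!team?.trim()) return [];
  if (team === 'admin') {
    return ALL_TEAMS_NAME.map((t) => ({ id: toKebabCase(t), label: toPascalCase(t) }));
  }
  return [{ id: toKebabCase(team), label: toPascalCase(team) }];
}

console.log(getModifiedTeam('dev advocacy'));
// → [ { id: 'dev-advocacy', label: 'Dev Advocacy' } ]

console.log(getModifiedTeam('admin').map((t) => t.id));
// → [ 'finance', 'hr', 'legal', 'marketing', 'dev-advocacy' ]
```

Note that the kebab-cased id is what the /team/:id route later uses to look up expenses, which is why the dummyExpenseData key for Dev Advocacy is "dev-advocacy".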

Create a file to handle authentication

Create an auth.js file for this step. This file uses the openid-client library to handle the OIDC flow: it logs users in, exchanges the authorization code for tokens, and logs them out. It also defines a middleware that guards protected routes.

In the auth.js file, add the following code:

import * as client from "openid-client";
import "dotenv/config";
import { getModifiedTeam, userTeamMap } from './utils.js';

async function getClientConfig() {
  return await client.discovery(
    new URL(process.env.OKTA_ISSUER),
    process.env.OKTA_CLIENT_ID,
    process.env.OKTA_CLIENT_SECRET
  );
}

export async function login(req, res) {
  try {
    const openIdClientConfig = await getClientConfig();
    const code_verifier = client.randomPKCECodeVerifier();
    const code_challenge = await client.calculatePKCECodeChallenge(code_verifier);
    const state = client.randomState();

    req.session.pkce = { code_verifier, state };
    req.session.save();

    const authUrl = client.buildAuthorizationUrl(openIdClientConfig, {
      scope: "openid profile email offline_access",
      state,
      code_challenge,
      code_challenge_method: "S256",
      redirect_uri: `${process.env.APP_BASE_URL}/authorization-code/callback`,
    });

    res.redirect(authUrl);
  } catch (error) {
    res.status(500).send("Something failed during the authorization request");
  }
}

function getCallbackUrlWithParams(req) {
  const host = req.headers["x-forwarded-host"] || req.headers.host || "localhost";
  const protocol = req.headers["x-forwarded-proto"] || req.protocol;
  const currentUrl = new URL(`${protocol}://${host}${req.originalUrl}`);
  return currentUrl;
}

export async function authCallback(req, res, next) {
  try {
    const openIdClientConfig = await getClientConfig();
    const { pkce } = req.session;
    if (!pkce || !pkce.code_verifier || !pkce.state) {
      throw new Error("Login session expired or invalid. Please try logging in again.");
    }

    const tokenSet = await client.authorizationCodeGrant(
      openIdClientConfig,
      getCallbackUrlWithParams(req),
      {
        pkceCodeVerifier: pkce.code_verifier,
        expectedState: pkce.state,
      }
    );

    const { name, email } = tokenSet.claims();
    const teams = getModifiedTeam(userTeamMap[email]);
    const userProfile = {
      name,
      email,
      teams,
      idToken: tokenSet.id_token,
    };

    delete req.session.pkce;

    req.logIn(userProfile, (err) => {
      if (err) {
        return next(err);
      }
      return res.redirect("/dashboard");
    });
  } catch (error) {
    console.error("Authentication error:", error.message);
    return res.status(500).send(`Authentication failed: ${error.message}`);
  }
}

export async function logout(req, res, next) {
  try {
    const openIdClientConfig = await getClientConfig();
    const id_token_hint = req.user?.idToken;
    const logoutUrl = client.buildEndSessionUrl(openIdClientConfig, {
      id_token_hint,
      post_logout_redirect_uri: process.env.POST_LOGOUT_URL,
    });

    req.logout((err) => {
      if (err) return next(err);
      req.session.destroy((err) => {
        if (err) return next(err);
        res.redirect(logoutUrl);
      });
    });
  } catch (error) {
    res.status(500).send('Something went wrong during logout.');
  }
}

export function ensureAuthenticated(req, res, next) {
  if (req.isAuthenticated()) {
    return next();
  }
  res.redirect("/login");
}

This file includes the following functions:

getClientConfig - Retrieves the authorization server’s metadata using the discovery endpoint.

login - Starts the Authorization Code + PKCE flow. It generates the values that enable PKCE: the code_verifier and code_challenge. These values, along with the state value, protect the sign-in process: PKCE guards against auth code interception attacks, and the state parameter guards against Cross-Site Request Forgery (CSRF). The openid-client library builds the sign-in URL with these values and redirects the user to Okta to complete the authentication challenge.

getCallbackUrlWithParams - Reconstructs the complete callback URL, including protocol, host, path, and query.

authCallback - Runs when Okta redirects the user back to the app after the authentication challenge succeeds. At this point, the redirect URL back into the application includes the auth code. The handler verifies that the state value matches the parameter from the first redirect. Once verified, the openid-client library exchanges the auth code for tokens, adding the code_verifier to the token request. The authorization server validates both the auth code and the code_verifier to ensure the request comes from the client that made the original authentication request, mitigating attacks that use stolen auth codes. With valid tokens in hand, the handler runs the app’s business logic, such as mapping the user to a team and storing the profile details and ID token in the session. If everything succeeds, it redirects the user to the dashboard.

logout - Logs the user out of the app and redirects to the post-logout URL.

ensureAuthenticated - Middleware that allows authenticated users to proceed and redirects everyone else to the login page.

Set up routing in Express

Now things start to come together and feel like a real app. The routes.js file defines all the essential routes, from login and logout to viewing your profile, the expense dashboard, and individual team expense pages. The app handles each endpoint’s core logic and checks a user’s authentication status before granting access to protected pages.

It acts as our app’s traffic controller, directing users to the right pages and ensuring that only logged-in users can view sensitive information like the expense dashboard or group details. This structure keeps our app organized and secure and lays the foundation for a smooth user experience.

import express from "express";
import "dotenv/config";
import { authCallback, ensureAuthenticated, login, logout } from "./auth.js";
import { dummyExpenseData } from './utils.js';

const router = express.Router();

router.get("/", (req, res) => {
  res.render("home", { title: "Home", user: req.user });
});

router.get("/login", login);

router.get("/authorization-code/callback", authCallback);

router.get("/profile", ensureAuthenticated, (req, res) => {
  res.render("profile", { title: "Profile", user: req.user });
});

router.get("/dashboard", ensureAuthenticated, (req, res) => {
  const team = req.user?.teams || [];
  res.render("dashboard", {
    title: "Dashboard",
    user: req.user,
    team,
  });
});

router.get("/team/:id", ensureAuthenticated, (req, res) => {
  const teamId = req.params.id;
  const teamList = req.user?.teams || [];
  const team = teamList.find((team) => team.id === teamId);
  if (!team) {
    return res.status(404).send("Team not found");
  }
  const expenses = dummyExpenseData[teamId] || [];
  const total = expenses.reduce((sum, exp) => sum + exp.amount, 0);
  res.render("expenses", {
    title: team.label, // team objects expose label, not name
    user: req.user,
    team,
    expenses,
    total,
  });
});

router.get("/logout", logout);

export default router;

Add EJS views in Express

Now it’s time to give the app a user interface. You’ll use EJS templates to build pages that respond dynamically to who’s logged in and what data they see, plus express-ejs-layouts to share common layout structure across views.

Create a folder named views, then add the following EJS files:

home.ejs

<% if (user) { %>
  <h1>Welcome, <%= user.name || 'User' %>!</h1>
<% } else { %>
  <h1>Welcome</h1>
<% } %>
<p class="lead">Log your expenses and manage your team's spending on the dashboard.</p>
<% if (user) { %>
  <a href="/dashboard" class="btn btn-primary">Go to Dashboard</a>
<% } else { %>
  <a href="/login" class="btn btn-success">Login</a>
<% } %>

profile.ejs

<h1>Profile</h1>
<p><h2 style="display: inline-block; margin: 0; font-size: 16px;">Name:</h2> <%= user.name %></p>
<p><h2 style="display: inline-block; margin: 0; font-size: 16px;">Email:</h2> <%= user.email %></p>

layout.ejs

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title><%= typeof title !== 'undefined' ? title : 'Expense Dashboard' %></title>
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" />
    <style>
      html, body { height: 100%; margin: 0; }
      body { display: flex; flex-direction: column; }
      .content { flex: 1; }
      .team-heading {
        display: inline-block;
        font-weight: 600;
        color: #2c3e50;
        margin-bottom: 1rem;
      }
    </style>
  </head>
  <body>
    <nav class="navbar navbar-expand-lg navbar-dark bg-primary mb-4">
      <div class="container">
        <a class="navbar-brand" href="/dashboard">Expense Dashboard</a>
        <div>
          <% if (user) { %>
            <a href="/dashboard" class="btn btn-light btn-sm me-2">Dashboard</a>
            <a href="/profile" class="btn btn-light btn-sm me-2">Profile</a>
            <a href="/logout" class="btn btn-danger btn-sm">Logout</a>
          <% } else { %>
            <a href="/login" class="btn btn-success btn-sm">Login</a>
          <% } %>
        </div>
      </div>
    </nav>
    <main class="container content">
      <%- body %>
    </main>
    <footer class="text-center mt-5 mb-3 text-muted">
      &copy; Okta Inc. Expense Dashboard
    </footer>
  </body>
</html>

dashboard.ejs

<h1>Dashboard</h1>
<p>Welcome, <%= user.name || 'User' %></p>
<h2 style="font-size: 24px;">Your Teams</h2>
<% if (team && team.length > 0) { %>
  <ul class="list-group">
    <% team.forEach(team => { %>
      <li class="list-group-item d-flex justify-content-between align-items-center">
        <%= team.label %>
        <a href="/team/<%= team.id %>" class="btn btn-primary btn-sm">View</a>
      </li>
    <% }) %>
  </ul>
<% } else { %>
  <p>You are not part of any teams yet.</p>
<% } %>

expenses.ejs

The EJS template renders the team info and expenses data in a tabular format.

<h1><%= team.label %></h1>
<div>Welcome to the <p class="team-heading"><%= team.label %></p> team page.</div>
<br/>
<% if (expenses && expenses.length > 0) { %>
  <h2 style="font-size: 24px;">Expenses</h2>
  <table class="table table-bordered">
    <thead>
      <tr>
        <th>Name</th>
        <th>Item</th>
        <th>Amount ($)</th>
      </tr>
    </thead>
    <tbody>
      <% expenses.forEach(exp => { %>
        <tr>
          <td><%= exp.name %></td>
          <td><%= exp.item %></td>
          <td><%= exp.amount %></td>
        </tr>
      <% }) %>
    </tbody>
  </table>
  <div class="alert alert-info"><h6 style="display: inline-block; margin: 0;">Total:</h6> $<%= total %></div>
<% } else { %>
  <p>No expenses found for this team.</p>
<% } %>

Run the Express app with authentication

In your terminal, start the server:

npm start

Open your browser and navigate to http://localhost:3000.

Click Login and authenticate with your Okta account. The app then displays your Expense Dashboard, Profile, and a Log out option.

Note: When you’re signed in to the Admin Console as an admin, Okta keeps your org session active and automatically logs you into the app. To test other user accounts, open the app in an incognito window.

Admin view:

User view:

Expenses view:

And that’s it! You’ve built a secure Expense Dashboard and connected your Express application to Okta using OIDC and OAuth 2.0.

Learn more about OAuth 2.0, OIDC, and PKCE

Here’s a quick rundown of the features I used in this project to build a secure expense dashboard:

OpenID Connect (OIDC) is an identity and authentication layer built on OAuth 2.0.

Authorization Code Flow with PKCE is the most secure flow for server-side and browser-based web apps.

If you’d like to explore the whole project and skip setting it up from scratch, check out the complete source code on GitHub.

To explore further, check out these official Okta resources to learn more about the key concepts.

Authentication vs Authorization

OAuth 2.0 and OpenID Connect overview

Implement Authorization Code with PKCE

Authorization Servers in Okta

Follow us on LinkedIn, Twitter, and subscribe to our YouTube channel to see more content like this. If you have any questions, please comment below!


FastID

Navigating the Privacy-Performance Paradox

Empower publishers to thrive in a privacy-first world with Trusted Server, built on Fastly Compute. Reclaim control of ad strategy and data.

ETags: What they are, and how to use them

How to optimize your ETags to speed up your site and reduce calls to your origin without requiring significant code refactoring or content overhaul.

Saturday, 26. July 2025

liminal (was OWI)

This Week in Identity


Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage.

Looking for product or company-specific news? Log in or sign-up to Link for more detailed news and developments.

Here are the main industry highlights of this week impacting identity and fraud, cybersecurity, trust and safety, financial crimes compliance, and privacy and consent management.

🪄Innovation and New Technology Developments

Idemia Launches TPE6 Biometric Platform to Advance Enrollment for Law Enforcement and Civil Use

IDEMIA Public Security has introduced TPE6, the latest update to its LiveScan biometric enrollment platform, aimed at improving speed, accuracy, and user experience for law enforcement and civil registration. Unveiled during a Biometric Update webinar, the refreshed system includes user safety features, enhanced biometric quality feedback, and a new dual iris and facial recognition camera. Designed with input from police agencies and other users, TPE6 supports applications ranging from background checks to immigration. The platform is widely deployed across the U.S., Canada, and internationally, and offers customizable tools to suit various operational needs. (Source)

💰 Investments and Partnerships

Daylight Security Secures $7 Million Seed to Launch AI-Powered MDR Service with Human Oversight

Daylight Security, an Israeli cybersecurity startup, has raised $7 million in seed funding to launch a hybrid Managed Detection and Response (MDR) service that combines AI agents with human analysts. Backed by Bain Capital Ventures and notable Israeli investors, the company aims to address the growing complexity of cyber threats by accelerating detection and response while reducing the workload on internal teams. Founded by intelligence veterans Hagai Shapira and Eldad Rudich, Daylight’s model uses AI for data analysis and triage, with human experts making final decisions. Already in use by clients in finance and tech, the company plans to expand its team as it targets the growing MDR market. (Source)

Datadog in Talks to Acquire Upwind for $1 Billion to Expand Cloud Security in Israel

Datadog is reportedly in advanced talks to acquire Israeli cybersecurity startup Upwind Security for approximately $1 billion, just three years after the company’s founding. Upwind, which offers a comprehensive cloud-native application protection platform (CNAPP), has raised $180 million to date, including a $100 million Series A round in December 2024 that valued the company at around $900 million. Founded by former Spot.io executives, Upwind integrates multiple cloud security functions into a single platform. The potential acquisition would mark Datadog’s largest in Israel, expanding on previous smaller deals like Seekret and Ozcode. (Source)

Xelix Secures €137 Million to Expand AI-Driven Accounts Payable Automation and Global Reach

Xelix, a London-based fintech firm specializing in accounts payable (AP) automation, has secured €137 million in Series B funding led by Insight Partners. The company leverages agentic AI to detect invoice fraud, prevent overpayments, and streamline supplier communications, auditing over $750 billion in spending annually for clients like AstraZeneca and Virgin Atlantic. Its growth has been bolstered by the Helpdesk module, which helps manage supplier queries alongside its AI-powered audit tools. With this new funding, Xelix plans to enhance its platform, expand globally, and further position AP as a strategic function within finance departments. (Source)

Regnology to Acquire Wolters Kluwer FRR Unit to Strengthen Cloud-Based Regulatory Reporting Solutions

Regnology has announced plans to acquire Wolters Kluwer’s Finance, Risk and Regulatory Reporting (FRR) unit, aiming to expand its regulatory reporting capabilities and market reach. The deal is expected to enhance Regnology’s support for financial institutions by integrating FRR’s tools into its cloud-first platform, offering scalable solutions for evolving compliance needs, including Basel IV. The acquisition, pending regulatory and employee approvals, reflects Regnology’s strategy to provide unified infrastructure for both legacy and modern systems. Both companies emphasize continued service excellence and growth opportunities for clients and employees. (Source)

StrongestLayer Launches With $5.2M to Build AI-Native Email Security Against Generative Phishing Threats

StrongestLayer has launched from stealth with $5.2M in seed funding to build AI-native email security that counters the rise of generative AI phishing. Founded by veterans from Proofpoint and Google, the platform uses LLMs for advanced intent analysis and reasoning, moving beyond outdated pattern-matching. As attackers craft highly personalized phishing emails with ease, StrongestLayer offers detection and training tools tailored to each organization’s threat profile, aiming to meet the evolving challenges of AI-powered email attacks. (Source)

Stripe Acquires Orum to Expand Real-Time Payments and Bank Verification Capabilities

Stripe has acquired Orum, a U.S. fintech focused on payment orchestration and bank account verification, to strengthen its real-time payments infrastructure. Orum supports ACH, RTP, and FedNow rails and enables fast bank authentication via a single API. The move aligns with Stripe’s push beyond card payments, following investments in digital assets and open banking. Orum’s team, including CEO Stephany Kirkpatrick, will join Stripe. The deal reinforces Stripe’s position in the growing real-time payments space, where demand for fast and integrated solutions continues to rise. (Source)

Vanta Acquires Riskey to Transform Vendor Risk Management with Real-Time AI Intelligence

Vanta has acquired Riskey to enhance its Vendor Risk Management platform with real-time, AI-driven risk intelligence. The integration replaces outdated assessments with continuous monitoring to detect vendor threats proactively. Riskey’s tech adds dynamic AI scoring and alerts for breaches, misconfigurations, and leaked credentials. Vanta VRM now enables automated assessments and streamlined mitigation, cutting time and cost for IT teams. The move reinforces Vanta’s position in AI-powered trust management and boosts security with measurable ROI. (Source)

Paddle Secures $25 Million to Accelerate Global Expansion and Monetization Support for SaaS and AI Companies

Paddle has raised $25M from CIBC Innovation Banking to fuel global expansion, product development, and enterprise support—building on $293M in prior equity funding. As a Merchant of Record, Paddle simplifies payments for 6,000+ SaaS, AI, and app companies. Growth in 2025 is driven by AI adoption, Apple’s web payments shift, and a 40% annual growth rate. The company expanded to Austin and made key hires from Shopify, Intercom, and ServiceNow. Recent partnerships with Vercel and RevenueCat, along with Apple policy changes, have strengthened Paddle’s role in digital monetization. (Source)

Lansweeper Acquires Redjack to Expand Unmanaged Asset Discovery and Strengthen Cybersecurity Visibility

Lansweeper has acquired Redjack, a passive asset discovery firm that uses sensors to monitor network traffic across cloud, on-prem, container, and edge environments. Redjack’s platform offers real-time visibility into all connected assets and maps dependencies to expose shadow IT and risks. It also scores assets for resilience and business criticality. The acquisition boosts Lansweeper’s capabilities in attack surface management and third-party risk, expanding its roadmap to cover unmanaged assets. Backed by $159M in funding, Lansweeper will integrate Redjack to deepen asset intelligence and cybersecurity visibility. (Source)

⚖️ Policy and Regulatory

Sam Altman Warns of AI Voice Clone Fraud Crisis and Calls for Tech-Regulator Collaboration

At a Federal Reserve conference, OpenAI CEO Sam Altman warned of a looming fraud crisis fueled by AI-generated voice clones, calling current bank voice authentication systems insecure. He predicted a surge in sophisticated attacks using minimal audio input to mimic voices and move funds undetected. Altman also highlighted threats from AI video deepfakes and urged collaboration between tech firms and regulators. Fed Vice Chair Michelle Bowman expressed openness to partnership, and OpenAI plans to expand its presence in Washington, D.C. to support policy and regulatory engagement. (Source)

Chinese-Linked Hackers Exploit SharePoint Zero-Day to Breach Over 50 Organizations

Microsoft has attributed recent cyberattacks exploiting a zero-day vulnerability in its SharePoint server platform to Chinese state-affiliated hacking groups, including Linen Typhoon, Violet Typhoon, and Storm-2603. At least 54 organizations, such as a California energy operator and a federal health agency, have reportedly been breached. The vulnerability allows unauthorized access to on-premises SharePoint servers, enabling data theft and lateral movement across networks. Microsoft has released patches for all affected SharePoint versions and warns that unpatched systems remain at high risk of further exploitation. (Source)

Dior Data Breach Exposes Sensitive Customer Information in U.S. Following Louis Vuitton Incident

Dior has disclosed a data breach that compromised the personal information of its U.S. customers, including names, contact details, Social Security numbers, and passport information, though not payment data. The breach occurred on January 26, 2025, and has since been contained, according to third-party cybersecurity experts. Dior is offering affected individuals two years of free identity theft protection and credit monitoring. This incident follows a similar data breach reported by fellow LVMH group brand Louis Vuitton, impacting clients in multiple countries. (Source)

Retailers Confront Growing Return Fraud as Casual Dishonesty Escalates in E-Commerce Era

Return fraud is surging in the U.S., costing businesses an estimated $103B annually. While some schemes involve scams like empty box returns, much of the fraud comes from everyday consumers abusing generous return policies to “rent” or misuse items. E-commerce has worsened the issue, as online returns are harder to verify. Retailers, especially small businesses, are tightening policies and using data to flag repeat offenders. Still, many shoppers view these actions as harmless, fueling a culture of casual dishonesty in retail. (Source)

Mexico Mandates Biometric CURP and Launches Unified Identity Platform by 2026

Mexico has enacted a law mandating biometric identification for all citizens, transforming the previously optional CURP (Unique Population Registry Code) into a compulsory document. The updated CURP will include personal details, a photograph, and biometric fingerprint and iris data encoded in a QR code. The rollout of the new identifier is scheduled to be completed by February 2026. The legislation also calls for the creation of a Unified Identity Platform to integrate this data with state databases, and mandates that both public and private institutions update their systems accordingly. Additionally, a nationwide initiative to collect biometric data from minors is set to begin within 120 days. (Source)

🔗 More from Liminal

Access Our Intelligence Platform

Stay ahead of market shifts, outperform competitors, and drive growth with actionable intelligence.

Save your Spot: Evolving Identity Access Management Demo Day

Liminal Demo Day will feature the top solution providers delivering live, 15-min demos focused on real-world IAM use cases across customer and workforce access journeys.

Link Index for Data Access Control

Discover the top 24 vendors shaping Data Access Control in 2025. This Link Index reveals how organizations are managing permissions, securing sensitive data, and aligning with evolving compliance demands.

Link Index for AI Data Governance

Discover how top vendors are shaping the future of AI Data Governance through scalable controls, model oversight, and real-time compliance across complex data environments.

Link Index for Ransomware Prevention

Explore the latest Link Index on Ransomware Prevention, featuring 22 top vendors helping organizations stay resilient against evolving cyber threats.

The post This Week in Identity appeared first on Liminal.co.

Friday, 25. July 2025

Anonym

Fighting Identity Fraud with Insurance: New Revenue Streams for 2025 

Identity Fraud: A $50 billion opportunity in disguise for insurance companies 

Identity fraud isn’t slowing down. In fact, it’s accelerating.  

According to recent projections, global identity fraud losses will pass $50 billion by 2025, driven by phishing, synthetic identities, and data breaches across digital ecosystems. For insurance providers, this growing crisis is not just a threat. It’s a massive opportunity to offer high-value, proactive solutions.  

Enter identity fraud solutions for insurance: bundled tools and services that go beyond claims and coverage to actively protect customers in their everyday digital lives.  

What’s driving the surge in identity fraud? 

As more of life moves online, and with generative AI and dark web data trading on the rise, identity theft has become faster, smarter, and far more widespread. 

Key trends include:  

Synthetic identity fraud, where fake personas are created using real and fabricated data.
Phishing and account takeovers, primarily through mobile apps and SMS.
Credential stuffing, using leaked passwords to access personal accounts.
Digital impersonation, aided by AI-generated photos, voices, and documents.

Traditional insurance offerings aren’t built to handle these threats, but privacy and identity protection bundles are.  

A new layer of protection: Digital identity bundles

Leading insurers are now bundling digital identity protection with their offerings, helping policyholders stay ahead of fraud and bounce back faster when it strikes. 

Bundled identity fraud tools may include:    

Virtual cards for safer online purchases
Masked phone numbers and emails to protect personal contact info
Dark web monitoring and breach alerts
Credit and identity theft monitoring
Wallet-based credentials for secure authentication and claims access

By embedding these tools into policies, insurers add measurable value to customers’ lives while opening up entirely new lines of business.  

Why identity protection is a smart revenue play  

Here’s what makes identity fraud solutions so attractive for insurers:

Recurring revenue potential from subscription-based privacy tools
Cross-sell and upsell opportunities during digital onboarding and renewals
Improved retention through proactive risk reduction and service stickiness
Lower fraud-related claims through early detection and secure communication
Differentiation in a market where most offerings feel commoditized

Plus, identity protection is a need that transcends age and demographics. Whether you’re protecting a retiree from phishing or a young family from account takeover, the value is straightforward to communicate.  

Leading with privacy builds trust  

Consumer surveys show growing demand for brands that prioritize privacy:

81% of consumers say a company’s data practices influence their buying decisions
72% are more likely to stay loyal to brands that give them control over personal data

Offering embedded identity protection doesn’t just reduce fraud, it sends a powerful signal. You’re not just selling policies. You’re safeguarding your customers’ digital lives.  

Getting started: Partnering for success  

Insurers don’t need to build these tools in-house.  

Partnering with privacy and identity technology providers allows you to:  

Launch fast with white-labeled apps or SDKs
Customize offerings for different policyholder segments
Integrate into existing claims, onboarding, and renewal flows

From virtual communications to secure claims access, everything can be built into the policy experience; no heavy development is required.

In 2025 and beyond, identity fraud solutions for insurance won’t just be nice to have; they’ll be expected. Consumers are looking for comprehensive protection, and insurers that deliver it will earn more than just premiums. They’ll earn long-term loyalty, new revenue, and a strong position in a digital-first world.

Ready to differentiate your insurance offering with identity fraud protection?  

Anonyome Labs provides white-label privacy and identity tools tailored for insurers. Request a demo to see how you can unlock new growth with Privacy as a Service. 

The post Fighting Identity Fraud with Insurance: New Revenue Streams for 2025  appeared first on Anonyome Labs.


Dark Matter Labs

Unlocking the Value for Urban Nature: An Economic Case for Street Tree Preservation in Berlin

Site plan showing the study area and the expected tram line development

Valuing the Ecosystem Services of street trees in Berlin to inform the strategic development of the extension of the M10 tram line

This blog article provides an overview of the joint efforts undertaken in partnership with the District Office Charlottenburg-Wilmersdorf of Berlin, The Nature Conservancy in Europe gGmbH, DorfwerkStadt e.V., and Politics for Tomorrow / nextlearning e.V.

Nature often has to pave the way for the expansion of the city’s urban infrastructure. But what if we could more accurately understand the value trees provide, so we can make a stronger case for decisions that reduce tree removal?

Map showing two tram route options in the urban area, overlaid with baseline canopy cover data

Tram route options and baseline canopy cover

This map shows two tram route options in the urban area, overlaid with baseline canopy cover data:

A red dotted line represents planned Tram Line M10 (New Tram Extension). It spans from Kaiserin-Augusta-Allee through Mierendorffplatz, Osnabrücker Straße, and ends at Tegeler Weg. The alternative tram route is shown as a magenta dashed line, running through Gaußstraße and Obersstraße.

As Berlin advances its commitment to sustainable mobility, the planned extension of tram line M10 between Turmstraße and S+U Jungfernheide illustrates a familiar urban dilemma: progress at the cost of nature. The tram extension risks removing over a hundred mature street trees. Trees constantly provide a variety of benefits to the citizens around them. As built infrastructure expands, it’s easy to forget the invisible nature systems already at work: cooling our streets, cleaning our air, managing stormwater, and supporting our mental and physical health.

This is where ecosystem services valuation comes in: by quantifying the benefits trees provide, we can integrate ecological assets into infrastructure planning, allowing decision-makers to balance new infrastructure with the natural capital we already have.

A tool to value urban nature

To address this gap, the district council of Berlin Charlottenburg-Wilmersdorf and The Nature Conservancy in Europe commissioned our Trees as Infrastructure team to develop and deploy an Ecosystem Services Valuation Tool for the affected Mierendorff Island sites in Berlin. This tool translates ecological functions into benefits and, consequently, into economic metrics, thereby assigning a value to nature within financial and planning systems.

The model assessed ecosystem services within a 50-meter buffer along the tram route, focusing on quantifiable benefits in three key areas:

Climate Adaptation & Mitigation: shelter from wind, temperature moderation and carbon sequestration
Water Management & Flood Alleviation: rainfall interception and stormwater reduction
Health & Well-being: stress reduction, air quality improvements, and support for active lifestyles

To calculate these values, our model leveraged a combination of diverse local data, including high-resolution tree canopy data, detailed meteorological information, and socioeconomic demographics for Berlin. The economic value was derived by applying established valuation methods such as avoided damages (e.g. quantifying the benefits of climate regulation and flood mitigation), market prices (e.g. for carbon credits), and replacement costs (e.g. for water quality improvements). To support these calculations, we use established tools and frameworks, including GI-VAL, B£ST, and InVEST.
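As a rough illustration of how such per-tree values might be assembled, here is a minimal Python sketch combining the three method families named above. Every coefficient is a hypothetical placeholder, not a figure from the Berlin study or from GI-VAL, B£ST, or InVEST:

```python
# Illustrative sketch only: combines per-tree ecosystem service values using
# avoided damages, market prices, and replacement costs. All coefficients are
# hypothetical placeholders, not figures from the study or its tools.

def tree_value_10yr(canopy_m2: float, carbon_kg_yr: float,
                    interception_m3_yr: float, residents_within_300m: int) -> float:
    """Rough 10-year value (EUR) for one street tree."""
    CARBON_PRICE_EUR_PER_KG = 0.08        # market price proxy (hypothetical)
    AVOIDED_COOLING_EUR_PER_M2 = 1.5      # avoided damages proxy (hypothetical)
    STORMWATER_EUR_PER_M3 = 2.0           # replacement cost proxy (hypothetical)
    HEALTH_EUR_PER_RESIDENT = 0.6         # well-being proxy (hypothetical)

    annual = (carbon_kg_yr * CARBON_PRICE_EUR_PER_KG
              + canopy_m2 * AVOIDED_COOLING_EUR_PER_M2
              + interception_m3_yr * STORMWATER_EUR_PER_M3
              + residents_within_300m * HEALTH_EUR_PER_RESIDENT)
    return annual * 10  # simple linear 10-year horizon, no discounting

value = tree_value_10yr(canopy_m2=60, carbon_kg_yr=25,
                        interception_m3_yr=4, residents_within_300m=200)
print(round(value))
```

The real model layers local canopy, meteorological, and socioeconomic data onto each term; the sketch only shows the additive structure.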

We also developed a web interface for this tool so that the results could be easily shareable and accessible to a broad audience. Designed to reflect the transparency of traditional financial asset dashboards, the Ecosystem Services Dashboard enables users to explore the data in depth: they can adjust scenarios, compare outcomes, and zoom in or out on the map, encouraging a hands-on, investigative approach.

Web interface data tables and visualisations
Web interface for scenario analysis

Risks and model limitations

Our model offers valuable insights into the economic contributions of urban trees and their ecosystem services. Like any innovative tool, it has boundaries, best understood as invitations for further exploration rather than flaws.

Its accuracy depends on data quality; incomplete inputs can lead to misestimations, underscoring the need for robust local data. As with many valuation tools, our model simplifies complex natural systems to yield actionable insights, though it may not capture site-specific nuances such as microclimates or tree placement. It currently assumes a linear relationship between services and benefits, while real ecosystems often exhibit non-linear dynamics and tipping points — areas for future refinement.
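The linearity caveat can be illustrated with a toy comparison between a linear benefit curve and a saturating one; both curve shapes and all numbers here are invented purely for illustration:

```python
# Toy illustration of the linearity caveat: a linear benefit model versus a
# saturating one with diminishing returns. Numbers are invented.
import math

def linear_benefit(canopy_share: float) -> float:
    return 100.0 * canopy_share            # benefit scales 1:1 with canopy

def saturating_benefit(canopy_share: float) -> float:
    # diminishing returns: most of the benefit arrives well before full cover
    return 100.0 * (1.0 - math.exp(-5.0 * canopy_share))

for share in (0.1, 0.3, 0.6):
    print(share, round(linear_benefit(share), 1), round(saturating_benefit(share), 1))
```

A linear model under-counts the first trees planted and over-counts the marginal tree in an already green street, which is exactly the kind of non-linear dynamic flagged above.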

Then there’s the challenge of putting a price tag on nature. Valuing public commons like clean air or carbon storage remains challenging without standard market prices. Even so, indicative estimates help make the benefits of healthy ecosystems visible in policy and planning. While individual trees generate limited carbon credits, their collective impact and co-benefits are significant at scale.

Systemic shifts such as climate change, policy changes, or technological disruption, are not yet modeled but represent critical frontiers. While we focus on economic benefits, the model does not yet capture social and cultural values that communities attribute to urban green spaces or account for social vulnerability, such as how the loss of trees might disproportionately affect low-income or elderly residents.

In this light, the model is best used as one valuable piece in a broader puzzle — most powerful when combined with local expertise and qualitative insights to establish a more responsible, evidence-informed decision-making approach for urban resilience.

Our key findings

When considering the three benefit groups on climate regulation, water management and health, the main findings for the baseline situation of the standing trees as they are today were:

Baseline value: At the baseline scenario site, the total value of ecosystem services provided by all currently standing trees located within 50 metres of the road over a 10-year period is estimated at €10.5 million. This equates to an average economic value of €29,440 from ecosystem services per tree over the same period.
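A quick arithmetic check on these two published figures also tells us roughly how many trees are in scope:

```python
# Sanity check on the published baseline figures: total value divided by the
# per-tree average implies the number of standing trees in the 50 m buffer.
total_value_eur = 10_500_000   # 10-year baseline, all standing trees
avg_per_tree_eur = 29_440      # 10-year average per tree

implied_trees = total_value_eur / avg_per_tree_eur
print(round(implied_trees))    # roughly 357 trees
```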

Our model was then used to explore three distinct scenarios to compare the potential impact of tree removal:

‘Optimistic’ scenario: If the 35 trees identified in a technical survey commissioned by the BVG (Berlin’s public transport agency) were removed, the projected ecosystem services loss would be approximately €1 million over a decade.
‘Realistic’ scenario: Relates to the removal of 131 trees, those at risk of being felled because their crown diameter overlaps a certain buffer distance from the future tram line, leading to an estimated loss of approximately €4.2 million in value over a decade.
‘Alternative’ route scenario: Relates to the removal of 45 trees if the tram line were built along an alternative route, resulting in a projected loss of approximately €300,000 over a decade.

Although the alternative route maintains a canopy cover similar to the optimistic scenario, its economic impact on ecosystem services is notably lower. This difference largely comes down to a few important factors. First, the trees along the alternative path tend to be smaller, so they naturally provide fewer benefits like carbon capture, cooling, and air purification. Secondly, this route passes through an area with far fewer residents, meaning trees contribute less to direct health benefits such as stress relief, improved air quality and encouraging outdoor activity. Also, with fewer buildings and residents exposed to wind currents, the climate regulation benefits per tree are less relevant.
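The context point can be made concrete with the per-tree arithmetic implied by the three scenarios above (all figures from the study text):

```python
# Per-tree loss implied by each scenario, using the figures quoted above.
scenarios = {
    "optimistic":  {"trees": 35,  "loss_eur": 1_000_000},
    "realistic":   {"trees": 131, "loss_eur": 4_200_000},
    "alternative": {"trees": 45,  "loss_eur": 300_000},
}

for name, s in scenarios.items():
    per_tree = s["loss_eur"] / s["trees"]
    print(f"{name}: ~EUR {per_tree:,.0f} per tree")
```

The alternative route’s trees are worth only a few thousand euros each in foregone services, versus roughly thirty thousand along the planned route, which is the per-tree gap the surrounding paragraphs explain.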

This highlights a key insight for planning: the impact of a tree in a city is highly dependent on its specific context, what surrounds it, how many people it benefits, the local environmental risks, and how much the urban system relies on it to cope with those risks. Not all trees have the same value, and understanding their place within the urban fabric is key to making decisions around preserving urban nature and maximising ecosystem services.

This map displays the elderly residential distribution along the tram line route options.

What does this mean for city stakeholders?

The numbers in this study reveal a critical opportunity: urban trees are not just passive greenery but invaluable infrastructure delivering vital ecosystem services — carbon capture, air purification, cooling, stormwater management, and community well-being — that currently go unpriced and underleveraged. This structural blind spot means cities miss a chance to unlock new financing pathways that could transform how living ecosystems are maintained and expanded.

Imagine innovative financing frameworks that monetise the ecosystem value of existing trees to directly funnel resources into city households. The beneficiaries of these ecosystem services include municipal enterprises managing energy and rainwater systems, municipal housing associations, and private companies seeking to improve their sustainability profiles and creditworthiness with banks. These stakeholders could contribute to a new funding mechanism, reflecting the shared value trees provide across urban sectors. These funds could then be allocated to enhance tree maintenance, ensuring the health and longevity of the urban canopy, while simultaneously financing targeted climate adaptation measures in vulnerable areas of the district.

To support this vision, city policies must evolve:

Empower subsidiarity and support local initiatives by formally recognising the community’s active stewardship role in caring for and maintaining urban nature.
Strengthen tree protection ordinances by setting stricter tree replanting ratios (e.g. 1:5 or 1:10) and fines proportional to the projected ecosystem service losses over 10–20 years, using our study, which translates these services into monetary value, as the foundation for accurately accounting for the time lag and diminished benefits between a felled mature tree and its young replacement.
Redesign compensation frameworks to go beyond tree replacement, enabling funds to be channeled into additional appropriate interventions (e.g. unsealing, rain gardens) that respond to site-specific adaptation needs.
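As a hypothetical illustration of the ordinance idea, a fine tied to projected ecosystem service losses could be sketched as follows. The €29,440 per-tree 10-year average comes from the study; the replacement-discount factor is a made-up placeholder, not a recommended policy value:

```python
# Illustrative only: one way a fine proportional to projected ecosystem
# service losses could be computed. The per-tree 10-year average is from the
# study; young_tree_fraction is a hypothetical placeholder.
PER_TREE_10YR_EUR = 29_440

def felling_fine(replant_ratio: int, young_tree_fraction: float = 0.15) -> float:
    """Fine for one felled mature tree.

    replant_ratio: e.g. 5 for a 1:5 replanting requirement.
    young_tree_fraction: share of a mature tree's services a young
        replacement delivers over the same horizon (placeholder).
    """
    replaced_value = replant_ratio * young_tree_fraction * PER_TREE_10YR_EUR
    shortfall = max(PER_TREE_10YR_EUR - replaced_value, 0.0)
    return shortfall

print(round(felling_fine(replant_ratio=5)))
```

The structure captures the time-lag argument: the fine covers whatever service value the young replacements cannot deliver over the horizon.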

This case in Berlin reinforces a critical insight on the need for coordinated urban planning: housing and urban development, climate adaptation, and transport transitions must be designed as interconnected systems. Without such integration, we risk implementing climate solutions that inadvertently undermine the very resilience we aim to build.

On March 22 2025, our findings were formally presented by one of our team members, Sebastian Klemm, to a broad audience, including the public, political representatives, and notably, district council leads from civil engineering, green space and climate adaptation. As a result of this engagement, our tree ecosystem valuation study is being used by the district councillor to support the renegotiation of all elements of the project, including the tram line extension, its planning alternatives, proposed route, the calculation basis for compensation measures, and their intended purpose.

Sebastian Klemm presented our TreesAI findings on March 22 2025 to an audience including public attendees, political representatives, and district council leaders from civil engineering, green space, and climate adaptation.

Looking ahead

The results of this study reveal the significant, long-term societal losses that would result from removing these mature trees to extend the M10 tram line. This local tension between transport infrastructure development and the protection of nature that regulates our microclimate mirrors a wider urban planning dilemma faced by cities across Europe and beyond: how to align societal development with ecological and societal resilience.

This case points to a larger strategic question:

How can infrastructure planning be reoriented to systematically reflect public health, climate resilience, and the long-term well-being of communities in legal and fiscal decisions?

This question shaped the expert dialogue on June 20th 2025, curated and convened by Politics for Tomorrow / nextlearning.eu in collaboration with the Trees as Infrastructure team, DorfwerkStadt e.V., and the Charlottenburg-Wilmersdorf district office. The working session marked a critical step in turning the study’s findings into actionable governance frameworks. Building on the earlier public presentation, this closed session provided a platform for cross-sector reflection and collaborative policy design. By integrating ecological, health, and economic insights with legal and administrative expertise, the session enabled a shared understanding of trees as living infrastructure, emphasizing their value not just in compensation terms but within proactive planning and investment strategies.

Most importantly, this dialogue laid the foundation for systemic change, shifting from fragmented responsibilities to cooperative governance, and from isolated environmental mitigation to integrated, health-oriented resilience policies. Ideas for a coordinated collaboration strategy emerged, including piloting new planning and financing models on Mierendorff Island, embedding the valuation approach into Berlin’s compensation guidelines, and developing governance tools like a Commons Index and local operating models to unlock cross-sector investment.

The outcomes of this dialogue offer immediate policy relevance:

Proposals for adapting the M10 route planning based on comprehensive ecological and social valuation;
A replicable citywide model for integrating ecological and health data into infrastructure decisions;
A framework for embedding green infrastructure value into the planning and budgeting strategies of the district, Senate, and both public and private institutions;
An alternative to outdated cost-benefit tools like the Koch method, which continues to treat trees as depreciable assets, instead recognising their full ecological, social, and economic value as resilient, multi-solving elements.

If you are working to build cities that are equitable and ecologically sound, this is your invitation: Join us in advancing a movement for living infrastructure — where urban trees and nature are not obstacles to progress but essential infrastructure for urban resilience and liveable cities.

Full Report: Dive into the complete findings, data, and methodology in the ecosystem services valuation report > Link to PDF

Interactive Dashboard: Explore the spatial distribution and value of Berlin’s tree canopy via our Ecosystem Services Dashboard.

Partners: District Office Charlottenburg-Wilmersdorf of Berlin, The Nature Conservancy in Europe gGmbH, DorfwerkStadt e.V., and Politics for Tomorrow / nextlearning.eu

Team: Sofia Valentini, Sebastian Klemm, Chloe Treger, Gurden Batra, Caroline Paulick-Thiel

TreesAI identity created by Arianna Smaron

Unlocking the Value for Urban Nature: An Economic Case for Street Tree Preservation in Berlin was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 24. July 2025

Spruce Systems

Congressional Testimony Spotlights the Need for Secure Privacy-Preserving Digital Identity

Crypto Council for Innovation highlights SpruceID’s role in advancing secure, privacy-preserving digital credentials during a House hearing on digital asset policy.

We’d like to congratulate Alison Mangiero and the Crypto Council for Innovation on a powerful and forward-looking testimony before the House Ways and Means Subcommittee. The hearing, titled “Making America the Crypto Capital of the World,” spotlighted critical issues surrounding digital asset policy, and we’re proud that SpruceID’s work in privacy-preserving digital identity was highlighted as part of the solution.

As Alison noted in her remarks, digital assets reshape how we transfer value, access financial services, and verify identity. A key challenge in this transformation is ensuring that digital identity systems are secure, interoperable, and resistant to evolving threats like deepfakes generated by AI. That’s where blockchain-based approaches can play a defining role.

“Today, for less than $15, artificial intelligence can generate images of people and fake IDs that can fool current identity verification security solutions. But companies, like SpruceID, are working on applications of blockchain and cryptography that have security features that even AI cannot break.” - Alison Mangiero, Crypto Council for Innovation
Why Identity Is Core to Crypto

At first glance, identity and crypto may seem like separate domains, but they are deeply connected. Blockchain-based systems, by design, enable trust without intermediaries. But to participate in real-world applications, such as opening a financial account, signing a contract, or receiving government benefits, users still need a secure way to prove who they are. Privacy-preserving digital identity provides that missing link.

Using cryptographic credentials that can be selectively disclosed, individuals can prove facts about themselves (like age or residency) without oversharing personal information. This aligns with the values of the crypto ecosystem, like decentralization, privacy, and user control, while also addressing urgent needs around fraud prevention, compliance, and equitable access.
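A minimal sketch of the underlying idea, using salted hash commitments over individual claims; this assumes nothing about SpruceID’s actual implementation or any standard wire format such as SD-JWT:

```python
# Minimal sketch of selective disclosure via hash commitments. This is an
# illustration of the concept only, not SpruceID's implementation or a
# standard format (e.g. SD-JWT).
import hashlib
import secrets

def commit(claim: str, value: str, salt: str) -> str:
    return hashlib.sha256(f"{salt}|{claim}|{value}".encode()).hexdigest()

# Issuer: signs only digests, so the credential reveals nothing by itself.
claims = {"age_over_21": "true", "residency": "CA", "name": "Alice"}
salts = {k: secrets.token_hex(8) for k in claims}
signed_digests = {k: commit(k, v, salts[k]) for k, v in claims.items()}

# Holder: discloses only residency, withholding the other claims.
disclosure = ("residency", claims["residency"], salts["residency"])

# Verifier: recomputes the digest and checks it against the signed set.
claim, value, salt = disclosure
assert commit(claim, value, salt) == signed_digests[claim]
print(f"verified: {claim} = {value}")
```

The salt prevents a verifier from guessing undisclosed values by brute force; in a real system the digest set itself would carry an issuer signature.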

Real-World Deployment at Scale

This work is already happening. The California Department of Motor Vehicles has issued over two million mobile driver’s licenses using SpruceID’s technology, enabling residents to prove their identity online with strong privacy protections and safeguards against synthetic fraud. California is also exploring additional ways to unlock new efficiencies in public service delivery.

Collaborating on National Standards

Beyond state-level innovation, SpruceID is working with the National Institute of Standards and Technology (NIST) and the National Cybersecurity Center of Excellence (NCCoE) to demonstrate how digital credentials, when paired with regulatory clarity from agencies like FinCEN, can streamline Know Your Customer (KYC) checks and improve compliance across the financial sector.

As digital identity becomes foundational to safe participation in the digital economy, collaboration across public and private sectors will be key.

Building the Next Generation of Public Infrastructure

SpruceID’s mention in this testimony is just one piece of a much larger effort. We’re especially grateful to the public servants and policymakers working to ensure that digital infrastructure in the U.S. is secure, privacy-preserving, and future-ready.

Read the Full Testimony

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Ocean Protocol

DF151 Completes and DF152 Launches

Predictoor DF151 rewards available. DF152 runs July 24th — July 31st, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor.

Data Farming Round 151 (DF151) has completed.

DF152 is live today, July 24th. It concludes on July 31st. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF152 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.
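A simplified sketch of how stake-weighted settlement in a prediction feed of this kind can work; this illustrates the general predict-stake-slash mechanism only and is not the on-chain contract logic:

```python
# Simplified sketch of stake-weighted settlement in a Predictoor-style feed.
# Illustration of the general mechanism, not the actual on-chain logic.

def settle(predictions, outcome_up: bool, sales_revenue: float):
    """predictions: list of (name, predicted_up, stake). Returns payouts."""
    winners = [p for p in predictions if p[1] == outcome_up]
    losers  = [p for p in predictions if p[1] != outcome_up]
    pot = sum(stake for _, _, stake in losers) + sales_revenue
    winning_stake = sum(stake for _, _, stake in winners)
    if winning_stake == 0:                 # no correct predictors this epoch
        return {name: 0.0 for name, _, _ in predictions}
    payouts = {}
    for name, _, stake in winners:
        # winner gets stake back plus a pot share proportional to stake
        payouts[name] = stake + pot * stake / winning_stake
    for name, _, _ in losers:
        payouts[name] = 0.0                # losing stake is slashed into the pot
    return payouts

out = settle([("a", True, 10), ("b", True, 30), ("c", False, 20)],
             outcome_up=True, sales_revenue=4.0)
print(out)
```

In this toy epoch, the incorrect predictor’s stake plus feed revenue is redistributed to correct predictors in proportion to what they staked.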

4. Specific Parameters for DF152

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and ASI Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF151 Completes and DF152 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 23. July 2025

Innopay

Douwe Lycklama Joins Sibos 2025 Panel

Sibos 2025 runs from 29 September to 2 October 2025 in Frankfurt. Posted by Trudy Zomer, 23 July 2025, 11:00.

Douwe Lycklama, Senior Vice President at INNOPAY, will be joining a panel, on behalf of Oliver Wyman, at Sibos 2025 in Frankfurt, Germany. The session, titled ‘Breaking down the perfect payment experience’, will take place on Monday 29 September at 9.30 AM to discuss what truly defines a seamless payment experience in today’s digital economy.

Douwe will be part of an expert panel of international leaders, including:

Denim Deform Cengiz, ColendiBank
Mick Fennell, Temenos
Jo Jagadisch, TD
Melvyn Low, Oversea-Chinese Banking Corporation Limited (OCBC)
Bruno Mellado, BNP Paribas

Together, they’ll dive into users’ expectations, the role of regulation and technology, and how institutions can create payment journeys that offer trust, transparency, and true value.

This session is one of 250+ on the Sibos 2025 program. Themed ‘The next frontiers of global finance’, the event will explore AI, digital assets, quantum computing, cybersecurity, ESG, and more.

Explore the full Sibos 2025 program →


Ocean Protocol

Ocean Nodes Update: Transitioning into Phase 2


A preview of what’s next, and a few important changes along the way

Ocean Nodes have come a long way since their launch in August 2024. In less than a year, we’ve seen over 1.71 million nodes deployed across 70+ countries, powered by you, our community. Together, we’ve stress tested the stack, reported bugs, experimented, and helped push the infrastructure forward. This collective effort has laid a strong foundation. Now, it’s time to build on it.

As we look ahead, it’s time to enter Phase 2, a new chapter that shifts the focus toward GPU-powered compute, performance-based incentives, and a more production-grade environment.

Here’s what’s changing, and how we’re preparing for what’s next.

Phase 1 rewards wrap July 31

The Ocean Nodes community has surpassed every expectation. Your contribution has proven that decentralized compute can scale globally. That effort, and your uptime, hasn’t gone unnoticed, as we currently stand at 12.45M ROSE rewards distributed. To make space for the transition into Phase 2, we’ll be ending the current rewards system as of July 31.

Here’s why: the next stage will introduce major infrastructure updates, which require testing and refinement, with bugs and instability expected in the early stages. To keep things fair for everyone and to keep the focus on progress towards the next stage, rewards in their current form will be paused during this time. Read on.

ONBs wrap at ONB1 — Perks ahead

We’re also capping Ocean Node Badges (ONBs) at ONB1. The reason is simple: initial participation was beyond projections, with over 1.71M total nodes, and we want to ensure the system remains clear and manageable going forward.

If you’ve earned ONB1, you’ll receive exclusive benefits in Phase 2. This is our way of recognizing the early builders who helped shape and strengthen the Ocean Nodes network.

The perks tied to ONB1 will be announced when Phase 2 launches. We’re making sure they’re meaningful, as per usual.

Phase 2 begins in September — with adjusted competitive rewards system

Ocean Nodes Phase 2 is set to launch in September, with a key focus on GPU-powered compute environments. This will allow for more advanced workloads and real-world AI use cases.

With this shift, the rewards model will be adjusted. The aim is to better reflect the value that GPU-based nodes bring to the network and to support more demanding jobs such as model training and multi-stage compute workflows.

More information about the reward structure will be shared at the time of launch.

What to expect in Phase 2

Phase 2 is all about making Ocean Nodes more powerful, usable, and aligned with real-world compute needs. Here’s a sneak peek at what’s coming:

GPU Support — training, fine-tuning, and heavy workloads
Paid Compute Jobs — flexible pricing based on usage
Upgraded Monitoring System — with benchmark jobs, node history, and detailed performance metrics
Comprehensive Dashboard — clearer dashboards and logs so you can see how your node is doing
Node Configurability — choose which features to expose or disable

What’s next

August is all about preparing for Phase 2. We’ll be testing the new system, making improvements, and finalizing how the updated rewards system will work. This will be your chance to try out what’s coming.

We’ll be running benchmark compute jobs on selected GPU-enabled nodes to measure performance. These benchmarks are short, simple jobs, designed to give us insight into how different setups perform. They also help us shape a reward system that’s fair, reliable, and ready for scale.

Thanks for Building With Us

As we enter this next chapter, we want to acknowledge the effort and energy this community has invested in Ocean Nodes. Phase 1 showed us what’s possible. Phase 2 is about scaling that possibility into a reliable, compute-focused network that serves real-world use cases.

You’ve helped bring Ocean Nodes this far, and we’re just getting started.

Keep an eye on our Discord, Twitter, and blog for updates and sneak peeks as we gear up for September.

Thanks for being here. Let’s keep building!

Ocean Nodes Update: Transitioning into Phase 2 was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

ToolShell Remote Code Execution in Microsoft SharePoint: CVE-2025-53770 & CVE-2025-53771

Microsoft revealed two critical vulnerabilities, CVE-2025-53771 and CVE-2025-53770, actively exploited to compromise SharePoint servers.

DDoS in June

June’s DDoS report reveals a 250B+ request attack on a High Tech provider and the rise of the Byline Banshee. Get key insights & actionable guidance.

Tuesday, 22. July 2025

Spherical Cow Consulting

Kill the Wallet? Rethinking the Metaphors Behind Digital Identity


“Much like ‘the cloud’ (really just someone else’s computer) or ‘the superhighway’ (I never have figured that one out), the metaphor of a ‘wallet’ has become a convenient shorthand for a tangle of technical, policy, and usability decisions.”

But as we keep building out digital identity ecosystems, complete with verifiable credentials, identity wallets, and cross-jurisdictional trust models, I want to ask:

Is the metaphor still helping us? Or is it time to kill the wallet?

(Apologies to everyone who suddenly got stuck with a Bugs Bunny earworm.)

A Digital Identity Digest: Kill the Wallet? Rethinking the Metaphors Behind Digital Identity (podcast episode, 00:08:46).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Why metaphors matter

Basically, a digital wallet is a secure container for digital credentials. But metaphors are powerful: They shape user expectations, influence system design, and carry emotional and cultural baggage.

Say “wallet,” and people conjure different things:

A tap-to-pay credit card or mobile payment app
A driver’s license or ID holder
A catch-all pouch for everything from boarding passes to coffee shop punch cards to loyalty cards

This matters because the assumptions baked into that metaphor directly affect how systems are designed and how people trust (or don’t) them.

One word, too many meanings

Consider Google Wallet. It assumes it can store just about anything, provided protocols and formats are supported. Apple Wallet is similarly broad in scope but imposes a more curated, policy-heavy experience; credentials often go through approval workflows, and Apple maintains tight control over what gets displayed.

Then you have purpose-built wallets like the SIROS Foundation’s wwWallet, which explicitly aim for neutrality and open standards. In that case, “wallet” is just the delivery mechanism: Credentials come from many issuers, and the wallet doesn’t try to second-guess the user’s intent.

So far, so good. But many users still assume they’ll only need one wallet. After all, they only carry one physical one, right?

Well… not exactly.

Surprise: you’re already carrying multiple wallets

A growing number of users already interact with multiple wallet-like experiences; they just don’t recognize them as such.

Take a gym app with a scannable membership barcode. That’s not a digital wallet; it’s just displaying an unprotected credential. But a university app that stores a student ID, enables cryptographic access to campus systems, or lets students securely share transcripts? That’s starting to behave like a wallet. These apps issue, hold, and present credentials, but often without using open standards, secure storage mechanisms, or user-centric consent flows. In practice, they’re wallet-adjacent without meeting the formal definitions found in standards like ISO/IEC 18013-5 or NIST guidance.

This distinction matters when issuers or verifiers only trust credentials handled within their own apps. If every organization builds its own closed-loop container, users end up juggling multiple apps that can’t talk to each other. That may be good for organizational control, but it’s bad for user experience, portability, and interoperability.

To make sense of this ambiguity, researchers Lukkiena, de Reuver, and Bharosa offer a taxonomy of digital wallets that identifies 10 core characteristics across three levels: wallet architecture, functional capabilities, and governance model. (Thanks, Henk Marsman, for pointing me to this article!) For example, wallets can be custodial or self-sovereign, anchored to a specific platform or OS-agnostic, and focused on narrow single-issuer use cases or broader cross-domain ecosystems.

Their conclusion? There’s no universal definition of “wallet,” and that’s a problem when different actors use the same word but mean fundamentally different things. When it takes this much effort to explain what we mean by “wallet,” maybe it’s time to admit the metaphor is no longer fit for purpose.

Who controls permission and consent?

The wallet metaphor also glosses over deeper architectural questions like who’s in charge of permission and consent.

When you hand someone your physical wallet, no pop-up asks if you’re sure. You’ve already decided what to share. Digital systems, though, are expected to do better. They support selective disclosure (I hope), enforce access policies, and (ideally) prompt you when data is about to be shared.

But when the wallet is mediated by a browser or embedded in a platform you don’t control, who’s responsible for enforcing that consent? The wallet? The issuer? The verifier? The browser? Even people deeply involved don’t agree on the answers here.

The NIST blog on digital wallets offers a definition, and that definition sets the stage for various assumptions:

“A digital wallet is a native application on your mobile device—though in the future, may also be stored in the cloud—that holds and secures your VDCs… Depending on the entity issuing the VDC, users may need to download a wallet application supported by the credential issuer before a VDC can be issued to their phone.”

This is useful, but it also normalizes a model where wallets are tied to issuers, not users. If every credential needs its own issuer-approved container, we’re not talking about wallets anymore. We’re talking about app-specific credential lockers. That’s a very different interaction model and one that may undermine user control.

When regulation and design don’t talk to each other

In Europe, things get even murkier. The EU’s data protection frameworks (GDPR, eIDAS 2.0) layer in consent requirements that assume a clear user interface and intentional disclosure. A 2023 study published in the Harvard Journal of Law & Technology, however, highlighted just how far the actual UX has drifted from those principles.

In “Two Worlds Apart! Closing the Gap Between Regulating EU Consent and User Studies,” researchers Bielova, Santos, and Gray examined real consent flows and found a minefield of “dark patterns” and manipulation. Decline buttons are hidden or misleading, options are presented in confusing hierarchies, and “Accept All” is given visual prominence over granular choices.

If we’re now building digital wallets that insert themselves into this consent process, we have to ask: are we replicating these same patterns? Are we genuinely improving user control or just rebranding old manipulations?

Designing for privacy: lessons from Kantara

The Kantara Initiative’s Privacy-Enhancing Mobile Credentials (PEMC) Implementers Report offers a different and possibly more practical perspective. It doesn’t try to define “wallet” from a metaphorical standpoint. Instead, it focuses on capabilities that put the user back in charge:

“The wallet SHALL be designed to facilitate user understanding and control over what data is being shared and for what purpose. User consent SHALL be explicit, contextual, and revocable.”

That’s a higher bar than most current systems hit.

The report also stresses the importance of:

Purpose limitation: credentials should only be used for clearly defined, disclosed functions.
Transparency and auditability: users should be able to review where and how credentials have been used.
User-managed permissions: ideally, from a central UI that lets users adjust sharing policies without reissuing credentials.

These aren’t just checkboxes for compliance. They’re structural features that define trust. If your “wallet” can’t support these requirements, maybe it shouldn’t call itself one.

So… do we kill the wallet?

Maybe. Or maybe we reframe it.

The wallet metaphor has done a lot of work. It helped early adopters wrap their heads around verifiable credentials. It gave vendors a way to pitch new apps without diving into crypto protocols.

But now, it’s showing its limitations.

It implies singularity, when reality demands multiplicity. It collapses trust boundaries, hiding the difference between issuer-owned and user-controlled containers. It blurs accountability, especially when it comes to consent and user agency. And it distracts regulators, who often assume the metaphor aligns with actual practice.

If we’re serious about building systems that scale, interoperate, and respect users, we may need to put the metaphor on pause. Maybe even kill it.

Or at least, give it a long-overdue retirement party.

Bonus question: Got a better metaphor?

I’m genuinely curious: What should we call these things? If “wallet” is too narrow, too payment-focused, or just too confusing, what’s the alternative?

Inbox? Locker? Credential safe? Something new entirely? Or is the ambiguity still worthwhile for a reason I’m missing?

Drop me a note. I promise not to brand it.

Want to stay updated when a new post comes out? I write about digital identity and related standards—because someone has to keep track of all this! Subscribe to get a notification when new blog posts and their audioblog counterparts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

00:00:04
Welcome to the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. I’m Heather Flanagan, and every week I break down interesting topics in the field of digital identity—from credentials and standards to browser weirdness and policy twists.

If you work with digital identity but don’t have time to follow every specification or hype cycle, you’re in the right place.

00:00:26
Let’s get into it.

Why Do We Call It a Wallet?

00:00:30
So, have you ever stopped to wonder: why do we call it a wallet?

In digital identity, the term wallet has become so common that we don’t even think twice about it—much like the cloud (air quotes intended), which, as we know, is really just someone else’s computer.

The wallet metaphor has served as convenient shorthand. It wraps up a lot of complex technical, policy, and usability decisions into a single image that feels familiar.

But is it still serving us well? Or is it time to—dare we say—kill the wallet?

And yes, if you’re now hearing Elmer Fudd singing “Kill da Wabbit,” you’re not alone. It’s stuck in my head too.

Why Metaphors Matter

00:01:15
Metaphors help make the abstract more tangible. In digital identity, a wallet conjures up something:

Personal
Portable
Secure
That holds important things we don’t want to lose

In theory, a digital wallet does exactly that—a secure container for digital credentials.

00:01:34
However, there’s a catch.

Metaphors don’t just explain things—they shape them. They guide system design and influence both user and architect expectations.

And when a metaphor starts to mislead or restrict what’s possible, it’s time to reconsider it.

What Are We Really Talking About?

00:01:54
When we say wallet, what are we actually describing?

Sometimes, we mean a secure application that stores and presents digital credentials. But other times, we’re referring to:

A whole service ecosystem
Trust registries
Credential exchanges
Key management systems

00:02:18
This ambiguity creates confusion.

If you imagine a wallet as an app that lives only on your phone, you might not expect it to:

Sync across devices
Backup to the cloud
Integrate with browsers

So, the metaphor starts to limit understanding rather than enhance it.

Physical Wallets vs. Digital Identity

00:02:45
Think about your real wallet. You might carry:

Credit cards
A driver’s license
A photo of your dog
Maybe some cash (if you’re feeling nostalgic)

But your work ID might live on a badge you scan at the door.
Your passport is likely in a drawer.
Your vaccine certificate might be in an email or government portal.

00:03:06
Each credential lives in a different place and serves a different function. Yet digital credentials are expected to behave as a single type—all handled the same way.

That’s a problem.

The wallet metaphor reinforces the idea that if you control something, you must physically possess it. But that’s not how real life—or digital systems—work.

Delegation and Flexibility

00:03:32
We delegate trust and control all the time.

Browsers remember our passwords
Apps access our photos
Others pick up prescriptions or check in for us

00:03:44
Digital identity must support this same flexibility—not just theoretically, but by design.

If the wallet metaphor implies identity is always something you carry and only you carry, it fails to reflect:

Delegation
Guardianship
Enterprise-managed credentials

Sometimes, you don’t need to carry the credential—you just need to control access to it.

Trust, Adoption, and Governance

00:04:12
Another problem: the wallet metaphor implies that once you have your credentials, you’re done.

But really, that’s just the beginning.

For a credential to matter:

It must be accepted
It must be verifiable
It must be trusted

00:04:30
This brings us to:

Trust registries
Governance frameworks
Interoperability standards

None of these live inside the wallet. Yet without them, the wallet is just a lonely app with nowhere to go.

Who Are We Building For?

00:04:50
Are we building for everyday users—or for people like us?

The danger in sticking too closely to the wallet metaphor is that we end up designing for:

Tech-savvy users
Privacy-conscious individuals
People willing to manage keys and credentials

00:05:08
But most users aren’t in that space. They just want things to work.

They want identity to be seamless—not a side project.
And they certainly don’t want to be blamed for losing access when their private key is wiped in a phone reset—or dropped in a beer.

Rethinking Security and Usability

00:05:32
We need to stop designing for the metaphor. People aren’t all ready to manage their own cryptographic infrastructure—and that’s okay.

Security isn’t one-size-fits-all. Usability isn’t either.

There are cases where:

Cloud-based key management offers better recovery options
Delegation to trusted devices boosts usability
Giving users a choice increases adoption

We shouldn’t cling to the idea that the most secure option is always the only secure option.

Do We Kill the Wallet?

00:06:08
Not necessarily.

The wallet metaphor has brought us this far. It’s familiar, useful, and still works in many settings.

But we should be:

More careful in how we use it
Clearer about what we mean
Open to other metaphors—or better yet, clearer explanations

00:06:30
Maybe it’s time for:

Identity lockers
Digital toolboxes
Credential dashboards

Or maybe it’s time to explain what these systems actually do—without relying on metaphor at all.

Language Matters

00:06:48
The user brings their own context. That’s who we’re building for.

So:

In specs: our language must be crystal clear
For users: our explanations must be accurate and inclusive

We may need a whole basket of metaphors, not just one.

Wrapping Up

00:07:12
As always, if you have questions or want to dive deeper, visit the written blog. I’d love to hear your thoughts.

Thanks for listening.

00:07:22
That’s it for this week’s episode of the Digital Identity Digest. If this made things a little clearer—or at least more interesting—please share it with a friend or colleague.

Let’s keep the conversation going.

Connect with me on LinkedIn @hlflanagan and don’t forget to subscribe and leave a review on Apple Podcasts or wherever you listen.

You’ll find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged—and I’ll talk to you next time.

The post Kill the Wallet? Rethinking the Metaphors Behind Digital Identity appeared first on Spherical Cow Consulting.


Okta

Create a React PWA with Social Login Authentication

Progressive Web Apps (PWAs) offer the speed, reliability, and offline functionality of native apps—all delivered through the web. However, security is as important as performance, especially regarding user authentication. Modern authentication is essential in a world where users expect instant, secure access across multiple devices and platforms. Identity providers, like Okta, offer secure, sca

Progressive Web Apps (PWAs) offer the speed, reliability, and offline functionality of native apps—all delivered through the web. However, security is as important as performance, especially regarding user authentication. Modern authentication is essential in a world where users expect instant, secure access across multiple devices and platforms.

Identity providers, like Okta, offer secure, scalable, and developer-friendly tools for implementing authentication. Federated identity allows users to sign in using existing social accounts.

In this article, we’ll walk through how to build a React-based PWA with offline support and integrate it with Google Social Login using Okta. You’ll learn how to deliver a fast, reliable user experience with modern identity features built in. Let’s get started.

Table of Contents

Creating an Okta integration
Create the React app
Secure routes in your React app with React Router
Authenticate using OAuth 2.0 and OpenID Connect (OIDC)
Federated identity using Social Login
Configure Google as an Identity Provider in Okta
Test authenticating with Google Social Login
Set up your React app as a PWA
Build a secure todo list React PWA
Authenticate with Social Login from a React PWA
Learn more about React, PWA, Social Login, and Federated Identity

What you’ll need

This is a beginner-friendly tutorial, so you’ll mostly need the willingness to learn! However, you’ll need access to a few things:

Node.js and NPM. Any LTS version should be fine, but in this tutorial, I use Node 22 and NPM v10
A command terminal
Basic JavaScript and TypeScript knowledge
An IDE of your choice. I use PHPStorm, but you can use VSCode or something similar.
A Google Cloud Console account. You can set one up using your Gmail account.

Creating an Okta integration

Before you begin, you’ll need an Okta Integrator Free Plan account. To get one, sign up for an Integrator account. Once you have an account, sign in to your Integrator account. Next, in the Admin Console:

Go to Applications > Applications
Click Create App Integration
Select OIDC - OpenID Connect as the sign-in method
Select Single-Page Application as the application type, then click Next

Enter an app integration name

In the Grant type section, ensure that both Authorization Code and Refresh Token are selected
Configure the redirect URIs:
Sign-in redirect URIs: http://localhost:5173/login/callback
Sign-out redirect URIs: http://localhost:5173
In the Controlled access section, select the appropriate access level
Click Save

Where are my new app's credentials?

Creating an OIDC Single-Page App manually in the Admin Console configures your Okta Org with the application settings. You may also need to configure trusted origins for http://localhost:5173 in Security > API > Trusted Origins.

After creating the app, you can find the configuration details on the app’s General tab:

Client ID: Found in the Client Credentials section
Issuer: Found in the Issuer URI field for the authorization server that appears by selecting Security > API from the navigation pane.

Issuer: https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta CLI Client or Okta PowerShell Module to automate this process. See this guide for more information about setting up your app.

Create the React app

We’ll use a Vite template to scaffold the project. The example app for this tutorial is a todo application called “Lister”. To create a React app named “Lister”, run the following command in your terminal to scaffold the project:

npm create vite@5.4 lister

Select React and TypeScript as the variant.

Follow the instructions after running the command to navigate into your app directory and install dependencies.

We have extra dependencies to add. Run the following commands in your terminal.

Install React Router by running

npm install react-router-dom@5.3.4

Install React Router types by running

npm install --save-dev @types/react-router-dom@5.3.3

To use Okta authentication with our React app, let’s install the Okta SDKs by running

npm install @okta/okta-react@6.9.0 @okta/okta-auth-js@7.8.1

I wrote this post using Vite 5.4, React 18.3, Okta React 6.9, and Okta AuthJS SDK 7.8.

With this, you now have the base React project set up.

Secure routes in your React app with React Router

Open the project in your IDE. Let’s navigate to App.tsx and paste in the following code:

import './App.css';
import { Route, Switch, useHistory } from 'react-router-dom';
import { OktaAuth, toRelativeUrl } from '@okta/okta-auth-js';
import { LoginCallback, Security } from '@okta/okta-react';
import Home from './pages/Home.tsx';

const oktaAuth = new OktaAuth({
  clientId: import.meta.env.VITE_OKTA_CLIENT_ID,
  issuer: `https://${import.meta.env.VITE_OKTA_DOMAIN}`,
  redirectUri: window.location.origin + '/login/callback',
  scopes: ['openid', 'profile', 'email', 'offline_access'],
});

function App() {
  const history = useHistory();
  const restoreOriginalUri = (_oktaAuth: OktaAuth, originalUri: string) => {
    history.replace(toRelativeUrl(originalUri || '/', window.location.origin));
  };

  return (
    <Security oktaAuth={oktaAuth} restoreOriginalUri={restoreOriginalUri}>
      <Switch>
        <Route path="/login/callback" component={LoginCallback}/>
        <Route path="/" exact component={Home}/>
      </Switch>
    </Security>
  );
}

export default App

We set up the Okta authentication SDK packages in the App Component. Pay attention to the OktaAuth config:

const oktaAuth = new OktaAuth({
  clientId: import.meta.env.VITE_OKTA_CLIENT_ID,
  issuer: `https://${import.meta.env.VITE_OKTA_DOMAIN}`,
  redirectUri: window.location.origin + '/login/callback',
  scopes: ['openid', 'profile', 'email', 'offline_access'],
});

If you encounter any issues with the login, a good place to start debugging is from here. We’ll use environment variables to define our OIDC configuration in the app for convenience. In the root of your Lister project, create an .env file and edit it to look like so:

VITE_OKTA_DOMAIN={yourOktaDomain}
VITE_OKTA_CLIENT_ID={yourOktaClientID}

Replace {yourOktaDomain} with your Okta domain for example, dev-123.okta.com or trial-123.okta.com. Note the variable doesn’t include the HTTP protocol. Replace {yourOktaClientID} with the Okta client ID from the Okta application you created.
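To make the interpolation concrete, here is a small sketch of how the issuer URL ends up being assembled from the bare domain. The buildIssuer helper is hypothetical (it is not part of the tutorial's code or the Okta SDK); it simply mirrors the template string in App.tsx and guards against accidentally pasting the protocol into the variable:

```typescript
// Hypothetical helper, not part of the tutorial's source: mirrors the
// `https://${import.meta.env.VITE_OKTA_DOMAIN}` interpolation in App.tsx.
function buildIssuer(domain: string): string {
  // The env variable should hold the bare domain; strip a pasted protocol
  // prefix so the issuer doesn't come out as "https://https://...".
  const bare = domain.replace(/^https?:\/\//, '');
  return `https://${bare}`;
}

console.log(buildIssuer('dev-123.okta.com')); // prints "https://dev-123.okta.com"
```

If sign-in fails with an "issuer" error, checking the value this interpolation produces is a quick first step.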

Before moving forward, let’s set up React Router in our project root. Navigate to src/main.tsx and replace the existing code with the following code snippet:

import ReactDOM from 'react-dom/client'
import App from './App.tsx'
import './index.css'
import { BrowserRouter } from "react-router-dom";

ReactDOM.createRoot(document.getElementById('root') as HTMLElement).render(
  <BrowserRouter>
    <App/>
  </BrowserRouter>,
)

In the App.tsx earlier, we imported Home from ./pages/Home.tsx and used it in our routing. Let’s create the Home component. In the src folder, create a pages folder, and in that, create a Home.tsx file.

const Home = () => {
  return (<h2>You are home</h2>);
}

export default Home;

This is a minimal home component that represents our home page.

Authenticate using OAuth 2.0 and OpenID Connect (OIDC)

Next, we want to add the ability to sign in and out with our Okta without social login as a starting point. We’ll add the social login connection later.

To do that, we’ll create the SignIn component and a generic Layout component to control user access based on their authentication. Navigate to your src folder, then create a components folder to hold child components.

In the newly created components folder, create the Layout.tsx, Layout.css, and SignIn.tsx files.

Open the Layout.tsx file and add the following code:

import './Layout.css';
import { useOktaAuth } from "@okta/okta-react";
import SignIn from "./SignIn.tsx";
import { Link } from "react-router-dom";
import logo from '../assets/react.svg';

const Layout = ({children}) => {
  const { authState, oktaAuth } = useOktaAuth();
  const signout = async () => await oktaAuth.signOut();

  return authState?.isAuthenticated ? (<>
    <div className="navbar">
      <Link to="/"><img src={logo} className="logo" /></Link>
      <div className="right">
        <Link to="/profile">Profile</Link>
        <button onClick={signout} className="no-outline">Sign Out</button>
      </div>
    </div>
    <div className="layout">
      {...children}
    </div>
  </>) : <SignIn/>;
}

export default Layout;

This component imports the useOktaAuth hook from the @okta/okta-react package. The hook exposes the user’s authenticated state, which we use to decide whether to render the child components of the Layout component. It also lets us sign our users in and out.

At the top of the file, we import Layout.css. Open Layout.css and fill in the CSS we need:

.layout {
  max-width: 1280px;
  margin: 0 auto;
  padding: 2rem;
  text-align: center;
}

.layout.sign-in {
  margin-top: 35vh;
}

.logo {
  height: 32px;
  will-change: filter;
  transition: filter 300ms;
}

.navbar {
  display: flex;
  justify-content: space-between;
}

These minor stylings help the Layout.tsx navbar look proper. Let’s not forget the SignIn component used in the Layout component.

Paste the following code into SignIn.tsx:

import { useOktaAuth } from "@okta/okta-react";
import logo from '../assets/react.svg';

const SignIn = () => {
  const { oktaAuth } = useOktaAuth();
  const signin = async () => await oktaAuth.signInWithRedirect();

  return (
    <div className="sign-in layout">
      <h2><img src={logo} className="logo" alt="Logo"/> Lister</h2>
      <button className="outlined" onClick={signin}>Sign In</button>
    </div>
  );
}

export default SignIn;

Here, we use the same useOktaAuth hook to sign in our user. Lastly, we update src/App.tsx to use our new Layout component. We wrap the Layout component around the routes that require authentication. Your code now looks like this:

import './App.css';
import { Route, Switch, useHistory } from 'react-router-dom';
import { OktaAuth, toRelativeUrl } from '@okta/okta-auth-js';
import { LoginCallback, Security } from '@okta/okta-react';
import Home from './pages/Home.tsx';
import Profile from './pages/Profile.tsx';
import Layout from "./components/Layout.tsx";

const oktaAuth = new OktaAuth({
  clientId: import.meta.env.VITE_OKTA_CLIENT_ID,
  issuer: `https://${import.meta.env.VITE_OKTA_DOMAIN}`,
  redirectUri: window.location.origin + '/login/callback',
  scopes: ['openid', 'profile', 'email'],
});

function App() {
  const history = useHistory();
  const restoreOriginalUri = (_oktaAuth: OktaAuth, originalUri: string) => {
    history.replace(toRelativeUrl(originalUri || '/', window.location.origin));
  };

  return (
    <Security oktaAuth={oktaAuth} restoreOriginalUri={restoreOriginalUri}>
      <Switch>
        <Route path="/login/callback" component={LoginCallback}/>
        <Layout>
          <Route path="/" exact component={Home}/>
          <Route path="/profile" component={Profile}/>
        </Layout>
      </Switch>
    </Security>
  );
}

export default App

Be careful not to wrap the callback route in the Layout component, or else you’ll experience some weirdness during logins. If you look at the code above, you see we added a route for a profile component. Let’s create that component!
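To see why the route order matters, remember that Switch renders only the first route that matches, scanning top to bottom. The sketch below is a hypothetical approximation of that first-match behavior (not React Router's real matcher, which also handles exact matching, params, and more): with /login/callback listed before the catch-all, the callback is processed before any auth-gated layout can render.

```typescript
// Hypothetical approximation of Switch's first-match semantics; React Router's
// actual matcher is more sophisticated (exact vs. prefix matching, params, etc.).
function firstMatch(paths: string[], current: string): string | undefined {
  return paths.find(
    (p) => current === p || current.startsWith(p + '/') || p === '/'
  );
}

// '/login/callback' wins because it is listed before the catch-all '/'
console.log(firstMatch(['/login/callback', '/'], '/login/callback')); // prints "/login/callback"
```

If the callback route were nested inside Layout instead, the unauthenticated SignIn screen would render before the callback could complete, which is exactly the login weirdness mentioned above.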

Navigate to src/pages and create the Profile.tsx and Profile.css files. In your Profile.tsx file, paste these in:

import './Profile.css';
import { useState, useEffect } from "react";
import { useOktaAuth } from "@okta/okta-react";
import { IDToken, UserClaims } from "@okta/okta-auth-js";

const Profile = () => {
  const { authState, oktaAuth } = useOktaAuth();
  const [userInfo, setUserInfo] = useState<UserClaims | null>(null);

  useEffect(() => {
    if (!authState || !authState.isAuthenticated) setUserInfo(null);
    else setUserInfo((authState.idToken as IDToken).claims);
  }, [authState, oktaAuth]);

  return (userInfo) ? (
    <div>
      <div className="profile">
        <h1>My User Profile (ID Token Claims)</h1>
        <p>
          Below is the information from your ID token which was obtained during the
          &nbsp;
          <a href="https://developer.okta.com/docs/guides/implement-auth-code-pkce">PKCE Flow</a>
          {' '}
          and is now stored in local storage.
        </p>
        <p>
          This route is protected with the
          {' '}
          <code>&lt;SecureRoute&gt;</code>
          {' '}
          component, which will ensure that this page cannot be accessed until you have authenticated.
        </p>
        <table>
          <thead>
            <tr>
              <th>Claim</th>
              <th>Value</th>
            </tr>
          </thead>
          <tbody>
            {Object.entries(userInfo).map((claimEntry) => {
              const claimName = claimEntry[0];
              const claimValue = claimEntry[1];
              const claimId = `claim-${claimName}`;
              return (
                <tr key={claimName}>
                  <td>{claimName}</td>
                  <td id={claimId}>{claimValue.toString()}</td>
                </tr>
              );
            })}
          </tbody>
        </table>
      </div>
    </div>
  ) : (
    <div>
      <p>Fetching user profile...</p>
    </div>
  );
};

export default Profile;

And in your Profile.css file, add the following styles:

td, th { text-align: left; padding: 1px 10px; }
td:first-child, th:first-child { border-right: 1px solid #dcdcdc; }
table { max-width: 600px; }
.profile { margin: auto; }
.profile h1, p { text-align: left; width: fit-content; }

The Profile component displays the ID token claims exposed through the useOktaAuth hook. When building your own profile page, you’ll probably use only a handful of those claims.

Lastly, paste this helper CSS into the index.css file at your project root; it makes minor styling tweaks to improve the app’s appearance.

#root { font-family: Inter, system-ui, Avenir, Helvetica, Arial, sans-serif; line-height: 1.5; font-weight: 400; font-synthesis: none; text-rendering: optimizeLegibility; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; }
a, button { font-weight: 500; color: #213547; text-decoration: inherit; }
a:hover, button:hover { color: #535bf2; }
h1 { font-size: 3.2em; line-height: 1.1; }
button { border-radius: 8px; border: 1px solid transparent; padding: 0.6em 1.2em; font-size: 1em; font-weight: 500; font-family: inherit; background-color: #1a1a1a; cursor: pointer; transition: border-color 0.25s; }
button:hover { border-color: #646cff; }
button:focus, button:focus-visible { outline: 4px auto -webkit-focus-ring-color; }
button.outlined { border: 1px solid; }
button.no-outline { border: none; }
button.no-outline:focus, button.no-outline:focus-visible, button.no-outline:hover, button.no-outline:active { border: none; outline: none; }
@media (prefers-color-scheme: light) {
  :root { color: #213547; background-color: #ffffff; }
  a:hover { color: #747bff; }
  button { background-color: #f9f9f9; }
}

Run npm run dev in the console. The command serves your app at http://localhost:5173 and you should be able to sign in with your Okta account.

With these, all we need to do now is integrate Social Login and then make this app a PWA, both of which are straightforward!

Federated identity using Social Login

Social login is an authentication method that allows users to sign into an application using their existing credentials from platforms like Google, Facebook, or Apple. It simplifies the login process, reduces password fatigue, and enhances security by leveraging trusted identity providers. In our case, we are choosing Google as our social login provider.

Configure Google as an Identity Provider in Okta

First, we need to sign up for Google Workspace and create a Google project. After that, we configure Google as an Identity Provider (IdP). Follow the instructions for setting up Google for social login in the Okta Developer documentation.

When you define the OAuth consent screen in Google Cloud, use the following configuration:

Add http://localhost:5173 to your Authorized JavaScript origins - this is the dev server for our React application. Add https://{yourOktaDomain}/oauth2/v1/authorize/callback to the Authorized redirect URIs section, replacing {yourOktaDomain} with your actual Okta domain.

When adding the required scopes in Google Cloud, include the .../auth/userinfo.email, .../auth/userinfo.profile, and openid scopes.

After setting up Google Cloud, you’ll configure Okta. Use the following values:

Enable automatic account linking to make it easier for users with an Okta account to sign in with Google.
Add routing rules to allow all logins to use Google Social Login. For this tutorial, we’re keeping the routing conditions permissive; however, you should be much more stringent in a production application. You can check the routing page to configure routing to fit your use case better.

Test authenticating with Google Social Login

If you run npm run dev and click the sign-in button, you should see the “Sign In With Google” button and your usual Okta sign-in / sign-up screen!

Set up your React app as a PWA

Lastly, let’s make our app a PWA so we can use it offline. First, we need to add a new dependency. Open the command terminal to the project’s root and run the following command.

npm install vite-plugin-pwa@1.0.1

Next, we update our vite.config.ts in your project root to include PWA configuration and add manifest icons:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import { VitePWA } from "vite-plugin-pwa";

// https://vitejs.dev/config/
const manifestIcons = [
  {
    src: 'pwa-192.png',
    sizes: '192x192',
    type: 'image/png',
  },
  {
    src: 'pwa-512.png',
    sizes: '512x512',
    type: 'image/png',
  }
]

export default defineConfig({
  plugins: [
    react(),
    VitePWA({
      registerType: 'autoUpdate',
      devOptions: {
        enabled: true
      },
      manifest: {
        name: 'Lister',
        short_name: 'lister',
        icons: manifestIcons,
      }
    })
  ],
})

You can get cute favicons from an icon generator and replace the manifestIcons source images with those. You can also look at the Vite PWA documentation to better understand each option’s meaning and how to use it.

With these changes, end your current npm script and run npm run dev again; everything should be peachy. Now we have an app with Social Login capabilities.

Since this application is a todo list application, let’s add the todo list feature. Since our app is a PWA, our users should be able to use the application even when offline. To make the data accessible offline, we can store it locally on the client using browser storage and then sync the data with our servers using service workers (let us know if you want to see a tutorial using service workers).
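As a rough illustration of the sync half of that idea, the queue-draining logic a service worker might run when connectivity returns can be sketched as a plain function. Everything here (syncTasks, the injected sendTask callback) is hypothetical and not part of this tutorial’s code:

```typescript
interface Task {
  name: string;
  description: string;
  done: boolean;
}

// Drain a queue of locally stored tasks to the server. sendTask stands in
// for the network call (e.g., a fetch to your API); injecting it keeps the
// queue logic testable without a real backend.
async function syncTasks(
  pending: Task[],
  sendTask: (t: Task) => Promise<boolean>,
): Promise<Task[]> {
  const failed: Task[] = [];
  for (const task of pending) {
    try {
      if (!(await sendTask(task))) failed.push(task);
    } catch {
      failed.push(task); // network error: keep it queued for the next attempt
    }
  }
  return failed; // tasks still awaiting upload
}
```

A real service worker would wire this up to a Background Sync event, reading the pending tasks from storage and retrying whatever syncTasks returns.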

Build a secure todo list React PWA

Since we will persist the todo list data, creating a model is a good idea. The model serves as a layer of abstraction over the storage calls: in this section we’ll save the data to local storage, but in the future we may want to switch to another technology, and a model lets us change that implementation without touching the code that consumes it.

Now let’s create the model. Navigate to the src folder and create a folder named models. In that folder, create a Task.model.ts file. We’ll call each item in the todo list a task. The task model file should look like this:

export interface Task {
  name: string;
  description: string;
  done: boolean;
}

const key = 'lister-tasks';

export default {
  addTask: (task: Task) => {
    const currentTasksJSON = localStorage.getItem(key);
    if (!currentTasksJSON) {
      localStorage.setItem(key, JSON.stringify([task]));
      return;
    }
    const currentTasks = JSON.parse(currentTasksJSON);
    currentTasks.push(task);
    localStorage.setItem(key, JSON.stringify(currentTasks));
  },
  all: (): Task[] => {
    const currentTasksJSON = localStorage.getItem(key);
    if (!currentTasksJSON) return [];
    return JSON.parse(currentTasksJSON);
  },
  save: (tasks: Task[]) => localStorage.setItem(key, JSON.stringify(tasks)),
}

The model is a small wrapper over localStorage. The first part of the model defines the Task interface – all we need for a task is its name, description, and done state. The key variable is the localStorage item name; I chose lister-tasks for mine.

Remember: don’t store sensitive user data (e.g., passwords) on the client side; keep it on a secure server!
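To see the abstraction pay off, the model can be exercised outside the browser by swapping localStorage for an in-memory stub. Everything below is illustrative: the stub and sample tasks exist only for this example, and the model mirrors the shape of Task.model.ts above:

```typescript
// In-memory stand-in for the browser's localStorage (illustration only).
const store = new Map<string, string>();
const storage = {
  getItem: (k: string) => store.get(k) ?? null,
  setItem: (k: string, v: string) => { store.set(k, v); },
};

interface Task { name: string; description: string; done: boolean; }
const key = 'lister-tasks';

// Same shape as Task.model.ts, but reading and writing through the stub.
const TaskModel = {
  addTask: (task: Task) => {
    const tasks = TaskModel.all();
    tasks.push(task);
    storage.setItem(key, JSON.stringify(tasks));
  },
  all: (): Task[] => {
    const json = storage.getItem(key);
    return json ? JSON.parse(json) : [];
  },
  save: (tasks: Task[]) => storage.setItem(key, JSON.stringify(tasks)),
};

TaskModel.addTask({ name: 'Write the tutorial', description: '', done: false });
TaskModel.addTask({ name: 'Ship it', description: '', done: false });
console.log(TaskModel.all().length); // 2
```

Because every caller goes through the model, moving to IndexedDB or a remote API later means changing only this one file.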

Next up, we update the home page at src/pages/Home.tsx to look like this:

import './Home.css';
import { useEffect, useState } from "react";
import TaskModel, { Task } from "../models/Task.model.ts";

const EMPTY_TASK: Task = { name: "", description: "", done: false } as const;

const Home = () => {
  const [tasks, setTasks] = useState<Task[]>(TaskModel.all().reverse());
  const [addMode, setAddMode] = useState(false);
  const [form, setForm] = useState<Task>(EMPTY_TASK);
  const [expanded, setExpanded] = useState<boolean[]>(new Array(tasks.length).fill(false));

  useEffect(() => TaskModel.save(tasks), [tasks]);

  const toggleTask = (id: number) => {
    const _tasks = [...tasks];
    _tasks[id].done = !_tasks[id].done;
    setTasks(_tasks);
  }

  const addNewTask = (e: Event) => {
    e.preventDefault();
    setExpanded(new Array(tasks.length + 1).fill(false));
    setTasks([...tasks, form]);
    setForm(EMPTY_TASK);
    setAddMode(!addMode);
  }

  const toggleExpansion = (id: number) => {
    const _expanded = [...expanded];
    _expanded[id] = !_expanded[id];
    setExpanded(_expanded);
  }

  return (<>
    <h2 className="tab-heading">
      <button className={`no-outline ${!addMode && 'active'}`}
              onClick={() => setAddMode(false)}>Task List
      </button>
      <button className={`no-outline ${addMode && 'active'}`}
              onClick={() => setAddMode(true)}>New Task +
      </button>
    </h2>
    {addMode && <form className="tab" action="#" onSubmit={addNewTask}>
      <div className="form-fields">
        <div className="form-group">
          <label htmlFor="name">Name</label>
          <input type="text" name="name" id="name" placeholder="Task name"
                 onChange={(e) => setForm({...form, name: e.target.value})} required/>
        </div>
        <div className="form-group full">
          <label htmlFor="description">Description</label>
          <textarea rows="5" maxLength="800"
                    onChange={(e) => setForm({...form, description: e.target.value})}
                    className="form-control" name="description" id="description"
                    placeholder="describe the task..."></textarea>
        </div>
      </div>
      <div className="form-group">
        <input type="submit" value="Submit"/>
      </div>
    </form>}
    {!addMode && <ul className="tab task-list">
      {tasks.map((task, idx) =>
        <li key={idx} className={`${task.done && 'done'}`}>
          <div className="title-card">
            <input type="checkbox" name={'task' + idx} checked={task.done}
                   onChange={() => toggleTask(idx)}/>
            <p className="name">{task.name}</p>
            <p className="expand" onClick={() => toggleExpansion(idx)}>&#9660;</p>
          </div>
          {expanded[idx] && <p className="description">{task.description}</p>}
        </li>)}
    </ul>}
  </>);
}

export default Home;

The first three lines of the code are the necessary imports. Next, we create a default empty task. The rest of the component is a basic CRUD page, with the required state for creating, reading, updating, and deleting tasks. I used a useEffect hook to save the tasks to local storage whenever they change. At the top of the component you’ll see a Home.css import. Let’s create that file in the same directory and paste in the following content:

.form-fields { display: flex; justify-content: space-between; flex-wrap: wrap; }
.form-group { margin: 5px 0; }
.form-fields .form-group { width: 100%; display: flex; flex-direction: column; }
.form-group.check-group { display: flex; }
/** Submit button styling **/
input:not([type="submit"]):not([type="checkbox"]), select, textarea { display: block; max-width: 100%; padding: 6px 12px; font-size: 16px; line-height: 1.42857143; color: #555; background-color: #fff; background-image: none; border: 1px solid #ccc; border-radius: 0; box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075); -webkit-transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s; transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s; margin-bottom: 5px; }
form input[type="submit"] { display: block; background-color: #213547; box-shadow: 0 0 0 0 #213547; text-transform: capitalize; letter-spacing: 1px; border: none; color: #fff; font-size: .9em; text-align: center; padding: 10px; width: 50%; margin: 15px 0 0 auto; transition: background-color 250ms ease; border-radius: 5px; }
.tab { max-width: 500px; margin: auto; }
form label { text-align: left; max-width: 100%; margin-bottom: 5px; font-size: 16px; font-weight: 300; line-height: 24px; }
.tab-heading { border-bottom: 1px solid #D9E4EEFF; }
.tab-heading button { width: 50%; }
.tab-heading button:first-child { text-align: right; }
.tab-heading button:last-child { text-align: left; }
.tab-heading button:hover { background: #dcdcdc; border-radius: 0; }
.tab-heading button.active { background: rgba(217, 228, 238, 0.42); }
.tab-heading button.active:first-child { border-right: 1px solid rgba(217, 228, 238, 0.9); border-bottom-left-radius: 0; border-top-left-radius: 0; }
.tab-heading button.active:last-child { border-left: 1px solid rgba(217, 228, 238, 0.9); border-bottom-right-radius: 0; border-top-right-radius: 0; }
.task-list { list-style-type: none; }
.task-list p.description { text-align: left; margin-top: 0; font-size: 0.8rem; }
.task-list li { display: flex; flex-direction: column; border-bottom: 2px solid rgba(217, 228, 238, 0.7); padding: 5px 10px; }
.task-list li .title-card { display: flex; }
.task-list li.done .title-card *:not(.expand) { text-decoration: line-through; }
.task-list li:hover { background: rgba(234, 243, 252, 0.59); }
.task-list li input[type=checkbox] { margin-right: 15px; cursor: pointer; }
.task-list li p.name { font-size: 1.2rem; }
.task-list li p.expand { color: #46617a; font-size: 1rem; margin-left: auto; cursor: pointer; }

The above are helper styles. I used a tabular design for the todo list component so CRUD can be on the same page without using modal popups. Once all the files are in place, you’ll see the Todo List home page when you log in with Okta.

Once you have all the required manifest icons in your project, when you serve the app, you’ll see a prompt in the browser to install it on your machine! If you don’t want to create icons, use the ones in the sample repo.

Authenticate with Social Login from a React PWA

Great job making it this far! Along the way, we’ve explored how social login works with Okta and Google and how to set up a basic PWA using React, Vite, and the Vite PWA plugin. As a bonus, we now have a handy little todo list app to help keep our day on track!

Of course, a production-ready application would involve more advanced service worker configurations and a proper database setup, but our current implementation is adequate for an introduction. Now it’s your turn to have fun: open the app in your browser, try signing in with Okta or Google, and test the install prompt to see how smoothly it runs as a standalone app. Happy coding!

Learn more about React, PWA, Social Login, and Federated Identity

If you want to learn more about the ways you can incorporate authentication and authorization security in your apps, you might want to check out these resources:

The Ultimate Guide to Progressive Web Applications Use Redux to Manage Authenticated State in a React App Android Login Made Easy with OIDC

Remember to follow us on Twitter and subscribe to our YouTube channel for fun and educational content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below! Until next time! Toodles!

Monday, 21. July 2025

UbiSecure

Appointment of Tom Edwards as new Executive Chair

Ubisecure Appoints Tom Edwards as Executive Chair to Accelerate Growth in RegTech, Digital Identity and Compliance Markets

London, July 22nd, 2025 – Ubisecure, the European digital identity services provider and world’s largest issuer of Legal Entity Identifiers (LEI) through its RapidLEI service, today announced the appointment of Tom Edwards as the company’s Executive Chair, effective immediately. This strategic appointment marks the next chapter in the company’s RegTech evolution as it accelerates its growth in the global compliance and digital identity markets.

With a proven track record in scaling high-growth technology businesses, Mr Edwards brings deep expertise in corporate strategy, driving operational excellence, and enterprise go-to-market execution, particularly within regulated, compliance critical industries. He will work closely with Ubisecure’s leadership team to guide the company’s strategic direction, scale operations, and deepen relationships with global customers, partners and institutions.

“I’m excited to join Ubisecure at a pivotal moment of growth,” said Edwards. “Managing the opportunities presented in the RegTech market by Digital Identity, both Individual and Organisational, is essential in today’s landscape of expanding regulation, national identity initiatives, and ever rising fraud, especially in cross border transactions. The company is uniquely positioned to address some of the most pressing market challenges by enabling digital identity, to ensure regulatory compliance and foster digital trust. I look forward to working with the team as we accelerate innovation and expand across our markets.”

Mr Edwards previously held the COO and then CEO role at CubeLogic, an enterprise risk and compliance provider, where he scaled the business to double the revenue and the customer base during his leadership. He has also held leadership and advisory roles across fintech, reg-tech, and enterprise data infrastructure, bringing a strong track record of execution and growth in mission-critical technology environments.

“Tom’s appointment reflects Ubisecure’s ambition to maximise growth from our position as the world’s largest issuer of Legal Entity Identifiers (LEI) and one of Europe’s foremost Digital Identity service providers,” said Paul Tourret, Board Director, Ubisecure. “Tom’s insight and leadership will be instrumental as we build on our position as the global number one accredited LEI Issuer and continue delivering mission-critical Digital Identity enterprise solutions to the world’s most regulated industries.”

“We are delighted to welcome Tom as the new Executive Chair of Ubisecure,” said Paul Davidson, Partner, Octopus Ventures & Non-Executive Director, Ubisecure. “His extensive operational leadership and track record in scaling technology businesses make him the perfect addition to our team as we deliver on the growth opportunities across our core RegTech and Digital Identity solutions.”

Ubisecure’s suite of RegTech solutions helps enterprises and financial institutions solve key compliance, fraud and operational challenges by adopting a technology first approach to meet global regulations, reduce risk and deliver simplified governance.

The appointment follows a twelve-month period of positive momentum for Ubisecure, particularly within the LEI space including under its RapidLEI brand, and the onboarding of new major global banks as GLEIF Validation Agents – further strengthening its role in the global identity and RegTech ecosystem.

Find more information about Ubisecure & RapidLEI solutions at www.ubisecure.com and www.rapidlei.com

For media or investor inquiries, please contact Steve Waite, CMO, Ubisecure, press@ubisecure.com.

 

About Ubisecure and RapidLEI

Ubisecure is a European digital identity service provider, providing innovative identity and access management (IAM) and Legal Entity Identifier (LEI) solutions to enable secure, compliant digital business. Its RapidLEI service is the world’s largest LEI Issuer, delivering automated, API-enabled LEI registration and management to thousands of regulated firms, financial institutions, and identity providers worldwide.

As a RegTech innovator, RapidLEI helps organisations meet global compliance like DORA and FATF Recommendations, as well as enabling cross border trade by streamlining entity verification and Know Your Business (KYB) processes. Accredited since 2018 by the Global Legal Entity Identifier Foundation (GLEIF), RapidLEI empowers compliance teams with structured, regulated organisation identity data to reduce fraud, enhance transparency, and accelerate onboarding.

The post Appointment of Tom Edwards as new Executive Chair appeared first on Ubisecure Digital Identity Management.


Dock

How Digital ID Is Reshaping the Travel Industry [Video and Takeaways]


Digital ID is already transforming how we move through the world. From faster airport check-ins to personalized hotel experiences, identity is becoming portable, private, and verifiable.

To explore what’s real, what’s next, and what identity organizations should be doing today, we hosted a live conversation with two people at the forefront of this shift. Annet Steenbergen, an advisor to the EU Digital Identity Wallet Consortium, shared insights from the large-scale pilots testing the EUDI Wallet across Europe. And Nick Price, CEO of Netsys and Co-Chair of the Decentralized Identity Foundation’s Travel & Hospitality Working Group, brought a global perspective from his real-world implementations of decentralized identity technologies.

Moderated by our CEO Nick Lambert, the session dug into how digital ID is being used right now, what’s still in development, and why the travel industry needs to start preparing for what’s coming.

Here are the key takeaways from that conversation.


FastID

How Apps Can Respect Privacy While Still Getting Personal

Learn how apps can offer personalized experiences without compromising user privacy. Solutions like Private Access Tokens, OHTTP, and MASQUE Relay protect data without harming user experience.

Friday, 18. July 2025

Anonym

Privacy as a Service: A New Frontier for Insurance Brand Differentiation

Insurance brand differentiation through PaaS 

There’s a new way to stand out in the insurance industry, and it’s not about offering lower premiums or faster claims. It’s about privacy.  

Consumer expectations are shifting, and Privacy as a Service (PaaS) is emerging as a powerful differentiator for insurers seeking to lead in trust, security, and digital experience. This is your opportunity to move beyond compliance and build tangible brand equity through privacy. 

The Rise of Privacy for the Everyday Consumer 

In today’s hyper-digital world, privacy has gone from a background concern to a front-page priority. According to Pew Research, 67% of Americans say they understand little to nothing about what companies do with their data, up from 59% in previous years. Most also feel they have little or no control over how businesses or government agencies use their data. 

This is where Privacy as a Service (PaaS) becomes a critical advantage. 

PaaS offers insurers a strategic way to embed privacy tools into their offerings from encrypted messaging and masked contact details to user-controlled data sharing and real-time breach monitoring.  

What privacy as a service looks like 

Privacy as a Service isn’t theoretical. It’s a growing suite of tools insurers can implement today, either through white-labeled apps or integration with their existing digital platforms.  

These include: 

Private communication channels (e.g., encrypted messaging) 
Virtual cards for secure payments or online purchases 
Digital wallets that store only necessary credentials 
User-controlled identity and data-sharing preferences 
Real-time alerts and monitoring for data breaches, leaks, or misuse 

Together, these tools help protect against fraud, phishing, impersonation, and unauthorized data sharing, all while building policyholder confidence. 

What specific PaaS solutions are available? 

Several technology providers now offer turnkey or customizable privacy-as-a-service toolkits.  

For example: 

Anonyome Labs provides secure communication tools (like virtual phone numbers and masked emails), identity protection, digital wallets, and breach monitoring. 
Jumio and Okta offer identity verification and access management solutions that support consumer-controlled credentialing. 
Apple’s Private Relay and others are shifting expectations for how personal data should be handled in digital experiences, further reinforcing the importance of privacy-centric offerings. 

Insurers can choose to license these features or integrate them into native apps for a seamless user experience. 

How to effectively communicate the value of privacy tools

Implementing PaaS is only half the equation. Insurers must clearly articulate why it matters to customers. Here’s how: 

Lead with control: Emphasize how customers can manage what they share and with whom they share it.
Show real-world benefits: Frame privacy tools as ways to reduce fraud, protect families, and save money, not just as technical features.
Promote peace of mind: Position your brand as one that safeguards people, not just policies.

Messaging should appear across onboarding flows, app experiences, marketing campaigns, and customer support channels. 

Differentiation in a crowded market 

With so many insurance products becoming commoditized, Privacy as a Service provides a new lever for differentiation. It aligns with what modern consumers care about: safety, autonomy, and digital integrity. 

In a space where brand loyalty is tied to values and experience, offering built-in privacy sends a powerful message: you don’t just insure people, you protect them holistically. 

Ready to offer Privacy as a Service? 

Anonyome Labs helps insurers integrate turnkey privacy solutions into their digital experiences. Request a demo to discover how you can leverage privacy as your next competitive advantage. 

The post Privacy as a Service: A New Frontier for Insurance Brand Differentiation appeared first on Anonyome Labs.


liminal (was OWI)

This Week in Identity


Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage.

Looking for product or company-specific news? Log in or sign-up to Link for more detailed news and developments.

Here are the main industry highlights of this week impacting identity and fraud, cybersecurity, trust and safety, financial crimes compliance, and privacy and consent management.

🪄Innovation and New Technology Developments

Zendesk Acquires HyperArc to Strengthen Explore With GenAI-Powered Analytics and Real-Time Insights

Zendesk has acquired HyperArc, an AI-native analytics platform known for its HyperGraph engine and real-time GenAI-powered insights, as part of a strategy to advance its analytics capabilities. The integration will enhance Zendesk’s existing Explore platform with next-generation analytics features, including self-service insights and automation tools. The move aligns with Zendesk’s broader goal of delivering deeper, more actionable customer intelligence, and will enable the company to offer improved reporting and decision-making tools across its user base. HyperArc’s team and technology are expected to play a central role in shaping Zendesk’s future analytics offerings. (Source)

Amplitude Acquires Kraftful To Unify User Feedback and Behavior Insights with AI-Powered Voice of Customer Tools

Amplitude has acquired Kraftful, a startup specializing in AI-powered Voice of Customer tools, to enhance its ability to turn user feedback into actionable insights. Kraftful’s platform centralizes feedback from various sources like app reviews and support tickets, using large language models to detect trends, sentiment, and feature requests with high accuracy. It also includes AI-generated surveys and interviews that dynamically adapt to user responses, helping teams uncover deeper user needs and test product ideas. With this integration, Amplitude aims to close the gap between user behavior and user motivation, offering a complete view of what customers are doing and why. The Kraftful team will join Amplitude to embed these capabilities natively, accelerating the company’s AI roadmap and product innovation. (Source)

💰 Investments and Partnerships

Exein Raises €70 Million to Expand Global AI-Driven IoT Cybersecurity Platform

Exein, a cybersecurity firm specializing in embedded runtime protection for IoT devices, has raised €70 million in Series C funding to support its global expansion across the US, Japan, Taiwan, and South Korea. The company, which already secures over a billion smart devices including critical infrastructure, offers AI-driven, real-time threat detection at the device level. This decentralized model aligns with evolving regulations such as the EU’s NIS2 and the upcoming Cyber Resilience Act. With over 450% year-over-year growth and strategic partnerships with major manufacturers, Exein plans to use the funding to scale operations, pursue acquisitions, and develop new security tools for AI and LLM-enabled devices. (Source)

Gravitee Acquires Ambassador to Strengthen AI-Driven API Management and Expand in North America

Gravitee has acquired US-based Ambassador to enhance its AI-ready API and event management capabilities, expanding its reach in the North American market. The deal brings in Ambassador’s key products—Edge Stack, a Kubernetes-native ingress and API gateway, and Blackbird, an AI-driven tool for rapid API development. With this acquisition, Gravitee aims to solidify its position as a leader in agentic API management by offering a unified platform for API design, event handling, and AI interaction governance. The move also brings Ambassador’s team onboard, with former CEO Steve Rodda joining Gravitee as North America Field CTO. (Source)

Jack Dorsey Backs $10 Million Open-Source Collective to Reimagine Decentralized Social Media

Jack Dorsey has invested $10 million into a nonprofit called “and Other Stuff,” a collective focused on developing open-source tools and protocols to reshape social media. Formed in May, the group includes early Twitter employees and developers from projects like Nostr and Cashu. Unlike traditional tech ventures, the collective eschews corporate structures, aiming to build decentralized, protocol-driven alternatives to mainstream platforms. Their work spans experimental apps, developer tools, and a forthcoming social media “Bill of Rights” centered on user privacy, transparency, and autonomy. Dorsey’s goal is to support an open, resilient social web beyond the constraints of ad-driven platforms. (Source)

CertifID Raises $47.5 Million to Strengthen Identity Verification and Combat Real Estate Wire Fraud

CertifID has raised $47.5 million in a Series C funding round led by Centana Growth Partners, with continued support from Arthur Ventures. The company, which provides wire fraud protection for the real estate industry, plans to use the funds to enhance its identity verification, transaction monitoring, and secure payments capabilities. CertifID also aims to expand its team, partnerships, and security features amid rising threats from increasingly sophisticated fraud tactics. The platform combines AI tools with human expertise and has reportedly prevented $1.3 billion in fraud losses to date, reinforcing its role in safeguarding high-value financial transactions. (Source)

Island Raises $250 Million in Series E to Accelerate Growth of Secure Enterprise Browser

Cybersecurity startup Island has secured a significant investment from J.P. Morgan as part of its $250 million Series E funding round, which values the company at $4.8 billion. Since October 2023, Island has more than quadrupled its valuation, reflecting growing demand for secure enterprise browsers. The Tel Aviv- and Dallas-based company, led by veterans Mike Fey and Dan Amiga, has raised over $750 million to date and serves 450 clients, including several Fortune 100 firms. Island’s browser offers robust security features and data controls tailored to enterprise needs, and its consistent revenue growth highlights the company’s rapid ascent in the cybersecurity sector. (Source)

Zip Security Secures $13.5 Million to Expand AI-Powered Cybersecurity for SMBs

Zip Security has raised $13.5 million in a Series A round led by Ballistic Ventures, bringing its total funding to $21 million. The company, founded by ex-Palantir engineers, targets the underserved segment of small and mid-sized businesses that often lack dedicated cybersecurity staff. Zip’s AI-powered platform automates essential security and compliance tasks, offering tools like endpoint protection, identity management, and compliance workflows in an accessible interface. Designed to reduce reliance on consultants and managed service providers, Zip aims to deliver scalable, cost-effective cybersecurity solutions to a broader range of organizations, including those in regulated industries. (Source)

Signicat Acquires Inverid to Strengthen Digital Identity Verification Capabilities in Europe

Signicat has acquired Dutch identity verification firm Inverid, integrating its NFC-based ReadID technology to enhance its digital identity platform. The deal brings immediate synergies, bolstering Signicat’s capabilities in high-assurance, scalable document verification trusted by governments and financial institutions. Inverid, backed by Main Capital since 2022, has grown rapidly through R&D and market expansion. This acquisition aligns with Signicat’s strategy of combining innovation and strategic acquisitions to lead in Europe’s digital identity sector, especially as demand rises for secure, compliant verification solutions amid developments like the European Identity Wallet. (Source)

OpenAI’s Acquisition of Windsurf Collapses as Google Secures Key Talent and Licensing Deal

OpenAI’s $3 billion acquisition of AI coding startup Windsurf collapsed after the startup objected to Microsoft gaining access to its technology, given Microsoft’s competing product, Copilot. OpenAI’s attempt to secure an exception from Microsoft was denied, prompting Windsurf to explore alternatives. Google has since hired Windsurf CEO Varun Mohan, cofounder Douglas Chen, and key R&D staff, and will pay approximately $2.4 billion for talent and non-exclusive tech licensing. The majority of Windsurf’s team remains, with new interim leadership appointed, as the company reassesses its path forward independently. (Source)

Virtru Secures $50 Million To Expand Trusted Data Format Adoption for AI and Critical Infrastructure

Virtru, a D.C.-based data security company, raised $50 million in Series D funding led by ICONIQ, doubling its valuation to $500 million. The company’s core innovation is Trusted Data Format (TDF), which embeds security directly into data files—a method developed by co-founder Will Ackerly during his time at the NSA. Over 6,000 organizations, including JPMorgan Chase, Salesforce, and the U.S. Department of Defense, now use Virtru’s platform. As AI adoption introduces new data-sharing risks, Virtru’s microsecurity approach offers persistent protection that travels with the data itself. The new funding will accelerate global TDF adoption and support advanced protection for AI and critical infrastructure systems. (Source)

Corsha Gains Strategic Backing from Booz Allen To Scale Machine Identity for Zero Trust and Mission-Critical Systems

Corsha secured a strategic investment from Booz Allen Ventures to scale its machine identity platform, supporting Zero Trust adoption across critical systems. The partnership targets growing demand for secure machine-to-machine communication in sectors like defense, energy, and space. Corsha’s mIDP technology enables real-time authentication and deployment, positioning it as key infrastructure for national security. (Source)

⚖️ Policy and Regulatory

Monzo Fined £21.1 Million for Failing to Prevent Financial Crime During Rapid Growth

The U.K.’s Financial Conduct Authority (FCA) has fined Monzo Bank £21.1 million (approximately $28.6 million) for failing to maintain adequate systems to prevent financial crime between 2018 and 2022. The FCA cited poor due diligence practices that allowed high-risk customers to open accounts using implausible addresses like Buckingham Palace, and noted that Monzo failed to address compliance issues even after regulatory warnings. The digital bank’s rapid growth outpaced its onboarding controls, with over 34,000 high-risk accounts potentially added after a 2020 review. Monzo acknowledged the shortcomings, stating the issues are historical and have since been addressed. (Source)

Barclays Fined £42 Million for AML Failures in WealthTek and Stunt & Co Cases

The UK Financial Conduct Authority (FCA) has fined Barclays Bank UK and Barclays Bank a total of £42 million for significant lapses in managing financial crime risks linked to two separate cases involving WealthTek and Stunt & Co. Barclays Bank UK failed to verify WealthTek’s authorisation before opening a client money account, risking misappropriation of £34 million, and has pledged £6.3 million in voluntary payments to impacted clients. Separately, Barclays Bank did not properly assess or monitor risks tied to bullion firm Stunt & Co, which was linked to a broader money laundering scheme involving £46.8 million. Despite law enforcement warnings, the bank did not reassess the relationship. The FCA acknowledged Barclays’ cooperation and ongoing efforts to improve its anti-money laundering controls. (Source)

Zuckerberg to Testify in $8 Billion Shareholder Trial Over Facebook’s Privacy Failures and 2012 FTC Violation

Meta CEO Mark Zuckerberg is set to testify in a shareholder-led $8 billion trial alleging that he and other executives allowed Facebook to operate in violation of a 2012 FTC agreement protecting user privacy. The case stems from the 2018 Cambridge Analytica scandal, which exposed how millions of users’ data were misused, leading to significant financial penalties for Meta, including a $5 billion FTC fine. Shareholders seek reimbursement from Zuckerberg and other former leaders, including Sheryl Sandberg and Marc Andreessen. The Delaware trial, starting this week, will scrutinize past board actions and Meta’s data governance during a period of growing privacy scrutiny. (Source)

🔗 More from Liminal

Access Our Intelligence Platform

Stay ahead of market shifts, outperform competitors, and drive growth with actionable intelligence.

Save your spot: Tackling First-Party Fraud Demo Day

Discover how 10 leading vendors are stopping chargebacks, promo abuse, and refund fraud in real time.

Link Index for Data Access Control

Discover the top 24 vendors shaping Data Access Control in 2025. This Link Index reveals how organizations are managing permissions, securing sensitive data, and aligning with evolving compliance demands.

Link Index for AI Data Governance

Discover how top vendors are shaping the future of AI Data Governance through scalable controls, model oversight, and real-time compliance across complex data environments.

Link Index for Ransomware Prevention

Explore the latest Link Index on Ransomware Prevention, featuring 22 top vendors helping organizations stay resilient against evolving cyber threats.

The post This Week in Identity appeared first on Liminal.co.


IDnow

Future of AML identification.


PingTalk

Software Is Alive & Well in the Age of Cloud

Discover why modern identity software remains essential for enterprises needing flexibility, control, and resilience without compromising on cloud-native agility.

The cloud is here, and it’s thriving, but so is enterprise software. While the industry buzzes with the promise of SaaS-first strategies and fully cloud-native ecosystems, a powerful truth remains: not all businesses can or should go all-in on the cloud. And that’s perfectly okay.

The rise of the cloud hasn’t diminished the value of software; it has reinvigorated it. Let’s explore why.


Metadium

Metadium 2025 H1 Activity Report

Dear Community,

As we wrap up the first half of 2025, we want to reflect on our progress and share our journey with you. Thanks to your continued interest and support, Metadium has achieved meaningful growth and transformation throughout the year’s first half. This report outlines the key milestones and advancements we’ve accomplished over the past six months.

Summary

On July 12, 2025, at 07:16:54 KST, the total number of blocks generated on the Metadium mainnet surpassed 100 million.

From January to June 2025, a total of 1,791,930 transactions were processed, and 71,685 new DIDs were created.

On February 1, 2025, Francisco Dantas Filho was officially appointed as the new CEO of Metadium.

A successful mainnet upgrade (go-metadium version m0.10.1) was completed to activate the Transaction Restriction Service (TRS).

With AI-powered MCP (Model Context Protocol) integration into the WEB2X platform, developers can now easily build services on Metadium using natural language commands.

Metadium’s mainnet development company officially joined the Digital Identity Technology Standard Forum, enabling broader application of Metadium’s distinctive DID technology in Korea’s digital identity ecosystem.

The AI-based conversational blockchain explorer ‘MChat’ officially launched, offering users a more intuitive and interactive way to query Metadium mainnet data using natural language.

Technology

H1 Monthly Transactions

From January to June 2025, a total of 1,791,930 transactions were processed, and 71,685 DID wallets were created.

100 Million Block Milestone

As of July 12, 2025, at 07:16:54 KST, the Metadium mainnet reached the significant milestone of 100 million blocks generated. This achievement highlights the stability and operational continuity of the Metadium blockchain, reinforcing the strength and reliability of its ecosystem.

Appointment of New CEO

On February 1, 2025, Francisco D. Filho was officially appointed as Metadium’s new CEO. With his exceptional leadership and vision, we anticipate continued sustainable growth for Metadium.

For more details, please click here.

Mainnet Update

We are pleased to announce the successful completion of the mainnet update (go-metadium version m0.10.1), which activates the Transaction Restriction Service (TRS). This update significantly bolsters the security and stability of our services, strengthening our commitment to providing an exceptional experience for our users.

For more details, please click here.

WEB2X-MCP Integration

The WEB2X platform has been upgraded with AI-powered MCP (Model Context Protocol) functionality, allowing developers to build on the Metadium blockchain using natural language commands. This update significantly lowers the entry barrier and expands the possibilities for developers to create intuitive blockchain services on Metadium.

For more details, please click here.

Membership in the Digital Identity Technology Standard Forum

Metadium’s mainnet development company officially joined the Digital Identity Technology Standard Forum, Korea’s key standardization body in the digital identity sector. With this membership, Metadium’s decentralized identity technology is expected to be more actively utilized and contribute to ecosystem-wide standardization efforts.

For more details, please click here.

Official Launch of AI Explorer MChat

The AI-powered conversational blockchain explorer ‘MChat’ has officially launched. Users can interactively explore blockchain data by entering wallet addresses, transaction hashes, or block numbers and asking questions in natural language. This launch makes it easier for non-technical users to understand and access Metadium mainnet data in a more user-friendly format.

For more details, please click here.

Metadium will continue to pursue innovation and build a blockchain ecosystem that delivers real value to users and the community.

Thank you, as always, for your unwavering support.

The Metadium Team

Website | https://metadium.com Discord | https://discord.gg/ZnaCfYbXw2 Telegram(EN) | http://t.me/metadiumofficial Twitter | https://twitter.com/MetadiumK Medium | https://medium.com/metadium

Metadium 2025 H1 Activity Report was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 17. July 2025

myLaminin

From Risk to Readiness: Winning at Electronic Record Compliance

In today’s data-driven world, electronic record compliance is essential to operational success and regulatory survival. From GDPR to HIPAA and PIPEDA, organizations must follow strict standards to store, protect, and dispose of data responsibly. Tools like audit trails, encryption, and role-based access help reduce risk. For research institutions, platforms like myLaminin simplify this process—supporting secure, compliant, and collaborative data management at every stage.
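To make the interplay between role-based access and audit trails concrete, here is a minimal sketch. All names (roles, the `access` helper) are hypothetical illustrations, not myLaminin's API: every access decision, allowed or denied, is appended to an audit log so compliance reviewers can reconstruct who touched which record and when.

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping for a research data platform.
ROLE_PERMISSIONS = {
    "researcher": {"read"},
    "data_steward": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

audit_log = []  # in production this would be append-only, tamper-evident storage


def access(user, role, action, record_id):
    """Check permission and record the attempt, whether or not it is allowed."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "record": record_id,
        "allowed": allowed,
    })
    return allowed


# A researcher may read a record but not delete it; the denial is still logged.
access("alice", "researcher", "read", "rec-42")    # True
access("alice", "researcher", "delete", "rec-42")  # False, but audited
```

The key design point is that denials are logged alongside grants: regulations such as GDPR and HIPAA care as much about attempted misuse as about legitimate access.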

Ockto

A World of Wallets, APIs, and AI – Data Sharing 2030 – VIP Congres

In this episode of the Data Sharing Podcast, a slightly different format than usual: it is a live recording from the VIP Congres, where Gert-Jan van Dijke (Director Accounts at Ockto) spoke about the future of data sharing. No studio, but a stage in front of a live audience, with sharp insights into what it takes to enable future-proof customer journeys.