Last Update 12:58 PM May 26, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Sunday, 26. May 2024

Lockstep

A creative response to Generative AI faking your voice

With Generative AI being used to imitate celebrities and creators, the question arises, is your likeness a form of intellectual property (IP)? Can you trademark your face or copyright your voice? These questions are on the bleeding edge of IP law and could take years to resolve. But I find there may be a simpler... The post A creative response to Generative AI faking your voice appeared first on

With Generative AI being used to imitate celebrities and creators, the question arises, is your likeness a form of intellectual property (IP)? Can you trademark your face or copyright your voice?

These questions are on the bleeding edge of IP law and could take years to resolve. But I find there may be a simpler way to legally protect personal appearance.

On my reading of technology-neutral data protection law, generating likenesses of people without their permission could be a privacy breach.

Let’s start with the generally accepted definition of personal data as any data that may reasonably be related to an identified or identifiable natural person. Personal data (sometimes called personal information) is treated in much the same way by the California Privacy Rights Act (CPRA), Europe’s General Data Protection Regulation (GDPR), Australia’s Privacy Act, and the new draft American Privacy Rights Act (APRA).

These regulatory approaches to privacy place limits on how personal data is collected, used and disclosed. If personal data is collected without a good reason, or in excess of what’s reasonable for the purpose, or without the knowledge of the individual concerned, then privacy law may be breached.

What’s more, technology neutrality in privacy law means it does not matter how personal data comes to be held in a storage system; if it’s there, it may be deemed to have been collected.

Collection may be done directly and overtly via forms, questionnaires and measurements, or indirectly and subtly by way of acquisitions, analytics and algorithms.

To help stakeholders deal with the rise of analytics and Big Data, the Australian privacy regulator developed the Guide to Data Analytics and the Australian Privacy Principles, which explains that:

“The concept of ‘collects’ applies broadly, and includes gathering, acquiring or obtaining personal information from any source and by any means. This includes collection by ‘creation’ which may occur when information is created with reference to, or generated from, other information” (underline added).

That guidance should apply to Deep Fakes, for what are digital images and voices if not data?

Digital recordings are series of ‘ones and zeros’ representing optical or acoustic samples that can be converted back to analog to be viewed or heard by people. If those sounds and images are identifiable as a natural person—that is, the output looks like or sounds like someone in particular—then logically that data is personal data about that person.

[Aside: If it seems like a stretch to label digitally sampled light and sound as personal data, then consider digital text. That too is merely ‘ones and zeros’, in this case representing coded characters, which can be converted by a display device or printer to be human readable. If those characters form words and sentences which relate to an identifiable individual, then the ones and zeros from which they were all derived are clearly treated by privacy law as personal data.]

 

And it ought not matter under technology-neutral privacy law if an identifiable image or sound was recorded from real life or synthesised by software: the law would apply in both cases.

The same sort of interpretation would seem to be available under any similar technology-neutral data protection regime.

That is, if a Generative AI model makes a likeness of a real-life individual Alice, then we can say the model has collected [by creation] personal information about Alice, and the operation of the model could be subject to privacy law.

I am not a lawyer, but this seems to me easy enough to test in a ‘digital line-up’. If a face or voice is presented to a sample of people, and an agreed percentage of them say the face or voice reminds them of Alice, then that would be evidence that personal data of Alice has been collected.
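By way of illustration only, here is a minimal sketch of how such a digital line-up might be scored statistically. The panel size, recognition count, threshold, and the use of a simple one-sided binomial test are all illustrative assumptions, not anything prescribed by privacy law or proposed in the original post:

```python
# Illustrative sketch: scoring a hypothetical "digital line-up".
# Assumption: personal data is deemed collected if significantly more than an
# agreed fraction of panellists say the likeness reminds them of Alice.
from scipy.stats import binomtest

panel_size = 100          # hypothetical number of people shown the face/voice
recognised_as_alice = 78  # hypothetical number who say it reminds them of Alice
threshold = 0.5           # agreed-in-advance recognition threshold

result = binomtest(recognised_as_alice, panel_size, p=threshold, alternative="greater")
print(f"Recognition rate: {recognised_as_alice / panel_size:.0%}")
print(f"p-value that the true rate exceeds {threshold:.0%}: {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Evidence that the likeness is identifiable as Alice.")
```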

Moreover, if it was found that the model was actually prompted to mimic someone, then the case would be pretty strong, shall we say, on its face.

The post A creative response to Generative AI faking your voice appeared first on Lockstep.

Saturday, 25. May 2024

Spherical Cow Consulting

Preparing for the Quantum Shift in Cybersecurity

The internet's security relies on complex math, or cryptography. However, with quantum computers on the horizon, current encryption could become easily breakable. Post-quantum cryptography research is now focused on developing new, quantum-resistant methods. With the possibility of large-scale quantum computers within the next twenty years, organizations must prepare for the quantum apocalypse, w

The security, confidentiality, and integrity of the Internet are ultimately based on really hard math, aka cryptography. Computers get faster every few years (Moore’s Law, anyone?), meaning those math problems need to be That Much Harder to remain the foundation of security for the digital world. But what happens when our computers shift from transistors to quantum mechanics? That’s when everything based on today’s really hard math problems becomes laughably easy to crack.

Cybersecurity enthusiasts are not looking forward to that moment; it will be an Internet-wide emergency like we’ve never seen before. So, when can we expect quantum computing to break Internet security? Great question! That emergency may happen pretty soon. Or it may be a few more years. Some conspiracy theorists suggest it has already happened, and hackers are keeping quiet about it so they can suck up all the data with none the wiser. Ultimately, no one knows, though there are various predictions out there.

What is Quantum Mechanics?

You’ve probably heard about atoms. Teeny tiny particles that combine in various ways to make matter like oxygen and iron. Those teeny tiny particles are made up of even teenier tinier particles, and those sub-atomic particles are like magic.

“Any sufficiently advanced technology is indistinguishable from magic.” Arthur C. Clarke

Those subatomic particles offer some very interesting behavior that scientists and engineers can sometimes predict and manipulate. Predicting and manipulating those subatomic particles is what quantum mechanics is all about, and when it works, it is blindingly fast.

Quantum Computing

Scientists have been studying quantum mechanics for over a hundred years. It wasn’t until 1982, when Richard Feynman published his famous paper, Simulating Physics with Computers, that people started thinking seriously about the possibilities. Things got exciting in 1994 when Peter Shor developed a quantum algorithm. That algorithm was the warning bell that cryptographic systems based on solving today’s complex math problems would be blown out of the water.

What that algorithm could do was find the prime factors of large numbers in far shorter timeframes than ever before. You have probably seen those tables about how long it takes to crack a password (like this one or this one) that show millions or billions of years to crack a long, complicated password. Now, imagine those numbers cut down to days. More bluntly, what once took a few billion years to solve using traditional computing and existing algorithms will take days with quantum computing.
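To make the threat concrete, here is a toy sketch (not Shor’s algorithm itself, which needs a quantum computer) showing that RSA-style security collapses as soon as the modulus can be factored; the tiny key size is purely illustrative:

```python
# Toy illustration: once n can be factored, the RSA private exponent falls out.
# Real keys use 2048-bit or larger moduli, which classical trial division cannot
# touch -- but Shor's algorithm on a large quantum computer could.
from math import isqrt

def trial_factor(n):
    """Brute-force factoring; only feasible because n is tiny."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

n, e = 3233, 17            # toy RSA public key (n = 61 * 53)
p, q = trial_factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent recovered from the factors
message = 65
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))  # prints 65: the "secret" is recovered
```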

Post-Quantum Cryptography

With Shor’s Algorithm kicking things off, research is underway to develop more quantum algorithms that can be used as the basis of new cryptographic methods. The U.S. National Institute of Standards and Technology (NIST) has a very active program focused entirely on this area of study.

The goal of this program isn’t just new, advanced math. It’s advanced math that can interoperate with both new and traditional protocols and networks. It takes an amazing mind to figure out how to develop an algorithm that can withstand attacks from quantum computers when the computers that use quantum mechanics are not quite ready for prime time. (See what I did there?)

NIST’s take on the probability and timing of the quantum apocalypse scenario is:

“While in the past it was less clear that large quantum computers are a physical possibility, many scientists now believe it to be merely a significant engineering challenge. Some engineers even predict that within the next twenty or so years sufficiently large quantum computers will be built to break essentially all public key schemes currently in use.”

Looking into the Void

The world is on the brink of having new computers that can crack code faster than dropping fine china on a granite countertop. Even if post-quantum cryptography is ready for real-world use around the same time those new computers are a reality, we still have a big problem. Great! What happens to all the material that people, organizations, and governments have been collecting and storing, waiting for the day they can crack it right open? Because it’s unrealistic to say that data collection isn’t happening.

The number of cybersecurity attacks reported goes up every year. According to Harvard Business Review, breaches spiked in 2023. And all that is just what we know about. There is a lot of data already out there, and it’s unlikely that said data is encrypted with quantum-resistant algorithms.

When the first sufficiently large quantum computer is produced, we will see some interesting and terrifying information exposures. And if you want to freak out and head to your closest off-the-grid bunker, imagine AI powered by quantum computers.

Planning for an Emergency

Like all animals, humans are terrible at staying in emergency mode for long. And preparing for something as mind-blowing as the quantum apocalypse when we don’t know the timing is difficult. Do we treat it as an imminent emergency? Or do we slide it behind all the other emergencies clamoring for our attention? The World Economic Forum breaks down the steps that organizations should build into their plans into three items:

Be aware that changes in computing will happen that will make your current security models irrelevant.
Train your workforce to stay aware of changes in best practices for cryptography, and conduct assessments to identify which of your data is most sensitive to advances in cryptography.
Remember that your IoT devices and other digital assets may also be impacted.

Wrap Up

The Quantum Apocalypse has many cybersecurity practitioners biting their nails, as responding to the brave new world of quantum computing will not be simple, quick, or cheap. It has business leaders thinking about retirement. And it’s just another Urgent Event for everyone else to worry about (because we don’t have enough to worry about today).

Still, the only thing worse than an apocalyptic event is an apocalyptic event no one saw coming. While there may be nothing for you and your organization to do right now beyond staying aware, that awareness will put you ahead of the game when this new computing paradigm becomes available to the world.

I love to receive comments and suggestions on how to improve my posts! Feel free to comment here, on social media, or whatever platform you’re using to read my posts! And if you have questions, go check out Heatherbot and chat with AI-me.

The post Preparing for the Quantum Shift in Cybersecurity appeared first on Spherical Cow Consulting.


FindBiometrics

Officials Voice Schengen EES Concerns from Both Sides of the Channel

French Transport Minister Patrice Vergriete has voiced concerns about the upcoming rollout of the Entry/Exit System (EES) at Schengen area entry points, citing potential operational issues due to a lack […]

Friday, 24. May 2024

Entrust

New Publication From the Cloud Security Alliance (CSA): Hardware Security Modules as a Service

I’ve been part of the Cloud Security Alliance (CSA) Cloud Key Management working group for... The post New Publication From the Cloud Security Alliance (CSA): Hardware Security Modules as a Service appeared first on Entrust Blog.

I’ve been part of the Cloud Security Alliance (CSA) Cloud Key Management working group for 3+ years. We try to meet up virtually every two weeks, skillfully facilitated by our CSA staff member Marina Bregkou. The working group is formed from a dozen or so people, some of whom attend regularly and some of whom drop in and out. It can sometimes be tricky to make time in our calendars while still doing our day jobs; however, the mix of backgrounds, experience, and characters that make up our group keeps us engaged and enthusiastic.

We’ve been working on a paper for many months, discussing hardware security modules (HSMs) and, in particular, their as-a-service manifestation (HSMaaS). If you’re not familiar with HSMs and their cloud-based as-a-service relatives, here’s an introduction to HSMs.
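For a rough feel of what interacting with an HSM looks like in code, here is a minimal sketch through the standard PKCS#11 interface. The python-pkcs11 library, the SoftHSM module path, token label, and PIN are illustrative assumptions and not drawn from the CSA paper or any particular HSMaaS offering:

```python
# Illustrative PKCS#11 sketch (assumed library: python-pkcs11, module: SoftHSM2).
# An HSM (or HSM-as-a-Service endpoint) generates and holds the key; the
# application only ever sees ciphertext and key handles, never raw key bytes.
import pkcs11

lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")   # module path is an assumption
token = lib.get_token(token_label="demo-token")        # hypothetical token label

with token.open(user_pin="1234") as session:           # hypothetical PIN
    key = session.generate_key(pkcs11.KeyType.AES, 256)  # key never leaves the HSM
    iv = session.generate_random(128)                     # 128-bit IV
    ciphertext = key.encrypt(b"payroll record", mechanism_param=iv)
    plaintext = key.decrypt(ciphertext, mechanism_param=iv)
    assert plaintext == b"payroll record"
```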

The CSA approach is to always remain vendor agnostic so no one contributor can shift the content of the paper to promote or discuss specific product or vendor solutions. However, that doesn’t stop us from sharing our experience and insight.

I’m sure that, as with other CSA working groups (of which there are 20 or more), the major milestone is when we publish a paper. It is the culmination of weeks of contribution, discussion, cogitation, and deliberation. Recently we published HSM-as-a-Service Use Cases, Considerations, and Best Practices, which tackles the following topics:

The definition and architecture of an HSM
The current and future state of the HSMaaS market
Industry, compliance, and risk use cases for the HSMaaS model
The importance of clearly defined responsibilities in the HSMaaS model
Security considerations for HSMs
Key management considerations unique to HSMaaS
Important considerations when setting up governance for HSMs
HSM vendor selection best practices

We hope it will be a useful, impartial reference for cloud service customers, whose industry, compliance, security, or risk drivers necessitate increased control over HSMs and key-management operations.

Once you’ve read the CSA paper and decide you want to use a trusted HSM-as-a-Service provider with 25+ years’ experience and global coverage, I suggest you check out Entrust nShield as a Service.

The post New Publication From the Cloud Security Alliance (CSA): Hardware Security Modules as a Service appeared first on Entrust Blog.


FindBiometrics

Osaka Expo to Support Biometric Payments, Care of NEC

The 2025 Osaka Kansai Expo will adopt a facial recognition system for payments and visitor access management, marking one of Japan’s largest implementations of this technology. NEC developed the system, […]

Entrust

Resolving the Zero Trust Encryption Paradox

PKI and cryptography are critical components of a Zero Trust strategy, driving the use of... The post Resolving the Zero Trust Encryption Paradox appeared first on Entrust Blog.

PKI and cryptography are critical components of a Zero Trust strategy, driving the use of encryption to keep identities, devices, data, connections, and communications secure. Like many things, increased use of encryption starts with good intentions, but may have unintended consequences. In this case, the proliferation of certificates across the organization may create a management and ownership challenge, adding cyber risk. This is the Zero Trust encryption paradox, which is why two of the critical early steps on an organization’s Zero Trust journey are to identify and inventory all cryptographic assets, followed by establishing clear ownership.

As a clear example, the 2024 State of Zero Trust & Encryption Study sponsored by Entrust highlights how a lack of skilled personnel and no clear ownership make the management of credentials painful (cited by 50% and 47% of respondents, respectively). At the same time, 59% of respondents say managing keys has a severe impact on their organizations.

Establishing visibility and clear ownership of cryptographic assets may seem very logical and manageable on the surface, but today’s reality is far more complex. Long gone are the days of a massive Active Directory implementation to service an entirely on-prem certificate authority (CA). Today’s digital ecosystem spans servers, applications, networks, identities, infrastructure, hardware, and endpoints, with some data residing in the cloud and other data on-prem. New use cases continue to add to PKI deployment complexity as more teams use certificates, causing PKI sprawl. Additionally, in their zeal to implement a Zero Trust strategy, different siloed teams from IT to security to infrastructure and beyond often acquire their own certificate authorities, deploying PKI and certificates without proper governance. In fact, the Entrust study revealed that 37% of organizations polled cited unmanaged certificates as a main area of concern that might result in the exposure of sensitive or confidential data.

Why an enterprise-wide Zero Trust strategy is critical

Without an enterprise-wide Zero Trust strategy, this increased use of PKI can actually increase an organization’s vulnerability both today and in the future when planning for PQC migration. Plus, expired certificates can cause significant organizational disruption, costing time and money to locate the expired certificate and identify everywhere it was installed.

Today, PKI and cryptographic assets are critical infrastructure, expanding in number, and essential to a Zero Trust strategy. However, it is a false assumption to think that systems will be secured forever with conventional PKI cryptography, and the magnitude of this risk is often unknown because organizations lack enterprise-wide crypto asset visibility. Simply put, you can’t manage what you can’t see. CISA’s Zero Trust Maturity Model (ZTMM) features five pillars underpinned by three core tenets: Visibility and Analytics, Automation and Orchestration, and Governance. Visibility in the CISA ZTMM refers to observable artifacts that result from the characteristics of and events within enterprise-wide environments. This focus on cyber-related data analysis helps inform policy decisions, facilitate response activities, and build a proactive security risk profile.

Essential early steps

So how do you resolve the Zero Trust encryption paradox? The first step is to establish clear ownership, with a team accountable for the enterprise-wide Zero Trust and encryption strategy and migration. And it appears more organizations are taking this to heart, with dedicated PKI specialists operating under CxO oversight rather than leaving PKI as the domain of IT generalists. Next is to inventory data and flows to identify the location of high-value data at rest, in transit, and in use. With ownership and data flows identified, the next step is to inventory the organization’s cryptographic assets, usually a combination of automated and manual effort to identify all keys, certificates, secrets, and libraries. Armed with this information, the dedicated team is now able to draft a crypto-agility strategy. This is a critical milestone in the Zero Trust journey, mitigating the organization’s crypto-related risk – including people, processes, and technology – with built-in capabilities.
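As a rough illustration of the automated side of that inventory step, here is a minimal sketch that walks a directory tree and lists the certificates it finds. The starting path and the use of Python’s cryptography package are illustrative assumptions, not a description of Entrust tooling; real discovery also covers network endpoints, key stores, and cloud APIs:

```python
# Minimal sketch: inventory PEM-encoded certificates found on a file system.
from pathlib import Path
from cryptography import x509

def inventory_certs(root: str):
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".pem", ".crt", ".cer"} or not path.is_file():
            continue
        try:
            cert = x509.load_pem_x509_certificate(path.read_bytes())
        except ValueError:
            continue  # not a PEM certificate
        yield {
            "file": str(path),
            "subject": cert.subject.rfc4514_string(),
            "issuer": cert.issuer.rfc4514_string(),
            # not_valid_after_utc needs cryptography >= 42; older versions use not_valid_after
            "expires": cert.not_valid_after_utc.isoformat(),
        }

for entry in inventory_certs("/etc/ssl"):   # illustrative starting point
    print(entry)
```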

And there you have it: An enterprise-wide Zero Trust strategy with clear ownership and crypto asset visibility and agility not only resolves the Zero Trust encryption paradox, but also provides a strong foundation for the next step of your Zero Trust journey.

The post Resolving the Zero Trust Encryption Paradox appeared first on Entrust Blog.


Zero Trust and AI: You Can’t Have One Without the Other

Cyberattacks were forecast to have cost the global economy $8 trillion USD in 2023, and... The post Zero Trust and AI: You Can’t Have One Without the Other appeared first on Entrust Blog.

Cyberattacks were forecast to have cost the global economy $8 trillion USD in 2023, a number projected to grow to $10.5 trillion by 2025. In many cases, the scale and effectiveness of these attacks are being fueled by artificial intelligence (AI), especially deepfakes. Our Identity Fraud Report 2024 shows a 31x increase in the volume of deepfake attempts between 2022 and 2023. Facing this intensifying threat landscape, governments and enterprises around the world are scrambling to implement Zero Trust strategies to improve their cyber-risk posture and resilience. As evidence of the importance of Zero Trust to secure the organization, just 18% of respondents in the 2024 State of Zero Trust & Encryption Study sponsored by Entrust say that Zero Trust is not a priority at this time.

While previous Zero Trust journeys may have sputtered due to the limits of existing technology and a rigorous framework, AI is a game changer. On the surface, Zero Trust and AI may appear to be polar-opposite concepts with the former framed by the strict “Never Trust, Always Verify” principle, while the latter is characterized by both the promise and fear of the great unknown. However, much like “opposites attract,” Zero Trust and AI are natural partners.

An AI-Powered Approach to Zero Trust

The Entrust report cites a lack of in-house expertise as the biggest challenge to implementing Zero Trust (named by 47 percent of respondents), which makes it apparent that additional resources are needed. Zero Trust demands constant vigilance, and that’s where AI’s ability to discover, classify, and process large volumes of distributed data comes in. AI can literally speed up the detection of and response to cyberattacks.

However, bad actors may try to poison or otherwise manipulate the training data to blunt the effectiveness of such AI systems. So, Zero Trust and AI are somewhat akin to the classic “which came first, the chicken or the egg?” question. AI-enhanced visibility and decision-making can increase Zero Trust effectiveness, but Zero Trust is needed to protect the integrity of the data being used to train the AI model.

CISA’s Zero Trust Maturity Model (ZTMM) 2.0 foreshadowed this emerging relationship between Zero Trust and AI with a significant focus on the modernization of the Identity and Devices domains to improve an organization’s cyber-risk posture. Some specific examples include:

Identity Verification – Establishing and maintaining trusted identity is a critical component of any Zero Trust strategy, yet this is becoming harder and harder with AI-generated fakes. This is where AI-enabled biometric identity verification can help level the playing field to identify deepfakes in real time.

Adaptive Authentication – AI-enabled authentication can dynamically adjust privileges to respond to real-time risk factors like device reputation, geolocation, and behavioral biometrics (see the sketch after this list). This AI-enabled approach aligns directly with Zero Trust’s “least privilege” construct.

Behavioral Analytics and Pattern Recognition – AI models that continuously learn and adapt to emerging patterns are ideal to analyze large volumes of distributed data to flag anomalies and potential threats. With this AI-enabled approach, Zero Trust’s “Never Trust, Always Verify” is more easily attainable.
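As a hedged sketch of the adaptive-authentication idea above, the following toy policy combines a few risk signals into an allow/step-up/deny decision. The signal names, weights, and thresholds are invented for illustration and do not describe any Entrust or CISA mechanism; production systems learn these parameters from data:

```python
# Toy risk-based (adaptive) authentication policy. All weights and thresholds
# below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Signals:
    device_reputation: float   # 0 (unknown/compromised) .. 1 (trusted, managed)
    geo_velocity_ok: bool      # False if "impossible travel" was detected
    behavior_score: float      # 0 (unlike the user) .. 1 (matches typical behavior)

def decide(signals: Signals) -> str:
    risk = (
        (1 - signals.device_reputation) * 0.4
        + (0.0 if signals.geo_velocity_ok else 0.4)
        + (1 - signals.behavior_score) * 0.2
    )
    if risk < 0.2:
        return "allow"              # low risk: least-privilege session proceeds
    if risk < 0.6:
        return "step-up (MFA)"      # medium risk: require stronger proof
    return "deny"                   # high risk: never trust, always verify

print(decide(Signals(device_reputation=0.9, geo_velocity_ok=True, behavior_score=0.8)))
print(decide(Signals(device_reputation=0.2, geo_velocity_ok=False, behavior_score=0.3)))
```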

So, there you have it: Zero Trust and AI are inextricably linked for organizational success and safety. With strict access controls, comprehensive visibility, and continual monitoring, Zero Trust lets organizations take advantage of the power of AI, while also helping to neutralize AI risks.

Learn more about Entrust’s identity-centric Zero Trust solutions.

The post Zero Trust and AI: You Can’t Have One Without the Other appeared first on Entrust Blog.


FindBiometrics

EU Calls for Digital ID Proposals, Setting Aside €20M

European Union authorities are starting to lay additional groundwork for the region’s digital ID ecosystem with a new call for proposals. Issued this month, “DIGITAL-2024-BESTUSE-TECH-06 – Accelerating the Best Use […]

IBM Blockchain

Enhancing triparty repo transactions with IBM MQ for efficiency, security and scalability

IBM MQ is a messaging system that allows parties to communicate with each other in a protected and reliable manner. The post Enhancing triparty repo transactions with IBM MQ for efficiency, security and scalability appeared first on IBM Blog.

The exchange of securities between parties is a critical aspect of the financial industry that demands high levels of security and efficiency. Triparty repo dealing systems, central to these exchanges, require seamless and secure communication across different platforms. The Clearing Corporation of India Limited (CCIL) recently recommended (link resides outside ibm.com) IBM® MQ as the messaging software requirement for all its members to manage the triparty repo dealing system.

Read on to learn more about the impact of IBM MQ on triparty repo dealing systems and how you can use IBM MQ effectively for smooth and safe transactions.

IBM MQ and its effect on triparty repo dealing systems

IBM MQ is a messaging system that allows parties to communicate with each other in a protected and reliable manner. In a triparty repo dealing system, IBM MQ acts as the backbone of communication, enabling the parties to exchange information and instructions related to the transaction. IBM MQ enhances the efficiency of a triparty repo dealing system across various factors:

Efficient communication: IBM MQ enables efficient communication between parties, allowing them to exchange information and instructions in real-time. This reduces the risk of errors and miscommunications, which can lead to significant losses in the financial industry. With IBM MQ, parties can make sure that transactions are executed accurately and efficiently. IBM MQ makes sure that the messages are delivered exactly once, and this aspect is particularly important in the financial industry.
Scalable and can handle more messages: IBM MQ is designed to handle a large volume of messages, making it an ideal solution for triparty repo dealing systems. As the system grows, IBM MQ can scale up to meet the increasing demands of communication, helping the system remain efficient and reliable.
Robust security: IBM MQ provides a safe communication channel between parties, protecting sensitive information from unauthorized access. This is critical in the financial industry, where security is paramount. IBM MQ uses encryption and other security measures to protect data, so that transactions are conducted safely and securely.
Flexible and easy to integrate: IBM MQ is a flexible messaging system that can be seamlessly integrated with other systems and applications. This makes it easy to incorporate new features and functionalities into the triparty repo dealing system, allowing it to adapt to changing market conditions and customer needs.

How to use IBM MQ effectively in triparty repo dealing systems

Follow these guidelines to use IBM MQ effectively in a triparty repo dealing system and make a difference:

Define clear message formats for different types of communications, such as trade capture, confirmation and settlement. This will make sure that parties understand the structure and content of messages, reducing errors and miscommunications.
Implement strong security measures to protect sensitive information, such as encryption and access controls. This will protect the data from unauthorized access and tampering.
Monitor message queues to verify that messages are being processed efficiently and that there are no errors or bottlenecks. This will help identify issues early, reducing the risk of disruptions to the system.
Use message queue management tools to manage and monitor message queues. These tools can help optimize message processing, reduce latency and improve system performance.
Test and validate messages regularly to ensure that they are formatted correctly and that the information is accurate. This will help reduce errors and miscommunications, enabling transactions to be executed correctly.

CCIL as triparty repo dealing system and IBM MQ

The Clearing Corporation of India Ltd. (CCIL) is a central counterparty (CCP) that was set up in April 2001 to provide clearing and settlement for transactions in government securities, foreign exchange and money markets in the country. CCIL acts as a central counterparty in various segments of the financial markets regulated by the Reserve Bank of India (RBI): the government securities segment (outright, market repo and triparty repo), and the USD-INR and forex forward segments.

As recommended by CCIL, all members are required to use IBM MQ as the messaging software for the triparty repo dealing system. IBM MQ v9.3 Long Term Support (LTS) release and above is the recommended software to have in the members’ software environment.

IBM MQ plays a critical role in triparty repo dealing systems, enabling efficient, secure, and reliable communication between parties. By following the guidelines outlined above, parties can effectively use IBM MQ to facilitate smooth and secure transactions. As the financial industry continues to evolve, the importance of IBM MQ in triparty repo dealing systems will only continue to grow, making it an essential component of the system.
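To make the guidelines above a little more concrete, here is a minimal sketch of putting and getting a message with IBM MQ from Python. The pymqi client library, the queue manager name, channel, host, and queue are illustrative assumptions, not CCIL-specified values:

```python
# Minimal put/get sketch against an IBM MQ queue manager (assumed library: pymqi).
# Queue manager, channel, host, and queue names below are placeholders.
import json
import pymqi

queue_manager = "QM1"
channel = "DEV.APP.SVRCONN"
conn_info = "mq.example.com(1414)"
queue_name = "TRIPARTY.TRADE.CAPTURE"

trade = {"trade_id": "TR-0001", "security": "GSEC-2031", "amount": 1_000_000}

qmgr = pymqi.connect(queue_manager, channel, conn_info)
try:
    queue = pymqi.Queue(qmgr, queue_name)
    queue.put(json.dumps(trade).encode("utf-8"))   # well-defined message format
    reply = queue.get()                            # in practice a separate consumer reads this
    print(json.loads(reply))
    queue.close()
finally:
    qmgr.disconnect()
```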

Ready to enhance your triparty repo transactions? Join us for a webinar on 6 June  to learn more about the CCIL’s notification and discover how IBM MQ can streamline your operations and ensure secure, reliable communication.

Visit the IBM MQ page to learn more

The post Enhancing triparty repo transactions with IBM MQ for efficiency, security and scalability appeared first on IBM Blog.


FindBiometrics

Leak of Biometric Police Data in India Signals Rising Risks

A significant data breach has exposed the sensitive biometric information of thousands of law enforcement officials and police applicants in India, with lost PII including fingerprints, facial images, and other […]

Identity News Digest – May 24, 2024

Welcome to FindBiometrics’ digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: Microsoft Azure to Require MFA Starting July […]

This week in identity

E54 - CyberArk and Venafi / QRadar and Palo Alto / Akamai and NoName Security

Summary In this episode, Simon and David discuss recent acquisitions in the identity and access management space, including Palo Alto's acquisition of QRadar, Akamai's acquisition of NoName, and CyberArk's acquisition of Venafi. They explore the importance of resilience in IAM infrastructure and the growing need for managing machine identities and workloads. The conversation highlights the chall

Summary

In this episode, Simon and David discuss recent acquisitions in the identity and access management space, including Palo Alto's acquisition of QRadar, Akamai's acquisition of NoName, and CyberArk's acquisition of Venafi. They explore the importance of resilience in IAM infrastructure and the growing need for managing machine identities and workloads. The conversation highlights the challenges and opportunities in securing non-human identities and the role of PAM in addressing these issues. They also touch on the dark web and identity-based threats.

Keywords

identity and access management, acquisitions, resilience, IAM infrastructure, machine identities, workloads, PAM, non-human identities, dark web, identity-based threats

Takeaways

Recent acquisitions in the IAM space include Palo Alto's acquisition of QRadar, Akamai's acquisition of NoName Security, and CyberArk's acquisition of Venafi.
Managing machine identities and workloads is a growing challenge in the IAM space.
PAM plays a crucial role in securing non-human identities.

Chapters

00:00 Introduction and Overview

02:40 Recent Acquisitions in the IAM Space

06:02 The Importance of Resilience in IAM Infrastructure

09:12 Managing Machine Identities and Workloads

15:23 The Role of PAM in Securing Non-Human Identities

26:14 Upcoming Presentation at Identiverse


FindBiometrics

INTERPOL Issues Request for Mobile Biometric Devices

INTERPOL has issued a solicitation for mobile biometric devices. “Open Call for Tender no. 7279” suggests that the International Criminal Police Organization is open to working with one or more […]

SC Media - Identity and Access

California school association hack hits nearly 55K

SecurityWeek reports that almost 54,600 individuals had their data potentially compromised in a cyberattack against the Association of California School Administrators, the U.S.'s largest umbrella group for school leaders, following an apparent ransomware attack last September.



Almost 400K impacted by CentroMed breach

San Antonio-based primary healthcare provider CentroMed had personally identifiable information from nearly 400,000 patients compromised following a data breach late last month, reports Cybernews.



Tens of millions of US criminal database info exposed

Cybernews reports that widely known threat actor USDoD has exposed a U.S. criminal database purportedly containing 70 million rows of sensitive information dating from 2020 to 2024, allegedly exfiltrated by the SXUL threat operation.



Cyberespionage schemes leveraged in escalating Moroccan gift card theft campaign

Moroccan hacking operation Storm-0539, also known as Ant Lion, has ramped up its gift card theft activities with cyberespionage tactics ahead of the Memorial Day holiday, according to BleepingComputer.



Northern Block

Self-Sovereignty: A Lost Vision or an Evolving Concept? (with Vladimir Vujovic)

Is the focus on government initiatives overshadowing self-sovereignty? Explore this in the latest episode of The SSI Orbit Podcast with Mathieu Glaude and Vladimir Vujovic. The post Self-Sovereignty: A Lost Vision or an Evolving Concept? (with Vladimir Vujovic) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

🎥 Watch this Episode on YouTube 🎥
🎧   Listen to this Episode On Spotify   🎧
🎧   Listen to this Episode On Apple Podcasts   🎧

About Podcast Episode

Are you curious about how Self-Sovereign Identity (SSI) has evolved over the years and where it is headed?

In this episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Vladimir Vujovic, Head of Digital Product Management at SICPA to discuss the journey and future of SSI and decentralized identity technologies. Vladimir, with his extensive background in product management and experience in the SSI space since 2017, offers valuable insights into the technological advancements, challenges, and opportunities in this domain.

From the early days of Evernym to the current state of mainstream adoption, Vladimir sheds light on the progress made and the hurdles still to be overcome. How have new credential formats and protocols impacted the industry? What role do government initiatives and large-scale pilots play in the broader adoption of SSI? Vladimir addresses these questions and more, providing a comprehensive overview of the SSI landscape.

In this conversation, you’ll learn:

Why government initiatives are taking center stage in digital identity.
How the focus on government wallets may affect the broader adoption of self-sovereign identity.
The potential risks and benefits of relying on government-driven frameworks like eIDAS 2.0.
Insights into parallel initiatives in other industries, such as seamless passenger travel in aviation.
Whether waiting for government approval is justifiable or if it’s hindering tangible progress.

Don’t miss out on this opportunity to gain valuable insights and expand your knowledge. Tune in now and start exploring the possibilities!

 

Key Insights:

The SSI space has evolved significantly since 2017, becoming a mainstream digital identity technology.
New credential formats and protocols, such as SD-JWT, MDoc, and OpenID4VC, have emerged, contributing to the industry’s growth.
Government initiatives like eIDAS v2 in Europe are driving adoption and creating expectations for significant market expansion.
The technical nature of SSI products requires companies to focus on specific use cases and industries to create value-added solutions.
Government bodies are increasingly exploring and building SSI capabilities in-house, impacting the role of technology vendors.

Strategies:

Focus on building value-added products or services on top of verifiable credentials.
Explore specific use cases and verticals to create tailored solutions for different industries.
Monitor and adapt to government initiatives and large-scale pilots to align with market trends.

Chapters:

00:00 – How things have evolved since Evernym’s early days in 2017
8:50 – Have we lost the vision of ‘Self-Sovereignty’ with an overfocus on Government ID?
20:30 – Complexity of exchange interactions in identity vs payments
24:10 – Evolution in thinking around building SSI-enabled products
31:00 – The complexities of issuing credentials vs verifying them
39:10 – How far open source goes and impacts product design
45:25 – Adoption curves, horizontal vs vertical product focus
51:45 – Where the technical SSI space is at today

Additional resources: Episode Transcript, eIDAS Overview, SICPA

About Guest

Vladimir Vujovic is the Head of Digital Product Management at SICPA. He has been building SSI products for over 7 years and has held various product management roles at Evernym, a pioneer in SSI technology, and now at SICPA, a leading provider of authentication, identification, and supply chain solutions. At SICPA, he manages innovation in R&D Digital, focusing on authentic and verifiable data and decentralized identity.

With more than 10 years of experience in product management across companies of different sizes, Vladimir has a proven track record of building enterprise software and a deep passion for product management. He has extensive knowledge and expertise in Self-Sovereign Identity (SSI) and the broader trends in decentralized identity.

LinkedIn: https://www.linkedin.com/in/vladimir-vujovic-2a2b743/

  The post Self-Sovereignty: A Lost Vision or an Evolving Concept? (with Vladimir Vujovic) appeared first on Northern Block | Self Sovereign Identity Solution Provider.



KuppingerCole

Identity Threat Detection and Response (ITDR)

by Mike Neuenschwander Threat detection for identity systems poses challenges that differ from endpoint, system, and network breaches, because users are considered trusted, provided sufficient measures such as strong authentication and MFA are utilized. But organizations have difficulty quantifying their identity assets, evaluating risk exposure, monitoring for attack vectors (including account ta

by Mike Neuenschwander

Threat detection for identity systems poses challenges that differ from endpoint, system, and network breaches, because users are considered trusted, provided sufficient measures such as strong authentication and MFA are utilized. But organizations have difficulty quantifying their identity assets, evaluating risk exposure, monitoring for attack vectors (including account takeovers, lateral movement, account data exfiltration), and enabling response teams to launch effective kill chains. Identity Threat Detection and Response (ITDR) solutions are designed to fill these requirements.

Ocean Protocol

Unveiling Market Dynamics: Winners of the Google Trends Analysis and Predictive Modeling

Podium Introduction Participants in the “Google Trends” Data Challenge analyzed the influence of public search interest on cryptocurrency market prices. The challenge required a detailed analysis of Google Trends data, integration of additional data sources, and the application of advanced ML methods to predict market behaviors. Essential tasks included conducting exploratory data analyses (EDA),
Podium Introduction

Participants in the “Google Trends” Data Challenge analyzed the influence of public search interest on cryptocurrency market prices. The challenge required a detailed analysis of Google Trends data, integration of additional data sources, and the application of advanced ML methods to predict market behaviors. Essential tasks included conducting exploratory data analyses (EDA), identifying correlations, and investigating how historical and current trends could forecast future market movements.

Data scientists across various expertise levels engaged in this challenge to determine Google Trends’ impact on cryptocurrency valuations. They examined data from the past and present to predict future trends and built models that could potentially guide real-world investment decisions. This competition highlighted the power of Google Trends data in predicting cryptocurrency markets and showcased the importance of combining various data sources to strengthen financial models.

The winners of the challenge, through their exceptional skills in interpreting complex datasets and developing predictive models, have not only enhanced their technical skills but also contributed significantly to advancing data-driven financial analysis in the cryptocurrency field. Their models offer insights that could influence investment strategies in the dynamic cryptocurrency market, inspiring others to follow in their footsteps.

Podium and Top-10 Submissions

The top submissions of this challenge were exceptional. Participants demonstrated outstanding abilities in utilizing ML and data analysis to probe and predict movements within the cryptocurrency market. Let’s explore the top three submissions that stood out due to their thorough analytics and insightful conclusions.

Top 10

1st Place: Anamaria

Anamaria secured first place in the “Google Trends” Data Challenge by applying advanced ML techniques to analyze the correlation between Google Trends data and cryptocurrency prices. Her analytical framework led to precise findings on the predictive nature of search data on cryptocurrency values. [Click here to see the report]

In her analysis, Anamaria’s data integration and preparation process culminated in a comprehensive analysis of multiple cryptocurrency trends from 2019 to 2024. She found that Google Trends data had a measurable impact on cryptocurrency prices, with a correlation coefficient often exceeding 0.5. This indicates a moderate to strong positive relationship, suggesting that changes in search interest could potentially lead to significant price movements. Her model highlighted that significant spikes in search volumes, especially noted in 2021, preceded price increases in cryptocurrencies like Bitcoin and Ethereum. In 2021 alone, the peak interest phase identified in her analysis coincided directly with the highest market prices observed during that year.

Anamaria’s findings, with a mean absolute error (MAE) of 3.53, mean squared error (MSE) of 28.47, and a root mean squared error (RMSE) of 5.34, have significant implications for investors and market analysts. Her research suggests that monitoring search trend spikes could provide a strategic advantage in predicting market movements, a practical application that can guide investment strategies in the dynamic cryptocurrency market.

2nd Place: Ekpenyong

Ekpenyong earned second place in the Data Challenge with his detailed statistical analysis of the relationship between Google Trends data and cryptocurrency prices. His project employed rigorous statistical methods to uncover significant insights about market dynamics. [Click here to see the report]

He used the Jarque-Bera test to check for normality in his analysis, revealing that cryptocurrency prices and Google Trends data were non-normally distributed with p-values less than 0.05. He also applied the Augmented Dickey-Fuller (ADF) test, confirming non-stationarity in the time series data for both prices and trends. This non-stationarity was further validated by the Phillips-Perron and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests, showing unit roots and time-varying statistical properties. These findings indicated that advanced models like Neural Networks were more suitable for his analysis due to their ability to handle non-stationary data.
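A minimal sketch of these normality and stationarity checks, using SciPy and statsmodels on a hypothetical price series (the CSV name and column are placeholders, not part of the winning submission; the Phillips-Perron test is omitted as it is not in statsmodels):

```python
# Sketch of the pre-modelling checks described above, on a hypothetical series.
import pandas as pd
from scipy.stats import jarque_bera
from statsmodels.tsa.stattools import adfuller, kpss

prices = pd.read_csv("btc_prices.csv", parse_dates=["date"], index_col="date")["close"]

jb_stat, jb_p = jarque_bera(prices)
adf_stat, adf_p, *_ = adfuller(prices)
kpss_stat, kpss_p, *_ = kpss(prices, regression="c", nlags="auto")

print(f"Jarque-Bera p={jb_p:.4f}  -> p < 0.05 suggests non-normal data")
print(f"ADF         p={adf_p:.4f} -> p > 0.05 suggests a unit root (non-stationary)")
print(f"KPSS        p={kpss_p:.4f} -> p < 0.05 suggests non-stationarity")
```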

Ekpenyong’s correlation analysis, using Spearman and Pearson coefficients, showed strong positive correlations between search trends and cryptocurrency prices for significant tokens. For example, Bitcoin and Ethereum exhibited correlation coefficients of 0.75 and 0.68, respectively. His Granger causality tests revealed significant bidirectional relationships for several cryptocurrencies, including Fetch.ai and Monero, with p-values less than 0.05, indicating that search interest could predict price changes and vice versa. He also conducted an optimal lag analysis, finding that a one-week lag often provided the highest correlation, with Fetch.ai showing a correlation coefficient of 0.84 at a two-week lag.

3rd Place: Ahan

Ahan secured third place in the Data Challenge with his rigorous analysis of the correlation between Google Trends data and cryptocurrency prices, focusing on various market capitalizations. His project involved detailed statistical examination and predictive modeling to uncover critical insights. [Click here to see the report]

He categorized cryptocurrencies into large, mid, and small-cap based on market capitalization, analyzing top assets like Bitcoin, Ethereum, and Solana. His exploratory data analysis (EDA) revealed that Bitcoin showed a 1200% increase in Google search interest from 2016 to 2017, correlating with a price surge from $1,000 to nearly $20,000. Similarly, from 2020 to 2021, Bitcoin’s search interest grew by 150%, with prices rising from approximately $7,200 to over $64,000. Ethereum showed a correlation coefficient of approximately 0.28, indicating that about 7.84% of its price variability could be explained by changes in Google search trends, particularly around significant network upgrades and DeFi activities.

Ahan’s analysis extended to the correlation and time-lagged impact of search trends on prices. He found that Dogecoin exhibited a high correlation coefficient of 0.59, suggesting that 34.81% of its price movements were linked to changes in Google Trends data. His time-lag analysis showed that Bitcoin’s price had an optimal correlation of 0.34 with a one-day lag, explaining about 11.56% of the price variability. In contrast, Ethereum’s optimal lag was seven days with a correlation of 0.48, accounting for 23.04% of price movements. On the other hand, Dogecoin had a significant correlation of 0.65 with a one-day lag, explaining 42.25% of its price variability, highlighting its sensitivity to social media and public sentiment.
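The "explained variability" percentages quoted above appear to be the squares of the correlation coefficients (for example, 0.59² ≈ 0.3481, i.e. 34.81%). A short sketch of that arithmetic and of the lagged-correlation analysis, on hypothetical data with placeholder file and column names:

```python
# Sketch: explained variability as r^2, and correlation at several lags.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd

r = 0.59
print(f"r = {r}  ->  r^2 = {r**2:.4f}  (~{r**2:.2%} of price variability)")

df = pd.read_csv("doge_trends_prices.csv", parse_dates=["date"], index_col="date")
for lag_days in (0, 1, 7, 14):
    r_lag = df["search_interest"].shift(lag_days).corr(df["price"])
    print(f"lag={lag_days:>2}d  r={r_lag:+.2f}  r^2={r_lag**2:.2%}")
```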

Interesting Facts

High Correlation Between Search Interest and Dogecoin Price Movements

Dogecoin’s price movements show a high correlation coefficient of approximately 0.59 with Google search trends. This indicates that 34.81% of Dogecoin’s price variability is directly influenced by changes in search interest, highlighting the strong impact of public sentiment on this cryptocurrency.

Optimal Time Lags for Predicting Bitcoin Prices

Analysis revealed that Bitcoin’s price had an optimal correlation of 0.34 with Google Trends data at a 1-day lag. This correlation explains 11.56% of the price variability, suggesting that Bitcoin’s market reacts quickly to changes in public interest.

Significant Increase in Bitcoin Search Interest During Bull Markets

From 2020 to 2021, Google search interest in Bitcoin increased by approximately 150%, correlating with a price rise from around $7,200 to over $64,000. This surge in interest and price underscores the strong relationship between public attention and market performance during bull runs.

ARIMA Model Predictive Accuracy for Cryptocurrency Prices

An ARIMA model used to predict cryptocurrency prices based on Google Trends data showed substantial accuracy, with a mean absolute error (MAE) of 3.53, mean squared error (MSE) of 28.47, and root mean squared error (RMSE) of 5.34. This demonstrates the model’s effectiveness in capturing the relationship between search trends and market prices.

Volatility and Search Trends in Smaller Market Cap Cryptocurrencies

Emerging cryptocurrencies like Ocean Protocol and SingularityNET exhibited strong correlations with Google search trends, especially within a week of changes in search behavior. This suggests that smaller market-cap cryptocurrencies are highly sensitive to public interest, with search trends as a predictive tool for price movements.

2024 Championship

Each challenge features a prize pool of $10,000, distributed among the top 10 participants. Our championship points system distributes 100 points across the top 10 finishers in each challenge, with each point valued at $100.

Contestants accumulate points toward the 2024 Championship by participating in challenges. Last year, the top 10 champions received an extra $10 for every point they earned.

Moreover, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists retain their intellectual property rights while we offer assistance in monetizing their creations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

Unveiling Market Dynamics: Winners of the Google Trends Analysis and Predictive Modeling was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Now is the Perfect Time to Upgrade to Ping’s Cloud | Ping Identity

In many ways, change defines the identity landscape of today. As new technologies, threats, and trends emerge, the identity and access management (IAM) space experiences overhauls, progressions, and evolutions. Our job at Ping Identity entails more than providing you with secure and seamless IAM experiences; we also help you anticipate the future landscape.    As many of today’s top

In many ways, change defines the identity landscape of today. As new technologies, threats, and trends emerge, the identity and access management (IAM) space experiences overhauls, progressions, and evolutions. Our job at Ping Identity entails more than providing you with secure and seamless IAM experiences; we also help you anticipate the future landscape. 

 

As many of today’s top C-suite executives “are accelerating investments in digital transformation,” it's not a matter of if you will migrate to the cloud, but rather when (1). Many of today’s leading Fortune 100 Companies have realized that Ping’s cloud is a one-stop solution to many of their most significant IAM deployment challenges. 

 

With robust scale and performance, diverse SaaS capabilities, multiple cloud tenancy offerings, comprehensive migration toolkits, and a host of case studies from some of the largest enterprises in the world, now is the perfect time to upgrade to Ping’s cloud.

Thursday, 23. May 2024

FindBiometrics

Microsoft Azure to Require MFA Starting July 2024

Microsoft Azure will mandate the use of multi-factor authentication (MFA) for all users beginning in July of 2024, as part of its Secure Future Initiative. The aim is to enhance […]

KuppingerCole

Adapting to Evolving Security Needs: WAF Solutions in the Current Market Landscape

Join us for a webinar where we will explore recent shifts in the WAF market and the rising prominence of WAAP solutions. Discover the latest security features and capabilities required by the WAF market in 2024. Gain valuable insights into market trends and key vendors and discover what differentiates the industry leaders. Additionally, we will address the challenges both vendors and customers e

Join us for a webinar where we will explore recent shifts in the WAF market and the rising prominence of WAAP solutions. Discover the latest security features and capabilities required by the WAF market in 2024. Gain valuable insights into market trends and key vendors and discover what differentiates the industry leaders.

Additionally, we will address the challenges both vendors and customers encounter in the changing market conditions. We will be also providing actionable strategies and recommendations for organizations to navigate the complexities of selecting and implementing WAF solutions. Do not miss this opportunity to stay informed about the dynamics of the WAF market.

Key Takeaways:

Overview of the WAF market and transition to WAAP
Basic & Advanced capabilities
Overview of the Vendors in the WAF Market
Challenges in Securing WAFs
Recommendations & Strategies to become Future-Proof

Join our research analyst, Osman Celik, as he presents our upcoming webinar on Web Application Firewalls. Osman has expertise not only in the WAF market, but also in attack surface management, network security, threat intelligence and vulnerability management, among other relevant cybersecurity solutions. In his recent research, Osman analyzes the motivations behind vendors integrating API security into their WAF solutions, ultimately transitioning towards Web Application and API Protection (WAAP). He examines both basic and advanced capabilities that organizations should expect from WAF vendors today.

In addition, Osman conducts a thorough analysis of the current WAF vendor landscape and identifies industry leaders based on their commitment to innovation, market presence, and overall product capabilities. In his evaluation of thirteen vendors, Osman identifies their strengths, challenges, and market positions, and provides insights for businesses navigating the complex WAF market. Do not miss the chance to receive strategic recommendations from Osman during the webinar.




FindBiometrics

How AWS and FIDO Laid the Foundation for Footprint’s $13M Series A

“Meanwhile, the FIDO2 alliance agreed on a standard for private and secure auth–what Apple calls passkeys–in Q4 of 2022. This was the final missing piece we saw coming to let […]

SC Media - Identity and Access

Unified Identity Security, Identity is Under Attack & Identity is Security - Andre Durand, David Bradbury, Wendy Wu - ESW #363


IBM Blockchain

Enhance your data security posture with a no-code approach to application-level encryption

ALE provides an additional layer of protection by encrypting data at its source and enhances data security, privacy and sovereignty posture. The post Enhance your data security posture with a no-code approach to application-level encryption appeared first on IBM Blog.

Data is the lifeblood of every organization. As your organization’s data footprint expands across the clouds and between your own business lines to drive value, it is essential to secure data at all stages of the cloud adoption and throughout the data lifecycle.

While there are different mechanisms available to encrypt data throughout its lifecycle (in transit, at rest and in use), application-level encryption (ALE) provides an additional layer of protection by encrypting data at its source. ALE can enhance your data security, privacy and sovereignty posture.

Why should you consider application-level encryption?

Figure 1 illustrates a typical three-tier application deployment, where the application back end is writing data to a managed Postgres instance.

Figure 1: Three-tier application and its trust boundary

If you look at the high-level data flow, data originates from the end user and is encrypted in transit to the application, between application microservices (UI and back end), and from the application to the database. Finally, the database encrypts the data at rest using either a bring your own key (BYOK) or keep your own key (KYOK) strategy.

In this deployment, both runtime and database admins are inside the trust boundary. This means you're assuming no harm from these personas. However, as analysts and industry experts point out, there is a human element at the root of most cybersecurity breaches. These breaches happen through error, privilege misuse, or stolen credentials, and this risk can be mitigated by placing these personas outside the trust boundary. So, how can we enhance the security posture by efficiently placing privileged users outside the trust boundary? The answer lies in application-level encryption.

How does application-level encryption protect from data breaches?

Application-level encryption is an approach to data security where we encrypt the data within an application before it is stored or transmitted through different parts of the system. This approach significantly reduces the various potential attack points by shrinking the data security controls right down to the data.
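
As a concrete illustration of this idea, here is a minimal Python sketch of an application encrypting a sensitive field before it is ever written to a database, so that database administrators only see ciphertext. It uses the generic "cryptography" package and a hypothetical db handle; it is an illustration of the pattern, not the IBM Data Security Broker implementation.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, a customer-managed key (e.g., BYOK/KYOK)
cipher = Fernet(key)

def save_customer(db, name: str, ssn: str) -> None:
    # Only ciphertext of the sensitive field ever reaches the database.
    db.insert({"name": name, "ssn": cipher.encrypt(ssn.encode())})

def load_ssn(record: dict) -> str:
    # Only the application holding the key can recover the clear text.
    return cipher.decrypt(record["ssn"]).decode()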

By introducing ALE to the application, as shown in figure 2, we help ensure that data is encrypted within the application. It remains encrypted for its lifecycle thereon, until it is read back by the same application in question.

Figure 2: Protecting sensitive data with application-level encryption

This helps make sure that privileged users on the database front (such as database administrators and operators) are outside the trust boundary and cannot access sensitive data in clear text.

However, this approach requires changes to the application back end, which places another set of privileged users (ALE service admin and security focal) inside the trust boundary. It can be difficult to confirm how the encryption keys are managed in the ALE service.

So, how are we going to bring the value of ALE without such compromises? The answer is through a data security broker.

Why should you consider Data Security Broker?

IBM Cloud® Security and Compliance Center (SCC) Data Security Broker (DSB) provides application-level encryption software with a no-code-change approach to seamlessly mask, encrypt and tokenize data. It enforces role-based access control (RBAC) with field and column level granularity. DSB has two components: a control plane component called DSB Manager and a data plane component called DSB Shield, as shown in Figure 3.

Figure 3: Protecting sensitive data with Data Security Broker

DSB Manager (the control plane) is not in the data path and runs outside the trust boundary. DSB Shield (the data plane component) seamlessly retrieves the policies (encryption, masking, RBAC) and uses customer-owned keys to enforce them, with no code changes to the application.

Data Security Broker offers these benefits:

- Security: Personally identifiable information (PII) is anonymized before ingestion to the database and is protected even from database and cloud admins.
- Ease: The data is protected where it flows, without code changes to the application.
- Efficiency: DSB supports scaling and, to the end user of the application, this results in no perceived impact on application performance.
- Control: DSB offers customer-controlled key management access to data.

Help to avoid the risk of data breaches

Data breaches come with the high cost of time-to-address, the risk of industry and regulatory compliance violations and associated penalties, and the risk of loss of reputation.

Mitigating these risks is often time-consuming and expensive due to the application changes required to secure sensitive data, as well as the oversight required to meet compliance requirements. Keeping your data protection posture strong helps avoid the risk of breaches.

IBM Cloud Security and Compliance Center Data Security Broker provides no-code application-level encryption on IBM Cloud and in hybrid multicloud environments with IBM Cloud Satellite® to protect your application data and enhance your security posture in line with zero trust guidelines.

Get started with IBM Cloud® Data Security Broker today

The post Enhance your data security posture with a no-code approach to application-level encryption appeared first on IBM Blog.


SC Media - Identity and Access

There are no bad machines – only ones that behave badly because of human error

Protecting non-human identities has become a real challenge – here’s what to do about them.



EBook: The state of identity 2024

Download the full eBook to see what we found and what to do about it.



FindBiometrics

EU Digital Wallet Framework Gets an Upgrade

On the heels of its regulatory framework coming into force, the European Union’s emerging digital ID system has now seen an upgrade to its open source architecture. The latest version […]

Dock

Governance Proposal: Fee Reduction and Verified IDs for Council Membership


As our network evolves, maintaining its integrity and efficiency becomes increasingly vital. 

To this end, we are excited to introduce two new significant governance proposals to enhance the transparency of Council membership and the efficiency of transaction fees.

In summary:

- We propose to reduce our length fee from 0.01 to 0.0001 tokens per byte to ensure that transactions and network upgrades become more cost-effective for our users.
- We propose that all council members have verified identities to reduce spam generated by anonymous members and to promote greater accountability and trust within our governance structure.

Below, we will delve into the details of these proposals and explain their rationale and anticipated impact on our community and network.

Reducing Transaction Fees for a More Efficient Network

Transaction fees on the Dock network depend on several factors, including the transaction's byte size, the amount of storage read or written, and the compute resources required.

Currently the network charges 0.01 tokens per byte of transaction, which we call the “length-fee”. For instance, if your transaction is 1000 bytes, the length-fee alone contributes 10 tokens (1000 * 0.01). 

We recognize that our length fee is significantly higher than that of some networks in the Polkadot ecosystem, such as KILT, Moonbeam or Centrifuge.

To address this, we are reducing the length fee by 100 times, bringing it down to 0.0001 tokens per byte.
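
To make the arithmetic concrete, here is a small Python sketch of the length-fee portion of a 1000-byte transaction before and after the change. The constants are taken from the figures above; note this covers the length fee only, since total fees also include storage and compute components.

OLD_LENGTH_FEE = 0.01     # tokens per byte (current)
NEW_LENGTH_FEE = 0.0001   # tokens per byte (proposed)

tx_size_bytes = 1000
print(tx_size_bytes * OLD_LENGTH_FEE)   # 10.0 tokens under the current fee
print(tx_size_bytes * NEW_LENGTH_FEE)   # 0.1 tokens under the proposed fee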

Here are examples of how this change will affect popular transactions:

- Transfer transactions will cost 1.3 tokens instead of 1.8 tokens.
- Creating DIDs will cost 2.75 tokens instead of 4.5 tokens.
- Revocation (status lists) will cost 2.9 tokens instead of 10.8 tokens.

Additionally, network upgrades will also become much cheaper, as their costs are primarily driven by their large size.

We believe this will ensure Dock’s network fees remain very competitive and entice more and more organizations to our platform. 

You can vote on this proposal here.

Introducing Verified Identities for Council Membership

Currently, anyone can become a Council member by locking a certain amount of tokens and submitting their candidacy. 

However, we have seen a recent increase in spam from anonymous members, including a failed attempt by a validator to get elected so they could pay themselves additional treasury rewards.

To dissuade future scam attempts, we propose that all Council members must have a verified identity, indicated by a green checkmark. 

This new requirement ensures transparency and accountability within the Council.

Prospective members must set up their identities and request verification before submitting their candidacy for Council membership. Once verified, they can proceed to submit their candidacy.

Learn more about setting up an identity here.

Vote on this governance proposal here.


FindBiometrics

Thales to Lead Africa’s First ISO-compliant National Digital ID Wallet Project

Mauritius is poised to become the first African country to implement a fully interoperable digital ID wallet based on ISO standards, thanks to a newly announced partnership with Thales. The […]

HYPR

What’s the State of Identity Assurance Today? Recap of the 2024 Report


Identity security is at a crossroads. As digital transformation accelerates, organizations are increasingly vulnerable to identity-focused attacks, which are now the primary entry point for cybercriminals. The incorporation of artificial intelligence (AI) into the attacker’s arsenal ups the stakes even higher. Cybercriminals recently stole $25 million from a multinational finance firm in a single stroke by impersonating executives using deepfake video and audio. 

With security teams grappling with unprecedented demands, we’ve expanded our seminal State of Passwordless annual report to encompass the broader identity security field. Now titled State of Passwordless Identity Assurance, this fourth edition investigates current and emerging identity threats to organizations, their security perspectives and practices, and greatest areas of vulnerability. Conducted by HYPR and Vanson Bourne, the report is based on interviews with 750 IT/IS decision makers, representing a cross-section of industries across the globe. The results expose the gap between evolving threats and outdated identity models, and the extent that this undermines global security and business growth.

Key Findings

- Widespread Vulnerabilities: 91% of breached organizations cited credential misuse as a critical failure point.
- Financial Impacts: The average cost of authentication-related breaches reached a staggering $5.48 million and identity fraud costs a further $2.78 million.
- Adoption Challenges: Despite the clear benefits, the shift towards more secure passwordless methods like passkeys is slow, hindered by implementation complexities and costs.

Threat Trends

Our research reveals an unsettling rise in identity theft and fraud, driven by the availability of compromised credentials and sophisticated phishing schemes. Nearly all (99%) surveyed organizations faced some kind of attack over the past 12 months and almost eight in ten (78%) were targeted by identity-based attacks.

Not surprisingly, phishing and malware are the most prevalent. Push notification attacks, also called MFA-prompt bombing, continue to be a favorite technique of modern hacking groups. Though near the bottom of the list just two years ago, push attacks now figure prominently in many recent high-profile incidents, including the widespread campaign against Apple users last month.

Types of cyberattacks faced in the last 12 months

The Widespread Impact of Identity Breaches

The consequences of identity security breaches are significant. A staggering 84% of organizations that experienced a cyberattack subsequently suffered a breach, with 62% experiencing multiple breaches. For those organizations that were breached, 91% blame authentication weaknesses and the misuse of credentials for one or more breaches, a notable increase from 82% the previous year.

These breaches not only carry significant financial burdens — with costs averaging $5.48 million — but also lead to customer loss, reputational damage, and substantial fines.

The Current Landscape: Overcoming Inertia

Many organizations remain tethered to outdated security practices that no longer suffice in the current digital era. The findings underscore the urgency to shift from traditional perimeter-based defenses to an identity-first security strategy. Today, the average organization struggles with the complexities of managing an expanding number of digital identities, brought on by remote work trends and the adoption of new technologies.

This transition stresses the need for robust identity security practices that not only prevent unauthorized access but also ensure a seamless user experience. Despite advancements, 99% of organizations still rely on outdated legacy authentication methods, highlighting a significant gap in adopting more secure and efficient solutions like passwordless authentication and continuous, automated identity verification.

On a positive note, four in ten (41%) of organizations plan to use passwordless authentication or passkeys over the next 1-3 years. In addition, 43% intend to incorporate identity verification into their identity security processes.

The Role of Artificial Intelligence in Identity Security

Artificial Intelligence (AI) presents both opportunities and threats for identity security. On the one hand, AI can enhance identity security protocols through adaptive and risk-based controls. On the other hand, cybercriminals are using AI to exploit vulnerabilities more effectively, creating tailored phishing messages and convincing deepfakes.

IT security decision-makers recognize the dual nature of AI, with the ability to prevent threats from generative AI (60%) and deepfake identity fraud (45%) ranking among their top concerns. Despite these challenges, three-quarters (75%) believe that adopting AI within their identity security stack will ultimately give them an advantage over cybercriminals.

Download the State of Passwordless Identity Assurance Report

As businesses continue to transform their operations and business models, they face unprecedented and dynamic security risks. The research underscores the urgent need for organizations to adopt a holistic, identity-first security strategy that leverages advanced technologies and continuous verification processes.

HYPR’s Identity Assurance platform empowers organizations with a comprehensive identity security approach so they can protect their digital identities, safeguard sensitive data, and ensure long-term business resilience in an increasingly digital world.

Get the full report on the State of Passwordless Identity Assurance in 2024.


Extrimian

Differences between zk-SNARKs and zk-STARKs


In the world of cryptography, two terms often come up: zk-SNARKs and zk-STARKs. Despite their similar names, these cryptographic tools have distinct features. In this guide, we’ll break down the key differences between zk-SNARKs and zk-STARKs, their applications, and how they relate to technologies like zk-Sync and Self-Sovereign Identity (SSI).

Understanding zk-SNARKs

zk-SNARKs, or Zero-Knowledge Succinct Non-Interactive Argument of Knowledge, are known for their compactness and efficiency in verifying proofs. They let parties verify statements without sharing sensitive info. Here’s what you need to know:

- Compactness: zk-SNARKs create small proofs, making them handy for blockchain networks and other systems with limited resources.
- Privacy-Preserving Transactions: In cryptocurrencies like Zcash, zk-SNARKs hide details like sender, recipient, and amount, ensuring transaction privacy.
- Decentralized Finance (DeFi): zk-SNARKs power privacy-preserving smart contracts, decentralized exchanges, and secure lending protocols in DeFi.

Figure: how zk-SNARKs work

Understanding zk-STARKs

On the other hand, zk-STARKs (Zero-Knowledge Scalable Transparent Arguments of Knowledge) prioritize scalability and transparency. They handle complex computations efficiently and offer transparent proof generation:

- Scalability: zk-STARKs handle complex tasks well, making them ideal for large-scale decentralized systems.
- Transparency: Unlike zk-SNARKs, zk-STARKs allow anyone to verify the entire proof generation process, boosting trust without relying on third parties.
- Secure Data Sharing: zk-STARKs enable secure data sharing in healthcare, supply chain management, and identity verification, keeping data private while ensuring integrity.

Comparing zk-SNARKs and zk-STARKs

While both zk-SNARKs and zk-STARKs focus on privacy-preserving proofs, they have key differences:

- Compactness: zk-SNARKs are more compact, while zk-STARKs prioritize scalability.
- Transparency: zk-SNARKs are opaque, whereas zk-STARKs are transparent.
- Applications: zk-SNARKs excel in DeFi and privacy-preserving transactions, while zk-STARKs are great for scalable systems and secure data sharing.

Main differences between zk-SNARKs and zk-STARKs

Feature            | zk-SNARKs                             | zk-STARKs
Compactness        | More compact proofs                   | Less compact proofs
Scalability        | Less scalable                         | Highly scalable
Transparency       | Non-transparent                       | Transparent
Verification Speed | Faster verification                   | Slower verification
Proof Generation   | Requires trusted setup                | Does not require trusted setup
Trust Dependency   | Relies on trusted setup               | Minimal trust dependency
Applications       | Privacy-preserving transactions, DeFi | Scalable decentralized systems, data sharing
Examples           | Zcash, StarkEx, Quorum                | zkPorter, ZKSwap

Linking zk-SNARKs and zk-STARKs with SSI

Both zk-SNARKs and zk-STARKs play crucial roles in technologies like Self-Sovereign Identity (SSI):

With zk-STARKs, SSI solutions provide privacy-preserving authentication and verification, enhancing security for digital identities.

zk-SNARKs and zk-STARKs relation with Zero-Knowledge Proofs (ZKPs)

Both zk-SNARKs and zk-STARKs are rooted in the principles of Zero-Knowledge Proofs (ZKPs), a fundamental cryptographic concept. ZKPs allow one party to prove knowledge of a secret without revealing the secret itself, forming the basis for zk-SNARKs and zk-STARKs.
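
To make the idea concrete, here is a minimal Python sketch of an interactive Schnorr-style zero-knowledge proof of knowledge of a discrete logarithm. It illustrates the general ZKP principle only: the parameters are tiny toy values, and production systems (including zk-SNARKs and zk-STARKs) use far more elaborate constructions.

import secrets

p = 467   # small prime, p = 2q + 1 (toy parameters, not secure)
q = 233   # prime order of the subgroup
g = 4     # generator of the order-q subgroup (4 = 2^2 mod p)

# Prover's secret x and public value y = g^x mod p
x = secrets.randbelow(q)
y = pow(g, x, p)

# 1. Prover commits to a random nonce
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Verifier issues a random challenge
c = secrets.randbelow(q)

# 3. Prover responds without revealing x
s = (r + c * x) % q

# Verifier checks g^s == t * y^c (mod p) and learns nothing about x itself
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")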

Figure: proof system relationships between ZKP, SNARK, and STARK

The Future of Privacy-Preserving Technologies

As privacy and security become more crucial in digital systems, zk-SNARKs and zk-STARKs will continue to lead the way. By integrating these tools into solutions like zk-Sync and SSI, companies like Extrimian are shaping a future where privacy and security are top priorities in digital interactions.

Stay tuned for more insights into the world of cryptography and decentralized tech with our Blog Page, and learn more about Decentralized Digital Identity and Privacy on the Extrimian Academy!

The post Differences between zk-SNARKs and zk-STARKs first appeared on Extrimian.


Shyft Network

A Guide to FATF Travel Rule Compliance in Switzerland

The minimum threshold for the FATF Travel Rule in Switzerland is transactions over 1,000 CHF. VASPs must obtain and transmit originator and beneficiary information. Originator info required includes name, account/transaction number, and address or ID details.

Switzerland, a Central European country, has a crypto-enthusiastic population. According to Statista, the number of people who own or use cryptocurrencies reached 21% in 2023, up from 10% in 2019.

Switzerland’s clear regulatory environment, focus on innovation, and economic factors also helped it bag the second spot on the Henley & Partners Crypto Adoption Index with a score of 46.9%.

When it comes to regulation, the European banking hub requires the country’s crypto sector to comply with its strict standards, which include the FATF Travel Rule, to ensure transparency, financial stability, and customer security.

Background of the Crypto Travel Rule in Switzerland

In 2019, the Swiss Financial Market Supervisory Authority, or FINMA in short, published guidance covering the FATF Travel Rule. With the latest update to the FINMA-AMLO legislation (Article 10), FINMA approved the text requiring the transmission of information on the originator and beneficiary customer along with payment orders. With this, the Crypto Travel Rule came into effect in Switzerland in 2020.

Key Features of the Travel Rule

According to FINMA guidance, the AML Act has always applied to all blockchain service providers (VASPs), including exchanges, trading platforms, and wallets. The regulator believes the blockchain’s inherent anonymity presents increased risks, requiring existing rules on combating money laundering and terrorist financing to apply to blockchain and crypto service providers.

FINMA mandates that organizations register with the regulator, implement Know Your Customer (KYC) processes, conduct extensive customer due diligence (CDD), and follow strict reporting guidelines. Providers must verify their customers’ identities, establish the identity of the beneficial owner, and take a risk-based approach to monitoring business relationships. If there is reasonable suspicion of illegal behavior, such as money laundering, providers must report to the Money Laundering Reporting Office Switzerland (MROS).

The basic idea here is to “make it more difficult for sanctioned persons or states to act anonymously in the payment transaction system.”

Compliance Requirements

In Switzerland, the FATF Travel Rule applies to transactions over CHF 1,000, down from CHF 5,000 after an amendment to FINMA's AML Ordinance in February 2020.

For the originator, the following information must be collected and verified:

- Name
- Account number or unique transaction reference number
- The physical address, date and place of birth, client identification number, or national ID number

For the beneficiary, the following information must be collected:

- Name
- Account number or unique transaction reference number

The originating VASP must ensure the originator information is complete and accurate and that the beneficiary information is complete. All information should be provided to the counterparty or authorities upon request.
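
As a purely illustrative sketch, the kind of record an originating VASP might assemble before transmission could look like the Python structure below. The field names are hypothetical and do not represent a FINMA-mandated or interVASP messaging schema.

travel_rule_payload = {
    "originator": {
        "name": "Alice Example",
        "account_or_tx_ref": "CH-ACC-123456",
        # alternatively: date and place of birth, client ID, or national ID number
        "address_or_id": "Bahnhofstrasse 1, 8001 Zurich",
    },
    "beneficiary": {
        "name": "Bob Example",
        "account_or_tx_ref": "CH-ACC-654321",
    },
    "amount_chf": 1500,   # above the 1,000 CHF threshold, so full info is required
}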

For domestic transactions, the originating VASP can provide only the originator’s name and transaction reference number as long as it can submit additional information to the beneficiary VASP and authorities within three working days.

Impact on Cryptocurrency Exchanges and Wallets

A VASP, as per the Anti Money Laundering Ordinance (AMLO), is considered to be operating professionally if any of the following applies:

- Performing transactions worth more than CHF 2 million per year;

- Achieving a gross revenue of over CHF 50,000 in a calendar year;

- Unlimited control of third-party funds surpassing CHF 5 million; or

- Having business relationships with over 20 contractual parties in a calendar year.

Now, under FINMA’s AML Act, financial intermediaries are required to obtain a license from the regulator. Different types of crypto licenses available include Exchange licenses, Fintech licenses, Investment fund licenses, and Banking licenses.

Some trading activities require ongoing monitoring by FINMA, which also applies to those who hold customers' crypto in "wallets" and manage accounts.

FINMA's Crypto Travel Rule guidance further applies to non-custodial wallets, self-hosted wallets, or, as FINMA states, external wallets. As per this, VASPs are required to verify ownership of these wallets when transacting with them.

Only after the identity of the wallet owner is verified, the beneficial owner's identity is established, and address ownership is proved via 'suitable technical means' may the VASP transact with these external or self-hosted wallets.

Global Context and Comparisons

FATF's latest report revealed that Switzerland is "Largely Compliant" when it comes to the adoption of the Crypto Travel Rule, like Germany, France, Canada, Hong Kong, Japan, the UK, and the US.

Concluding Thoughts

Switzerland, one of the top 10 economies by GDP per capita ranking, has been seeing a lot of digital innovation thanks to its clear and proactive regulations. With Zug dubbed the Crypto Valley, the country aims to spearhead fintech and blockchain development through its robust regulatory framework, including the Crypto Travel Rule that ensures security.

FAQs on Crypto Travel Rule Compliance in Switzerland

Q1: What is the minimum transaction threshold for the FATF Travel Rule in Switzerland?

The FATF Travel Rule in Switzerland applies to transactions over 1,000 CHF.

Q2: What information must VASPs collect and transmit under the FATF Travel Rule?

VASPs must collect and transmit the originator’s name, account/transaction number, address or ID details, and the beneficiary’s name and account/transaction number.

Q3: What are the compliance requirements for VASPs in Switzerland regarding the FATF Travel Rule?

VASPs must register with FINMA, implement KYC processes, conduct customer due diligence, verify customer identities, establish the beneficial owner’s identity, and report suspicious activities to the Money Laundering Reporting Office Switzerland (MROS). For domestic transactions, they must provide the originator’s name and transaction number and submit additional information within three working days if requested.

‍About Veriscope

‍Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

A Guide to FATF Travel Rule Compliance in Switzerland was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Ontology Weekly Report (May 14th — May 20th, 2024)

Ontology Weekly Report (May 14th — May 20th, 2024)

As we navigate another week at Ontology, our community continues to grow and thrive with various developments, collaborations, and discussions. Here’s a recap of our activities and updates for this past week:

Latest Developments

- Karmaverse Collaboration on Binance Live: We joined Karmaverse on Binance live, engaging in a dynamic discussion about our collaborative projects and the future of gaming in the blockchain space.
- Web3 Wonderings: This week's session covered the latest news in the crypto world. If you missed the live event, catch the recording to stay informed about the evolving landscape.
- Community Updates on X: Our community updates have moved to X! Follow us to stay abreast of our latest progress and announcements.
- Zealy on Discord: You can now find access to Zealy events under Discord events, making it easier for our community to engage and participate.

Development Progress

- Ontology EVM Trace Trading Function: Continues to progress at 87%, enhancing our trading functionalities within the EVM.
- ONT to ONTD Conversion Contract: Development is ongoing at 52%, aiming to streamline the conversion process for our users.
- ONT Leverage Staking Design: We are steadily advancing, with progress now at 37%, to provide innovative staking solutions.

Product Development

- Partnership with XSTAR: ONTO has partnered with XSTAR to drive Web3 adoption together. This partnership is expected to foster significant advancements in accessibility and user engagement.
- April Monthly Report Published: Our April monthly report is now available! Dive into a comprehensive review of last month's activities and achievements to see how ONTO is evolving.

On-Chain Activity

- dApp Ecosystem: We maintain a robust portfolio of 177 dApps on MainNet, supporting a wide array of applications and services.
- Transaction Growth: This week, the network saw an increase of 1,344 dApp-related transactions, bringing the total to 7,765,096. Overall transactions on MainNet also grew by 5,579, totaling 19,434,144.

Community Growth

- Engagement on Social Platforms: Our community remains active on both X and Telegram. Join the discussions and be a part of our growing ecosystem.
- Telegram Discussion on Interoperable DID Solutions: Led by Ontology Harbingers, this week's discussion, "Exploring Interoperable DID Solutions: Web2, Web3, and Beyond," delved into how decentralized identity can bridge the gap between traditional web services and blockchain technology, focusing on KYC, login mechanisms, and peer interactions.

Stay Connected 📱

Stay engaged with Ontology by following us on our social media channels. Your participation and feedback are crucial as we continue to push the boundaries of blockchain technology and decentralized identity.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Thank you for your continued support and engagement. Let’s keep innovating and shaping the future of Web3 together!

Ontology Weekly Report (May 14th — May 20th, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF90 Completes and DF91 Launches

Predictoor DF90 rewards available. DF91 runs May 23 — May 30, 2024. Passive DF & Volume DF are retired since airdrop 1.

Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Passive DF & Volume DF rewards are now retired. Each address holding veOCEAN was airdropped OCEAN in the amount of: (1.25^years_til_unlock-1) * num_OCEAN_locked. This airdrop completed on May 3, 2024. This article elaborates.
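
For readers who want to sanity-check that formula, here is a small Python sketch with hypothetical numbers (not anyone's actual allocation).

def airdrop_amount(num_ocean_locked: float, years_til_unlock: float) -> float:
    # OCEAN airdropped = (1.25^years_til_unlock - 1) * num_OCEAN_locked
    return (1.25 ** years_til_unlock - 1) * num_ocean_locked

print(airdrop_amount(10_000, 2.0))   # e.g. 10,000 OCEAN locked for 2 more years -> 5625.0 OCEAN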

Data Farming Round 90 (DF90) has completed.

DF91 is live today, May 23. It concludes on May 30. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF91 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF91

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF90 Completes and DF91 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 22. May 2024

KuppingerCole

A Bridge to the Future of Identity: Navigating the Current Landscape and Emerging Trends


In an era defined by digital transformation, the landscape of identity and access management (IAM) is evolving at an unprecedented pace, posing both challenges and opportunities for organizations worldwide. This webinar serves as a comprehensive exploration of the current state of the identity industry, diving into key issues such as security, compliance, and customer experience. Modern technology offers innovative solutions to address the complexities of identity management.

Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will share his perspective on the state of the IAM/digital identity market, emphasizing major trends and the implications of the merger between Ping Identity and ForgeRock.

Andre Durand, CEO of Ping Identity, will highlight early product investments, strategic steps post-merger, and the vision for the future. Drawing upon real-world examples showcased at global customer roadshows, he'll outline the trajectory of identity management and the integration of cutting-edge technologies like AI.

Join this webinar to:

- Gain insights into the current state of the IAM market and emerging trends.
- Understand the implications of the merger between Ping Identity and ForgeRock.
- Explore practical examples of convergence in identity management.
- Learn about early product investments and strategic initiatives post-merger.
- Discover the vision for the future of identity management, including the role of AI and emerging technologies.
- Obtain strategies for navigating the evolving landscape of identity and access management.
- Acquire actionable insights to optimize security, compliance, and customer experience in your organization.


Indicio

Passwordless login will soon be the norm for bank accounts & financial institutions

Passwordless login isn’t new, but, with the constant increase in data breaches from identity fraud, it is now a necessity.

By Tim Spring

The problems with passwords

Thirty percent of users have experienced a security breach due to weak passwords, and 81% of hacking-related breaches take advantage of stolen or weak passwords. We know that the username and password method of authentication has been flawed for years and cannot be fixed; and yet, it is still the most common way to access accounts online. 

The problem is that there has not been an efficient way to create and secure a user’s account at the point of creation. Know-Your-Customer (KYC) practices are done to set up a bank account, but they can vary by organization and by area. Once this information has been collected, and an account has been set up, the easiest way that organizations control access is through a username and password. 

Many organizations have chosen to continue using password security with multi-factor authentication (MFA). While this offers improved security, MFA is now under stress from attackers who have found ways to bypass it. MFA also requires additional effort from end users who have become accustomed to less friction in digital interactions, and as a result it struggles with adoption and user fatigue. 

But MFA simply underscores the problem of passwords in general: people hate them. First, we all have too many: one 2023 study found that the average person has 100 passwords. Second, if they are easy to remember, they are easy to guess; and if they are hard to guess, they are hard to remember. Given all this, 45.7% of people admit to re-using passwords across multiple websites or accounts, making them even more insecure and dangerous. 

It's time to overhaul the system. With verifiable credential technology, financial institutions do their initial KYC and issue their customers a verifiable account credential. Authentication and access are managed by cryptography: when the account holder presents their credential to log in, the bank instantly verifies that they are an account holder (and the account holder instantly verifies that they are interacting with their bank). No logins, passwords, or MFA are needed. 

The outcome of safer login methods

By removing passwords, your team becomes much more resilient toward phishing attacks, which is where a bad actor will try to get an unsuspecting person to share their login credentials by posing as a trusted party. Because of their decentralized nature, verifiable credentials cannot be lost, stolen, or copied, so you can be confident that the person presenting the credential is the person who should have it.

You will also see an increase in efficiency once passwords are removed. Employees spend 11 hours per year on average remembering or resetting passwords, which doesn't sound like much, but for an organization of 15,000 employees it could cost as much as $5.2 million in lost productivity. 

Better efficiency doesn't just mean less money lost and more productive employees; the customer experience also improves. 44% of consumers reported facing medium to high friction when engaging with their digital banking platform. This means that almost half of people trying to access online banking have some form of difficulty. Simple, quick authentication processes will make your organization stand out, keep your customers happy, and keep them coming back.

 

Passwordless login is the future of online interaction. To see an example of how this technology will soon allow you to interact with your financial institution you can see a demonstration here.

For questions about use cases or to learn more about your options for implementing decentralized identity technology you can get in touch with the Indicio team.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Passwordless login will soon be the norm for bank accounts & financial institutions appeared first on Indicio.


Ocean Protocol

Predictoor Simulator Analytics are Now Webapp-Based and Persistent

Migration from Matplotlib to Plotly & Dash enabled easier-to-use & more powerful analytics

Contents
1. Introduction
2. Evolution of Simulator Visualization
- Matplotlib -> streamlit -> plotly / dash
3. Running Simulation & Plots
4. Worked Example to Interpret Plots
- Predictoor profit, trader profit vs time
- Accuracy vs time
- Predictoor profit dist'n, trader profit dist'n
- Variable importances, model responses
- Classifier metrics: f1/recall/precision, log loss vs time

Appendices: details on classifier metrics

Summary

This post describes how we’ve improved the visualizations for Ocean Predictoor’s simulator by migrating from Matplotlib to Plotly/Dash.

It then walks through each of the plots, from predictoor/trader profit vs. time to variable importance bar plots and model response contour plots.

1. Introduction

In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. We developed tools for “predictoors” running simulations of their predictoor bots to test parameters. The simulations can now display visualizations in the browser using Dash to show bot accuracy, profit, and more. An example screen shot of some (but not all) Dash visualizations is shown below:

These simulation plots include the profits for both Predictoor bots & Trader bots and Predictoor bot accuracy. The screenshots later in the blogpost show all other plots generated by Dash.

2. Evolution of Simulator Visualization

2.1 Before: Matplotlib

Originally, we used matplotlib to generate visualizations of simulation analytics. Why plot simulations? Plots give way more insight into simulation performance than console log output alone. Why matplotlib? Matplotlib is a Python library with good documentation, so we could rapidly develop visualizations. Plus, our users are all data scientists operating in Python. But there were problems.

2.2 Limitations of Matplotlib

Despite its initial benefits, matplotlib posed several challenges:

- It occupied the entire screen and disrupted other tasks.
- Once closed, these plots could not be accessed again.
- The interactive features were basic, limiting detailed data exploration.
- It was only usable locally.

2.3 Then: Better Data Management, and Streamlit

We improved our data management. Before, data for plots was stored in memory only. So we refactored the code to store the plots on disk, via Python pickling.
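
A minimal sketch of that persistence idea is below, using pickle; the actual pdr-backend file layout and object names differ.

import pickle
from pathlib import Path

STATE_PATH = Path("sim_state/plot_data.pkl")   # hypothetical location

def save_plot_state(state: dict) -> None:
    # Persist the data behind the plots so they can be re-rendered later.
    STATE_PATH.parent.mkdir(parents=True, exist_ok=True)
    with open(STATE_PATH, "wb") as f:
        pickle.dump(state, f)

def load_plot_state() -> dict:
    with open(STATE_PATH, "rb") as f:
        return pickle.load(f)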

Then we tried out Streamlit, using the updated data management. It overcame the matplotlib limitations above, except that interactivity was still weak.

2.4 Finally: Moving to Plotly/Dash

So we tried Plotly/Dash: plotly for the core plots, Dash for the webapp. It quickly became clear that it met our needs, including interactivity. It offers:

- Advanced interactive elements for detailed data analysis.
- Web-based plots that are accessible anytime and can be shared easily.
- Compatibility with Python, allowing us to maintain rapid development.

3. Running Simulation & Plots

This section walks through how to run simulation and get live plots in a webapp.

First, begin by following the Predictoor bot README’s instructions to install the pdr-backend code, and set simulation parameters in my_ppss.yaml.

Then, kick off a simulation using this CLI command:

pdr sim my_ppss.yaml

Then, open a new terminal and get simulator plots going:

# prep
cd ~/code/pdr-backend # or wherever your pdr-backend dir is
source venv/bin/activate

# start the plots server
pdr sim_plots

The plots server will give a url, such as http://127.0.0.1:8050. Developers can open that url in their browser to see plots update in real time.

Below are screenshots of the nine simulation plots currently output by Dash. We will walk through the meanings of each plot for a Logistic Regression model example.

4. Worked Example to Interpret Plots

This section is a walk-through on a simple AI model (logistic regression) to predict ETH price on Binance.

4.0 Parameter Setup

Here is our parameter setup, via my_ppss.yaml.

- "Logistic regression" is a linear model for classification (up vs down). For our example, the model makes ETH-USDT 5m candle up/down predictions using 5000 5m candles of training data for ETH-USDT and BTC-USDT leading up to May 9, 2024.
- The model makes two-sided predictions. That is, it predicts both UP and DOWN probabilities each epoch.
- The predictoor stakes 100 OCEAN per epoch (candle). So if the model says a 60% probability of UP, it stakes 60 OCEAN for the UP prediction and 40 for DOWN.
- The total staked by all other Predictoors predicting the same pairs is 10,000 OCEAN.
- The simulation also includes trader parameters. Traders start with 100,000 USDT and 2 ETH and buy 100 USD per epoch. Traders have a 0% trading fee in simulation and can have up to 3 open positions at a time.

4.1 Predictoor Profit Vs. Time

The predictoor profit plot is the first Dash plot displayed. It updates live with each iteration of the simulation. The x-axis shows time in 5min candle intervals, and the y-axis shows a positive and increasing trend of expected predictoor profit for the input simulation parameters. So far, so good.

4.2 Trader Profit Vs. Time

While Predictoors earn OCEAN for each accurate prediction, traders earn OCEAN by buying / selling on predictions with an automated trading bot.

The plot below is trader profit vs. time. We can see that over this time interval, the trader has actually lost money. Yet we know from the accuracy vs. time plot (further below) that accuracy is >50%. What's going on?

The simple answer is: on average it loses more $ when it’s wrong, compared to making $ when it’s right.

What are ways to actually make $? Well, this is the essence of trading! Possibilities include: achieving higher accuracy; using the AI model's probability information to bound the up/down bet size; and more sophisticated strategies such as modeling the magnitude of price changes or applying the triple-barrier method.

4.3 Accuracy vs. Time, With Confidence Intervals

Predictoor’s accuracy vs. time for ETH-USDT predictions is visualized below. It also shows confidence intervals (CIs). Average accuracy is 50.66%. The blue shadow indicates 95% confidence intervals, i.e. there’s a 95% chance that the true accuracy is within the bounds given by the shadow. The CI width narrows over time as the model makes more predictions.
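
For reference, the shaded band can be reproduced with a standard binomial-proportion confidence interval. The sketch below uses a normal approximation (the simulator's exact method may differ), with counts chosen only to roughly match the numbers above.

import math

def accuracy_ci(n_correct: int, n_total: int, z: float = 1.96):
    # 95% confidence interval for a proportion, normal approximation
    p = n_correct / n_total
    half_width = z * math.sqrt(p * (1 - p) / n_total)
    return p - half_width, p + half_width

print(accuracy_ci(1646, 3250))   # ~ (0.489, 0.524), consistent with ~50.7% accuracy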

While the Predictoor bot would make more money if its stake was higher relative to those of other Predictoors, the simulated gains are healthy when accuracy is >50%.

4.4 Predictoor Profit Distribution

Predictoor profit is displayed with two lines in the plot below: one as a function of the probability that the price will go UP in the next 5min, and the other as a function of the probability that it will go DOWN. These lines appear to be inverses of each other because the Predictoor parameters specify two-sided predictions. The Predictoor profit distribution averages 0.46 OCEAN per iteration; since profit is positive across 3250 iterations, that is roughly 0.46 * 3250 = 1495 OCEAN profit, very similar to what the first plot shows apart from rounding to significant digits.

4.5 Trader Profit Distribution

This plot shows that as the logistic regression model's prediction probabilities become more certain (as they near 0 or 1 for the likelihood of prices going UP), the trader bot's gains & losses become larger. Conversely, notice how around 50% the trader bot does not profit or lose much because the bets are net neutral. At the top of the plot, the trader's profit distribution average shows -$0.01 USD, which equates to -$33 USD over the approximately 3,300 iterations since the simulation started and matches the plot in section 4.2.

4.6 Variable Importances

The variable importances bar plot shows which tokens' historical data affect the model's prediction accuracy the most. ETH-USDT historical data affects accuracy the most; the most recent candle (t-1) has the biggest impact, and the second-most-recent (t-2) has the second-biggest impact. This finding makes sense: the price one candle back is usually closer to the next candle's price than the price two candles back (see Martingale Process). BTC-USDT training data at the same lookbacks also has a decent amount of impact. Of course, the relative importances sum to 100%.

4.7 Model Response Surface (Contour Plot)

This contour plot shows how the model's probability of up-vs-down changes as a function of the most important variables. Darker blue or darker red means higher probability of UP or DOWN, respectively.

Contour plot details:

- The x-axis of this plot is the most important variable (ETH-USDT at t-1).
- The y-axis is the second-most important variable (ETH-USDT at t-2).
- The z-axis (looking up) is the model's response to these two variables, with the other two variables (BTC-USDT at t-1, t-2) fixed at their most recent values.
- The white region is equal probability of UP vs DOWN. Darker blue means higher probability of UP. Darker red means higher probability of DOWN.

There are no curved lines because the underlying model is linear and there is no calibration. A nonlinear model or calibration would make the responses nonlinear.

Scatter plot details. The plot brings additional information via scatter plot points, to show how well the model fits the training data. Blue points are for measured UP where the model also returned UP (correct). Red points are for measured DOWN where the model also returned DOWN (correct). Yellow points are where the model got it wrong.

4.8 Classifier metrics: f1/recall/precision

Earlier, we showed the plot for classifier accuracy vs. time. But accuracy doesn’t capture the full picture. Recall, precision, and f1-score help a lot:

- Recall: of all the actual "up" movements, how often did the model catch them?
- Precision: of all the predicted "up" movements, how often was it right?
- f1-score: harmonic mean of precision and recall (perfect = 1 for each)

The plot below shows all three. The Appendix below elaborates on recall, precision, and f1-score.
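
As a quick reference, here is how the three metrics are computed from a confusion matrix. The counts below are illustrative, chosen only to roughly reproduce the figures quoted in the appendix, not taken from the simulator.

def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)             # of predicted "up", fraction correct
    recall = tp / (tp + fn)                # of actual "up", fraction caught
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(660, 640, 1036))   # ~ (0.508, 0.389, 0.441)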

4.9 Classifier Metrics: Log Loss vs. Time

Log loss captures another aspect of classifier performance. It accounts for how confident the model thought it was, rather than simply being correct vs wrong (like accuracy does). The ideal log loss is 0, which would indicate perfect predictions with 100% confidence in the correct outcome every time.

A log loss of 1.0084 suggests that, on average, the model is making predictions that are significantly incorrect or it is highly confident in incorrect predictions. The logarithmic nature of the penalty in the log loss formula means that being wrong with high confidence results in a much larger penalty. The appendix has details.
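
The effect is easy to see with a small numeric sketch (made-up predictions, not the simulator's data): confidently wrong predictions are penalized far more heavily than mildly wrong ones.

import math

def log_loss(y_true, p_pred):
    # average negative log-likelihood of the true labels under the predicted probabilities
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / len(y_true)

print(log_loss([1, 0, 1], [0.9, 0.2, 0.6]))   # ~0.28: reasonably calibrated
print(log_loss([1, 0, 1], [0.1, 0.9, 0.4]))   # ~1.84: confidently wrong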

4.10 Discussion: Opportunities for Improvement

Here are example opportunities to improve the model. (There are more beyond this.)

- Enhancing Data Quality: More or better quality data might improve the model's accuracy.
- Calibrating Classification: Removing noisy or irrelevant data can help increase both precision and recall.
- Model Tuning: Adjusting model parameters or trying different algorithms might yield better results.

5. Conclusion

This post describes how we’ve improved the visualizations for Ocean Predictoor’s simulator by migrating from Matplotlib to Plotly/Dash.

It then walks through each of the plots, from predictoor/trader profit vs. time to variable importance bar plots and model response contour plots.

Appendix 1: Precision, Recall, and f1-score

Precision is the model's ability to ensure that the positives it catches are correct. In the plot shown, precision is moderate at 0.5076: roughly half of the predicted "up" movements were correct, so the model avoids some false positives but still produces many.

Recall is the model's ability to identify true positives (e.g., all instances where the model should have predicted an upward movement in cryptocurrency prices). A recall of 0.3892 is relatively low (out of 1.0), showing that a significant number of positives are missed.

What is F1 score? The F1 score is a harmonic mean of precision and recall, making it a balanced metric that considers both the precision and recall of a predictive model. It is particularly useful when the costs of false positives and false negatives are different, or when the class distribution is imbalanced.

The F1 score is defined as:
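
In standard notation, with precision and recall as described above:

F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}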

An F1 Score = 0.4406 suggests a moderate balance between precision and recall but also indicates there’s room for improvement, as the score is not very close to 1.0 (the best possible F1 score).

Costs associated with an F1 score below 0.5:

The F1 score matters where both false positives and false negatives carry a cost. Inaccuracies of either kind can lead to poor prediction and trading decisions. For example, false positives might lead to predicting/buying assets that do not increase in value, resulting in losses. False negatives might mean missing out on profitable buying opportunities.

Appendix 2: Log Loss Details

What is log loss? Log loss, also known as logistic loss, measures the quality of a classifier's probabilistic predictions. It is particularly relevant for models whose predictions are probabilistic, such as logistic regression models. Log loss quantifies how far the model's predicted probabilities are from the true class labels. Here's the formula for log loss:
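
In standard notation:

\mathrm{LogLoss} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right]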

Where:

- N is the number of observations.
- y_i is the actual class label for observation i, which can be 0 or 1.
- p_i is the predicted probability of observation i being in class 1.
- log is the natural logarithm.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable businesses and individuals to trade tokenized data assets seamlessly to manage data all along the AI model life-cycle. Ocean-powered apps include enterprise-grade data exchanges, data science competitions, and data DAOs. Follow Ocean on Twitter or TG, and chat in Discord.

In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Predictoor has over $400+ million in monthly volume, just six months after launch with a roadmap to scale foundation models globally. Follow Predictoor on Twitter.

Data Farming is Ocean’s incentives program.

Predictoor Simulator Analytics are Now Webapp-Based and Persistent was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


SC Media - Identity and Access

Western Sydney University breach tied to Office 365 environment compromise

Australia's Western Sydney University had information from nearly 7,500 students and academic staff compromised following a cyberattack against the university's Microsoft 365 environment, reports BleepingComputer.



GitHub addresses maximum severity Enterprise Server vulnerability

Updates have been issued by GitHub to remediate a maximum severity security vulnerability impacting its self-hosted software development platform GitHub Enterprise Server, tracked as CVE-2024-4985, which could be exploited to evade authentication defenses, The Hacker News reports.



Data breach impacts wireless provider Patriot Mobile

U.S. mobile virtual network operator and the country's sole "Christian conservative wireless provider" Patriot Mobile had the personal information of its subscribers reportedly exposed following a data breach, according to TechCrunch.



KuppingerCole

Passwordless Authentication for Consumers


by Alejandro Leal

Over the past few years, there has been a significant increase in the adoption of passwordless solutions in both enterprise and consumer use cases. Passwordless authentication is a term used to describe a set of identity verification solutions that remove the password from all aspects of the authentication flow, as well as from the recovery process. Passwordless solutions should therefore provide an easy and frictionless user experience, but not at the expense of security. This KuppingerCole Buyer's Compass will provide you with questions to ask vendors, criteria to select your vendor, and requirements for successful deployments.

Cloud Security vs. Secure Cloud


by Alexei Balaganski

The million-dollar security question that has been discussed to death in recent years is: why, despite the constantly evolving market of sophisticated cybersecurity solutions and ever-increasing investments in security by businesses, the number of successful cyberattacks does not seem to be decreasing? In fact, the scale and cost of an average breach only grows each year.

The answer to this question is actually quite trivial: people making executive decisions about investments in cybersecurity do not know or care much about it. Typically, this leads either to tragically understaffed and overworked security teams or, even worse, to the proliferation of the “cargo cult of cybersecurity”, when a security solution gets purchased, but never properly deployed, operated, or monitored afterward. This is especially true for public clouds, where many customers are still struggling with the notion of “shared responsibility model”. Yes, cloud service providers usually have much more sophisticated tools and better qualified experts on their security teams, but they are not responsible for securing your applications or data – it is still entirely your problem as a customer.

What can be done to break this vicious cycle? Well, the best approach would be to redesign your entire business processes to make them more resilient to cyberattacks. If you do not collect sensitive information about your customers, it won’t be leaked by a hacker. If you enforce the Zero Trust model across your networks, the chances of a ransomware attack would be substantially reduced. A somewhat less radical solution would be to outsource your cybersecurity to a third party, making it their concern. This works really well in certain scenarios, such as processing credit card transactions, for example. In a broader sense, however, employing a managed security service provider can be quite costly and still won’t guarantee anything (people still make mistakes, after all).

Would AI perhaps improve the situation? This remains to be seen – we already know that AI solutions are creating their own, entirely new kinds of cybersecurity risks. In any case, the worst enemy of security is complexity, and thus, reducing the overall complexity of your IT infrastructure and consequently simplifying and consolidating the security controls needed to protect it, should be a primary strategic goal of every digital business. Unification of technology stacks across environments, strict and consistent enforcement of declarative security policies, and intelligent automation of security operations are major factors in achieving this goal.

Choosing the right cloud provider that can help with it is an important first step on that journey. And it doesn’t have to be one of the “big three” – today, let’s take a look at another contender, Oracle Cloud Infrastructure. Being a latecomer to the cloud service market, Oracle had an opportunity to learn from its predecessors’ mistakes and design its architecture differently in several ways.

Perhaps the most significant differentiator of OCI is its unified approach towards service delivery regardless of the offered cloud model. Whether served from the public cloud, a private Cloud@Customer deployment, a dedicated commercial or government region, a sovereign cloud compliant with local regulatory frameworks, or even offered by Oracle’s partner using the Alloy platform – the services, data models, identity and security controls, and other aspects remain the same – as opposed to other providers that cannot deliver feature parity across their public and private offerings.

This alone allows for reducing the overall complexity of hybrid deployments dramatically, but combined with the ability to provide consistent security controls across those environments simplifies protection against cyberthreats even further. Moreover, Oracle places a strong focus on “secure defaults”, meaning that customers do not have to make each decision themselves, instead relying on best practices and controls that cannot be bypassed or deactivated accidentally.

Another important differentiator is maintaining an open ecosystem with not just partners and resellers, but also with 3rd party technology providers and even direct competitors. Again, as opposed to some other providers that keep their customers locked into their infrastructures with proprietary interfaces and large egress fees, Oracle strives to make their services available in other clouds and designs services around industry standards and open protocols. 

Since Oracle Database services form a major part of the company’s cloud portfolio, it is unsurprising that the company invests a lot into data protection solutions – from multiple types of encryption and data masking to numerous data security controls. This includes Oracle Data Safe, SQL Firewall, as well as more traditional tools like Audit Vault and Database Firewall. Oracle’s Autonomous Database service turns the database into a fully managed service that delegates all administrative, operational, and security controls to AI-powered automation, completely removing human mistakes as a risk factor.

Security Zones enable secure compartmentalization for customers’ resources and applications. All these controls are continuously updated and mapped across a multitude of compliance regulations for specific geographies and industries. Needless to say, these security controls are complemented by identity management, strong, passwordless authentication methods (including FIDO2 and passkeys), and access governance tools with a high degree of automation as well.

Oracle Cloud Guard provides a centralized security management and monitoring hub that has evolved from security posture management towards a complete cloud-native workload protection, threat intelligence, and security analytics platform. More importantly, however, the entire OCI cloud infrastructure has been designed from scratch to incorporate security controls at every layer, from low-level network isolation to firewalls and encryption in transit. Currently, Oracle is working on implementing the industry-wide Zero Trust Packet Routing initiative across its infrastructure to provide a truly identity-aware, data-centric, self-enforcing network security architecture.


Tokeny Solutions

Institutional RWA Tokenization Needs Permissioned Cash Coins


Stablecoins are the killer use case for the crypto space, with a market cap exceeding $160 billion according to DefiLlama. Research firm Sacra predicts that stablecoins may overtake payment giant Visa in total payment volume this quarter. However, financial institutions have been hesitant to embrace stablecoins due to regulatory concerns and the inherent volatility of the broader cryptocurrency market.

The skepticism surrounding stablecoins is not without basis. As highlighted in our 2022 newsletter, “Stablecoins Were Not Made to Go to the Moon,” the collapse of TerraUSD (UST) exposed significant vulnerabilities in certain stablecoin models. This incident underscored the need for stablecoins to be backed by credible and regulated entities.

Regulators worldwide are enhancing regulations for payment tokens, making licensed institutions like banks the most suitable cash coin issuers. Additionally, stringent controls are required to enforce KYC and AML rules. These controls will be needed throughout the full lifecycle of the token, not only when tokens are minted and redeemed as with current stablecoins. For instance, the recently introduced Lummis-Gillibrand Payment Stablecoin Act in the US, AML legislation in Europe, and Hong Kong's HKMA public consultation paper on regulating stablecoins all emphasize the necessity of conducting KYC and AML for stablecoin holders.

The future of stablecoins is likely to become permissioned, at least for regulated institutions. We expect more commercial banks to issue bank-grade digital payment coins by leveraging permissioned tokens to bring controls and restrictions that enforce compliance. As wallets don’t enable the proper identification of a user, banks will need advanced token smart contracts to enforce their duties, wherever the tokens are.

The open-source ERC-3643 permissioned token standard offers a common compliance framework suitable for both permissioned stablecoins and tokenized securities. By issuing cash and securities with the same compliance framework, we ensure both compliance and interoperability, eliminating silos.

Tokeny is at the forefront of this transformation, empowering commercial banks to issue and manage payment coins while upgrading operating systems for asset managers, investment banks, fund servicers, and distributors to bring securities seamlessly on-chain.

Step by step, we are solving the biggest issues in the institutional RWA tokenization industry: bringing cash and securities on the same ledger. This eliminates the lengthy payment process and data reconciliation using fiat rails, laying the foundation for delivering the true value of blockchain. Not only does this facilitate atomic settlement, but it also opens the door to new services like peer-to-peer secondary trading, collateralized lending, automated corporate actions, and more.

Contact us when your organization is ready to upgrade on-chain and stay ahead of your competitors.

Tokeny Spotlight

TALENT

Head of Operations, Margot Pages, celebrated her 6-month anniversary.

Read More

EVENT

CCO, Daniel Coheur, and Head of Americas, Greg Cignarella are attending DAW California.

Read More

PRODUCT NEWSLETTER

In this month’s product newsletter, we dive into our latest advancements in multi-chain tokenization capabilities.

Read More

PARTNERSHIP

We are thrilled to announce that we have joined forces with Globacap to enhance tokenized private asset distribution.

Read More

Tokeny Events

Digital Assets Week California
May 21st-22nd, 2024 | 🇺🇸 USA

Register Now

DLT & Digital Currencies 
June  4th, 2024 | 🇪🇸 Spain

Register Now

EU Commission Workshop on Asset Tokenization
June  11th, 2024 | 🇧🇪 Belgium

Register Now

Consensus 
May 29th-31st, 2024 | 🇺🇸 USA

Register Now

World Token Summit 3.0
June  11th – 12th, 2024 | 🇦🇪 UAE

Register Now

ERC3643 Association Recap

Verification with ONCHAINID and ERC-3643 

The ERC-3643 is renowned for enabling always-on compliance for asset tokenization. But how does that work? Find out the technical details in our recent post.

Learn more here

President, Dennis, Spoke at DWIC 

Sharing his insights on “The Current State Of Institutional RWA” at the Decentralized Web3 Investment Conclave (DWIC).

Read more

ERC-3643’s History Showcased 

We launched a dedicated webpage to show the development of the ERC-3643, from inception to impact.

Visit the page

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.



The post Institutional RWA Tokenization Needs Permissioned Cash Coins appeared first on Tokeny.


PingTalk

What Is Behavioral Biometrics? How Is It Used?


With the advancement of technology, it’s never been easier for cybercriminals to access compromised credentials, deploy convincing social engineering attacks, or use deep fakes to defraud and exploit organizations. 

 

Thus, the quest for more effective and efficient fraud prevention methods is ongoing. Traditional security methods can come with trade-offs, like increased friction for users or limited adaptability to new threats. But, emerging technologies like behavioral biometrics offer a dynamic approach to fraud prevention, promising a more streamlined and frictionless experience for both organizations and users.


BlueSky

Just shipped: Bluesky Direct Messages!

You can now send direct messages (DMs) to people on Bluesky! Say hi to a friend, colleague, or a crush.


These are private one-to-one messages directly within the Bluesky app. By default, your permissions allow anyone you follow to DM you. You can change these settings to allow no one or anyone to message you.

How do I send a DM?

1. Click the Chat icon. On mobile, you can find the icon at the bottom of your screen. On desktop, this is a chat bubble on the side bar (or go to https://bsky.app/messages).
2. On mobile, click the plus icon to start a new conversation. On desktop, click "New chat" in the top right to start a new conversation.
3. Search for the user you want to message.
4. Write your message, and hit send!

If the app says a user cannot be messaged, they may have set their account to only allow messages from people they follow or from no one.

DM Privacy and Safety

Who can message me?

By default, only people you follow can send you DMs. To change this, check the settings in the DM interface. You can allow DMs from no one, only people you follow, or all Bluesky users.

Set who can message you, and whether you want notification sounds.

Blocked users will not be able to DM you. Muted users are able to DM you. Additionally, you can easily block users right from within the DM feature.

Reporting DMs

You can report DMs directly to mods, who will review reported messages for Community Guidelines violations. Moderators are able to view the reported message and surrounding messages for context to assess the report. Infractions may result in temporary or permanent loss of DM privileges or even full account takedowns.

DM Privacy

In rare cases, the Bluesky moderation team may need to open your DMs to investigate broader patterns of abuse, such as spam or coordinated harassment. This would only be done when absolutely necessary to keep Bluesky safe. Access is extremely limited and tracked internally.

This first version of DMs has limited features (no images or encryption yet), but we'll be adding more safety enhancements in future updates.

Future Updates

Media in DMs: Currently, Bluesky DMs allow you to send text messages. In the future, you'll be able to send images and other forms of media!
Group DMs: In the future, you'll be able to create direct messages for groups.
Encrypted DMs: We intend to fully support end-to-end encrypted messaging down the line. Read more about our technical plans for E2EE messaging in our 2024 protocol roadmap.
Safety improvements: We'll continue iterating on anti-harassment and safety tooling for direct messages.

Tuesday, 21. May 2024

KuppingerCole

Simplifying Cloud Access Management: Strategies for Enhanced Security and Control


As organizations increasingly migrate to cloud-based environments, the complexity of managing access to these resources grows exponentially. With both human and non-human entities interacting with cloud services, the necessity for a robust control plane to ensure the integrity and security of these interactions has never been more critical.

Join experts from KuppingerCole Analysts and ARCON as they discuss the challenges facing security leaders in today’s rapidly evolving cloud landscape, and how those challenges can compromise the security and efficiency of cloud resources. They will describe the current cloud access management landscape, and explore innovative solutions and best practices for addressing vulnerabilities as well as enhancing operational efficiency and compliance.

Paul Fisher, Lead Analyst at KuppingerCole will address the challenges of over-privileged entitlements, limited access controls, cumbersome credentials management, weak identity-threat detection, and inefficient entitlement management. These issues pose risks of credential abuse, identity theft, and data breaches, emphasizing the need for robust mitigation strategies.

Harshvardhan Lale, Vice President of Business Development at ARCON will explain the importance of being able to manage all kinds of identities from all cloud platforms, and how ARCON’s centralized Cloud Governance solution can deliver Cloud Infrastructure Entitlement Management (CIEM) to reduce cloud security risks.




SC Media - Identity and Access

Microsoft’s AI ‘Recall’ feature raises security, privacy concerns

Critics say the new Copilot+ PCs’ “photographic memory” could be exploited by threat actors.



IBM Blockchain

Achieving cloud excellence and efficiency with cloud maturity models

Cloud maturity models (CMMs) are helpful tools for evaluating an organization's cloud adoption readiness and cloud security posture. Cloud adoption presents tremendous business opportunity—to the tune of USD 3 trillion—and more mature cloud postures drive greater cloud ROI and more successful digital transformations. There are many CMMs in practice and organizations need to decide which are most appropriate for their business and their needs. CMMs can be used individually, or in conjunction with one another.

Why move to the cloud?

Business leaders worldwide are asking their teams the same question: “Are we using the cloud effectively?” This quandary often comes with an accompanying worry: “Are we spending too much money on cloud computing?” Given the statistics—82% of surveyed respondents in a 2023 Statista study cited managing cloud spend as a significant challenge—it’s a legitimate concern.

Concerns around security, governance and lack of resources and expertise also top the list of respondents’ concerns. Cloud maturity models are a useful tool for addressing these concerns, grounding organizational cloud strategy and proceeding confidently in cloud adoption with a plan.

Cloud maturity models (or CMMs) are frameworks for evaluating an organization’s cloud adoption readiness on both a macro and individual service level. They help an organization assess how effectively it is using cloud services and resources and how cloud services and security can be improved.

Why move to cloud?

Organizations face increased pressure to move to the cloud in a world of real-time metrics, microservices and APIs, all of which benefit from the flexibility and scalability of cloud computing. An examination of cloud capabilities and maturity is a key component of this digital transformation, and cloud adoption presents tremendous upside. McKinsey believes it presents a USD 3 trillion opportunity, and nearly all responding cloud leaders (99%) view the cloud as the cornerstone of their digital strategy, according to a Deloitte study.

A successful cloud strategy requires a comprehensive assessment of cloud maturity. This assessment is used to identify the actions—such as upgrading legacy tech and adjusting organizational workflows—that the organization needs to take to fully realize cloud benefits and pinpoint current shortcomings. CMMs are a great tool for this assessment.

There are many CMMs in practice and organizations must decide what works best for their business needs. A good starting point for many organizations is to engage in a three-phase assessment of cloud maturity using the following models: a cloud adoption maturity model, a cloud security maturity model and a cloud-native maturity model.

Cloud adoption maturity model

This maturity model helps measure an organization’s cloud maturity in aggregate. It identifies the technologies and internal knowledge that an organization has, how suited its culture is to embrace managed services, the experience of its DevOps team, the initiatives it can begin to migrate to cloud and more. Progress along these levels is linear, so an organization must complete one stage before moving to the next stage.

Legacy: Organizations at the beginning of their journey will have no cloud-ready applications or workloads, cloud services or cloud infrastructure.
Ad hoc: Next is ad hoc maturity, which likely means the organization has begun its journey through cloud technologies like infrastructure as a service (IaaS), the lowest-level control of resources in the cloud. IaaS customers receive compute, network and storage resources on an on-demand, over-the-internet, pay-as-you-go basis.
Repeatable: Organizations at this stage have begun to make more investments in the cloud. This might include establishing a Cloud Center of Excellence (CCoE) and examining the scalability of initial cloud investments. Most importantly, the organization has now created repeatable processes for moving apps, workstreams and data to the cloud.
Optimized: Cloud environments are now working efficiently and every new use case follows the same foundation set forth by the organization.
Cloud-advanced: The organization now has most, if not all, of its workstreams on the cloud. Everything runs seamlessly and efficiently, and all stakeholders are aware of the cloud's potential to drive business objectives.

Cloud security maturity model

The optimization of security is paramount for any organization that moves to the cloud. The cloud can be more secure than on-premises data centers, thanks to robust policies and postures used by cloud providers. Prioritizing cloud security is important considering that public cloud-based breaches often take months to correct and can have serious financial and reputational consequences.

Cloud security represents a partnership between the cloud service provider (CSP) and the client. CSPs provide certifications on the security inherent in their offerings, but clients that build in the cloud can introduce misconfigurations or other issues when they build on top of the cloud infrastructure. So CSPs and clients must work together to create and maintain secure environments.

The Cloud Security Alliance, of which IBM® is a member, has a widely adopted cloud security maturity model (CSMM). The model provides a good foundation for organizations looking to better embed security into their cloud environments.

Organizations may not want or need to adopt the entire model, but can use whichever components make sense. The model’s five stages revolve around the organization’s level of security automation.

No automation: Security professionals identify and address incidents and problems manually through dashboards.
Simple SecOps: This phase includes some infrastructure-as-code (IaC) deployments and federation on some accounts.
Manually executed scripts: This phase incorporates more federation and multi-factor authentication (MFA), although most automation is still executed manually.
Guardrails: This phase includes a larger library of automation expanding into multiple account guardrails, which are high-level governance policies for the cloud environment.
Automation everywhere: This is when everything is integrated into IaC, and MFA and federation usage is pervasive.

Cloud-native maturity models

The first two maturity models refer more to an organization’s overall readiness; the cloud-native maturity model (CNMM) is used to evaluate an organization’s ability to create apps (whether built internally or through open source tooling) and workloads that are cloud-native. According to Deloitte, 87% of cloud leaders embrace cloud-native development.

As with other models, business leaders should first understand their business goals before diving into this model. These objectives will help determine what stage of maturity is necessary for the organization. Business leaders also need to look at their existing enterprise applications and decide which cloud migration strategy is most appropriate.

Most "lifted and shifted" apps can operate in a cloud environment but might not reap the full benefits of cloud. Cloud mature organizations often decide it's most effective to build cloud-native applications for their most important tools and services.

The Cloud Native Computing Foundation has put forth its own model.

Level 1 – Build: An organization is in pre-production related to one proof of concept (POC) application and currently has limited organizational support. Business leaders understand the benefits of cloud native and, though new to the technology, team members have basic technical understanding.
Level 2 – Operate: Teams are investing in training and new skills and SMEs are emerging within the organization. A DevOps practice is being developed, bringing together cloud engineers and developer groups. With this organizational change, new teams are being defined, agile project groups created and feedback and testing loops established.
Level 3 – Scale: Cloud-native strategy is now the preferred approach. Competency is growing, there is increased stakeholder buy-in and cloud-native has become a primary focus. The organization is beginning to implement shift-left policies and actively training all employees on security initiatives. This level is often characterized by a high degree of centralization and clear delineation of responsibilities; however, bottlenecks in the process emerge and velocity might decrease.
Level 4 – Improve: At level 4, the cloud is the default infrastructure for all services. There is full commitment from leadership and team focus revolves heavily around cloud cost optimization. The organization explores areas to improve and processes that can be made more efficient. Cloud expertise and responsibilities are shifting from developers to all employees through self-service tools. Multiple groups have adopted Kubernetes for deploying and managing containerized applications. With a strong, established platform, the decentralization process can begin in earnest.
Level 5 – Optimize: At this stage, the business has full trust in the technology team and employees company-wide are onboarded to the cloud-native environment. Service ownership is established and distributed to self-sufficient teams. DevOps and DevSecOps are operational, highly skilled and fully scaled. Teams are comfortable with experimentation and skilled in using data to inform business decisions. Accurate data practices boost optimization efforts and enable the organization to further adopt FinOps practices. Operations are smooth, goals outlined in the initial phase have been achieved and the organization has a flexible platform that suits its needs.

What's best for my organization?

An organization’s cloud maturity level dictates which benefits and to what degree it stands to gain from a move to the cloud. Not every organization will reach, or want to reach, the top level of maturity in each, or all, of the three models discussed here. However, it’s likely that organizations will find it difficult to compete without some level of cloud maturity, since 70% of workloads will be on the cloud by 2024, according to Gartner.

The more mature an organization’s cloud infrastructure, security and cloud-native application posture, the more the cloud becomes advantageous. With a thorough examination of current cloud capabilities and a plan to improve maturity moving forward, an organization can increase the efficiency of its cloud spend and maximize cloud benefits.

Advancing cloud maturity with IBM

Cloud migration with IBM® Instana® Observability helps set organizations up for success at each phase of the migration process (plan, migrate, run) to make sure that applications and infrastructure run smoothly and efficiently. From setting performance baselines and right-sizing infrastructure to identifying bottlenecks and monitoring the end-user experience, Instana provides several solutions that help organizations create more mature cloud environments and processes. 

However, migrating applications, infrastructure and services to cloud is not enough to drive a successful digital transformation. Organizations need an effective cloud monitoring strategy that uses robust tools to track key performance metrics—such as response time, resource utilization and error rates—to identify potential issues that could impact cloud resources and application performance.

Instana provides comprehensive, real-time visibility into the overall status of cloud environments. It enables IT teams to proactively monitor and manage cloud resources across multiple platforms, such as AWS, Microsoft Azure and Google Cloud Platform.

The IBM Turbonomic® platform proactively optimizes the delivery of compute, storage and network resources across stacks to avoid overprovisioning and increase ROI. Whether your organization is pursuing a cloud-first, hybrid cloud or multicloud strategy, the Turbonomic platform’s AI-powered automation can help contain costs while preserving performance with automatic, continuous cloud optimization.

Explore IBM Instana Observability Explore IBM Turbonomic

The post Achieving cloud excellence and efficiency with cloud maturity models appeared first on IBM Blog.


How AI-powered recruiting helps Spain’s leading soccer team score


Phrases like “striking the post” and “direct free kick outside the 18” may seem foreign if you’re not a fan of football (for Americans, see: soccer). But for a football scout, it’s the daily lexicon of the job, representing crucial language that helps assess a player’s value to a team. And now, it’s also the language spoken and understood by Scout Advisor—an innovative tool using natural language processing (NLP) and built on the IBM® watsonx™ platform especially for Spain’s Sevilla Fútbol Club. 

On any given day, a scout has several responsibilities: observing practices, talking to families of young players, taking notes on games and recording lots of follow-up paperwork. In fact, paperwork is a much more significant part of the job than one might imagine. 

As Victor Orta, Sevilla FC Sporting Director, explained at his conference during the World Football Summit in 2023: “We are never going to sign a player with data alone, but we will never do it without resorting to data either. In the end, the good player will always have good data, but then there is always the human eye, which is the one that must evaluate everything and decide.” 

Read on to learn more about IBM and Sevilla FC’s high-scoring partnership. 

Benched by paperwork 

Back in 2021, an avalanche of paperwork plagued Sevilla FC, a top-flight team based in Andalusia, Spain. With an elite scouting team featuring 20-to-25 scouts, a single player can accumulate up to 40 scout reports, requiring 200-to-300 hours of review. Overall, Sevilla FC was tasked with organizing more than 200,000 total reports on potential players—an immensely time-consuming job. 

Combining expert observation alongside the value of data remained key for the club. Scout reports look at the quantitative data of game-time minutiae, like scoring attempts, accurate pass percentages, assists, as well as qualitative data like a player’s attitude and alignment with team philosophy. At the time, Sevilla FC could efficiently access and use quantitative player data in a matter of seconds, but the process of extracting qualitative information from the database was much slower in comparison.  

In the case of Sevilla FC, using big data to recruit players had the potential to change the core business. Instead of scouts choosing players based on intuition and bias alone, they could also use statistics, and confidently make better business decisions on multi-million-dollar investments (that is, players). Not to mention, when, where and how to use said players. But harnessing that data was no easy task. 

Getting the IBM assist

Sevilla FC takes data almost as seriously as scoring goals. In 2021, the club created a dedicated data department specifically to help management make better business decisions. It has now grown to be the largest data department in European football, developing its own AI tool to help track player movements through news coverage, as well as internal ticketing solutions.  

But when it came to the massive amount of data collected by scouters, the department knew it had a challenge that would take a reliable partner. Initially, the department consulted with data scientists at the University of Sevilla to develop models to organize all their data. But soon, the club realized it would need more advanced technology. A cold call from an IBM representative was fortuitous. 

“I was contacted by [IBM Client Engineering Manager] Arturo Guerrero to know more about us and our data projects,” says Elias Zamora, Sevilla FC chief data officer. “We quickly understood there were ways to cooperate. Sevilla FC has one of the biggest scouting databases in the professional football, ready to be used in the framework of generative AI technologies. IBM had just released watsonx, its commercial generative AI and scientific data platform based on cloud. Therefore, a partnership to extract the most value from our scouting reports using AI was the right initiative.”  

Coordinating the play 

Sevilla FC connected with the IBM Client Engineering team to talk through its challenges and a plan was devised.  

Because Sevilla FC was able to clearly explain its challenges and goals—and IBM asked the right questions—the technology soon followed. The partnership determined that IBM watsonx.ai™ would be the best solution to quickly and easily sift through a massive player database using foundation models and generative AI to process prompts in natural language. Using semantic language for search provided richer results: for instance, a search for “talented winger” translated to “a talented winger is capable of taking on defenders with dribbling to create space and penetrate the opposition’s defense.”  

The solution—titled Scout Advisor—presents a curated list of players matching search criteria in a well-designed, user-friendly interface. Its technology helps unlock the entire potential of the Sevilla FC’s database, from the intangible impressions of a scout to specific data assets. 

Sevilla FC Scout Advisor UI

Scoring the goal

Scout Advisor’s pilot program went into production in January 2024, and is currently training with 200,000 existing reports. The club’s plan is to use the tool during the summer 2024 recruiting season and see results in September. So far, the reviews have been positive.   
 
“Scout Advisor has the capability to revolutionize the way we approach player recruitment,” Zamora says. “It permits the identification of players based on the opinion of football experts embedded in the scouting reports and expressed in natural language. That is, we use the technology to fully extract the value and knowledge of our scouting department.”  

And with the time saved, scouts can now concentrate on human tasks: connecting with recruits, watching games and making decisions backed by data. 

When considering the high functionality of Scout Advisor’s NLP technology, it’s natural to think about how the same technology can be applied to other sports recruiting and other functions. But one thing is certain: making better decisions about who, when and why to play a footballer has transformed the way Sevilla FC recruits.  

Says Zamora: “This is the most revolutionary technology I have seen in football.”  
 
Want to learn how watsonx technology can score goals for your team? 

See what watsonx can do

The post How AI-powered recruiting helps Spain’s leading soccer team score appeared first on IBM Blog.


KuppingerCole

AWS’s Sovereign Cloud: A Game-Changer for Europe?


by Matthias Reinwarth

The digital era demands that organizations harness the power of advanced cloud platforms while adhering to regulatory frameworks, particularly in Europe. Commercial enterprises and governmental bodies alike have shown hesitance towards cloud adoption, primarily due to the legal uncertainties highlighted by the Schrems and Schrems II rulings. These rulings have intensified concerns over data sovereignty and security, leading to increased scrutiny over where and how data is stored. To mitigate these issues, sovereign cloud services have emerged as a crucial solution, ensuring compliance without compromising on technological advancements.

What is a Sovereign Cloud?

So, what is a sovereign cloud, and how sovereign can sovereign be? Essentially, it’s a cloud environment designed to adhere to the strict data residency, privacy, and regulatory requirements of a specific country. Data is stored and managed within the country’s borders, ensuring it’s governed by local laws. This approach provides high levels of security and compliance, making it ideal for governmental and highly regulated industries. By leveraging sovereign clouds, organizations can enjoy advanced cloud services without the legal and security concerns tied to foreign data management.

Figure 1: Key principles of a sovereign cloud

The Contenders in the Sovereign Cloud Market

When it comes to meeting the high standards of sovereign cloud, Microsoft, Google, IBM, Oracle, Salesforce, Alibaba, SAP, VMware, Atos, T-Systems, and Capgemini all claim promising solutions for the European markets. Microsoft’s Cloud for Sovereignty stands out, alongside Google’s partnership with T-Systems and Oracle’s EU Sovereign Cloud. However, potential customers must verify just how sovereign these solutions truly are. It’s crucial to ensure they meet the needs of highly regulated commercial entities and governmental organizations. Remember, all that glitters may not be sovereign enough.

Enter: AWS

A few years ago, AWS wasn’t keen on the sovereign cloud concept, with their Chief Security Officer, Stephen Schmidt, referring to it as “a marketing term more than anything else.” (Source) AWS believed their existing infrastructure already met most regulatory requirements.

A few days ago, this changed dramatically: AWS is making a bold move with its European Sovereign Cloud, committing €7.8 billion to launch by the end of 2025 in Brandenburg, Germany. This cloud environment will be entirely separate from other AWS regions, ensuring all data and operations remain within the EU. By leveraging the powerful AWS Nitro System, it promises unmatched security and compliance for European customers. This significant investment also includes job creation and skills development, aiming to support highly regulated industries and government organizations in meeting stringent data sovereignty requirements.

AWS might be a latecomer to the sovereign cloud market, but their €7.8 billion investment in the AWS European Sovereign Cloud signals a significant shift in their strategy. Clearly, something has changed in their perception, pushing them to heavily invest to meet the stringent demands of European regulatory and data sovereignty needs. 

Impact on Gaia-X

What does this move by AWS mean for the Gaia-X initiative? Gaia-X itself is all about creating a federated and secure data infrastructure with a focus on data sovereignty, transparency, and interoperability with a European touch. AWS is a day-1 member of the Gaia-X group actively supporting this initiative and involved in multiple Gaia-X working groups.  AWS’s late but powerful entry into the sovereign cloud market could either support Gaia-X’s efforts by adding robust infrastructure or challenge Gaia-X’s collaborative nature with its sheer scale and resources. Will AWS complement or overshadow Gaia-X’s goals, with both approaches navigating the evolving landscape of digital sovereignty in Europe? 

Evaluating the Investment

So, what is the real value behind this massive expenditure? Does it truly address the stringent data sovereignty and compliance needs of German and EU customers? Additionally, will this substantial investment be sufficient from their perspective, given their high standards for data security and regulatory compliance? Moreover, how will government and public services perceive this development? Will they see AWS’s efforts as enough to meet their unique demands, or will there still be skepticism about the adequacy of these measures? These questions are pivotal as AWS attempts to secure its foothold in this highly regulated market.

Why now, and why with such a massive investment? Is AWS catching up with the market, or are they trying to overshadow the competition with sheer force? This Billion-Euro commitment suggests a strategic pivot: AWS might be late to the sovereign cloud game, but they’re going all-in, potentially to avoid losing market share to local providers and those already “there”. The real question is, will this investment finally deliver solutions to meet the high standards of German and EU customers by the end of 2025?

AWS's extensive plans include not only building and operating the AWS European Sovereign Cloud but also creating numerous high-skilled jobs and collaborating with local communities on innovative programs. This initiative aims to accelerate productivity, empower digital transformation, and contribute significantly to Germany's GDP. However, AWS needs to ensure they can deliver from their (actually rather late) "day one", meeting both the technical and regulatory demands of a highly competitive and scrutinized market.


SC Media - Identity and Access

GitHub, FileZilla exploited for multiple malware delivery

Sophisticated Russian threat operation GitCaught has exploited both GitHub and FileZilla to facilitate the deployment of several malicious payloads, including the Atomic macOS Stealer, or AMOS, as well as the Octo, Lumma, and Vidar information-stealing malware strains, Security Affairs reports.



Survey: IAM experts share best practices and lessons learned

According to a new survey by CyberRisk Alliance, here are the best methods for establishing strong IAM processes.



IAM survey reveals top implementation challenges

According to a new survey, here are some of the top roadblocks standing in the way of IAM implementation.



Identity security: Challenges and best practices

Survey respondents identified their greatest IAM challenges and best avenues of success.



2024 Identiverse trends report: Key findings

The following is an excerpt from the 2024 Identiverse Trends Report.



PingTalk

What Is Identity Security?


Identity security is more important than ever as cybersecurity threats continue to rise across the globe. Depending on your business model, companies may need to address the matter from two perspectives: customer identity and access management (CIAM) for those serving external customers, and workforce challenges for those who have multiple staff members (and external vendors) who need to access your digital platforms.


Aergo

Aergo Bridge Incident Report (Updated)


Summary:
7.7 million Aergo tokens were withdrawn via the Aergo Bridge service. All transactions were processed normally, with precise amounts transferred between the Aergo and ERC vaults. The bridge service has been temporarily halted by our threat detection system due to an unusually high amount of transactions. We are conducting a comprehensive review of all systems and will resume service once everything is verified.

*Both vaults and the bridge have not been compromised by any hacks and remain secure.

1. Details of the Incident
Beginning with block 19905987 at 07:11:59 on May 19, 2024 (UTC), a notably large transaction marked the start of suspicious activity. The total Aergo tokens withdrawn in these suspicious transactions amounted to 7,706,818.22.

2. Immediate actions
Following the detection of these unusual activities, we temporarily suspended the Aergo bridge service. All functionalities have been put on hold, and emergency inspections are underway to assess and rectify the situation.

3. Final Notes
We’ve temporarily halted operations on the bridge service due to a false alarm caused by functionality limitations in Argoscan, which our Fraud Detection System (FDS) relies on for monitoring bridge vaults. We are currently focused on confirming the security of the funds held in the Bridge Vault.

Upon the initial review, the issue with the Bridge Vault and associated services was not due to a security vulnerability. Instead, it stemmed from a malfunction in the Fraud Detection System (FDS) triggered by an error in the explorer. This incident has been identified as a technical error, and we are confident there are no additional security concerns.

The enhancement of the AergoScan Explorer is actively underway. We are diligently working through the development and review processes, anticipating the completion of these improvements within two weeks following the resumption of bridge operations.

Aergo Bridge Incident Report (Updated) was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 20. May 2024

KuppingerCole

From Access Management to ITDR: Market Trends Explored


In this episode of KuppingerCole Analyst Chat, host Matthias Reinwarth speaks with Marina Iantorno, a Research Analyst at KuppingerCole Analysts, about the latest market trends in Identity and Access Management (IAM) and cybersecurity for 2024. They discuss the significant growth rates in Access Management and ITDR, driven by the increasing complexity and sophistication of cyber threats.

Marina highlights the evolution of Access Management solutions to support remote workforces and the rising importance of ITDR in proactive threat detection and response. The conversation also covers the steady growth of the email security market in response to phishing and ransomware threats, as well as key strategies businesses are adopting to stay competitive in the IAM space. Finally, they explore the impact of regulatory compliance on IAM solutions and predict future trends in identity-centric security.




SC Media - Identity and Access

CyberArk acquires Venafi for $1.54B, integrating human and machine IAM

The acquisition comes as machine identities are paced to outnumber human identities 40 to 1.



WebTPA reports 2.4 million plan members had their data stolen

The large health administrative services provider says while Social Security numbers may have been stolen, the breach did not impact financial or healthcare data.



Shyft Network

FATF Travel Rule Compliance in Germany

Germany has a minimum threshold of EUR 1000 for the FATF Travel Rule, requiring identification information for crypto transactions. VASPs and crypto wallet providers must comply with stringent AML and CFT directives, including KYC and EDD requirements. Germany monitors all virtual asset transfers, including self-custody wallets, to mitigate money laundering and terrorist financing risks.

Germany is a major hub of cryptocurrency activity in Europe, with over a million participants engaging in daily crypto trading. The country has a transparent regulatory environment and open policies on blockchain technology and crypto, aligning with EU and MiCA regulations.

As a member of the Financial Action Task Force (FATF), Germany also implements anti-money laundering (AML) and counter-terrorist financing (CFT) directives, including the FATF Travel Rule.

Background of the Crypto Travel Rule in Germany

In 2021, Germany published a bill on the transfer of crypto assets and enforced the FATF Travel Rule, requiring compliance from crypto companies by October 2021. However, once the EU’s Transfer of Funds Regulation (TFR) is implemented, Germany will evaluate and potentially repeal its ordinance to ensure a coordinated approach to crypto regulation within the EU.

Key Features of the Travel Rule

In the EU, Germany is one of the first jurisdictions to authorize the Crypto Travel Rule. To ensure the traceability of fund transfers and mitigate money laundering and terrorist financing using crypto, the country’s regulators introduced the Money Transfer Regulation. Germany considers all virtual asset transfers as cross-border transfers.

Under the Travel Rule, virtual asset service providers (VASPs) and crypto wallet providers must acquire, hold, and submit certain information on crypto asset transfers, making it available to appropriate authorities when requested. Licensed financial institutions, securities firms, and credit institutions that send or receive crypto on behalf of their customers must also adhere to these obligations.

Additionally, Germany follows stringent risk-based approach (RBA) policies, including Know Your Customer (KYC) and Enhanced Due Diligence (EDD) requirements. The country’s AML rules require crypto companies to follow risk management procedures, detect suspicious transactions, implement customer due diligence (CDD) processes, and engage in continuous monitoring and reporting of suspicious business transactions to the authorities.

Compliance Requirements

Germany adheres to FATF’s recommendations for the Travel Rule, applying a minimum threshold of EUR 1000 for collecting and sharing identifying information about the originator and beneficiary of a crypto transaction.

For transactions above this limit, VASPs must obtain, send, and retain the following PII:

For the originator:

Name
Account number or unique transaction number
Date and place of birth or address, official personal document number, or customer identification number

For the beneficiary:

Name
Account number or unique transaction number

However, even for crypto transfers below EUR 1000, the name and account number or unique transaction number of both the originator and beneficiary must be recorded. All this information must be provided to the official authorities or the beneficiary VASP within three business days.
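
To make the threshold logic concrete, here is a minimal sketch in Python of how a VASP's compliance layer might decide which originator and beneficiary fields to collect for a given transfer. The field names, the function names, and the exact boundary behavior at EUR 1000 are illustrative assumptions, not wording from the FATF or BaFin texts.

```python
# Illustrative sketch only: field names and threshold handling are assumptions,
# not an official FATF or BaFin data model.
TRAVEL_RULE_THRESHOLD_EUR = 1000.0  # Germany's minimum threshold for full originator PII

def required_originator_fields(amount_eur: float) -> list[str]:
    """Fields a German VASP would need to collect about the originator."""
    fields = ["name", "account_or_unique_tx_number"]
    # Whether a transfer of exactly EUR 1000 counts as "above the limit" is an
    # assumption here; check the regulation text for the precise boundary.
    if amount_eur >= TRAVEL_RULE_THRESHOLD_EUR:
        # One of: date and place of birth, address, official personal document
        # number, or customer identification number.
        fields.append("birth_data_address_or_official_id_number")
    return fields

def required_beneficiary_fields(amount_eur: float) -> list[str]:
    # Name and account/transaction number are recorded at every amount.
    return ["name", "account_or_unique_tx_number"]

for amount in (250.0, 1_500.0):
    print(amount, required_originator_fields(amount), required_beneficiary_fields(amount))
```

The point of the sketch is simply that the beneficiary data set never shrinks, while the originator data set grows once the threshold is crossed; the three-business-day transmission deadline sits outside this logic.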

Impact on Cryptocurrency Exchanges and Wallets

For a VASP to operate in Germany and offer its services to natural persons and legal entities, it must apply for a license with BaFin, as per the amendments made to the Banking Act. Operating without such a license is a criminal offense under the German Banking Act.

When it comes to the custody of crypto, the financial regulator defines it as the custody, management, and security of crypto or the private keys used to keep, store, or transfer crypto for others. Much like a VASP, a crypto custodian also needs a license from BaFin to operate.

The regulator requires enhanced due diligence (EDD) for self-hosted or unhosted wallets. These wallet providers need to take risk-appropriate measures to mitigate money laundering and terrorist financing risks.

Global Context and Comparisons

As per FATF’s latest report on the implementation of the Crypto Travel Rule, Germany is “Largely Compliant.” This puts Germany in the same category as Canada, France, Israel, Japan, the UK, and the US, which have fully embraced the rule with proper checks and systems in place.

Germany strictly follows FATF recommendations regarding both the personal data that must be collected during transactions and the applicable threshold.

However, unlike other countries, Germany has clarified rules and requires monitoring of crypto asset transfers even when users store and manage their virtual assets themselves in a self-custody wallet. In such cases, the individual is in control of their private keys and the security of their crypto assets.

German regulators view self-managed digital asset wallets as carrying “increased risk” and consider them “a starting point for a suspicious transaction.”

Concluding Thoughts

Overall, Germany’s implementation of the Crypto Travel Rule aims to enhance the traceability and security of cryptocurrency transactions, adhering to international standards. This impacts service providers and wallet providers, ensuring compliance with anti-money laundering and counter-terrorist financing directives.

FAQs on Crypto Travel Rule in Germany

Q1: What is the minimum threshold for identifying information under Germany’s FATF Travel Rule?

Germany has a minimum threshold of EUR 1000 for the FATF Travel Rule, requiring VASPs to collect and share identifying information for crypto transactions above this amount.

Q2: What is the purpose of the Money Transfer Regulation in Germany?

The Money Transfer Regulation in Germany aims to ensure the traceability of fund transfers and mitigate money laundering and terrorist financing using cryptocurrencies.

Q3: What are the obligations of financial institutions dealing with crypto in Germany?

Financial institutions, securities firms, and credit institutions in Germany must comply with AML and CFT directives, implement risk management procedures, detect suspicious transactions, and report any suspicious activity to the authorities.

About Veriscope

‍Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

FATF Travel Rule Compliance in Germany was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Spruce Systems

Why Advancing Digital Identity is Critical to U.S. National Security

SpruceID highlights the need for advanced digital identity systems to protect critical infrastructure from sophisticated, state-sponsored attacks.

Every year, cyberattacks increase in severity and intensity as more and more of our collective human life is conducted online. Between 2021 and 2023, cyberattacks rose by 72%, the fastest rise on record. The nature of these attacks is also changing: the average reader might still think of “hacking” in terms of digital theft and petty web vandalism, but we now see frequent, sophisticated, state-sponsored cyberattacks that impact physical, real-world targets, including critical infrastructure and sensitive operations, from stealing the personal data of millions of people to taking down hospital systems.

This trend poses profound risks to the basic safety and security of U.S. citizens, and 2024 has already given us a frightening example. In January, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) told members of Congress that a China-backed hacking group known as Volt Typhoon was working to infiltrate U.S. computer networks. The group sought access to water treatment, electrical, and transportation systems, with the goal of “inciting societal panic” and compromising U.S. threat response capabilities. 

Cybersecurity is National Security

These attacks show just how impactful cyberattacks can be and how imperfectly we secure even the most critical digital systems. As CISA head Jen Easterly told legislators, “Chinese cyber actors have taken advantage of very basic flaws in our technology … We’ve made it easy on them.”

One such flaw is the lack of verifiable digital identities that control access to everything from email servers to dams and steel mills. Current identity verification is too easily spoofed because it often relies on pre-Internet authentication concepts that are impractical and insecure in today’s threat environment. More advanced verification systems are offered by just a handful of large, centralized providers, creating single points of failure, such as those exploited in the OPM data breach of 2015. On this topic, DHS Assistant Secretary for Cybersecurity and Communications Andy Ozment commented that "[if] an adversary has the credentials of a user on the network, then they can access data even if it's encrypted.” As a result, the top recommendation from the House Committee on Oversight and Reform was to move federal information security efforts toward zero trust.

To close such vulnerabilities, one tool holds particular promise: a digitally native ID system, built from the ground up to securely bridge the physical-digital gap that America’s enemies are eager to exploit. When we increase users’ and software agents’ abilities to demonstrate who they are with high assurance to a variety of different systems, we also increase security with more granular controls over who can access what. This moves us away from singular accounts that can access large troves of sensitive data, and towards a zero trust, need-to-know basis per account, closing many outstanding security gaps in nationally important systems.

Critical Infrastructure and the Changing Threat Landscape

The OPM database is by no means the only digital “crown jewel” in the US government’s IT infrastructure. In 2013, the White House identified 16 categories of “critical infrastructure” fundamental to national security. These include three broad categories of government and civilian systems: Communications and Data; Transportation and Energy; and Public Health and Emergency Response. 

All three of those categories have been targeted by sophisticated cyberattacks, some of which can grant attackers long-term access to systems. This was the goal of the notorious Russian-backed SolarWinds attack, and of the more recent Volt Typhoon incursions.

The compromise of these systems makes America vulnerable to a vast range of threats, from subtle to dramatic. The U.S. itself pioneered the subtler sort of attack with Stuxnet, a worm that altered control software and destroyed physical centrifuges Iran used to enrich uranium. America’s enemies might use a similar playbook to alter automated systems in a defense manufacturing facility, leading it to produce inoperable weapons. More acute and targeted cyberattacks could also cripple hospital operations during a disaster, or disrupt hydroelectric dams and interrupt power to millions – or even unleash devastating flooding.

These growing vulnerabilities are the product of a mix of technological and social changes. An increasing number of digital systems are built to allow remote access, in some cases as a product of the shift towards working from home. Even more widespread is reliance on remote “cloud computing” for data or processing. When they’re compromised, those remote services can give intruders major access to critical data and operations.

At the same time, the current confusion of access and identity systems makes these systems easier for bad actors to penetrate. Many digital identity systems rely on analog infrastructure, particularly drivers’ licenses, as their ultimate source of truth. But those pre-digital systems don’t carry all of their security guarantees into the digital world.

More advanced digital identity systems overwhelmingly rely on a very small number of commercial providers, creating a huge concentration of risk. In a 2023 incident, for instance, China-based attackers compromised Microsoft email systems by forging identity authentication tokens using a stolen “signing key.”

Microsoft has generally excelled in its role as an identity manager for critical systems. But its centralized control also makes hacks of truly immense scale more of a threat. As Adam Meyers of security firm Crowdstrike has said, “having one monolithic vendor that is responsible for all of your technology, products, services and security ... can end in disaster.”

From Bits to Atoms: Digital Identity in the Physical World

Many possible cyberattacks rely on identity credentials to gain initial access to systems: According to Verizon, 91% of phishing attacks seek to compromise identity credentials. In turn, 81% of data breaches make use of stolen identity credentials.

In our day-to-day lives, we frequently rely on outdated authentication methods, such as simple username-password combinations that can be easily stolen. Many would likely be stunned by how many critical systems have fewer protections than their Amazon accounts.

But what if user identities could not be remotely compromised because proof of identity was linked to physical objects, rather than only clonable strings of characters or spoofable digital tokens? This is one of the many security features provided by Verifiable Digital Credentials (VDC), which can combine cryptography and modern hardware innovations to create vastly more secure digital credentials than the current baseline.

Fundamentally, a Verifiable Digital Credential is the digitally native version of your driver’s license/identification card, professional license, or certifications, which contains verifiable cryptographic signatures from the issuer of the credential, making it provably authentic and tamper-evident.

The “cryptographic signature” isn’t just a picture of a handwritten signature, nor the cursive letters that show up in DocuSign or similar services. Instead, it’s machine-checkable evidence that a statement was made by the right entity, such as when the DMV uses a cryptographic signature to indicate that the holder of a mobile driver’s license can operate a vehicle. Unlike pictures of handwritten signatures or scans of plastic ID cards, cryptographic signatures cannot be feasibly generated by AI and are designed to be future-proof, even against quantum computers. 
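
As a rough illustration of what a machine-checkable signature looks like in practice, the sketch below signs a small credential payload with an Ed25519 key and verifies it, using the widely available `cryptography` Python package. This is a toy example of digital signing in general, not the actual W3C Verifiable Credentials or ISO mobile driver's license formats, which define their own data models and proof mechanisms; the credential fields and identifiers are invented for illustration.

```python
# Toy illustration of a cryptographic signature over a credential payload.
# Real Verifiable Digital Credentials (W3C VC, ISO mDL) define richer data
# models and proof formats; this only shows the sign/verify principle.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuer (e.g., a DMV) holds the private key; verifiers hold the public key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

credential = {
    "type": "MobileDriversLicense",      # hypothetical credential type
    "holder": "did:example:alice",       # hypothetical holder identifier
    "claims": {"can_operate_vehicle": True},
}
payload = json.dumps(credential, sort_keys=True).encode()

signature = issuer_key.sign(payload)     # produced once by the issuer

# A verifier checks the payload really came from the issuer and is untampered.
try:
    issuer_public_key.verify(signature, payload)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")
```

Flipping a single byte of the payload or the signature makes verification fail, which is what makes the credential tamper-evident rather than merely hard to copy.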

Through these new security features, digital IDs can be built from the start to assure the safety of the physical-digital frontier where many compromises occur, while ensuring privacy and user control. Breaking the protections of the secure element of a cell phone or key fob requires a high level of sophistication, such as using an electron microscope directly on the physical chip, which is a long shot for most attackers.

Requiring a specific device to access a digital service can be used to ensure a particular person’s physical presence – for instance, in front of the control panel of a hydroelectric dam – rather than a simple badge. This is known as an “authentication factor” in guidance from NIST, and is a widely adopted requirement to layer into security programs across federal, state, and private sector systems. Digital IDs can provide many additional improvements to identity authentication and assurance in one package.

Security Across Digital Borders

The U.S. has over 430 federal agencies, many hundreds of state agencies, and more than 200,000 government contracting firms, each with their own IT systems, technology stacks, and personnel. Significant cybersecurity risks lie at the edges of these systems: Anything from a missing USB stick to an expired contractor access card can compromise our national security. 

Verifiable Digital Credentials (VDCs) can help with security at the edges by providing robust ways for users and software agents to demonstrate who they are and their privileges across any environment. This is a key enabler for zero trust architectures, which allow for the granular specification of a user’s access rights, rather than “trusting” them after logging in once. VDCs can enable zero-trust interoperability because they are based on data formats and sharing protocols currently being refined by global standards organizations, such as NIST, the International Organization for Standardization (ISO), the Internet Engineering Task Force (IETF), and the World Wide Web Consortium (W3C).

These standards allow many different government agencies and bodies to provision verifiable digital credentials tailored to their needs, which incorporate baseline privacy and security features. They are also usable across different agencies without the need to create a new IT super-authority.
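
To show what "granular specification of a user's access rights" might look like when every request is checked against credential claims rather than a one-time login, here is a minimal, hypothetical policy check. The claim names, resource labels and default-deny rule are invented for illustration and are not drawn from any of the standards mentioned above.

```python
# Hypothetical zero-trust style check: every request is evaluated against the
# claims presented in an already-verified credential, rather than a session
# established by a single login. Claim and resource names are illustrative.
from typing import Mapping

ACCESS_POLICY = {
    # resource -> claim that must be present and true in the presented credential
    "dam-control-panel": "authorized_dam_operator",
    "patient-records": "licensed_clinician",
}

def is_authorized(resource: str, verified_claims: Mapping[str, bool]) -> bool:
    required_claim = ACCESS_POLICY.get(resource)
    if required_claim is None:
        return False  # default deny: unknown resources are not accessible
    return bool(verified_claims.get(required_claim, False))

print(is_authorized("dam-control-panel", {"authorized_dam_operator": True}))  # True
print(is_authorized("patient-records", {"authorized_dam_operator": True}))    # False
```

The design choice worth noticing is the default deny: possessing a valid credential grants nothing by itself; each resource names the specific claim it requires.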

Technology that Incorporates Market-Based Innovation and Democratic Values

VDCs can be customized for specific use cases to provide better user experiences and security properties. It is possible for many vendors to implement VDC solutions in parallel that can talk to each other, reducing vendor lock-in and increasing IT agility. In contrast to relying upon one monolithic IT system, a dynamic ecosystem of vendors each specializing in their own use cases can allow agencies to leverage competitive market forces to provide the best solutions at the lowest cost to the taxpayer.

For example, while one firm may specialize in VDCs for physical building access, another might create an identity for software packages to help prevent the next SolarWinds catastrophe. Agencies responsible for our national security will be able to pick the best-in-breed solutions for their specific problems at competitive price points.

The use of open protocols to structure a competitive market for verifiable digital credentials can create an industry that boosts domestic cybersecurity strength. These solutions can also extend to private sector use cases, ensuring a strong commercial base, as was successful for the microchip industry.

Further, when security and privacy are incorporated at the shared protocol level, all implementations can start with the same baselines for ensuring against cyberattacks, unlawful surveillance, and the creation of risky data honeypots.

This architecture can work globally across public and private sectors to increase security, protecting against cyberattacks. It can ship with base technology that encourages individual freedoms while proving of little to no value to autocratic regimes. For instance, enshrining user privacy in the foundational technical standards could render the implementing systems unusable for mass surveillance of private activity, or the implementation of “social credit” systems.

By fostering an industry of best-in-class digital identity technologies, America can retain a leadership position in the global development of digital identity infrastructure to support the security across all of its critical infrastructure sectors and beyond.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


IBM Blockchain

The history of the central processing unit (CPU)

Learn about the background and history of the central processing unit (CPU). The post The history of the central processing unit (CPU) appeared first on IBM Blog.

The central processing unit (CPU) is the computer’s brain. It handles the assignment and processing of tasks, in addition to functions that make a computer run.

There’s no way to overstate the importance of the CPU to computing. Virtually all computer systems contain, at the least, some type of basic CPU. Regardless of whether they’re used in personal computers (PCs), laptops, tablets, smartphones or even in supercomputers whose output is so strong it must be measured in floating-point operations per second, CPUs are the one piece of equipment on computers that can’t be sacrificed. No matter what technological advancements occur, the truth remains—if you remove the CPU, you simply no longer have a computer.

In addition to managing computer activity, CPUs help enable and stabilize the push-and-pull relationship that exists between data storage and memory. The CPU serves as the intermediary, interacting with the primary storage (or main memory) when it needs to access data from the operating system’s random-access memory (RAM). On the other hand, read-only memory (ROM) is built for permanent and typically long-term data storage.

CPU components

Modern CPUs in electronic computers usually contain the following components:

Control unit: Contains intensive circuitry that leads the computer system by issuing a system of electrical pulses and instructs the system to carry out high-level computer instructions.
Arithmetic/logic unit (ALU): Executes all arithmetic and logical operations, including math equations and logic-based comparisons that are tied to specific computer actions.
Memory unit: Manages memory usage and flow of data between RAM and the CPU. Also supervises the handling of the cache memory.
Cache: Contains areas of memory built into a CPU’s processor chip to reach data retrieval speeds even faster than RAM can achieve.
Registers: Provides built-in permanent memory for constant, repeated data needs that must be handled regularly and immediately.
Clock: Manages the CPU’s circuitry by transmitting electrical pulses. The delivery rate of those pulses is referred to as clock speed, measured in Hertz (Hz) or megahertz (MHz).
Instruction register and pointer: Displays location of the next instruction set to be executed by the CPU.
Buses: Ensures proper data transfer and data flow between the components of a computer system.

How do CPUs work?

CPUs function by using a type of repeated command cycle that is administered by the control unit in association with the computer clock, which provides synchronization assistance.

The work a CPU does occurs according to an established cycle, called the CPU instruction cycle, in which the basic computing instructions are repeated as many times per second as the computer’s processing power allows.

The basic computing instructions include the following:

Fetch: Fetches occur anytime data is retrieved from memory.
Decode: The decoder within the CPU translates binary instructions into electrical signals that engage with other parts of the CPU.
Execute: Execution occurs when computers interpret and carry out a computer program’s set of instructions.
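
A toy interpreter makes the fetch-decode-execute loop easier to picture. The instruction format below is invented for illustration and is far simpler than any real CPU's instruction set; it is a sketch of the cycle, not of actual hardware.

```python
# Toy fetch-decode-execute loop. The instruction encoding is invented for
# illustration and does not correspond to any real CPU architecture.
program = [
    ("LOAD", "A", 5),     # put the constant 5 into register A
    ("LOAD", "B", 7),     # put the constant 7 into register B
    ("ADD", "A", "B"),    # A = A + B
    ("HALT",),
]

registers = {"A": 0, "B": 0}
pc = 0  # program counter: the "instruction pointer" of this toy CPU

while True:
    instruction = program[pc]      # fetch: retrieve the next instruction
    opcode = instruction[0]        # decode: work out what operation it encodes
    pc += 1
    if opcode == "LOAD":           # execute: carry the operation out
        _, reg, value = instruction
        registers[reg] = value
    elif opcode == "ADD":
        _, dst, src = instruction
        registers[dst] += registers[src]
    elif opcode == "HALT":
        break

print(registers)  # {'A': 12, 'B': 7}
```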

With some basic tinkering, the computer clock within a CPU can be manipulated to keep time faster than it normally elapses. Some users do this to run their computer at higher speeds. However, this practice (“overclocking”) is not advisable since it can cause computer parts to wear out earlier than normal and can even violate CPU manufacturer warranties.

Processing styles are also subject to tweaking. One way to manipulate those is by implementing instruction pipelining, which seeks to instill instruction-level parallelism in a single processor. The goal of pipelining is to keep each part of the processor engaged by splitting up incoming computer instructions and spreading them out evenly among processor units. Instructions are broken down into smaller sets of instructions or steps.

Another method for achieving instruction-level parallelism inside a single processor is to use a CPU called a superscalar processor. Whereas scalar processors can execute a maximum of one instruction per clock cycle, a superscalar processor can dispatch several instructions in the same cycle. It sends multiple instructions to several of the processor’s execution units, thereby boosting throughput.
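
The difference between scalar and superscalar dispatch can be sketched with a simple cycle count: if a processor can issue a fixed number of independent instructions per clock cycle, the same workload finishes in proportionally fewer cycles. This is a back-of-the-envelope model that deliberately ignores data dependencies, branch mispredictions and pipeline stalls.

```python
# Back-of-the-envelope model: cycles needed to issue N independent instructions
# at a given issue width. Real processors are limited by data dependencies,
# branch mispredictions and pipeline hazards, which this ignores.
import math

def cycles_to_issue(num_instructions: int, issue_width: int) -> int:
    return math.ceil(num_instructions / issue_width)

workload = 1_000_000
print("scalar (1 per cycle):     ", cycles_to_issue(workload, 1))
print("superscalar (4 per cycle):", cycles_to_issue(workload, 4))
```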

Who invented the CPU?

Breakthrough technologies often have more than one parent. The more complex and earth-shaking the technology, the more individuals are usually responsible for its birth.

In the case of the CPU—one of history’s most important inventions—we’re really talking about who discovered the computer itself.

Anthropologists use the term “independent invention” to describe situations where different individuals, who may be located countries away from each other and in relative isolation, each come up with what are similar or complementary ideas or inventions without knowing about similar experiments taking place.

In the case of the CPU (or computer), independent invention has occurred repeatedly, leading to different evolutionary shifts during CPU history.

Twin giants of computing

While this article can’t honor all the early pioneers of computing, there are two people whose lives and work need to be illuminated. Both had a direct connection to computing and the CPU:

Grace Hopper: Saluting “Grandma COBOL”

American Grace Brewster Hopper (1906-1992) weighed a mere 105 pounds when she enlisted in the US Navy—15 pounds under the required weight limit. And in one of US maritime history’s wisest decisions, the Navy gave an exemption and took her anyway.

What Grace Hopper lacked in physical size, she made up for with energy and versatile brilliance. She was a polymath of the first order: a gifted mathematician armed with twin Ph.D. degrees from Yale University in both mathematics and mathematical physics, a noted professor of mathematics at Vassar College, a pioneering computer scientist credited with writing a computer language and authoring the first computer manual, and a naval commander (at a time when women rarely rose above administrative roles in the military).

Because of her work on leading computer projects of her time, such as the development of the UNIVAC supercomputer after WWII, Hopper always seemed in the thick of the action, always at the right place at the right time. She had personally witnessed much of modern computing history. She was the person who originally coined the term “computer bug,” describing an actual moth that had become caught within a piece of computing equipment. (The original moth remains on display at the Smithsonian Institution’s National Museum of American History in Washington, DC.)

During her experience working on the UNIVAC project (and later running the UNIVAC project for the Remington Rand Corporation), Hopper became frustrated that there was not a simpler programming language that could be used. So, she set about writing her own programming language, which famously came to be known as COBOL (an acronym for COmmon Business-Oriented Language).

Robert Noyce: The Mayor of Silicon Valley

Robert Noyce was a mover and shaker in the classic business sense—a person who could make amazing activity start happening just by showing up.

American Robert Noyce (1927-1990) was a whiz-kid boy inventor. He later channeled his intellectual curiosity into his undergrad collegiate work, especially after being shown two of the original transistors created by Bell Laboratories. By age 26, Noyce earned a Ph.D. in Physics from the Massachusetts Institute of Technology (MIT).

In 1959, he followed up on Jack Kilby’s 1958 invention of the first hybrid integrated circuit by making substantial tweaks to the original design. Noyce’s improvements led to a new kind of integrated circuit: the monolithic integrated circuit (also called the microchip), which was fabricated using silicon. Soon the silicon chip became a revelation, changing industries and shaping society in new ways.

Noyce co-founded two hugely successful corporations during his business career: Fairchild Semiconductor Corporation (1957) and Intel (1968). He was the first CEO of Intel, which is still known globally for manufacturing processing chips.

His partner in both endeavors was Gordon Moore, who became famous for a prediction about the semiconductor industry that proved so reliable it has seemed almost like an algorithm. Called “Moore’s Law,” it posited that the number of transistors to be used within an integrated circuit reliably doubles about every two years.
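
Moore's Law is easy to state as arithmetic: doubling every two years means transistor counts grow by a factor of two raised to the number of years divided by two. The sketch below projects from an illustrative starting point; the starting figure is roughly the Intel 4004's transistor count, and the strict two-year doubling is the idealized assumption, not measured history.

```python
# Moore's Law as arithmetic: transistor count doubles roughly every two years.
# The starting count and time span are illustrative, not historical data points.
def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 2.0) -> float:
    return start_count * 2 ** (years / doubling_period_years)

# e.g., starting from about 2,300 transistors (roughly the Intel 4004's count),
# project forward 20 years under a strict two-year doubling.
print(f"{projected_transistors(2_300, 20):,.0f}")  # about 2.4 million
```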

While Noyce oversaw Intel, the company produced the Intel 4004, now recognized as the chip that launched the microprocessor revolution of the 1970s. The creation of the Intel 4004 involved a three-way collaboration between Intel’s Ted Hoff, Stanley Mazor and Federico Faggin, and it became the first microprocessor ever offered commercially.

Late in his tenure, the company also produced the Intel 8080—the company’s second 8-bit microprocessor, which first appeared in April 1974. Within a couple of years of that, the manufacturer was rolling out the Intel 8086, a 16-bit microprocessor.

During his illustrious career, Robert Noyce amassed 12 patents for various creations and was honored by three different US presidents for his work on integrated circuits and the massive global impact they had.

ENIAC: Marching off to war

It seems overly dramatic, but in 1943, the fate of the world truly was hanging in the balance. The outcome of World War II (1939-1945) was still very much undecided, and both Allied and Axis forces were eagerly scouting any kind of technological advantage to gain leverage over the enemy.

Computer devices were still in their infancy when a project as monumental in its way as the Manhattan Project was created. The US government hired a group of engineers from the Moore School of Electrical Engineering at the University of Pennsylvania. The mission called upon them to build an electronic computer capable of calculating yardage amounts for artillery-range tables.

The project was led by John Mauchly and J. Presper Eckert, Jr. at the military’s request. Work began on the project in early 1943 and didn’t end until 3 years later.

The creation produced by the project—dubbed ENIAC, which stood for “Electronic Numerical Integrator and Computer”—was a massive installation requiring 1,500 sq. ft. of floor space, not to mention 17,000 glass vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches and 1,500 relays. In 2024 currency, the project would have cost USD 6.7 million.

It could process up to 5,000 equations per second (depending on the equation), an amazing quantity as seen from that historical vantage point. The ENIAC was so large that people could stand inside the CPU and program the machine by rewiring connections between its functional units.

ENIAC was used by the US Army during the rest of WWII. But when that conflict ended, the Cold War began and ENIAC was given new marching orders. This time it would perform calculations that would help enable the building of a bomb with more than a thousand times the explosive force of the atomic weapons that ended WWII: the hydrogen bomb.

UNIVAC: Getting back to business

Following WWII, the two leaders of the ENIAC project decided to set up shop and bring computing to American business. The newly dubbed Eckert-Mauchly Computer Corporation (EMCC) set out to prepare its flagship product—a smaller and cheaper version of the ENIAC, with various improvements like added tape drives, a keyboard and a converter device that accepted punch-card use.

Though sleeker than the ENIAC, the UNIVAC that was unveiled to the public in 1951 was still mammoth, weighing over 8 tons and using 125 kW of energy. And it was still expensive: around USD 11.6 million in today’s money.

For its processor, it contained the first CPU—the UNIVAC 1103—which was developed at the same time as the rest of the project. The UNIVAC 1103 used glass vacuum tubes, making the CPU large, unwieldy and slow.

The original batch of UNIVAC 1s was limited to a run of 11 machines, meaning that only the biggest, best-funded and best-connected companies or government agencies could gain access to a UNIVAC. Nearly half of those were US defense agencies, like the US Air Force and the Central Intelligence Agency (CIA). The very first model was purchased by the U.S. Census Bureau.

CBS News had one of the machines and famously used it to correctly predict the outcome of the 1952 US Presidential election, against long-shot odds. It was a bold publicity stunt that introduced the American public to the wonders that computers could do.

Transistors: Going big by going small

As computing became more widely adopted and celebrated, its main weakness was clear. CPUs had an ongoing issue with the vacuum tubes being used. It was really a mechanical issue: glass vacuum tubes were extremely delicate and prone to routine breakage.

The problem was so pronounced that the manufacturer went to great lengths to provide a workaround solution for its many agitated customers, whose computers stopped dead without working tubes.

The manufacturer of the tubes regularly tested tubes at the factory, subjecting tubes to different amounts of factory use and abuse, before selecting the “toughest” tubes out of those batches to be held in reserve and at the ready for emergency customer requests.

The other problem with the vacuum tubes in CPUs involved the size of the computing machine itself. The tubes were bulky and designers were craving a way to get the processing power of the tube from a much smaller device.

By 1953, a research student at the University of Manchester showed you could construct a completely transistor-based computer.

Original transistors were hard to work with, in large part because they were crafted from germanium, a substance which was tricky to purify and had to be kept within a precise temperature range.

Bell Laboratory scientists started experimenting with other substances in 1954, including silicon. Two Bell scientists, Mohamed Atalla and Dawon Kahng, kept refining their use of silicon and by 1960 had hit upon a working design for the modern metal-oxide-semiconductor field-effect transistor (MOSFET, or MOS transistor), which the Computer History Museum has celebrated as the “most widely manufactured device in history.” In 2018 it was estimated that 13 sextillion MOS transistors had been manufactured.

The advent of the microprocessor

The quest for miniaturization continued until computer scientists created a CPU so small that it could be contained within a small integrated circuit chip, called the microprocessor.

Microprocessors are designated by the number of cores they support. A CPU core is the “brain within the brain,” serving as the physical processing unit within a CPU. Microprocessors can contain multiple cores. Each physical core is a processing unit built into the chip; because the cores share a single socket, they can all tap into the same computing environment.

Here are some of the other main terms used in relation to microprocessors:

Single-core processors: Single-core processors contain a single processing unit. They are typically marked by slower performance, run on a single thread and perform the CPU instruction cycle one at a time.
Dual-core processors: Dual-core processors are equipped with two processing units contained within one integrated circuit. Both cores run at the same time, effectively doubling performance rates.
Quad-core processors: Quad-core processors contain four processing units within a single integrated circuit. All cores run simultaneously, quadrupling performance rates.
Multi-core processors: Multi-core processors are integrated circuits equipped with at least two processor cores, so they can deliver supreme performance and optimized power consumption.

Leading CPU manufacturers

Several companies now create products that support CPUs through different brand lines. However, this market niche has changed dramatically, given that it formerly attracted numerous players, including plenty of mainstream manufacturers (e.g., Motorola). Now there are really just two main players: Intel and AMD.

They use differing instruction set architectures (ISAs). So, while AMD processors take their cues from Reduced Instruction Set Computer (RISC) architecture, Intel processors follow a Complex Instruction Set Computer (CISC) architecture.

Advanced Micro Devices (AMD): AMD sells processors and microprocessors through two product types: CPUs and APUs (which stands for accelerated processing units). In this case, APUs are simply CPUs that have been equipped with proprietary Radeon graphics. AMD’s Ryzen processors are high-speed, high-performance microprocessors intended for the video-game market. Athlon processors were formerly considered AMD’s high-end line, but AMD now uses the brand as a general-purpose alternative.
Arm: Arm doesn’t actually manufacture equipment, but does lease out its valued processor designs and/or other proprietary technologies to other companies who make equipment. Apple, for example, no longer uses Intel chips in Mac CPUs, but makes its own customized processors based on Arm designs. Other companies are following suit.
Intel: Intel sells processors and microprocessors through four product lines. Its premium line is Intel Core, including processor models like the Core i3. Intel’s Xeon processors are marketed toward offices and businesses. Intel’s Celeron and Intel Pentium lines (represented by models like the Pentium 4 single-core CPUs) are considered slower and less powerful than the Core line.

Understanding the dependable role of CPUs

When considering CPUs, we can think about the various components that CPUs contain and use. We can also contemplate how CPU design has moved from its early super-sized experiments to its modern period of miniaturization.

But despite any transformations to its dimensions or appearance, the CPU remains steadfastly itself, still on the job—because it’s so good at its particular job. You know you can trust it to work correctly, each time out.

Smart computing depends upon having proper equipment you can rely upon. IBM builds its servers strong, to withstand any problems the modern workplace can throw at them. Find the IBM servers you need to get the results your organization relies upon.

Explore IBM servers

The post The history of the central processing unit (CPU) appeared first on IBM Blog.


SC Media - Identity and Access

Cyberattack disrupts American Radio Relay League

BleepingComputer reports that operations at the American Radio Relay League have been interrupted following a cyberattack against its IT systems that impacted its email and various online services.


Over 2.4M affected by WebTPA breach

Health plan and insurer administrative services provider WebTPA had data from more than 2.4 million individuals compromised following a security breach last April, reports The Record, a news site by cybersecurity firm Recorded Future.


IDnow

5 things gambling operators need to know before Euro 2024.

Major sporting events offer huge opportunities and risks for gaming operators and players. Here, we share the top five things that operators need to be aware of during Euro 2024.

Whether you’re a football fan or not, the Euros (UEFA European Football Championship) is a monumental global occasion that, for the month it lasts, captivates the world. 

Starting in mid-June and running until mid-July, Euro 2024 will see 24 teams compete in a series of games in cities throughout Germany to be crowned the best football team in Europe. 

Major sporting events like the Euros and the World Cup are often a time for friends and family to congregate and perhaps bet on the outcomes of games, either with each other or via increasingly popular online gaming platforms.

Surge in [legal and illegal] gambling activity predicted.

As global sporting events have such broad appeal, they tend to attract both casual and regular bettors. Plus, due to the convenience and accessibility that online platforms now offer, betting during these periods is only getting more popular. Indeed, betting during the 2022 World Cup increased by 13% compared to the 2018 World Cup. Worryingly, the surge in gambling activity was seen on both regulated and unregulated sites. For example, in the UK alone, 250,000 people visited unregulated, black-market sites during the World Cup compared to just 80,000 during the same timeframe of the previous year. 

First-time players may be lured to unregulated platforms by ‘too good to be true’ bonus offers. However, few are aware that these platforms do not offer consumer protection measures and come with a significantly higher risk of financial fraud or data breaches. 

In fact, gambling on the black market poses problems for both the player and the business. Discover the reasons [and dangers] of why some players decide to forgo KYC and identity verification checks in favor of unregulated platforms in our blog, ‘In the grey: So, how exactly does online gambling work in Brazil?’ and ‘Exploring black market gambling in Germany.’

Gambling regulations 101: Europe and the UK. Discover how gaming operators can keep pace with the ever-growing multi-jurisdictional landscape in Europe and beyond.

Challenges of operating during Euro 2024.

Although they should be concerns for operators at any time of year, there is an increased likelihood of bonus abuse, multi-accounting/gnoming, underage gambling, money laundering and fake documentation during sign-ups throughout the Euro 2024 tournament. Here are the top five things that gambling operators need to be aware of during Euro 2024. 

1. Increase in site traffic is often used by fraudsters as a smoke screen for nefarious activities. With these sudden surges of traffic, it can be considerably harder for operators to find anomalies, which could be indicative of fraud (see the sketch after this list). In addition to the above, fraudsters may conduct identity theft, or attempt account takeovers by launching phishing attacks and cyberattacks on unsuspecting users. There is also an increased likelihood of cross-border transaction attempts by bots.
2. Unsecured public Wi-Fi networks can pose a security risk for gaming and gambling operators when players log in to place bets. Such networks are susceptible to interception by hackers, leading to unauthorized access to user accounts, exposure of sensitive personal information and potential financial losses for both users and operators.
3. Rising rates of gambling fraud, with chargeback fraud a particularly common issue. While identity verification solutions can help verify the identity of users and detect fraudulent activity, preventing chargeback fraud requires additional measures such as transaction monitoring and collaboration with payment processors to identify and block suspicious transactions in real time.
4. Loss of market share. Increased competition from illegal or unlicensed operators – many of which may offer superior UX or sign-up bonuses, or offer registration without KYC – poses challenges to regulated platforms. How to capture new customers and safeguard against potentially losing existing customers should be top of mind for all operators.
5. Heightened regulatory scrutiny and enforcement to ensure operators are doing everything possible to protect end users during these times of increased betting. Those who are found to have inadequate KYC, AML and age verification controls in place may be subjected to fines.
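
As a concrete illustration of the first point above, operators typically baseline normal traffic and flag statistically unusual spikes rather than absolute volumes. The sketch below applies a simple z-score to hourly sign-up counts; the data, the threshold and the function name are invented for illustration, and a production fraud pipeline would combine many more signals (device fingerprints, velocity checks, payment patterns, document verification).

```python
# Illustrative anomaly flagging on hourly sign-up counts using a z-score.
# Thresholds and data are invented; production fraud detection combines many
# more signals than raw volume.
from statistics import mean, stdev

hourly_signups = [120, 135, 128, 140, 132, 900, 138, 125]  # one suspicious spike

def flag_anomalies(counts: list[int], z_threshold: float = 2.0) -> list[int]:
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if sigma and abs(c - mu) / sigma > z_threshold]

print(flag_anomalies(hourly_signups))  # -> [5]
```

In this toy data set only the 900-signup hour is flagged; during a tournament the baseline itself shifts, which is one reason surges make genuine anomalies harder to spot.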

“The challenges operators face during major sporting tournaments are not necessarily any different to their regular daily compliance challenges, but the sheer volume of new players from all around the world, coupled with increased activity is likely to cause issues for unprepared operators,” said Roger Redfearn-Tyrzyk, Vice President of Global Gaming at IDnow.

Euro 2024 will be a huge opportunity for gambling platforms, new and old, so it’s essential they get it right. They’ll need to onboard huge numbers of players and verify accounts thoroughly so that they’re onboarding the right players and effectively fighting fraud.

Roger Redfearn-Tyrzyk, Vice President of Global Gaming at IDnow.
Regulations, risks and rewards of the European gambling market.

Although all global gambling platforms are likely to see an uptick in player onboarding, it will likely be European nations that share the same time zone as the tournament (CEST), that will see the most pronounced increase in usage. But, of course, with Copa America just around the corner and many Latin American countries preparing to launch their gambling regulations, similar opportunities and challenges await the LATAM market, which is well on its way to becoming the largest in the world.

Meanwhile, the European online gambling market is expected to grow by 9.20% by 2025, fuelled not only by the rising popularity of gambling and tournaments like the Euro 2024, but also by software and hardware innovations. According to a report published by the Statista Research Department, the five European countries with the highest Gross Gaming Revenue are as follows:

United Kingdom: €15.99 billion
Italy: €13.2 billion
Germany: €12.1 billion
France: €10.2 billion
Spain: €6 billion

A special mention must also be made about the Netherlands; a market that only became regulated in recent years but has enjoyed rapid growth ever since. Learn more about the Netherlands in our blog, ‘What a difference a year makes: The state of play in the Dutch online gambling market.’ 

Although regarded as a single state in relation to its economy, each of the 27 members of the European Union is responsible for passing its own laws. Because of this, online gaming regulations can be quite complex. Each European country essentially shapes its own gambling regulations. For a more comprehensive overview of European gambling regulations, check out ‘Regulations for online gambling – an overview.’

Betting on a safer, more secure gambling experience with IDnow.

Our fully automated identity verification solutions help gaming operators offer secure player onboarding, deposits and withdrawals, and conduct seamless AML and age verification checks to comply with not only European and UK regulations, but also requirements in emerging markets like Canada and Brazil.

Interested in what the gambling industry’s top compliance challenges are likely to be for 2024 and beyond? Check out IDnow’s recently released ‘Challenges in compliance survey’ to discover operators’ most common concerns, why players abandon onboarding, the likely effect of UK’s upcoming financial risk checks, and much more.

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn

Friday, 17. May 2024

SC Media - Identity and Access

Unforeseen outcomes of innovation

DigiCert's Amit Sinha on how digital trust is being pushed to its limits as new innovations ramp up.


Australians’ prescription records breached in large-scale ransomware attack

The country’s federal government has stepped in following the hack of e-script provider MediSecure, but it’s unclear how much personal and medical data was stolen.


IBM Blockchain

How will quantum impact the biotech industry?

Enterprises across the world will be investing to upskill talent and prepare their organizations for the arrival of quantum computing. The post How will quantum impact the biotech industry? appeared first on IBM Blog.

The physics of atoms and the technology behind treating disease might sound like disparate fields. However, in the past few decades, advances in artificial intelligence, sensing, simulation and more have driven enormous impacts within the biotech industry.

Quantum computing provides an opportunity to extend these advancements with computational speedups and/or accuracy in each of those areas. Now is the time for enterprises, commercial organizations and research institutions to begin exploring how to use quantum to solve problems in their respective domains.

As a Partner in IBM’s Quantum practice, I’ve had the pleasure of working alongside Wade Davis, Vice President of Computational Science & Head of Digital for Research at Moderna, to drive quantum innovation in healthcare. Below, you’ll find some of the perspectives we share on the future of quantum computing in biotech.

What is quantum computing?

Quantum computing is a new kind of computer processing technology that relies on the science that governs the behavior of atoms to solve problems that are too complex or not practical for today’s fastest supercomputers. We don’t expect quantum to replace classical computing. Rather, quantum computers will serve as a highly specialized and complementary computing resource for running specific tasks.

A classical computer is how you’re reading this blog. These computers represent information in strings of zeros and ones and manipulate these strings by using a set of logical operations. The result is a computer that behaves deterministically—these operations have well-defined effects, and a given sequence of operations results in a single outcome. Quantum computers, however, are probabilistic—the same sequence of operations can have different outcomes, allowing these computers to explore and calculate multiple scenarios simultaneously. But this alone does not explain the full power of quantum computing. Quantum mechanics offers us access to a tweaked and counterintuitive version of probability that allows us to run computations inaccessible to classical computers. 
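
One way to picture the deterministic-versus-probabilistic distinction is a toy classical simulation of measuring a single qubit prepared in equal superposition: the same "program" gives different outcomes on different runs, with frequencies set by the squared amplitudes. This is only a classical illustration of the probability rule, not a quantum computation, and it deliberately ignores the interference effects that give quantum computers their real power.

```python
# Classical toy simulation of measuring a qubit prepared in equal superposition.
# Outcome probabilities are the squared amplitudes (here 0.5 and 0.5); repeated
# runs of the identical "program" give different results. This illustrates the
# probabilistic behavior described above; it is not a quantum computation.
import random
from collections import Counter

amplitude_0 = amplitude_1 = 2 ** -0.5   # |0> and |1> amplitudes after a Hadamard-like step
p_zero = amplitude_0 ** 2               # Born rule: probability = |amplitude|^2

shots = 1000
results = Counter("0" if random.random() < p_zero else "1" for _ in range(shots))
print(results)  # roughly 500 zeros and 500 ones, varying from run to run
```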

Therefore, quantum computers enable us to evaluate new dimensions for existing problems and explore entirely new frontiers that are not accessible today. And they perform computations in a way that more closely mirrors nature itself.

As mentioned, we don’t expect quantum computers to replace classical computers. Each one has its strengths and weaknesses: while quantum will excel at running certain algorithms or simulating nature, classical will still take on much of the work. We anticipate a future wherein programs weave quantum and classical computation together, relying on each one where they’re more appropriate. Quantum will extend the power of classical. 

Unlocking new potential

A set of core enterprise applications has crystallized from an environment of rapidly maturing quantum hardware and software. What the following problems share are many variables, a structure that seems to map well to the rules of quantum mechanics, and difficulty solving them with today’s HPC resources. They broadly fall into three buckets:

Advanced mathematics and complex data structures. The multidimensional nature of quantum mechanics offers a new way to approach problems with many moving parts, enabling better analytic performance for computationally complex problems. Even with recent and transformative advancements in AI and generative AI, quantum compute promises the ability to identify and recognize patterns that are not detectable for classical-trained AI, especially where data is sparse and imbalanced. For biotech, this might be beneficial for combing through datasets to find trends that might identify and personalize interventions that target disease at the cellular level.
Search and optimization. Enterprises have a large appetite for tackling complex combinatorial and black-box problems to generate more robust insights for strategic planning and investments. Though further on the horizon, quantum systems are being intensely studied for their ability to consider a broad set of computations concurrently, by generating statistical distributions, unlocking a host of promising opportunities including the ability to rapidly identify protein folding structures and optimize sequencing to advance mRNA-based therapeutics.
Simulating nature. Quantum computers naturally re-create the behavior of atoms and even subatomic particles—making them valuable for simulating how matter interacts with its environment. This opens up new possibilities to design new drugs to fight emerging diseases within the biotech industry—and more broadly, to discover new materials that can enable carbon capture and optimize energy storage to help industries fight climate change.

At IBM, we recognize that our role is not only to provide world-leading hardware and software, but also to connect quantum experts with nonquantum domain experts across these areas to bring useful quantum computing sooner. To that end, we convened five working groups covering healthcare/life sciences, materials science, high-energy physics, optimization and sustainability. Each of these working groups gathers in person to generate ideas and foster collaborations—and then these collaborations work together to produce new research and domain-specific implementations of quantum algorithms.

As algorithm discovery and development matures and we expand our focus to real-world applications, commercial entities, too, are shifting from experimental proof-of-concepts toward utility-scale prototypes that will be integrated into their workflows. Over the next few years, enterprises across the world will be investing to upskill talent and prepare their organizations for the arrival of quantum computing.

Today, an organization’s quantum computing readiness score is most influenced by its operating model: if an organization invests in a team and a process to govern their quantum innovation, they are better positioned than peers that focus just on the technology without corresponding investment in their talent and innovation process.

IBM Institute for Business Value | Research Insights: Making Quantum Readiness Real

Among industries that are making the pivot to useful quantum computing, the biotech industry is moving rapidly to explore how quantum compute can help reduce the cost and shorten the time required to discover, create, and distribute therapeutic treatments that will improve the health, well-being and quality of life of individuals suffering from chronic disease. According to BCG’s Quantum Computing Is Becoming Business Ready report: “eight of the top ten biopharma companies are piloting quantum computing, and five have partnered with quantum providers.”

Partnering with IBM

Recent advancements in quantum computing have opened new avenues for tackling complex combinatorial problems that are intractable for classical computers. Among these challenges, the prediction of mRNA secondary structure is a critical task in molecular biology, impacting our understanding of gene expression, regulation and the design of RNA-based therapeutics.

For example, Moderna has been pioneering the development of quantum for biotechnology. Emerging from the pandemic, Moderna established itself as a game-changing innovator in biotech when a decade of extensive R&D allowed them to use their technology platform to deliver a COVID-19 vaccine with record speed. 

Learn more: How Moderna uses lipid nanoparticles (LNPs) to deliver mRNA and help fight disease

Given the value of their platform approach, perhaps quantum might further push their ability to perform mRNA research, providing a host of novel mRNA vaccines more efficiently than ever before. This is where IBM can help. 

As an initial step, Moderna is working with IBM to benchmark the application of quantum computing against a classical CPLEX solver. They’re evaluating the performance of a quantum algorithm called CVaR VQE on randomly generated mRNA nucleotide sequences to accurately predict stable mRNA structures, compared to the current state of the art. Their findings demonstrate the potential of quantum computing to provide insights into mRNA dynamics and offer a promising direction for advancing computational biology through quantum algorithms. As a next step, they hope to push quantum to sequence lengths beyond what CPLEX can handle.

This is just one of many collaborations that are transforming biotech processes with the help of quantum computation. Biotech enterprises are using IBM Quantum Systems to run their workloads on real utility-scale quantum hardware, while leveraging the IBM Quantum Network to share expertise across domains. And with our updated IBM Quantum Accelerator program, enterprises can now prepare their organizations with hands-on guidance to identify use cases, design workflows and develop utility-scale prototypes that use quantum computation for business impact. 

The time has never been better to begin your quantum journey—get started today.

Bringing useful quantum computing to the world

The post How will quantum impact the biotech industry? appeared first on IBM Blog.


This week in identity

E53 - A Review of RSA Conference 2024 - Part 2

Summary In this episode, Simon and David discuss the convergence of identity and cybersecurity, particularly in the context of cloud adoption. They explore the challenges and opportunities that arise from this convergence and the impact on organizations of different sizes. They also touch on the confusion caused by the abundance of acronyms in the industry and the need for clarity and standardization.

Summary

In this episode, Simon and David discuss the convergence of identity and cybersecurity, particularly in the context of cloud adoption. They explore the challenges and opportunities that arise from this convergence and the impact on organizations of different sizes. They also touch on the confusion caused by the abundance of acronyms in the industry and the need for clarity and standardization. Overall, they emphasize the importance of protecting identity components and the critical role of identity in security. The conversation explores the challenges and opportunities in the identity and access management (IAM) space, with a focus on the importance of data management and the need for effective discovery and remediation processes. The fragmentation of identity systems and the lack of visibility into identities and their interactions are identified as key issues. The acquisition of Q Radar by Palo Alto is discussed as a potential game-changer in the IAM space. The conversation concludes with the recognition that while automation and AI have their place, human involvement is still crucial for effective remediation.

Keywords

identity, cybersecurity, convergence, cloud, challenges, opportunities, acronyms, standardization, protection, security, identity and access management, IAM, data management, discovery, remediation, fragmentation, visibility, QRadar, Palo Alto, automation, AI, human involvement

Takeaways

Identity and cybersecurity are converging, particularly in the context of cloud adoption.

Organizations of different sizes face different challenges and opportunities in managing identity and security.

The abundance of acronyms in the industry can be confusing, and there is a need for clarity and standardization.

Protecting identity components is crucial, as identity often plays a central role in security breaches. Effective data management is crucial in the identity and access management space.

Fragmentation of identity systems and lack of visibility into identities and their interactions are key challenges.

The acquisition of QRadar by Palo Alto has the potential to impact the IAM space.

While automation and AI have their place, human involvement is still necessary for effective remediation.

Chapters

00:00 Introduction and Post-RSA Recovery

01:23 Unpacking the Convergence of Identity and Cybersecurity

07:13 Lessons from the Transition from Horses to Cars

09:08 The Confusion of Acronyms and the Need for Clarity

13:25 The Hype Cycle and the Trajectory of New Technologies

15:16 The Impact of Cloud Adoption on Identity and Security

23:21 The Transient Tilt in the Cloud and the Importance of Protecting Identity Components

24:13 The Importance of Data Management in IAM

27:38 Challenges of Fragmentation and Lack of Visibility

30:53 The Potential Impact of the QRadar Acquisition

34:44 The Role of Automation and Human Involvement in Remediation


IBM Blockchain

AI in commerce: Essential use cases for B2B and B2C

Explore how using AI in commerce has the capacity to create more fundamentally relevant and contextually appropriate buying experiences. The post AI in commerce: Essential use cases for B2B and B2C appeared first on IBM Blog.
Four AI in commerce use cases are already transforming the customer journey: modernization and business model expansion; dynamic product experience management (PXM); order intelligence; and payments and security.

By implementing effective solutions for AI in commerce, brands can create seamless, personalized buying experiences that increase customer loyalty, customer engagement, retention and share of wallet across B2B and B2C channels.

Poorly run implementations of traditional or generative AI in commerce—such as models trained on inadequate or inappropriate data—lead to bad experiences that alienate consumers and businesses.

Successful integration of AI in commerce depends on earning and keeping consumer trust. This includes trust in the data, the security, the brand and the people behind the AI.

Recent advancements in artificial intelligence (AI) are transforming commerce at an exponential pace. As these innovations are dynamically reshaping the commerce journey, it is crucial for leaders to anticipate and future-proof their enterprises to embrace the new paradigm.  

In the context of this rapid advancement, generative AI and automation have the capacity to create more fundamentally relevant and contextually appropriate buying experiences. They can simplify and accelerate workflows throughout the commerce journey, from discovery to the successful completion of a transaction. To take one example, AI-facilitated tools like voice navigation promise to upend the way users fundamentally interact with a system. And these technologies provide brands with intelligent tools, enabling more productivity and efficiency than was possible even five years ago. 

AI models analyze vast amounts of data quickly, and get more accurate by the day. They can provide valuable insights and forecasts to inform organizational decision-making in omnichannel commerce, enabling businesses to make more informed and data-driven decisions. By implementing effective AI solutions—using traditional and generative AI—brands can create seamless and personalized buying experiences. These experiences result in increased customer loyalty, customer engagement, retention, and increased share of wallet across both business-to-business (B2B) and business-to-consumer (B2C) channels. Ultimately, they drive significant increases in conversions, generating meaningful revenue growth from the transformed commerce experience.  

Explore commerce consulting services

Creating seamless experiences for skeptical users

It’s been a swift shift toward a ubiquitous use of AI. Early iterations of e-commerce used traditional AI largely to create dynamic marketing campaigns, improve the online shopping experience, or triage customer requests. Today the technology’s advanced capabilities encourage widespread adoption. AI can be integrated into every touchpoint across the commerce journey. According to a recent report from the IBM Institute for Business Value, half of CEOs are integrating generative AI into products and services. Meanwhile, 43% are using the technology to inform strategic decisions. 

But customers aren’t yet completely on board. Fluency with AI has grown along with the rollout of ChatGPT and virtual assistants like Amazon’s Alexa. But as businesses around the globe rapidly adopt the technology to augment processes from merchandising to order management, there is some risk. High-profile failures and expensive litigation threaten to sour public opinion and cripple the promise of generative AI-powered commerce technology.  

Generative AI’s impact on the social media landscape garners occasional bad press. Disapproval of brands or retailers that use AI is as high as 38% among older generations, requiring businesses to work harder to gain their trust. 

A report from the IBM Institute for Business Value found that there’s enormous room for improvement in the customer experience. Only 14% of surveyed consumers described themselves as “satisfied” with their experience purchasing goods online. A full one-third of consumers found their early customer support and chatbot experiences that use natural language processing (NLP) so disappointing that they didn’t want to engage with the technology again. And the centrality of these experiences isn’t limited to B2C vendors. Over 90% of business buyers say a company’s customer experience is as important as what it sells.   

Poorly run implementations of traditional or generative AI technology in commerce—such as deploying deep learning models trained on inadequate or inappropriate data—lead to bad experiences that alienate both consumers and businesses. 

To avoid this, it’s crucial for businesses to carefully plan and design intelligent automation initiatives that prioritize the needs and preferences of their customers, whether they are consumers or B2B buyers. By doing so, brands can create contextually relevant personalized buying experiences, seamless and friction-free, which foster customer loyalty and trust. 

This article explores four transformative use cases for AI in commerce that are already enhancing the customer journey, especially in the e-commerce business and e-commerce platform components of the overall omnichannel experience. It also discusses how forward-thinking companies can effectively integrate AI algorithms to usher in a new era of intelligent commerce experiences for both consumers and brands. But none of these use cases exist in a vacuum. As the future of commerce unfolds, each use case interacts holistically to transform the customer journey from end to end, for customers, for employees and for their partners.   

Use case 1: AI for modernization and business model expansion

AI-powered tools can be incredibly valuable in optimizing and modernizing business operations throughout the customer journey, and they are especially critical across the commerce continuum. By using machine learning algorithms and big data analytics, AI can uncover patterns, correlations and trends that might escape human analysts. These capabilities can help businesses make informed decisions, improve operational efficiencies, and identify opportunities for growth. The applications of AI in commerce are vast and varied. They include:

Dynamic content

Traditional AI fuels recommendation engines that suggest products based on customer purchase history and customer preferences, creating personalized experiences that result in increased customer satisfaction and loyalty. Experience-building strategies like these have been used by online retailers for years. Today, generative AI enables dynamic customer segmentation and profiling. This segmentation activates personalized product recommendations and suggestions, such as product bundles and upsells, that adapt to individual customer behavior and preferences, resulting in higher engagement and conversion rates. 
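As a concrete illustration of the traditional, history-based side of this, the following sketch builds a tiny item-to-item recommender from a purchase matrix using cosine similarity. The data and weighting are invented for the example; a production engine would add implicit-feedback weighting, recency and business rules.

```python
import numpy as np

# Rows = customers, columns = products; values = purchase counts (toy data)
interactions = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 1, 1, 1],
], dtype=float)

# Cosine similarity between product columns
normalized = interactions / np.linalg.norm(interactions, axis=0, keepdims=True)
item_sim = normalized.T @ normalized

def recommend(customer: int, top_n: int = 2) -> list[int]:
    """Score unpurchased products by similarity to what the customer bought."""
    history = interactions[customer]
    scores = item_sim @ history
    scores[history > 0] = -np.inf          # don't re-recommend owned items
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(customer=0))
```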

Commerce operations

Traditional AI allows for the automation of routine tasks such as inventory management, order processing and fulfillment optimization, resulting in increased efficiency and cost savings. Generative AI activates predictive analytics and forecasting, enabling businesses to anticipate and respond to changes in demand, reducing stockouts and overstocking, and improving supply chain resilience. It can also significantly impact real-time fraud detection and prevention, minimizing financial losses and improving customer trust.  

Business model expansion

Both traditional and generative AI have pivotal functions that can redefine business models. They can, for example, enable the seamless integration of a marketplace platform where AI-driven algorithms match supply with demand, effectively connecting sellers and buyers across different geographic areas and market segments. Generative AI can also enable new forms of commerce—such as voice commerce, social commerce and experiential commerce—that provide customers with seamless and personalized shopping experiences.

Traditional AI can enhance international purchasing by automating tasks such as currency conversions and tax calculations. It can also facilitate compliance with local regulations, streamlining the logistics of cross-border transactions.

However, generative AI can create value by generating multilingual support and personalized marketing content. These tools adapt content to the cultural and linguistic nuances of different regions, offering a more contextually relevant experience for international customers and consumers. 

Use case 2: AI for dynamic product experience management (PXM)

Using the power of AI, brands can revolutionize their product experience management and user experience by delivering personalized, engaging and seamless experiences at every touchpoint in commerce. These tools can manage content, standardize product information, and drive personalization. With AI, brands can create a product experience that informs, validates and builds the confidence necessary for conversion. Some of the ways AI can transform product experience management through relevant personalization include: 

Intelligent content management

Generative AI can revolutionize content management by automating the creation, classification and optimization of product content. Unlike traditional AI, which analyzes and categorizes existing content, generative AI can create new content tailored to individual customers. This content includes product descriptions, images, videos and even interactive experiences. By using generative AI, brands can save time and resources while simultaneously delivering high-quality, engaging content that resonates with their target audience. Generative AI can also help brands maintain consistency across all touchpoints, ensuring that product information is accurate, up-to-date and optimized for conversions. 

Hyperpersonalization

Generative AI can take personalization to the next level by creating customized experiences that are tailored to individual customers. By analyzing customer data and customer queries, generative AI can create personalized product recommendations, offers and content that are more likely to drive conversions.

Unlike traditional AI, which can only segment customers based on predefined criteria, generative AI can create unique experiences for each customer, considering their preferences, behavior and interests. Such personalization is crucial as organizations adopt software-as-a-service (SaaS) models more frequently: Global subscription-model billing is expected to double over the next six years, and most consumers say those models help them feel more connected to a business. With AI’s potential for hyperpersonalization, those subscription-based consumer experiences can vastly improve. These experiences result in higher engagement, increased customer satisfaction, and ultimately, higher sales. 

Experiential product information

AI tools allow individuals to learn more about products through processes like visual search, where taking a photograph of an item surfaces information about it. Generative AI takes these capabilities further, transforming product information by creating interactive, immersive experiences that help customers better understand products and make informed purchasing decisions. For example, generative AI can create 360-degree product views, interactive product demos, and virtual try-on capabilities. These experiences provide a richer product understanding and help brands differentiate themselves from competitors and build trust with potential customers. Unlike traditional AI, which provides static product information, generative AI can create engaging, memorable experiences that drive conversions and build brand loyalty.  

Smart search and recommendations

Generative AI can revolutionize search engines and recommendations by providing customers with personalized, contextualized results that match their intent and preferences. Unlike traditional AI, which relies on keyword matching, generative AI can understand natural language and intent, providing customers with relevant results that are more likely to match their search queries. Generative AI can also create recommendations that are based on individual customer behavior, preferences and interests, resulting in higher engagement and increased sales. By using generative AI, brands can deliver intelligent search and recommendation capabilities that enhance the overall product experience and drive conversions. 
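A hedged sketch of the intent-matching idea: instead of keyword lookup, products and queries are compared in an embedding space. The vectors below are hard-coded stand-ins for the output of an embedding model, which is the piece a real deployment would supply.

```python
import numpy as np

# Pretend these vectors came from an embedding model (values are made up);
# a production system would embed queries and product text with the same model.
catalog = {
    "waterproof hiking boots": np.array([0.9, 0.1, 0.3]),
    "trail running shoes":     np.array([0.8, 0.2, 0.4]),
    "office dress shoes":      np.array([0.1, 0.9, 0.2]),
}

def search(query_vec: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank products by cosine similarity to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(catalog, key=lambda name: cos(query_vec, catalog[name]), reverse=True)
    return ranked[:top_n]

# "shoes for a rainy mountain hike", embedded by the same (hypothetical) model
print(search(np.array([0.85, 0.15, 0.35])))
```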

Use case 3: AI for order intelligence 

Generative AI and automation can allow businesses to make data-driven decisions to streamline processes across the supply chain, reducing inefficiency and waste. For example, a recent analysis from McKinsey found that nearly 20% of logistics costs could stem from “blind handoffs”—the moment a shipment is dropped at some point between the manufacturer and its intended location. According to the McKinsey report, these inefficient interactions might amount to as much as $95 billion in losses in the United States every year. AI-powered order intelligence can reduce some of these inefficiencies by using: 

Order orchestration and fulfillment optimization

By considering factors such as inventory availability, location proximity, shipping costs and delivery preferences, AI tools can dynamically select the most cost-effective and efficient fulfillment options for an individual order. These tools might dictate the priority of deliveries, predict order routing, or dispatch deliveries to comply with sustainability requirements.  
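A minimal sketch of that selection logic, with invented nodes and weights: each in-stock fulfillment node is scored on shipping cost and distance, and the cheapest eligible option wins. Real orchestration engines add capacity, SLA, carbon and split-shipment constraints on top of this.

```python
from dataclasses import dataclass

@dataclass
class FulfillmentNode:
    name: str
    distance_km: float
    shipping_cost: float
    in_stock: bool

def pick_node(nodes: list[FulfillmentNode], cost_weight: float = 1.0,
              distance_weight: float = 0.05) -> FulfillmentNode:
    """Choose the cheapest eligible node by a weighted cost-plus-distance score."""
    eligible = [n for n in nodes if n.in_stock]
    if not eligible:
        raise ValueError("no node can fulfill this order")
    return min(eligible, key=lambda n: cost_weight * n.shipping_cost
                                       + distance_weight * n.distance_km)

nodes = [
    FulfillmentNode("store-downtown", 12, 9.0, True),
    FulfillmentNode("regional-dc", 140, 4.5, True),
    FulfillmentNode("partner-3pl", 60, 6.0, False),
]
print(pick_node(nodes).name)
```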

Demand forecasting

By analyzing historical data, AI can predict demand and help businesses optimize their inventory levels and minimize excess, reducing costs and improving efficiency. Real-time inventory updates allow businesses to adapt quickly to changing conditions, allowing for effective resource allocation.
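As a toy example of the forecasting step, the sketch below fits a linear trend to twelve months of invented sales data and projects the next quarter, then applies a simple safety-stock buffer. Production systems would use richer models with seasonality, causal factors and probabilistic outputs, but the reorder logic follows the same shape.

```python
import numpy as np

# Twelve months of historical unit sales (toy data)
monthly_sales = np.array([120, 135, 128, 150, 160, 155, 170, 180, 175, 190, 205, 210])

# Fit a linear trend and project the next three months
months = np.arange(len(monthly_sales))
slope, intercept = np.polyfit(months, monthly_sales, deg=1)
future = np.arange(len(monthly_sales), len(monthly_sales) + 3)
forecast = slope * future + intercept

# A simple reorder rule: hold enough stock to cover forecast demand plus a buffer
safety_factor = 1.2
print("3-month forecast:", np.round(forecast))
print("suggested stock level:", int(forecast.sum() * safety_factor))
```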

Inventory transparency and order accuracy

AI-powered order management systems provide real-time visibility into all aspects of the critical order management workflow. These tools enable companies to proactively identify potential disruptions and mitigate risks. This visibility helps customers and consumers trust that their orders will be delivered exactly when and how they were promised. 

Use case 4: AI for payments and security 

Intelligent payments enhance the payment and security process, improving efficiency and accuracy. Such technologies can help process, manage and secure digital transactions—and provide advance warning of potential risks and the possibility of fraud. 

Intelligent payments

Traditional and generative AI both enhance transaction processes for B2C and B2B customers making purchases in online stores. Traditional AI optimizes POS systems, automates new payment methods, and facilitates multiple payment solutions across channels, streamlining operations and improving consumer experiences. Generative AI creates dynamic payment models for B2B customers, addressing their complex transactions with customized invoicing and predictive behaviors. The technology can also provide strategic and personalized financial solutions. Also, generative AI can enhance B2C customer payments by creating personalized and dynamic pricing strategies. 

Risk management and fraud detection

Traditional AI and machine learning excel in processing vast volumes of B2C and B2B payments, enabling businesses to identify and respond to suspicious trends swiftly. Traditional AI automates the detection of irregular patterns and potential fraud, reducing the need for costly human analysis. Meanwhile, generative AI contributes by simulating various fraud scenarios to predict and prevent new types of fraudulent activities before they occur, enhancing the overall security of payment systems. 
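One common pattern behind detecting "irregular patterns" is unsupervised anomaly detection. The sketch below trains scikit-learn's IsolationForest on synthetic "normal" transactions and flags outliers; the feature choice, threshold and synthetic data are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Features per transaction: [amount, seconds since previous transaction]
normal = np.column_stack([rng.normal(60, 15, 500), rng.normal(3600, 600, 500)])
suspicious = np.array([[4900.0, 4.0], [5200.0, 6.0]])   # large, rapid-fire payments
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(transactions)                      # -1 = anomaly, 1 = normal
print("flagged rows:", np.where(flags == -1)[0])
```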

Compliance and data privacy

In the commerce journey, traditional AI helps secure transaction data and automates compliance with payment regulations, enabling businesses to quickly adapt to new financial laws and conduct ongoing audits of payment processes. Generative AI further enhances these capabilities by developing predictive models that anticipate changes in payment regulations. It can also automate intricate data privacy measures, helping businesses to maintain compliance and protect customer data efficiently. 

The future of AI in commerce is based on trust 

Today’s commercial landscape is swiftly transforming into a digitally interconnected ecosystem. In this reality, the integration of generative AI across omnichannel commerce—both B2B and B2C—is essential. However, for this integration to be successful, trust must be at the core of its implementation. Identifying the right moments in the commerce journey for AI integration is also crucial. Companies need to conduct comprehensive audits of their existing workflows to make sure AI innovations are both effective and sensitive to consumer expectations. Introducing AI solutions transparently and with robust data security measures is imperative.  

Businesses must approach the introduction of trusted generative AI as an opportunity to enhance the customer experience by making it more personalized, conversational and responsive. This requires a clear strategy that prioritizes human-centric values and builds trust through consistent, observable interactions that demonstrate the value and reliability of AI enhancements.  

Looking forward, trusted AI redefines customer interactions, enabling businesses to meet their clients precisely where they are, with a level of personalization previously unattainable. By working with AI systems that are reliable, secure and aligned with customer needs and business outcomes, companies can forge deeper, trust-based relationships. These relationships are essential for long-term engagement and will be essential to every business’s future commerce success, growth and, ultimately, their viability.

Explore commerce consulting services

Deliver omnichannel support with retail chatbots

The post AI in commerce: Essential use cases for B2B and B2C appeared first on IBM Blog.


liminal (was OWI)

Weekly Industry News – Week of May 13

Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage. Looking for product or company-specific news? Log in or sign-up to Link for more detailed news and developments. Week of Week of May […] The post Weekly Industry News – Week of

Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage.

Looking for product or company-specific news? Log in or sign-up to Link for more detailed news and developments.

Week of May 13, 2024

Here are the main industry highlights of this week.

➡ Innovation and New Technology Developments

U.S. Senators Advocate $32 Billion Annual Investment to Boost AI Innovation and Outcompete China

A bipartisan group of U.S. senators, led by Majority Leader Chuck Schumer, advocates for a significant increase in government funding for AI research to bolster America’s leadership in AI innovation. The senatorial group is championing this initiative due to their concerns about AI’s potential to disrupt jobs and electoral processes and surpass human capabilities. Their proposal includes a targeted annual investment of at least $32 billion for non-defense AI innovation and emphasizes the need to outcompete countries like China. The plan also features a comprehensive AI research and development initiative, including an “AI-ready data” program and enhanced infrastructure for AI testing within government agencies.

Read the full article on pymnts.com

NIST Updates NICE Framework to Include AI Security and Cyber Resiliency

The National Institute of Standards and Technology (NIST) has updated the National Initiative for Cybersecurity Education (NICE) framework to include new competencies and update existing skills. This aims to address the shortage of qualified cyber professionals and expand roles to cover areas like AI security and cyber resiliency. Despite these efforts, only 14% of organizations currently use the NICE framework in job postings, indicating a need for broader adoption.

Read the full article on federalnewsnetwork.com

Google Unveils On-Device AI Feature to Detect Phone Scams at Google I/O 2024

At the 2024 Google I/O developer conference, Google revealed a new feature to detect potential phone call scams. The feature, powered by Gemini Nano, Google’s on-device AI, listens for scam patterns and alerts users. It will be opt-in to protect privacy, but some concerns about listening to conversations remain. No release date is set, but it demonstrates the potential of Gemini Nano.

Read the full article on economictimes.indiatimes.com

Pennsylvania Introduces Identity Verification Kiosks to Streamline Unemployment Claims

Pennsylvania has introduced identity verification kiosks in CareerLink and UPS locations to improve the unemployment compensation filing process. These ID.me kiosks, staffed with trained personnel, allow claimants to use physical documents, enhancing accessibility for those without home computers or mobile phones. The initiative addresses issues faced with digital verification and aims to help more people navigate the system effectively.

Read the full article on wvia.org

➡ Investments and Partnerships

LogRhythm and Exabeam Join Forces to Enhance AI-Driven Cybersecurity Solutions Amid Industry Consolidation

Thoma Bravo’s SIEM company, LogRhythm, is merging with Exabeam, another cybersecurity firm backed by Cisco and Lightspeed Venture Partners. This merger is the latest in a trend of consolidation within the cybersecurity industry. LogRhythm had previously raised $126 million before Thoma Bravo acquired a majority stake, while Exabeam’s total funding reached nearly $400 million by its 2021 Series F round. The merger, expected to close in Q3 2024, aims to advance AI-driven cybersecurity solutions, concentrating on their security operations product portfolio.

Read the full article on techcrunch.com

Koodoo Partners with Resistant AI to Enhance Document Checking with Advanced Forensics

Koodoo has teamed up with Resistant AI to enhance its Document Checking service with advanced Document Forensics technology. This partnership aims to provide financial services firms with a solution to verify customer documents, extract information, and detect signs of fraud. Resistant AI’s technology uses AI to identify subtle document irregularities, improving fraud detection accuracy and streamlining financial institutions’ operational processes.

Read the full article on finextra.com

TabaPay Withdraws from Synapse Acquisition Amid Disputes Over Financial Obligations

TabaPay has withdrawn from a planned acquisition of assets from Synapse due to unmet closing conditions. Synapse accused Evolve Bank & Trust of not fulfilling funding obligations, but Evolve refutes involvement. There are further complications involving Mercury, which disputes Synapse’s claims. Despite this, Synapse remains hopeful for a resolution.

Read the full article on techcrunch.com

Australia Allocates AU$288.1 Million for National Digital Identity Program Launch in July

Australia’s federal government has allocated AU$288.1 million (US$190.9 million) to develop a national digital identity program set to commence in July. The funding will be used for private sector pilot programs, infrastructure upgrades, and strengthening regulatory and security measures. A significant portion will go towards upgrading the myGovID platform, soon to be rebranded as myID, to facilitate secure access to government services for businesses. Other budget allocations include developing data standards, performing security assessments, and enhancing the identity exchange managed by Services Australia. Additionally, funding includes upgrading the Credential Protection Register in response to the 2022 Optus data breach and rolling out digital driver licenses in the Northern Territory.

Read the full article on cbiometricupdate.com

Data Zoo Secures $35 Million Investment from Ellerston Capital and Appoints New CEO to Drive Global Expansion

Sydney-based identity verification company Data Zoo secured $35 million in its first external funding round, led by Ashok Jacob’s Ellerston Capital through its JAADE fund. This investment values Data Zoo at over $100 million and supports its software, which enhances KYC compliance and fraud prevention for financial institutions and fintech startups. Developed by founder Tony Fitzgibbon and CIO Memoona Anwar, the technology reduces costs and risks while protecting customer privacy by eliminating the need to store identity data. Following the investment, Fitzgibbon stepped down as CEO, with former London Stock Exchange executive Charlie Minutella taking over the role in New York. The new funds will promote the adoption and innovation of Data Zoo’s software.

Read the full article on startupdaily.net

Accenture Wins $789M Contract to Enhance U.S. Navy Cybersecurity with SHARKCAGE Systems

The U.S. Navy awarded Accenture Federal Services a $789 million, ten-year contract to enhance cybersecurity for Navy and Marine Corps networks. The contract involves unifying operations within the SHARKCAGE environment and providing integrated systems using commercial hardware and software. Accenture’s responsibilities include design, testing, production, and support, as well as improving cyber monitoring and attack sensing for Navy fleets.

Read the full article on meritalk.com

StrongDM Secures $34 Million for Global Expansion and Enhanced Security Features

StrongDM, a Zero Trust PAM company, raised $34 million in a Series C funding round led by Anchor Capital Advisors LLC, totaling $96 million in funding. The funds will expand global operations and establish a Polish engineering center. StrongDM plans to enhance its PAM solution with advanced security features like micro-authorizations and contextual enforcement.

Read the full article on pulse2.com

CUBE Enhances Regulatory Intelligence Capabilities with Acquisition of Reg-Room

CUBE, an automated regulatory intelligence company, acquired New York-based Reg-Room LLC, known for its expertise in monitoring regulatory changes in financial services. Reg-Room offers products like Reg-Track, Reg-Impact, and Regulatory Risk Report. Supported by an investment from HgCapital Trust, the acquisition will enhance CUBE’s compliance data management and workflow automation with Reg-Room’s precise regulatory updates.

Read the full article on startupdaily.net

➡ Policy and Regulatory

BlockTower Capital Suffers Major Hack, Losing Funds from Main Hedge Fund

BlockTower Capital, a major investment firm with $1.7 billion in assets under management, has fallen victim to a hack, resulting in its main hedge fund being partially drained by fraudsters. The stolen funds are still missing, and the hacker remains at large. Crypto hacks continue to impact the industry, with fraudsters stealing approximately $1.7 billion from various projects last year. BlockTower, founded in 2017 and based in Miami and New York, has invested in companies like Dapper Labs, Sky Mavis, and Terraform Labs. In 2022, the firm raised a $150 million venture fund.

Read the full article on bnnbloomberg.ca

Deepfake Scam Targets WPP CEO in Sophisticated Corporate Fraud Attempt

Scammers used AI-generated deepfakes to impersonate WPP CEO Mark Read in a failed fraud attempt. They created a fake WhatsApp account, arranged a Microsoft Teams meeting, and used cloned voices and visuals to mimic Read and another executive. The goal was to deceive an agency leader into initiating a fraudulent business venture. This incident highlights the need for better deepfake detection and vigilance against digital impersonation in corporate settings.

Read the full article on amp.theguardian.com

US Senate Passes FAA Authorization with Controversial Expansion of TSA Facial Recognition

The US Senate passed a bill to extend FAA programs for 5 years, including TSA’s controversial expansion of facial recognition from 25 to over 430 airports. Senators Jeff Merkley and John Kennedy proposed an amendment to halt this until 2027 due to privacy concerns but it didn’t pass. Debate continues on balancing biometric innovation with civil liberties. 

Read the full article on nytimes.com

Australian Federal Court Refuses to Extend Order for Social Media Platform X to Hide Violent Video

An Australian court declined to extend an order requiring social media platform X to hide a video of Bishop Mar Mari Emmanuel’s stabbing, highlighting conflicts over content restrictions. X had initially geo-blocked the video, but it remained accessible globally and via VPNs. The decision raised concerns about content control and free speech, with Elon Musk calling it governmental overreach. Australia is reviewing its online safety laws, reflecting ongoing debates over digital platform regulations and censorship.

Read the full article on wsj.com

FINTRAC Fines Binance $4.4 Million for Regulatory Violations in Canada

Canada’s financial intelligence agency FINTRAC fined Binance $4.4 million for not registering as a foreign money services business and breaching anti-money laundering laws. Using blockchain analytics, FINTRAC found that Binance failed to report over 5,900 transactions between June 2021 and July 2023. This highlights the need for stricter oversight of international cryptocurrency trading. The penalty reflects a global trend towards increased regulation and transparency in the crypto market.

Read the full article on theblock.co

The post Weekly Industry News – Week of May 13 appeared first on Liminal.co.


Indicio

Complete Zero Trust with Decentralized Identity

The post Complete Zero Trust with Decentralized Identity appeared first on Indicio.

SC Media - Identity and Access

The enterprise browser: The first win-win-win for CISOs, CIOs and end users

Island's Mikey Fey on how enterprise browsers combine security and productivity.



SailPoint's approach to unified identity security for the modern enterprise

SailPoint's Wendy Wu says identity is at the forefront of securing the enterprise.



AI-generated code top cloud security concern amid 100% use rate in survey

GenAI, API and identity risks are key concerns, as well as conflicts between DevOps and SecOps.


Friday, 17. May 2024

IBM Blockchain

A new era in BI: Overcoming low adoption to make smart decisions accessible for all

A deeper look into why business intelligence challenges might persist and what it means for users across an organization. The post A new era in BI: Overcoming low adoption to make smart decisions accessible for all appeared first on IBM Blog.

Organizations today are both empowered and overwhelmed by data. This paradox lies at the heart of modern business strategy: while there’s an unprecedented amount of data available, unlocking actionable insights requires more than access to numbers.

The push to enhance productivity, use resources wisely, and boost sustainability through data-driven decision-making is stronger than ever. Yet, the low adoption rates of business intelligence (BI) tools present a significant hurdle.

According to Gartner, although the number of employees that use analytics and business intelligence (ABI) has increased in 87% of surveyed organizations, ABI is still used by only 29% of employees on average. Despite the clear benefits of BI, the percentage of employees actively using ABI tools has seen minimal growth over the past 7 years. So why aren’t more people using BI tools?

Understanding the low adoption rate

The low adoption rate of traditional BI tools, particularly dashboards, is a multifaceted issue rooted in both the inherent limitations of these tools and the evolving needs of modern businesses. Here’s a deeper look into why these challenges might persist and what it means for users across an organization:

1. Complexity and lack of accessibility

While excellent for displaying consolidated data views, dashboards often present a steep learning curve. This complexity makes them less accessible to nontechnical users, who might find these tools intimidating or overly complex for their needs. Moreover, the static nature of traditional dashboards means they are not built to adapt quickly to changes in data or business conditions without manual updates or redesigns.

2. Limited scope for actionable insights

Dashboards typically provide high-level summaries or snapshots of data, which are useful for quick status checks but often insufficient for making business decisions. They tend to offer limited guidance on what actions to take next, lacking the context needed to derive actionable, decision-ready insights. This can leave decision-makers feeling unsupported, as they need more than just data; they need insights that directly inform action.

3. The “unknown unknowns”

A significant barrier to BI adoption is the challenge of not knowing what questions to ask or what data might be relevant. Dashboards are static and require users to come with specific queries or metrics in mind. Without knowing what to look for, business analysts can miss critical insights, making dashboards less effective for exploratory data analysis and real-time decision-making.

Moving beyond one-size-fits-all: The evolution of dashboards

While traditional dashboards have served us well, they are no longer sufficient on their own. The world of BI is shifting toward integrated and personalized tools that understand what each user needs. This isn’t just about being user-friendly; it’s about making these tools vital parts of daily decision-making processes for everyone, not just for those with technical expertise.

Emerging technologies such as generative AI (gen AI) are enhancing BI tools with capabilities that were once only available to data professionals. These new tools are more adaptive, providing personalized BI experiences that deliver contextually relevant insights users can trust and act upon immediately. We’re moving away from the one-size-fits-all approach of traditional dashboards to more dynamic, customized analytics experiences. These tools are designed to guide users effortlessly from data discovery to actionable decision-making, enhancing their ability to act on insights with confidence.

The future of BI: Making advanced analytics accessible to all

As we look toward the future, ease of use and personalization are set to redefine the trajectory of BI.

1. Emphasizing ease of use

The new generation of BI tools breaks down the barriers that once made powerful data analytics accessible only to data scientists. With simpler interfaces, including conversational ones, these tools make interacting with data as easy as having a chat. This integration into daily workflows means that advanced data analysis can be as straightforward as checking your email. This shift democratizes data access and empowers all team members to derive insights from data, regardless of their technical skills.

For example, imagine a sales manager who wants to quickly check the latest performance figures before a meeting. Instead of navigating through complex software, they ask the BI tool, “What were our total sales last month?” or “How are we performing compared to the same period last year?”

The system understands the questions and provides accurate answers in seconds, just like a conversation. This ease of use helps to ensure that every team member, not just data experts, can engage with data effectively and make informed decisions swiftly.
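Under the hood, a conversational BI layer ultimately has to translate such a question into a query over governed data. As a deliberately simplified sketch, the function below answers "What were our total sales last month?" against a small pandas table; the natural-language parsing and semantic-layer mapping that a real tool performs are assumed away.

```python
import pandas as pd

sales = pd.DataFrame({
    "date": pd.to_datetime(["2024-04-03", "2024-04-18", "2024-05-02", "2024-05-20"]),
    "amount": [1200.0, 800.0, 950.0, 1500.0],
})

def total_sales_last_month(df: pd.DataFrame, today: pd.Timestamp) -> float:
    """Answer 'What were our total sales last month?' against a sales table."""
    last_month = today.to_period("M") - 1
    mask = df["date"].dt.to_period("M") == last_month
    return float(df.loc[mask, "amount"].sum())

print(total_sales_last_month(sales, pd.Timestamp("2024-05-26")))   # April total
```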

2. Driving personalization

Personalization is transforming how BI platforms present and interact with data. It means that the system learns from how users work with it, adapting to suit individual preferences and meeting the specific needs of their business.

For example, a dashboard might display the most important metrics for a marketing manager differently than for a production supervisor. It’s not just about the user’s role; it’s also about what’s happening in the market and what historical data shows.

Alerts in these systems are also smarter. Rather than notifying users about all changes, the systems focus on the most critical changes based on past importance. These alerts can even adapt when business conditions change, helping to ensure that users get the most relevant information without having to look for it themselves.

By integrating a deep understanding of both the user and their business environment, BI tools can offer insights that are exactly what’s needed at the right time. This makes these tools incredibly effective for making informed decisions quickly and confidently.

Navigating the future: Overcoming adoption challenges

While the advantages of integrating advanced BI technologies are clear, organizations often encounter significant challenges that can hinder their adoption. Understanding these challenges is crucial for businesses looking to use the full potential of these innovative tools.

1. Cultural resistance to change

One of the biggest hurdles is overcoming ingrained habits and resistance within the organization. Employees used to traditional methods of data analysis might be skeptical about moving to new systems, fearing the learning curve or potential disruptions to their routine workflows. Promoting a culture that values continuous learning and technological adaptability is key to overcoming this resistance.

2. Complexity of integration

Integrating new BI technologies with existing IT infrastructure can be complex and costly. Organizations must help ensure that new tools are compatible with their current systems, which often involves significant time and technical expertise. The complexity increases when trying to maintain data consistency and security across multiple platforms.

3. Data governance and security

Gen AI, by its nature, creates new content based on existing data sets. The outputs generated by AI can sometimes introduce biases or inaccuracies if not properly monitored and managed.

With the increased use of AI and machine learning in BI tools, managing data privacy and security becomes more complex. Organizations must help ensure that their data governance policies are robust enough to handle new types of data interactions and comply with regulations such as GDPR. This often requires updating security protocols and continuously monitoring data access and usage.

According to Gartner, by 2025, augmented consumerization functions will drive the adoption of ABI capabilities beyond 50% for the first time, influencing more business processes and decisions.

As we stand on the brink of this new era in BI, we must focus on adopting new technologies and managing them wisely. By fostering a culture that embraces continuous learning and innovation, organizations can fully harness the potential of gen AI and augmented analytics to make smarter, faster and more informed decisions.

Read the report

The post A new era in BI: Overcoming low adoption to make smart decisions accessible for all appeared first on IBM Blog.


Entrust

NSA Announces Update to Commercial National Security Algorithm Suite 2.0 and Quantum Computing FAQ

In October 2022, Entrust published the article, NSA Announces New Post-Quantum Resistant Algorithm Suite 2.0... The post NSA Announces Update to Commercial National Security Algorithm Suite 2.0 and Quantum Computing FAQ appeared first on Entrust Blog.

In October 2022, Entrust published the article, NSA Announces New Post-Quantum Resistant Algorithm Suite 2.0 and Transition Timetable. At that time the National Security Agency was one of the first to put a firm line in the sand, publishing an aggressive timeline for the recommended adoption of post-quantum-resistant algorithms. It was a bit of a wake-up call to the digital security industry. Fast forward 18 months and we have a new updated version of the paper: The Commercial National Security Algorithm Suite 2.0 and Quantum Computing FAQ. Algorithm specifications, recommendations, and guidance are slowly starting to crystallize. The eagerly anticipated NIST standardized PQC (post-quantum cryptography) algorithms are set to be approved in summer 2024. You can check for updates here.

At first glance, the algorithm suite looks unchanged:

Algorithm | Function | Specification | Parameters
Advanced Encryption Standard (AES) | Symmetric block cipher for information protection | FIPS PUB 197 | Use 256-bit keys for all classification levels.
ML-KEM (aka CRYSTALS-Kyber) | Asymmetric algorithm for key establishment | FIPS PUB 203 | Use Category 5 parameter, ML-KEM-1024, for all classification levels.
ML-DSA (aka CRYSTALS-Dilithium) | Asymmetric algorithm for digital signatures in any use case, including signing firmware and software | FIPS PUB 204 | Use Category 5 parameter, ML-DSA-87, for all classification levels.
Secure Hash Algorithm (SHA) | Algorithm for computing a condensed representation of information | FIPS PUB 180-4 | Use SHA-384 or SHA-512 for all classification levels.
Leighton-Micali Signature (LMS) | Asymmetric algorithm for digitally signing firmware and software | NIST SP 800-208 | All parameters approved for all classification levels. LMS SHA-256/192 is recommended.
Xtended Merkle Signature Scheme (XMSS) | Asymmetric algorithm for digitally signing firmware and software | NIST SP 800-208 | All parameters approved for all classification levels.

Table: Commercial National Security Algorithm Suite 2.0

The newly assigned NIST names are now included in the table (see the ML-KEM and ML-DSA rows above) as we prepare to let go of the names CRYSTALS-Kyber and CRYSTALS-Dilithium, chosen originally by the cryptographers who designed the algorithms. These are now replaced by the more formal ML-DSA (Digital Signature Algorithm) and ML-KEM (Key Encapsulation Mechanism), the latter being an algorithm used to establish a shared secret key between two parties communicating over a public channel. These algorithms now have FIPS publications assigned to them: FIPS PUB 203 for ML-KEM and FIPS PUB 204 for ML-DSA. The paper also includes an informative FAQ section and some guidance on which PQC algorithms can be used in which situation. Some key areas to note include:

NIST SP 800-208

NIST SP 800-208, published in 2020, covers stateful hash-based signature schemes. Of these, the NSA has only approved LMS and XMSS for use in National Security Systems (NSS). The multi-tree variants HSS and XMSS^MT are not allowed, although no explanation has been given for this decision.

Stateless Hash-Based Digital Signature Standard (SLH-DSA)

Known to many as SPHINCS+, SLH-DSA is not part of the CNSA suite and is not approved for use; despite being hash-based, it cannot be used to sign software for NSS.

FIPS Approval of LMS and XMSS Signing Algorithms

Hardware security modules (HSMs) are discussed in relation to certification/FIPS approval of LMS and XMSS signing algorithms. For validating signatures, they must have passed NIST’s Cryptographic Algorithm Validation Program (CAVP).

Hardware Security Modules and the Use of LMS and XMSS

Code signing applications using HSMs will need to be validated to NIST’s Cryptographic Module Validation Program (CMVP). LMS and XMSS algorithms are stateful, meaning they use one-time signatures and when deployed can only produce a finite number of signatures before a new public/private key needs to be generated. See the illustration below for a visual of the stateful key generation process.

 

Figure 1: Illustration of the stateful generation of derived keys

The private key is a random seed that derives a new private key for each signature. To maintain their security properties, the private key must be used in conjunction with a counter. This is what limits derived private keys to generating only a single signature each. The CNSA 2.0 paper further highlights the concern that these private keys, if backed up or migrated to multiple devices, could break the “one-signature-per-derived-private-key” rule. The NSA’s opening position was to forbid export of keys created using these algorithms, regardless of whether they’re encrypted or not.

While some vendors still believe in the doctrine of storing keys in the box, in the physical memory of the HSM, most vendors have followed Entrust’s long-standing approach: store keys as encrypted fragments, tokens, or key blobs – whichever terminology you prefer – outside of the physical HSM. The HSM-vendor community, including Entrust, has raised strong objections to the NSA proposal, since forbidding export would create a single point of failure and severely limit the lifespan of an HSM. Deployments of HSMs are typically configured in pairs where resilience, availability, and backing up keys are standard practice. The NSA has agreed to look at supporting exports without a detrimental impact on the integrity of the algorithm’s security. We’ll continue to keep you updated on the outcome of their review.
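The state-tracking requirement is easier to see in code. The toy signer below is not LMS or XMSS; it only illustrates the rule the CNSA 2.0 paper is protecting: each derived key is used exactly once, so the counter is part of the key material and must never be cloned or rolled back.

```python
import hashlib
import hmac

class StatefulSigner:
    """Toy illustration of the state rule only; real LMS/XMSS (NIST SP 800-208)
    builds Merkle trees over one-time signature keys and must never be cloned."""

    def __init__(self, seed: bytes, max_signatures: int):
        self._seed = seed
        self._counter = 0                   # the state that must be tracked
        self._max = max_signatures

    def sign(self, message: bytes) -> tuple[int, bytes]:
        if self._counter >= self._max:
            raise RuntimeError("key exhausted: generate a new key pair")
        # Derive a fresh per-signature key from the seed and the counter
        otk = hmac.new(self._seed, self._counter.to_bytes(8, "big"), hashlib.sha256).digest()
        tag = hmac.new(otk, message, hashlib.sha256).digest()
        index = self._counter
        self._counter += 1                  # advancing state enforces single use
        return index, tag

signer = StatefulSigner(seed=b"\x00" * 32, max_signatures=4)
print(signer.sign(b"firmware-image-v1")[0])
```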

Firmware Signatures – Begin Transitioning Immediately

The updated CNSA 2.0 paper also explains the rationale for the prioritization of firmware signatures in their timeline, recognizing that in this use case, the validation algorithm is not easily updated.

“Software- and firmware-signing: begin transitioning immediately, support and prefer CNSA 2.0 by 2025 where available, exclusively use CNSA 2.0 by 2030.”

Further, the paper explains, “even in systems that are designed for extensibility and cryptographic agility, a quantum-resistant root of trust may be required in the firmware years before the rest of the system upgrades to quantum-resistance. NSA prioritizes this in our timelines to avoid unexpected costs and security issues later in our transition.”

The NSA is recognizing the role that HSMs as a root of trust play in signing firmware, which then acts as the foundation layer of an organization’s cryptographic stack. With 2025 only months away, Entrust is already working on firmware that is safe from a cryptographically relevant quantum computer (CRQC).

ML-DSA for Firmware or Software Signing

The paper recognizes that firmware roots of trust are long-term and, “are a critical component to upgrade…expected to be implemented for some long-lived signatures in 2025, before validated ML-DSA is widely available.” This is impactful for Entrust and our customer base.

Deployment of CNSA 2.0 Algorithms in Mission Systems

The paper encourages algorithm deployment as soon as available and recommends, “testing in vendor and government research environments now to understand the effects of deployment of the new algorithms on particular systems given the increased sizes used in these algorithms.”

Hybrid Solutions

The FAQ section covers the NSA’s position on hybrid cryptographic solutions, using a combination of both classic and post-quantum cryptography. The idea is that until PQC algorithms are standardized and proven in the field, hybrid covers both bases – if the classic algorithms are compromised by a CRQC, the PQC algorithm will protect the data. Equally, if the PQC algorithm was found to be weak, the classic algorithm will still provide some level of protection. According to the paper, “the NSA has confidence in CNSA 2.0 algorithms and will not require NSS developers to use hybrid certified products for security purposes.” They further state in the paper that, “NSA recognizes that some standards may require using hybrid-like constructions to accommodate the larger sizes of CRQC algorithms and will work with industries on the best options for implementation.”

The NSA explained their reservations about hybrid deployments are due to concerns on complexity, compatibility, and the fact that hybrid deployments are an interim solution and will need to be migrated in the future to solely PQC algorithms.

However, the hybrid approach has support in the industry. Some highlight the ill-fated SIKE algorithm, which made it to the 4th round of the NIST competition only to falter and be broken on a laptop, as an example of what can happen to emerging post-quantum cryptography. Hybrid adds an extra layer of protection in case the PQC algorithm turns out to be vulnerable. I’ve published blog posts about my colleagues at Entrust who are actively involved in specifying composite signatures in conjunction with the Internet Engineering Task Force.
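For illustration, a hybrid key-establishment sketch along those lines might combine a classical X25519 exchange with an ML-KEM shared secret through a KDF, so the session key remains safe if either component fails. The ML-KEM step below is a random placeholder because this sketch does not include a FIPS 203 implementation; the X25519 and HKDF calls use the Python cryptography library.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical ECDH share
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())

# Placeholder for an ML-KEM shared secret; a real deployment would obtain this
# from a FIPS 203 implementation, which this sketch does not include.
pqc_secret = os.urandom(32)

# Combine both secrets so the session key stays safe if either algorithm fails
session_key = HKDF(
    algorithm=hashes.SHA384(),
    length=32,
    salt=None,
    info=b"hybrid x25519 + ml-kem demo",
).derive(classical_secret + pqc_secret)
print(session_key.hex())
```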

In summary, in the space of 18 months the PQC position has moved forward. We can anticipate some of the NIST PQC short-listed algorithms being standardized in summer 2024. What was theoretical and fuzzy a couple of years ago is starting to take shape. Timelines are firming up, and NIST and the NSA are being prescriptive about which algorithms should be adopted in which particular use case, and by when.

For organizations who are following NSA recommendations and want to experiment now, Entrust offers a PQC Option Pack for use in conjunction with our nShield HSMs. It supports the NIST’s PQC algorithms identified for standardization. Customers with an nShield FIPS Level 3 HSM and the nShield Post-Quantum Option Pack can generate quantum-resistant keys inside the HSM, protected by FIPS 140-2 Level 3 Security World standard mechanisms, and carry out key signing, digital signature, encryption, decryption, and key exchange. Learn more about our post-quantum cryptography solutions.

 

The post NSA Announces Update to Commercial National Security Algorithm Suite 2.0 and Quantum Computing FAQ appeared first on Entrust Blog.


UbiSecure

BNP Paribas becomes latest Validation Agent in the Global LEI System

BNP Paribas clients to benefit from integrated Legal Entity Identifier issuance and management, in partnership with RapidLEI service from Ubisecure. LONDON, UK... The post BNP Paribas becomes latest Validation Agent in the Global LEI System appeared first on Ubisecure Digital Identity Management.
BNP Paribas clients to benefit from integrated Legal Entity Identifier issuance and management, in partnership with RapidLEI service from Ubisecure.

LONDON, UK – March 16th, 2024 – Today, Ubisecure, one of the world’s leading Legal Entity Identifier (LEI) issuers through its RapidLEI service, announces a partnership with international banking group BNP Paribas. The bank is now approved by the Global LEI Foundation (GLEIF) as a Validation Agent (VA), with Ubisecure as the GLEIF-accredited LEI Issuer.

The VA program enables banks and other financial institutions/trust service providers to leverage existing client onboarding and KYC/AML processes for LEI issuance. With VA status, BNP Paribas is empowered to request LEI issuance within its onboarding validation processes, meaning the bank’s clients benefit from obtaining the LEI without having to apply elsewhere. With the LEI now required or requested by more than 200 regulations, BNP Paribas ensures compliance and uninterrupted transactions without duplicative processes.

Thomas Louis, Global Markets Chief Data Officer at BNP Paribas, said, “Becoming a Validation Agent to issue LEIs as part of existing procedures means greater efficiency for both the bank and our clients, enabling further value within our services and streamlining our customer experience. Our clients need LEIs to do business, and we’re already performing validation checks that meet requirements for LEI issuance. It makes perfect sense to consolidate procedures.”

Ubisecure is a GLEIF-accredited LEI Issuer and one of the leading issuers of LEIs worldwide. Its RapidLEI service provides an API for LEI lifecycle management and same-session LEI registration, making it simple for VAs to issue LEIs as part of onboarding, and also to discover, consolidate and manage LEIs on behalf of clients.

Paul Tourret, Corporate Development Officer at Ubisecure said, “There are many benefits for banks who become LEI Validation Agents, particularly being able to leverage established onboarding workflows. New ways to incorporate LEIs into various use cases are evolving all the time to make more transparent, trustworthy business interactions, and RapidLEI is proud to be at the forefront of such initiatives.”

Find out more about the Validation Agent program at rapidlei.com/gleif-validation-agents.

 

About BNP Paribas

BNP Paribas has a presence in 64 countries, with almost 184,000 employees, of which more than 145,000 in Europe. The Group supports all its customers – individuals, associations, entrepreneurs, SMEs and institutions – in the success of their projects through its financing, investment, savings and protection solutions.

BNP Paribas holds key positions in its three operating divisions: Corporate & Institutional Banking for corporate and institutional clients; Commercial, Personal Banking & Services for retail banking networks and specialised financial services, and Investment & Protection Services for savings, investment and protection solutions.

 

About Ubisecure and RapidLEI

Ubisecure is accredited by the Global Legal Entity Identifier Foundation (GLEIF) to issue Legal Entity Identifiers (LEI). RapidLEI is a Ubisecure service that automates the LEI lifecycle to deliver LEIs quickly and easily. As of May 2024, over 300,000 organisations have chosen RapidLEI to issue their global organisation identifier. As well as pioneering the LEI Everywhere program, the company is a technology innovator and provides Identity & Access Management (IAM) software and cloud services for Customers, Workforce, & Organisation Identity use cases. Enterprises and Governments use Ubisecure IAM solutions to enhance user experience and security through improved registration, authentication, authorisation, and identity data management. Ubisecure also provides solutions to companies maintaining their own customer identity pools (such as banks and mobile network operators) to become Identity Providers (IdP) for strong authentication and federation services.

For more information about Ubisecure visit www.ubisecure.com or rapidlei.com

Ubisecure LEI: 529900T8BM49AURSDO55

The post BNP Paribas becomes latest Validation Agent in the Global LEI System appeared first on Ubisecure Digital Identity Management.


Microsoft Entra (Azure AD) Blog

Microsoft Entra Private Access for on-prem users

The emergence of cloud technology and the hybrid work model, along with the rapidly increasing intensity and sophistication of cyber threats, are significantly reshaping the work landscape. As organizational boundaries become increasingly blurred, private applications and resources that were once secure for authenticated users are now vulnerable to intrusion from compromised systems and users. When users connect to a corporate network through a traditional virtual private network (VPN), they’re granted extensive access to the entire network, which potentially poses significant security risks. These challenges have introduced new demands that traditional network security approaches struggle to meet. Even Gartner predicts that by 2025, at least 70% of new remote access deployments will be served predominantly by ZTNA as opposed to VPN services, up from less than 10% at the end of 2021.

 

Microsoft Entra Private Access, part of Microsoft’s Security Service Edge (SSE) solution, securely connects users to any private resource and application, reducing the operational complexity and risk of legacy VPNs. It enhances the security posture of your organization by eliminating excessive access and preventing lateral movement. As traditional VPN enterprise protections continue to wane, Private Access improves a user’s ability to connect securely to private applications easily from any device and any network—whether they are working at home, remotely, or in their corporate office. 

 

Enable secure access to private apps that use Domain Controller for authentication 

 

With Private Access (Preview), you can now implement granular app segmentation and enforce multifactor authentication (MFA) on any on-premises resource that authenticates to a domain controller (DC), for on-premises users, across all devices and protocols, without granting full network access. You can also protect your DCs from identity threats and prevent unauthorized access by enabling privileged access to the DCs, enforced with MFA and Privileged Identity Management (PIM). 

 

To enhance your security posture and minimize the attack surface, it’s crucial to implement robust Conditional Access controls, such as MFA, across all private resources and applications including legacy or proprietary applications that may not support modern auth. By doing so, you can safeguard your DCs—the heart of your network infrastructure.

 

A closer look at the mechanics of Private Access for on-prem user scenario

 

Here’s how Private Access secures access to on-premises resources and applications and gives employees a seamless way to reach those resources locally, while ensuring the security of the company's critical services. Imagine a scenario where an employee is working on-premises at their company's headquarters. They need to access the company's DCs to retrieve some important information for their project or make some changes. However, when they try to access the DC directly, they find that access is blocked. This is because the company has enabled privileged access, which restricts direct access to the DC for security reasons. 

 

Instead of accessing the DC directly, the employee's traffic is intercepted by the Global Secure Access Client and routed to the Microsoft Entra ID and Private Access Cloud for authentication. This ensures that only authorized users can access the DC and its resources.

 

When the employee attempts to access the private resources they need, they’re prompted to authenticate using MFA. This additional layer of security ensures that only legitimate users can gain entry to the DC. Private Access also extends MFA to all on-premises resources, even those that lack built-in MFA support. This means that even legacy applications can benefit from the added security of MFA. With Private Access, the company has also enabled granular app segmentation, which allows them to segment access to specific applications or resources within their on-premises environment. This means that the employee can only interact with the services they’re authorized to access, ensuring the security of critical services.

 

Despite these added security measures, the employee's user experience remains seamless. Only authentication traffic leaves the corporate network, while application traffic remains local within the corporate network. This minimizes latency and ensures that the employee can access the information they need quickly and efficiently.
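To make the decision flow above concrete, here is a minimal, purely conceptual sketch of the kind of check the scenario describes. The class, group and segment names are hypothetical, and this is not Microsoft's implementation or API, just a toy model of app segmentation plus MFA step-up.

```python
# Illustrative only: a toy model of the access decision described above.
# Policy names, resources and checks are hypothetical; the real evaluation
# happens inside Microsoft Entra and the Global Secure Access client.
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    app_segment: str                              # e.g. an IP/FQDN range mapped to one private app
    allowed_groups: set = field(default_factory=set)
    require_mfa: bool = True

def evaluate(user_groups: set, mfa_satisfied: bool, policy: AccessPolicy) -> str:
    """Return 'allow', 'prompt_mfa' or 'deny' for one app segment."""
    if not (user_groups & policy.allowed_groups):
        return "deny"            # granular segmentation: user is not authorized for this app
    if policy.require_mfa and not mfa_satisfied:
        return "prompt_mfa"      # Conditional Access style step-up
    return "allow"               # only authentication traffic left the network to decide this

# Example: an on-prem engineer reaching a legacy app fronted by the DC
dc_admin_policy = AccessPolicy("dc-mgmt-segment", allowed_groups={"Tier0-Admins"})
print(evaluate({"Engineering"}, mfa_satisfied=True, policy=dc_admin_policy))    # deny
print(evaluate({"Tier0-Admins"}, mfa_satisfied=False, policy=dc_admin_policy))  # prompt_mfa
```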

 

Figure 1: Private Access enforces flexible MFA to on-prem resources for on-prem users, strengthening your security posture and minimizing your attack surface.

 

Key benefits: Elevate network access security to on-premises resources with Private Access

 

Organizations seeking to enhance the security of their on-premises resources and protect their critical assets, including DCs, against identity threats can benefit from the key capabilities provided by Private Access—in preview. With Private Access, organizations can enable granular segmented access and extend Conditional Access controls to all their private applications. 

 

Private Access allows for the implementation of MFA for private apps that use DC for authentication, adding an extra layer of security to prevent unauthorized access and reduce identity-related risks. By enabling granular segmented access policies for individual applications or groups, organizations can ensure that only authorized users interact with critical resources and services. Additionally, Private Access extends Conditional Access controls to all private resources, even those relying on legacy protocols, allowing organizations to consider factors such as application sensitivity, user risk, and network compliance when enforcing modern authentication methods across their entire environment.

 

Conclusion

 

Private Access provides granular access controls on all private applications for any user, on-premises or remote, while bridging the gap between legacy applications and modern security practices. The capabilities of Private Access give you new tools to confidently enable secure access to private apps that use a DC for authentication and to navigate the complex landscape of modern authentication and access controls. 

 

Explore the future of secure access today by joining Microsoft Entra Private Access in preview and stay ahead of evolving security challenges.

 

To learn more, watch “Announcing new capabilities to protect on-premises resources with MFA via Microsoft Entra Private Access” for a closer look into how these new capabilities work.   

 

 

Read more on this topic

Microsoft Entra Private Access: An Identity-Centric Zero Trust Network Access Solution

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

SC Media - Identity and Access

Ransomware attack impacts law enforcement data in Wichita

Law enforcement data were compromised in a ransomware attack on the city government of Wichita, Kansas, giving hackers access to an unspecified number of people’s personal information, including names, Social Security numbers, driver’s licenses and other state IDs, and payment card information, reports The Record, a news site by cybersecurity firm Recorded Future.


KuppingerCole

Jun 25, 2024: Foundational Security – the Critical Cyber Security Infrastructure

In the landscape of cybersecurity, the foundation remains unshakable, and timeless principles continue to shape our digital defenses. Despite the rapid pace of technological advancement, certain threats persist over time.

Ocean Protocol

DF89 Completes and DF90 Launches

Predictoor DF89 rewards available. DF90 runs May 16 — May 23, 2024. Passive DF & Volume DF are retired since the airdrop.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Passive DF & Volume DF rewards are now retired. Each address holding veOCEAN was airdropped OCEAN in the amount of: (1.25^years_til_unlock-1) * num_OCEAN_locked. This airdrop completed on May 3, 2024. This article elaborates.
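As a quick illustration of the formula quoted above (a sketch only; the function and variable names are ours, and the official payout scripts linked in the Ocean docs remain authoritative):

```python
# Airdrop formula from the text: (1.25^years_til_unlock - 1) * num_OCEAN_locked
def airdrop_amount(num_ocean_locked: float, years_til_unlock: float) -> float:
    return (1.25 ** years_til_unlock - 1) * num_ocean_locked

# Example: 10,000 OCEAN locked with 2 years until unlock
print(round(airdrop_amount(10_000, 2), 2))  # 5625.0, since 1.25^2 - 1 = 0.5625
```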

Data Farming Round 89 (DF89) has completed.

DF90 is live today, May 16. It concludes on May 23. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF90 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs. To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF90

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors who have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF89 Completes and DF90 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


YeshID

Simplify SOC 2 Compliance with YeshID

Navigating the complexities of SOC 2 compliance is daunting. Said more simply: getting SOC 2 certified sucks. It will always suck some, but it won’t suck quite as much if you’re using YeshID.

Certification is important and becoming vital. Certification assures others that your systems are secure, their data will be protected, and your processes align with stringent regulatory standards. More and more companies will refuse to do business with you if you aren’t SOC 2-certified. You may be familiar with companies like Vanta, Drata & Secureframe that help with certification. We’ve gotten SOC 2 certified with their help.

SOC 2 certification requires well-defined access management protocols and evidence that the protocols are being followed. The first part is “relatively” easy: anyone can design a protocol. The second part is hard, and it’s where YeshID helps big-time. YeshID simplifies and streamlines identity & access management (instead of cobbling together checklists, spreadsheets, and ticketing systems). And YeshID keeps track of what you do. YeshID helps you get SOC 2 (and stay SOC 2) with more SOCcess.

Let’s see how it works. 

CC and SOC 2

Almost everyone knows that in SOC 2, CC stands for the Common Criteria: the CC1 through CC9 series of the AICPA Trust Services Criteria against which SOC 2 controls are evaluated (not the ISO/IEC 15408 Common Criteria used to certify security products, despite the shared name).

Almost everyone knows that SOC 2, or System and Organization Controls 2, is a cybersecurity compliance framework developed by the American Institute of Certified Public Accountants (AICPA) in 2010.

And almost everyone knows the relationship between SOC 2 and the Common Criteria. But for the one or two readers who don’t know, we’ve spelled out some of the connections and how YeshID helps.

Logical and Physical Access Controls

SOC 2 compliance ensures that only authorized personnel can access your systems and data. YeshID excels in managing access controls:

Production Deployment Access Control (CC 6.1): YeshID restricts access to production deployments by controlling who can modify application access. This ensures that only authorized personnel can deploy changes to production environments.

Access Reviews (CC 6.2, CC 6.3, CC 6.4): Conducting quarterly access reviews is a crucial part of maintaining SOC 2 compliance. YeshID facilitates these reviews by providing comprehensive information on user access rights, helping you ensure that access is appropriately restricted and any required changes are tracked to completion.

Restricted Database and Network Access (CC 6.1): YeshID helps restrict privileged access to production databases and networks to authorized users with a business need. By controlling application-level permissions, YeshID indirectly restricts access to critical systems.

Remote Access MFA (CC 6.6): YeshID integrates with Multi-Factor Authentication (MFA) solutions to enforce MFA for remote access, ensuring that only authorized employees can access production systems remotely.

Enhancing Processing Integrity and System Operations

Maintaining the integrity of your data processing and monitoring system operations are vital components of SOC 2 compliance. YeshID supports these areas through:

Change Management (CC 8.1, CC 5.3, CC 7.1): YeshID enforces change management procedures by requiring approvals and tracking changes in access rights. This ensures all changes are authorized, documented, tested, and reviewed before implementation.

Log Management (CC 2.1): YeshID generates logs for actions such as account provisioning, deprovisioning, and access modifications, which are essential for auditing and reviewing system changes.

Supporting Control Environment and Communication

A robust control environment and effective communication are essential for SOC 2 compliance. YeshID helps in these areas by:

Code of Conduct and Confidentiality Agreements (CC 1.1): YeshID can require employees to acknowledge the company’s code of conduct and sign confidentiality agreements during onboarding, ensuring a commitment to integrity and ethical values.

Security Awareness Training (CC 1.4, CC 2.2): YeshID ensures employees complete security awareness training during onboarding and annually thereafter, helping maintain a high level of security awareness across the organization.

Roles and Responsibilities (CC 1.3, CC 1.4, CC 1.5): YeshID specifies the roles and responsibilities of employees for various systems and applications, ensuring that everyone is aware of their internal control responsibilities.

System Changes Communication (CC 2.2): YeshID logs changes to access rights, effectively communicating system changes to authorized internal users.

Seamless Onboarding and Offboarding

Managing employee access throughout their lifecycle is a critical aspect of SOC 2 compliance. YeshID excels in:

Onboarding New Users (CC 6.2): YeshID simplifies the registration and authorization of new internal and external users, ensuring that only authorized users are granted system access.

Revoking Access Upon Termination (CC 6.3, CC 6.5): YeshID facilitates the offboarding process by revoking access for terminated employees, ensuring compliance with termination policies and reducing the risk of unauthorized access.

Unique Account Authentication (CC 6.1): YeshID integrates with authentication systems to enforce unique account authentication, ensuring each user has a unique username and password or authorized SSH keys.

Conclusion

With YeshID, you can streamline your IAM processes, enhance security, and ensure compliance. Our robust features help you manage employee access, conduct thorough access reviews, enforce change management procedures, and maintain a secure control environment.

Get SOC 2 certified and stay SOC 2 certified with YeshID. Let us help you simplify the complexities of managing employee access and achieve SOCcess. Try for free now!

The post Simplify SOC 2 Compliance with YeshID appeared first on YeshID.

Wednesday, 15. May 2024

Lockstep

Australia heads to a uniform governance regime for all data

I’ve been speaking and writing a lot recently about the newly legislated Australian Government Digital ID System (AGDIS).

Talking Digital ID with NAB
Conserving the IDs we are already familiar with instead coming up with new ones
The need to keep Digital ID small (and simple).
Digital ID caught between the old and the new

In some ways, AGDIS embodies fresh thinking, such as the pivot away from abstract “digital identity” to the more concrete Digital ID.

On the other hand, AGDIS tries to leverage the federal government’s long-standing “Trusted Digital Identity Framework” (TDIF) which was conceived a decade ago for the purposes of single sign-on to tax and human services. TDIF predates modern digital wallets and verifiable credentials. It is a creature of an earlier era.

Unsurprisingly, there are mixed messages around AGDIS.

Will it furnish Australians with “reusable” proof of identity? Will it come with new ID numbers and a central registry? Or will it simply better protect our many existing IDs in digital form? And what is an “ID” anyway?

Minister Katy Gallagher has assured us that there is no new national ID, and indeed, there is nothing I can see in the Digital ID Bill about new numbers or a central registry.

The last thing we need is a novel ID

And according to Lockstep’s research, there is no fundamental need for anything new of that type. You see, proof of identity works reasonably well today using familiar IDs like driver licences, passports, birth certificates and social security cards.

Or I should say, proof of identity works well in real life.

But identification breaks down in cyberspace when these IDs are presented as plaintext to online processes. Web forms and web servers can’t tell if a plaintext ID has been presented by its rightful holder or by a fraudster who’s bought the data on the black market.

Almost all identity fraud now occurs online; very little fraud is attempted in person using counterfeit ID documents. So that tells us that the logic of using government IDs for identification remains sound in the digital age — if only we made the presentation in the digital realm as reliable as it is in the physical world.

Pivot away from plaintext presentation — again!

The real problem to solve is not “identity” but identification, and specifically, making the government IDs we use day-to-day more reliable online.

Verifiable credentials technology is the solution. We should ‘seal’ existing IDs into digital wallets and then present them digitally, from device-to-server, instead of manually typing ID details into forms.

And the really good news is there’s a precedent for this transition. We’ve done it before! The world shifted from plaintext to digital IDs for handling credit card numbers — when chip cards replaced magnetic stripes.

Now we have smart phone wallets alongside chip cards, with exactly the same cryptographic security that protects cardholder details against theft and cloning.

Consumer acceptance today of digital wallets is high and growing; over one third of all card payments in Australia are now done via a digital wallet (Reference: Reserve Bank of Australia). So verifiable credentials are commonplace.

It’s all about data quality

What excites me the most is that AGDIS shows a way forward for all data.

The government in its wisdom is making the ACCC the Digital ID regulator. The ACCC currently governs the Consumer Data Right (CDR), Australia’s regulatory regime for open banking and data sharing. Now, the CDR is not perfect, but it features a strong regulatory model, it sits in the right place with the ACCC, and it is extensible to the protection of Digital IDs.

The CDR is essentially a governance system for data flows, tracing where certain data has come from, where it’s going, what it is being used for, and above all, carrying consent for the data to be used in defined contexts.

I see CDR and Digital ID boiling down to data and metadata, in the broadest sense of that word.

That is, what are the properties of a data record that really matter when deciding whether to use it for some application? Where did the record come from? What is its intended purpose? If it’s a personal data record, then what consent was granted for its usage?

The same sort of metadata is routinely baked into verifiable credentials.

Remember that a verifiable credential is a data record holding one or more assertions about a person or entity (i.e. the credential subject) together with details of the credential issuer and metadata such as when, where, how and why the credential was issued.
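For readers who haven’t seen one, the sketch below shows the general shape of such a record, loosely following the W3C Verifiable Credentials data model; every identifier and value in it is invented for illustration.

```python
# Illustrative shape of a verifiable credential (W3C VC data model style).
# All issuer, subject and licence values below are made up for illustration.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DriverLicenceCredential"],  # assertion type (hypothetical)
    "issuer": "did:example:roads-authority",                      # who issued it
    "issuanceDate": "2024-05-01T00:00:00Z",                       # when it was issued
    "credentialSubject": {                                        # the assertion(s) about the holder
        "id": "did:example:holder-123",
        "licenceNumber": "12345678",
    },
    # metadata and cryptographic proof binding the assertions to the issuer
    "proof": {
        "type": "DataIntegrityProof",
        "created": "2024-05-01T00:00:01Z",
        "verificationMethod": "did:example:roads-authority#key-1",
        "proofValue": "...",  # signature bytes omitted
    },
}
```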

So, with the ACCC governing data sharing and verifiable IDs, we could see a uniform new approach to managing data quality. Remember the pattern. In any critical digital transaction, there will be something precise you need to know about the counterparty. So ask yourself:

What do you need to know about the party you’re transacting with? Where will your transaction system get that data when you need it? And how will the system know that the data is fit for purpose?

If we can govern that pattern consistently across the digital economy, then we will be able to solve a set of problems that are much bigger and much more important than identity. We can take care of the quality of all data.

Governing all data quality

The same data quality questions recur everywhere we look.

The wicked problems with deep fakes arise because we consumers can’t tell where the data is coming from. But a governance regime is within sight to provide quality signals (i.e. metadata) about any important data.

Consider an online image or article, or any piece of content online: what if you could be sure where the digital data has really come from, that it’s intact and genuine? We have the technology to authenticate authors and publishers and AI algorithms, anchored in verifiable credentials with certified hardware roots of trust.

If this looks complicated, then let me reiterate that we have already pivoted to digital presentation of payment card data. Banks routinely provision cardholder data in the form of verifiable credentials, a growing proportion of consumers are comfortable with digital wallets, and merchants can readily accept payments from digital devices, with radically reduced incidence of card fraud.

Australians could soon be presenting any Digital ID with the click of a button from a mobile digital wallet, with exactly the same privacy, security and ease of use as a card payment.

Looking to the future of data sharing, with AGDIS and CDR under the same regulatory umbrella, I see us heading towards a united governance regime for all important data. We can be sure where any data has come from, what it is supposed to be used for, and that it’s always been in the right hands.

The post Australia heads to a uniform governance regime for all data appeared first on Lockstep.


Shyft Network

Veriscope Regulatory Recap — 1st May to 15th May 2024

Veriscope Regulatory Recap — 1st May to 15th May 2024

Welcome to another edition of the Veriscope Regulatory Recap. Here, we will analyze the latest developments in the US and Nigerian cryptocurrency regulations scene and what these changes mean for the users.

US Inches Closer to Clear Crypto Rules

The US is all set to take a significant step with the FIT21 Act, which is headed for a House vote soon. This act aims to clear up the murky waters of crypto regulation in the US by defining what counts as a commodity and what counts as a security.

Why is this a big deal?

Specifically, the bill gives the CFTC more control over digital commodities — a move welcomed by many in the industry who see the CFTC as more friendly to the crypto landscape compared to the SEC.

The SEC has taken a more aggressive stance, evident in its numerous lawsuits against major players like Coinbase and Ripple Labs.

This clearer division of oversight could help companies navigate the market more confidently, knowing which regulatory body they will primarily interact with.

The takeaway: If the proposed bill passes, we may see a boost in crypto investments in the US as the market becomes well-regulated, as Rep. Hill noted in the Friday press release:

“As the collapse of FTX demonstrated, we need strong consumer protections and a functional regulatory framework to ensure the rapidly growing digital asset ecosystem is safe for investors and consumers while securing America as a leader in blockchain innovation.”

Nigeria to Ban Peer-to-Peer Crypto Trading

Switching gears to Nigeria, the country’s government is proposing a ban on peer-to-peer crypto trading to stabilize its currency, the naira.

The Nigerian government believes that its currency has been feeling the heat from unregulated crypto activities.

Why does this matter?

P2P platforms are a lifeline for many Nigerians to access cryptocurrencies, bypassing traditional banking hurdles. By banning these, the government hopes to control currency fluctuations but at the risk of pushing crypto trading into the shadows.

The takeaway: This move could backfire by making crypto transactions in Nigeria less transparent and harder to regulate in the long run.

On the one hand, we have the US, where several ongoing developments aim to provide a clearer regulatory situation for the crypto industry, be it for better or worse. On the other hand, Nigeria is now tightening controls, which might restrict access for everyday users and push trading activities underground. The impact of these differing strategies on their respective markets will be an important development to watch.

Interesting Reads

Guide to FATF Travel Rule Compliance in Mexico

Guide to FATF Travel Rule Compliance in Indonesia

A Guide to FATF Travel Rule Compliance in Nigeria

The Visual Guide on Global Crypto Regulatory Outlook 2024

‍About Veriscope

‍Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Veriscope Regulatory Recap — 1st May to 15th May 2024 was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Global ID

EPISODE 25 — Learnings from the Georgetown Law identity roundtable

EPISODE 25 — Learnings from the Georgetown Law identity roundtable

Earlier this month, Linda Jeng, founder and CEO of Digital Self Labs, hosted a superb roundtable on digital identity at the Institute of International Economic Law of Georgetown University Law Center, sponsored by the Hard Yaka Foundation.

We caught up with GlobaliD co-founder and CEO Mitja Simcic for a quick recap on the event and what he learned on the latest episode of FUTURE PROOF.

Here’s Linda:

Congressman Bill Foster — THE thought leader on digital identity on the Hill — kicked off the day with the keynote speech on the importance of getting digital identity and privacy right for the digital future. We then had a public sector session to hear how different countries are approaching digital identity. The EU adopted a wallet/credential approach with its eIDAS legislation. Brazil onboarded 80% of its adult population onto its real-time payments system, PIX. In the US, the Financial Crimes Enforcement Network at the US Treasury has launched its Identity Project, and we heard from the National Institute of Standards and Technology (NIST) on its work with state DMVs to issue mobile driver’s licenses (mDLs).
Then we had a private sector session with the Better Identity Coalition, Universal Ledger, etonec GmbH, Prove, GlobaliD, Mastercard and IDEMIA North America. We discussed the different paradigms, from federated identity to credential-managed identity, and the challenges of building identity ecosystems. That was followed by a fascinating discussion about the various technical standards groups, from ISO, OpenID and DIF to LEI. For the research session, we heard from researchers at the Bank for International Settlements (BIS), the International Monetary Fund, ITAM, Georgia Institute of Technology, American University, The Aspen Institute, FinRegLab and the McKinsey Global Institute on key research trends, from macro to privacy effects.
GlobaliD on X
Mitja on X

EPISODE 25 — Learnings from the Georgetown Law identity roundtable was originally published in GlobaliD on Medium, where people are continuing the conversation by highlighting and responding to this story.


1Kosmos BlockID

Navigating Gartner’s Seven Tracks to MFA Maturity with 1Kosmos

In the ever-evolving landscape of cybersecurity, Multi-Factor Authentication (MFA) stands as a critical defense mechanism. Gartner’s recent report, “Seven Tracks to a Mature MFA Implementation,” written by Gartner analyst Paul Rabinovich, provides a strategic framework for organizations to enhance their MFA practices. This blog post explores the seven tracks and demonstrates how 1Kosmos not only aligns with these principles but also fortifies them through our innovative platform.

Gartner emphasizes the transition from a simplistic checklist approach to a more nuanced risk-assessment-driven MFA. This shift involves evaluating the security needs based on the level of risk associated with various assets.

Track 1: Transition to Risk-Assessment-Driven Strategy

In this initial track, organizations shift from a checklist-driven approach to one that prioritizes risk assessment in their MFA strategy. 1Kosmos supports this transition by providing a platform that dynamically evaluates the risk levels associated with individual assets. Through advanced adaptive risk controls, 1Kosmos ensures that assets deemed higher risk receive more stringent authentication measures, while those of lower risk maintain a balance between security and user convenience.

Track 2: Provide MFA Integration Guidance to DevOps Teams

1Kosmos offers comprehensive guidance and support to DevOps teams to ensure correct MFA implementation for applications. The 1Kosmos platform provides clear documentation, guidelines, and tools for seamlessly incorporating MFA protection throughout the application development lifecycle. By integrating MFA into DevOps practices, 1Kosmos minimizes vulnerabilities arising from improper configuration and ensures the effectiveness of MFA measures over time.

Track 3: Implement Additional Controls to Protect MFA

Recognizing the fallibility of MFA, 1Kosmos integrates robust controls to protect against misconfiguration, bypass, and abuse. Features like SIM Binding augment MFA security by thwarting specific vulnerabilities such as SIM swapping fraud. By continuously enhancing authentication-related processes and incorporating new authentication methods, 1Kosmos ensures that MFA measures remain resilient against emerging threats.

Track 4: Balance Trust, User Experience, and Total Cost of Ownership

1Kosmos prioritizes a balance between trust, user experience, and total cost of ownership when selecting authentication methods. The 1Kosmos platform offers a range of authentication options, including biometrics and mobile-based solutions, ensuring that organizations can implement MFA where it is most needed without compromising user experience. Additionally, 1Kosmos provides comprehensive user training and support to ensure that all user constituencies are equipped to navigate MFA processes effectively.

Track 5: Adapt MFA Implementations to Accommodate Diverse Factors

1Kosmos’s MFA platform is designed to be flexible and responsive to the diverse external and internal factors influencing an organization’s MFA strategy. Whether it involves compliance with evolving regulations, mitigating emerging cyber threats, improving user experience, or integrating with cutting-edge technological advancements, 1Kosmos’s platform adapts to these dynamics, ensuring comprehensive MFA coverage across all applications, data, and systems.

Track 6: Integrate Robust Credential Management Practices

By minimizing reliance on passwords and enhancing credential integrity, the 1Kosmos platform fortifies the authentication process against potential vulnerabilities. Through seamless integration within functional areas like DevOps, 1Kosmos ensures that MFA measures remain effective and resilient against evolving security threats.

Track 7: Implement Advanced Application Session Management Techniques

1Kosmos’s application session management techniques, including passive behavioral biometrics, fortify MFA implementations against risks such as Cross-Site Request Forgery (CSRF) attacks. By upholding the security of user sessions throughout their duration, 1Kosmos ensures that MFA measures remain robust and effective in safeguarding organizational resources.

Conclusion

1Kosmos not only aligns with Gartner’s “Seven Tracks to a Mature MFA Implementation” but also enhances them by providing a comprehensive MFA platform equipped with advanced capabilities to address the nuances of modern cybersecurity challenges. Through our adaptive solutions and robust features, 1Kosmos empowers organizations to implement a future-proof and passwordless security environment, supporting the findings and recommendations outlined in Gartner’s report.

The post Navigating Gartner’s Seven Tracks to MFA Maturity with 1Kosmos appeared first on 1Kosmos.


Microsoft Entra (Azure AD) Blog

Tenant health transparency and observability

In previous resilience blog posts, we’ve shared updates about the continuous improvements we’re making to resilience and reliability, including our most recent update on regionally isolated authentication endpoints and an announcement last year of our industry-leading and first of its kind backup authentication service. These and other innovations behind the scenes enable us to deliver consistently very high rates of availability globally each month.  

 

In this post, we’ll outline what we’re doing to help customers see how available and resilient Microsoft Entra really is for them, to not only hold us accountable when issues arise, but also better understand what actions to take within their tenant to improve its health. At the global level, you see it in the form of retrospective SLA reporting, which shows authentication availability exceeding our 4 9s promise (launched in spring 2021) by a wide margin and reaching 5 9s in most months. But it becomes more compelling and actionable at the tenant level: what is the uptime experience of my users on my organization’s apps and devices? Is my tenant handling surges in sign-in demand?   

 

We often hear from customers about the effect on resilience insights when they move to the cloud. In the on-prem world, identity health monitoring occurred onsite and with tight control; operational awareness happened entirely within a company’s first-party IT department. Now, we need to achieve that same transparency or better in an outsourced, cloud-based identity service and with a federated set of dependencies.  

 

IT departments and developers are working hard to ensure each of their users maintains seamless, uninterrupted access that doesn’t compromise security. Enabling access for the right users with minimal friction while stopping intrusions and risk is critical to keep the world running. When an organization outsources their identity service to Microsoft, they expect us to acknowledge degradations when they happen, then take accountability to learn and continuously improve from those events. We also recognize that human-driven communication can only take us so far.   

 

To meet these challenges, we’re increasingly embracing granular monitoring and automation. We start from the assumption that the unexpected will find a way of happening in any complex system, no matter how resilient it is. Beyond resilience, we must detect incidents, respond to them effectively, and improve as we go—and help our customers do the same. You see examples of this approach both in our rollout of in-tenant health monitoring and in our investments behind the scenes aimed at fast incident detection and communication.  

 

Let’s start with out-of-the-box automated health monitoring in premium tenants. Tenant-level health monitoring empowers customers to independently understand the quality of their users’ experiences with authentication and access. It also sets the stage to prompt tenant administrators with actions they can take to investigate and reduce disruptions, all from Microsoft Entra admin center or using MS Graph API calls.  

 

We’ve taken a step in this direction by introducing a group of precomputed health metric streams that enable our premium customers to watch key authentication scenarios, an early milestone in our investments to enhance transparent visibility into tenant health and service resilience. These new health metrics isolate relevant signals from activity logs and provide pre-computed, low-latency aggregates every 15 minutes for specific high-value observability scenarios.  

 

With their granularity and scenario-specific focus, health metrics go a step beyond the monthly tenant-level SLA reporting we released in 2023. Precomputed health metrics also supplement the activity log data that we’ve been providing and continue to improve on. With sign-in logs, customers can build their own computed metrics to monitor, like isolating a specific sign-in method to watch for increases in success and failure. With our new precomputed streams, customers can snap to Microsoft-defined indicators of health, take advantage of features we’re developing at scale, and dive into activity logs for deeper investigations. We encourage customers to make use of both options to get a full picture.  
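As one example of the “build your own computed metric” route (a sketch, not the new precomputed streams themselves): the code below pulls recent sign-ins from the Microsoft Graph sign-in log and computes an MFA success rate. It assumes you already hold a Graph access token with AuditLog.Read.All; paging and error handling are omitted for brevity, and the function name and metric choice are ours.

```python
# Minimal sketch of a do-it-yourself computed metric from the sign-in logs,
# to use alongside the precomputed health metric streams described above.
import requests

GRAPH = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def mfa_success_rate(token: str, since_iso: str) -> float:
    """Share of MFA-required sign-ins since `since_iso` that succeeded (errorCode 0)."""
    params = {"$filter": f"createdDateTime ge {since_iso}", "$top": "500"}
    headers = {"Authorization": f"Bearer {token}"}
    signins = requests.get(GRAPH, headers=headers, params=params).json().get("value", [])

    # Keep only sign-ins that required MFA, then compute the share that succeeded
    mfa = [s for s in signins if s.get("authenticationRequirement") == "multiFactorAuthentication"]
    if not mfa:
        return 1.0
    ok = sum(1 for s in mfa if s.get("status", {}).get("errorCode") == 0)
    return ok / len(mfa)

# Example (token acquisition not shown):
# print(mfa_success_rate(token, "2024-05-01T00:00:00Z"))
```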

 

During the initial public preview offering, we’re releasing health metric streams related to maintaining highly available:   

 

Multifactor authentication (MFA)
Sign-ins for devices that are managed under Conditional Access policies
Sign-ins for devices that are compliant with Conditional Access policies
Security Assertion Markup Language (SAML) sign-ins

 

We’re starting with authentication-related scenarios because they are mission critical to all our customers, but other scenarios in areas like entitlement management, directory configuration, and app health will be added in time along with intelligent alerting capabilities in response to anomalous patterns in the data. We’re publishing the health metrics in Microsoft Entra admin center, Azure Portal, and M365 admin center, as well as in Microsoft Graph for programmatic access and integration into other monitoring pipelines.  

 

For more information about how to access the health monitoring metrics, visit the Microsoft Learn documentation.  

 

 

Figure shows the Scenario monitoring landing page & the Sign in with MFA scenario details

 

 

Even as in-tenant observability improves, customers will still rely on traditional incident communications when Microsoft-side issues happen. Like all service providers, we push messages about incidents to affected customers and post service health announcements to a website and communications feed in Azure. However, when this approach relies solely on hand-crafted service monitors and human-driven communications, it has limitations. Customers are right to have concerns about the timeliness of communication and the monitoring coverage itself.   

 

To address this challenge, we’re building increasingly sophisticated default monitoring packages attached to automated communications. The early results are promising. We’ve been able to bring times to notify customers about incidents down significantly, with service degradations and downtime being communicated within about 10 minutes of auto-detection. We’re also catching service degradations increasingly early by investing in monitoring, the results of which we track by watching customer-reported incident volumes.   

 

The best incidents are the ones that never happen. Our goal is to find and mitigate problems before they impact our customers. So, in addition to these advances in detection and communication, we continue to prioritize building systematic resilience measures that prevent service degradations and outages, or auto-mitigate them before they affect a customer environment. We will share more on this in a future blog.   

 

To continuously improve our services in partnership with our customers, we’re combining improvements in our service-level safety net with tenant-level monitoring. We’re also expanding our monitored scenarios, boosting our out-of-the-box monitoring intelligence, and speeding up our communication. Plus, integration with Azure, M365, and Microsoft Graph ensures that Microsoft Entra observability can happen wherever it’s needed. Together, we’re making sure everyone can work securely and seamlessly. 

 

With our already strong foundation of availability and resilience, security-enhancing recommendations, and mature service monitoring and incident communications, we’re excited to see these new capabilities take Entra health transparency to the next level.    

  

Igor Sakhnov  

CVP, Microsoft Identity & Network Access Engineering   

 

 

Read more on this topic

Microsoft Entra resilience update: Workload identity authentication - Microsoft Community Hub

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

New developments in Microsoft Entra ID Protection

In the Microsoft Digital Defense Report 2023 (MDDR), we shared that on average, there are 11 token replay detections per 100,000 active users in Microsoft Entra ID each month. In addition, there are approximately 18,000 multifactor authentication (MFA) fatigue attempts observed per month.

 

The latest developments in Entra ID Protection help you reduce the risks of these attacks by making it easier to deploy risk policies, understand their impact, and protect your organization from emerging threats.  

 

Here are the highlights: 

 

Deploying Entra ID Protection just became easier with Microsoft-managed policies in your environment and an impact analysis workbook.
You can now investigate and remediate compromised users faster with help from Copilot and expansion of self-remediation to hybrid users.
You can also fine-tune the Machine Learning (ML) algorithm by providing feedback and identify and block token theft and suspicious actions taken by an attacker within Entra ID with new detections.

 

Keep reading to learn more!

 

Deploy with ease and confidence

 

Microsoft managed policies and impact analysis workbook

Identity and access management is a huge responsibility requiring diligence and expertise. Between policies across identity, infrastructure, network, devices, apps, and data—and weighing the impact to end users and security—there’s a lot on your plate. To help with this, we have two exciting updates so you can get started with protecting your users faster and easier.

 

As we announced in November, Microsoft-managed policies will enable some of our most valuable Conditional Access policies by default in select tenants, including requiring end users to perform MFA when we detect high risk on their sign-in. This policy blocks attackers and allows your users to self-remediate their risk. We’re enabling Microsoft-managed policies slowly and deliberately to make sure we can incorporate your feedback and maximize value for you. Learn more about our approach to managed policies in our documentation.

 

We know that changes to how your users authenticate into resources require thoughtful consideration, and it’s helpful to know how the changes will affect your unique environment. Our new Impact analysis of risk-based access workbook will help you see the precise impact of turning on risk-based Conditional Access Policies so you can enable a new policy with confidence. The workbook uses historical sign-in data to allow you to immediately see the impact the policy would have had, with no report-only policy required. You can try out the new workbook here.

 

New dashboard generally available

 

In July, Entra ID Protection launched a new dashboard that presents risk insights for your tenant at a glance. We’re excited to announce today that this experience is now generally available and is the default landing page of ID Protection. The dashboard gives you a better understanding of your tenant’s security posture through key metrics, graphics, and recommended actions to improve it.

 

In general availability, the attack counts in the Attacks graphic are now clickable, and you can easily navigate to the Risk Detections report to investigate them further. The Risk Detections report has a new “Attack type” column, showing the primary attack type based on MITRE ATT&CK techniques for the detections. This further empowers your admins and SOC teams to understand the risks and take action accordingly. See the risk detection to MITRE ATT&CK type mapping in our documentation.

 

Figure 1: Entra ID Protection dashboard GA

 

Investigate and remediate efficiently

 

On-premises password reset remediates user risk of compromise (general availability)

Our new feature that allows on-premises password changes to reset user risk is now generally available for Entra P1 and P2 customers. This feature allows hybrid customers to include their users in risk-based Conditional Access policies that require user password remediation. If you were waiting for GA to enable this feature, now is the time to do so and make user risk policies easier to manage. Visit Remediate risks and unblock users in Entra ID Protection to learn more.

 

Figure 2: Enable On-premises password reset to reset user risk in Identity Protection settings

 

User Risk Investigation Copilot in public preview

Learning more about a user’s risk level and recommendations on how to mitigate a user’s risk is easier than ever with the introduction of the User Risk Investigation skill in Microsoft Entra, which is available in public preview as a part of Copilot for Security. This skill summarizes the risk history of the user, how to remediate risk for that user, and how to scale and automate response and remediation to identity threats.

 

An identity admin notices that a user has been flagged as high risk due to a series of abnormal sign-ins. With Copilot for Security, the admin can quickly investigate and resolve the risk by clicking on the user in question to receive an immediate summary of risk and instructions for remediation.

 

Improved Threat Prevention and Remediation Capabilities

 

Over the past few months, multiple new detections have been introduced to Entra ID Protection that protect against new and emerging attack vectors, like anomalous graph usage, token theft, and attacker in the middle (AitM) attacks. In addition, hybrid tenants can now be confident that user risk is resolved when a password is reset on-premises, and all tenants can benefit from our new functionality that takes your feedback into account when determining if an event is risky.

 

Suspicious API traffic detection (general availability)

When entering an environment, attackers often search for information about users and tenant configuration to prepare for further exploitation. ID Protection will now change a user’s risk level if we observe them making an abnormally high number of calls to MS Graph and AAD Graph compared to that user’s baseline, which will help identify both compromised users and insider threats scavenging for intel.

 

Detecting token theft in real-time and post-breach

With token-based attacks on the rise, you need detections that help you identify and protect against this emerging threat. Two new detections in ID Protection help you do this. Our industry-first Real-time Anomalous Token Detection automatically disrupts token replay attacks in real time when paired with a risk-based Conditional Access sign-in policy.

 

We have also built an offline detection that extends coverage of Microsoft 365 Defender’s Attacker in the Middle signals. This detection will flag the impacted user with high risk to prompt the configured Conditional Access user risk policy, allowing customers to confirm or dismiss the risk on the user. The session token is also revoked in cases where Continuous Access Evaluation is enabled.

 

You can learn more about our new detections at What are risk detections?

 

Admin feedback on detections trains our ML

 

We hold our detections in Entra ID to a very high standard, but occasionally we do issue a false positive detection. You can now help train our ML models by acting on risky sign-ins: confirm a sign-in as compromised or safe, or dismiss the risk. Each of these actions sends information back to our ML model and optimizes future detections for your organization. You can learn more about giving Entra ID Protection risk feedback here.
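If you prefer to act on risk programmatically, Microsoft Graph also exposes confirmCompromised and dismiss actions at the risky-user level; the sketch below shows minimal calls (the sign-in-level confirm safe/compromised feedback described above is given in the portal). It assumes an app token with the IdentityRiskyUser.ReadWrite.All permission, and the user IDs are placeholders.

```python
# Minimal sketch of acting on risk at the user level via Microsoft Graph.
import requests

BASE = "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers"

def confirm_compromised(token: str, user_ids: list[str]) -> int:
    """Mark the given users as confirmed compromised. Returns the HTTP status (204 on success)."""
    resp = requests.post(
        f"{BASE}/confirmCompromised",
        headers={"Authorization": f"Bearer {token}"},
        json={"userIds": user_ids},
    )
    return resp.status_code

def dismiss_risk(token: str, user_ids: list[str]) -> int:
    """Dismiss risk on the given users. Returns the HTTP status (204 on success)."""
    resp = requests.post(
        f"{BASE}/dismiss",
        headers={"Authorization": f"Bearer {token}"},
        json={"userIds": user_ids},
    )
    return resp.status_code

# Example (token acquisition not shown; the GUID below is a placeholder):
# confirm_compromised(token, ["00000000-0000-0000-0000-000000000000"])
```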

 

We hope your organization can benefit from these new detections and features and that you will revisit the positive impact that risk-based Conditional Access can have on your organization's security.

 

Thanks, and let us know what you think!

 

Alex Weinert

 

 

Read more on this topic

Microsoft Entra adds identity skills to Copilot for Security - Microsoft Community Hub Remediate User Risks in Microsoft Entra ID Protection Through On-premises Password Changes - Microsoft Community Hub Act now: Turn on or customize Microsoft-managed Conditional Access policies - Microsoft Community Hub​ 

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


IBM Blockchain

Enhancing data security and compliance in the XaaS Era 

Learn how to simplify data compliance requirements and fuel AI and data-intensive workloads while protecting data security. The post Enhancing data security and compliance in the XaaS Era  appeared first on IBM Blog.

Recent research from IDC found that 85% of CEOs who were surveyed cited digital capabilities as strategic differentiators that are crucial to accelerating revenue growth. However, IT decision makers remain concerned about the risks associated with their digital infrastructure and the impact they might have on business outcomes, with data breaches and security concerns being the biggest threats.  

With the rapid growth of XaaS consumption models and the integration of AI and data at the forefront of every business plan, we believe that protecting data security is pivotal to success. It can also help clients simplify their data compliance requirements as they fuel their AI and data-intensive workloads.  

Automation for efficiency and security 

Data is central to all AI applications. The ability to access and process the necessary data yields optimal results from AI models. IBM® remains committed to working diligently with partners and clients to introduce a set of automation blueprints called deployable architectures.  

These blueprints are designed to streamline the deployment process for customers. We aim to allow organizations to effortlessly select and deploy their cloud workloads in a way that is tailor-made to align with preset, reviewable security requirements and to help enable a seamless integration of AI and XaaS. This commitment to the fusion of AI and XaaS is further exemplified by our work over the past year on watsonx, our AI and data platform, which is designed to enable enterprises to effectively train, validate, fine-tune and deploy AI models while scaling workloads and building responsible data and AI workflows. 

Protecting data in multicloud environments 

Business leaders need to take note of the importance of hybrid cloud support, while acknowledging the reality that modern enterprises often require a mix of cloud and on-premises environments to support their data storage and applications. The fact is that different workloads have different needs to operate efficiently.  

This means that you cannot have all your workloads in one place, whether that's on premises, in a public or private cloud, or at the edge. One example is our work with CrushBank. The company uses watsonx to streamline desk operations with AI by arming its IT staff with improved information. This has led to improved productivity, which ultimately enhances the customer experience. A custom hybrid cloud strategy manages security, data latency and performance, so your people can get out of the business of IT and into their business. 

This all begins with building a hybrid cloud XaaS environment by increasing your data protection capabilities to support the privacy and security of application data, without the need to modify the application itself. At IBM, security and compliance are at the heart of everything we do.  

We recently expanded the IBM Cloud Security and Compliance Center, a suite of modernized cloud security and compliance solutions designed to help enterprises mitigate risk and protect data across their hybrid, multicloud environments and workloads. In this XaaS era, where data is the lifeblood of digital transformation, investing in robust data protection is paramount for success. 

XaaS calls for strong data security 

IBM continues to demonstrate its dedication to meeting the highest standards of security in an increasingly interconnected and data-dependent world. We can help support mission-critical workloads because our software, infrastructure and services offerings are designed to support our clients as they address their evolving security and data compliance requirements. Amidst the rise of XaaS and AI, prioritizing data security can help you protect your customers’ sensitive information. 

Watch “What is XaaS?” led by IBM VP Chuck Smith

The post Enhancing data security and compliance in the XaaS Era  appeared first on IBM Blog.


Holochain

Accounting for Valueflows and Regeneration Reimagined

#HolochainChats with William McCarthy

Professor William McCarthy teaches accounting and information systems at Michigan State University and is the innovator of a new way of accounting called Resources-Events-Agents (REA). 

In this exclusive interview, Professor McCarthy talks about how traditional accounting systems have a hard time keeping track of value flows and supporting practices that help the environment. 

He says that the double-entry bookkeeping method, which started in Venice in the 1400s, makes it harder to understand the real economy and work towards sustainability. 

So as companies face challenges with modern supply chains and the need for more sustainable economies, Professor McCarthy's work on REA offers an interesting take on the future of accounting.

Let’s dive in. 

Limitations of Traditional Accounting

For hundreds of years, accountants have used a method called double-entry bookkeeping. This method was first used by merchants in Venice, Italy, in the 1400s. They used math and wrote things down on paper to keep track of their business deals.

But Professor McCarthy says this old way of accounting doesn't work well anymore. It doesn't show the whole story of what's really happening in the economy. When something happens, like when two companies trade goods, traditional accounting quickly turns it into simple numbers and accounts, stripping away the context and rich data of the transaction. It doesn't keep all the important details.

This makes it hard for companies to understand their supply chains and how their actions impact the environment. The old accounting method is like telling a story but leaving out key parts. And Professor McCarthy thinks it’s time for change. 

The REA Accounting Model

Professor McCarthy has a different idea. He calls it Resources-Events-Agents, or REA for short. REA is like a new language for accounting that focuses on telling the whole economic story. "We often characterize REA as economic storytelling," Professor McCarthy explains.

In REA, instead of just writing down numbers, accountants keep track of all the important stuff. They look at the resources being traded, the events happening, and the people or companies involved. It's like writing a detailed story that doesn't leave anything out. "I found a new way of [telling economic stories] using Entity Relationship modeling, and later other things, that was called R E A, because “R” means the two resources that they were going to swap; “A” means the two agents who are at arm's length with each other; and “E” very simply means the process of sending things across this way," Professor McCarthy explains.

With REA, it's easier to see how things are connected. For example, if a company sells something, REA would keep track of what was sold, who bought it, and how it affects things like inventory and contracts. This gives a clearer picture of the economy. Imagine a visual story… "You have a customer called Charlie who will make a sale of inventory. And we put an object into the system, and it's described as a sale or shipment," Professor McCarthy elaborates.

Professor McCarthy says REA is like building a big, detailed map of how a company works. This map can help companies make better decisions and understand their impact on the world around them.
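To make the idea concrete, here is a minimal, hypothetical sketch (not Professor McCarthy's formal model or the Valueflows vocabulary) of how an REA-style record could capture the Charlie example as resources, agents, and an economic event rather than as bare debits and credits.

```python
# Minimal REA-style sketch: model the economic event itself, with its context,
# instead of immediately collapsing it into debit/credit amounts.
from dataclasses import dataclass
from datetime import date

@dataclass
class Agent:
    name: str          # e.g. the selling company or the customer Charlie

@dataclass
class Resource:
    name: str          # what is being exchanged
    quantity: float
    unit: str

@dataclass
class EconomicEvent:
    kind: str                      # "sale", "shipment", "payment", ...
    provider: Agent                # agent giving up the resource
    receiver: Agent                # agent receiving the resource
    resource: Resource
    occurred_on: date
    notes: str = ""                # room for context a ledger entry would drop

# The sale/shipment from the interview, kept as a full economic story:
seller = Agent("Example Co.")
charlie = Agent("Charlie")
apples = Resource("inventory: apples", quantity=100, unit="kg")
sale = EconomicEvent("sale", provider=seller, receiver=charlie,
                     resource=apples, occurred_on=date(2024, 5, 14),
                     notes="fulfils contract #42; reduces inventory on hand")
print(sale)
```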

Implications for Regeneration and Sustainability

REA accounting could be a game-changer for companies trying to be more sustainable and regenerative. By tracking the flow of resources, events, and agents across a supply chain, REA provides a more comprehensive picture of a company’s environmental and social impact. "The best example I've seen of the supply chain example is Lynn Foster's examples for an apple pie," Professor McCarthy says, referring to a case study on the Valueflows website.

Traditional accounting makes it hard to see how business actions affect the environment, discounting “externalities”. But REA keeps track of important details that show the full impact.

Additionally, Professor McCarthy believes that REA accounting is becoming increasingly important as customers and regulators demand more information about how products are made. People want detailed information about the origin and processing of the products they buy, from the farming and dyeing of cotton to the processing of olive oil. They want assurances about the absence of certain chemicals in the soil and the testing done at various stages of production.

It’s this demand for transparency, driven by a growing awareness of the social and environmental impacts of business, that has customers shifting to companies that optimize for sustainability rather than profit.

In response to these concerns, we are seeing changes in regulations, particularly in the EU and North America, that require companies to provide more detailed information about their supply chains. REA accounting is well-suited to meet these requirements because it allows for the flow of contextualized data, not just numbers on ledgers.  

However, changing accounting systems is not easy. "The biggest barrier to all of my work for years has been computational viability," Professor McCarthy admits. But with new technologies and a shift in mindset, REA could help companies tell a richer story of their impact on the world.

Innovating for the Future of Accounting

Despite the challenges, Professor McCarthy sees hope for the future of accounting innovation. He believes that new technologies, like Holochain, could help make REA accounting more viable.

In many ways, Holochain is an ideal technology for REA accounting because of how it leverages decentralized data storage — making it affordable and easy for smaller players in a supply chain to participate.

Professor McCarthy believes that small, innovative groups could lead the way in changing accounting. These nimble, environmentally and socially aware organizations might be able to make a real impact.

However, he also acknowledges the challenges ahead, particularly from large, established players in the accounting software industry. These companies may be resistant to change and could even try to acquire innovative startups to maintain their dominance.

As the accounting profession faces a choice between evolution and obsolescence, the work of visionaries like Professor McCarthy and the potential of new technologies like Holochain offer a glimpse of a more regenerative and sustainable future. The path forward may not be easy, but the stakes are too high to cling to the status quo.


Ontology

Ontology Weekly Report (May 7th — May 13th, 2024)

Ontology Weekly Report (May 7th — May 13th, 2024) Welcome to this week’s edition of the Ontology Weekly Report, where we delve into recent developments, product updates, and community engagements. Our commitment to expanding the blockchain landscape continues to yield exciting progress and opportunities for interaction. Latest Developments AMA with Port3 Network: We engaged in an enlighte
Ontology Weekly Report (May 7th — May 13th, 2024)

Welcome to this week’s edition of the Ontology Weekly Report, where we delve into recent developments, product updates, and community engagements. Our commitment to expanding the blockchain landscape continues to yield exciting progress and opportunities for interaction.

Latest Developments
- AMA with Port3 Network: We engaged in an enlightening AMA session with Port3 Network, discussing future collaborations and insights into decentralized applications.
- KYC and DID Article: Visit the Ontology website to read our latest article on the integration of KYC and decentralized identity systems, exploring the synergies between compliance and privacy.
- Hosted by Karmaverse on Binance Live: Karmaverse hosted us on Binance Live, offering a platform to discuss Ontology’s role in the evolving blockchain ecosystem and our upcoming projects.

Development Progress
- Ontology EVM Trace Trading Function: Progress remains steady at 87%, as we continue to refine and enhance our trading functionalities within the EVM.
- ONT to ONTD Conversion Contract: Development has progressed to 52%, ensuring a smooth and efficient user experience.
- ONT Leverage Staking Design: We have reached 37% completion in our innovative staking design, aimed at providing more versatile staking options to our users.

Product Development
- Top dApps Announced: The latest list of top dApps on the Ontology network has been released, highlighting the most active and innovative applications currently available.
- ONTO V4.7.2 Release: We’re excited to announce that ONTO V4.7.2 is now live, featuring updates and improvements that enhance overall user experience.

On-Chain Activity
- Stable dApp Ecosystem: Our network continues to support a robust ecosystem with 177 total dApps on MainNet.
- Transaction Growth: This week, we observed an increase of 822 dApp-related transactions, bringing the total to 7,763,752. Total transactions on MainNet also grew by 6,041, reaching 19,428,565.

Community Growth
- Engagement and Discussions: Our community platforms on Twitter and Telegram are alive with discussions on the latest developments. We encourage you to join the conversation and contribute to the growing dialogue around blockchain technology.
- Telegram Discussion on Wearable Technology: This week, led by Ontology Loyal Members, we discussed “Securing Our Digital Selves: Decentralized Identity in the Age of Wearable Technology.” This conversation explored the intersection of blockchain, privacy, and the expanding landscape of wearable tech.

Stay Connected 📱

We invite you to stay connected and up-to-date with Ontology by following us on our social media channels. Your involvement is crucial to our mutual success as we forge ahead in making blockchain technology more accessible and useful for everyone.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Weekly Report (May 7th — May 13th, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

The Philippines opens up a potential new frontier for stablecoin innovation

The central bank of the Philippines is paving the way for virtual asset service providers (VASPs) and financial institutions to begin issuing stablecoins with regulatory guardrails in place. 



Tokeny Solutions

Tokeny’s Talent | Adrian

The post Tokeny’s Talent | Adrian appeared first on Tokeny.
Adrian Corcoran is Head of Sales UK at Tokeny.

Tell us about yourself!

I love sports and have been active all my life. For the last 15 years I played rugby, but I hung up my boots last year because recovering on Sundays was getting too hard! Now, I play squash most early mornings and do Muay Thai 3 or 4 times a week with my youngest son. Between my girlfriend and me, we have 4 boys, aged between 17 and 23 years old. They certainly keep us busy!

What were you doing before Tokeny?

Before joining Tokeny, I worked as Head of Institutional Sales for a FinTech, and prior to that, I led Institutional Sales and Partnerships at a decentralized data cloud storage start-up. Before entering the Web3 space, I owned my own headhunting business for over 12 years, focusing on Wealth Management and Private Banking, with a notable crossover into Investment Banking.

How would you describe working at Tokeny?

Working at Tokeny is both challenging and rewarding. There’s never a dull moment, as our fast-paced environment ensures that there is always something to do. Our industry is very dynamic and innovative, so there are always plenty of opportunities to learn. Being part of Tokeny means being at the forefront of tokenization tech, which I get to explore with a diverse and talented team.

What are you most passionate about in life?

There are too many to name, but if I had to choose just three, they would be:

Family. My sister and I come from a very close-knit family, and those values have carried over into our own families.

Work. I now know what I want to be when I grow up! 🙂 Many people spend years searching for their true passion; I consider myself fortunate to have found mine.

Sports. I get irritable when I don’t play sports because staying fit and healthy is important to me. Plus, it’s a great way to release stress and maintain balance.

What is your ultimate dream?

I’m already living the dream! I am very lucky and have a great life; there’s not much I would change.

What advice would you give to future Tokeny employees?

If you’re passionate about Web3 and Tokenization and you apply yourself, you’ll do really well.

What gets you excited about Tokeny’s future?

Tokenization is the future, and Tokeny is at the forefront with its innovative platform. As we move towards standardization and wider adoption, the possibilities are endless.


The post Tokeny’s Talent | Adrian appeared first on Tokeny.

Wednesday, 15. May 2024

IBM Blockchain

The power of remote engine execution for ETL/ELT data pipelines

Data must be combined and harmonized from multiple sources into a unified, coherent format before being used with AI models. The post The power of remote engine execution for ETL/ELT data pipelines appeared first on IBM Blog.

Business leaders risk compromising their competitive edge if they do not proactively implement generative AI (gen AI). However, businesses scaling AI face entry barriers. Organizations require reliable data for robust AI models and accurate insights, yet the current technology landscape presents unparalleled data quality challenges.

According to International Data Corporation (IDC), stored data is set to increase by 250% by 2025, with data rapidly propagating on-premises and across clouds, applications and locations with compromised quality. This situation will exacerbate data silos, increase costs and complicate the governance of AI and data workloads. 

The explosion of data volume in different formats and locations and the pressure to scale AI looms as a daunting task for those responsible for deploying AI. Data must be combined and harmonized from multiple sources into a unified, coherent format before being used with AI models. Unified, governed data can also be put to use for various analytical, operational and decision-making purposes. This process is known as data integration, one of the key components to a strong data fabric. End users cannot trust their AI output without a proficient data integration strategy to integrate and govern the organization’s data. 

The next level of data integration

Data integration is vital to modern data fabric architectures, especially since an organization’s data is in a hybrid, multi-cloud environment and multiple formats. With data residing in various disparate locations, data integration tools have evolved to support multiple deployment models. With the increasing adoption of cloud and AI, fully managed deployments for integrating data from diverse, disparate sources have become popular. For example, fully managed deployments on IBM Cloud enable users to take a hands-off approach with a serverless service and benefit from application efficiencies like automatic maintenance, updates and installation.

Another deployment option is the self-managed approach, such as a software application deployed on-premises, which offers users full control over their business-critical data, thus lowering data privacy, security and sovereignty risks.

The remote execution engine is a fantastic technical development which takes data integration to the next level. It combines the strengths of fully managed and self-managed deployment models to provide end users the utmost flexibility.

There are several styles of data integration. Two of the more popular methods, extract, transform, load (ETL) and extract, load, transform (ELT), are both highly performant and scalable. Data engineers build data pipelines, which are called data integration tasks or jobs, as incremental steps to perform data operations and orchestrate these data pipelines in an overall workflow. ETL/ELT tools typically have two components: a design time (to design data integration jobs) and a runtime (to execute data integration jobs).

From a deployment perspective, they have been packaged together, until now. The remote engine execution is revolutionary in the sense that it decouples design time and runtime, creating a separation between the control plane and the data plane where data integration jobs are run. The remote engine manifests as a container that can be run on any container management platform or natively on any cloud container service. The remote execution engine can run data integration jobs for cloud to cloud, cloud to on-premises, and on-premises to cloud workloads. This enables you to keep the design time fully managed, as you deploy the engine (runtime) in a customer-managed environment, on any cloud such as in your VPC, in any data center, and in any geography.

This innovative flexibility keeps data integration jobs closest to the business data with the customer-managed runtime. It prevents the fully managed design time from touching that data, improving security and performance while retaining the application efficiency benefits of a fully managed model.

The remote engine allows ETL/ELT jobs to be designed once and run anywhere. To reiterate, the remote engines’ ability to provide ultimate deployment flexibility has compounding benefits:

- Users reduce data movement by executing pipelines where data lives.
- Users lower egress costs.
- Users minimize network latency.

As a result, users boost pipeline performance while ensuring data security and controls.
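As a rough conceptual sketch (hypothetical names, not the DataStage API), "design once, run anywhere" can be pictured as a pipeline definition that lives in the managed control plane while interchangeable remote engines execute it wherever the data sits:

```python
# Conceptual sketch of decoupled design time and runtime (hypothetical names,
# not IBM's actual interfaces): one pipeline spec, many execution locations.
from dataclasses import dataclass
from typing import List

@dataclass
class PipelineSpec:
    """Designed once in the fully managed control plane."""
    name: str
    steps: List[str]          # declarative step descriptions

class RemoteEngine:
    """Customer-managed runtime: a container in a VPC, data center, or edge site."""
    def __init__(self, location: str):
        self.location = location

    def run(self, spec: PipelineSpec) -> None:
        # In a real engine each step would extract/transform/load data locally.
        print(f"[{self.location}] running pipeline '{spec.name}'")
        for step in spec.steps:
            print(f"  executing: {step}")

spec = PipelineSpec("customer_360",
                    ["extract from CRM", "join with billing", "load to warehouse"])

# The same spec is pushed to whichever engine is closest to the data.
for engine in (RemoteEngine("aws-vpc-eu-west"), RemoteEngine("on-prem-dc-1")):
    engine.run(spec)
```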

While there are several business use cases where this technology is advantageous, let’s examine these three: 

1. Hybrid cloud data integration

Traditional data integration solutions often face latency and scalability challenges when integrating data across hybrid cloud environments. With a remote engine, users can run data pipelines anywhere, pulling from on-premises and cloud-based data sources, while still maintaining high performance. This enables organizations to use the scalability and cost-effectiveness of cloud resources while keeping sensitive data on-premises for compliance or security reasons.

Use case scenario: Consider a financial institution that needs to aggregate customer transaction data from both on-premises databases and cloud-based SaaS applications. With a remote runtime, they can deploy ETL/ELT pipelines within their virtual private cloud (VPC) to process sensitive data from on-premises sources while still accessing and integrating data from cloud-based sources. This hybrid approach helps to ensure compliance with regulatory requirements while taking advantage of the scalability and agility of cloud resources.

2. Multicloud data orchestration and cost savings

Organizations are increasingly adopting multicloud strategies to avoid vendor lock-in and to use best-in-class services from different cloud providers. However, orchestrating data pipelines across multiple clouds can be complex and expensive due to ingress and egress operating expenses (OpEx). Because the remote runtime engine supports any flavor of containers or Kubernetes, it simplifies multicloud data orchestration by allowing users to deploy on any cloud platform and with ideal cost flexibility.

Transformation styles like TETL (transform, extract, transform, load) and SQL Pushdown also synergize well with a remote engine runtime to capitalize on source/target resources and limit data movement, thus further reducing costs. With a multicloud data strategy, organizations need to optimize for data gravity and data locality. In TETL, transformations are initially executed within the source database to process as much data locally before following the traditional ETL process. Similarly, SQL Pushdown for ELT pushes transformations to the target database, allowing data to be extracted, loaded, and then transformed within or near the target database. These approaches minimize data movement, latencies, and egress fees by leveraging integration patterns alongside a remote runtime engine, enhancing pipeline performance and optimization, while simultaneously offering users flexibility in designing their pipelines for their use case.
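The pushdown idea can be shown with a small, self-contained sketch; SQLite stands in for a real warehouse and the table names are invented. The point is only that the transformation runs as SQL inside the target database instead of pulling rows back to the integration engine:

```python
# Illustrative ELT-with-pushdown sketch (SQLite as a stand-in target database).
import sqlite3

conn = sqlite3.connect(":memory:")

# "Load" raw data into the target first (the E and L of ELT).
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                 [(1, 120.0, "paid"), (2, 35.5, "cancelled"), (3, 99.9, "paid")])

# Pushdown: the transformation is expressed as SQL and executed by the database
# itself, so no row-by-row data movement back to the integration engine.
conn.execute("""
    CREATE TABLE clean_orders AS
    SELECT id, ROUND(amount, 2) AS amount
    FROM raw_orders
    WHERE status = 'paid'
""")

print(conn.execute("SELECT * FROM clean_orders").fetchall())
conn.close()
```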

Use case scenario: Suppose that a retail company uses a combination of Amazon Web Services (AWS) for hosting their e-commerce platform and Google Cloud Platform (GCP) for running AI/ML workloads. With a remote runtime, they can deploy ETL/ELT pipelines on both AWS and GCP, enabling seamless data integration and orchestration across multiple clouds. This ensures flexibility and interoperability while using the unique capabilities of each cloud provider.

3. Edge computing data processing

Edge computing is becoming increasingly prevalent, especially in industries such as manufacturing, healthcare and IoT. However, traditional ETL deployments are often centralized, making it challenging to process data at the edge where it is generated. The remote execution concept unlocks the potential for edge data processing by allowing users to deploy lightweight, containerized ETL/ELT engines directly on edge devices or within edge computing environments.

Use case scenario: A manufacturing company needs to perform near real-time analysis of sensor data collected from machines on the factory floor. With a remote engine, they can deploy runtimes on edge computing devices within the factory premises. This enables them to preprocess and analyze data locally, reducing latency and bandwidth requirements, while still maintaining centralized control and management of data pipelines from the cloud.

Unlock the power of the remote engine with DataStage-aaS Anywhere

The remote engine helps take an enterprise’s data integration strategy to the next level by providing ultimate deployment flexibility, enabling users to run data pipelines wherever their data resides. Organizations can harness the full potential of their data while reducing risk and lowering costs. Embracing this deployment model empowers developers to design data pipelines once and run them anywhere, building resilient and agile data architectures that drive business growth.  Users can benefit from a single design canvas, but then toggle between different integration patterns (ETL, ELT with SQL Pushdown, or TETL), without any manual pipeline reconfiguration, to best suit their use case.

IBM® DataStage®-aaS Anywhere benefits customers by using a remote engine, which enables data engineers of any skill level to run their data pipelines within any cloud or on-premises environment. In an era of increasingly siloed data and the rapid growth of AI technologies, it’s important to prioritize secure and accessible data foundations. Get a head start on building a trusted data architecture with DataStage-aaS Anywhere, the NextGen solution built by the trusted IBM DataStage team.

Learn more about DataStage-aaS Anywhere | Try IBM DataStage as a Service for free

The post The power of remote engine execution for ETL/ELT data pipelines appeared first on IBM Blog.


Indicio

How Decentralized Identity enables re-usable KYC and what it means for you

The post How Decentralized Identity enables re-usable KYC and what it means for you appeared first on Indicio.
Know-your-Customer (KYC) processing just became easier and less expensive. With new decentralized identity technologies, important client information is collected once and can be reused repeatedly by using verifiable credentials to hold and share data.

By Tim Spring

What is KYC?

Know Your Customer (KYC) refers to the data collection process that financial institutions use to authenticate customer or client identity, assess financial risk, and attempt to avoid fraud.

KYC can be broken down into three steps that most financial institutions will take when opening an account or qualifying a customer for a loan.

Customer Identification Program

Financial firms must obtain four pieces of identifying information about a client: name, date of birth, address, and identification number (usually taxpayer identification number).

Customer Due Diligence

This is the process where all of a customer’s documents are collected to verify their identity and evaluate their risk profile for suspicious account activity.

Enhanced Due Diligence (optional)

Sometimes additional information is collected for customers that are suspected to be at a higher risk of infiltration, terrorism financing, or money laundering.

KYC is also regulated by several government agencies to reduce money laundering, and is often associated with Anti-Money Laundering policies (AML).

The drawbacks of traditional KYC

KYC’s main drawback is that it is time-consuming, and it becomes even more so when it has to be repeated for multiple applications or organizations. For example, a bank will need to do KYC to open an account for a customer, and then do it again to approve them for a loan. If the bank refers this customer to a partner organization, that organization will also need to do its own KYC to verify the customer. This is a lot of documentation for the applicant to collect and repeatedly submit, and it takes considerable time, effort, and expense to repeatedly verify.

Each instance of KYC costs a financial institution between $13 and $130 per customer, with corporate clients costing considerably more — one survey reported between $1,501 and $3,500 per review. With banks having thousands of customers, and each customer potentially needing to do KYC multiple times, the average spending on KYC per bank hovers around 60 million dollars annually.

The time it takes to do KYC for a customer or client can be anywhere from 31 to 180 days.

Re-usable KYC through Decentralized Identity 

KYC is necessary (and regulated by the government) so it’s not going away, but we can make it much more efficient by reducing the amount of duplication.

The key value proposition presented by decentralized identity is that once you have verified the relevant financial data that same data can be verified repeatedly with the same level of assurance. How is this possible?

Once the KYC process has been conducted satisfactorily, a financial institution issues their customer or client with a verifiable credential containing all the details and relevant data from the initial KYC process. This information is digitally signed.

When the customer or client wants to share their KYC data with another party (or even the same financial institution), the authenticity of the credential is verified through cryptography (by checking a record on a distributed ledger) and the digital signatures provide proof that the data presented in the credential hasn’t been altered.

A verifiable credential is like a cryptographic wrapper that can seal any kind of information. Try to change the information and the seal is broken. The nature of decentralized identity is that the source of the wrapper can always be known and cryptographically verified, which means if you trust the source’s KYC process — such as that of a financial institution — you can trust the information presented by the credential holder.
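A stripped-down sketch of that cryptographic wrapper idea follows. It is not Indicio's actual credential format (real verifiable credentials follow the W3C data model, with issuer keys anchored where verifiers can resolve them); it only shows how a digital signature seals KYC data so that any alteration is detectable.

```python
# Minimal illustration of sealing KYC data with a digital signature.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuing financial institution holds the private key; verifiers use the public key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

kyc_claims = {"name": "A. Customer", "date_of_birth": "1990-01-01",
              "address": "1 Example St", "tax_id": "123-45-6789"}
payload = json.dumps(kyc_claims, sort_keys=True).encode()   # canonical form
signature = issuer_key.sign(payload)                        # the "seal"

def verify(claims: dict, sig: bytes) -> bool:
    data = json.dumps(claims, sort_keys=True).encode()
    try:
        issuer_public.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(verify(kyc_claims, signature))                        # True: seal intact
kyc_claims["address"] = "999 Other St"
print(verify(kyc_claims, signature))                        # False: seal broken
```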

A KYC credential can contain all kinds of information that could be necessary for a financial institution to have, but annoying or time consuming for a customer to acquire, for example: Proof of address, government ID number, proof of employment, proof of annual income, and more.

Once this process is more automated with decentralized identity, this information only needs to be collected once, or again only when the credential is set to expire. We can remove the need for partner organizations or other departments to re-do work that has already been done, saving customers time and stress, and organizations millions of dollars.

Collect once, verify often.

To learn more about how decentralized identity can change your workflows you can check out Indicio Proven, Indicio’s full solution for the secure exchange of verifiable credentials.

If you have questions about the technology or would like to discuss details with our team of technical experts you can contact Indicio.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post How Decentralized Identity enables re-usable KYC and what it means for you appeared first on Indicio.


Microsoft Entra (Azure AD) Blog

Meet us at Identiverse: May 28-31 in Las Vegas

The annual Identiverse conference is a great opportunity to meet with our community, immerse in the latest challenges and innovations, and hear from leaders in the identity industry. And it’s happening soon! Identiverse 2024 is taking place from May 28 to 31 in Las Vegas, Nevada at ARIA Resort & Casino.    By attending, you’ll be among the first to hear about what’s new with Micr

The annual Identiverse conference is a great opportunity to meet with our community, immerse in the latest challenges and innovations, and hear from leaders in the identity industry. And it’s happening soon! Identiverse 2024 is taking place from May 28 to 31 in Las Vegas, Nevada at ARIA Resort & Casino. 

 

By attending, you’ll be among the first to hear about what’s new with Microsoft Entra, our work in identity standards, and how it will help you navigate the constantly evolving identity and network access threat landscape. 

 

Plus, you can request a 1:1 meeting with a Microsoft identity expert and drop by Booth #2423 in the expo hall to ask questions and see the latest demos of identity solutions and Microsoft Copilot for Security.  

 

Featured Microsoft sessions at Identiverse

 

We’ve got a powerhouse lineup of topics showcasing our latest innovations to help you get the most from Microsoft Entra.     

 

During our session, Secure access for any trustworthy identity, anywhere, to anything, on Wednesday, May 29 at 2:00 PM, we'll update you on Microsoft’s progress toward enabling the trust fabric, our vision for how organizations can secure every digital interaction from today into the future. Understanding that a Zero Trust approach to identity security is an ongoing journey, we’ll talk about four focus areas to consider and prioritize, including strengthening your identity foundation, securing access for your workforce and external identities, and securing access in multicloud.

 

It's worthwhile to get up early the next morning for the Microsoft Power Breakfast session, Unify your organization’s access controls across identity, endpoint, and network, on Thursday, May 30 from 7:15 AM to 8:15 AM. Nitika Gupta, Principal Manager of Product Management for identity security, will discuss how to simplify your Zero Trust architecture with universal policies for any access point, from legacy on-premises resources to cloud apps and the web. She’ll demonstrate how unified controls in Microsoft Entra Conditional Access can better protect your organization’s applications, data, and infrastructure from threats inside and outside your organization with fine-tuned policies that examine user, device, and network context.

 

Also on Thursday, our team of experts Sarah Scott, Principal Manager of Product Management, and Melanie Maynes, Director of Product Marketing, are presenting Secure access with AI: Deep-dive into IAM powered by Microsoft Copilot for Security from 2:35 PM to 3:00 PM. This session delves into the intricate realm of identity protection and AI. We explore the synergies between industry recommended Identity and Access Management (IAM) practices and the advanced security features offered by generative AI tools. Plus, you’ll get to see a demo of Microsoft Copilot for Security as it assists identity admins with existing workflows in Microsoft Entra and provides rapid intelligent recommendations.​

 

Join us at the Microsoft booth for Identiverse Beer Crawl

 

On Wednesday, May 29th, from 5:00 PM to 6:30 PM, we’ll have craft brews and beverages at our booth on the show floor. It’s a great time to check out our latest demos, chat with our speakers and identity specialists, or just say hello!

 

Hear from Microsoft experts throughout the conference

 

Many of our experts have deep expertise in identity security and have earned the opportunity to speak at Identiverse. We hope you’ll attend all the sessions that are relevant to you.

 

Streamline Collaboration and Govern Guest Users and Partners with Microsoft Entra

Tuesday, May 28 from 7:00 PM to 7:15 PM  

Speaker: Laura Viarengo, Product Marketing Manager, External Identities, Microsoft

Learn how Microsoft Entra can help you create smooth sign-in experiences for external users, digitally verify their identities, grant them access to resources they need, and ensure least privilege access to reduce risk of lateral movement. 

 

ACR: The Missing Security Control: 

Wednesday, May 29 from 10:30 AM to 10:55 AM  

Speaker: Pamela Dingle, Director of Identity Standards, Microsoft

Learn about the critical work the industry is doing to get Authentication Context aligned and on track across the federation landscape.​

 

General Motors Road to Modern Consumer Identity: 

Wednesday, May 29 from 10:30 AM to 10:55 AM  

Speakers: Razi Rais, Senior Product Manager, Microsoft and Andrew Cameron, IT Fellow, Identity and Access Management, GM

Learn about the architectural decisions General Motors made to establish its global customer identity platform.

 

Untangling FIDO and Passkey Concepts: 

Thursday, May 30 from 4:00 PM to 4:25 PM | Joshua 10

Speaker: Danny Zollner, Senior Product Manager, Microsoft

Learn how to empower people to make educated policy or product decisions with a clear understanding of passwordless authentication.

 

Externalizing Authorization is More than a Technology Problem…  

Thursday, May 30 from 5:10 PM to 5:35 PM  

Speakers: Pieter Kasselman, Identity Standards Architect, Microsoft and Sarah Cecchetti, Head of Product, Amazon Web Services

With the rise of advanced threat actors, regulation, compliance and pressures for greater business agility, authorization is more relevant than ever. Hear about the learnings and solutions from the experts.​

 

Modern Apple Identity Management Best Practices: 

Friday, May 31 from 8:30 AM to 8:55 AM  

Speakers: Michael Epping, Senior Product Manager, Microsoft and Brian Melton-Grace, Senior Product Manager, Microsoft

Hear about how Microsoft helps organizations to improve their macOS end user experience and security posture by modernizing their Apple device identity strategy. Learn how to modernize Apple identity management at your organization.​

 

From Fires to Fixes: 

Friday, May 31 from 9:40 AM to 10:05 AM  

Speaker: Tia Louden, Senior Technical Program Manager, Microsoft

Get a peek behind the curtain at how Microsoft Identity handles our security post-incident review (PIR) process! Tia will share how a high-quality PIR process drives incident response teams to improve, learn from events, and use data-driven analysis to bake the learning into all areas of the organization’s culture and security posture.

 

From Keynote to Action: Building Workload Identity Foundations with Standards 

Friday, May 31 from 9:40 AM to 10:05 AM  

Speakers:​ Pieter Kasselman, Identity Standards Architect, Microsoft, Evan Gilman, Co-founder, SPIRL, and George Fletcher, Identity Standards Architect, Capital One

The new IETF working group, Workload Identity for Multi-Service Environments (WIMSE), will discuss the progress and gaps in new draft standards to enable and deploy Zero Trust workload identity architectures.

 

Lastly, be sure to check out the Closing Keynote, The Future of Authorization, featuring Pieter Kasselman, Identity Standards Architect at Microsoft, on Friday, May 31 from 11:00 AM to 11:30 AM. 

 

We hope you’ll make it a point to attend these sessions. If you see us, be sure to say hello and let us know that you’re a follower of the Microsoft Entra blog on Tech Community. And when you talk to our Microsoft Entra experts at our booth, be sure to ask for an invitation to our Wednesday evening VIP Mixer. (That’s your reward for reading to the end of this post.)  

 

See you in Las Vegas!

 

Nichole Peterson

Senior Product Marketing Manager, Microsoft Entra

LinkedIn

 

 

Recent articles from Microsoft speakers at Identiverse 2024:

Microsoft Entra adds identity skills to Copilot for Security by Sarah Scott

Act now: Turn on or customize Microsoft-managed Conditional Access policies by Nitika Gupta

Auto rollout of Conditional Access policies in Microsoft Entra ID by Nitika Gupta

Microsoft Entra: Top content creators to follow by Nichole Peterson

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog   ⁠⁠Microsoft Entra blog | Tech Community   ⁠Microsoft Entra documentation | Microsoft Learn  Microsoft Entra discussions | Microsoft Community  

 


Microsoft Entra delivers increased transparency

Seventy-five percent of cybersecurity professionals say the current threat landscape is the most challenging it has been in the last five years, according to the 2023 ISC2 Cybersecurity Workforce Study. You’re probably on the hook to secure access for your organization – preventing identity attacks and securing least privilege access. And we know it’s intense.   One of the ways we strive

Seventy-five percent of cybersecurity professionals say the current threat landscape is the most challenging it has been in the last five years, according to the 2023 ISC2 Cybersecurity Workforce Study. You’re probably on the hook to secure access for your organization – preventing identity attacks and securing least privilege access. And we know it’s intense.

 

One of the ways we strive to assist you is by providing accurate, reliable, and timely information to monitor and optimize the strength of your identity and network access security posture. This transparency gives you visibility that’s necessary to assess performance, tenant health, and your plan for improvements.

 

In 2024, we’ve released a series of innovations reinforcing our commitment to transparency. This blog recaps these improvements for you in three parts: 

 

Transparency in updates: Helping you know what’s new and coming soon for Microsoft Entra

Transparency in adoption: Providing recommendations and license utilization insights

Transparency in operations: Tailored insights on SLA performance, scenario health, and sign-ins

 

Plus, all features highlighted in this blog are demonstrated in our video, Trust via Transparency.

 

 

I hope these added capabilities help maximize the value you receive from Microsoft Entra as you consider, deploy, and measure the progress of your Zero Trust approach.   

 

Transparency in updates

 

In the world of technology, change is constant. In 2023, we released over 100 Microsoft Entra updates and new capabilities and communicated this information across announcements, quarterly blogs, and multiple docs locations. Our first investment area of transparency aims to streamline this communication, helping you find and filter the product update information most relevant to you. 

 

What’s New hub in Microsoft Entra admin center

 

“What’s New” in Microsoft Entra gives a clear and complete view of Entra product innovation so you can stay informed, evaluate the latest innovations, and eliminate the need to manually track updates. Product updates are categorized into Roadmap and Change Announcements. The roadmap includes public previews and recent general availability releases, while Change Announcements detail modifications to existing features. 

 

Learn more: Introducing "What's New" in Microsoft Entra - Microsoft Community Hub

 

Transparency in adoption

 

The second investment area, transparency in adoption, focuses on helping you get more value from your Microsoft Entra licenses, giving you visibility to intelligent recommendations for improving configurations and protecting your organization.

 

Microsoft Entra license utilization insights

 

Microsoft Entra license utilization insights help you optimize your Entra licenses, as well as stay compliant by getting insights into the current usage. Today, you can see usage and licenses for Entra ID capabilities such as Conditional Access and risk-based Conditional Access. In the future, we will expand the license utilization insights to other products in the Microsoft Entra product line. 

 

Learn more: Introducing Microsoft Entra license utilization insights - Microsoft Community Hub

 

Microsoft Entra recommendations

 

Microsoft Entra recommendations can serve as a trusted advisor for enhancing your security posture and improving employee productivity. With Microsoft Entra recommendations, you get personalized and actionable insights based on best practices and industry standards to help you secure your organization. Plus, we’ve made updates to Identity Secure Score, which you can find on the Microsoft Entra recommendations blade.

 

Learn more: Introducing new and upcoming Entra Recommendations to enhance security and productivity - Microsoft Community Hub 

 

Transparency in operations

 

Transparency in operations focuses on what we're doing to help customers see how available and resilient Microsoft Entra really is, hold us accountable when issues arise so we can keep improving, and understand when they have actions to take within their tenant to improve its health. Let’s look at recently announced functionality in reporting, health, and monitoring:

 

Tenant-level SLA reporting

 

Monthly tenant-level SLA reporting enables you to monitor your tenant's performance against our Entra ID SLA promise of 99.99% availability in authenticating users and issuing tokens within your tenant.
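For context, a quick back-of-the-envelope calculation shows the error budget implied by a 99.99% monthly availability target:

```python
# Downtime budget implied by 99.99% availability over a 30-day month.
minutes_per_month = 30 * 24 * 60          # 43,200 minutes
allowed_downtime = minutes_per_month * (1 - 0.9999)
print(f"{allowed_downtime:.1f} minutes")  # roughly 4.3 minutes per month
```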

 

Learn more: Tenant health transparency and observability - Microsoft Community Hub

 

Precomputed health metric streams

 

These new health metrics isolate relevant signals from activity logs and provide pre-computed, low-latency aggregates every 15 minutes for specific high-value observability scenarios. The first scenarios we’ve enabled are multifactor authentication (MFA), sign-ins for managed or compliant devices, and Security Assertion Markup Language (SAML) sign-ins. We're starting with authentication-related scenarios because they are mission-critical to all our customers, but other scenarios in areas like entitlement management, directory configuration, and app health will be added in time, along with intelligent alerting capabilities in response to anomalous patterns in the data.  
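As a rough illustration of what such a pre-computed aggregate represents (the log shape and field names below are invented, not the Entra schema), sign-in events can be bucketed into 15-minute windows and reduced to a success rate per scenario:

```python
# Hypothetical sketch: bucket sign-in events into 15-minute windows and compute
# an MFA success rate per bucket. Field names are illustrative only.
from collections import defaultdict
from datetime import datetime

events = [
    {"time": "2024-05-14T10:03:00", "scenario": "mfa", "success": True},
    {"time": "2024-05-14T10:07:00", "scenario": "mfa", "success": False},
    {"time": "2024-05-14T10:21:00", "scenario": "mfa", "success": True},
]

buckets = defaultdict(lambda: [0, 0])  # (window, scenario) -> [successes, total]
for e in events:
    t = datetime.fromisoformat(e["time"])
    window = t.replace(minute=(t.minute // 15) * 15, second=0, microsecond=0)
    buckets[(window, e["scenario"])][1] += 1
    if e["success"]:
        buckets[(window, e["scenario"])][0] += 1

for (window, scenario), (ok, total) in sorted(buckets.items()):
    print(f"{window:%H:%M} {scenario}: {ok}/{total} = {ok / total:.0%} success")
```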

 

Learn more: Tenant health transparency and observability - Microsoft Community Hub

 

Copilot-assisted assessments

 

As our third example of our commitment to transparency in operations, we can help you understand how users interact with your organization's resources. Microsoft Copilot for Security is embedded in Microsoft Entra so you can more efficiently assess identities and access, plus investigate and resolve identity risks and even complete complex tasks. A great example of this assistance is asking Copilot to give you sign-in logs for a specific user for a specific amount of time, saving you the reporting time.
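For comparison, the equivalent manual step is a filtered query against the sign-in logs. A minimal sketch, assuming a Graph access token with AuditLog.Read.All and a hypothetical user, might look like this:

```python
# Hypothetical sketch: pull recent sign-in logs for one user from Microsoft Graph.
# Assumes an access token with AuditLog.Read.All is already available.
import requests

TOKEN = "<access-token>"
URL = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
params = {
    "$filter": "userPrincipalName eq 'alex@contoso.com' "
               "and createdDateTime ge 2024-05-01T00:00:00Z",
    "$top": "50",
}
resp = requests.get(URL, headers={"Authorization": f"Bearer {TOKEN}"}, params=params)
resp.raise_for_status()
for entry in resp.json().get("value", []):
    print(entry["createdDateTime"], entry["appDisplayName"], entry["status"]["errorCode"])
```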

 

Learn more: Microsoft Entra adds identity skills to Copilot for Security - Microsoft Community Hub

 

Tell us what you think

 

For my team, transparency isn’t a buzzword; it’s our commitment. As we continue to enhance Microsoft Entra, earning your trust through transparency remains our guiding star. 

 

We look forward to you trying these new capabilities and hopefully making them part of your ongoing experience to reduce complexity and effectively manage your identity and network access security solutions. I’d be happy to hear your feedback and ideas, either in the comments below or via the “Provide Feedback” link on the Microsoft Entra admin center home page.

 

Best regards,

Shobhit Sahay

 

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog   Microsoft Entra blog | Tech Community   Microsoft Entra documentation | Microsoft Learn   Microsoft Entra discussions | Microsoft Community  

This week in identity

E52 - A Review of RSA Conference 2024 - Part 1

Summary In this episode, Simon and David discuss their experiences at the RSA Conference 2024 and highlight the key themes and trends in the identity and access management (IAM) space. They emphasize the growing importance of identity in the security landscape and the increasing integration of identity into RSA. They also discuss the impact of AI and Gen AI on IAM, the need for better discovery

Summary

In this episode, Simon and David discuss their experiences at the RSA Conference 2024 and highlight the key themes and trends in the identity and access management (IAM) space. They emphasize the growing importance of identity in the security landscape and the increasing integration of identity into RSA. They also discuss the impact of AI and Gen AI on IAM, the need for better discovery and visibility in identity systems, and the challenges of transitioning from legacy technology to new, intelligent systems. They conclude by highlighting the importance of preparing data for the Gen AI world and the need for organizations to adapt and embrace new technologies in order to stay competitive.

Keywords

RSA Conference, RSAC2024, identity and access management, IAM, security, AI, Gen AI, discovery, visibility, legacy technology, data preparation, competitive advantage

Takeaways

Identity is becoming increasingly important in the security landscape, and RSA is a key event for identity professionals.

The integration of identity into themes and topics at RSAC2024 is a reflection of the growing significance of identity in the industry.

AI and Gen AI are driving the need for more intelligent identity systems and the transition from legacy technology.

Discovery and visibility are crucial in identity systems, and organizations need to break down silos and integrate their identity infrastructure.

Preparing data for the Gen AI world is essential for organizations to stay competitive and take advantage of new technologies.

Chapters

00:00 Introduction and Overview of RSA Conference

13:02 The Growing Importance of Identity in the Security Landscape

21:03 Challenges of Transitioning from Legacy Technology to New, Intelligent Systems

25:01 The Impact of AI and Gen AI on IAM

31:05 Preparing Data for the Gen AI World

33:30 Preview of Next Episode on Fraud and Cloud


Ockto

How document-free acceptance is becoming the standard: in conversation with Gert Vasse

Episode 2 of the Data Sharing Podcast with Gert Vassen and Hidde Koning   When accepting new customers for financial services, paper documents are still often used, which is inefficient, error-prone, and susceptible to fraud. The solution lies in document-free acceptance, where data is retrieved directly from trusted digital sources. This method
Episode 2 of the Data Sharing Podcast with Gert Vassen and Hidde Koning

 

When accepting new customers for financial services, paper documents are still often used, which is inefficient, error-prone, and susceptible to fraud. The solution lies in document-free acceptance, where data is retrieved directly from trusted digital sources. This method increases efficiency, reduces the risk of fraud, and better safeguards consumers' privacy.

In episode 2 of the Data Sharing Podcast, host Hidde Koning discusses this with Gert Vasse. Gert is an expert in digital customer acceptance and Commercial Director at Ockto.


Indicio

Indicio Proven joins the AWS Marketplace, providing powerful, award-winning identity verification, secure data sharing, and reusable KYC

The post Indicio Proven joins the AWS Marketplace, providing powerful, award-winning identity verification, secure data sharing, and reusable KYC appeared first on Indicio.
AWS customers now have access to Indicio’s universally compatible and highly-scalable solution for decentralized identity, Open Badges 3.0, and W3C verifiable credentials.

Seattle/May 14, 2024: When it comes to data, “verify once and reuse often” is the key to reducing cost, improving efficiency, and delivering a much better user experience, whether it’s passwordless access, certification, KYC, or network security. Now, the leading solution for implementing verifiable data, Indicio Proven®, is available on AWS Marketplace.

Built by the global market leader in decentralized identity technology, Indicio Proven is a universal software solution that works with any system to implement verifiable credentials. It’s quick to set up, easy to use, and scales in a cost-effective way to meet any number of use cases and users. 

By using verifiable credentials to share data and identity, information can be authenticated without checking in with its source, meaning time-consuming processes reliant on trustworthy data can be instant and automated.

Indicio Proven provides simple, powerful ways to implement passwordless login, reusable KYC, and zero trust access management. And its award-winning privacy and security features provide a way to manage biometrics and avoid deepfakes. 

“With Indicio Proven, we’ve created a product that will drive digital evolution across every sector,” said Heather Dahl, co-founder and CEO of Indicio. “Think about all the inefficiencies, all the risks of fraud, all the compliance headaches around data privacy and protection we endure when dealing with digital and paper documentation. We’re now giving people a simple, elegant, trustworthy way to remove all this and streamline how data and identity are managed and authenticated. This means reducing cost, risk, and friction. It means increasing privacy, security, and trust. And it means delivering dramatically better customer experiences through seamless interaction. Verifiable data is the DNA for a new digital era — and AWS customers now have it at their fingertips to create a wave of innovation.”

Indicio’s award-winning technology has become the gold standard for trusted, authenticated biometrics and data verification, and is currently deployed by customers in travel and hospitality, government, financial services, agriculture, and education. Rapid deployment into any platform or system means organizations can issue and verify data from any data source while implementing privacy-preserving and zero trust security architectures. 

AWS customers will also benefit from the flexibility and scalability achieved when deploying Indicio Proven on AWS Cloud with:

- streamlined procurement and quicker deployment
- extensive security validation
- simplified billing
- discounts and flexible pricing

Expanding Indicio’s relationship with AWS means that developers in enterprises, governments, and organizations have powerful technologies to solve critical problems, create seamless, streamlined processes, and deliver digital transformation at scale.

Get Indicio Proven now in the AWS Marketplace.

Please visit Indicio for more information about creating verifiable data solutions in the cloud.

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Indicio Proven joins the AWS Marketplace, providing powerful, award-winning identity verification, secure data sharing, and reusable KYC appeared first on Indicio.


KuppingerCole

Jun 27, 2024: Asking Good Questions About AI Integration in Your Organization

The integration of AI poses both unprecedented opportunities and challenges for organizations. Our webinar, "Asking the Right Questions: Navigating AI Integration in Enterprise Security," addresses the pressing need for CISOs, CIOs, and other information risk management professionals to navigate the complexities of AI adoption effectively.

Monday, 13. May 2024

KuppingerCole

Jun 18, 2024: Securing the Digital Frontier: Exploring HP's Business Solutions for Endpoint Security

In the ever-evolving landscape of enterprise and personal computing, maintaining security and integrity across endpoints is paramount. This webinar explores the realm of Business Solutions for Endpoint Security, spotlighting the transformative potential of HP's cybersecurity innovations in safeguarding organizations against modern cyber threats.

Entrust

The Dark Side of GenAI: Safeguarding Against Digital Fraud

Generative artificial intelligence (GenAI) has the capacity to create new opportunities, disrupt how we work,... The post The Dark Side of GenAI: Safeguarding Against Digital Fraud appeared first on Entrust Blog.

Generative artificial intelligence (GenAI) has the capacity to create new opportunities, disrupt how we work, and change how we think about AI regulation. Some predict it will be as disruptive, if not more so, than the widespread adoption of the internet. But with new opportunities come new challenges and threats. While GenAI continues to dominate the attention of businesses, the media, and regulators, it’s also caught the attention of fraudsters.

Recent technological advances mean it’s never been cheaper or easier to be a fraudster. In this brave new digital-first world, fraudsters have more tools at their fingertips than ever before. And it’s set to cost. Online payment fraud losses are predicted to increase from $38 billion in 2023 to $91 billion in 2028.

The rise of the GenAI fraudster

Fraudsters generally fall into two groups: the lone amateur and the organized criminal enterprise. Traditionally the latter, with more resources at its disposal, has posed the greater threat to businesses. But GenAI offers even the most amateur fraudsters easy access to more scalable and increasingly sophisticated types of fraud.

The evidence is in the data. Over the last few years, less sophisticated or “easy” fraud dominated. Proprietary data from Onfido, an Entrust company, found that between 2022 and 2023, 80.3% of fraud caught fell into this category. The remainder was classed as “medium” (19.6%) or “hard” (0.1%). But recently there’s been an increase in more sophisticated fraud. Comparing these figures to data from the last six months finds a jump in both medium fraud (accounting for 36.4%) and hard fraud (accounting for 1.4%).

How fraudsters are using generative AI deepfakes

GenAI programs have made it easy for anyone to create realistic, fabricated content including audio, photos, and videos. Deepfake videos in particular, sophisticated synthetic media where a person’s likeness is replaced with someone else’s, are becoming increasingly common and convincing. Fraudsters have started using deepfakes to try to bypass biometric verification and authentication methods. These videos can be pre-recorded or generated in real time with a GPU and fake webcam, and typically involve superimposing one person’s face onto another’s.

This type of attack has surged in recent years. Comparing 2023 with 2022, there’s been a 3,000% increase in deepfake attempts. This is particularly concerning in the realm of digital onboarding and identity verification, where the integrity of personal identification is paramount.

Currently, a few fraudsters are responsible for creating deepfakes at scale. But the growing popularity of “fraud-as-a-service” offerings (where experienced fraudsters offer their services to others), combined with improvements in deepfake software, suggests their volume and sophistication will increase in 2024.

Document forgeries

Many customer due diligence processes involve the authentication of identity documents. But image manipulation software, and the emergence of websites such as OnlyFakes — an online service that sells the ability to create images of identity documents it claims are generated using AI — have made it easier for fraudsters to fake documents.

There are four different ways for fraudsters to create fake documents:

Physical counterfeit: A fake physical document created from scratch
Digital counterfeit: A fake digital representation of a document created from scratch (i.e., in Photoshop)
Physical forgery: An existing document that is altered or edited
Digital forgery: An existing document that is altered or edited using digital tools

Historically, most fake documents were physical counterfeits (fake documents fraudsters created entirely from scratch). In 2023, Onfido identified that 73.2% of all document fraud caught was from physical counterfeits. In the last six months, that’s dropped to 59.56%, with digital forgeries accounting for a larger proportion of document fraud than in prior years (34.8%).

This increase in digital forgeries can be attributed to the emergence of websites such as OnlyFakes. Fraudsters have wised up to the fact it’s a faster, cheaper, and more scalable way to create fake documents.

Synthetic identity fraud

Synthetic identity fraud is a type of fraud where criminals combine fake and real personal information, such as Social Security Numbers (SSNs) and names, to create a new identity. This new, fake identity is then used to open fake accounts, access credit, or make fraudulent purchases.

Generative AI tools offer a way for fraudsters to generate fake information for synthetic identities at scale. Fraudsters can use AI bots to scrape personal information from online sources, including online databases and social platforms, before using this information to create synthetic identities.

With synthetic identity fraud projected to generate $23 billion USD in losses by 2030, businesses are adopting advanced fraud detection and prevention technologies to root out synthetic fraud. Keeping fraudsters from entering in the first place with a reliable identity verification solution at onboarding is the foundational element in this detection framework.

Phishing

During phishing attacks, fraudsters reach out to individuals via email or other forms of communication requesting they provide sensitive data or click a link to a malicious website, which may contain malware.

Generative AI tools offer fraudsters an easy way to create more sophisticated and personal social engineering scams at scale, for example by using AI tools to write convincing phishing emails or to support card cracking. Research has found that the top tools used by bad actors in 2023 include the dark web, fraud as a service, and generative AI. This includes the tool wormGPT, which provides a fast method for generating phishing attacks and malicious code.

Combatting GenAI fraud with… AI

The advancement in GenAI means we’re entering a new phase of fraud and cyberattacks. But the good news is that any technology fraudsters can access is accessible to those building fraud detection solutions. The best cyber defense systems of tomorrow will need AI to power them to combat the speed and scale of attacks. Think of it as an “AI versus AI showdown.”

With the right training, AI algorithms can recognize the subtle differences between authentic and synthetic images or videos, which are often imperceptible to the human eye. Machine learning, a subset of AI, plays a crucial role in identifying irregularities in digital content. By training on vast datasets of both real and fake media, machine learning models can learn to differentiate between the two with high accuracy.
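
To make the training loop concrete, here is a minimal scikit-learn sketch of a real-versus-synthetic classifier operating on pre-extracted feature vectors. It is a generic illustration with placeholder data, not Entrust's or Onfido's detection pipeline.

```python
# Minimal sketch of training a real-vs-synthetic media classifier on
# pre-extracted feature vectors (e.g., embeddings from a face or frame
# encoder). Illustrative only -- not Entrust's or Onfido's pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: rows are media samples, columns are extracted features;
# label 1 = synthetic (deepfake), 0 = authentic.
X = rng.normal(size=(2000, 64))
y = rng.integers(0, 2, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = GradientBoostingClassifier().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))

# In production the features would come from a media encoder and the model
# would be retrained as new deepfake generation techniques appear.
```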

One of the strengths of using AI to fight deepfakes and other GenAI fraud is its ability to continuously learn and adapt. As deepfake technology evolves, so too do the AI algorithms designed to detect them.

Securing digital identities against fraud

With AI-driven attacks from phishing, deepfakes, and synthetic identities on the rise, Entrust’s AI-powered, identity-centric solutions are critical in ensuring the integrity and authenticity of digital identities.

By innovating and integrating Onfido capabilities across the Entrust portfolio, we’re committed to helping:

Fight phishing and credential misuse with enhanced authentication leveraging biometrics and digital certificates
Neutralize deepfakes while creating secure digital experiences with AI/ML-driven identity verification
Enable trusted digital onboarding, authenticate customers or employees, and issue credentials in a matter of minutes while reducing fraud exposure and staying compliant with regulations and standards
Secure data and cryptographic assets with cutting-edge encryption, key management, and compliance solutions

To learn more, download the full report here: https://go.entrust.com/identity-fraud-report-2024

The post The Dark Side of GenAI: Safeguarding Against Digital Fraud appeared first on Entrust Blog.


Ocean Protocol

New Data Challenge: GitHub Developer Dynamics

Analyze developer interactions and their impact on project crypto tokens. Overview This data challenge focuses on analyzing GitHub developer activity and its impact on crypto token prices. Participants will track developer activity trends over time by accessing and analyzing extensive datasets from Ocean Protocol, Bittensor, Fetch.AI, Numerai, and SingularityNET. Challenge tasks include ide
Analyze developer interactions and their impact on project crypto tokens.

Overview

This data challenge focuses on analyzing GitHub developer activity and its impact on crypto token prices. Participants will track developer activity trends over time by accessing and analyzing extensive datasets from Ocean Protocol, Bittensor, Fetch.AI, Numerai, and SingularityNET. Challenge tasks include identifying patterns in developer engagement, ranking projects by activity levels, and assessing the most active contributors. They will also determine the correlation between developer actions and token prices, exploring if and how developer activity influences market dynamics.

Objectives

Utilizing statistical methods, participants will analyze the correlation between GitHub developer activity and cryptocurrency token prices. They will identify patterns in developer activity corresponding to significant project events or milestones and monitor the number of commits and repository creations in selected crypto projects to evaluate their evolution over time. They will investigate the impact of developer engagement on token price fluctuations, exploring different timeframes and identifying optimal time lags for correlation.

Including supplementary data like social media activity and partnership announcements enhances this analysis. The goal is to provide participants with a comprehensive understanding of how developer activities influence market dynamics and to develop predictive models that elucidate these relationships.
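
As a rough illustration of the lag analysis described above, the pandas sketch below correlates weekly commit counts with weekly token returns at several time lags. The file names and column names are placeholders, not part of the official challenge materials.

```python
# Sketch of a lagged-correlation analysis between GitHub commit activity and
# token returns. File names and columns are placeholders for the challenge
# datasets; this is not an official starter kit.
import pandas as pd

commits = pd.read_csv("commits.csv", parse_dates=["timestamp"])  # commit ID, author, timestamp, summary
prices = pd.read_csv("prices.csv", parse_dates=["date"])         # daily OHLCV for one project

# Weekly commit counts and weekly close-to-close returns.
weekly_commits = (
    commits.set_index("timestamp").resample("W").size().rename("commits")
)
weekly_returns = (
    prices.set_index("date")["close"].resample("W").last().pct_change().rename("return")
)

df = pd.concat([weekly_commits, weekly_returns], axis=1).dropna()

# Correlate returns against commit counts shifted by 0..8 weeks to look for
# an optimal lead/lag relationship.
for lag in range(9):
    corr = df["return"].corr(df["commits"].shift(lag))
    print(f"lag {lag} weeks: corr = {corr:.3f}")
```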

Data

Data provided for this challenge fall into four main categories, covering various aspects of GitHub activity and cryptocurrency market performance for Ocean Protocol, Bittensor, Fetch.AI, Numerai, and SingularityNET.

The first category contains records of all commits to the repositories of these crypto projects, detailing each commit’s ID, author details, timestamp, and summary of changes.

The second category includes data on issues reported within the project repositories, capturing each issue’s ID, title, description, status (open or closed), creation and update timestamps, and the reporter’s details.

The third category compiles information on each project’s repositories, including the repository ID, name, creation, and last update dates, number of forks, stars, watchers, primary programming language, and licensing details.

The fourth and final category records historical price data for each project’s cryptocurrency, showing daily open, high, low, and closing prices and trading volume. This dataset enables participants to explore how GitHub activities correlate with market trends and token price movements, serving as a foundation for analyzing developer impact on market dynamics.

Mission

This Data Challenge aims to uncover the relationships between GitHub developer activities and cryptocurrency market fluctuations. Participants will analyze developer commits, issues, repository details, and token prices to determine how these factors influence crypto projects’ financial and operational success.

The goal is to empower participants to predict market behaviors based on development activities, illustrating developers’ significant influence on cryptocurrency markets. This understanding supports informed strategic decision-making and allows a data-driven approach within the crypto industry, ultimately enhancing analytical skills and insight into the ecosystem’s dynamics.

Rewards

Our commitment to celebrating achievement and encouraging talent has shaped a rewarding system that recognizes outstanding performers and encourages all participants. We offer a total prize pool of $10,000, shared among the top 10 participants, adding a dynamic layer of excitement and competition to the 2024 championship. The top 10 winners receive monetary prizes and earn points towards the championship, creating a level playing field for experienced data scientists and those new to the field.

Opportunities

There’s even more on offer for exceptional performers. The top three participants in each challenge may collaborate with Ocean to develop dApps that monetize their algorithms. What distinguishes us? You maintain full ownership of your intellectual property. We aim to enable you to market your innovative solutions effectively. Let’s collaborate to transform your ideas into successful ventures!

How to Participate

Are you ready to join us on this quest? Whether you’re a seasoned data pro or just starting, there’s a place for you in our community of data scientists. Let’s explore and discover together on Desights, our dedicated data challenge platform. The challenge runs from April 11 until April 30, 2024, at midnight UTC. Click here to access the challenge.

Community and Support

To engage in discussions, ask questions, or join the community conversation, connect with us on Ocean’s Discord channel #data-science-hub or the Desights support channel #data-challenge-support.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord — or track Ocean’s progress on GitHub.

New Data Challenge: GitHub Developer Dynamics was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Empowering Interoperability: The Critical Role of Healthcare IAM | Ping Identity

Today’s healthcare relies on digital health ecosystems—a network of interconnected digital services. These ecosystems need to accommodate any type of user to support a variety of digital healthcare use cases and innovation. User types not only include consumers, providers, workforces, and partners, but also connected IoMT (Internet of Medical Things) such as remote patient monitoring (RPM) devices

Today’s healthcare relies on digital health ecosystems—a network of interconnected digital services. These ecosystems need to accommodate any type of user to support a variety of digital healthcare use cases and innovation. User types not only include consumers, providers, workforces, and partners, but also connected IoMT (Internet of Medical Things) such as remote patient monitoring (RPM) devices, wearables, and medical equipment. And now in the age of artificial intelligence, users also include AI sessions and agents. Digital health ecosystems also include cloud services and third-party partnerships and application programming interfaces (APIs).


Interoperability is a fundamental pillar within this intricate digital web. As healthcare systems evolve, the seamless exchange of health information across diverse digital ecosystems and stakeholders—such as providers, payers, pharmacies, patients/members—is critical. It’s been proven that shared data exchange improves care by informing clinical decisions that result in better patient outcomes.


Healthcare interoperability refers to the ability of different health information technologies to communicate, exchange, and use health information in a coordinated manner, both within and across organizational boundaries, in order to improve healthcare delivery. As discussed below, multiple regulations mandate interoperability. And achieving it is a complex endeavor. For example, interoperability relies on advanced cybersecurity capabilities, a standard called FHIR (Fast Healthcare Interoperability Resources) for data formats and APIs, legal agreements for data sharing, and governance models to manage the data exchange of medical records.
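
For a sense of what FHIR-based exchange looks like in practice, here is a minimal sketch of reading a Patient resource over FHIR's standard REST interface. The server URL and resource ID are placeholders, and a production integration would also sit behind OAuth 2.0 / SMART on FHIR authorization.

```python
# Minimal sketch of reading a Patient resource over FHIR's REST API.
# The base URL and resource ID are placeholders; a real integration would
# also handle OAuth 2.0 / SMART on FHIR authorization.
import requests

FHIR_BASE = "https://fhir.example.org/R4"   # placeholder FHIR server
patient_id = "example"                       # placeholder resource ID

resp = requests.get(
    f"{FHIR_BASE}/Patient/{patient_id}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()

patient = resp.json()
# FHIR Patient resources carry demographics in a standardized structure.
name = patient.get("name", [{}])[0]
print(patient.get("id"), name.get("family"), name.get("given"))
```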


Interoperable systems are essential to optimize patient care and streamline healthcare delivery. Health information exchange across digital health ecosystems hinges on secure integrations and access management of clinical data, such as electronic health records (EHRs), across various information systems and interfaces. At the heart of this endeavor lies identity and access management (IAM). Healthcare IAM (also known as healthcare identity management) is a crucial component for both achieving healthcare data interoperability and complying with the regulations that mandate it.


Sunday, 12. May 2024

KuppingerCole

Strengthening Trust in Identities for Compliance and Security

In this episode, Matthias Reinwarth and Charlene Spasic continue their discussion on the topic of Zero Trust. They explore the benefits of strong and reliable identities in the context of Zero Trust, including regulatory compliance, business continuity, and resilience. They also discuss the application of Zero Trust to IoT and the role of AI in the implementation of Zero Trust. The conversation co

In this episode, Matthias Reinwarth and Charlene Spasic continue their discussion on the topic of Zero Trust. They explore the benefits of strong and reliable identities in the context of Zero Trust, including regulatory compliance, business continuity, and resilience. They also discuss the application of Zero Trust to IoT and the role of AI in the implementation of Zero Trust. The conversation concludes with a look at the challenges and future directions of Zero Trust.




Spherical Cow Consulting

The EU Digital Identity Architecture Reference Framework – How to Get There From Here

The EU's Digital Identity Architecture Reference Framework (ARF) offers a starting point for digital wallets. It aims to support user control over personal data while meeting legal and cybersecurity requirements. But to get there from here, you need to know what you don't know: the functional and non-functional requirements, along with interfaces and integration points for digital identity wallets

Digital wallets and the credentials they hold are ALL the rage these days. Understanding how to make them work in today’s world is where the EU’s Digital Identity Architecture Reference Framework (ARF) comes in.

If you’ve read my posts about verifiable credentials (this one and this one), the next logical step is to discuss how those credentials are stored in a digital wallet.

What’s a Wallet?

OK, so digital identity wallets may be all the rage, but that doesn’t necessarily mean that people—even techies—have a common understanding of what they are. According to the Open Wallet Foundation:

“A digital wallet is a container where you can store and access digital assets, credentials, and other useful items, such as tickets and keys. Another software component, most often called an agent, can put items into a wallet, take items out of a wallet, or process items in a wallet. While the wallet is the container, the agent is the mover and shaker.” — Gordon Gram, “Why the World Needs an Open Source Digital Wallet Right Now,” Open Wallet Foundation, February 2023.

That’s a good functional definition, though it doesn’t quite help determine the standard technical specifications for the container. But that’s a topic for a future blog post.

A Framework for Many Moving Parts

Governments, businesses, and individuals are all concerned with how to give an individual agency over their personal data. This concern is a natural response to the privacy laws that have rolled out worldwide in the last 5+ years. That said, coming up with a structure that supports user agency over their own data while still supporting other legal and cybersecurity-based requirements is a challenge. There are just SO MANY moving parts!

There are the entities that are responsible for the wallets themselves (i.e., the Holders). There are the entities that are responsible for the data that goes into the credentials stored by the wallet (i.e., the Issuers). There are the entities that want to ask for the credential (or some of the data that the credential holds) (i.e., the Verifiers).

But wait, there’s more! There are device manufacturers that have to have equipment capable of managing the cryptography associated with complex digital signatures. Services responsible for building a trusted registry of entities participating in this ecosystem. Entities providing audit services to make sure everything is running securely and according to current legal requirements and best practices. And so on.
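
To make the three core roles concrete, the sketch below shows only the cryptographic heart of the issue, hold, present, and verify flow using a raw Ed25519 signature. It is a toy illustration, not the EUDI Wallet's credential format; standardized attestations, selective disclosure, and trusted registries are all omitted.

```python
# Heavily simplified issue -> hold -> present -> verify flow. Real wallets use
# standardized credential formats, selective disclosure, and trust registries;
# this only shows the cryptographic core of the three roles.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer: signs a claim set and gives it to the holder.
issuer_key = Ed25519PrivateKey.generate()
claims = {"given_name": "Alex", "age_over_18": True}
payload = json.dumps(claims, sort_keys=True).encode()
credential = {"claims": claims, "signature": issuer_key.sign(payload).hex()}

# Holder: stores the credential in a wallet (here, just a dict).
wallet = {"pid": credential}

# Verifier: checks the issuer's signature using the issuer's public key,
# which it would normally obtain from a trusted registry.
issuer_public = issuer_key.public_key()
presented = wallet["pid"]
issuer_public.verify(
    bytes.fromhex(presented["signature"]),
    json.dumps(presented["claims"], sort_keys=True).encode(),
)  # raises InvalidSignature if the credential was tampered with
print("credential verified")
```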

With so many moving parts and personal data, is it any wonder the EU decided that further guidance was necessary? How many ways this can go wrong is more than a bit terrifying. And yet, the promises of privacy, security, and agency can’t be ignored. And the ARF is where that work starts.

Enter the EUDI Architecture Reference Framework

The ARF is an outline that provides the first blush of a framework for how digital wallets will work in the EU. The European Commission kicked off the work through a Commission Recommendation from June 2021 that urged Member States to develop common standards, technical specifications, and best practices in response to the eIDAS 2.0 regulation. EU Member States sent their experts to join a collaborative process to build the framework.

The work was done very openly via a public GitHub repository, letting people see and comment on the changes as the work developed.

While using the framework isn’t required to comply with eIDAS 2.0, it’s still the best way to help meet the goal of an interoperable environment for digital wallets. Other countries and companies should see this as a great place to start. 

Highlights

The ARF isn’t a particularly long document—it’s a comprehensive outline, after all, not the complete guidance— but it is dense with information! Here are a few highlights:

1. Functional Requirements Are A Thing: If your organization is exploring using digital verifiable credentials, then you could do worse than start here. And if you are interested in using digital verifiable credentials AND have a presence in the EU, then you REALLY want to start here. A wallet needs to be able to do a variety of things, from performing electronic identification to storing and managing qualified and non-qualified electronic attestations of attributes and providing mutual authentication capabilities, and this isn’t something you want to get wrong. Given that, the level of detail in the ARF is a great kickstarter guide for developers and stakeholders on the technical expectations and capabilities required for the EUDI Wallet. It’s an opportunity to mitigate “you don’t know what you don’t know” when making plans in the digital wallet world.

2. It’s Not All Technical Details: The ARF outline includes space for non-functional requirements, including guidance on security, privacy by design, and user control over personal data. Having a presence in the EU and a reliance on government-issued credentials means playing in the digital wallet and verifiable credential landscape. The ARF puts the framework in a context that will help you comply with the appropriate legal and regulatory frameworks while providing a secure and user-friendly experience.

3. But if You Want to Talk about Interfaces and Integration Points: The framework will specify various interfaces and integration points for a digital identity wallet (specifically the EUDI Wallet) with external entities like Member States’ infrastructures, identity cards, and trusted registries. You probably could go a trial-and-error route to figure out how to integrate with this digital ecosystem, but life is too short. Use the guidance to save time; it’s freely available.

What Happens Next?

As with any v1.0, you can expect changes. The ARF will evolve from an outline to a complete Architecture and Reference Framework. The authors intend to expand this outline into a comprehensive framework as set out in the Commission Recommendation, and it will be aligned with the outcomes of the legislative negotiations regarding the proposal for a European Digital Identity Framework. 

As noted earlier, the ARF has its own GitHub repository, and you are free to offer feedback if you have ideas and experience that will help improve the framework.

How Globally Applicable is the ARF?

If you ask me (and it’s my blog), deployers can and should use the ARF as the basis for large-scale wallet deployments worldwide. But others may not agree. The US, in particular, is much more chaotic regarding wallet deployments, as each US State is deciding independently of the others whether to build its own or use ones from Google or Apple. The US Federal Government’s Department of Homeland Security is also doing amazing work in the verifiable credential and digital wallet space. Still, they don’t have the same level of mandate that regulations like eIDAS 2.0, GDPR, etc., provide in the European Union.

The post The EU Digital Identity Architecture Reference Framework – How to Get There From Here appeared first on Spherical Cow Consulting.

Friday, 10. May 2024

liminal (was OWI)

Link Index for AML Transaction Monitoring for Financial Service and Fintechs

The post Link Index for AML Transaction Monitoring for Financial Service and Fintechs appeared first on Liminal.co.

Weekly Industry News – Week of May 06

Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage. Looking for product or company-specific news? Log in or sign up to Link for more detailed news and developments. Week of May 06, 2024 […] The post Weekly Industry News – Week

Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage.

Looking for product or company-specific news? Log in or sign up to Link for more detailed news and developments.

Week of May 06, 2024

Here are the main industry highlights of this week.

➡ Innovation and New Technology Developments

TikTok to Automatically Label AI-Generated Content with New Content Credentials Technology

TikTok will add an “AI-generated” label to identify content produced by AI content creation tools using Content Credentials technology from C2PA. The feature will start globally within the next few weeks. The initiative aims to ensure accurate labeling of AI content and ease the burden on creators. TikTok will expand the use of Content Credentials to identify content made with TikTok AI effects and maintain transparency for viewers in the coming months.

Read the full article on techcrunch.com

BNP Paribas Becomes First EU G-SIB to Join Global LEI System as Validation Agent

BNP Paribas has been approved as a Validation Agent in the Global LEI System, making it the first European Union-based global systemically important bank to join the program. As a Validation Agent, the bank will provide LEIs to its corporate and institutional banking clients during onboarding and ensure LEI completeness during the KYC recertification process. The adoption of LEIs is supported by international bodies to enhance cross-border payment operations and combat global financial crime.

Read the full article on ffnews.com

Biden Administration Launches New Global Cybersecurity Strategy at RSA Conference

The Biden administration has launched a new international cybersecurity strategy to combat cyber threats from countries like China and Russia. The strategy focuses on four main areas: creating a secure digital identity ecosystem, advocating for rights-respecting digital technology, forming coalitions to counter cyber threats, and boosting cybersecurity resilience in partner nations. It includes a $50MM Cyberspace and Digital Connectivity fund to aid allies in enhancing their cybersecurity. The strategy also aims to establish global norms around the use of artificial intelligence.

Read the full article on politico.com

Google Updates Ad Policies to Ban Ads for Deepfake Pornography and Synthetic Explicit Content

Google has updated its advertising policies to ban ads for services that create deepfake pornography and other synthetic sexually explicit content. The new policy targets ads for services that alter or generate synthetic sexually explicit images or videos. Both human review and automated systems will support Google’s enforcement against these ads. The move aligns with growing concerns over nonconsensual deepfake pornography.

Read the full article on theverge.com

Senate Grills UnitedHealth CEO on Change Healthcare’s Cyberattack and $22 Million Ransom Payment

The Change Healthcare cyberattack began with hackers accessing a server that lacked basic security measures. UnitedHealth Group CEO faced questioning about the attack’s details, including the use of compromised credentials. The attack involved ransomware, leading to operational disruptions in healthcare payments and claims processing. UnitedHealth responded by disconnecting the systems, rebuilding the platform, and paying a $22 million ransom in Bitcoin to mitigate the damage. The incident prompted the Office for Civil Rights investigation to determine if protected health information was exposed and patient privacy laws violated.

Read the full article on cbs12.com

U.S. Agencies Warn of North Korean Spear-Phishing Campaigns Targeting Geopolitical Intelligence

The U.S. government issued a cybersecurity advisory warning about North Korean hackers’ spear-phishing campaigns. The hackers use spoofed emails to gather sensitive information from their targets. They exploit weak DNS DMARC record policies to send emails that appear to come from valid domains. The group behind these attacks, Kimsuky (or APT43), targets foreign policy experts and uses initial benign interactions to build trust. Organizations are advised to strengthen their DMARC policies and treat suspicious emails more cautiously.
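
As a small practical aside, the advisory's DMARC point is easy to check for your own domains; the dnspython sketch below looks up a domain's published DMARC policy. The domain is a placeholder, and tightening the policy itself happens in DNS, not in code.

```python
# Small sketch: look up a domain's DMARC policy record. A weak or missing
# policy (p=none, or no record at all) is the kind of gap the advisory warns
# about. The domain is a placeholder; requires the dnspython package.
import dns.resolver

domain = "example.com"
try:
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    records = [b"".join(r.strings).decode() for r in answers]
    dmarc = next((r for r in records if r.startswith("v=DMARC1")), None)
    print(dmarc or "No DMARC record published")
except dns.resolver.NXDOMAIN:
    print("No _dmarc record for", domain)
```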

Read the full article on thehackernews.com

➡ Investments and Partnerships

Akamai Technologies to Acquire Noname Security for $450 Million to Boost API Security Capabilities

Akamai Technologies will acquire Noname Security for $450 million to boost its API security solutions. Noname Security’s technology will be integrated into Akamai’s existing security offerings to provide more comprehensive capabilities in identifying shadow APIs and addressing vulnerabilities and attacks. The deal is expected to close in Q2 2024, subject to customary closing conditions.

Read the full article on prnewswire.com

Wiz Secures $1 Billion in Series E Funding, Eyes IPO with $12 Billion Valuation

Cloud security platform startup Wiz has raised $1 billion in Series E funding, led by Andreessen Horowitz, Lightspeed Venture Partners, and Thrive. With this funding, the company plans to expand its organic growth through R&D and talent acquisition and inorganic growth through strategic acquisitions of other cybersecurity startups. Wiz has quickly established a significant presence in the cloud security sector, boasting contracts with 40% of the Fortune 100. The company plans to leverage this new capital to continue its growth trajectory, aiming for $1 billion in annual recurring revenue by 2025.

Read the full article on techcrunch.com

DocuSign Acquires AI Startup Lexion for $165 Million to Boost Contract Management Capabilities

Docusign has acquired Lexion, a contract workflow automation startup, for $165 million. This acquisition is part of DocuSign’s strategy to strengthen its presence in the contract management industry. DocuSign aims to leverage Lexion’s technology to provide deeper insights into contract structures and identify potential risks by leveraging structured data management and natural language processing (NLP) techniques. The acquisition comes as DocuSign reportedly navigates a potential sale to private equity, with Bain and Hellman & Friedman among the top bidders. Additionally, Docusign announced a workforce reduction of about 6%, cutting around 400 jobs.

Read the full article on techcrunch.com

Delta Capita Boosts KYC Offerings with Acquisition of LSEG’s Client On-Boarding Technology

Delta Capita acquired the Client On-Boarding technology and client base from the LSEG (London Stock Exchange Group), formerly GoldTier. This acquisition aims to enhance its KYC capabilities and expand its suite of compliance tools and services, reinforcing its standing as a leading provider in KYC client lifecycle management. The move is part of Delta Capita’s broader expansion strategy, including previous blockchain and financial consultancy acquisitions.

Read the full article on consultancy.uk

➡ Policy and Regulatory

SoFi Fined $1.1 Million by FINRA for Lax Customer Verification Leading to $2.5 Million Theft

SoFi was fined $1.1 million by FINRA for inadequate customer identification measures that led to fraud, resulting in a $2.5 million theft. SoFi’s automated process for approving account openings was insufficient for verifying identities, leaving accounts vulnerable to exploitation by fraudsters. About $8.6 million was stolen from other financial institutions via SoFi Money accounts, with $2.5 million successfully withdrawn by the perpetrators. SoFi identified the flaws and implemented remediation steps, including enhanced staff training and improved customer verification processes.

Read the full article on fintechfutures.com

IMF Report Stresses Cybersecurity as a Growing Financial Risk, Urges Enhanced Corporate Governance

The IMF warns about the rising threat of cyberattacks in the financial sector and emphasizes the importance of stronger corporate governance in cybersecurity. The organization encourages financial firms to increase cybersecurity training efforts and attain clearer oversight of cyber risks.

Read the full article on wsj.com

Spanish Police Crack Encrypted Services to Identify Catalan Activist in Pro-Independence Investigation

Spanish police obtained data from encrypted services Wire and Proton to identify an activist linked to Catalonia’s pro-independence movement. The investigation aimed to uncover individuals involved in the 2019 street riots and a potential protest plan during King Felipe VI’s 2020 visit. Wire and Proton confirmed compliance with Swiss authorities’ requests for external email addresses. Proton highlighted its limitations in providing user data due to encryption policies.

Read the full article on techcrunch.com

Massive Data Breach Exposes Over 5 Million Salvadorans’ Personal Information on Dark Web

Over 5.1 million Salvadorans were affected by a significant data breach where personal details, including high-definition facial photos and national ID numbers, were leaked on the dark web. The data, which represents about 80% of El Salvador’s population, was made available for free after an unsuccessful attempt to sell it. The source of the breach remains unconfirmed, but cybersecurity firm Resecurity suggests a potential link to the hacker group Guacamaya. The incident has raised concerns about identity fraud and other cybercrimes due to the improperly stored sensitive data.

Read the full article on biometricupdate.com

UK Suspects China in Defence Ministry Cyberattack, Tightens Security Amid Investigation

The Ministry of Defence’s payroll system was recently hacked, with sensitive information of armed forces personnel compromised, including names, bank details and personal addresses. Although China is suspected, the government has not officially named the perpetrator due to ongoing investigations. Prime Minister Rishi Sunak acknowledged that a "malign actor" was responsible and said the UK’s defensive strategies are robust. The security practices of the external contractor managing the system are being reviewed. Service personnel affected by the breach have been reassured about the safety of their May salaries.

Read the full article on bbc.com

TD Bank Faces Justice Department Probe for Alleged Money Laundering Linked to Fentanyl Sales

The Justice Department is investigating TD Bank for facilitating money laundering related to illegal fentanyl sales. The bank is enhancing its anti-money laundering practices after acknowledging that its systems have failed to detect criminal activities. TD Bank is facing multiple anti-money laundering probes in the U.S. and was recently fined $9.2MM CAD by Canadian AML regulator Fintrac for related deficiencies. The bank has earmarked $450MM in capital within its Q1 regulatory filings to pay US AML penalties, and financial analysts predict the bank could face fines of up to $2 billion.

Read the full article on finance.yahoo.com

Google’s Antitrust Trial Nears Conclusion, Spotlight on Search Market Dominance

Google is facing two antitrust trials by the Department of Justice, with the first one about its search operations nearing conclusion. The trial focuses on whether Google’s business practices in the search engine market violate anti-monopoly laws, with key issues discussed being Google’s potentially anticompetitive behavior, its significant payments to Apple to remain the default search engine on iOS devices, and the impact of these practices on market competition. The trial’s outcome could lead to significant remedies, including possible changes to how Google conducts its business or even a breakup of certain operations.

Read the full article on theverge.com

The post Weekly Industry News – Week of May 06 appeared first on Liminal.co.


KuppingerCole

Modernizing IAM for Today's Business Needs

by Alejandro Leal This Advisory Note examines the paradigm shift towards modern identity & access management (IAM) frameworks that support digital business initiatives, including zero trust and Secure Access Service Edge (SASE), and the imperative for organizations to modernize their IAM infrastructures. It highlights the necessity of transitioning from legacy systems that are ill-equipped for

by Alejandro Leal

This Advisory Note examines the paradigm shift towards modern identity & access management (IAM) frameworks that support digital business initiatives, including zero trust and Secure Access Service Edge (SASE), and the imperative for organizations to modernize their IAM infrastructures. It highlights the necessity of transitioning from legacy systems that are ill-equipped for the demands of cloud computing and diverse user groups, towards more agile, scalable, and integrated IAM solutions. This transition is not merely technical but strategic, ensuring IAM systems can support a wide array of identities, from employees to customers and devices, and play a critical role in securing digital interactions across various platforms and services.

The AI Database

by Alexei Balaganski As I was writing about Oracle’s new SQL Firewall some time ago, I had no idea it will be published on the same day the company officially announced the general availability of its flagship database product, Oracle Database 23ai. What a coincidence! And what a twist with the new name! I have mixed feelings about the name, by the way. On the one hand, it appears to be the mos

by Alexei Balaganski

As I was writing about Oracle’s new SQL Firewall some time ago, I had no idea it would be published on the same day the company officially announced the general availability of its flagship database product, Oracle Database 23ai. What a coincidence! And what a twist with the new name!

I have mixed feelings about the name, by the way. On the one hand, it appears to be the most unnecessary, marketing-driven change ever. I can vividly imagine thousands of DBAs, developers, consultants, journalists, and other IT professionals rolling their eyes and scratching their heads. Now they must update all their documentation and writings to reflect it, and that time could have been spent on more productive things.

On the other hand, in contrast to many other vendors throwing “AI” into their product names, Oracle does have quite a lot to show for it. Artificial intelligence was, of course, a substantial part of the database for quite some time already. The concept of the Autonomous Database was introduced back in 2017. Machine learning algorithms have been a part of the database core for years as well. However, in 2024, the only kind of AI everyone is talking about is Generative AI. And so, among over 300 major new features, the new release introduces several new ones crucial for not just enabling GenAI capabilities for business applications, but implementing them in a universal, globally scalable, secure and, last but not least, compliant manner.

The most notable addition is AI Vector Search, a set of capabilities to enable native support for generating, storing, indexing, and querying vector data directly in the database. This is, of course, a crucial requirement for implementing retrieval-augmented generation (RAG) to enhance the accuracy of large language model responses with additional information from external sources (such as an organization’s own sensitive information that it does not want to share with LLM operators in any other way).

Now Oracle Database 23ai can natively support storing vector embeddings for unstructured content in the same table as existing relational data. This allows for creating complex queries across them, combining traditional SQL with semantic search, as well as geospatial information, graphs, and so on – which is impossible with a standalone vector database. In contrast, Oracle’s converged database approach keeps all data in a single location, where it is uniformly protected by layers of data security controls, including the SQL Firewall.
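
To illustrate the general idea of combining a relational-style filter with vector similarity, here is a small conceptual sketch in plain numpy; it is deliberately generic and does not reproduce Oracle's own AI Vector Search SQL syntax.

```python
# Conceptual illustration of vector search combined with a relational filter.
# Plain numpy with made-up data; not Oracle's AI Vector Search SQL syntax.
import numpy as np

rng = np.random.default_rng(1)

# Pretend each row has structured attributes plus an embedding of its text.
regions = np.array(["EU", "US", "EU", "APAC", "EU"])
embeddings = rng.normal(size=(5, 8))
query = rng.normal(size=8)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Roughly: "WHERE region = 'EU' ORDER BY similarity DESC FETCH FIRST 2 ROWS ONLY"
mask = regions == "EU"
scores = np.array([cosine(e, query) for e in embeddings])
scores[~mask] = -np.inf
top2 = np.argsort(scores)[::-1][:2]
print("most similar EU rows:", top2, scores[top2])
```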

When running on the Oracle Exadata platform, its underlying smart storage technology will even natively accelerate vector search operations to run AI applications at a massive scale. Curiously, even if you are not yet ready to migrate your application to the 23ai release, it is still possible to replicate existing data from other sources using the GoldenGate 23ai service and let Oracle Cloud handle the AI operations for you.

Another major feature introduced in the new release is JSON Relational Duality, a technology that aims to solve the decades-long debate between the fans of the relational and the document data models. Until now, the developers were forced to make an early design decision between the efficiency and consistency of SQL and the simplicity and flexibility of NoSQL, and to resort to additional middleware layers to address the shortcomings of both approaches.

With JSON Relational Duality Views (what a mouthful of a name!), developers no longer need to make this choice. It is now possible to store data in relational format but access it in the form of JSON documents. A view can be declared across multiple tables with a structure described using the familiar GraphQL syntax. The database then takes care of all the rest, including automated table updates when documents are modified, lock-free concurrency control to support stateless operations, and making the data available across a range of APIs, from SQL to REST or even MongoDB.

While the idea might sound somewhat trivial in hindsight, implementing it in a scalable, reliable, and standardized way has required years of research and development. But now the capability is officially available, and developers are encouraged to try it – even the free edition of the database includes the feature, along with the similarly improved Property Graph Views.

I have no intention of mentioning every one of the over 300 features introduced in the new release, but one thing that is especially close to me, both as an analyst covering data security and compliance solutions and as a citizen of the European Union, is Oracle’s Globally Distributed Database. With the introduction of built-in RAFT-based replication, the new release brings the concept of database sharding to a new level of scalability and performance. A global, hyperscale database that is distributed and replicated across multiple geographical locations in real time is now a reality — and the data within it can be transparently localized according to complex rules.

For example, all information linked to EU citizens will be stored only in datacenters located in Germany, enforcing the EU data sovereignty regulations, while the data related to Indian citizens will be only kept within the borders of India. And yet, for business applications, the entire customer base will appear as a single table, enabling efficient but still compliant transactions and analytics. With new privacy regulations being introduced constantly, adapting existing applications becomes a matter of just adding new sharding rules to the database – no need to change anything in the business logic.

All these new capabilities are now officially available in the Oracle Cloud, both in their public regions and in the Cloud@Customer private cloud, as well as in Microsoft Azure as a part of Oracle’s and Microsoft’s joint Oracle Database@Azure offering. The on-prem availability is yet to be announced.

Thursday, 09. May 2024

auth0

I’ve Got Passkeys Working in My App! But How Do I Manage Them?

Passkeys allow you to authenticate securely, and they're easy to integrate using Auth0. But what happens after that? Let's learn how to list and revoke passkeys using Auth0.
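
As a rough sketch of what that management flow can look like, the snippet below lists a user's authentication methods and deletes the passkey entries via Auth0's Management API. The endpoint paths, the "passkey" type value, the tenant domain, token, and user ID are all assumptions to verify against the current Auth0 documentation.

```python
# Rough sketch of listing and revoking a user's passkeys via Auth0's
# Management API authentication-methods endpoints (verify paths, scopes, and
# response shape against the current Auth0 docs). Domain, token, and user ID
# are placeholders.
import requests

DOMAIN = "your-tenant.auth0.com"      # placeholder tenant
TOKEN = "MGMT_API_ACCESS_TOKEN"       # placeholder Management API token
USER_ID = "auth0|123456"              # placeholder user ID
headers = {"Authorization": f"Bearer {TOKEN}"}

# List the user's authentication methods and keep the passkey entries.
resp = requests.get(
    f"https://{DOMAIN}/api/v2/users/{USER_ID}/authentication-methods",
    headers=headers,
    timeout=10,
)
resp.raise_for_status()
passkeys = [m for m in resp.json() if m.get("type") == "passkey"]

# Revoke a specific passkey by deleting that authentication method.
for method in passkeys:
    requests.delete(
        f"https://{DOMAIN}/api/v2/users/{USER_ID}/authentication-methods/{method['id']}",
        headers=headers,
        timeout=10,
    ).raise_for_status()
```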

Entrust

SSL Review: March 2024

The Entrust monthly digital certificates review covers a range of topics, including news, trends, and... The post SSL Review: March 2024 appeared first on Entrust Blog.

The Entrust monthly digital certificates review covers a range of topics, including news, trends, and opinions. This month, read about our key considerations on the path to 90-day certificate validity, important updates related to post-quantum TLS, and more.

Entrust

The Path to 90-Day Certificate Validity: Challenges Facing Organizations

Feisty Duck Cryptography & Security Newsletter #111

European Union Starts to Confront Digital Platforms’ Dominance

TLS/SSL News & Notes

Josh Aas provides The Rustls TLS Library Adds Post-Quantum Key Exchange Support
Educated Guesswork discusses Design choices for post-quantum TLS
David Adrian states Post-quantum cryptography is too damn big
Let’s Encrypt and Introducing Sunlight, a CT implementation built for scalability, ease of operation, and reduced cost

Code Signing News & Notes

IETF draft RFC Use of Remote Attestation with Certification Signing Requests

The post SSL Review: March 2024 appeared first on Entrust Blog.


Shyft Network

Shyft DAO April Update: Ambassador Program Revamp and Improved NFT Utility

Hello, Chameleons! As we have moved into May, let’s dive into the exciting updates and changes we’ve rolled out in the past month. A New Chapter Begins for our Ambassador Program April saw the much-anticipated relaunch of our Ambassador Program, complete with a revised evaluation process emphasizing quality over quantity. And here’s the results that the relaunch has yielded so far: S

Hello, Chameleons! As we have moved into May, let’s dive into the exciting updates and changes we’ve rolled out in the past month.

A New Chapter Begins for our Ambassador Program

April saw the much-anticipated relaunch of our Ambassador Program, complete with a revised evaluation process emphasizing quality over quantity. And here are the results that the relaunch has yielded so far:

Successful Implementation: We’ve fully integrated the new qualitative approach, setting a higher standard for content and engagement.
Enhanced Post Quality: The new framework has already shown its effectiveness, with a noticeable improvement in the quality of contributions from our ambassadors.
Consistent Engagement: Most ambassadors have adapted well to the new guidelines, maintaining consistency and excellence in their submissions.

Continuous Support and Engagement

Understanding the adjustments that come with change, we’ve committed to holding regular meetings most Fridays. These sessions will help us clarify tasks, set clear expectations, and ensure everyone is comfortable and confident with the new framework.

We’ll continue holding these meetings until we are confident that all ambassadors have fully grasped the updated program.

Enhancing Our NFT Collection

Looking ahead, we remain focused on enhancing the utility and appeal of our NFT collections. We are deep into the planning stages for the second drop of our NFT collection, exploring innovative ways to add value and utility to each piece. Stay tuned as we shape these ideas into reality!

Concluding Thoughts

As we transition fully into the revamped Ambassador Program and advance our NFT initiatives, we are incredibly optimistic about the future. Here’s to continuing our journey of innovation, community engagement, and collective success.

The Shyft DAO community is committed to building a decentralized, trustless ecosystem that empowers its members to collaborate and make decisions in a transparent and democratic manner. Our mission is to create a self-governed community that supports innovation, growth, and diversity while preserving the privacy and sovereignty of its users.

Follow us on Twitter and Medium for up-to-date news from the Shyft DAO.

Shyft DAO April Update: Ambassador Program Revamp and Improved NFT Utility was originally published in Shyft DAO on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF88 Completes and DF89 Launches

Predictoor DF88 rewards available. DF89 runs May 9 — May 16, 2024. Passive DF & Volume DF are retired since airdrop 1. Overview Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor. Passive DF & Volume DF rewards are now retired. Each address holding veOCEAN was airdropped OCEAN in the amount of: (1.25^years_t
Predictoor DF88 rewards available. DF89 runs May 9 — May 16, 2024. Passive DF & Volume DF are retired since airdrop 1.

Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Passive DF & Volume DF rewards are now retired. Each address holding veOCEAN was airdropped OCEAN in the amount of: (1.25^years_til_unlock-1) * num_OCEAN_locked. This airdrop completed on May 3, 2024. This article elaborates.
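
Taking the formula above at face value, here is a quick worked example with made-up inputs.

```python
# Quick worked example of the airdrop formula quoted above:
# airdrop = (1.25 ** years_til_unlock - 1) * num_OCEAN_locked
years_til_unlock = 2.0        # example: two years of lock remaining
num_ocean_locked = 10_000     # example veOCEAN position

airdrop = (1.25 ** years_til_unlock - 1) * num_ocean_locked
print(f"{airdrop:.0f} OCEAN")  # 5625 OCEAN for this example
```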

Data Farming Round 88 (DF88) has completed.

DF89 is live today, May 9. It concludes on May 16. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF89 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs. To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF89

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF88 Completes and DF89 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


YeshID

The AI Revolution Comes to IAM: Introducing YeshAI

YeshID’s mission is to deliver identity & access solutions that are simple, secure, and trustworthy. Today, we’re releasing our first AI agent. It’s designed to simplify onboarding and offboarding workflows. ... The post The AI Revolution Comes to IAM: Introducing YeshAI appeared first on YeshID.
https://yeshid.com/wp-content/uploads/2024/05/Onboarding-Template-9-May-2024.mp4

YeshID’s mission is to deliver identity & access solutions that are simple, secure, and trustworthy. Today, we’re releasing our first AI agent. It’s designed to simplify onboarding and offboarding workflows. 

Why onboarding and offboarding? These workflows are mandatory for SOC2 compliance. Crafting playbooks for specific roles often becomes a major source of IAM admin friction and frustration.

In our YeshList blog post, we discussed how we tackled this problem when we created task templates. The first templates we made were for onboarding and offboarding. Introducing AI takes it a step further and makes it even easier.

Simply describe the type of playbook you need, and our AI will suggest a workflow optimized to meet SOC2 standards. This gives you a solid foundation, making editing and customization faster than starting from scratch.
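
For readers curious what the general describe-a-playbook pattern looks like in code, here is a generic sketch using the OpenAI Python client. It is not YeshID's implementation; the model name is a placeholder and any generated checklist still needs human review.

```python
# Generic sketch of the "describe the playbook, get a draft workflow" pattern
# using the OpenAI Python client. This is NOT YeshID's implementation; the
# model name is a placeholder and the output still needs human review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

description = (
    "Onboarding playbook for a remote sales engineer using "
    "Google Workspace, Slack, and Salesforce."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {
            "role": "system",
            "content": "Draft an onboarding task checklist as a numbered list, "
                       "covering account provisioning, access grants, and the "
                       "evidence an auditor would expect for SOC 2.",
        },
        {"role": "user", "content": description},
    ],
)
print(resp.choices[0].message.content)
```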

Why this matters to you:

Save time: Focus on the strategic aspects of IAM, not repetitive tasks.
Reduce risk: Get AI-driven help to craft comprehensive playbooks, minimizing errors.
Gain confidence: Every action is transparent and auditable, building trust in the process.

Experience YeshID LLM by visiting task templates in YeshID and learn how it can enhance your employee onboarding workflows. (Coming soon–support for offboarding and the messy middle.)

The Power of AI, Guided by Our Principles

We’ve been experimenting with LLMs for a while. We saw their potential to revolutionize identity and access management. Incorporating LLM reasoning capabilities will be a productivity multiplier, resulting in a better, more secure, and seamless experience for IAM managers and employees alike. We have a lot planned, and our plans are rooted in our core principles:

Simple: We believe technology should make your life easier, not harder.
Secure: Protecting your data is our utmost priority.
Trustworthy: We build on a firm foundation of transparency and reliability.

Responsible AI: Balancing Innovation with Caution

AI can reason and use tools to intelligently evaluate and automate access requests, enforce security policies, perform audits, handle employee onboarding and offboarding, resolve integration issues, provide real-time support, optimize spending, and more. Much more. It’s a powerful tool that can work 24/7, reduce human error, and provide invaluable insights.

But, as we have all learned, there are problems. AIs can hallucinate, be tricked, and create disaster at scale. That’s why we will only ship AI-powered features when we’re confident they are secure, reliable, and work as you expect them to.

AI as Your Trusted Partner

We believe in a future where AI empowers IAM managers to be more effective and secure. It’s a future where routine tasks are handled seamlessly, and you can focus on what matters most. Our commitment to our principles means we’ll bring you these innovations quickly and responsibly, ensuring your experience with YeshID remains simple, secure, and trustworthy. We move fast, and much more is coming soon.

If you’d like to see how YeshAI can help you make better onboarding decisions, check out the bot on our website here.

However, if you are ready to experience the power, security, and compliance of an AI-enhanced IAM, then sign up to get started with YeshID for free.

The post The AI Revolution Comes to IAM: Introducing YeshAI appeared first on YeshID.

Wednesday, 08. May 2024

auth0

Pro Plan Improvements for Self-Service Customers - May 2024

Offering expanded access and optionality for self-service plans and features

Indicio

Taking the next step in Zero Trust with Decentralized Identity

In a new White Paper from Indicio, Trevor Butterworth, Chase Cunningham, and Will Groah explain the importance of understanding how identity and security are connected and that a Zero Trust strategy for security can only be fully realized through a decentralized approach to identity. 

Zero Trust and Decentralized Identity

The concepts of Zero Trust and Decentralized Identity emerged in the early 2000s as responses to the escalating problems around data breaches and identity theft.

Zero Trust meant what it said: trust no one and continuously verify for narrow access to network resources.

Decentralized identity tackled the underlying reason digital identities couldn’t be trusted: the architecture of identity access and management couldn’t offer the privacy and security protection needed to protect personal data.

Any network defense could be broken if the identity of a user was compromised, and a system of storing user identity in centralized databases and using passwords for access was an endless source of risk.

Decentralized identity completes Zero Trust because it creates a seamless, cryptographic way to verify identity without centralizing personal data or needing logins and passwords, meaning that a network could always be certain of the identity of those trying to access their systems and could manage continuous verification in a maximally frictionless way.

Why then has Zero Trust largely ignored Decentralized Identity? Because security and identity teams often have different organizational responsibilities, even though a strong security posture means the two are inextricably linked. The authors argue, however, that without decentralized identity, Zero Trust cannot be fully embraced and implemented.

How can you get started?

The first step is to read the latest White Paper on this topic. The authors show how Zero Trust and decentralized identity can work together and outline the benefits your organization can realize.

Download today, or get in touch with our experts with questions or if you want to learn how you can bring zero trust and decentralized identity to your organization.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Taking the next step in Zero Trust with Decentralized Identity appeared first on Indicio.


1Kosmos BlockID

Unlocking the Future of Secure Authentication for Shared Workstations with 1Kosmos 1Key


The team here at 1Kosmos is excited to announce the release of 1Kosmos 1Key, a phishing-resistant biometric security key that supports authentication for unlimited users per device. The 1Key reduces costs and defeats security vulnerabilities associated with lost, stolen, and shared keys.

Safeguarding sensitive information with passwordless multi-factor authentication is difficult to deliver for Sensitive Compartmented Information Facilities (SCIF), manufacturing clean rooms, customer help desks, higher education labs, retail bank branches, healthcare providers, and other restricted environments where access to mobile devices is not permitted. We are addressing these challenges head-on: 1Key is an innovative, FIDO-compliant biometric security key designed to revolutionize access control in secure facilities, shared workstations, and kiosks.

The 1Key is the latest 1Kosmos authentication method that organizations can use to address these security challenges. The experience is seamless, includes Windows login support, and is compatible with any WebAuthn service or application.

The benefits of the 1Key are significant: security teams no longer need to assign every user a physical key, the device offers a cost advantage over conventional keys, and it can eliminate other forms of MFA (multi-factor authentication). Organizations can now easily defeat unauthorized access due to key sharing and improve cycle time for customer-facing business processes, especially where login to multiple systems is required.

The 1Kosmos Advantage

1Key is a FIDO2 passkey and CTAP2 compliant, providing interoperability across various systems and delivering the following capabilities:

- Users register their biometrics at the time of onboarding and then have passwordless access to any supported desktop. This register-once, use-anywhere model improves onboarding and eliminates passwords for enrolled users.
- Open, scalable, and interoperable passwordless authentication for organizations whose workers are dynamically assigned a workspace.
- User/role-specific access control policies that improve the authentication experience and allow enrolled employees to use their biometrics to log in to secured desktops without a password.
- Phishing-resistant passwordless access to any supported desktop, once users have registered their biometrics.
- Optionally, combine NIST (National Institute of Standards and Technology) 800-63-3 IAL2-certified identity with the authentication experience.
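For context on what "compatible with any WebAuthn service" implies on the relying-party side, here is a minimal, generic Python sketch of the core WebAuthn assertion check: verifying the authenticator's signature over authenticatorData plus the SHA-256 hash of clientDataJSON, using the public key stored at registration. This is an illustration of the standard flow, not 1Kosmos code, and it assumes an ECDSA P-256 credential.

# Generic illustration of the WebAuthn assertion check a relying party performs;
# not 1Kosmos code. Assumes an ECDSA P-256 credential public key stored at registration.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def verify_assertion(public_key: ec.EllipticCurvePublicKey,
                     authenticator_data: bytes,
                     client_data_json: bytes,
                     signature: bytes) -> bool:
    """Check the signature over authenticatorData || SHA-256(clientDataJSON)."""
    signed_payload = authenticator_data + hashlib.sha256(client_data_json).digest()
    try:
        public_key.verify(signature, signed_payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# A real relying party would also check the RP ID hash, origin, challenge,
# user-presence/user-verification flags, and the signature counter.

Because the private key never leaves the security key and the signature is bound to the origin via clientDataJSON, this is the mechanism that makes such authenticators phishing-resistant.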

With cybersecurity threats on the rise, it is more important than ever for organizations to prioritize security, no matter the environment. That is where 1Kosmos excels, and 1Key offers a revolutionary approach to authentication, combining the convenience of passwordless access with the security of biometric technology. For organizations looking to enhance their security posture in multi-user and restricted environments, 1Kosmos 1Key is the answer.

Read the datasheet to learn more about 1Kosmos 1Key.

The post Unlocking the Future of Secure Authentication for Shared Workstations with 1Kosmos 1Key appeared first on 1Kosmos.


liminal (was OWI)

Redefining Business Identity Verification: Innovations and Challenges


In this episode of State of Identity, host Cameron D’Ambrosi is joined by Diego Asenjo, Co-Founder & CEO of Mesh, for an in-depth discussion on the evolving challenges and breakthroughs in business identity verification. Learn how Mesh handles the complexities of establishing business legitimacy and expands the application of Business and Entity Verification beyond regulated industries. The conversation explores the role of automation in enhancing efficiency and accuracy, Mesh’s innovative approaches to overcoming the ‘cold start’ problem in digital identity networks, and the transformative impact of identity tokens and robust verification processes on streamlining and securing digital interactions.

The post Redefining Business Identity Verification: Innovations and Challenges appeared first on Liminal.co.


OWI - State of Identity

Redefining Business Identity Verification: Innovations and Challenges


In this episode of State of Identity, host Cameron D’Ambrosi is joined by Diego Asenjo, CEO & Co-Founder of Mesh, for an in-depth discussion on the evolving challenges and breakthroughs in business identity verification. Discover how Mesh navigates the complexities of establishing business legitimacy and broadens the scope of Business and Entity Verification across various industries, not just regulated ones. The episode explores the importance of automation in boosting efficiency and accuracy, Mesh’s strategies for tackling the ‘cold start’ problem in digital identity networks, and the transformative potential of identity tokens and robust business verification processes to streamline and safeguard digital interactions.


Ocean Protocol

Introducing the Ocean Zealy Community Campaign!


We’re happy to announce the Ocean Zealy Community Campaign, an exciting initiative designed to be inclusive and rewarding for the most active members of our community.

🌊 Program Objective

Our goal is to empower participants to actively engage in discussions, share insights, and create compelling content that amplifies Ocean Protocol’s visibility.

💰 Reward Pool

2,000 Ocean Tokens ($mOCEAN) on Polygon, to be rewarded to the top 50 users on our leaderboard

Program Structure

The Ocean Zealy Community Campaign is structured around a series of engaging tasks and activities, each offering participants the opportunity to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

Campaign Duration: 31st of May

🤔How Can You Participate?

Follow this link to join and earn:

https://zealy.io/c/onceaprotocol/questboard

Introducing the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

Sébastien Marteau on urban mobility and carsharing for green mobility.

We spoke with Sébastien Marteau, Chief Commercial Officer at Fluctuo, about the latest trends in carsharing, decarbonization in the transport sector, the future of the industry and the challenges it faces.

Are carsharing services becoming increasingly popular? If so, what factors are contributing to their success?

Yes, certainly. Carsharing is a cost-effective way of ensuring that you only pay for a car when you really need one. The economic benefits are real when you factor in all the costs associated with a car (purchase, maintenance, fuel, insurance, etc.).

In times of inflation, carsharing services represent a real gain for the consumer. They make it possible to share a vehicle. What’s more, carsharing is seen as an inclusive mode of transport, facilitating access to cars for people who don’t always have the means to own one.

Last but not least, carsharing services respond to ecological imperatives that are increasingly resonant in today’s society.

On a European scale, carsharing services saw strong growth in 2023. Data from our latest European Shared Mobility Index 2023 on free-floating services, which covers 115 cities in Europe, shows an increase in the size of shared car fleets of 25%, while the number of journeys has risen by 39%, especially with very dynamic countries such as Germany, Belgium, the Netherlands and Scandinavian countries such as Denmark and Norway.

European Shared Mobility Index 2023 – by Fluctuo. Learn more about the future of shared mobility, including a country-specific breakdown of mobility trends and the increasing importance of identity verification technology.

How is the carsharing economy shaping up today?

The carsharing market is a mix of two types of players: private players, often linked to carmakers and positioned on free-floating services; and associations or user groups covering station-based services. The latter are sometimes partly financed by public authorities to enable them to develop.

Nevertheless, costs remain one of the major issues in the carsharing economy, such as those inherent in parking or vandalism.

To develop this economy, there is a real need for operators to collaborate and share data, so as to develop the use of carsharing. The aim is to unite in order to give visibility to this emerging industry and contribute to the sector’s economic growth.

What do you see as the future of green mobility?

The future of green mobility is linked to the reduction in the number of private cars and the development of a modal shift toward active, low-carbon modes such as walking, cycling and public transport. According to the French National Institute for Statistics and Economic Studies (INSEE), the car still accounts for the majority of home-to-work journeys of less than 2km (52.9%), so we absolutely must accelerate the modal shift to low-carbon modes.

If Europe is to achieve its objectives of climate neutrality by 2050 and a net reduction in greenhouse gas emissions of 90% by 2040 compared to 1990 levels, emissions from the transport sector will have to fall by almost 80% by 2040.

Decarbonizing road transport, switching to zero-emission vehicles and improving public transport and shared mobility will also have a direct impact on air quality and, consequently, on the health of all Europeans. The transition to climate neutrality implies the promotion of sustainable and affordable mobility, thanks to appropriate urban planning, which will be important in enabling more public transport, carsharing, car-pooling and active mobility such as walking and cycling for short-distance journeys. Ensuring access for all to affordable and accessible net-zero energy and mobility solutions will be an essential part of the transition. Carsharing will also need to be integrated into public transport plans and develop multimodality if it is to be democratized.

As more and more cities realize the importance of zero-emission mobility and take steps in this direction, how can carsharing services contribute to green mobility?

Carsharing is an accelerator of de-motorization: by offering a simple, accessible solution for journeys requiring a car, carsharing enables its users to replace their personal vehicle. A privately owned vehicle is estimated to remain stationary 95% of the time. Taking France as an example, we estimate that a “loop” carsharing vehicle replaces between 5 and 8 private vehicles. In some countries, it can even replace 12-15 cars. So there’s a real need to reduce the number of private vehicles to cut carbon emissions.

Carsharing also accelerates multimodality: carsharing users make greater use of public transport (+18%), trains (+29%), bicycles (+22%) and walking (+38%). The integration of carsharing services in rail and bus stations is very important.

Do carsharing operators have to comply with any specific regulations today?

A carsharing operator is often required to obtain a license to operate in a given territory, as well as to pay for the parking of its fleet. From the user’s point of view, the operator is also obliged to check that the user has a valid license, and ultimately to verify the user’s identity.

Cities and transport authorities have found it difficult to regulate carsharing. Regulations have often been too restrictive, limiting the operational capabilities of these services and affecting their business model. They have tended to focus on limiting the impact of the system, rather than working with operators to maximize the overall benefits of the mobility system. In some cases, the limits imposed on zone access and parking have led operators to withdraw.

What are the main challenges facing carsharing operators today?

Carsharing operators face a number of challenges:

- Vandalism is having a real impact on the profitability of some operators, resulting in very expensive insurance policies. In the UK, for example, it seems that no private insurers are entering this market any more. Detecting incidents is therefore essential, so that the user is reimbursed for any damage caused.
- Migration to electric vehicles. Only 11% of cars in stations in France are electric, compared with 79% for free-floating. This migration will involve educating users about their mode of operation.
- Profitability. Many have struggled to become profitable due to high operating costs, poor coordination with competitors, narrow scope of action and unfavorable sharing models with municipalities (inadequate parking agreements).

What solutions are carsharing operators looking for to facilitate and secure the use of their services?

With the explosion in generative AI, driver’s license fraud and document forgery are on the rise. Given the strong growth of carsharing services, there is a major challenge around the identification and authentication of users in vehicles. It is vital for operators to integrate very strict identity verification solutions; otherwise they risk undermining the profitability of their business model.

Want to know more about the future of micromobility? Read our interview with François Hoehlinger.

By Mallaury Marie, Content Manager at IDnow
Connect with Mallaury on LinkedIn

Want to know more about the mobility industry? Discover the major trends in the mobility industry and the innovative models and solutions available to design a seamless user experience. Get your free copy now

liminal (was OWI)

The Evolving Landscape of Customer Identity and Access Management

The CIAM Market (Customer Identity and Access Management) is undergoing significant transformations. As businesses increasingly recognize CIAM’s critical role in securing and managing customer identities on digital platforms, this market is poised for substantial growth—from $6.2 billion in 2024 to $10.8 billion by 2028. This growth underscores the evolution of CIAM solutions from basic tools to essential, strategic business solutions.

Despite substantial ROI from CIAM—such as operational cost reductions and revenue growth—organizations face challenges with their existing systems. A significant issue is siloed and unauthoritative identity data management, further complicated by data privacy regulations that disrupt identity management across organizational divisions. This challenge is magnified by the need to balance stringent security with a fluid user experience, rendering traditional, inflexible CIAM solutions less effective.

With the market’s maturity, dissatisfaction with standard, non-customizable CIAM solutions is growing. About 49% of respondents, particularly from financial services, are considering switching providers to seek more advanced, customizable solutions that better integrate across applications and manage identities seamlessly. This drive for innovation is fueled by emerging security and compliance challenges and the necessity to support scalable operations and enhance customer engagement and retention effectively.

The CIAM market is shifting towards more flexible, customizable, and integrative solutions. Providers differentiate themselves by incorporating advanced features such as user behavior analytics, adaptive authentication, and API-first capabilities that allow organizations to adapt their CIAM systems without complete overhauls. These “plug-and-play” solutions enable businesses to seamlessly integrate new functionalities into existing systems, catering to specific operational and regional needs.

Additionally, CIAM’s strategic importance is increasingly recognized across business units, fostering more collaborative approaches in implementing and managing these solutions. With 89% of surveyed organizations acknowledging reduced business costs through CIAM and a significant percentage reporting enhanced revenue and customer retention, the strategic value of CIAM is clear.

As the market continues to evolve, organizations are urged to reassess their CIAM strategies, considering solutions that offer the flexibility to adapt to changing legal and technological landscapes, thus ensuring continuous improvement in customer management and operational efficiency.

What’s Next for Financial Services

The financial services sector is currently at a significant turning point regarding Customer Identity and Access Management (CIAM). A recent survey has revealed that almost half of the businesses within this industry are contemplating a shift towards more advanced CIAM solutions. This indicates that the next phase would involve adopting CIAM systems that enhance security and customer experience and provide a unified view of customer identities across all platforms. This shift is crucial for financial institutions that seek to replace legacy systems with more integrated and flexible solutions that can adapt to diverse regulatory environments and complex customer interactions.

As these organizations focus on scalability and compliance, they will likely lead the demand for next-generation CIAM capabilities. These capabilities may include improved identity verification processes and more sophisticated data management technologies. Transitioning to next-generation CIAM systems is essential in supporting the financial services sector’s unique needs, such as managing sensitive financial data and complying with stringent global regulations while ensuring a seamless customer journey from onboarding to ongoing engagement. Financial institutions must ensure that they can maintain the trust and confidence of their customers by providing secure and reliable services. As such, adopting next-generation CIAM capabilities will enable these organizations to deliver exceptional customer experiences while ensuring the highest level of data security and privacy.

The shift towards advanced CIAM systems will allow financial institutions to streamline operations, reduce costs, and improve overall customer satisfaction. Adopting these systems will also play a critical role in helping financial institutions stay ahead of the competition and meet the evolving needs of their customers.

Related Content:
- Market & Buyer’s Guide for Customer Identity and Access Management
- From OTPs to Passkeys: Navigating the Customer Authentication Landscape
- Q1 Market Trends (customer access required)

The post The Evolving Landscape of Customer Identity and Access Management appeared first on Liminal.co.


Shyft Network

A Guide to FATF Travel Rule Compliance in Nigeria

The minimum threshold for Nigeria’s Crypto Travel Rule is $1,000 (1,380,680 NGN). Transactions below this threshold require only the names and wallet addresses of the parties involved. All crypto transfers must be treated as cross-border and adhere to stringent wire transfer requirements.

Nigeria, grappling with high inflation and unemployment, has one of the highest crypto adoption rates. Crypto ownership in the country is estimated to be as high as 46%, with almost 13 million cryptocurrency holders, more than any other African country.

Nigerian regulators have yet to provide a clear framework. However, authorities have been working on one and have put in place strict anti-money laundering (AML), countering the financing of terrorism (CFT), and counter-proliferation financing (CPF) compliance rules.

Crypto Travel Rule in Nigeria

In 2022, the Nigerian SEC outlined general requirements for VASPs, covering Know Your Customer (KYC), Customer Due Diligence (CDD), and the FATF Travel Rule.

Under the FATF Travel Rule, regulators have issued guidelines for the licensing and registration of VASPs, allowing banks to provide accounts and offer services to these entities in Nigeria. The Central Bank can also impose sanctions such as fines, license suspension, and activity bans.

To align with international attitudes, the regulator has also created an AML/CFT/CPF framework that requires entities to appoint a compliance officer, maintain a compliance manual, and implement employee education and training programs.

For CDD, entities must conduct ongoing due diligence, perform Enhanced Customer Due Diligence (ECDD) for higher-risk clients, and verify beneficial ownership at the outset of business relationships or transactions. They must also establish risk management systems to identify Politically Exposed Persons (PEPs) and monitor and report any suspicious transactions. All transaction records and related information must be retained for at least five years.

Compliance Requirements

All crypto transfers are to be treated as cross-border transfers and must adhere to the requirements for cross-border wire transfers. These obligations also extend to non-VASP Capital Market Operators (CMOs) handling crypto transfers for customers.

Originating VASPs are required to obtain, verify, and retain complete originator and beneficiary information. This information must be transmitted securely and immediately to the beneficiary VASP and provided to authorities upon request.

Beneficiary VASPs must also secure and verify complete originator and beneficiary information, ensuring its accuracy and availability to authorities when needed.

CMOs must acquire and verify the following originator information:

- Full name of the originator
- Originator’s wallet address
- Physical address, national identity number, incorporation number, or business registration number if the originator is not a natural person

For beneficiaries, CMOs need to collect:

- Beneficiary’s name
- Beneficiary’s wallet address

These requirements apply to transactions exceeding $1,000 (1,380,680 NGN).

Transactions below this threshold are exempt but must still include the names and wallet addresses of both the originator and the beneficiary.
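As a purely illustrative sketch of how a VASP or CMO might encode this threshold logic, the minimal Python below decides which fields are still missing for a given transfer. The field names, data structures, and threshold constant are assumptions for the example, not an official Shyft Network or Nigerian SEC schema.

# Hypothetical sketch of the threshold rules described above.
# Field names and structures are illustrative assumptions, not an official schema.

from dataclasses import dataclass
from typing import List, Optional

THRESHOLD_USD = 1_000  # transfers above this require fuller originator identification


@dataclass
class Party:
    name: str
    wallet_address: str
    physical_address: Optional[str] = None   # required above threshold (natural persons)
    registration_or_id: Optional[str] = None  # national ID / incorporation / business reg. no.


def missing_fields(amount_usd: float, originator: Party, beneficiary: Party) -> List[str]:
    """Return the fields still needed before the transfer can be sent."""
    missing: List[str] = []
    # Names and wallet addresses are required for every transfer, regardless of size.
    for label, party in (("originator", originator), ("beneficiary", beneficiary)):
        if not party.name:
            missing.append(f"{label}.name")
        if not party.wallet_address:
            missing.append(f"{label}.wallet_address")
    # Above the threshold, the originator must be identified in more detail.
    if amount_usd > THRESHOLD_USD:
        if not (originator.physical_address or originator.registration_or_id):
            missing.append("originator.physical_address or originator.registration_or_id")
    return missing


if __name__ == "__main__":
    alice = Party(name="Alice", wallet_address="0xabc...")
    bob = Party(name="Bob", wallet_address="0xdef...")
    print(missing_fields(1_500, alice, bob))  # flags missing originator identity details
    print(missing_fields(200, alice, bob))    # [] -- names and wallet addresses suffice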

CMOs are also required to verify this information if there is any suspicion of money laundering or terrorist financing.

Moreover, VASPs, according to the Nigerian securities regulator, are required to be incorporated and maintain an office within Nigeria.

In March 2024, the regulator suggested a fivefold increase in the registration fee to be submitted alongside license applications. Moreover, the CEO or managing director of the crypto exchange applying for a license must also reside in Nigeria.

These proposed rule changes will apply to foreign operators targeting Nigerian users but not to financial portals or tech firms providing supporting infrastructure or software to crypto exchanges.

Global Context and Comparisons

Over the past few years, Nigeria has initiated several steps to regulate the crypto space. These measures aim to remove Nigeria from the FATF’s Gray List. The IMF has found that gray-listing can have a “significant negative impact on a country’s capital flows,” with capital inflows declining by as much as 7.6 percent of GDP.

The latest FATF report on the status of Travel Rule implementation rated Nigeria as ‘partially compliant’ based on its performance in several areas:

- Conducting a crypto and VASP risk assessment
- Passing a law on VASP registration
- Implementing AML/CFT measures
- Adopting the travel rule for VASPs

Concluding Thoughts

Over the past year, a more than 68% drop in the value of its fiat currency, the Naira, along with foreign exchange shortages, high remittance costs, recession, and an unstable political situation, has increased the appeal of crypto assets in Africa’s second-largest economy.

Driven by these factors, the Sub-Saharan African country has taken steps to establish a clear regulatory framework for crypto assets in accordance with international standards. This means that both local and foreign crypto companies must adopt these rules to cater to and thrive in Nigeria’s growing crypto market.

FAQs on Crypto Travel Rule Nigeria

Q1: What is the Crypto Travel Rule in Nigeria?

The Crypto Travel Rule in Nigeria requires that any crypto transaction exceeding $1,000 must include detailed originator and beneficiary information to prevent money laundering and terrorism financing.

Q2: What are the requirements for crypto transactions under this rule?

For transactions above the $1,000 threshold, originating and beneficiary Virtual Asset Service Providers (VASPs) must obtain, verify, and retain complete information about the parties involved and ensure it is transmitted securely.

Q3: What happens with transactions that do not meet the minimum threshold for the Crypto Travel Rule?

Transactions below the $1,000 threshold are exempt from the detailed reporting requirements but must still include the names and wallet addresses of the originator and beneficiary.

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

A Guide to FATF Travel Rule Compliance in Nigeria was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Ontology Weekly Report (April 30th — May 6th, 2024)

Ontology Weekly Report (April 30th — May 6th, 2024)

Welcome to the latest edition of the Ontology Weekly Report, where we keep you up to date on the latest developments, achievements, and community activities within the Ontology ecosystem. Here’s a quick recap of the past week:

🎉 Highlights
- New Quests on Zealy: Exciting new quests have been launched in Zealy, inviting our community members to engage and earn rewards.

Latest Developments
- DID and Privacy Article by Geoff: Read our latest publication on decentralized identity (DID) and privacy, written by Geoff, to gain deeper insights into the future of secure digital identities.
- Ontology Metrics: Look at some of Ontology’s latest metrics! Our journey together has been incredible, and we’re just getting started.
- Space with Sugar Kingdom NFT: We had a fantastic space with Sugar Kingdom NFT, discussing the ONT DID Fund and how it empowers Web3 projects.
- Ontology Odyssey Quest on Zealy: The Ontology Odyssey quest is officially live on Zealy! Join the adventure and explore the wonders of our ecosystem.

Development Progress
- Ontology EVM Trace Trading Function: Now at 87%, we’re making steady progress towards enhancing our trading capabilities within the EVM space.
- ONT to ONTD Conversion Contract: Development continues, now at 52%, to ensure seamless conversion between ONT and ONTD.
- ONT Leverage Staking Design: Progressing at 37%, this feature will soon offer innovative staking options to our community.

Product Development
- New ONTO Version Live: The latest version of ONTO is now live, bringing new features and enhancements to improve your wallet experience.

On-Chain Activity
- 177 total dApps on MainNet as of May 6th, 2024, maintaining a dynamic and robust ecosystem.
- 7,762,930 total dApp-related transactions on MainNet, marking an increase of 1,812 from last week.
- 19,422,524 total transactions on MainNet, showing an impressive increase of 25,165 from last week.

Community Growth
- Engaging Community Discussions: Our social media platforms, including Twitter and Telegram, are buzzing with the latest developments and community interactions. Stay connected and join the conversation!
- Telegram Discussion on Privacy: Led by Ontology Loyal Members, this week’s discussion focused on “Empowering Privacy with Anonymous Credentials,” providing valuable insights into the future of privacy.

Stay Connected 📱

Keep up with Ontology by following us on our official social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technologies.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Weekly Report (April 30th — May 6th, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Sara Assicurazioni Elevates Security for Employee and Partner Communities | Ping Identity


In the bustling realm of insurance, secure yet efficient access to information and applications is essential to success. Additionally, insurers need to combat fraud, meet regulatory requirements, protect privacy and deliver innovative and customized solutions to their various users. Italy’s leading insurer, Sara Assicurazioni, is migrating from a legacy identity platform to Ping Identity in order to meet these goals.

 

With a diverse set of insurance offerings, Sara Assicurazioni has a complex employee and partner community that helps support those offerings, including full-time employees, doctors, car repair shops and more. The organisation saw that in the near future its existing IAM platform would struggle with flexibility, scale, and evolving security requirements, and would ultimately hinder innovation. In order to stay on the leading edge of identity, keeping employees and partners happy while protecting them against ever-increasing cyber threats, Sara Assicurazioni decided to retire its existing infrastructure and upgrade to Ping’s modern and powerful identity platform.

Tuesday, 07. May 2024

liminal (was OWI)

The Rising Threat of Deepfakes: Detection, Challenges, and Market Growth

Deepfakes, sophisticated digital manipulations using advanced AI, pose significant risks such as misinformation and privacy breaches. Their increasing prevalence demands effective detection methods and comprehensive regulatory measures. The industry dedicated to this task is expanding rapidly, driven by regulatory developments, collaborations by major tech companies like Adobe and Microsoft, and the accessibility of creation tools. While the growth in deepfake technology introduces opportunities in entertainment and digital marketing, it more critically raises threats such as fraud and identity theft, highlighting the need for detection methods and regulatory frameworks to lessen the impact of harmful deepfakes. The market for deepfake detection, crucial in combating the rising threat of deepfakes, is poised for significant growth, with projections showing an increase from $5.5 billion in 2023 to $15.7 billion by 2026, at a CAGR of 42.0%.

The widespread distribution of deepfakes, highlighted by over 500,000 instances on social media in 2023, underscores the urgent need for sophisticated detection technologies. With fake news spreading six times faster than real news, the integrity of digital content is at risk, reinforcing the need for improved detection capabilities and increased public awareness. Despite humans being able to detect fake speech 73% of the time, the challenge of distinguishing manipulated content effectively is critical, driving the growth of the deepfake detection market. According to our analysis, the deepfake fraud prevention and detection market growth will be fueled by advancements in deepfake technology and the growing demand for effective countermeasures.

Deepfake Detection Use Cases

To maintain authenticity, truth, and security, deepfake detection is essential across several domains, such as media, entertainment, politics, and fraud prevention. Primary use cases include:

- Fraudulent Content Creation: Detecting and mitigating the creation of false content.
- Identity Theft and Impersonation: Preventing misuse of personal identities.
- Cybersecurity and Phishing Prevention: Blocking phishing attempts that utilize manipulated media.
- Manipulated Media: Identifying alterations in videos and images.
- Misleading/Fake Videos: Addressing and correcting videos that spread misinformation.
- Content Authenticity and Verification: Ensuring the genuineness of content circulated online.
- Protecting Privacy: Safeguarding personal information from unauthorized use.
- Preventing Blackmail and Extortion: Stopping threats that leverage fabricated content.

Challenges and Opportunities in Deepfake Fraud Detection

Increased awareness and initiatives aimed at detecting deepfakes are helping to counter their growing threat. However, the lack of adequate regulation and accountability allows the fraudulent potential of deepfakes to continue escalating.

Key challenges include:

- Keeping up with rapid advancements in deepfake technology.
- Developing detection methods that operate effectively in real-time.
- Staying ahead of continually evolving deepfake methods.
- Scaling detection solutions to meet widespread needs.

How it Works

The deepfake detection process involves users uploading media, which is then analyzed by models using specialized algorithms to generate a score that determines whether the content is a deepfake. 
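To make that upload–analyze–score flow concrete, here is a minimal, hypothetical Python sketch. The scoring function and threshold are placeholders for illustration only, not any vendor's actual detector.

# Minimal sketch of a generic upload -> analyze -> score deepfake check.
# The scoring model below is a placeholder; a real system would run trained
# detectors over face crops, voice segments, and file metadata.

from dataclasses import dataclass

DEEPFAKE_THRESHOLD = 0.5  # assumed decision boundary for this illustration


@dataclass
class DetectionResult:
    score: float        # 0.0 = likely authentic, 1.0 = likely synthetic
    is_deepfake: bool


def score_media(media_bytes: bytes) -> float:
    """Placeholder scorer: a real implementation would fuse the outputs of
    learned visual, audio, and metadata detectors into one score."""
    return 1.0 if b"synthetic-marker" in media_bytes else 0.1  # toy heuristic only


def analyze_upload(media_bytes: bytes) -> DetectionResult:
    score = score_media(media_bytes)
    return DetectionResult(score=score, is_deepfake=score >= DEEPFAKE_THRESHOLD)


if __name__ == "__main__":
    result = analyze_upload(b"...raw uploaded video bytes...")
    print(f"score={result.score:.2f} deepfake={result.is_deepfake}")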

Why Now

Deepfake detection technologies are crucial due to the increasing difficulty in distinguishing between authentic and manipulated content, the misuse potential in fraud and identity theft, and the rising accessibility of deepfake creation tools. Here are some considerations:

Difficulty in Distinguishing Real from Fake
- 61% of Americans need help differentiating between real and fabricated videos, indicating a significant challenge in verifying authenticity.
- A survey by Royal Society Publishing showed that only 21.6% of participants could accurately identify a deepfake, underscoring the need for better detection methods.

Misuse in Fraud and Identity Theft
- The use of AI for face-swapping has led to increases in biometric fraud, with selfie fraud doubling and biometric fraud quintupling over the past year.
- A notable incident involved a deepfake video of Ukrainian President Zelensky, which could have had severe implications for national security.

Ease of Access and Creation
- The creation of deepfakes has surged, evidenced by a 30-fold increase in occurrences from 2022 to 2023.
- Deepfakes are now easier to create, with reports indicating that a convincing deepfake video can be produced in under 25 minutes using just one clear image.

The widespread nature of deepfakes and their potential dangers highlight the growing market for sophisticated deepfake detection technologies to combat misinformation and protect individual and national security.

A Look Ahead

Our analysis demonstrates that effectively addressing the risks of deepfake manipulation requires a comprehensive strategy. A practical detection method integrates metadata analysis, facial and voice recognition, and behavioral cues. This approach systematically examines file data and scrutinizes patterns to identify inconsistencies and irregularities, providing a dependable mechanism for detecting deepfakes. 

The rise of deepfake technology represents a critical challenge that spans multiple sectors, necessitating a comprehensive approach for effective management and mitigation. The advancement of deepfake detection technologies, underpinned by strong regulatory measures and increased public awareness, is essential to safeguard the integrity of digital media. As we progress into an era where digital authenticity can be easily manipulated, the demand for sophisticated, accessible, and effective detection systems is paramount. By enhancing cooperation among tech industry leaders, improving regulatory frameworks, and educating the public, we can effectively counter the risks associated with deepfakes. Moving forward, the deepfake detection market is poised for significant growth and will play an important role in preserving digital authenticity and protecting both individual and national security. 

Related Content:

- Outside-In Report: Combating Deepfakes: Advancing Detection and Regulation in the AI Era (Link Premium Users Only)
- Article: Facial Biometrics Trends and Outlooks
- Market and Buyer’s Guide for Customer Authentication
- Market and Buyers’ Guide for Transaction Fraud Prevention in E-commerce

The post The Rising Threat of Deepfakes: Detection, Challenges, and Market Growth appeared first on Liminal.co.


HYPR

HYPR and Microsoft Partner on Entra ID External Authentication Methods

Last week, Microsoft announced the public preview of external authentication methods (EAM) for Entra ID. As a close partner, HYPR has worked extensively with Microsoft on the new offering and we are excited to be one of the first external authentication method integrations. This means organizations can now choose HYPR phishing-resistant authentication for their Entra ID MFA method, use

Last week, Microsoft announced the public preview of external authentication methods (EAM) for Entra ID. As a close partner, HYPR has worked extensively with Microsoft on the new offering and we are excited to be one of the first external authentication method integrations. This means organizations can now choose HYPR phishing-resistant authentication for their Entra ID MFA method, use it in Entra ID Conditional Access policies, Privileged Identity Management, and more.

“Our goal at Microsoft Security is to empower our customers with cutting-edge security solutions. The integration of Entra ID external authentication methods with HYPR reflects this mission, providing our customers with the flexibility to employ their preferred MFA methods, including phishing resistant MFA, to defend their environments against evolving threats."

– Natee Pretikul, Principal Product Management Lead, Microsoft Security

What Are Entra ID External Authentication Methods?

The external authentication methods feature was developed to replace the current Entra ID custom controls capability. The EAM solution uses industry standards, supports an open model, and provides far greater functionality than custom controls. With EAM, organizations can use their preferred authentication provider to satisfy MFA policy requirements and manage it the same way as Microsoft-native authenticators.

Key Benefits of the HYPR and Microsoft External Authentication Methods Integration

The new integration benefits both HYPR and Microsoft customers on multiple levels.

How the HYPR Entra ID external authentication method integration works

Greater Flexibility and Choice For Your Entra ID Environments 

With the HYPR–EAM integration, organizations can seamlessly use HYPR as an Entra ID authentication method to meet multi-factor authentication requirements, without the need for federation. Unlike in federation configurations, the user identity is established and managed in Microsoft Entra ID. Essentially, HYPR’s leading phishing-resistant MFA becomes a native-like authentication option in the Entra ID ecosystem, and can be invoked to satisfy MFA requirements for Conditional Access policies, Privileged Identity Management (PIM) and Identity Protection sign-in risk policies.

Consolidate and Unify Authentication Processes

Many enterprises have complex IT environments with multiple identity providers and sign-in processes. These systems operate in silos, creating security blind spots, inefficiencies, and inconsistent user experiences. By choosing a platform-agnostic solution like HYPR, organizations can use the same secure, phishing-resistant authentication across IAM systems and workflows. HYPR already provides tight integrations with Microsoft Entra ID; the new EAM feature expands that connection. It empowers organizations to further consolidate their identity security and create consistent, unified MFA experiences for their users across all Microsoft and non-Microsoft environments.

Improve Visibility and Control

The Microsoft external authentication method integration puts some additional powerful tools into the hands of HYPR customers. Administrators and security teams can view all HYPR authentication events in the Entra ID admin center when HYPR is used as an EAM method.

Teams also can define highly granular Conditional Access controls, based on the type of authentication factor a user applies as they authenticate with HYPR. For example, access policies can vary depending on whether someone uses a fingerprint, facial recognition or PIN, to add even stronger levels of security assurance for specific use cases or resources.

Learn More About HYPR as a Microsoft Entra ID External Authentication Method

Microsoft Entra ID EAM is now in public preview. Read Microsoft’s technical documentation for more details about how this feature works. Current HYPR customers looking to join the public preview should contact their customer success representative. If your organization does not yet use HYPR, but you are interested in using it as an external authentication method, talk to our team!


Holochain

Holochain 0.2.8 & The Weave

Dev Pulse 139

We’ve got a follow-up to Holochain 0.2.6, which was a big one (it was the first recommended release in the 0.2 line). This one brings a new ability to manage clone cells from a coordinator zome (yes, you read that right!), and the Rustdoc is now building again (a blessing to any devs who have needed to read the 0.2 documentation). There are some breaking changes, but only for those who are using the conductor APIs directly rather than through a client.

I also want to mention The Weave, a project that my colleague Eric and a few close collaborators have been working on in various forms over the years. Originally demoed as a groupware container called We, the philosophy behind it has matured into a full vision for end-user-composable applications. What the heck does that mean? Skip to the bottom of this article to find out.

Holochain 0.2.8: Fix WebSocket binding

Release date: 30 April 2024
HDI compatibility: 0.3.x
HDK compatibility: 0.2.x (upgrade to 0.2.7 or newer to get the new features)
JS client compatibility: 0.16.x
Rust client compatibility: 0.4.x

This is a patch release that fixes the port binding issue in 0.2.7 (#3731). We now recommend this release for general development.

Holochain 0.2.7: Backend clone management, Websocket fixes, documentation

NOT RECOMMENDED FOR USE
Release date: 9 April 2024
HDI compatibility: 0.3.x
HDK compatibility: 0.2.x (upgrade to 0.2.7 to get the new features)
JS client compatibility: 0.16.x
Rust client compatibility: 0.4.x

NOTE: We’re not recommending this release for development work or distribution to end-users. We introduced a change to how Holochain binds the conductor API websocket to local interfaces, which caused it to get confused when the local interface supports both IPv4 and IPv6. If you scaffolded a hApp with Holochain 0.2.7, we recommend you run cargo update and nix flake update right away to save yourself some hassle.

That said, there are some interesting changes in here that you should know about!

The first big news is that clone management API functions, which were previously only available in the app and admin APIs, are now available as HDK functions. There are a few differences:

- create_clone_cell only takes an existing cell ID, not a role ID.
- delete_clone_cell was previously only available to the admin API.

The second point probably needs a bit of explaining. Unlike disable_clone_cell, delete_clone_cell is a somewhat destructive operation, because it makes it impossible to bring a clone cell and its data back to life (although the data is still technically recoverable if you know how to open up the database). That’s why it’s never been part of the app API where any old client could access it. These new HDK functions, by contrast, can only operate on the hApp that the calling cell lives in. We’re working on locking down the app API so only clients authorised to access an app can actually access it, which means we could safely make delete_clone_cell available to clients, but in the meantime this gives you a way to securely clean up cells once they’re no longer needed.

If you think you’d find it useful to manage clones from the back end, we’d like you to try out these new HDK functions and tell us what you think. We may, for instance, change create_clone_cell to match how the corresponding app API function works in 0.3 or 0.4.

The other big news is that we’ve cleaned up the WebSocket code, so the app and admin APIs should have fewer bugs such as dropped connections (particularly when the conductor hits 100% CPU consumption). This does result in breaking changes, but only if you’re writing a Rust-based app that consumes the holochain_websocket crate directly. Anyone who calls the conductor APIs instead (which is most of you, including those who use the officially supported JavaScript and Rust clients) won’t be affected. If you don’t know whether you should care about the breaking changes, then you’re one of the ones who shouldn’t 🌝

Lastly, holochain --build-info now tells you the Lair keystore version.

Read about it all in the changelog, and make sure you scroll down because all the above updates happened in the two RCs before this release.

Holochain 0.1 retiring, 0.3 entering RC phase, 0.4 release cycle started

Now that Holochain 0.2.8 is the recommended release, the 0.1.x series won’t receive any more updates, and we’ll eventually shut down the centralised infrastructure that supports 0.1. We don’t have a specific time for shutting it down, but it should be considered end-of-life. We’re recommending that all developers still using Holochain 0.1 update their hApps to Holochain 0.2.8 now and distribute the update to their users as soon as possible.

Now, on to Holochain 0.3. Most hApp devs are already using a weekly release of Holochain 0.3 in their work, for instance Moss (see below). It contains a lot of stability and performance improvements that will be a welcome upgrade for devs and users. 

We’ve decided it’s time to acknowledge this reality and put an API freeze on Holochain 0.3.x. The first release candidate should be out soon. The develop branch of the holochain/holochain repo, and the weekly channel of Holonix, are now tracking the 0.4.x release series. If you’re already on 0.3, you’ll want to change the channel from weekly to 0_3_rc so you don’t get bumped accidentally to 0.4 on your next nix flake update:

  # ...
  inputs = {
    versions.url = "github:holochain/holochain?dir=versions/0_3_rc";
    # ...
  };

Once the first 0.3 release is recommended for general use, 0.2.x will go into maintenance mode, only receiving critical bug fixes as needed.

Note: Some devs may have seen a 0.3.0 release on crates.io or GitHub. This was an automation error, and we’ve removed it from crates.io, so the first official version of Holochain 0.3 will be 0.3.1 or higher.

The Weave: growing thrivable social fabric

I’m a big fan of tools. That’s why I like using my computer: it’s an infinite box of tools that I can use to get work done, communicate with friends and colleagues, create art, organise family memories, etc. And I like tools that feel like tools — small programs that do one thing, do it well, work well with each other, and use common data formats. This is brilliant, if you ask me, and it makes me feel like I have superpowers whenever I need to do something complicated.

I am not a big fan of silos. It frustrates me that my colleagues and I have to log into six or seven different cloud platforms just to get a Dev Pulse published. And each of them has its own separate content organisation system, discussion system, etc, etc. We end up having to recreate the same structures in multiple places. (And so do the developers — if you’re creating a kanban app, you can’t just drop Slack into it and get commenting for free; you have to write your own commenting feature from scratch.)

As far as I can tell, this silo thing isn’t designed to serve my colleagues and me. Perhaps it isn’t even designed. It’s probably just an accidental outcome of companies working with what they know. Or maybe it’s some business folks deciding they can only make money if they hold their customers tight. But whatever the reason, it’s wasteful and annoying.

That’s why, when my colleague Eric showed me what he and the Lightningrod Labs team were doing with We (now called Moss) in February, it was like he’d opened the door in a stuffy room. This was exactly how I’d wished my applications worked for at least fifteen years. Sure, it was a little rough around the edges — Eric acknowledged that — but I could see the potential lying just beyond the bugs and unimplemented features.

The vision of The Weave: a protocol for social spaces

I won’t get too deep into the philosophy — I’ll let The Weave’s new website do that — but I’ll give you a little teaser: what HTTP did for information, Holochain is trying to do for social spaces. But HTTP alone wasn’t enough. It needed HTML, CSS, and JavaScript to make an experience that people would want to use. In a similar way, the Weave is a formal specification that overlays a set of design principles to make Holochain more accessible for devs building social tools.

HTML introduced pages and, most importantly, hyperlinks as a way to connect them together. This made the Web make sense. It was just enough structure for developers to run with, and it resulted in an explosion of creativity that gave us the web of today. But neither HTTP nor HTML gave us an open standard for social spaces, so these concepts got enclosed by big players like Facebook and Twitter.

Holochain gives us open standards for permissionless social spaces. And what HTML et al did for HTTP, the Weave does for Holochain: assets that belong to your group can be linked together with ease.

How does this work?

A tool is something that a group can install, and it gives them the power to work with assets of a certain type (chat threads, images, files, folders, kanban cards, calendar entries, etc). Wherever it makes sense, an asset can have links on it that connect to other assets provided by the same tool — or some other completely different tool.

And that second point is what changes everything. Now I can add a conversation thread to a calendar entry if I like. I can drop links on my kanban card that point to cells in a table (there’s already an Airtable clone available). We can drop cards, boards, emails, messages, conversations, and drawings into a folder hierarchy that’s shared across all our tools. Or three or four independent folder hierarchies, if we like.
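To make the composability idea concrete, here is a minimal Python sketch of the data model described above. The class names and fields are invented for illustration; they are not the actual Weave types or API.

from dataclasses import dataclass, field

# Minimal sketch (not the real Weave API): every asset gets a uniform
# reference, so links can point across tools.

@dataclass(frozen=True)
class AssetRef:
    tool: str       # e.g. "kanban", "calendar", "spreadsheet"
    asset_id: str   # identifier within that tool

@dataclass
class Asset:
    ref: AssetRef
    title: str
    links: list = field(default_factory=list)

    def link_to(self, other: "Asset") -> None:
        """Attach a cross-tool link; the target can come from any tool."""
        self.links.append(other.ref)

# A kanban card linking to a calendar entry and a table cell:
card = Asset(AssetRef("kanban", "card-42"), "Draft the next Dev Pulse")
meeting = Asset(AssetRef("calendar", "2024-05-09-standup"), "Editorial standup")
cell = Asset(AssetRef("spreadsheet", "sheet1-B7"), "Publication checklist row")
card.link_to(meeting)
card.link_to(cell)
print([(l.tool, l.asset_id) for l in card.links])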

Why do I care about this?

I get excited about this as both a user and a developer. As a user, I’m not constrained by the walls that others have created for me — I can stitch together tools in ways that make sense for me. As a practical example, my team can gather all the stuff pertaining to one Dev Pulse into one place. (Or will be able to, once someone comes up with a replacement for Canva.) I expect this will smooth out a lot of the bumps we face when we’re working on a written piece.

As a developer, it means I won’t have to reinvent everything when I eventually make my seed swapping hApp into an awesome working product. I won’t have to re-create messaging, reputation, user profile, or categorisation features — I can just focus on the core thing that matters, a way to let people find each other’s seed packet offerings.

Eric believes this sort of easy composability means that developers will feel empowered to create that little tool they’ve always wished existed. When you only have to focus on your core thing, the task seems much less frightening.

I think that part of the reason more great tools don’t get built is that people think you have to have a full-featured app, a team of devs to handle feature requests, and a business model in place for when your user base (and your cloud hosting costs) scale 100×.

But what would you create if you only had to think about one little thing, the thing you care about most? What is that thing for you?

I’d love it if you shared your little tool ideas in The Weave channel on our Discord.

In the meantime, download Moss, the first ‘frame’ implementing The Weave, try some of the early stage tools, and read the spec to get an understanding of how tools in The Weave are made for composability.

(Fun fact: the equivalent of a ‘browser’ in The Weave is called a ‘frame’, a word that my lovely bride — a weaver — contributed! A frame loom is a special kind of weaving loom used for making tapestries. Kinda fitting for a program for groups weaving a shared story, don’t you think?)

Cover photo by ALAN DE LA CRUZ on Unsplash


Elliptic

Leader of LockBit ransomware group named

Today, the UK’s National Crime Agency (NCA) revealed the identity of the leader of the LockBit ransomware group as Russian national Dmitry Yuryevich Khoroshev. This action follows previous enforcement actions by the UK and US as part of Operation Cronos, which have targeted LockBit, dubbed the “world’s most harmful cyber crime group”.



Bloom

UnitedHealth Group Reports Massive Data Breach Impacting One-Third of Americans


Here we go again.....

Andrew Witty, CEO of UnitedHealth Group, the largest health insurer in the U.S., just revealed to a congressional committee that a significant data breach had occurred. The intrusion was traced back to a subsidiary, Change Healthcare, which was compromised by the notorious Russian hacker group BlackCat. This breach underscores the vulnerabilities inherent in traditional security measures, which often rely on single-factor authentication and centralized data storage.

Two months ago, the hackers exploited a stolen password to infiltrate Change Healthcare, gaining access to an extensive array of sensitive patient data. During the testimony, Witty estimated that the personal data of potentially a third of all Americans could have been exposed, highlighting the enormous impact of the breach.

The incident spiraled when BlackCat, having seized control of Change Healthcare’s systems, demanded a $22 million ransom. Witty confirmed the payment of the ransom, a move made independently in a desperate bid to mitigate the damage. The breach not only affected UnitedHealth's customers but also reached non-customers due to Change Healthcare’s role in processing over 15 billion transactions annually.

UnitedHealth is now committed to a rigorous data review and is taking proactive steps to support those affected. This includes establishing a dedicated website for information dissemination and offering two years of free credit monitoring services. The company has expressed its determination to bolster defenses and provide necessary aid to both consumers and providers shaken by this event.

How Bloom Could Have Helped

This incident once again brings to light the critical need for robust, resilient security solutions like those provided by Bloom. Bloom’s cutting-edge self-sovereign identity and verifiable credentials solutions are designed to prevent such breaches.

At Bloom, we advocate for the implementation of decentralized identity solutions that eliminate single points of failure, such as those exploited in the UnitedHealth incident.

The Broader Impact of Ransomware

The rise in ransomware attacks is a growing concern globally, with payments to hackers in 2023 reaching a record $1.1 billion. These attacks are increasingly sophisticated and are carried out by a diverse array of actors, from large criminal networks to solo perpetrators. The trend underscores the urgent need for comprehensive security strategies that include advanced technological defenses and proactive risk management practices.

In response to these challenges, Bloom encourages a shift towards more secure, decentralized frameworks that not only protect against such threats but also provide individuals with the tools they need to manage their own digital identities safely and effectively. By prioritizing security and privacy, we can better safeguard our digital ecosystems against the escalating wave of cybercrime.


Ontology

Ontology Monthly Report — April


April was a whirlwind of activity and achievements for Ontology. From exciting new partnerships and community engagement to significant advances in our technology, here is a recap of this month’s highlights:

Community and Web3 Impact 🌐🤝

10M DID Fund launch: We launched a 10M fund to significantly boost our decentralized identity (DID) ecosystem, fostering innovation and growth.
Presence at PBW: It was great to see so many of you at PBW! We appreciated every conversation and insight shared.
Web3 Wonderings: This month our discussions spanned DeFi and NFTs, with recordings available for those who missed the live sessions.
Token2049 participation: Our presence at Token2049 was a great success, expanding our visibility and connections within the blockchain community.
Zealy Quest — Ontology Odyssey: Our latest quest is live, adding an engaging layer of interaction to our platform.

Development / Corporate Updates 🔧

Development Milestones 🎯

Ontology EVM Trace Trading Function: Progress has reached 80%, enhancing our trading capabilities in the EVM space.
ONT to ONTD Conversion Contract: We reached the 50% development milestone, simplifying the conversion process for our users.
ONT Leverage Staking Design: Now at 35%, this development aims to provide innovative staking options for the Ontology community.

Events and Partnerships 🤝

StackUp Part 2 success: Our latest campaign with StackUp was a resounding success, thanks to your participation.
New partnerships: We celebrated new collaborations with LetsExchange, GUARDA Wallet support for ONT, and the listing of ONG on BitGet.
Community giveaways and AMAs: The month was full of interactive events, including giveaways with Lovely Wallet and an AMA with KuCoin.

ONTO Wallet Development 🌐🛍️

UQUID accessibility: UQUID is now accessible within ONTO, streamlining transactions for our users.
ONTO updates: We rolled out a new version update to enhance the user experience.
Upcoming AMA with Kita Foundation: Don’t miss our AMA with the Kita Foundation, aimed at diving deeper into future collaborations.

On-chain Metrics 📊

dApp growth: The total number of dApps on our MainNet remains strong at 177, indicating a vibrant ecosystem.
Transaction growth: This month saw an increase of 773 dApp-related transactions and 13,866 MainNet transactions, reflecting active network usage.

Community Engagement 💬

Lively discussions: Our social media platforms continue to buzz with lively discussions and insights from our engaged and passionate community members.
Recognition through NFTs: We issued NFTs to active community members in recognition of their contributions and involvement.

Follow Us on Social Media 📱

Stay up to date with Ontology by following us on our social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technology.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

April was a month of dynamic growth and strong community activity for Ontology. We thank our community for their unwavering support and look forward to another month of innovation, collaboration, and growth. Stay tuned for more updates, and let’s keep pushing the boundaries of blockchain technology together!

Ontology Monthly Report — April was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology Monthly Report — April


April was a whirlwind of activity and achievements for Ontology. From exciting new partnerships and community engagement to significant advances in our technology, here is a recap of this month’s highlights:

Community and Web3 Impact 🌐🤝

10M DID Fund launch: We launched a 10M fund to significantly boost our decentralized identity (DID) ecosystem, fostering innovation and growth.
Presence at PBW: It was great to see so many of you at PBW! We appreciated every conversation and insight shared.
Web3 Wonderings: This month our discussions spanned DeFi and NFTs, with recordings available for those who missed the live sessions.
Token2049 participation: Our presence at Token2049 was a great success, expanding our visibility and connections within the blockchain community.
Zealy Quest — Ontology Odyssey: Our latest quest is live, adding an engaging layer of interaction to our platform.

Development / Corporate Updates 🔧

Development Milestones 🎯

Ontology EVM Trace Trading Function: Progress has reached 80%, enhancing our trading capabilities in the EVM space.
ONT to ONTD Conversion Contract: We reached the 50% development milestone, simplifying the conversion process for our users.
ONT Leverage Staking Design: Now at 35%, this development aims to provide innovative staking options for the Ontology community.

Events and Partnerships 🤝

StackUp Part 2 success: Our latest campaign with StackUp was a resounding success, thanks to your participation.
New partnerships: We celebrated new collaborations with LetsExchange, GUARDA Wallet support for ONT, and the listing of ONG on BitGet.
Community giveaways and AMAs: The month was full of interactive events, including giveaways with Lovely Wallet and an AMA with KuCoin.

ONTO Wallet Development 🌐🛍️

UQUID accessibility: UQUID is now accessible within ONTO, streamlining transactions for our users.
ONTO updates: We rolled out a new version update to enhance the user experience.
Upcoming AMA with Kita Foundation: Don’t miss our AMA with the Kita Foundation, aimed at diving deeper into future collaborations.

On-chain Metrics 📊

dApp growth: The total number of dApps on our MainNet remains strong at 177, indicating a vibrant ecosystem.
Transaction growth: This month saw an increase of 773 dApp-related transactions and 13,866 MainNet transactions, reflecting active network usage.

Community Engagement 💬

Lively discussions: Our social media platforms continue to buzz with lively discussions and insights from our engaged and passionate community members.
Recognition through NFTs: We issued NFTs to active community members in recognition of their contributions and involvement.

Follow Us on Social Media 📱

Stay up to date with Ontology by following us on our social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technology.

Ontology website / ONTO website / OWallet (GitHub) / Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Telegram Announcement / Telegram English / Ontology Russian / Telegram AnnouncementRU / GitHub / Discord

April was a month of dynamic growth and strong community activity for Ontology. We thank our community for their unwavering support and look forward to another month of innovation, collaboration, and growth. Stay tuned for more updates, and let’s keep pushing the boundaries of blockchain technology together!

Ontology Monthly Report — April was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology Monthly Report — April


April was a whirlwind of activity and achievements for Ontology. From exciting new partnerships and community engagement to significant advances in our technology, here is a recap of this month’s highlights:

Community and Web3 Impact 🌐🤝

10M DID Fund launch: We launched a 10M fund to significantly boost our decentralized identity (DID) ecosystem, fostering innovation and growth.
Presence at PBW: It was great to see so many of you at PBW! We appreciated every conversation and insight shared.
Web3 Wonderings: This month our discussions spanned DeFi and NFTs, with recordings available for those who missed the live sessions.
Token2049 participation: Our presence at Token2049 was a great success, expanding our visibility and connections within the blockchain community.
Zealy Quest — Ontology Odyssey: Our latest quest is live, adding an engaging layer of interaction to our platform.

Development / Corporate Updates 🔧

Development Milestones 🎯

Ontology EVM Trace Trading Function: Progress has reached 80%, enhancing our trading capabilities in the EVM space.
ONT to ONTD Conversion Contract: We reached the 50% development milestone, simplifying the conversion process for our users.
ONT Leverage Staking Design: Now at 35%, this development aims to provide innovative staking options for the Ontology community.

Events and Partnerships 🤝

StackUp Part 2 success: Our latest campaign with StackUp was a resounding success, thanks to your participation.
New partnerships: We celebrated new collaborations with LetsExchange, GUARDA Wallet support for ONT, and the listing of ONG on BitGet.
Community giveaways and AMAs: The month was full of interactive events, including giveaways with Lovely Wallet and an AMA with KuCoin.

ONTO Wallet Development 🌐🛍️

UQUID accessibility: UQUID is now accessible within ONTO, streamlining transactions for our users.
ONTO updates: We rolled out a new version update to enhance the user experience.
Upcoming AMA with Kita Foundation: Don’t miss our AMA with the Kita Foundation, aimed at diving deeper into future collaborations.

On-chain Metrics 📊

dApp growth: The total number of dApps on our MainNet remains strong at 177, indicating a vibrant ecosystem.
Transaction growth: This month saw an increase of 773 dApp-related transactions and 13,866 MainNet transactions, reflecting active network usage.

Community Engagement 💬

Lively discussions: Our social media platforms continue to buzz with lively discussions and insights from our engaged and passionate community members.
Recognition through NFTs: We issued NFTs to active community members in recognition of their contributions and involvement.

Follow Us on Social Media 📱

Stay up to date with Ontology by following us on our social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technology.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Monthly Report — April was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Insurance Trends and Digital Transformation in 2024


Historically, the insurance industry hasn’t had a reputation for being quick to adopt new technologies. However, ongoing regulatory pressures, rising competition, and changing policyholder demands have created new challenges for insurers, which digital transformation can help to address. 

 

Today, leading insurance companies have become much more policyholder-centric, catering to the unique needs, behaviors, and preferences of consumers rather than offering a one-size-fits-all approach. 

 

These high-touch, personalized customer experiences have driven better policyholder satisfaction, loyalty, and retention, all of which support insurers’ bottom line. However, adopting a digital insurance model generates new concerns around policyholder data protection and online account security. 

 

This blog will explore the ongoing digital transformation of the insurance market, how the customer experience has been impacted, and what traditional insurers can do to deal with market headwinds and orchestrate a seamless experience for policyholders.


BlueSky

Product Roadmap


In the past year, Bluesky has grown from 40K users to 5.6M users. We’ve made it possible to create custom algorithms, introduced community-driven moderation, and opened up federation. This has laid the foundation for a social protocol that can exist long after Bluesky the app does.

What’s coming next? Over the next few months, we’ll be putting more of our energy into the application. This includes a lot of “Quality of Life” improvements and some long-requested features. The biggest changes will be:

DMs
Video
Improved Custom Feeds
Improved anti-harassment features
OAuth

We’re very excited to deliver these features you’ve been asking for. We don’t have exact timelines, but you can expect to see all of these in the next few months.

DMs (direct messages)

Historically, all Bluesky posts have been public. But there’s a world of interactions that opens up when users can directly message each other. Making personal connections, finding job opportunities, organizing events, workshopping posts – there are a lot of reasons to slide into the DMs.

We’re currently working on a DM service that will integrate into the Bluesky app. This service will be “off protocol” at first so we can develop iteratively. We’ll use what we learn to land protocol-driven DMs in the future. For an update on what’s next for the protocol, see our protocol roadmap.

The v1 of DMs will be one-to-one. You’ll be able to restrict who can DM you (open, followed users only, and disabled). If you’ve used DMs on other social networks, it should feel familiar.

Video

Our devs keep getting told about cute animal videos which our users can’t share. The guilt is terrible.

We’re still finalizing the details, but it’s looking like the v1 of video integration on Bluesky will be 90-second clips that you can share on your posts.

Improved Custom Feeds

“Custom Feeds” are one of the best features of Bluesky, allowing users to completely customize their timeline, but they’re still pretty tough to work with. Our community has done an incredible job filling in the gaps, but we want to finally invest some more energy into making Feeds better.

Here’s the list of ideas in the works:

In-app feed creation.
The ability to submit posts to feeds, curate the submissions, and manually moderate what’s included.
Better feed discovery, and a way to see trending feeds.
New feedback mechanisms which the algorithmic feeds can request, such as “show more” and “show less” buttons, and a way to track which posts have been seen to stop duplicates from showing so often.
The ability to move “Following” out of the leftmost tab of your homepage.
Feed “following” to drive superfeeds which show what’s happening in each of your communities.
Better caching strategies to improve performance.

Algorithmic choice has been a key goal of Bluesky from the start, and we can’t wait to move Custom Feeds forward. It’s incredible how much our community has done with them already, and we think a little extra love will enable you to go even further.

What you see on social media is mostly determined by algorithms, and giving you the power to control your algorithms like this is one of the most important things we do.

Improved Anti-harassment Features

Public social networks unfortunately all have to deal with the problem of users who want to troll, harass, and just make other people’s lives miserable. Over the past year, we’ve implemented tooling like reply controls for threads, user lists, and community-driven moderation through labeling, but there is still more work to be done.

In the months to come, we’ll be doing another pass over moderation tooling, with a focus on anti-harassment mechanisms. We’ll be publishing more on this soon.

OAuth

You know those “Log in with Facebook” or “Log in with Google” buttons you see in apps? What if there was a “Log in with Bluesky” button? We think there should be! OAuth is the internet standard that makes that possible, and we’re bringing it to Bluesky and atproto.

OAuth is especially important for third-party clients – it’ll make signing in easier and safer for users. You never share your password with other clients, and “App Passwords” will no longer be required.
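For readers unfamiliar with what that button does under the hood, here is a generic OAuth 2.0 authorization-code sketch in Python. The endpoint URLs, client ID, redirect URI, and scope are placeholders for illustration only, not Bluesky's or atproto's actual values; see the technical design linked below for the real profile.

import json
import secrets
import urllib.request
from urllib.parse import urlencode

# Generic OAuth 2.0 authorization-code flow. All URLs and identifiers below
# are placeholders, not Bluesky's actual endpoints.

AUTHZ_ENDPOINT = "https://auth.example.com/oauth/authorize"       # placeholder
TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"           # placeholder
CLIENT_ID = "https://myclient.example.com/client-metadata.json"   # placeholder
REDIRECT_URI = "https://myclient.example.com/callback"            # placeholder

def build_authorization_url(state: str) -> str:
    """Step 1: send the user to the authorization server; the third-party
    client never sees the user's password."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "profile",  # placeholder scope
        "state": state,      # CSRF protection
    }
    return f"{AUTHZ_ENDPOINT}?{urlencode(params)}"

def exchange_code(code: str) -> dict:
    """Step 2: after the redirect back, trade the one-time code for tokens."""
    data = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
    }).encode()
    with urllib.request.urlopen(TOKEN_ENDPOINT, data=data) as resp:
        return json.loads(resp.read())

print(build_authorization_url(state=secrets.token_urlsafe(16)))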

Once OAuth lands, we’ll expand on our 2FA model to enable more factors than email (which landed last week).

You can read about our technical design here.

See you on Bluesky!

If you haven't tried Bluesky yet, sign up here and give it a spin. We'll see you there!

Monday, 06. May 2024

Microsoft Entra (Azure AD) Blog

Platform SSO for macOS now in public preview


Today we’re announcing that Platform SSO for macOS is available in public preview with Microsoft Entra ID. Platform SSO is an enhancement to the Microsoft Enterprise SSO plug-in for Apple devices that makes usage and management of Mac devices more seamless and secure.

 

At the start of public preview, Platform SSO will work with Microsoft Intune. Additional mobile device management (MDM) providers will be added during the public preview. Please contact your MDM provider for more information on support and availability.

 

As part of this release, we’re introducing Microsoft Entra Join for macOS. This feature uses the Enterprise SSO plug-in to create a hardware-bound device record in Entra ID. Entra Join requires the use of an Entra ID organizational account.

 

In addition, we’re making three new ways to authenticate available, all configurable with MDM and available as part of Microsoft Entra ID Free:

 

Passwordless authentication with Secure Enclave: Like Windows Hello for Business, this method allows the user to interactively sign in to the desktop with their local account and password. Once the user signs in, a hardware-bound cryptographic key stored in the device’s Secure Enclave can be used as a trusted credential with Entra ID, giving the user SSO across applications that use Entra ID for authentication. This method allows users to go passwordless with Touch ID to unlock their device and be signed into Entra ID under the hood using a device-bound key. It can save organizations money by removing the need to purchase security keys, card readers, or other hardware. For information on our security and compliance standards, please see this guide.

Passwordless authentication with smart cards: With this method, the user signs into the Mac using an external smart card (or smart-card-compatible hard token like Yubikey). Once the device is unlocked, the smart card is further used with Entra ID to grant SSO across apps that use Entra ID for authentication.

Password synchronization with the local account: This method enables the user to interactively sign into the local machine account with their Entra ID password, granting SSO across apps that use Entra ID. The user no longer needs to remember separate passwords, and any changes to the Entra ID password are synchronized to the local machine.

 

Getting started

 

Starting today, you’ll find updated documentation and tutorials for Platform SSO for macOS on Microsoft Learn to guide you through setup, deployment, usage, and troubleshooting.

 

If you haven’t already, you’ll want to take the following steps to help your organization prepare:

 

Update devices to use Company Portal 5.2404.0 or newer.
Deploy the Enterprise SSO plug-in.
Ensure users are registered for Microsoft Entra multifactor authentication. For the best experience, we recommend using Microsoft Authenticator.
For Google Chrome users, install the Microsoft Single Sign On extension.
Update macOS devices to macOS 13** (Ventura) or later. macOS 14 (Sonoma) is recommended for the best user experience and feature set.

 

** Note that migration from non-shared keys on macOS 13 to shared keys (supported on macOS 14+) requires user re-registration of the device.
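As a rough illustration of the MDM side of this setup, the Python sketch below uses plistlib to emit an Extensible SSO payload with a Platform SSO section. The payload keys and values are approximations based on Apple's Extensible SSO payload and Microsoft's Enterprise SSO plug-in documentation, not a verified Intune profile; follow the Microsoft Learn tutorials mentioned above for the authoritative configuration.

import plistlib

# Sketch of an Extensible SSO / Platform SSO payload. Key names and values
# are assumptions for illustration; check the official documentation before
# deploying anything like this.
payload = {
    "PayloadType": "com.apple.extensiblesso",
    "ExtensionIdentifier": "com.microsoft.CompanyPortalMac.ssoextension",
    "TeamIdentifier": "UBF8T346G9",
    "Type": "Redirect",
    "URLs": [
        "https://login.microsoftonline.com",
        "https://login.microsoft.com",
    ],
    # Platform SSO settings (assumed keys): pick one authentication method,
    # e.g. the Secure Enclave key method described above.
    "PlatformSSO": {
        "AuthenticationMethod": "UserSecureEnclaveKey",
    },
}

print(plistlib.dumps(payload).decode())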

 

Even more capabilities on the way

 

Through incremental releases over the public preview, we’ll gradually introduce additional controls, reporting, audit, and sign-in logging capabilities, plus APIs in Microsoft Graph to configure, query, and manage them. Please note that, like Windows Hello for Business, some features may require a premium Entra ID license.

 

Brian Melton-Grace

Senior Product Manager, Microsoft

LinkedIn

 

 

Read more on this topic

Coming Soon – Platform SSO for macOS

Microsoft Enterprise SSO for Apple Devices is Now Available for Everyone

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Dark Matter Labs

Universal Basic Nutrient Income — Institutional Infrastructure for 2040 Food Preparedness?


This blog post marks the starting point of our work on the Universal Basic Nutrient Income (UBNI) policy instrument in Sweden. This speculative design project is part of our ongoing efforts within DM Food Systems Mission to deliver real-world options for the new life-ennobling economy, and the umbrella “9 out of 10” Protein Shift Innovation Platform sponsored by Vinnova. It continues our work from the Rapid Transition Lab on food system resilience, and food repricing in Sweden. We will collaborate on this with local partners MiljöMatematik.

The project proposes the introduction of a Universal Basic Nutrient Income as a model for alleviating barriers to a large-scale shift towards more sustainable and healthy diets, as well as for preparing Sweden for an increasingly uncertain future by stimulating resilient local production. The realisation of such a system will be explored through a radical repricing mechanism based on true cost accounting. Furthermore, the new institutional infrastructure around such an instrument needs to be aided by an increased and coordinated ecosystem investment capacity in adequate portfolios of system interventions that can respond to different futures (e.g. 2 deg., 8 deg., or supply-chain-collapse futures).

The prototyping phase will be carried out in Malmö, where we aim to engage in food environment mapping to address affordability, accessibility, and the barriers and enablers to local sustainable consumption. We will test the role of UBNI under multiple future risk scenarios and run a live demonstration with citizens, which will further inform dietary recommendations, policy design and municipal food system preparedness levels.

Consider the following scenario

The year is 2040. Sweden is now faced with the consequences of planetary ecological collapse due to the collective failure of transforming vital systems to meet the urgent demands of sustainability in the speed and scale required. The nation is now standing at a critical societal tipping point where the self-sufficiency agendas and local food systems’ sustainability intersect with the welfare of the citizens.

Fig 1. “Angry birds” escaped from a flooded peri-urban farm due to an abrupt rain event in 2035 (Image by the authors — Midjourney)
Fig 1b. Heavy rain events flood agricultural fields in southern Sweden in 2040 (Image by the authors — Midjourney)

The new reality of constantly increasing volatility in global markets, unstable pricing, scarce resources and extreme weather conditions has put immense pressure on the reliability of global food systems and societal functions. Seasonal food shortages, export bans and disruptions in global supply chains have caused Sweden to rapidly reassess its own food system. Inspired by some of the tests from the previous decades, the Swedish government has introduced a Universal Basic Nutrient Income model (UBNI) and adjusted the taxation system in an attempt to safeguard the food system and ensure equitable access to healthy and sustainable food within planetary boundaries.

Fig 2. Food prices linked to true cost accounting of environmental, health, societal, and resilience impacts (Image by the authors — Midjourney)

An elaborate true cost accounting model relating to dietary patterns enabled the creation of a mechanism where food pricing is linked to the capacity for ecological restoration, carbon storage and multi-scalar crisis resilience, as well as healthcare cost savings, promoting mental wealth and preventive healthcare through nutrition (see the blog post “More than calories: a deep code transformation of our food systems”, 2022). Building on principles of a regenerative economy, Swedes now pay a dynamically changing Universal Basic Nutrition tax, composed through multiple instruments including increased VAT rates on foods with high costs to nature and the environment. Through this new taxation system, every Swedish citizen is now entitled to a provision of “free food”, making the basic human right of equal access to sustainable and healthy foods a vital part of the new Swedish welfare system.
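As a purely illustrative sketch of such a repricing mechanism (the cost categories and numbers below are invented, not a real true cost accounting model), the adjusted consumer price could be the market price plus estimated external costs and minus external benefits, with items whose adjusted price reaches zero becoming UBNI-eligible:

# Illustrative only: toy externality figures, not real true-cost data.

def true_cost_price(market_price_kr: float, externalities_kr: dict) -> float:
    """Adjusted price = market price + external costs (positive values)
    + external benefits such as carbon storage or health savings (negative)."""
    return market_price_kr + sum(externalities_kr.values())

local_organic_apples = {
    "ghg_emissions": 1.0,
    "ecological_restoration": -4.0,   # benefit: regenerative orchard
    "healthcare_savings": -3.0,       # benefit: preventive nutrition
}
imported_apples = {
    "ghg_emissions": 6.0,
    "supply_chain_risk": 2.0,
}

for name, ext, market_price in [("local organic apples", local_organic_apples, 5.0),
                                ("imported apples", imported_apples, 4.0)]:
    adjusted = true_cost_price(market_price, ext)
    eligible = adjusted <= 0   # fully covered by the UBNI subsidy
    print(f"{name}: adjusted price {max(adjusted, 0):.2f} kr/kg, UBNI-eligible: {eligible}")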

Fig 3. Sensing Labs — infrastructure of independent, coordinated labs estimating ‘True Costs of Food’ through open data infrastructures. (Image by the authors — Midjourney)

Dietary recommendations from leading research institutions and governmental bodies (incl. the Livsmedelsverket food database, the Nordic Nutrition Recommendations, the EAT-Lancet Planetary Health Diet, WWF One Planet Plate, and the RISE Food Climate database) have been operationalised and form the basis for the new UBNI FoodBank — an institution that manages a dynamic portfolio of resilient, sustainable and healthy foods and recipes through multi-actor coordination. The available products and recipes are based on seasonal occurrence, local availability, current global markets, or resilience. In order for foods to be UBNI-eligible, they need to have high nutritional value and be sourced in a sustainable manner with low climate impact while ensuring both human and animal welfare. Furthermore, the FoodBank’s sensing function is responsible for addressing changing supply conditions, crisis preparedness and adequate adaptation of its food composition.

Fig 4. Local Swedish UBNI apples from an organic farm costing 0 kr, and imported apples (Image by the authors — Midjourney)

The successful model of Matkasse, the One Planet Plate weekly menus, and existing data infrastructures (e.g. Livsmedelsdatabasen) have inspired a system where citizens can each week select a subsidised set of recipes or self-compose from the available products in the UBNI FoodBank, which are made free of charge as they promote higher national resilience goals — ecological regeneration and society-wide health.

Fig 5. Weekly/monthly meal plan made free of charge due to its environmental, and health benefits — becoming part of the welfare infrastructure in Sweden, 2040.

Through accessible digital tools, the recipe bank allows easy customisation of meal plans, recipes and guides, supporting citizens in exploring new foods and generating shopping lists, making sustainable consumption effortless and time-effective. The illusion of choice is maintained and prevents the scheme from being labelled as governmental food rationing. Actors such as retailers gradually adapt to the scheme by building on existing infrastructures of membership cards, impact accounting, continued improvement of the environmental and health performance of their products, increased stocks of resilient foods, and data sharing, as well as collaboration for increased localised, agroecological production and prosumer networks.

Fig 6. UBNI instrument and its Transformation Fund resulting in new primary production territories (Image by the authors — Midjourney)

System demonstrator — Purpose, Vision, and Goals

The purpose of the Universal Basic Nutrient Income demonstrator is to further utilise the strong Swedish welfare infrastructure as a driver for large-scale dietary change and societal resilience. By perceiving equal access to sustainable and healthy foods as a basic human right and a vital part of the welfare system, barriers such as financial constraints, socio-economic context and attitudes might become less impactful in restricting the desired, society-wide dietary shifts.

The Universal Basic Nutrient Income model enables a rapid inclusion of less affluent socio-economic groups into healthy and sustainable dietary habits. By reducing the financial burden for citizens of accessing sustainable foods, it addresses a critical aspect of social equity.

Furthermore, recognizing the uncertain 2040 future, and continuous shrinking of actionable and safe operating space for humanity, the UBNI instrument with its FoodBank needs to be seen through the lens of preparedness. Supply side changes, global market disruptions, or an ecological collapse may alter the Bank’s composition. In the context of increasing volatilities, the dynamically changing base recommendation on what foods are considered resilient, sustainable and healthy will need to be supported by an institutionalized sensing and citizen support infrastructure informing what foods should be subsidized and which taxed, and in collapse scenarios, which ones rapidly produced or imported for resilience.

Hence, the vision behind this demonstrator is to shift the mindset from consumers being the main agents of change towards acknowledging the potential which lies in strong civic incentive mechanisms and multi-actor coordination infrastructure as means for achieving large-scale transformation and food sovereignty for all within multiple future possible scenarios.

Recognizing the power of local actors and governments in spearheading action to promote healthy and sustainable diets, the FoodBank will serve not only as a base for UBNI but also as an organization for transparent and decentralized coordination (see: DAO) between private, public and civic actors to co-curate and decide (e.g. through quadratic voting) on societal-scale portfolios of investments for preparedness, moving beyond short-return cycles or grant making. In return, the institutionalized food programme will not be seen as a governmental food-rationing initiative but as a democratization of “who decides what we eat?”
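For readers unfamiliar with the quadratic voting mentioned above, here is a minimal sketch of the standard rule (casting n votes for an option costs n² credits); it is generic quadratic voting, not a specification of the FoodBank's actual governance design, and the budget, options and ballots are invented for illustration.

import math

BUDGET = 100  # credits per participant, chosen arbitrarily for illustration

def votes_affordable(credits: int) -> int:
    """Maximum whole votes one participant can put on a single option."""
    return math.isqrt(credits)

def tally(ballots: dict) -> dict:
    """ballots: {participant: {option: votes}}. Checks budgets, sums votes."""
    totals = {}
    for participant, ballot in ballots.items():
        cost = sum(v * v for v in ballot.values())   # quadratic cost rule
        assert cost <= BUDGET, f"{participant} exceeds the credit budget"
        for option, votes in ballot.items():
            totals[option] = totals.get(option, 0) + votes
    return totals

ballots = {
    "citizen_a": {"local greenhouses": 6, "grain storage": 8},    # 36 + 64 = 100 credits
    "citizen_b": {"local greenhouses": 9},                        # 81 credits
    "retailer_c": {"grain storage": 7, "prosumer network": 5},    # 49 + 25 = 74 credits
}
print(votes_affordable(BUDGET))  # -> 10
print(tally(ballots))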

The main goal is to co-design and test the feasibility of the Universal Basic Nutrient Income model. This will entail design work including food-related true cost accounting models (TMG & WWF, 2021), taxation and subsidies targeting dietary patterns (Röös et al., 2021), as well as foresight methodologies concerning the spectrum of future changes to dietary recommendations and food system risks endangering nutritional needs.

Prototype

The idea behind the prototype is to collaboratively develop a Universal Basic Nutrient Income Framework — an incentivising system that sees equal access to sustainable foods as a fundamental human right while addressing future uncertainties in the food system.

Refinement of the concept will be done through a collaborative foresight workshop with local food system actors, using Malmö municipality as a test bed. Testing, and the tangible live demonstration, will be done through an experiment engaging four local citizens who scout their local food environment in search of FoodBank sustainable foods and follow a nutritional recommendation according to four future scenarios.

Fig 8. We have the apps already. How would the spaces of the retail sector change under the Universal Basic Nutrient Income instrument? (Image by the authors — Midjourney)

Through its provocative character, where the hypothesis is that food could technically be provided for free if true cost accounting models and taxation showed it to be feasible, the prototype aims to inspire conversation on the implementation of strong repricing measures. Furthermore, the formation of institutional infrastructures for future resilience in a specific local context (Malmö) will be discussed.


Contact

This blog post has been written by
Aleksander Nowak | aleks@darkmatterlabs.org
Alex Hansten | alex@darkmatterlabs.org

and would not be possible without all the inspiring work of colleagues from Dark Matter Labs

Work sponsored by Vinnova as part of the eight innovation platforms.

Universal Basic Nutrient Income — Institutional Infrastructure for 2040 Food Preparedness? was originally published in 9outof10 — Protein Shift Innovation Platform on Medium, where people are continuing the conversation by highlighting and responding to this story.


Hello World — 9outof10 Innovation Platform & Seminar I


🚀How can mission-oriented innovation be used for food system transformation?
👐 Can participatory foresight help to bridge sectoral siloes and align?
📣 And, how can it be turned into action?

The Swedish food system is at a turning point, facing environmental pressures that demand transformation. We are thrilled to share that Vinnova is funding eight innovation platforms, including ours, to drive sustainable change in Swedish food production and consumption.

At 9outof10, we’re focused on powering the shift toward sustainable Swedish protein production and consumption.

Our mission? By 2040, we envision 9 out of 10 meals in Sweden falling within planetary boundaries. But how do we get there?

We’re in an exploratory phase, consulting experts on climate assessment frameworks and dietary guidelines to navigate the complex landscape. Yet, we’ve encountered challenges, like the lack of standardisation in climate assessment and limited data accessibility. Moreover, primary production holds the key to success, but it lacks adequate support and investment. Consumer behaviour is pivotal too; changing habits is essential for systemic change.
Serina Ahlgren; Britta Florén; Susanne Bryngelsson; Kristina Bergman; Anna Wahlberg; Amanda Wood; Anna Karin Lindroos; Robin Lindström; Anton Unger; Emma Jonson; Erik Strandin Pers

Our journey continues as we engage stakeholders across the value chain to chart a path towards a sustainable future. And we want you to be part of it! Join us as we explore innovative solutions and future scenarios for Swedish food 🌱

Follow our page for updates, and don’t hesitate to share your thoughts. Let’s work together towards a healthier, greener future!

A bit about us and our first seminar with sustainable consumption stewards

Innovation platforms for sustainable food future

The current Swedish food system places immense pressure on the environment, contributing to climate change through greenhouse gas emissions, biodiversity loss, land degradation, and altered biogeochemical flows. It is evident that a transformation is crucial for the food system to become more sustainable for both people and the environment, and this transformation requires a systems approach. To accelerate it, Vinnova is financing eight innovation platforms to drive innovation at the system level for a sustainable and competitive food system. The work is based on eight bold and inspiring missions for a sustainable food system. Based on these missions, coordinating actors must mobilise resources and commitment from across the Swedish food system and its actors to identify levers for change.

RISE Research Institutes of Sweden, Dark Matter Labs and SISP — Swedish Incubators & Science Parks make up one of the eight platforms, aiming towards a 2040 future where the Swedish food system is sustainable and within a safe operating space for both humans and the planet. We call ourselves:

9outof10 — Powering the shift towards sustainable Swedish protein production and consumption.
Seminar I — Foresight seminar with Swedish food system actors working with sustainable consumption frameworks.

Being in an explorative phase, we have so far consulted some of the most knowledgeable actors on climate assessment frameworks in Sweden to help us map and make sense of the complex landscape of the Swedish food system. Furthermore, to get a better understanding of the vast number of dietary guidelines available, and the level of alignment within this space, we organised an introduction seminar with researchers and representatives of the different guidelines developed to meet both climate and nutritional goals.

In addition to discussing the frameworks, we consulted the seminar participants on our mission that “9/10 meals in Sweden should be within planetary boundaries by 2040”, with a specific focus on shifting our protein consumption. Using foresight exercises, and with this in mind, we discussed enablers of and barriers to getting closer to our mission statement. Through our conversations, we quickly discovered some issues with this mission, one being the difficulty of measuring where we currently sit and how to measure future success.

Lack of standardisation in climate assessment guidelines

One of the main reasons behind this is the lack of a standardised climate assessment framework that applies to all food products, produce and services. Part of the reason for this seems to be the difficulty, or even inability, of properly measuring the actual impact of certain products. Another large contributing factor is the large share of imported foods. Here, the environmental impact of production has been ‘outsourced’ to other places in the world, which makes it additionally difficult to assess and collect data on both the local and the wider environmental impacts.

Inaccessibility to data

Prominent research institutions like RISE, as well as actors such as CarbonCloud and Coop, have conducted thorough analyses of the diverse impacts of food on climate change. These databases are comprehensive and valuable. However, due to the high cost of creating such data sets, access to the data is neither readily available nor free of charge. This prevents smaller organisations and SMEs from assessing the climate impact of their products and services. The participants therefore highlighted the need for open-access data to enable equal access, ensure comparable standards, and work towards a common mission.

Primary production as part of the solution

It is clear that there is insufficient focus on how to make the transition a viable business model for primary production — finding these models is key to enabling the transition. The demands on primary production to transition to sustainable practices do not currently seem to be matched by the amount of support given to producers. The transition therefore carries a lot of risk, particularly for smaller producers. Furthermore, the great potential our arable land holds is not sufficiently recognised: not only is it important in terms of food production capacity, it also enables crucial ecosystem services. Updated policies, business models and impact investments are needed to enable this change. The food system today accounts for 20–30% of GHG emissions but attracts only 7% of impact investments.

Consumer behaviour

Although much of the responsibility lies with politicians to ensure that primary production and food producers stay within planetary boundaries, the power of consumer behaviour should not be ignored. Food, being culturally contingent, traditional and habitual, plays an enormous role in what is being produced and what we expect to see produced. Changing this behaviour, be it through nudging, hard policies or soft incentives, is crucial to enabling a system change. Figuring out how to approach consumers will be paramount to reaching the mission.

Based on these conversations, it is evident that the main levers for enabling our mission lie with policy makers, primary production and consumer behaviour. By collecting knowledge and input from various parts of the value chain, we hope to gather enough information to tease out a path towards a more sustainable Swedish food system. The work now proceeds by bringing these main issues into conversation with actors and industry organisations from the primary production side, as well as representatives from the consumer side, to further understand where innovation and restructuring must happen in order to achieve a sustainable and inclusive transition.

Sweden has significant potential to enhance its innovation capacity so that the food system contributes to preserving both the environment and human health, while also providing us with food, jobs, and quality of life. How to do this will be part of our upcoming workshop series exploring different key questions: How do we ensure that primary protein production is within planetary boundaries by 2040? How do we ensure that protein consumption is both socially, economically and environmentally just, while also healthy and good for the planet? Can we get key actors to unstick their minds from their daily work and contribute to this process by exploring future pathways towards reaching a sustainable food system by 2040, using foresight and scenario building exercises?

Intrigued yet? Good. Let’s keep in touch. Follow our page to see how our story unfolds while we explore different ways of using scenarios for unlocking the potential of Swedish future foods.

We’d also love your input, so feel free to comment, share, and engage with our posts now and in the future!

Contact

This blog post was written by Alex Hansten (Dark Matter Labs) and Mari W. Meijer (RISE), with contributions from:

Hanna Svensson — RISE — hanna.e.svensson@ri.se

Aleksander Nowak — Dark Matter Labs — aleks@darkmatterlabs.org

Hello World — 9outof10 Innovation Platform & Seminar I was originally published in 9outof10 — Protein Shift Innovation Platform on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

Predictoor Dynamics Have Shifted Towards Accuracy

The “maximize accuracy” game is now outcompeting the “50/50 maximize stake” game

Introduction

In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. In Predictoor’s rewards formula, the more accurately you predict and the more you stake, the more you earn; you’re always in competition with others and their predictions.
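As a simplified sketch of that incentive (not the exact on-chain payout formula, which also includes OCEAN incentives and fees), imagine each epoch's losing-side stake being slashed and distributed to the winning side pro rata by stake:

# Simplified settlement sketch: the slashed stake of wrong predictions is
# shared among correct predictions in proportion to their stake.

def settle(predictions: dict, outcome: str) -> dict:
    """predictions: {predictoor: (direction, stake)}. Returns net payout each."""
    winners = {p: s for p, (d, s) in predictions.items() if d == outcome}
    losers = {p: s for p, (d, s) in predictions.items() if d != outcome}
    pot = sum(losers.values())                       # slashed stake to redistribute
    total_winning_stake = sum(winners.values()) or 1.0
    payouts = {p: -s for p, s in losers.items()}     # losers lose their stake
    payouts.update({p: pot * s / total_winning_stake for p, s in winners.items()})
    return payouts

epoch = {"alice": ("up", 300.0), "bob": ("down", 700.0), "carol": ("up", 100.0)}
print(settle(epoch, outcome="up"))
# alice and carol split bob's slashed 700 OCEAN in proportion to their stakes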

From this rewards formula, we recognized that accuracy should be the key value proposition. It was a natural fit: staking created accountability, and the game around it could nudge accuracy up over time.

Alas, “maximize accuracy” wasn’t happening. Rather, another game emerged that didn’t help Predictoor’s value proposition.

Observing the “50/50 High Stake” Game

People started running bots that used a “50/50” strategy, where at “prediction” the bot would stake exactly half its OCEAN up, and half down. This was popular, because it was nicely profitable and relatively safe.

To maximize $ earned, the 50/50 bot-runners simply maximized stake. Competition among 50/50 bot runners led staking volume to grow exponentially, to nearly $1B per month from $0 at launch six months earlier 😲.

These people had found that optimizing on stake (with no value-add to accuracy) was more profitable than optimizing on accuracy.

While the high volumes were exciting to see, they weren’t helping accuracy. And ultimately it was accuracy that mattered for Predictoor’s long-term success. What held back the model-based bots from competing? What could we do about it? Did it work?

Identifying the Problem

Was anyone running model-based prediction bots? We knew the answer was “yes”, because we were among them! Nothing like eating your own dog food :)

And, we found that it was hard to be profitable with model-based bots. Here’s why: it’s easier to be profitable predicting on both up & down, rather than just one side.

Let’s elaborate.

The 50/50 bots staked on both sides for each prediction. They were always there to catch the good and the bad. Having both sides is predictable, and that helps profitability. In contrast, the model-based bots staked on just one side: up or down, but not both at once. Each time a prediction was wrong (almost, but not quite, 50% of the time), the bot was slashed its full stake amount. This meant high variance in winnings from epoch to epoch: big win or big loss. High variance hurts profitability.

We also recognized that higher prediction accuracy could help compete against the 50/50 bots. So we continued our diligent line of research to increase model accuracy.

If you can’t be profitable, then you can’t stake meaningful amounts to optimize profitability.

Addressing the Problem

We did two things to address the problem: two-sided prediction bots, and higher accuracy models.

Two-sided prediction bots. We introduced these a couple months ago. They made it easy to run bots that submit both up *and* down predictions based on model confidence.

Here’s an example. If the bot calculates up=30% chance, and therefore down=70%, and has 1000 OCEAN to stake, then the bot stakes 0.30*1000 = 300 OCEAN to up, and 700 to down. Critically, the bot is always on both the winning side and the losing side. This reduces its variance and increases its profitability. The image below illustrates.

prediction bots submit “up” and “down” predictions with a stake-weighted confidence on each, i.e. “two-sided prediction”.
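The stake-splitting rule from the example above can be written as a tiny helper; this is an illustration of the idea, not the actual bot code:

def split_stake(prob_up: float, total_stake: float) -> tuple:
    """Allocate stake to 'up' and 'down' in proportion to model confidence."""
    stake_up = prob_up * total_stake
    stake_down = (1.0 - prob_up) * total_stake
    return stake_up, stake_down

print(split_stake(prob_up=0.30, total_stake=1000.0))  # -> (300.0, 700.0)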

Model accuracy research. We continued to improve our own internal models’ accuracy. We also learned that for models with 52% accuracy, two-sided bots could compete against 50/50 bots, but one-sided could not. For models of 56% accuracy, even one-sided bots could compete, and two-sided did not help.

What Happened Next? Accuracy Went Up

We rolled out two-sided model-based bots, and provided affordances for more accurate models.

What happened then?

New predictoors armed with two-sided bots and more accurate prediction models joined the game. These new model-based bots drove aggregate prediction accuracy up, nicely above 50%.

The image below shows accuracy for each of the ten 5-minute feeds from April 8, 2024 until May 1, 2024. Most notable is the right 1/3 of the image, where all accuracies trend upwards nicely.

Accuracy vs time for each 5-minute feed

The following image shows accuracy for each of the ten 1 hour feeds from April 8, 2024 until May 1, 2024. As with 5min, accuracies trend up nicely.

Accuracy vs time for each 1 hour feed

Then, What Happened? “50/50 Maximize-Stake” Game Faded

As accuracy increased, the 50/50 strategy became unprofitable, because the accurate model-based bots ate the stake of the 50/50 bots. Let’s elaborate:

In an environment where every predictoor has 50% accuracy, a 50/50 strategy means a predictoor is right as often as wrong. Each correct prediction offsets the loss from a wrong one, while also earning a share of the rewards based on the stake amount. However, in an environment where some predictoors have accuracy sufficiently higher than 50%, they will win part of the losses incurred by the 50/50 strategy. Consequently, the gains from correct predictions in the 50/50 strategy no longer fully cover the losses from incorrect ones, rendering it unprofitable.
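
To see the direction of this effect, here is a toy Monte Carlo sketch in TypeScript. It assumes a simplified payout model of our own choosing: each epoch is treated as a pari-mutuel pool where the losing side’s stake is redistributed to the winning side pro-rata to stake, and OCEAN incentives and feed-sale revenue are ignored. It is not Predictoor’s exact payout math; it only illustrates how a calibrated, better-than-50% two-sided bot tends to pull the 50/50 bot’s expected profit below zero.

interface Bot {
  name: string;
  stake: number; // OCEAN staked per epoch
  stakeUpFraction: (outcomeUp: boolean) => number; // fraction of stake placed on "up"
}

function simulate(bots: Bot[], epochs: number): Record<string, number> {
  const profit: Record<string, number> = {};
  for (const b of bots) profit[b.name] = 0;
  for (let e = 0; e < epochs; e++) {
    const outcomeUp = Math.random() < 0.5; // the price move itself is a coin flip
    const legs = bots.map((b) => {
      const up = b.stakeUpFraction(outcomeUp) * b.stake;
      const down = b.stake - up;
      return { name: b.name, win: outcomeUp ? up : down, lose: outcomeUp ? down : up };
    });
    const winPool = legs.reduce((s, l) => s + l.win, 0);
    const losePool = legs.reduce((s, l) => s + l.lose, 0);
    for (const l of legs) {
      // winners split the losers' pool pro-rata to their winning-side stake
      profit[l.name] += (winPool > 0 ? (l.win / winPool) * losePool : 0) - l.lose;
    }
  }
  return profit;
}

const acc = 0.55; // model accuracy; its confidence is assumed calibrated to this
const bots: Bot[] = [
  { name: '50/50 bot', stake: 1000, stakeUpFraction: () => 0.5 },
  {
    name: 'two-sided 55% bot',
    stake: 1000,
    // correct with probability `acc`; stakes `acc` of its total on the side it predicts
    stakeUpFraction: (outcomeUp) => {
      const correct = Math.random() < acc;
      const predictsUp = correct ? outcomeUp : !outcomeUp;
      return predictsUp ? acc : 1 - acc;
    },
  },
];
console.log(simulate(bots, 200_000));
// Under this toy model, the two-sided bot tends to end positive and the 50/50 bot negative.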

With the “50/50 maximize-stake” strategy becoming unprofitable, the 50/50 bot runners left or sharply reduced their stake.

The image below illustrates. The top panel shows stake for each of the ten 5-minute feeds from April 8, 2024 until May 1, 2024. Stake dropped from 4000 OCEAN / feed / epoch to <1000 OCEAN. As of May 6 there is about 700 OCEAN / feed.

The bottom panel shows a similar trend for the 1 hour feeds: stake there dropped as well.

Stake vs time for each 5-minute feed (top), and 1h feed (bottom)

Discussion / Learnings

Accuracy is rewarded. At its heart, Predictoor is built on the principle of “reward accuracy”. The system rewards predictoors who make accurate predictions and penalizes those who don’t. We just had to reduce friction for making this happen.

Predictoors must adapt. The graphs above show that after the 50/50 predictoors quit, the remaining predictoors lowered their stake amounts to maximize their revenue. Adapting to these changes and employing smarter strategies is a key part of the game, and understanding these dynamics is essential to maximizing your potential returns. So dive in, tweak your strategies, and watch your profits improve!

Conclusion

In this article, we described how Predictoor rewards both accuracy and stake, with accuracy as its main value proposition. We then described how the “50/50 maximize-stake” strategy was nicely profitable, alas, by optimizing for stake rather than accuracy. We described the steps we took to make model-based predictions more profitable: two-sided predictions and more accurate models. Finally, we showed the positive results: accuracy has increased and the “50/50 maximize-stake” bots have been leaving.

The game is now firmly “maximize accuracy” and that’s a great thing.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable businesses and individuals to trade tokenized data assets seamlessly to manage data all along the AI model life-cycle. Ocean-powered apps include enterprise-grade data exchanges, data science competitions, and data DAOs. Follow Ocean on Twitter or TG, and chat in Discord.

In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Predictoor has over $400 million in monthly volume just six months after launch, with a roadmap to scale foundation models globally. Follow Predictoor on Twitter.

Data Farming is Ocean’s incentives program.

Predictoor Dynamics Have Shifted Towards Accuracy was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Finema

KERI Tutorial: Sign and Verify with Signify & Keria


Authors: Kriskanin Hengniran & Nuttawut Kongsuwan, Finema Co., Ltd.

Note: This tutorial is heavily inspired by “KERI Tutorial Series — KLI: Sign and Verify with Heartnet” by Kent Bull
https://kentbull.com/2023/01/27/keri-tutorial-series-kli-sign-and-verify-with-heartnet/

This blog presents an introductory guide for implementing the Key Event Receipt Infrastructure (KERI) protocol using the Signify and KERIA agents. The guide starts with installation and running the agents using Docker containers. The guide then provides a script to showcase the use of Signify and KERIA agents, including procedures for creating autonomic identifiers (AIDs), signing messages, and verifying signatures.

Signify & KERIA

Signify & KERIA are open-source projects developed for building client-side identity-wallet applications using the KERI protocol. Signify-KERIA identity wallet utilizes the hybrid edge-cloud wallet architecture where Signify provides a lightweight edge wallet component whereas KERIA provides a heavier cloud wallet component. Signify and KERIA were designed based on the principle of “key at the edge (KATE)”. That is, essential cryptographic operations are performed at edge devices.

Some resources for Signify & KERIA can be found here

The Signify-KERIA protocol by Philip Feairheller: https://github.com/WebOfTrust/keria/blob/main/docs/protocol.md
KERI API (KAPI): https://github.com/WebOfTrust/kapi/blob/main/kapi.md

Signify Edge Agent

Signify provides an edge agent for a KERI identity wallet and is used primarily for essential cryptographic operations including key pair generation and digital signature creation. Signify utilizes the hierarchical deterministic (HD) key algorithm. For example, a Signify application could safeguard a single master seed which is used to generate and manage any number of AIDs. Signify is designed to be lightweight so as to support devices with limited capabilities.
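
As a purely conceptual illustration of the “one master seed, many AIDs” idea, the sketch below derives independent per-AID seeds from a single secret using HKDF from Node’s built-in crypto module. This is not Signify’s actual derivation scheme (Signify stretches a passcode and uses per-AID salts via libsodium); it only shows how any number of key seeds can be derived deterministically from one secret plus an index.

import { hkdfSync, randomBytes } from 'crypto';

// Derive a 32-byte seed for AID number `aidIndex` from a single master seed.
// The `info` string acts like a derivation path: the same master seed and
// index always produce the same output. Illustrative only.
function deriveAidSeed(masterSeed: Buffer, aidIndex: number): Buffer {
  const info = Buffer.from(`aid/${aidIndex}`);
  return Buffer.from(hkdfSync('sha256', masterSeed, Buffer.from('demo-salt'), info, 32));
}

const masterSeed = randomBytes(32); // the single secret safeguarded at the edge
console.log(deriveAidSeed(masterSeed, 0).toString('hex')); // seed for AID #0
console.log(deriveAidSeed(masterSeed, 1).toString('hex')); // seed for AID #1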

Signify is currently available in Typescript and Python

SignifyPy (Python): https://github.com/WebOfTrust/signifypy
Signify-TS (Typescript): https://github.com/WebOfTrust/signify-ts

KERIA Cloud Agent

KERI Agent (KERIA) provides a cloud agent for a KERI identity wallet and is used for, e.g., data storage, agent-to-agent communications, and verification of KERI key event logs (KELs). KERIA is engineered to handle the heavy lifting for users, allowing their edge devices to stay lightweight, while high security is maintained because all essential cryptographic operations remain at the edge.

A KERIA agent is cryptographically delegated by a Signify agent using the KERI delegation protocol. All instructions from a user are signed at the edge by a Signify agent and subsequently verified by a KERIA cloud agent. KERIA is currently available in Python: https://github.com/WebOfTrust/keria

Installation Guide for Node and NVM

In this guide, we will be using Node.js to run Signify-TS (Typescript) with a KERIA server running on a Docker container.

Install Node.js

Visit the Node.js website to download Node.js.

Install NVM on Linux or MacOS

To easily switch between different versions of Node.js, you could use Node Version Manager (NVM). NVM on Linux and macOS may be installed using curl

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

Alternatively, we could use wget

wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

Install NVM on Windows

On Windows, NVM may be installed from the GitHub releases page.

Note: After installing NVM, do not forget to restart your terminal. Alternatively, you can refresh the available commands in your system path by executing source ~/.nvm/nvm.sh

Using NVM

To ensure that NVM is correctly installed, you can check its version with:

nvm --version

To install Node.js version 18.18.2, run:

nvm install v18.18.2

We use v18.18.2 in this guide. This is not a requirement; other versions of Node.js should also work with Signify-TS.

Install Dependencies and Run a KERIA Server

To install dependencies for the tutorial, you could clone this repository https://github.com/enauthn/tutorial-signify-keria and run the following script

git clone https://github.com/enauthn/tutorial-signify-keria
cd tutorial-signify-keria
npm install
docker-compose up -d

where docker-compose up -d runs a KERIA server in a Docker container. Alternatively, you could get a Docker image for running a KERIA server from Docker Hub https://hub.docker.com/r/weboftrust/keria.

Running Signify-TS Scripts

Here, we demonstrate Signify-TS scripts for signing and verifying an arbitrary message. A signer, called Allie, signs a message (in the script below, “Test message”) and sends it to a verifier, called Brett. Brett subsequently obtains Allie’s key event log (KEL) and uses it to verify the signature on the message.

Connecting Signify clients to the KERIA server

First, Allie and Brett must boot and then connect to a KERIA server with their Signify (edge) clients. Each boot command creates a separate agent and a database in the KERIA server. See https://github.com/WebOfTrust/keria for more details.

Typically, Allie’s and Brett’s Signify scripts should run on separate devices, but here we put them in the same Typescript file for simplicity. Allie’s and Brett’s Signify clients may also connect to different KERI servers at different service endpoints. In this tutorial, both clients connect to a KERIA server running on the local host for simplicity.

import * as signify from 'signify-ts'; // Signify-TS client library

await signify.ready();

const url = 'http://127.0.0.1:3901';
const bootUrl = 'http://127.0.0.1:3903';
const bran1 = signify.randomPasscode();
const bran2 = signify.randomPasscode();

const allieClient = new signify.SignifyClient(
url,
bran1,
signify.Tier.low,
bootUrl
);
await allieClient.boot();
await allieClient.connect();

const brettClient = new signify.SignifyClient(
url,
bran2,
signify.Tier.low,
bootUrl
);
await brettClient.boot();
await brettClient.connect();

To explain the above script:

signify.randomPasscode() generates a random string with 126-bit entropy using libsodium
new signify.SignifyClient(…) creates a new Signify instance, initialized with the newly generated passcode
allieClient.boot() creates a KERIA agent and a corresponding database at the KERIA server via port 3903
allieClient.connect() connects a Signify agent to the corresponding KERIA agent that has been booted.

Signify uses a passcode to generate cryptographic keys using a key derivation function (KDF) where signify.Tier specifies how much the passcode is stretched.

Allie creates an AID

Before Allie can sign a message with the KERI protocol, she must first create an autonomic identifier (AID) with a key inception event in a Key Event Log (KEL).

const icpResult1 = await allieClient
.identifiers()
.create('aid1', {});
await waitOperation(allieClient, await icpResult1.op());

const rpyResult1 = await allieClient
.identifiers()
.addEndRole('aid1', 'agent', allieClient!.agent!.pre);
await waitOperation(allieClient, await rpyResult1.op());

To explain the above script:

allieClient.identifiers().create('aid1', {}) creates an AID where the Signify agent signs the inception event and sends the event along with its signature to the KERIA agent. Here, the AID is given an alias 'aid1'.
allieClient.identifiers().addEndRole(...) cryptographically authorizes the KERIA agent to operate on behalf of the AID’s controller.
waitOperation(...) waits for the KERIA agent to complete its operation.

addEndRole stands for “adding endpoint role authorization”. This is a mechanism in the KERI protocol where the controller of an AID cryptographically authorizes—by signing with a private key associated with the AID—a service endpoint of a KERIA agent to operate on the AID’s behalf. Another agent that needs to communicate with the authorized KERIA agent can then verify the authorization signature.

Brett resolves Allie’s AID

Allie and Brett may exchange their KELs using the Out-Of-Band Introduction (OOBI) protocol. Allie may generate her OOBI URL that points to her KERIA agent’s service endpoint (which has been authorized in the previous step) as follows:

const oobi1 = await allieClient.oobis().get('aid1', 'agent');

The generated OOBI URL could be sent to Brett via an out-of-band channel such as email, messaging apps, or scanning QR code. Subsequently, Brett may ask his KERIA agent to resolve Allie’s OOBI to obtain the AID’s KEL from Allie’s KERIA agent:

const oobiOp = await brettClient.oobis().resolve(oobi1.oobis[0], 'aid1');
await waitOperation(brettClient, oobiOp);

Allie signs the Message

To sign a message, Allie may use the Signify KeyManager class as follows:

const aid1 = await allieClient.identifiers().get('aid1');
const keeper1 = await allieClient.manager!.get(aid1);
const message = "Test message";
const signature = (await keeper1.sign(signify.b(message)))[0];
console.log('signature', signature);

which generates the following CESR-encoded signature:

signature AAAnBe-VPfBU9-3eb7aM5GNwr_NBuoJzA8vm9AFPmgj3I4LIv1mup2bwPDlbIQ6gAgtaEZg5rwE1_fTVVTmPo0oI

To explain the above script:

allieClient.identifiers().get('aid1') retrieves the information about the AID with alias 'aid1' from the KERIA agent
allieClient.manager!.get(aid1) creates an instance of the KeyManager class, called a keeper, that signs an arbitrary byte string with keeper.sign()
signify.b() turns a text string into a byte string.

When a Signify agent creates an AID, it uses a salt together with its passcode to generate the cryptographic keys associated with the AID. The salt is then encrypted and sent to the KERIA agent. allieClient.identifiers().get('aid1') also retrieves and decrypts the salt, so the KeyManager can use it to regenerate the key for signing the message.

Brett verifies Allie’s signature

After Allie sends the message to Brett, Brett wants to make sure the message is really from Allie by verifying the signature on the message. Brett may retrieve Allie’s KEL and the corresponding key state of her AID to verify the signature as follows:

const aid1StateBybrettClient = await brettClient.keyStates().get(aid1.prefix);
const siger = new signify.Siger({qb64: signature});
const verfer = new signify.Verfer({
qb64: aid1StateBybrettClient[0].k[0]
});
const verificationResult = verfer.verify(siger.raw, signify.b(message));
console.log('verificationResult', verificationResult);

which gives the following output:

verificationResult true

To explain the above script:

brettClient.keyStates().get(aid1.prefix) retrieves the key state of Allie’s AID
signify.Siger({qb64: signature}) is a signature-wrapper instance of the Siger class, initialized with Allie’s signature on the message
signify.Verfer(...) is a verifier-wrapper instance of the Verfer class, initialized with Allie’s public key
verfer.verify(...) then verifies the message and its signature using Allie’s public key.

The verification of the signature against the message indicates that the signature is valid. This ensures the authenticity and integrity of the message that Brett received from Allie.

Conclusion

This tutorial gives a brief introduction for using the KERI protocol with the Signify and KERIA agents, which provide a footing for building client-side KERI-based applications. Signify provides libraries for building KERI edge agents whereas KERIA provides libraries for building companion cloud agents. These agents follow the principle of “key at the edge (KATE)” where essential cryptographic operations are performed at edge devices.

Unfortunately, it is still early days for these two projects, and there are not many educational materials available as of May 2024. To dive deeper into these two projects, I recommend studying their integration scripts at https://github.com/WebOfTrust/signify-ts/tree/main/examples/integration-scripts.

KERI Tutorial: Sign and Verify with Signify & Keria was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Ping Identity + Microsoft Entra ID External Authentication Methods


We at Ping are thrilled to announce a milestone in our longstanding partnership with Microsoft that brings additional interoperability and security capabilities. Customers who have the Ping Identity platform along with Microsoft Entra ID can now leverage the new External Authentication Methods (EAM) feature to enhance security and streamline access experiences.

 

This strategic integration marks a significant step forward in our commitment to delivering frictionless security and user experiences to our customers. By aligning with Microsoft's external authentication methods, we enable organizations to fortify their digital landscapes with trusted and familiar authentication mechanisms.

 

At the heart of this collaboration lies a shared vision of efficiency and innovation.

The Ping Identity Platform empowers organizations to manage, secure, and govern any identity, enabling them to orchestrate exceptional user experiences while safeguarding users and resources. Leveraging Microsoft's External Authentication Methods, our platform enables organizations to seamlessly integrate Ping Identity’s MFA capabilities into the Microsoft ecosystem, enabling them to leverage their existing investments while delivering on the strengths of each solution. This includes the ability to easily build orchestration journeys with no-code, drag-and-drop visual flows to enable MFA, fraud detection, and even extend the passwordless capabilities to other legacy environments like VPNs and databases.

 

As we embark on this next phase of collaboration with Microsoft, we remain steadfast in our commitment to driving innovation and delivering tangible value to our customers. Together, we are shaping the future of authentication, empowering organizations to thrive in an increasingly digital world.

 

Learn more about Ping’s partnership with Microsoft and our existing Ping-Microsoft integrations.

Sunday, 05. May 2024

KuppingerCole

The Foundation of Secure Communication: Digital Identities in Zero Trust


In the first episode of this two-part mini-series, Matthias Reinwarth and Charlene Spasic discuss the integral role of digital identities in the Zero Trust framework. They cover how Zero Trust architectures rely heavily on the continuous verification of identities and the enforcement of dynamic access controls.

The conversation outlines the challenges and strategies involved in managing identity lifecycles, emphasizing the need for robust identity management systems that adapt to evolving security landscapes. This episode sets the stage for understanding how Zero Trust architectures transform traditional security paradigms by centering on the identity of users and devices.



Friday, 03. May 2024

auth0

Using a Refresh Token in an iOS Swift App

A step-by-step guide to leveraging OAuth 2.0 Refresh Tokens in an iOS app built with Swift and integrated with Auth0.

Microsoft Entra (Azure AD) Blog

Microsoft Entra announcements and demos at RSAC 2024


The Microsoft Entra team is looking forward to connecting with you next week at RSA Conference 2024 (RSAC) from May 6 to 9, 2024, in San Francisco! As we enter the age of AI and there are more identities and access points to protect, identity security has never been more paramount. From protecting workforce and external identities to non-human identities—that outnumber human identities 10 to 1—the task of securing access and the interactions between them requires taking a more comprehensive approach.  

 

To help customers protect every identity and every access point, I’d like to highlight recent innovations that we’ll be showcasing at this upcoming event: 

 

Expanded passkey support for Microsoft Entra ID
Microsoft Entra ID external authentication methods
Microsoft Entra External ID general availability
Microsoft Entra Permissions Management and Microsoft Defender for Cloud integration general availability
Our vision for cloud access management to strengthen multicloud security

 

We will be demonstrating these new innovations and sharing more about how to take a holistic approach to identity and access at RSA Conference 2024 (see the table at the end of this blog for more information). Now, let’s take a closer look at Microsoft Entra innovations that we’ll be showcasing at RSAC. 

 

Expanded passkey support for Microsoft Entra ID     

 

In addition to supporting sign-ins via a passkey hosted on a hardware security key, Microsoft Entra ID now includes additional support for device-bound passkeys in the Microsoft Authenticator app on iOS and Android. This will bring strong and convenient authentication to mobile devices for customers with the strictest security requirements. 

 

A passkey is a strong, phishing-resistant authentication method you can use to sign in to any internet resource that supports the W3C WebAuthN standard. Passkeys represent the continuing evolution of the FIDO2 standard aimed at creating a secure and user friendly passwordless experience for everyone.  

 

To learn more about using passkeys in the Microsoft Authenticator app, check out this blog.   

 

Microsoft Entra ID external authentication methods 

 

While organizations increasingly choose to unify their multifactor authentication and access management solutions, thus simplifying their identity architectures, some organizations have already deployed MFA and want to use their pre-existing MFA provider with Microsoft Entra ID. External authentication methods allow organizations to leverage any MFA solution to meet the MFA requirement with Entra ID. 

 

At launch, external authentication methods integrations will be available with the following identity providers: Cisco, ENTRUST, HYPR, Ping, RSA, SILVERFORT, Symantec, THALES, and TrustBuilder.  

 

Read our documentation to learn more.  

 

Microsoft Entra External ID general availability 

 

Our next-generation, developer friendly customer identity access management (CIAM) solution, Microsoft Entra External ID will become generally available on May 15, 2024. Whether you're building applications for partners, business customers, or consumers, External ID makes secure and customizable CIAM simple. External ID enables you to: 

 

Secure all identities with a single platform
Streamline secure collaboration
Create frictionless end user experiences
Accelerate the development of secure applications

Learn more about External ID by reading our announcement blog!  

 

Microsoft Entra Permissions Management and Microsoft Defender for Cloud integration general availability 

 

Deploying applications and infrastructure across multiple clouds has become the norm. Ensuring the security of cloud applications and infrastructure requires integrating identity and permission insights into the overall security strategy. This objective is achieved through the integration of Microsoft Entra Permissions Management with Microsoft Defender for Cloud (MDC), which will soon be generally available in May. 

 

The integration streamlines access and permission insights into other cloud postures through a unified interface. Customers benefit from recommendations on mitigating risks within the MDC dashboard, including unused identities, overprivileged permissions, and unused super identities. This facilitates the enforcement of least privilege access for cloud resources across Azure, Amazon Web Services, and Google Cloud Platform. 

 

Our vision for cloud access management to strengthen multicloud security 

 

Deploying applications and infrastructure across multiple clouds has become common in today’s business landscape. At Microsoft, we have long prioritized the protection of customers’ environments, regardless of the number of clouds they use or the providers they choose.  

 

Our recent 2024 State of Multicloud Security Risk Report reconfirms the importance of securing access in multicloud and presents valuable findings based on one year of actual usage data to enhance organizations’ understanding of their risks and facilitate the development of effective mitigation strategies. Key findings related to access and permissions include: 

 

Only 2% of the 51,000 permissions granted to human and workload identities in 2023 were utilized, with 50% of these permissions classified as high-risk.
More than 50% of identities are identified as super identities, indicating they have access to all permissions and resources within the multicloud environment.

 

Above all, this report confirms that the complexity of multicloud risk continues to grow. Coupled with the increase in cyberattacks targeting identities, especially those assigned to non-human entities, security teams are overwhelmed. Consequently, organizations are shifting priorities from infrastructure protection to actively monitoring and securing interactions between human and workload identities accessing corporate cloud resources. 

 

We believe Microsoft can help address these challenges with our new vision for cloud access management, offering visibility into all identities and permissions in use, along with proactive risk detection to enhance protection and management of your environment. We will continue our journey to secure access to resources anywhere by developing a new converged platform that encompasses four key solution areas critical for organizations, based on our continuous engagements with customers: 

 

Cloud Infrastructure Entitlement Management (CIEM)
Privileged Access Management (PAM)
Identity Governance and Administration (IGA)
Workload Identity and Access Management (IAM)

 

Stay tuned to learn more about our vision in the coming weeks.  

 

Where to find Microsoft Entra at RSAC 2024  

 

We’re excited to connect with you at RSAC 2024 and discuss the latest innovations to Microsoft Entra. Please join us at the following identity sessions: 

 

Session Title 

Session Description 

Date and time 

Lesson Learned - General Motors Road to Modern Consumer Identity 

This demo-heavy session will provide key insights into the architectural decisions made by General Motors and the lessons learned establishing a secure and resilient customer identity platform powered by Microsoft Cloud for a consistent set of user experiences across all its global customer touchpoints, including web, mobile apps, in-vehicle applications, and backend services 

Tuesday May 7, 2024, 1:15 PM - 2:05 PM PT 

The Storm-0558 Attack - Inside Microsoft Identity Security's Response 

In June 2023, China-based actor Storm-0558 successfully forged tokens to access customer email in 22 agencies using an acquired signing key. This session will walk you through the insider's view of the attack, investigation, mitigation, and repairs resulting from this attack with a focus on what worked and what didn't when defending against this APT actor. 

Thursday, May 9, 2024, 12:20 PM - 1:10 PM PT 

 

 

Stop by our booth #6044N to check out our theater sessions! 

 

Start your CIAM Journey: Secure external identities, streamline collaboration and accelerate your business! 

As you expand your business, protecting all external identities, such as customers, business guests and partners, is essential. In this session, we will demonstrate how Microsoft Entra External ID is a single solution that helps you integrate security into your apps, safeguarding external identities with adaptive access policies, verifiable credentials, built-in identity governance, and more. We will also showcase how to streamline collaboration by inviting business guests and defining what internal resources they can access across Teams, SharePoint and OneDrive.  

Tuesday May 7, 2024, 3:00-3:20PM 

Microsoft Entra and Copilot: Skills you can use for protecting identities and access 

Get an overview of the latest Microsoft Entra skills available via Copilot for Security to help your organization protect against identity threats and increase efficiency in managing and governing access. 

Tuesday May 7, 2024, 3:30-3:50PM 

Modernize your network access with Microsoft’s Security Service Edge Solution 

In today’s dynamic landscape, securing access to critical applications and resources is more crucial than ever. The identity-centric Security Service Edge (SSE) solution in Microsoft Entra takes Conditional Access to a new level, protecting any network destination with granular access controls that consider identity, device, and network. Join us to learn how you can secure access for anyone to anything from anywhere with unified identity and network access. 

Wednesday May 8, 2024, 2:30-2:50PM 

Bringing Passkey into your Passwordless Journey 

Most of our customers are either deploying some form of passwordless credential or are planning to in the next few years, however, the industry is all abuzz with excitement about passkeys. What are passkeys and what do they mean for your organization's passwordless journey? Join the Microsoft Entra product team as we walk you through the background of where passkeys came from, their impact on the passwordless ecosystem and the product features and roadmap bringing passkeys into the Microsoft Entra passwordless portfolio and phishing resistant strategy.   

Thursday May 9, 2024, 12:00-12:20PM 

 

We can’t wait to see you in San Francisco for RSA Conference 2024! 

 

Irina Nechaeva, 

General Manager of Identity & Network Access 


This week in identity

E51 - Microsoft Entra External IDs / Cisco and StrongDM / CEO view on Cyber


This week Simon and David return with a weekly dose of industry analysis on the global identity and access management space. First up a discussion on Microsoft announcing the GA of their Entra for External IDs - who is it aimed at? Is it ground breaking? Next up is Cisco who announced an investment round into next-gen PAM provider StrongDM. Finally they discuss a great interview by Standard Chartered CEO Bill Winters and his view of cyber in the board and its strategic value.


Northern Block

Mobile Driving Licenses (mDL) in 2024 (with Sylvia Arndt)

Discover the future of identity verification with mobile driver's licenses. Join Sylvia Arndt and Mathieu Glaude on The SSI Orbit Podcast for insights.

🎥 Watch this Episode on YouTube 🎥
🎧   Listen to this Episode On Spotify   🎧
🎧   Listen to this Episode On Apple Podcasts   🎧

About Podcast Episode

Could digital credentials like mobile driver’s licenses be the game-changer for secure and convenient identity verification?

In this episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Sylvia Arndt, Vice President of Business Development, Digital Identity at ⁠Thales⁠, to explore the rapidly evolving landscape of mobile driver’s licenses (mDLs) and their potential to transform how we prove who we are.

In this conversation, you’ll learn:

The driving forces behind governments adopting mobile driver’s licenses (mDLs), including improving service accessibility for citizens and combating fraud
The role of organizations like AAMVA and NIST in setting standards and governance for mDL implementation
Business opportunities unlocked by mDLs, such as enabling seamless online identity verification for industries like banking, notaries, and access management
Potential monetization models for issuers and verifiers in the mDL ecosystem
The rising prominence of biometric verification like facial recognition in conjunction with mDL usage

Don’t miss out on this opportunity to gain valuable insights and expand your knowledge. Tune in now and start exploring the possibilities!

 

Key Insights:
Mobile driver’s licenses (mDLs) are gaining momentum as governments seek to improve service quality, combat fraud, and streamline identity verification processes.
Organizations like AAMVA and NIST play crucial roles in setting standards and governance for mDL implementation.
Interoperability is a key challenge, with the ISO standard for mDLs emerging as a widely adopted solution.
Governments must decide between issuing mDLs through their own wallets or leveraging third-party wallets like those from Apple, Google, and Samsung.
mDLs could enable seamless online identity verification for industries like banking, notary services, and access management, reducing transaction abandonment rates.
Potential monetization models for issuers and verifiers are being explored, as the value of mDLs lies primarily in the verification side.

Strategies:
Governments are implementing legislation to allow for the acceptance of digital forms of state-issued identities, including mDLs.
AAMVA’s Digital Trust Service aims to facilitate cross-state verifications by providing the necessary public keys to read mDLs.
Facial biometrics and liveness detection are expected to become more prevalent in conjunction with mDL verification for enhanced security.
Governments and industry stakeholders are exploring ways to vet and register verifiers to ensure responsible use of mDL data.

Chapters:
00:00 – Status of Mobile Driving Licenses (mDL) in the US
7:00 – Why Government DMVs like the ISO standard for mobile driving licenses
9:25 – About AAMVA (the American Association of Motor Vehicle Administrators)
14:25 – How do governments perceive the value proposition of issuing mDLs
20:10 – General wallet strategy for DMVs in 2024
27:25 – Where are the opportunities in the mDL verification market?
41:41 – Requiring a registration process for mDL verifiers?
45:17 – Exploring possible new risk vectors that mDL introduces
50:17 – Business model for mDL issuers, and possible disruption to IDV market

Additional resources:
Episode Transcript
American Association of Motor Vehicle Administrators – AAMVA
NIST SP 800-63 Digital Identity Guidelines
ISO-compliant driving licence
W3C Verifiable Credentials Data Model
TSA Facial Recognition and Digital Identity Solutions

About Guest

Sylvia Arndt is a seasoned leader and Vice President of Business Development, Digital Identity at Thales, with over 20 years of experience driving organic growth through innovative software and service solutions. Sylvia excels in identifying strategic opportunities that advance markets and transform business models, with a strong focus on customer advocacy, operational excellence, and cross-functional collaboration. Her expertise spans various industries, including Computer Software, Digital Identity & Security, Aviation, Travel & Hospitality, Communications, Media & Entertainment, Energy, and Government Services. Sylvia’s international reach extends to over 50 countries, where she has worked closely with customers and business partners, demonstrating her leadership in business strategy, product management, operations strategy, and digital transformation.

LinkedIn: linkedin.com/in/sylvia-arndt

  The post Mobile Driving Licenses (mDL) in 2024 (with Sylvia Arndt) appeared first on Northern Block | Self Sovereign Identity Solution Provider.



Ocean Protocol

Passive & Volume Data Farming Airdrop Has Completed; They Are Now Retired

Claim your rewards now. Predictoor DF Forges Ahead. More DF streams to come.

Summary

This article starts by reviewing Ocean Data Farming (DF), the ASI Alliance, and how an ASI “yes” vote would affect Data Farming.

The “yes” happened. This triggered the follow-up actions:

We just completed an airdrop to veOCEAN holders. You can now claim OCEAN via the DF Webapp (https://df.oceandao.org/rewards)
We have retired Passive & Volume DF.
Predictoor DF continues, with room to scale up Predictoor DF and introduce new incentive streams.

1. Background

1.1 Ocean Data Farming

Data Farming (DF) is Ocean Protocol’s incentive program. Rewards are weekly. DF has traditionally had three streams / substreams:

Passive DF. Users lock OCEAN for veOCEAN. The longer you lock or the more OCEAN you lock, the more OCEAN you get. Rewards are pro-rata to veOCEAN holdings. 150,000 OCEAN/week.
Volume DF. Users allocate veOCEAN towards data assets with high data consume volume (DCV), in a curation function. Rewards are a function of DCV and veOCEAN stake. Up to 112,500 OCEAN/week.
Predictoor DF. Run prediction bots to earn continuously. 37,500 OCEAN/week.

Most user DF interactions are via the DF webapp at df.oceandao.org.

1.2 ASI Alliance

Ocean Protocol has been working with Fetch.ai and SingularityNET to form the ASI Alliance, with a unified token $ASI. This Mar 27, 2024 article describes the key mechanisms. It needed a “yes” vote from the Fetch and SingularityNET communities.

1.3 ASI Alliance Impact on DF

In the event of a “yes”, there were important implications for Ocean Data Farming and veOCEAN. This Mar 29, 2024 post describes them. From that post:

To be ready for either outcome [yes or no], we will pause giving rewards for Passive DF and Volume DF as soon as the DF82 payout of Thu Mar 28 has completed. Also in preparation, have taken a snapshot of OCEAN locked & veOCEAN balances as of 00:00 am UTC Wed Mar 27 (Ethereum block 19522003) …
Predictoor DF will continue regardless of voting outcome.

And the section “Actions if ‘yes’ “ held the following key information about veOCEAN, Passive DF, and Volume DF.

veOCEAN will be retired. …
Passive DF & Volume DF will be retired.
People who have locked OCEAN for veOCEAN will be made whole, as follows.
Each address holding veOCEAN will be airdropped OCEAN in the amount of:
(1.25^years_til_unlock-1) * num_OCEAN_locked
In words: veOCEAN holders get a reward as if they had got payouts of 25% APY for their whole lock period (and kept re-upping their lock). But they get the payout soon, rather than over years of weekly DF rewards payouts. It’s otherwise the same commitment, expectations and payout as before.
This airdrop will happen within weeks after the “yes” vote.
That same address will have its OCEAN unlocked according to its normal veOCEAN mechanics and timeline (up to 4 years). After unlock, that account holder can convert the $OCEAN directly into $ASI with the published exchange rate.
Any actions taken by an account on locking / re-locking veOCEAN after the time of the snapshot will be ignored. …

The post also held key information about psdnOCEAN, predictoor DF, and the future of DF.

psdnOCEAN holders will be able to swap back to the OCEAN with a fixed-rate contract. For each 1 psdnOCEAN swapped they will receive >1 OCEAN at a respectable ROI. …
Predictoor DF continues. …
Ocean Protocol Foundation will re-use the DF budget for its incentives programs. These can include: scaling up Predictoor DF [and more].

(We added bold font to help cross-referencing with the “actions” section below.)

2. A “Yes” Happened

As of Apr 16, the vote had concluded. The result was a “yes”.

Artificial Superintelligence Alliance on Twitter: "🎉 It's official! The ASI Alliance is launching - the world's largest decentralized network for accelerating AGI and ASI.Stay tuned for updates on our multi-billion token merger and the incredible things to come!@SingularityNET @Fetch_ai @oceanprotocol pic.twitter.com/Ewh99LlOIY / Twitter"


3. Actions Completed Due to “Yes”

As promised to the Ocean community, we have completed the “Actions if ‘yes’” summarized above. Here are the specifics.

3.1 Promise: veOCEAN will be retired

✅ Action completed: veOCEAN is retired.

The DF webapp functionality to lock veOCEAN is removed. Incentives to lock OCEAN into veOCEAN have been turned off [1].

The DF webapp functionality to withdraw locked OCEAN has been retained; therefore when time passes and the token comes unlocked (up to 4 years), the user can come and withdraw their OCEAN.

3.2 Promise: Passive DF … will be retired

✅ Action completed: Passive DF is retired. (And you should claim your past Passive DF rewards.)

One can no longer enter into Passive DF because it relies on veOCEAN, which is retired. Passive DF rewards are permanently stopped.

To claim your past Passive DF rewards:

Go to DF Rewards page at https://df.oceandao.org/rewards.
In the “Passive Rewards” section, click the “Claim All” button.
This webapp functionality will remain live until Aug 1, 2024. After that, you will have to use the etherscan interface to claim rewards, which is more complex. (Same for Volume DF & airdrop below.)

3.3 Promise: Volume DF will be retired

✅ Action completed: Volume DF is retired. (And you should claim your past Volume DF rewards.)

One can no longer enter into Volume DF because it relies on veOCEAN, which is retired. Volume DF rewards are permanently stopped.

To claim your past Volume DF rewards:

Go to DF Rewards page at https://df.oceandao.org/rewards.
In the “Active Rewards” section, click the “Revoke Token Lock Approval + Claim All” button. This will claim both past Volume DF Rewards and DF Airdrop Rewards.

3.4 Promise: Each address holding veOCEAN will be airdropped OCEAN

[according to the formula, using the Mar 27 snapshot]

✅ Action completed: Each address holding veOCEAN has been airdropped OCEAN. You should claim your airdrop rewards.

The OCEAN reward amount is according to the formula, using the Mar 27 snapshot (as discussed above). The reward is as if you had got payouts of 25% APY for your whole veOCEAN lock period (and kept re-upping your lock). The Appendix gives an example of payout amounts.
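
As a quick sanity check of that formula, here is a minimal TypeScript sketch. Variable names are illustrative; this is not code from the DF webapp.

// Airdrop formula from the Mar 29 post: (1.25^years_til_unlock - 1) * num_OCEAN_locked
function airdropAmount(oceanLocked: number, yearsTilUnlock: number): number {
  return (Math.pow(1.25, yearsTilUnlock) - 1) * oceanLocked;
}

// The worked example from the Appendix: 100 OCEAN locked for 4 years
console.log(airdropAmount(100, 4).toFixed(1)); // ~144.1 OCEAN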

Here is the reward per address.

To claim DF Airdrop rewards:

It’s all inside the “Active Rewards” section, just like Volume DF. Therefore…
Go to DF Rewards page at https://df.oceandao.org/rewards.
In the “Active Rewards” section, click the “Revoke Token Lock Approval + Claim All” button. This will claim both past Volume DF Rewards and DF Airdrop Rewards.

3.5 Promise: psdnOCEAN reward with respectable ROI

Promise expanded: holders will be able to swap back to the OCEAN with a fixed-rate contract. For each 1 psdnOCEAN swapped they will receive >1 OCEAN at a respectable ROI.

✅ Action, over next several days: The ROI is set to 1.25. There are a small number (17) of psdnOCEAN holders, so we are handling them manually. Here are our steps:

Start from this GSheet “snapshot of psdnOCEAN balances” page
Have each user send their psdnOCEAN to 0xad0A852F968e19cbCB350AB9426276685651ce41 (DF Treasury multisig)
For each inbound tx, we will log it in the GSheet “transactions” page
Then we will compute the user’s OCEAN reward, and send it to the user

3.6 Promise: Predictoor DF continues

✅ Action completed: Predictoor DF has continued while the other DF substreams were paused. It keeps going.

3.7 Promise: Ocean Protocol Foundation will re-use the DF budget for its incentives programs

✅ Action on track: We plan to scale up Predictoor DF rewards over time, especially as it hits development milestones [Ref 2024 roadmap, sec 2.2].

Other potential DF incentives include for running Unified Backend nodes [roadmap, sec 3.2], and for decentralized large-scale model training to support a world model on ground-truth physics [roadmap, sec 2.2].

Given this restructuring, the previous long-term rewards schedule will not be used. Instead, we adapt the Data Farming budget week-by-week and month-by-month to maximize the bang-for-buck of each OCEAN deployed for rewards.

4. Conclusion

This article reviewed Ocean Data Farming (DF), the ASI Alliance, and how an ASI “yes” vote would affect Data Farming.

The “yes” happened. This triggered the follow-up actions:

We just completed an airdrop to veOCEAN holders. Users can now claim OCEAN via the DF Webapp (https://df.oceandao.org/rewards)
We have retired Passive & Volume DF.
Predictoor DF continues, with room to scale up Predictoor DF and introduce new incentive streams.

Appendix: Worked DF Airdrop Example

Example. Alice recently locked 100 OCEAN for a duration of four years, and had received 100 veOCEAN to her account.

She will get airdropped (1.25⁴–1)*100 = (2.44–1)*100 = 144 OCEAN soon after the “yes” vote.
In four years, her initial 100 OCEAN will unlock.
In total, Alice will have received 244 OCEAN (144 soon, 100 in 4 years). Her return is approximately the same as if she’d used Passive DF & Volume DF for 4 years and got 25% APY, that is: 1.25⁴ * 100 = 2.44 * 100 ≈ 244 OCEAN. Yet this updated scheme benefits her more, because 144 of that 244 OCEAN is liquid soon.

Notes

[1] We can’t actually turn off the veOCEAN contract on the Ethereum mainnet. Therefore someone could still lock OCEAN for months to years by talking directly to that contract, and then see their OCEAN unlock months to years later. It’s any user’s prerogative if they wish. But there’s no real incentive to do so.

Further resources

If you have more questions about the changes and how they apply to you, you can always contact us on:

Discord: https://discord.gg/TnXjkR5

Telegram: https://t.me/oceanprotocol_community

Beware of scams

It is crucial for you to remain alert. This is a prime time for scammers to capitalize and present you with offers to trick you.

Here’s how you can stay safe:

Check official Ocean Protocol communication: For this particular airdrop, use this blogpost as reference as well as the information published on https://df.oceandao.org/rewards.
Always double-check: Before engaging in any actions involving your tokens, verify the information directly from official sources. Cross-reference any announcements on the official X profile of Ocean Protocol.
Use official websites: Manually type URLs into your browser and avoid clicking on unsolicited links. Impersonator websites may mimic official sites to deceive and steal your tokens. Our official website is oceanprotocol.com.
Telegram and Discord safety: Only trust links in pinned notices by Admins, and avoid private message solicitations. Our admins will never proactively message you or ask you to click on links. Also, our admins do not offer ticket support in Discord.
Independent verification: No Admin or official representative will contact you directly to assist with wallet operations or token swaps. Always initiate contact through official channels if you need assistance.
Keep your keys private: Never disclose your wallet’s private key or seed phrase (12, 15, 18, or 24 words) to anyone or enter them on any website.
Responding to scams: If you suspect a scam, report it to our admins on Discord and Telegram. Beware of secondary scams offering token recovery for a fee.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable businesses and individuals to trade tokenized data assets seamlessly to manage data all along the AI model life-cycle. Ocean-powered apps include enterprise-grade data exchanges, data science competitions, and data DAOs. Follow Ocean on Twitter or TG, and chat in Discord.

In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Predictoor has over $800 million in monthly volume, just six months after launch with a roadmap to scale foundation models globally. Follow Predictoor on Twitter.

Data Farming is Ocean’s incentives program.

Passive & Volume Data Farming Airdrop Has Completed; They Are Now Retired was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

Multi-Chain Tokenization Made Simple

The post Multi-Chain Tokenization Made Simple appeared first on Tokeny.

Product Focus

Multi-Chain Tokenization Made Simple

This content is taken from the monthly Product Focus newsletter in April 2024.

I’m excited to share the latest advancements at Tokeny, reinforcing our leadership in multi-chain tokenization capabilities to better serve innovative issuers like you.

We recognize that every issuer has unique needs. Our mission is to remove blockchain barriers with our network-agnostic tokenization platform. With our latest developments in this area, here’s how you can benefit:

Effortless Tokenization on Any EVM Chain: Our SaaS solutions and APIs empower issuers to seamlessly tokenize assets on their preferred EVM-compatible network. For integrated chains like Polygon, Avalanche, Klaytn, and Telos, issuers can quickly deploy tokens within minutes. With our scalable technology, we can promptly integrate with any EVM chain, and we are expanding to include additional chains such as Base, IOTA EVM, and new chains required by our clients.

Seamless Token Migration Between Chains: In the dynamic landscape of blockchain and financial markets, the ability to shift tokens between networks is crucial for risk management. Our team guarantees seamless token migration from one chain to another, preserving all records from the previous chain and maintaining consistent cap table views at our platform despite network transitions.

Unified Multi-Chain Management Platform: As the future of on-chain finance embraces multi-chain environments, our platform is here to support you. It offers a unified interface for you, your agents, and your investors to manage tokens, whether you’re issuing tokens on one blockchain or across multiple chains, all within a single centralized software solution.

As always, our dedicated team is committed to delivering cutting-edge solutions to equip you with the tools you need to thrive in the digital asset landscape.

Thank you for your continued support of Tokeny.

Joachim Lebrun, Head of Blockchain

Subscribe Newsletter

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Other Product Focus Blogs

Multi-Chain Tokenization Made Simple, 3 May 2024
Introducing Leandexer: Simplifying Blockchain Data Interaction, 3 April 2024
Breaking Down Barriers: Integrated Wallets for Tokenized Securities, 1 March 2024
Tokeny’s 2024 Products: Building the Distribution Rails of the Tokenized Economy, 2 February 2024
ERC-3643 Validated As The De Facto Standard For Enterprise-Ready Tokenization, 29 December 2023
Introducing Multi-Party Approval for On-chain Agreements, 5 December 2023
The Unified Investor App is Coming…, 31 October 2023
Introducing WalletConnect V2: Discover the New Upgrades, 29 September 2023
Tokeny becomes the 1st tokenization platform to achieve SOC2 Type I Compliance, 1 September 2023
Permissioned Tokens: The Key to Interoperable Distribution, 28 July 2023

Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us


The post Multi-Chain Tokenization Made Simple appeared first on Tokeny.


Ocean Protocol

Revealing the Secrets of Startup Success: A Venture Capital Investments Challenge

Podium : Venture Capital Investments Data Challenge Introduction The Venture Capital Investments Challenge engaged data scientists and analysts to decode the complexities of startup funding and success. This challenge drew on an extensive dataset covering various aspects of the venture capital ecosystem. Key datasets included acquisitions, degrees, funding rounds, funds, investments, IPOs, mi

The Venture Capital Investments Challenge engaged data scientists and analysts to decode the complexities of startup funding and success. This challenge drew on an extensive dataset covering various aspects of the venture capital ecosystem. Key datasets included acquisitions, degrees, funding rounds, funds, investments, IPOs, milestones, objects, offices, people, relationships, and several specialized sets designed for in-depth analysis.

Participants analyzed over 66,368 entries, exploring startup funding details and investor engagement. They examined geographical impacts on startups and the influence of educational backgrounds and degrees. Career trajectories and networks from people.csv and relationships.csv also provided insights into successful entrepreneurship patterns.

Through data processing and model development, participants identified trends and predictive factors in the funding dynamics, market positions, and strategic milestones. This initiative showcased participants’ analytical capabilities and set the stage for advanced predictive modeling in investment strategies.

Winners Podium

The top submissions of this challenge were exceptional. Participants demonstrated outstanding ability in utilizing ML and AI to examine and predict startup success within the venture capital landscape and refine investment strategies. Let’s examine the top three submissions that stood out due to their thorough analytics and insightful conclusions.

1st Place: Ahan

Ahan stood out with his application of machine learning to analyze the venture capital landscape. His detailed analysis focused on the implications of founder demographics and funding dynamics on startup outcomes. He revealed that the median acquisition price among startups with disclosed values was approximately $72.6 million, with an average time from initial investment to acquisition of 695 days. This insight highlights the broad variance in startup valuations and the typical timelines investors might anticipate for returns.

Moreover, in his dataset of over 16,000 instances, Ahan identified significant disparities in success rates by founder gender, with male founders achieving a 40.3% success rate compared to 27.4% for female founders. This finding points to potential systemic biases in the venture capital industry and underscores the need for broader diversity and inclusion initiatives.

2nd Place: Dominikus

Dominikus’ entry in the Ocean Data Challenge leveraged detailed venture capital data to build a predictive model distinguishing successful and unsuccessful startups. He restructured a complex dataset into 14 subsets in his analysis, applying statistical encoding and meticulously handling missing data. His statistical models revealed significant findings: startups in the San Francisco Bay Area, affiliated with Stanford University graduates, demonstrated a 65% higher likelihood of funding success than startups in other regions and educational backgrounds.

In his evaluation, Dominikus used precise statistical methods to measure the efficacy of his models. He reported an accuracy rate of 92%, with a precision of 90% and a recall of 88%, effectively illustrating the predictive strength of his analytical approach. Additionally, the ROC curve for his model achieved an AUC of 0.91, underscoring its robustness in classifying the potential success of startups based on multiple factors, including funding history, investor relationships, and regional economic activities.
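For readers less familiar with these evaluation metrics, the sketch below shows how accuracy, precision, and recall are derived from a binary confusion matrix. It is purely illustrative; the counts are invented and do not come from Dominikus' submission.

```typescript
// Illustrative only: computing accuracy, precision, and recall for a binary
// "successful startup" classifier from confusion-matrix counts.
// The counts below are made up and are not taken from the challenge submission.
interface ConfusionMatrix {
  tp: number; // predicted success, actually successful
  fp: number; // predicted success, actually unsuccessful
  tn: number; // predicted failure, actually unsuccessful
  fn: number; // predicted failure, actually successful
}

function accuracy({ tp, fp, tn, fn }: ConfusionMatrix): number {
  return (tp + tn) / (tp + fp + tn + fn);
}

function precision({ tp, fp }: ConfusionMatrix): number {
  return tp / (tp + fp);
}

function recall({ tp, fn }: ConfusionMatrix): number {
  return tp / (tp + fn);
}

const example: ConfusionMatrix = { tp: 880, fp: 98, fn: 120, tn: 902 };
console.log(accuracy(example).toFixed(2));  // ~0.89
console.log(precision(example).toFixed(2)); // ~0.90
console.log(recall(example).toFixed(2));    // ~0.88
```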

His analysis provided a clear view of the venture capital landscape, offering insights through correlation studies that identified the relationships influencing startup success.

3rd Place: Bhalisa

Bhalisa Sodo’s analytical project thoroughly examined the factors influencing startup success within the venture capital landscape. His method involved detailed data cleaning and segmentation, processing a comprehensive dataset to uncover the dynamics of startup funding and success. Bhalisa used statistical methods to analyze correlations between founder backgrounds, funding mechanisms, and startup outcomes, presenting a quantitative foundation for his conclusions.

In his findings, Bhalisa showed that startups linked to founders from top-tier institutions like Stanford University were 30% more likely to secure funding and achieve successful exits than others. His predictive models showed an impressive accuracy rate, with the Decision Tree Classifier achieving a classification accuracy of 98% and a recall rate of 97%, highlighting its effectiveness in identifying potentially successful startups based on early-stage data inputs.

Moreover, Bhalisa’s research revealed that startups typically received their first significant funding round within the first two years of operation, and those receiving funding within the first year showed a 60% higher probability of reaching an exit through acquisition or IPO within eight years.

His analysis also noted an increasing trend in funding amounts over time, with the average funding per round growing by 15% annually since 2010, reflecting the escalating scale and stakes within the venture capital ecosystem.

Interesting Facts

Higher Success Rates for Stanford Graduates

Startups linked to founders from Stanford University show a 30% higher success rate of securing funding and achieving successful exits than those from other universities. This trend highlights Stanford’s strong network and reputation within the venture capital ecosystem.

Annual Increase in Funding Amounts

Since 2010, the average amount raised per startup funding round has increased by 15% annually. This growth reflects the increasing confidence and investment in startups, driven by the expanding venture capital market and the success rate of technology-driven innovations.

Prevalence of AI and Tech Startups in Investment Portfolios

Over the last decade, investments in AI and technology-focused startups have increased by 35%. This trend reflects the industry’s growing recognition of the transformative potential of AI technologies across various sectors.

Influence of Educational Background on Startup Leadership

Founders with Ivy League educations are 50% more likely to hold C-level positions in their startups. This statistic highlights the strong correlation between prestigious educational backgrounds and leadership roles in high-growth startups, suggesting that education continues to play a critical role in shaping entrepreneurial success.

Gender Funding Gap in Startups

Analysis reveals that male founders receive about 30% more funding rounds and secure 50% higher funding than female founders. Moreover, male-led startups are 20% more likely to reach advanced funding stages, highlighting persistent gender biases in venture capital.

2024 Championship

Each challenge features a prize pool of $10,000, distributed among the top 10 participants. Our championship points system distributes 100 points across the top 10 finishers in each challenge, with each point valued at $100.

Top 10: Venture Capital Investments Data Challenge

By participating in challenges, contestants accumulate points toward the 2024 Championship. Last year, the top 10 champions received an extra $10 for every point they had earned.

Moreover, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists retain their intellectual property rights while we offer assistance in monetizing their creations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

Revealing the Secrets of Startup Success: A Venture Capital Investments Challenge was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Okta vs. Ping: The Best IAM for Digital Security


When it comes to selecting an Identity and Access Management (IAM) solution, the stakes are high. Your choice directly affects your organization's security, user experience, compliance, and the bottom line. To make the best choice for your organization, let's take a closer look at the major differences between two leaders in the IAM space, Okta and Ping, especially when considering an upcoming renewal.


Finicity

Tapping into Open Banking to Identify, Manage and Prevent Identity Fraud in Account Opening


Today’s consumers’ expectations for their financial interactions are changing. They require a digitally native, seamless, consistent, instantaneous experience with their financial provider right from the get-go. No longer are they willing to wait several days for identity verifications or for microdeposits to clear to start using their account.  

Yet, we know that every day bad actors are finding new ways to break the system. As more people and businesses enter the digital economy, it’s critical that we keep them secure across all touchpoints with their accounts and beyond. Financial institutions must protect their customers’ accounts from fraud to ultimately drive primacy, grow deposits and encourage top-of-wallet behaviors, thus helping them recoup the estimated $450 average cost of acquisition.

Open banking is the thread connecting the ecosystem, making account opening faster, more secure, and virtually frictionless.

Here’s a common scenario that financial institutions deal with on a daily basis:  

‘John Doe’ opens a new checking account with ‘AcmeBank’ and is ready to fund it from an existing account he holds with ‘Partnerbank’. How does AcmeBank know that John is the actual owner of the account at Partnerbank? Should AcmeBank proceed with posting the ACH file to the Nacha (ACH) network and let the transaction go through? If John Doe were a bad actor and AcmeBank allowed the payment to go through without doing the appropriate checks, John Doe could move that money elsewhere and AcmeBank could receive an unauthorized payment return from Partnerbank, resulting in fraud losses.

Similarly, some insurance companies simply ask for account and routing number verification before disbursing funds, without verifying the identity of the receiver. Here, John Doe can impersonate another person and use his own personal details to redirect an insurance payout or a payroll disbursement to his account.

What is the ecosystem doing about it?

New rules and guidelines are being published by Nacha – operator of ACH payments – that introduce additional risk management frameworks for ACH senders, as well as recipients. Ecosystem participants such as merchants, ecommerce platforms, lenders, and insurance providers may be required to include account verification and identity verification, multi-factor authentication, velocity tracking and KYC/KYB improvements. Mastercard is a Nacha Preferred Partner for Compliance and Risk and Fraud Prevention with a focus on account validation. 

In addition to more thorough fraud checks being conducted by originators, receivers now also must participate in fraud monitoring and flagging to reduce risk. In the above example, AcmeBank, the receiving financial institution, will also need to perform additional fraud checks.

What can you do? 

Mastercard Open Banking helps financial institutions identify, manage and tackle fraud risk on an ongoing basis.  Examples of our solutions include instant account details verification, device and identity verification. When used in conjunction with other customer fraud solutions, they help secure interactions that consumers have with their financial provider. 

Last year, Mastercard debuted Open Banking Identity Verification for the U.S. market and continues to invest in additional functionality that leverages our extensive fraud and identity networks. Before initiating a transaction, financial institutions can verify a number of factors, including:

- Confirming account ownership information, including name, address, phone and email, in real time
- Validating identity profiles and quantifying identity risk
- Examining the risk level of user activity patterns and associations to detect fraudulent behavior
- Verifying device authenticity and capturing signals of device fraud

Beyond Open Banking Identity Verification, Mastercard offers services to streamline account funding, including:  

- Account Owner Verification: A one-time API request that returns the account owner(s) name, address, email and phone number for a select account. This verifies that the bank account being linked is owned by the person opening a new account and complements KYC risk mitigation in real time.
- Account Detail Verification: Instantly authenticates and verifies account details, including account and routing numbers, to help mitigate fraud, reduce manual entry errors and maximize confidence in payment transactions.
- Account Balance Check: Easily determines account balance before moving funds to a new account. This ensures that the amount being moved to the new account is available with an accurate, real-time balance snapshot, and reduces costly NSF returns.
- Payment Success Indicator: A score that predicts a transaction’s likelihood to settle for a specific consumer “today” and up to nine days in the future.
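As a rough illustration of how an application might consume services like these, here is a minimal TypeScript sketch of calling an account-owner-verification style REST endpoint. The base URL, path, headers, and response fields are placeholders invented for this example, not the actual Mastercard Open Banking API; the real routes, authentication scheme, and payloads are defined in the developer documentation referenced later in this post.

```typescript
// Hypothetical sketch only: the endpoint path and response shape are
// placeholders, not the actual Mastercard Open Banking API. Consult the
// official developer documentation for real routes and authentication.
interface AccountOwner {
  name: string;
  address: string;
  email?: string;
  phone?: string;
}

async function verifyAccountOwner(
  apiBase: string,      // Open Banking API host (placeholder)
  accessToken: string,  // token obtained after consumer consent (placeholder flow)
  accountId: string
): Promise<AccountOwner[]> {
  const response = await fetch(`${apiBase}/accounts/${accountId}/owner`, {
    headers: { Authorization: `Bearer ${accessToken}`, Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Owner verification failed: ${response.status}`);
  }
  const body = await response.json();
  // Compare the returned owner details against the applicant's KYC data here.
  return body.owners as AccountOwner[];
}
```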

Now let’s look at the journey again with our solutions: 

- The consumer opens a new checking account with ‘AcmeBank’ and is ready to fund it using an existing bank account at ‘Partnerbank’.
- The consumer agrees to the T&Cs and gives permission through Mastercard’s Connect widget for their bank data to be accessed and shared with AcmeBank.
- The consumer selects their Partnerbank account and enters banking login credentials (or biometrics where applicable).
- The consumer selects the funding account and amount.
- AcmeBank calls the above APIs in the background to check account and identity details in real time and proceeds with processing the payment.

Get ahead and get prepared! Check out the Mastercard Open Banking developer page for technical documentation or reach out to your Mastercard representatives to learn more. 

The post Tapping into Open Banking to Identify, Manage and Prevent Identity Fraud in Account Opening appeared first on Finicity.


KuppingerCole

1Kosmos Platform


by Martin Kuppinger

This KuppingerCole Executive View report looks at the 1Kosmos platform, a solution supporting an integrated and comprehensive approach on identity verification and passwordless authentication, backed by Distributed Ledger Technology (DLT), enabling worker, customer and resident use cases.

Thursday, 02. May 2024

Microsoft Entra (Azure AD) Blog

Public preview: External authentication methods in Microsoft Entra ID


Hi folks,

 

Today I’m thrilled to share that the public preview of external authentication methods in Microsoft Entra ID is scheduled for release in the first half of May. This feature will allow you to use your preferred multifactor authentication (MFA) solution with Entra ID.

 

Deploying MFA is the single most important step to securing user identities. A Microsoft Research study of MFA effectiveness showed that the use of MFA reduced the risk of compromise by more than 99.2%! Some organizations have already deployed MFA and want to reuse that MFA solution with Entra ID. External authentication methods allow organizations to reuse any MFA solution to meet the MFA requirement with Entra ID.

 

Some of you might be familiar with custom controls. External authentication methods are the replacement for custom controls, and they provide several benefits over the custom controls approach. These include: 

 

- External authentication method integration uses industry standards and supports an open model
- External authentication methods are managed the same way as Entra methods
- External authentication methods are supported for a wide range of Entra ID use cases (including PIM activation)

 

I've invited Greg Kinasewitz, Product Manager for Microsoft Entra ID, to tell you more about this new capability.

 

Thanks, and as always, let us know what you think!

 

Nitika Gupta

Group Product Manager

 

--

 

Hi folks,

 

Greg here. I’m super excited to walk you through some of the key capabilities of external authentication methods and readiness from partners. 

 

We’ve heard from some of you about wanting to use another MFA solution along with the power of Entra ID functionality like the rich features of Conditional Access, Identity Protection, and more. Customers using Active Directory Federation Services (AD FS) with a deployment of another MFA solution have been vocal in wanting this functionality so they can migrate from AD FS to Entra ID. Organizations that are using the Conditional Access custom controls preview have given feedback on needing a solution that enables more functionality. External authentication methods enable your users to authenticate with an external provider as part of satisfying MFA requirements in Entra ID to fill these needs.

 

What are external authentication methods, and how do you use them?

 

External authentication methods can be used to satisfy MFA requirements from Conditional Access policies, Privileged Identity Management role activation, Identity Protection risk-based policies, and Microsoft Intune device registration. They’re created and managed as part of the Entra ID authentication methods policy, which gives consistent manageability and a consistent experience with the built-in methods. You’ll add an external authentication method with the new “Add external method” button in the Entra admin center authentication methods management.

 

Figure 1: External authentication methods are added from and listed in authentication methods policies admin experience.

 

When a user is choosing a method to satisfy MFA, external authentication methods are listed alongside built-in methods that the user can use.

 

Figure 2: External authentication methods are shown next to the built-in methods during sign-in.

 

To learn more, check out our documentation.

 

What providers will support external authentication methods?

 

At launch, external authentication methods integrations will be available with the following identity providers. Please check with your identity provider to find out more about availability:

 

 

In addition to the providers that now have integrations in place, external authentication methods are built on a standards-based open model: any authentication provider that wants to build an integration can do so by following the integration documentation.

 

We’re super excited for you to be able to start using external authentication methods to help secure your users, and we’re looking forward to your feedback!! 

 

If you want to learn more about these integrations, please visit the Microsoft booth at the RSA Conference next week. There will also be an RSA Conference session hosted by Microsoft Intelligent Security Association (MISA) where Duo will showcase their external authentication methods integration.

  

Register for our webinar on May 15 to learn more about external authentication methods, see demos, and join in the discussion.

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Public preview: Expanding passkey support in Microsoft Entra ID


We really, really want to eliminate passwords. There’s really nothing anyone can do to make them better. As more users have adopted multifactor authentication (MFA), attackers have increased their use of Adversary-in-the-Middle (AitM) phishing and social engineering attacks, which trick people into revealing their credentials.  

 

How can we defeat these attacks while making safe sign-in even easier? Passkeys!  

 

A passkey is a strong, phishing-resistant authentication method you can use to sign in to any internet resource that supports the W3C WebAuthn standard. Passkeys represent the continuing evolution of the FIDO2 standard, which should be familiar to anyone who’s followed or joined the passwordless movement. We already support signing into Entra ID using a passkey hosted on a hardware security key, and today we’re delighted to announce additional support for passkeys. Specifically, we’re adding support for device-bound passkeys in the Microsoft Authenticator app on iOS and Android for customers with the strictest security requirements.

 

Before we describe the new capabilities we’re adding to Microsoft Authenticator, let’s review the basics of passkeys.

 

Passkeys neutralize phishing attempts

 

Passkeys provide high security assurance by applying public-private key cryptography and requiring direct interaction with the user. As I detailed in a previous blog, passkeys benefit from “Verifier Impersonation Resistance”: 

 

- URL-specific. The provisioning process for passkeys records the relying party’s URL, so the passkey will only work for sites with that same URL.
- Device-specific. The relying party will only grant access to the user if the passkey is synched, stored, or connected to the device from which they’re requesting access.
- User-specific. The user must prove they’re physically present during authentication, usually by performing a gesture on the device from which they’re requesting access.

 

Together, these characteristics make passkeys almost impossible to phish.
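To make these properties concrete, here is a minimal browser-side sketch of registering a passkey with the W3C WebAuthn API (navigator.credentials.create). The relying party ID, user details, and challenge handling are placeholders; in practice the challenge and options come from your server, and the resulting credential is sent back for verification.

```typescript
// Minimal sketch of creating a passkey with the WebAuthn API in a browser.
// "example.com", the user fields, and the locally generated challenge are
// placeholders; a real deployment fetches the challenge/options from its server.
async function registerPasskey(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { id: "example.com", name: "Example Corp" },        // URL-specific binding
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // ES256
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // user must be present and verified
    },
  };

  const credential = await navigator.credentials.create({ publicKey });
  // Send `credential` to the server to register the public key for this user.
  console.log("Created passkey credential of type", credential?.type);
}
```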

 

You can host passkeys on dedicated hardware security keys, phones, tablets, and laptops

 

Users can host their passkeys on dedicated hardware security keys (such as FIDO2 security keys) or on user devices such as phones, tablets, or PCs. Windows 10/11, iOS 17, and Android 14 are examples of user device platforms that support passkeys. Each supports signing in with a passkey hosted directly on the user device itself or by connecting to a nearby user device or security key that hosts the passkey, such as a mobile device within Bluetooth range, an NFC-enabled security key, or a USB security key plugged into the user device.

 

If your organization issues dedicated hardware security keys, you sign in by inserting your key into a USB port or tapping it on the NFC reader and then performing the PIN or biometric verification it requires.

 

To sign in using a passkey on a user device, simply scan your face or fingerprint with your device or enter your device PIN. It’s also simple to sign in to an application on a separate device, such as a new phone or a PC. Point the camera of the device hosting your passkey at the QR code displayed on the separate device and use your passkey along with your biometric or PIN to sign in. You may have already followed this process by using an Android phone or iPhone to sign into services such as Amazon.com.

 

Passkeys may be device-bound or syncable

 

Depending on the scenario, you may prefer a device-bound passkey or a syncable passkey.  

 

A device-bound passkey, as the name suggests, never leaves the device to which it’s issued. If you sign in using a security key or Windows Hello, you’re using a device-bound passkey. By definition, you can’t back up or restore a device-bound passkey, because during these operations the passkey would leave the hardware element. This restriction is important for organizations that must, sometimes by law, protect passkeys from any security vulnerabilities that could arise during synchronization and recovery.

 

While they offer strong security, dedicated hardware keys can be expensive to issue and manage. If you lose, replace, or destroy the dedicated device, you must provision a brand-new passkey on a new device. And since device-bound passkeys aren’t portable or recoverable, they increase friction for people trying to move away from passwords. To simplify the experience for users who don’t operate in highly regulated environments, the industry introduced support for syncable passkeys. You can back up and recover a syncable passkey, which makes it possible to share the same passkey between devices or to restore it if you lose or upgrade your device—there’s no need to provision a new one.

 

Syncable passkeys on user client devices are easy to use, easy to manage, and offer high security

 

Syncable passkeys on user devices are exciting because they address many of the toughest usability and recoverability challenges that have confronted organizations trying to move to passwordless, phishing-resistant authentication. Hosting the passkey on the user’s device means organizations don’t have to issue or manage a separate device, and syncing it among the user’s client devices and the cloud massively reduces the expense of recovering and reissuing device-bound keys. And on top of all this, replacing passwords with passkeys thwarts more than 99% of identity attacks.

 

We expect this combination of benefits will make syncable passkeys the best option for the vast majority of users and organizations. Android and iOS devices can host syncable passkeys today, and we’re working to add support in Windows by this fall. Our roadmap for 2024 includes support for both device-bound and syncable passkeys in Microsoft Entra ID and Microsoft consumer accounts. Stay tuned for further announcements later this year.

 

Device-bound passkeys in Microsoft Authenticator

 

Industry or governmental regulation, or other highly strict security policies, require that some enterprises and government agencies use device-bound passkeys for signing in to Microsoft Entra. This small fraction of organizations has strict requirements governing the recovery of lost credentials and for preventing employees from sharing credentials with anyone else. Nonetheless, these organizations also want the usability, manageability, and deployment benefits of storing passkeys on user-client devices such as mobile phones.

 

Advantages of hosting passkeys on a user device: 

- Organizations don’t have to provision dedicated hardware.
- Users are less likely to lose track of their daily computing device.
- It’s easy to sign in with a passkey hosted on a user device.

 

We know that device-bound keys are a must-have for many of our largest, most regulated, and most security-conscious customers. That’s why we’ve been collaborating with these customers, along with the broader FIDO community, to provide additional options. As part of this work, we’re adding support for device-bound passkeys in the Microsoft Authenticator app on iOS and Android. Instead of provisioning separate devices, high-security organizations can now configure Entra ID to let employees sign in using their existing phone and their device-bound passkey. Users get a familiar phone interface, including biometrics or a local lock-screen PIN or password, while their organizations meet strict security requirements because users can’t sync, share, or recover any device-bound passkey hosted in Microsoft Authenticator.

 

Organizations that use device-bound passkeys trade away the large investments that vendors such as Google (see related article) and Apple (see related article) have made in high-security, self-service passkey recovery models in exchange for meeting strict regulatory or security requirements. They become responsible for sharing and recovering device-bound passkeys, including those hosted in Microsoft Authenticator.

 

For detailed guidance on how to get started with device-bound passkeys hosted in Microsoft Authenticator, please refer to our documentation.

 

Microsoft’s commitment to passwordless authentication

 

Microsoft is continuing to enhance our support for passkeys in products such as Entra, Windows, and Microsoft accounts. Please continue to send us feedback, so we can help you eliminate passwords from your environment forever.

 

Alex Weinert 

VP Director of Identity Security, Microsoft

 

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


KuppingerCole

eIDAS2: A Gamechanger for Global Digital Identity – Implications and Opportunities


by Joerg Resch

As eIDAS2 prepares to go live, its implications extend far beyond the borders of the European Union, setting a new global standard for digital identity management. Organizations worldwide need to understand and prepare for these changes, ensuring they can operate effectively in a new era of digital identity. The 2024 European Identity and Cloud Conference (EIC) provides a unique opportunity to gain insights, share knowledge, and prepare for the future of digital identity, where security, privacy, and user control are at the forefront of digital transactions.

In a digital era characterized by an increasing reliance on online services and transactions, the security and reliability of digital identities have never been more critical. The European Union has taken a groundbreaking step with the publication of EU Regulation 2024/1183, officially known as eIDAS2. This new regulation, set to come into force on May 20, 2024, not only strengthens the framework for digital identities within the EU but also sets a global precedent for how digital identity services can be managed and utilized. Its implementation, which coincides with EIC 2024, could reshape organizational strategies worldwide.

What is eIDAS2?

eIDAS2 builds on the original electronic Identification, Authentication and trust Services (eIDAS) regulation aimed at enhancing trust in electronic transactions across the EU. The revision introduces several pivotal elements, most notably the European Digital Identity Wallets (EDIW). These wallets serve as secure digital tools that allow EU citizens and businesses to store, manage, and utilize personal identification data and electronic attestations of attributes seamlessly across borders. This framework ensures that every natural and legal person in the EU can access public and private services online without sacrificing control over their personal data.

The Significance of eIDAS2 for Organizations Globally

eIDAS2 is not just a regulatory framework for Europe; it is a beacon for global digital identity management. Organizations around the world should pay attention to these developments for several reasons:

- Standard Setting in Digital Identity: eIDAS2 sets a high standard for privacy, security, and interoperability that could become a global benchmark. Non-EU organizations dealing with European partners will need to understand these standards to ensure compliance and smooth interactions.
- Enhanced Security and Trust: With the introduction of conformity assessment bodies and certification mechanisms, eIDAS2 ensures that digital identity tools and services are reliable and secure. This level of trustworthiness is something organizations worldwide might emulate to enhance their digital identity solutions.
- Innovation in Identity Management: The EDIW promotes innovation in how identities and attributes are managed and utilized. Organizations can use this model to develop similar solutions, improving customer experiences and operational efficiencies.

Implications for Accessing Services

A key component of eIDAS2 is its inclusivity. The regulation mandates that the use of the EDIW is voluntary and that services cannot discriminate against those who choose not to use digital wallets. This principle may influence global service delivery models, emphasizing the need for flexibility in how services and identities are managed digitally.

Relevance to the Global Digital Economy

The digital economy is inherently borderless, where services and goods traverse national boundaries in milliseconds. The eIDAS2 framework facilitates this movement in the EU, potentially creating a ripple effect worldwide as other regions seek to ensure their digital identity systems are interoperable with Europe's. This alignment could lead to smoother transactions, enhanced security, and a more connected global digital economy.

European Digital Identity Wallets: A Closer Look

EDIWs are at the heart of eIDAS2. They allow users to control their identity data fully, choosing when and how much information to share when accessing services. This user-centric approach not only enhances privacy but also empowers individuals, fostering a more trustful digital environment. For organizations, understanding how these wallets work and integrating compatible systems will be crucial.

OpenID4VC: Enhancing the European Digital Identity Wallet

One of the core elements of the EDIW is OpenID for Verifiable Credentials (OpenID4VC), a protocol that stands to revolutionize the way verifiable credentials are exchanged and managed within the eIDAS2 framework. OpenID4VC facilitates the secure and seamless exchange of credentials between issuers, holders, and verifiers, making it a pivotal component in the implementation of the EDIW.

This protocol not only simplifies the process of verifying credentials in real time but also ensures that all transactions adhere to the highest standards of security and privacy mandated by eIDAS2. By integrating OpenID4VC, the EDIW allows users to assert personal data or attributes stored in their wallets without revealing any more information than necessary. This capability is crucial for maintaining user privacy and control over personal information. For organizations globally, understanding and implementing OpenID4VC will be essential to interact efficiently with European entities under the new regulations. The protocol's adoption could also set a precedent for similar initiatives worldwide, promoting a more interconnected and interoperable digital identity landscape. The integration of OpenID4VC into the EDIW exemplifies the EU’s commitment to pioneering advanced, user-centric digital identity solutions that could influence future developments in global digital identity frameworks.
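As a purely illustrative example of the kind of exchange OpenID4VC standardizes, the sketch below shows the rough shape of an OpenID4VCI credential offer that an issuer could hand to a wallet. The field names follow a recent draft of the specification and vary between draft versions; the issuer URL, credential identifier, and code are invented placeholders, not values from any real EUDI Wallet deployment.

```typescript
// Illustrative shape of an OpenID4VCI credential offer. Field names follow a
// recent draft of the spec and differ between drafts; all values are placeholders.
const credentialOffer = {
  credential_issuer: "https://issuer.example.eu",
  credential_configuration_ids: ["eu.example.pid_sd_jwt"],
  grants: {
    "urn:ietf:params:oauth:grant-type:pre-authorized_code": {
      "pre-authorized_code": "one-time-code-from-issuer", // placeholder
    },
  },
};

// A wallet typically receives an offer like this via a QR code or deep link,
// resolves the issuer's metadata, exchanges the pre-authorized code for an
// access token, and then requests the credential, later presenting only the
// attributes the user chooses to disclose.
console.log(JSON.stringify(credentialOffer, null, 2));
```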

eIDAS 2 is the Key Topic at EIC

The fact that EIC, Europe's leading conference on Digital ID coincides with the enforcement of eIDAS2 is serendipitous for all stakeholders. This convergence will provide a platform for immediate feedback, discussions, and strategy development among policymakers, industry leaders, and technology developers. For attendees, it offers a firsthand look at the regulation's rollout and immediate implications, making it an essential event for anyone involved in digital identity, cybersecurity, or European market operations. Join Europe’s identity community at #EIC2024 to learn more about eIDAS 2 in Germany: Progress, Impact, Challenges; eIDAS Architecture Reference Framework Status and Progress; eIDAS 2, the Protocol Challenge and the Art of Timing; and EUDI Wallet Use Cases and hear top-level discussions on The Future History of Identity Integrity, the Latest on eIDAS Legislation and What it Means for People, Business and Government, Real-World Examples of How Smart Wallets will Transform how we Navigate our Digital World, and The Wallets We Want. To discover all the other sessions dedicated to eIDAS2 as well as what else EIC has in store, have a look at the Agenda Overview.


Oracle SQL Firewall


by Alexei Balaganski

It might be just my pet peeve, but I never really liked the term “Firewall”. Of course, the history of IT is full of words that have completely changed their meaning over the decades yet still occasionally cause discussions among experts. Is antivirus dead, for example, considering that they stopped making real viruses years ago?

Firewall, however, stands out even more. The original brick-and-mortar one had one purpose only: to limit the spread of fire between buildings. A real firewall does not have any holes, and surely, it cannot apply any logic to different kinds of fire… A network firewall, however, could. That was its primary purpose – to filter network traffic based on defined rules, letting “good” traffic in, and keeping malicious stuff out. Over the following decades, the concept has evolved significantly, with next-generation firewalls adding capabilities like deep packet inspection, intrusion prevention, and even identity management.

Multiple ranges of specialized products have emerged, like Web Application Firewalls specializing in protecting web apps and APIs, or even Database Firewalls designed to prevent SQL-specific attacks on relational databases. A modern firewall is thus a sophisticated solution that combines multiple layers of security, often powered by behavior analytics and artificial intelligence – a far cry from the original rules-based one. Is it even fair to continue referring to them as brick walls?

I can see you asking me already: why have I even brought this pet peeve of mine up? Well, recently I was looking at a new security tool — Oracle SQL Firewall — which the company has built into its upcoming Oracle Database 23ai release. And while I wholeheartedly agree with the product’s vision, surely, calling it just a firewall is a bit odd.

You see, all past and current firewalls (even Oracle’s own specialized Database Firewall) are operating on the network level, forming a perimeter around a resource that requires protection and filtering traffic between it and its clients. The problem is that in the modern hyperconnected world, there are so many potential network paths between sensitive data and potentially malicious actors that protecting them all appears to be impossible.

This is why the concept of data-centric security emerged years ago, focusing on protecting data itself throughout its entire lifecycle instead of constantly plugging holes in existing networks, servers, and applications. Oracle Database’s “killer feature” has always been the ability to keep all kinds of information (relational, document- and graph-based, spatial, and even vector) in one place and run complex workloads like AI training directly within the database. Integrating an additional security layer to prevent SQL-level attacks directly into the DB core is therefore a major step towards data-centric security.

The new Oracle Database 23ai adds multiple new capabilities that can also create new attack vectors. For example, using Select AI to generate SQL queries from natural language prompts is a great tool for data analysts and business application developers. But to enable it, a database must communicate with an external large language model, and conventional firewalls simply cannot protect it from potential abuse.

Figure 1: High-level overview of SQL Firewall’s architecture

Oracle SQL Firewall, on the other hand, operates directly in the database core, making sure that it cannot be bypassed, regardless of the SQL execution path – whether coming from an external client, a local privileged user, or a generative AI solution. Residing directly in the database also provides it with the necessary insight into all the activities happening there.

Over time, it learns how various clients work with the data, establishing their behavior profiles and creating policies based on actions that are allowed for specific data. These allow-lists explicitly define what SQL statements a specific database user is supposed to perform. Everything else – suspicious anomalies, zero-day attacks, and, of course, SQL injection attacks – is blocked. However, it is possible to run SQL Firewall in a permissive mode as well, just generating audit records and alerts.
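To illustrate the allow-list idea in general terms, here is a short conceptual sketch of how per-user allow-listing of normalized SQL statements could gate execution, with anything outside the list blocked or merely logged in a permissive mode. This is not Oracle's API and not how SQL Firewall is configured; it only illustrates the technique the product applies inside the database core.

```typescript
// Conceptual sketch of per-user SQL allow-listing; this is NOT Oracle's API,
// just an illustration of the technique SQL Firewall applies inside the database.
type Mode = "blocking" | "permissive";

const allowLists: Record<string, Set<string>> = {
  // Normalized statement shapes a given database account is expected to run.
  app_reporting: new Set([
    "SELECT ORDER_ID, TOTAL FROM ORDERS WHERE CUSTOMER_ID = :B",
  ]),
};

function normalize(sql: string): string {
  // Very crude normalization for the example: collapse whitespace, uppercase.
  return sql.replace(/\s+/g, " ").trim().toUpperCase();
}

function checkStatement(user: string, sql: string, mode: Mode): boolean {
  const allowed = allowLists[user]?.has(normalize(sql)) ?? false;
  if (!allowed && mode === "blocking") {
    throw new Error(`Blocked: statement not on allow-list for ${user}`);
  }
  if (!allowed) {
    console.warn(`Audit: unexpected statement from ${user}: ${sql}`); // permissive mode
  }
  return allowed;
}

// Example: an injected query that was never captured for this user is flagged.
checkStatement("app_reporting", "SELECT * FROM USERS WHERE 1=1", "permissive");
```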

This protection is not only ubiquitous and impossible to bypass, but also completely transparent to any client applications, local or remote. There is no need to change existing network settings, introduce a proxy service or give a third-party vendor access to your sensitive data for monitoring. As an added benefit, SQL Firewall incorporates mandatory identity checks through session context, making credential theft or abuse much more difficult.

Of course, Oracle has offered several security tools with similar coverage for years, including Audit Vault and Database Firewall, and they are in some ways even more capable, providing coverage for non-Oracle databases as well. However, SQL Firewall is a core function of the new 23ai release, not an additional product. It is currently available in Oracle Database Enterprise Edition and requires either Oracle Database Vault or Oracle Audit Vault and Database Firewall.

Its configuration can be managed in several ways: either using the Oracle Cloud’s UI (exposed through Oracle Data Safe) or by utilizing command line tools or APIs. Needless to say, it is available at no extra cost and has negligible performance overhead. This way, it not only implements data-centric security, but also helps enforce the “security by design” principle and facilitates the adoption of Zero Trust architectures.

So, is SQL Firewall supposed to replace all the other data security tools? Not at all: its goal is to add another layer of protection to an existing defense-in-depth stack. Often, it will in fact be the last line of defense, positioned directly in front of your sensitive data. Should it be called a firewall? Again, while I personally don’t like the term, a rose by any other name would smell as sweet… As KuppingerCole Analysts always stress, you should not judge by labels; always check the actual capabilities offered by a product.

With Oracle’s new solution, you can address two major problems at the same time: protecting databases from SQL-based attacks and implementing 100% audit coverage of database activities. Not bad for a firewall, I think…

Wednesday, 01. May 2024

Elliptic

OFAC sanctions Russian drone developer Oko Design Bureau


The US Treasury’s Office of Foreign Assets Control (OFAC) has today sanctioned Oko Design Bureau and added 3 crypto addresses belonging to it to the Specially Designated Nationals (SDN) list as part of its Russia-related designations. 


Shyft Network

Veriscope Regulatory Recap — 16th April to 30th April 2024


Welcome to our latest edition of Veriscope Regulatory Recap. In this edition, we will break down recent developments in cryptocurrency regulations across Europe and the UK.

Europe Strengthens Crypto Oversight with New Regulations

The European Parliament recently passed a new set of rules aimed at making cryptocurrency transactions safer and more transparent.

These rules are part of the Anti-Money Laundering Regulations (AMLR) and mainly affect companies that handle crypto transactions, such as exchanges.

Under these new regulations, companies must now do more thorough checks on their customers and monitor any suspicious activities. They’ll report these to a new regulatory body called the Authority for Anti-Money Laundering and Countering the Financing of Terrorism (AMLA).

According to the authorities, this step will prevent crimes such as money laundering and terrorism financing through crypto transactions.

Central to these regulations is the EU’s Markets in Crypto Assets (MiCA) framework, which will be fully enforced by the end of this year.

UK Plans New Framework for Crypto and Stablecoins

Over in the UK, the government is working on new guidelines for cryptocurrencies and stablecoins, expected to be introduced by July. Their reported aim is to foster innovation while ensuring consumer protection.

Bim Afolami, the economic secretary to the Treasury, highlighted this at the Innovate Finance Global Summit 2024, stressing the importance of the UK staying competitive in financial technology. The upcoming regulations will cover various aspects of crypto operations, including trading and managing digital assets.

“Once it goes live, a whole host of crypto asset activities, including operating an exchange, taking custody of customer assets and other things, will come within the regulatory perimeter for the first time.”
- Bim Afolami, economic secretary to the Treasury

This move comes as part of a broader effort to modernize the UK’s financial system. Authorities are also set to get more power to directly access crypto assets in cases of suspected illegal activities.


Although the UK’s crypto community and industry at large welcome the plan to roll out new crypto regulations by June/July this year, they are also worried about its possible impact on the broader ecosystem.

Hence, the authorities must ensure that crypto users aren’t burdened with overly stringent measures that could stifle innovation and growth in the sector. All stakeholders must ensure that user experience and security remain intact despite the new regulatory measures in place.

Interestingly, here’s how the UK and the EU compare in terms of their approach to crypto regulations:

Overall, the new developments in Europe and the UK demonstrate their effort to keep pace with the fast-evolving world of cryptocurrency. While focusing on security and transparency, these regulations also show an understanding of the need to adapt to the ever-changing digital landscape, ensuring that the crypto industry can continue to grow and evolve.

Interesting Reads

Guide to FATF Travel Rule Compliance in Mexico

Guide to FATF Travel Rule Compliance in Indonesia

Guide to FATF Travel Rule Compliance in Canada

The Visual Guide on Global Crypto Regulatory Outlook 2024

Almost 70% of all FATF-Assessed Countries Have Implemented the Crypto Travel Rule

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Veriscope Regulatory Recap — 16th April to 30th April 2024 was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Microsoft Entra (Azure AD) Blog

Announcing General Availability of Microsoft Entra External ID


I'm thrilled to announce that Microsoft Entra External ID, our next-generation, developer-friendly customer identity access management (CIAM) solution will be generally available starting May 15th. Whether you're building applications for partners, business customers or consumers, External ID makes secure and customizable CIAM simple. 

 

Microsoft Entra External ID  

 

Secure and customize external identities’ access to applications 

 

 

 

 

Microsoft Entra External ID enables you to:  

 

- Secure all identities with a single solution
- Streamline secure collaboration
- Create frictionless end user experiences
- Accelerate the development of secure applications

 

Secure all identities with a single solution  

 

Managing external identities, including customers, partners, business customers, and their access policies can be complex and costly for admins, especially when managing multiple applications with a growing number of users and evolving security requirements. With External ID, you can consolidate all identity management under the security and reliability of Microsoft Entra. Microsoft Entra provides a unified and consistent experience for managing all identity types, simplifying identity management while reducing costs and complexity.  

 

Building External ID on the same stack as Entra ID allows us to innovate quickly and enables admins to extend the Microsoft Entra capabilities they use to external identities, including our industry-leading adaptive access policies, fraud protection, verifiable credentials, and built-in identity governance. Our launch customers have chosen External ID as their CIAM solution as it allows them to manage all identity types from a single platform: 

 

"Komatsu will be using Entra External ID for all external-facing applications. This will help us deliver a great experience to our customers and ensure we're a trusted partner that is easy to do business with."

- Michael McClanahan, Vice President, Transformation and CIO  

 

 

Industry-leading identity security provides end-to-end access to applications.

 

 

Streamline secure collaboration  

 

Boundaries between consumers and business customers are blurring, as are the boundaries between partners and employees. Collaborating with external users like business customers and partners can be challenging; they need access to the right internal resources to do their work, but that access must be removed when it's no longer needed to reduce security risks and safeguard internal data. In this changing world, even trusted collaboration needs least-privilege safeguards, strong governance, and pervasive branding. With ID Governance for External ID, the same lifecycle management and access management capabilities for employees can be leveraged for business guests as well. Guest governance capabilities complement External ID B2B collaboration that’s already widely used by Entra customers worldwide to  make collaboration secure and seamless.  

 

For example, you may want to collaborate with an external marketing agency on a new campaign. With B2B collaboration, you can invite the agency staff to join your tenant as guests and assign them access to the relevant resources, such as a Teams channel for communication, a SharePoint site for project management, and a OneDrive folder for file sharing.  Cross-tenant access settings allow you to have granular controls over which users from specific external organizations get access to your resources, as well as control which external organizations your users access.  ID Governance for External ID will automatically review and revoke their access after a period of inactivity or when the project is completed. This way, you can seamlessly collaborate while ensuring only authorized external users have access to internal resources and data. 

 

Control what resources external collaborators can access with cross-tenant access settings.

 

 

Create frictionless end user experiences 

 

Personalized and flexible user experiences are critical to drive customer adoption and retention. External ID lets you reduce end-user friction at sign in by natively integrating secure authentication experiences into your web and mobile apps. You can leverage a variety of authentication options, such as social identities like Google, Facebook, local or federated accounts, and even verifiable credentials to make it easy for your end users to sign-up/sign-in. External ID enables you to immerse end-users in your brand and create engaging user-centric experiences with progressive profiling, increasing end-user satisfaction and driving brand love. 

 

Design secure, intuitive, and frictionless sign-up and sign-in user journeys that immerse external identities in your brand.

 

 

External ID allows you to further personalize and optimize end-user experiences by collecting and analyzing end-user data, improving their user journey while complying with privacy regulations. Our user insight dashboards help monitor user activities and sign-up/sign-in trends, so that you can assess and improve your end-user experience strategy with data.  

 

Accelerate the development of secure applications 

 

Identity is a foundational building block of any modern application, but many developers may have little experience integrating identity and security into their apps. External ID turns your developers into identity pros by making it easy to integrate identity into web and mobile applications with a few clicks. Developers can get started creating their first application in minutes either directly from the Microsoft Entra portal or within their developer tools such as Visual Studio Code. We recently announced that our Native Authentication now supports Android and iOS, allowing developers to build pixel-perfect sign-up and sign-in journeys into mobile apps using either our API or the Microsoft Authentication Library (MSAL): 

 

“A mobile app sign in journey could have taken us months to design and build, but with Microsoft Entra External ID Native Auth, it took the team just one week to build a functionally comparable and even more secure solution.”

– Gary McLellan, Head of Engineering Frameworks and Core Mobile Apps, Virgin Money 

 

Our Developer Center is a great starting point for developers to find quick start guides, demos, blogs, and more showcasing how to build secure user flows into apps.
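As a small illustration of how little code a basic sign-in flow needs, here is a minimal sketch for a single-page web app using the MSAL library for browsers (@azure/msal-browser). The client ID and External ID tenant authority are placeholders and the exact authority format should be taken from the External ID documentation; native Android and iOS apps would instead use the platform MSAL SDKs or the Native Authentication API mentioned above.

```typescript
// Minimal sketch: signing a user in from a single-page app with MSAL.js.
// The clientId and authority below are placeholders for an External ID tenant.
import { PublicClientApplication } from "@azure/msal-browser";

const pca = new PublicClientApplication({
  auth: {
    clientId: "<your-app-client-id>",                 // placeholder
    authority: "https://<your-tenant>.ciamlogin.com/", // placeholder authority
  },
});

async function signIn(): Promise<void> {
  await pca.initialize(); // required before other MSAL.js v3+ calls
  const result = await pca.loginPopup({ scopes: ["openid", "profile"] });
  // The returned account and ID token identify the signed-in external user.
  console.log("Signed in as", result.account?.username);
}
```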

 

 

Backed by the reliability and resilience of Microsoft Entra, developers can launch from a globally distributed architecture designed to accommodate the needs of growing user bases, ensuring their external-facing apps can handle millions of users during peak periods without disrupting end-user experiences or compromising security. 

 

Try it out!  

 

We are currently offering an extended free trial for all features until July 1, 2024!* Start securing your external-facing applications today with Microsoft Entra External ID. 

 

After July 1st, you can still get started for free and only pay for what you use as your business grows. Microsoft Entra External ID’s core offer is free for the first 50,000 monthly active users (MAU), with additional active users at $0.03 USD per MAU (with a launch discounted price of $0.01625 USD per MAU until May 2025). Learn more about External ID pricing and add-ons in our FAQ.  

 

*Existing subscriptions to Azure AD B2C or B2B collaboration under an Azure AD External Identities P1/P2 SKU remain valid and no migration is necessary – we will communicate upgrade options once they are available. For multi-tenant organizations, identities whose UserType is external member will not be counted as part of the External ID MAU. Learn more. 

 

Learn More  

Want to learn more about External ID? Check out these resources:  

 

Website | Documentation | Developer Center

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


Elliptic

Our new research: Enhancing blockchain analytics through AI

Elliptic researchers have made advances in the use of AI to detect money laundering in Bitcoin. A new paper describing this work is co-authored with researchers from the MIT-IBM Watson AI Lab.

A deep learning model is used to successfully identify proceeds of crime deposited at a crypto exchange, new money laundering transaction patterns and previously-unknown illicit wallets. These outputs are already being used to enhance Elliptic’s products.

Elliptic has also made the underlying data publicly available. Containing over 200 million transactions, it will enable the wider community to develop new AI techniques for the detection of illicit cryptocurrency activity.

Ontology

Ontology Monthly Report — April


April has been a whirlwind of activities and accomplishments for Ontology. F