Last Update 5:51 AM July 30, 2021 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Friday, 30. July 2021

Affinidi

Web 2.0 vs Web 3.0 — A Bridge Between the Past and the Future

Web 2.0 represents the Internet as we know it today and includes all the blogs, social media sites, shopping, news generation, and more!

It is marked by user-generated content, interoperability across different services, usability, interactivity, and high levels of participation. While this may seem like a huge leap from the static pages of web 1.0, in reality, there has been little to no change to the core definition between the two versions.

What has really changed is the way we use the existing infrastructure, and from this standpoint, it’s safe to say that it’s really the front-end that has seen the bulk of the changes in web 2.0.

Salient Features of Web 2.0

In Web 2.0, users can

- Classify and sort information
- Create and develop APIs for interoperability across different software
- Create and share dynamic and responsive content with others
- Send and receive information from different sources
- Access content from mobile devices, multimedia consoles, televisions, and more

From the above features, we can say that the pillars of Web 2.0 are mobile technology, social media, and the cloud.

Will these pillars continue their dominance in web 3.0?

Unlikely, because these technologies lack built-in security and authorization mechanisms, and so fail to create a sense of trust among the entities participating in an interaction.

As users and technologies matured, a greater need emerged for trust, security, privacy, and control. This need led to the evolution of web 3.0.

What’s Web 3.0?

Web 3.0 is truly a big leap from web 2.0 as the backend and the infrastructure are going through a transformation. Also known as the Semantic Web, this generation of the Internet uses an advanced metadata system that structures and arranges all kinds of data in such a way that it’s readable by both machines and humans.
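
To make the “readable by both machines and humans” point concrete, here is a purely illustrative sketch of Semantic Web-style metadata: the same product information a page shows to people, restated in machine-readable schema.org/JSON-LD form (expressed here as a JavaScript object; all values are invented for the example).

// Illustrative only: schema.org/JSON-LD metadata of the kind the Semantic Web
// relies on, so software agents can parse the structure directly.
const productMetadata = {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: 'Example Running Shoe',
  brand: { '@type': 'Brand', name: 'ExampleBrand' },
  offers: { '@type': 'Offer', price: '79.99', priceCurrency: 'USD' }
}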

Probably the biggest advantage of web 3.0 is that the information will be universal and can be found by anyone, which means no more digging through content for hours to find what you want.

Now, you might wonder how it overcomes the drawbacks of web 2.0.

Well, the pillars of web 3.0 are artificial intelligence and decentralized networks. The use of artificial intelligence enables machine-to-machine interaction, advanced analytics, and other smart operations that were hitherto impossible on the web.

As for decentralized networks, they push data to the edges and into the hands of the entities that own it. In the process, they empower entities to own their data and determine how it can be shared, thereby giving rise to a philosophy called Self-Sovereign Identity.

These networks also give privacy and security to users through encryption and the use of Distributed Ledger Technologies (DLT), thereby overcoming the trust barriers that were present in web 2.0.

Salient Features of Web 3.0

Here’s a quick glance at the features of Web 3.0

- The Semantic Web can understand the meaning of words, so content can be easily found, shared, and analyzed by both machines and humans
- Uses artificial intelligence to provide relevant results quickly and give insights at speeds that are impossible for humans to match
- Has the capability to leverage the power of 3D graphics and visuals
- Protects user identity and data through advanced authorization mechanisms such as encryption and DLTs
- Delivers high levels of security and privacy

Below is a bird’s eye view of the differences between Web 2.0 and Web 3.0.

Web 2.0 vs Web 3.0 — A Quick Glance

In all, web 3.0 is a huge leap forward as it creates the infrastructure needed for humans and machines to interact, create, find, and share distributed data, make accurate predictions with artificial intelligence, and be empowered to control one’s identity through a web of trust, security, and privacy.

Affinidi provides building blocks for an open and interoperable Self-Sovereign Identity ecosystem, enabling the robust growth of Web 3.0.

To learn more about what Affinidi does, follow us on LinkedIn, Facebook, or Twitter. You can also join our mailing list to stay on top of interesting developments in this space.

Reach out to us on Discord or email us if you want to build VC-based applications using our tech stack.

The information material contained in this article is for general information and educational purposes only. It is not intended to constitute legal or other professional advice.

Web 2.0 vs Web 3.0 — A Bridge Between the Past and the Future was originally published in Affinidi on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 29. July 2021

FindBiometrics

BioIntelliSense Raises $45M Thanks to Excitement Over Health-tracking Wearables

BioIntelliSense has raised $45 million in a Series B funding round. The financing round, which the company described in a statement as “significantly oversubscribed”, was led by Chimera, and featured contributions from 7wire Technology Partners, Pendrell Corporation, Royal Philips, and Fresenius Medical Care North America, as well as the individual investors Mary Tolan and James Murren.

The capital raise helps to illustrate the excitement over BioIntelliSense’s wearable biometric solutions aimed at health monitoring and the collection of clinical data. One of those solutions, the BioSticker, received FDA clearance for commercial sale at the start of last year; and the company went on to launch a wearable sensor called the BioButton that can monitor health signals including heart rate, respiration, and temperature, in the spring of 2020.

In addition to these pieces of wearable hardware, BioIntelliSense offers a “data-as-a-service” health monitoring platform and a 5G, cloud-connected device for automated health data uploading.

Commenting on 7wireVentures’s investment in the company, Managing Partner Lee Shapiro gestured to positive market dynamics in the sector that BioIntelliSense is targeting.

“The Remote Patient Monitoring market is in hyper-growth mode with projections reaching USD $117.1 billion by 2025,” he said. “BioIntelliSense is poised to grow its footprint with provider and payer organizations seeking to accelerate and operationalize their virtual care programs by deploying effortless medical-grade monitoring and advanced analytics that are built for scale.”

Chimera (UAE) Chairman Syed Basar Shueb added that the COVID-19 pandemic, together with the prevalence of chronic disease, “has resulted in exponential global demand for remote care technologies that provide better patient care at a fraction of the cost,” adding that BioIntelliSense is “uniquely positioned” to address the trend.

The funding round’s announcement comes just a couple of weeks after the news of BioIntelliSense’s partnership with AirStrip, which agreed to integrate the BioSticker and BioButton into its AirStrip ONE remote care platform.

(Originally posted on Mobile ID World)

The post BioIntelliSense Raises $45M Thanks to Excitement Over Health-tracking Wearables appeared first on FindBiometrics.


Ukraine Will Collect Biometric Data From Visa Applicants

Ukraine will soon begin collecting the biometric data of those who apply for a visa to enter the country. The new policy will go into effect on January 1, 2022, and will make the application process consistent with the rest of Ukraine’s identification procedures for foreign nationals.

In that regard, Ukraine already captures the biometric data of foreign nationals and stateless individuals at border checkpoints, and uses that data again when a foreign visitor registers for a residence permit. The visa application was the only stage of the process that did not use biometrics of any kind, though that will obviously change under the new policy.

Moving forward, those who apply for a visa will need to scan their face and all ten fingerprints when they submit their paperwork. That biometric data will expedite the screening process when those people arrive in Ukraine, and when they need to be identified within its borders.

According to Ukraine’s Ministry of Foreign Affairs, the new policy will make it easier for government bodies to coordinate with one another, and help guard against the use of forged documents in the immigration process. All biometric data captured through the immigration program will be stored in a visa and telecommunications database, and then deleted after a relatively brief five-year period. That five-year window distinguishes Ukraine’s policy from that of countries like the United States, which store foreign biometric information forever.

Ukraine’s biometric data collection policy does include a few exceptions. For example, it will not apply to children under 12, those with unique diplomatic passports, or those with disabilities that make it impossible to collect certain kinds of biometric data.

Ukraine started collecting biometric data at the border back in 2018, following a push from the National Security and Defense Council that kicked off one year prior. The country has also been issuing biometric passports for several years.

Source: Interfax-Ukraine

July 29, 2021 – by Eric Weiss

The post Ukraine Will Collect Biometric Data From Visa Applicants appeared first on FindBiometrics.


Grand View Research Credits Governments and Digital Transformation for Eventual Recovery of Air Travel Industry

Grand View Research is predicting that the civil aviation industry will eventually recover from COVID-19. The firm noted that domestic and international air travel was down an astonishing 60 percent during the pandemic, partly due to border closures and partly due to decreased demand as people stayed home to avoid catching and spreading the disease.

However, Grand View Research believes that there is hope on the horizon. The firm’s latest report suggests that the civil aviation market will grow at a modest CAGR of 8.9 percent for the next few years, climbing to $1.09 trillion between 2021 and 2028.

In the short term, Grand View Research believes that the domestic air travel market will recover earlier than the international one. That can be attributed to the more lax standards for domestic travel, since people are not crossing international borders and therefore do not need to worry about another country’s lockdown, quarantine, and entry requirements.

The air travel industry is also taking steps to win back the trust of passengers. Airports are now installing biometric screening technology to enable a touchless passenger experience, and are supplementing that with UV disinfection tech to ensure a sterile environment and make people feel more comfortable when using airport infrastructure.

In the meantime, governments are injecting cash into the airline industry to make sure that it survives through the lean period. Despite the dip in customer revenue, several major airlines in the US and the Gulf remain solvent thanks to the support of their respective governments. That support has been similarly crucial in the Asia Pacific region, which is expected to recover more quickly than other regions around the world.

The Grand View Research findings echo those of other industry stakeholders. SITA, for instance, has predicted that the air travel industry will suffer a $47 billion loss in 2021, but believes that it will be able to recover through an aggressive digital transformation plan.

July 29, 2021 – by Eric Weiss 

The post Grand View Research Credits Governments and Digital Transformation for Eventual Recovery of Air Travel Industry appeared first on FindBiometrics.


Aware Displays Dramatic Year-Over-Year Improvement in Q2 Report

Aware continued on its upward trajectory in the second quarter of the year. The company brought in $4.3 million in revenue in the quarter that ended on June 30. That number is comparable to the $4.4 million that the company collected in Q1, but more than double the $1.9 million that it generated in the second quarter of 2020.

By the same token, Aware’s losses for the quarter were only $1.5 million, compared to a $3.1 million loss for the same period in 2020. Revenues for the full six-month period now sit at $8.7 million, which is a considerable improvement over last year’s $5.4 million.

Aware credited those results to the success of its subscription-based business model, which has produced better returns as the volume of transactions has increased. In that regard, Aware has now processed more than 18 million transactions in the first six months of the year, a figure that already eclipses the 11 million that it fielded in all of 2020.

Other highlights for the quarter include the integration of the AFIX product line, and major partnerships with Iris ID and Imprivata. The former is working with Aware to deliver a solution that is compatible with the FBI’s Next Generation Identification Iris Service, while the latter is using Aware’s Knomi authentication tech to onboard users of its own electronic prescriptions for controlled substances (EPCS) platform.

“We’ve taken meaningful steps to align our organization with our ongoing transformation, which we believe will enable us to continue making key internal investments while maintaining a robust balance sheet,” said Aware CEO Bob Eckel. “As we grow our recurring subscription-based revenues, we are making significant progress on rolling out additional new core business offerings by year end. We expect these offerings to open additional channels, enabling us to market a wide array of applications to end users of all sizes and capabilities.”

Aware has advocated for the use of biometric technologies in a number of different sectors. Examples include visitor management and online tax filing systems.

July 29, 2021 – by Eric Weiss

The post Aware Displays Dramatic Year-Over-Year Improvement in Q2 Report appeared first on FindBiometrics.


1Kosmos BlockID

Top Identity Verification Systems [Solutions & Software]

Looking for the best identity verification system? We've done the heavy lifting and found top software solutions to make your decision easy.


FindBiometrics

CyberLink Posts Top Three Score in ICCV Anti-Spoofing Challenge

CyberLink’s FaceMe facial recognition engine has posted a top-three score in the anti-spoofing challenge at this year’s International Conference on Computer Vision (ICCV). The event was sponsored by ICCV and the Institute of Electrical and Electronics Engineers (IEEE), and was set up to determine whether or not the world’s leading facial recognition algorithms can distinguish real faces from high-quality masks.

This year’s competition was open to academics, researchers, and commercial vendors from all over the world. One hundred ninety-five teams enlisted during the development stage, though only 56 had solutions that were deemed good enough to make it to the final stage. Of those, only 18 cleared the minimum criteria laid out by the ICCV.

FaceMe made it to the top three with an anti-spoofing accuracy rate of 96.8 percent, a figure that was only 0.16 percent lower than the top-scoring algorithm. That number corresponds to a 3.215 percent average error rate.

Prior iterations of the face spoofing challenge have placed a greater emphasis on 2D facial recognition systems, and measured their ability to thwart video replay attacks. The 2021 ICCV challenge specifically looked at 3D systems, with a particular focus on high-fidelity masks. CyberLink noted that that is a category that many systems have struggled with in the past.

“With the increasing use of facial recognition, the risk for spoofing-attacks rises,” said CyberLink CEO Jau Huang. “Making facial recognition more reliable and secure is one of the top priorities for the providers of this technology.”

FaceMe has consistently improved its standing in multiple rounds of NIST testing, culminating in a top-six score back in January. The engine has been integrated into several access control terminals in the past few months, including Vypin’s eScreener kiosks and ACE Bioteck’s TC-800 Wallie Screen Health Screening System. The technology has also been deployed in ASUS’ Tinker Board 2 SBC to support a range of IoT applications.

July 29, 2021 – by Eric Weiss

The post CyberLink Posts Top Three Score in ICCV Anti-Spoofing Challenge appeared first on FindBiometrics.


iProov H1 Growth Shows Importance of Liveness Detection Amid Selfie Biometrics Boom

iProov is the latest company attesting to the rising popularity of selfie-based identity solutions in its latest quarterly report.

The privately held company has revealed that in the first half of 2021, it saw a 15-fold increase in the number of end users verified using its biometric technology. With respect to its Genuine Presence Assurance technology, verifications grew at a rate of 25 percent per month.

iProov’s flagship solution is designed to ensure that an end user is in fact present during an authentication session. Genuine Presence Assurance was embraced by the Australian Taxation Office earlier this year, with the government agency integrating it into its myGovID e-services platform; and it was also integrated into Jumio’s high-profile KYX remote onboarding platform, along with its Liveness Assurance solution, to shore up its face-based identity verification capabilities.

In its H1 update, iProov also noted growth on an internal, corporate level. The company has been profitable in the first half of 2021, and increased its staff roster to 85 – a boost of 25 percent. The company also welcomed former Cisco CSO Paul King to its advisory board.

The company’s growth is intertwined with that of selfie-based onboarding, which has skyrocketed in popularity over the past few years, especially in the wake of a pandemic that has accelerated digitization trends. And the growth testifies to the value of liveness detection technologies that are designed to catch the signs of fraudulent imposters during the biometric authentication process – value that is increasingly being recognized in the commercial market.

“In the first half of 2021, iProov has secured users’ identities online and protected major organizations from fraud on a vast scale across the globe,” said iProov CEO Andrew Bud. “The dramatic acceleration in the digital identity market, caused by the pandemic, has demonstrated our market leadership based on our unrivaled usability, inclusiveness and resilient security, and the extraordinary scalability of our platform.”

July 29, 2021 – by Alex Perala

The post iProov H1 Growth Shows Importance of Liveness Detection Amid Selfie Biometrics Boom appeared first on FindBiometrics.


Holo

Holo Hosted Elemental Chat in a Browser

Elemental Chat Demonstrates new P2P Possibilities

We have reached an important point for Holo as we progress towards the release of the hosting platform. The Pre-Release test of Hosted Elemental Chat continues successfully with more than 200 peers chatting across the world. Messages arrive quickly and participants report a real-time chat experience using the app. Elemental Chat is testing the end-to-end infrastructure of a peer-to-peer Holochain application being hosted by the peer-to-peer Holo hosting network. This demonstrates potentially massive new possibilities on the web. Why do we say that?

Current stats visible to all users in Elemental Chat

What this means is that each user, when they create their credentials, creates the cryptographic keys for their instance of the application. Holo cannot write chats on their behalf — ever. Holo cannot remove an entry for a user — ever. The only thing Holo can do, as the infrastructure provider that bridges the connection to the centralised Internet, is prevent an application from being hosted on the platform. Applications of course have their own sets of rules and the creators and developers of those applications will design those.
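
As a rough illustration of why that guarantee holds (this is not Holochain’s actual key management, just a sketch of the idea using Node’s built-in crypto): the signing key is generated and kept on the user’s side, so an infrastructure provider that never sees it cannot author or alter entries on the user’s behalf.

// Sketch only: keys are generated where the user is, never by the host.
const { generateKeyPairSync, sign, verify } = require('crypto')

// Created locally when the user sets up their credentials
const { publicKey, privateKey } = generateKeyPairSync('ed25519')

// Only the holder of privateKey can author (sign) a chat entry
const entry = Buffer.from('hello from my own agent')
const signature = sign(null, entry, privateKey)

// Anyone, including a host that only ever sees publicKey, can verify authorship
console.log(verify(null, entry, publicKey, signature)) // true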

I’ve asked a few team members to share about what this means — just to give everyone a sense of how important these results really are.

“When I use elemental chat knowing my friends and all of humanity is hosting the service I’m using, the internet feels human and fresh all over again.” — David Atkinson

“We’ve proven that the Holo hosting product, which is a complex system of distributed hApps and centralised infrastructure and which underpins the Holo ecosystem and HoloFuel, works.” — Alastair Ong

“This release is the first release where we’re delivering value to individual web users. Now, ANYONE can participate in Hosted Elemental Chat, you don’t have to be a host.” — Rob Lyon

“Hosted EC is a seed of digital agency using a browser and a basic internet connection.” — Lisa Jetton

Each of us on the Holo team has different personal reasons, drives, and perspectives as to what Holochain and Holo enable in the world and why it matters. Arthur has shared much about this in his blog series on unclosable carriers and the future of communication. Many on the team see Holochain as a distributed infrastructure that can digitally support the distributed coordination required to face and overcome the most pressing global and local challenges of governance and collaboration that lie ahead.

Some of us are excited to enable, at a structural level, the choice about how and where data is stored and who has access to it. Others in our team are concerned about the lack of transparency and the coercive tendencies associated with providers of mainstream Internet services, and are delighted that developers can now replace those apps on Holochain.

We see the positive test results from Hosted Elemental Chat as a beacon of what is to come.

So What’s Next

- We launched an update to the Holo.Host website. Surprise! We are ready for people to pre-register to try Elemental Chat in a web browser.
- We are continuing the pre-release test and plan to add even more users in the coming week.
- We are releasing a new version of Holochain for developers this week.
- We are testing read-only functionality on Holo so that folks who do not want to login to chat can have visibility to the prototype.
- We are working to get the HoloPort Alpha Program ready for hosts to register so that we have enough ports online to support the open release of Hosted Elemental Chat and so that they can begin accruing HoloFuel.
- We are continuing to work on sharding in Holochain — moving this week to integration testing of this key scalability feature.
- We are working on the new HoloFuel hApp.
- We are testing Elemental Chess on Holo.

This is a busy summer for Holo and Holochain. More soon.

– Mary

Holo Hosted Elemental Chat in a Browser was originally published in HOLO on Medium, where people are continuing the conversation by highlighting and responding to this story.


Coinfirm

Ransomware Trends, Statistics and Policy Responses

Ransomware is the fastest-growing cyber threat and has come into the spotlight in recent months owing to a series of high profile attacks on critical infrastructure. Here Coinfirm shares some of our findings from the monitoring and analysis of tens of thousands of blockchain addresses directly related to ransomware over the last 18 months. January...

auth0

A Complete Guide to Lombok

Let’s see everything you should know to start using Project Lombok. We will have a look at how to integrate it into your IDE and use it to avoid boilerplate code.

Workforce and App Privacy Have Changed, So Should You

Go fast but not alone — our leaders are here to help secure your workforce and app privacy.

Authenteq

The Benefits of Automating Your Employees’ Identity Verification

The post The Benefits of Automating Your Employees’ Identity Verification appeared first on Authenteq.

Elliptic

How Your Bank Can Custody Crypto and Remain Compliant

Recent regulatory guidance has clarified that banks may serve as custodians of virtual assets. In response, the banking industry has rushed to understand how it can seize this new business opportunity, without creating an untenable amount of AML and regulatory risk. 


IBM Blockchain

How blockchain can transform traceability in the automotive space

When big automakers first began investigating the potential use cases for blockchain in their industry, they quickly identified compliance as an area of promise. Because it’s immutable, blockchain can be a very powerful tool for tracing things, from the supply chain for an avocado to the ownership of a non-fungible token (NFT). Because it can […]

The post How blockchain can transform traceability in the automotive space appeared first on Blockchain Pulse: IBM Blockchain Blog.


auth0

Can Your Identity Solution Grow With Your Company?

The tech market in Latin America is growing; Auth0 can help you keep up with a scalable identity solution.

Ocean Protocol

Ocean Makes Multi-Network Even Easier

Simplifying Multi-Network on Ocean with a Unified Interface

🦑 The New Reality
🧜‍♀️ Multi-Network Market
🐋 Multi-Network Aquarius

🦑 The New Reality

When we launched the Ocean Market as part of v3 we had just moved onto ETH Mainnet from our own custom PoA Mainnet, so all focus for the user interface went into working against that one production network. As we deployed the Ocean Protocol contracts to more chains to escape rapidly rising transaction fees, the main interface paradigm of basing the displayed metadata on the user’s connected network quickly became a pain to use. Hello, “Could not retrieve asset.” 🙃

So we sat down and figured out the best patterns to solve these main pain points, focusing solely on the end user perspective:

- Reduce friction when following links to assets outside of ETH Mainnet
- Retain the DID and existing URLs as the unique identifier for an asset, regardless of network
- Increase discoverability of assets outside of ETH Mainnet
- Increase discoverability of all networks Ocean Protocol is deployed to
- Encourage usage of networks beyond just ETH Mainnet
- Reduce need to switch wallet networks as much as possible when browsing the market
- Any possible solution needs to scale easily as we continue deploying to more networks

🧜‍♀️ Multi-Network Market

Leeloo agrees words with “multi” in front are better.

Ultimately, we arrived at a solution tackling all of this, where the main new paradigm is an interface showing assets mixed from multiple networks, all the time and on every screen where assets are listed. This detaches the metadata and financial data source from the user’s wallet network, which is what it was tied to before.

The displayed networks are now controlled by the new network selector.

The new network selector and revised menubar in the Ocean Market interface.

By default, we auto-select all production networks Ocean Protocol is deployed to. As soon as you interact with this new network switcher, your selection takes over and is saved in your browser so it will be the same the next time you come to the market.

Selecting or de-selecting networks then modifies all Elasticsearch queries going to our new Aquarius, resulting in mixed assets on screen.
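
For a sense of what that looks like in practice, here is a minimal sketch (not the market’s actual query builder) of how a saved network selection could be folded into the Elasticsearch query sent to Aquarius, filtering on the asset’s chainId metadata field described later in this post:

// Sketch only: turn the user's selected networks into an Elasticsearch filter.
// The field name follows the ddo.chainId value described in this post; the
// real market query is more involved.
function buildNetworkQuery(selectedChainIds, searchTerm) {
  return {
    query: {
      bool: {
        must: [{ query_string: { query: searchTerm } }],
        filter: [{ terms: { chainId: selectedChainIds } }]
      }
    }
  }
}

// e.g. only ETH Mainnet (1) and Polygon (137) assets matching "ocean"
const esQuery = buildNetworkQuery([1, 137], 'ocean')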

Mixed assets from multiple networks.

All assets now indicate which network they belong to, and you are prompted to switch to the asset’s network when we detect your wallet being connected to another network.

One remaining place where user wallet switching is still important.

And in the case of using MetaMask, we added actions to switch your wallet network directly from the UI, which, as of right now, is pretty much the most streamlined user flow possible to switch networks with MetaMask from a Dapp.
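
For reference, the kind of call this relies on is MetaMask’s public provider API. The snippet below is a generic sketch rather than the market’s own implementation, and the Polygon parameters are only an example:

// Ask MetaMask to add/switch to a network directly from the Dapp UI.
// wallet_addEthereumChain is MetaMask's standard RPC method for this.
async function switchToPolygon() {
  await window.ethereum.request({
    method: 'wallet_addEthereumChain',
    params: [{
      chainId: '0x89', // 137 in hex
      chainName: 'Polygon Mainnet',
      nativeCurrency: { name: 'MATIC', symbol: 'MATIC', decimals: 18 },
      rpcUrls: ['https://polygon-rpc.com'],
      blockExplorerUrls: ['https://polygonscan.com']
    }]
  })
}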

With all this, wallet network switching is now only needed once you want to interact with an asset, like downloading or adding liquidity to its pool.

User wallet network also stays important for publishing an asset, so we based the whole publish form on the currently connected network to define onto which network an asset is published.

Publish form with network indicator.

As for our key market statistics in the footer, we switched it to show consolidated numbers as a sum of all production networks. In its tooltip, you can find the values split up by network.

New consolidated market statistics based on each network.

More assets on screen and more controls also led to further UI tweaks to get more space available to the actual main content. We completely refactored the main menu layout, added a global search box to it, and moved some warnings around. And, while we were at it, improved the mobile experience for it. ✨✨

Everything you need from our menu is all there in mobile viewports

And finally, we also automatically migrate all your existing bookmarks from all the networks and combine them into one list.

Developer Changes

For developers, there are new values in app.config.js controlling the networks which are displayed in the network selection popover:

- chainIdsSupported: List of all supported chainIds. Used to populate the Networks user preferences list.
- chainIds: List of chainIds which metadata cache queries will return by default. This preselects the Networks user preferences.
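
To illustrate, the two values would sit in the config roughly like this (the exact shape of app.config.js in the repository may differ, and the chain IDs shown are just examples):

// app.config.js (sketch; 1 = ETH Mainnet, 137 = Polygon, 3/4 = test networks)
module.exports = {
  // every network the interface can display and query
  chainIdsSupported: [1, 3, 4, 137],
  // networks returned by metadata cache queries by default,
  // i.e. the preselected entries in the network selector
  chainIds: [1, 137]
  // ...rest of the existing configuration
}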

In the background, the code base changed drastically. We now have only one Aquarius but still multiple providers and especially subgraphs, and we had to also technically detach the wallet network from the data source. E.g. for showing prices and financial data, main refactoring work went into correlating the assets based on ddo.chainId with the respective subgraph and querying multiple subgraphs at the same time as needed. For this, we also simplified our GraphQL setup and switched from Apollo Client to urql.
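
A stripped-down sketch of that pattern follows (this is not the market’s actual code; the subgraph URLs, the pool field queried, and the function arguments are assumptions for illustration):

// Pick the subgraph that matches the asset's network, then query it with urql.
import { createClient } from '@urql/core'

// hypothetical chainId -> subgraph endpoint mapping
const subgraphUrls = {
  1: 'https://subgraph.mainnet.example.com/ocean-subgraph',
  137: 'https://subgraph.polygon.example.com/ocean-subgraph'
}

async function getSpotPrice(ddo, poolAddress) {
  // the asset's ddo.chainId, not the wallet network, decides the data source
  const client = createClient({ url: subgraphUrls[ddo.chainId] })
  const query = 'query Pool($id: ID!) { pool(id: $id) { spotPrice } }'
  const result = await client.query(query, { id: poolAddress }).toPromise()
  return result.data?.pool?.spotPrice
}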

If you’re interested in all the detailed code changes, you can follow along with the main Pull Request which has references and screenshots for all other changes done against it. This is also the best place to start if you run your own fork of the market and want to integrate the latest multi-network changes without looking at just one big change in main.

Multinetwork Interface by kremalicious · Pull Request #628 · oceanprotocol/market

Check it out!

Head to market.oceanprotocol.com and see assets currently mixed from 3 networks by default.

Ocean Market

You can find all the values required to connect your wallet to the networks Ocean Protocol is deployed to in our supported networks documentation, along with a guide on how to set up a custom network in MetaMask.

Supported Networks
Set Up MetaMask Wallet

🐋 Multi-Network Aquarius

Aquarius got a complete refactor. Besides numerous optimizations and stabilization, this new Aquarius indexes assets from multiple chains and delivers them all together in its API responses, with a new ddo.chainId value as part of each asset’s metadata.

In addition to making an interface with mixed assets possible, this also brings a huge maintainability advantage as now only one Aquarius instance has to be deployed and maintained instead of one for each supported network.

So multiple Aquarius instances are now reduced to one instance, where for every network a specific indexer is started. The Aquarius API got a new endpoint exposing which chains are indexed under /api/v1/aquarius/chains/list.
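
As a quick check against a running instance, something along these lines lists the indexed chains (the endpoint path is taken from this post; the response shape shown in the comment is an assumption, not documented output):

// List which chains this Aquarius instance indexes
async function getIndexedChains(aquariusBaseUrl) {
  const res = await fetch(aquariusBaseUrl + '/api/v1/aquarius/chains/list')
  return res.json() // e.g. { "1": true, "137": true } (assumed shape)
}

getIndexedChains('https://aquarius.oceanprotocol.com').then(console.log)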

/chains/list endpoint response exposing indexed chain IDs

Migration to Multi-Network Aquarius

Aquarius v3.0.0+ is the one.

If you use our remote Aquarius instances, all you have to do is point your app against the new aquarius.oceanprotocol.com and then in your interface do things based on ddo.chainId, like modifying your Elasticsearch queries to include assets from specific networks.

We will keep the old dedicated network instances like aquarius.mainnet.oceanprotocol.com running until September 1st, 2021, and we encourage everybody to migrate to aquarius.oceanprotocol.com instead.

With one Aquarius indexing multiple chains, it is rarely useful to return all the assets, as most likely you are only interested in production network assets when listing them in an app. So we will also remove the GET /assets/ddo endpoint and suggest replacing it with a specific search query to POST /assets/ddo/query, including the chainId you want, like:

{
  page: 1,
  offset: 1000,
  query: {
    query_string: {
      query: 'chainId:1 AND -isInPurgatory:true'
    }
  }
}
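
For example, a minimal helper posting that query could look like the following (the full /api/v1/aquarius prefix is assumed from the /chains/list path above; adjust to your deployment):

// Send a metadata query to the multi-network Aquarius
async function queryAssets(query) {
  const res = await fetch(
    'https://aquarius.oceanprotocol.com/api/v1/aquarius/assets/ddo/query',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(query)
    }
  )
  return res.json()
}

// only ETH Mainnet assets that are not in purgatory
queryAssets({
  page: 1,
  offset: 1000,
  query: { query_string: { query: 'chainId:1 AND -isInPurgatory:true' } }
}).then((result) => console.log(result))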

If you have your own instances deployed, we suggest deploying a new one with v3.0.0+ to have everything reindexed, and finally switching your URLs in your app to this new deployment and adapting your app interface accordingly. The readme has further information on how exactly to deploy this new Aquarius.

GitHub - oceanprotocol/aquarius: 🐋 Off-chain database store for data assets metadata.

Follow Ocean Protocol on Twitter, Telegram, LinkedIn, Reddit, GitHub & Newsletter for project updates and announcements. And chat directly with other developers on Discord.

Ocean Makes Multi-Network Even Easier was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Meeco

Opening the domestic card rails to innovation

Earlier this year, eftpos, in collaboration with FinTech Australia, established the eftpos FinTech Advisory Committee. The Committee was set up as a way of giving Australian FinTechs a direct avenue to discuss how they partner and collaborate to access the eftpos payments network for the betterment of Australia’s digital economy. Over the past five months, ten leading FinTech companies joined the Committee, chaired by Ben Tabell, eftpos’ Chief Information Officer. Together, the Committee collaborated to help create an initial report on how to best leverage the eftpos digital roadmap and API programs, along with a variety of industry topics.
“Enabling Australian FinTechs a direct avenue to discuss how they partner and collaborate to access the eftpos payments network through the Committee resulted in recommendations that covered a number of central themes, including consultation and engagement, regulation, and technology and solutions.”

Ben Tabell, eftpos Chief Information Officer and Committee Chair
Meeco is honoured to have been one of the companies invited to join the Committee alongside Assembly Payments, Bleu, Monoova, Sniip, Verrency, Ezypay, Azupay, POLi and Paypa Plane. The aim of the Committee is to advise eftpos on ways the company can build on its efforts to make it easier for FinTechs to access the eftpos network, products and services. The focus is to enable FinTechs to build experiences that can work across a broad range of connected devices in the digital economy.

eftpos has now released the report in collaboration with FinTech Australia, delivering ten recommendations on how Australian fintechs can best leverage the eftpos digital roadmap and API programs. Of the recommendations in the report, Meeco is especially interested in the inclusion of data as the new currency, mobile wallets and digital identity. These map directly to the work Meeco has had the privilege to explore and validate together with eftpos over the past year. This includes a micropayments Proof-of-Technology using Meeco’s multipurpose wallet decentralised on Hedera and a pilot that is now underway with eftpos’ identity broker solution, connectID, for credentials verification as part of employee onboarding.

We’re delighted that this work with eftpos and Hedera Hashgraph has resulted in us being selected as a FinTech Australia Finnies finalist in the “Excellence in Blockchain/Distributed Ledger” category. The Finnies event and announcement of winners has been delayed to September, due to the rolling COVID restrictions in Australia.

The eftpos FinTech Advisory Committee report, Innovating on the domestic card rails, is now available to download.

Read the report

We would like to thank eftpos and FinTech Australia for the opportunity to have contributed to the Committee and the report. We hope you find it interesting and useful.

The post Opening the domestic card rails to innovation appeared first on The Meeco Blog.


KuppingerCole

EIC Speaker Spotlight: Peter Busch on Trust as the Key Concept

by Raj Hegde

Peter Busch, Product Owner Distributed Ledger Technologies Mobility at Robert Bosch Group, is to deliver a presentation entitled Trust as the Key Concept in Future Mobility on Tuesday, September 14, starting at 3:30 pm at EIC 2021.

To give you a sneak preview of what to expect, we asked Peter some questions about his planned presentation.


What are the problems of the identity's status quo today?

Yeah, the identity status quo today. This is a tricky question because what we see these days is we get more and more in this area where we need identity, where you need to authenticate yourself in a more deeply- and highly-connected environment. And since one of our main [areas of] focus or visions for Bosch is the Internet of Things [IoT]. And we see a more and more connected car, connected to the infrastructure, connected to the smart city, [and connected] to anything else, even to the streets and the crossroads now. There you need to have a particular idea where you can say, okay, you need to upload or download data from the car, and then the infrastructure or the car needs exactly to know who does that. Where the car data comes from and who you are.

And so today the ID is getting more and more important because without an ID, you have not the chance to do anything when it comes to smart traffic, smart city or anything like that, or anything with dealing with the IoT. So therefore it's [important]. And the problem that I see personally as well is [that] really these days, you only have these federated systems of ID. You get your ID on the internet via Apple, Google, Amazon, or any other proprietary systems. And so, if you have really, really safety critical stuff, I do not really trust these other systems when it comes to highly critical data which needs to be transmitted. And if my ID is really stored somewhere [on someone] else's computer or server or any backend, this could be getting a bigger and bigger problem in the future.


How can decentralized identity solve pressing identity issues? Why is this particularly relevant to the mobility industry?

Yeah, exactly. I think we definitely can shoot at these problems - what I just said, because when we talk about decentralized identity, you do not have these intermediaries in the background again anymore. So, what we have here is decentralized. We have another system of governance. We have another system of control. So, for one particular technical system like self-sovereign identity, for example, you are really self-sovereign of your data. So, you [are] controlling your own ID, your own identity, and you control what kind of data you are giving to the other side when you talk to anyone in the internet or Internet of Things. And so, when you talk about mobility, and you go into your car, not only do you yourself need a digital identity, the car itself - as a device - needs an identity as well.

And if in the future, if you think about [car] charging, for example, parking or zoning in a smart city, or anything like that, then you always – what I said before - you need to authenticate or authorize yourself to do anything and as well, maybe to buy things or to sell things, data in the car. And then absolutely it's so important that you can control what you are doing exactly with your data. Therefore, decentralized systems are quite important. And my hypothesis here, is it won't go without it in the future. Of course, Amazon, Google, or the likes can tell you, okay, trust us, the data is safe, and we can do it on premise and all that stuff, but on really the safety critical things, I do not trust anyone. I do only trust myself in these cases because when it goes about the life of the driver, for example, the safety of one, then we have to trust the person themselves, and that is possible really only with decentralized systems.


When will decentralized identity become a reality?

If you would have asked me like a year ago, I would have said maybe like five or 10 years. We are very far from it because automated driving, as well, needs a longer time. However, things have changed because we are going now in a phase where the governments are making a lot of pressure to really enforce the industry to use these digital identities. And especially there's one very interesting development at the moment that the German government, for example, they are pressing on using self-sovereign identity.

And so, with that, the time is getting faster and faster, or the development time is getting faster and faster. And I expect now to be able to use these systems, like in one, two years already. And together with this, we are in a huge collaboration mode with Gaia X in Europe at the moment. And so this helps again, because we have all the industry groups, a lot of startups, [and] a lot of universities coming together, working on these problems in a very fast mode. And so this helps us maybe in having [within] two or three years’ time, really scalable and usable systems out there.


Could you share some insights on your presentation at EIC 2021?

Right, I already started it, featured it a little bit. Trust is really the key concept because when you talk about, maybe, automated driving, then you need to trust the infrastructure. You need to trust the data which comes into the car because the car has to take some decisions. Where to drive, how to drive and as well, the driver gives some very trustworthy data to the car and to the infrastructure. And that is in a lot of different use cases. You need that trust relationship, and you need technical systems that help you with that. And so there are some use cases I would really talk about a little bit deeper and try to explain what is the situation today. How would we solve this today, and how would we solve that problem or challenge with these new technologies we are seeing upcoming.


Ontology

The Healthcare Industry Has Been Ravaged by Data Breaches and Should Expect More, but Blockchain…

The Healthcare Industry Has Been Ravaged by Data Breaches and Should Expect More, but Blockchain Can Help

By Li Jun, Founder of Ontology, the high performance, public blockchain specializing in decentralized identity and data

In May of this year, the Republic of Ireland’s Health Service Executive (HSE) suffered a “catastrophic” cyberattack. In addition to directly causing the number of hospital appointments in some areas of the system to drop by 80%, the hackers also stole and shared sensitive medical and personal information of patients online as well as details of correspondence with patients. While three quarters of the HSE’s servers have been unlocked since the attack, the HSE’s CEO recently said that it will still be “many more weeks” before health services return to normal.

Meanwhile, in the UK, statistics from the Information Commissioner’s Office (ICO) show that 3,557 personal data breaches were reported across the health sector, the majority within the NHS, in a two year period to the end of March this year. Not all data breaches are reported, so the total is likely to be much higher. The private health data of thousands of people was shared with strangers, with one patient even having strangers turning up at her door to let her know her private details, including her home address, had been mistakenly sent to other patients. Malicious actors have the opportunity to blackmail and disrupt the system when they gain access to sensitive health information, which is arguably the lifeblood of our societies’ public health.

Data breaches and the potential for data breaches such as this are a perfect example of the dangers of storing sensitive identity and health data on centralized systems. While the continued rise of digital services has made maintaining a secure digital identity, a trusted digital format of a company or person’s identity, increasingly important, unfortunately, many healthcare facilities and systems still use centralized systems such as Excel spreadsheets on centralized computers that are easy for malicious actors to hack — or even pen and paper — to house patient data. In order for health data to be stored securely, it should be stored on decentralized systems built on blockchain technology that are encrypted and secure.

Digital transformation, accelerated by the pandemic in areas ranging from cloud computing, big data, and ecommerce to, perhaps most importantly, health, is continuing to pick up pace. In healthcare, there is enormous potential for digital transformation and the increasing use of health data to improve the efficiency of systems and the wellbeing of people worldwide.

In the US, innovation in healthcare and increasing reliance on health data has been compared to the dot-com boom of the late 1990s. Elsewhere, the reliance on data is just as strong: the UK’s Department of Health and Social Care recently published a policy paper titled “Data Saves Lives”. This paper asserted that in the midst of “the greatest public health emergency that this country has tackled for generations”, data was essential in identifying those “who are most vulnerable to coronavirus”, helping the system to protect its citizens. Data was essential in their progress in the fight against Covid-19, powering vital research and analysis. Nonetheless, throughout the pandemic, the UK and other countries were managing huge amounts of personal identity and health data that were highly sensitive and could lead to malicious practice if placed in the wrong hands, on insecure centralized systems that are susceptible to breaches.

Reflecting, Phil Booth, Coordinator at MedConfidential, an organization that campaigns for better confidentiality and security for patient data, summarised the key issue of trust and health data storage when he claimed that the major risk of the GP data programme is that it destroys trust. Booth said that convincing patients and GPs to opt into the programme, which involves increased collection and sharing of health data, “undermines trust”, given the inherent risks involved, whether through mismanagement or from malicious actors like hackers. Reliance on health data and maintaining secure digital identities for healthcare is only going to become more important in the coming years. With this in mind, it is vital that patients can be confident that when they hand over sensitive data it is being stored and managed securely. Unfortunately, centralized systems for health data storage, as recently evidenced by these statistics from the UK and the HSE cyberattack, have inherent security risks. The infrastructure behind the systems is too weak to withstand technologically advanced hackers.

The solution lies in decentralized data storage and management. Decentralized systems that are built on blockchain provide an encrypted and secure means for companies and organizations to store their data. Unlike centralized systems, their immutable format makes them very difficult to infiltrate; they are trustless and essentially don’t require users to put their faith in archaic technology or systems prone to human error. As well as this, patients must invest in decentralized identity solutions that help them manage their personal information. Using decentralized identity platforms, users can take back control of the information they share with institutions, ensuring that security and private data management are at the forefront of every exchange.

As digital transformation in healthcare continues to gain traction, it is likely that data breaches will become more, not less, common. Anticipating this, adopting decentralized solutions for health data that put the control of the data in the hands of the patient, rather than on a centralized database, will reduce both the malicious and unintended dissemination of patient data while ensuring that the true potential of healthcare data and innovation can be realized.

Want more Ontology?

Learn more about our decentralized data and identity solutions on our website and official Twitter account. You can also chat with us on Telegram and keep up-to-date with the latest news via our Telegram Announcement account.

Other Resources

LinkedIn / Medium / Facebook / Reddit / Discord / YouTube

The Healthcare Industry Has Been Ravaged by Data Breaches and Should Expect More, but Blockchain… was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


OWI - State of Identity

Cloudentity: Identity Next

The identity industry is evolving. Once a niche field that very few people understood, digital identity is now a strategic piece in every organization's security architecture. Organizations are successfully breaking away from monolithic, legacy systems, yet one key challenge that remains is building user-centric security solutions. To build centralized authorization in an increasingly decentralized world, identity data needs to be dynamic and easily adaptable — without changing a single line of code. Join host Cameron D'Ambrosi and Director of Strategic Alliances at Cloudentity David Lee as they pave the way for the next realm of "identity 3.0" and break down open data, the adoption of new identity standards, and active governance systems across the organization.

Wednesday, 28. July 2021

FindBiometrics

eCompliance Adopts iDenfy’s Remote Onboarding Technology

iDenfy continues to add new customers to its client roster. The latest addition is eCompliance Ltd., which is itself a subsidiary of Platforma365. eCompliance Ltd. was set up to provide identity verification services for financial institutions and other corporate entities that need to comply with strict Know Your Customer (KYC) regulations.

The partnership with iDenfy will allow eCompliance to offer those services remotely. iDenfy’s platform uses document recognition to confirm the authenticity of an ID, and pairs it with facial recognition to match that document to its owner. The utility will make eCompliance more convenient (and more appealing) to its own end users, since people will not need to meet face-to-face in order to achieve KYC compliance.

In eCompliance’s case, biometric identity verification will help prevent money laundering. The company’s clientele includes corporate lawyers, real estate agents, and cryptocurrency exchanges, all of which are avenues through which terrorists have attempted to move funds in the past. iDenfy’s tech will help organizations vet their customers and mitigate risk before entering into a more serious business arrangement.

“iDenfy is delighted to have the opportunity to work with eCompliance in the field of identity verification,” said iDenfy CEO Domantas Ciulde. “We expect this decision to give both eCompliance and its customers a competitive edge during their digital transformations.”

“We are honored to partner with iDenfy to better serve our clients,” added eCompliance CEO Koichi Murayama. “We will now have the opportunity to effectively transition our clients into the digital era of remote verification and identification.”

eCompliance customers will be able to access and order KYC tools through a centralized account dashboard. iDenfy has previously provided onboarding technology for multiple financial institutions, including the Impily and Emirex cryptocurrency exchanges. The Polish FinTech provider Paymento Financial SA has also partnered with iDenfy to offer face-based identity verification services to other organizations.

July 28, 2021 – by Eric Weiss

The post eCompliance Adopts iDenfy’s Remote Onboarding Technology appeared first on FindBiometrics.


Nuance Provides Gatekeeper Authentication for Wings Credit Union

Nuance Communications will be helping Minnesota’s Wings Financial Credit Union to complete its digital transformation. Wings is the largest credit union in the state, and will be adopting Nuance’s Intelligent […] The post Nuance Provides Gatekeeper Authentication for Wings Credit Union appeared first on FindBiometrics.

Nuance Communications will be helping Minnesota’s Wings Financial Credit Union to complete its digital transformation. Wings is the largest credit union in the state, and will be adopting Nuance’s Intelligent Engagement platform and Gatekeeper solution to deliver better online and call center experiences for its customers.

To that end, the Nuance Intelligent Engagement platform comes with a virtual assistant that provides 24-hour contact center coverage. The platform can answer basic questions, or redirect customers to a live agent during business hours. In doing so, it offers seamless and consistent service across all phone and digital channels that a customer may try to use.

Gatekeeper, meanwhile, will keep the credit union and the customers safe during those interactions. The solution uses voice recognition to verify the identities of customers, and provides a higher level of security than knowledge-based authenticators. The technology will help prevent fraud during voice calls, and streamline sessions for customers who will no longer need to repeat any information from earlier sessions.

According to Nuance, the solutions will allow Wings to keep up with shifting customer demands. The company noted that many people now expect online service options in the wake of the pandemic.

“Our members are spending more time online and predominantly interacting with us via audio and digital channels,” said Wings Financial Retail Delivery VP Matt Vignale. “We’re confident that Nuance’s AI technology can enhance our ability to deliver the same personalized, enjoyable experiences our members are used to receiving when visiting our branches.”

“Wings Financial is part of an innovative group of organizations that recognizes the value of voice biometrics and conversational AI solutions to drive growth and delight customers,” added Nuance EVP and Enterprise Division GM Robert Weideman.

Nuance has previously provided Gatekeeper authentication for Bank Australia and the Industrial Bank of Korea. Wings, meanwhile, recently implemented Access Softek’s EasyVest solution to provide customers with automated wealth management services.

July 28, 2021 – by Eric Weiss

The post Nuance Provides Gatekeeper Authentication for Wings Credit Union appeared first on FindBiometrics.


FPC Tracks Biometric Card Enthusiasm in France

Fingerprint Cards is arguing that France will serve as a bellwether for biometric payment cards. The company noted that France has long been at the forefront of the payments industry, […] The post FPC Tracks Biometric Card Enthusiasm in France appeared first on FindBiometrics.

Fingerprint Cards is arguing that France will serve as a bellwether for biometric payment cards. The company noted that France has long been at the forefront of the payments industry, and was one of the first countries to introduce contactless payments a decade ago.

That pattern is repeating itself with biometric payment cards. BNP Paribas and Crédit Agricole have already completed trials and moved forward with commercial launches, and the early indicators suggest that the technology is rapidly making in-roads with the French public. Fifty-one percent of the population wants a biometric payment card, and FPC says 50 percent would even be willing to pay for the privilege (the comparable figure is 43 percent for the rest of the world). Many would also be willing to switch banks to find one that offers a biometric payment card.

According to FPC, those cards are appealing because they address some of the lingering concerns that people have about non-biometric contactless cards. As it stands, 83 percent of France’s transactions are carried out with payment cards, and 60 percent of those are fully contactless. Both figures are well ahead of the 73 percent and 50 percent averages, respectively, for the rest of the world.

That contactless card usage has only increased during the pandemic, with the number of consumers that use a contactless card as their primary in-store payment option jumping 216 percent in France in the wake of COVID-19. Sixty-four percent, meanwhile, believe that they will use their contactless card more frequently in the future.

However, more conventional contactless cards still have a payment cap, which creates confusion and aggravation at the checkout counter. Many people do not know what their limit is, and the actual French cap is a relatively small €50, which is not high enough for many day-to-day purchases. That means that people still need to touch a payment terminal to enter a PIN, which negates the sanitary and convenience benefits of a contactless solution.

Consumers are also worried about security, and the threat of fraud if a card gets lost or stolen. Biometric cards, on the other hand, can support higher payment caps, and can do so without increasing risk for card holders and payment providers. Both the BNP Paribas and Crédit Agricole cards feature fingerprint technology from FPC.

July 28, 2021 – by Eric Weiss

The post FPC Tracks Biometric Card Enthusiasm in France appeared first on FindBiometrics.


How Health Pass Apps are Transforming Travel and Border Security

The world is starting to re-open, but that doesn’t mean that it will look the same once people start to leave their homes. New habits and technologies have emerged over […] The post How Health Pass Apps are Transforming Travel and Border Security appeared first on FindBiometrics.

The world is starting to re-open, but that doesn’t mean that it will look the same once people start to leave their homes. New habits and technologies have emerged over the course of the past year, and it’s not yet clear what impact that will have on people’s day-to-day lives.

That’s especially true in the travel industry. Screening requirements could be onerous even before the pandemic, and things have only become more complicated now that health has become such an important consideration. Health Pass apps offer a glimpse at what the future might look like, and show how digital technologies are dramatically increasing the amount of personal information that organizations have at their disposal when making a decision.  

One Step at a Time

According to SITA, border security has typically been handled with a defence-in-depth model, which breaks the travel screening process into several stages that begin before the traveler even gets to the airport. The initial screening occurs when someone applies for a visa and receives permission to travel. A second occurs when someone books a flight and the airline shares its Passenger Name Record (PNR) with government officials at its intended destination in the days before departure. The traveler will then be vetted one final time when they check-in at the airport before they board their flight.

At each point, the traveler’s personal information will be cross-referenced with databases, watchlists, and other sources to ensure that they do not present any risk to the country that will be welcoming them. They can be denied entry if they set off any red flags at any point during the booking and travel process, and the government can tell the airline not to let a passenger board if there are any issues with the Advance Passenger Processing (APP) data they receive. The fact that there are multiple screenings means that the host government has multiple opportunities to spot potential troublemakers, and adds redundancy to the process.

The Small Things

Of course, there’s nothing particularly noteworthy about the fact that governments are trying to screen the people crossing their borders. If anything, COVID-19 only demonstrated how priorities can shift over the years. Governments were worried about terrorism in the wake of 9/11. Now they’re more worried about public health, and the threat that comes from an infection carried on an international flight.

The biggest changes occurred at the technological level. COVID-19 prompted digital transformation in the air travel industry as airports and airlines pivoted to mobile screening solutions that did not require any physical contact. That transformation then gave governments the ability to micromanage their border protocols, and made it easier to keep track of more and more personal information.

That felt like a necessity as screening criteria proliferated during the pandemic. Many countries shut down everything but essential travel, but that ‘essential’ designation was not nearly as black-and-white as the name would suggest. For example, some countries still welcomed citizens who lived abroad and were returning home to visit loved ones, even while denying access to foreign nationals.

The result was a hodgepodge of different criteria that took factors like country of origin, health status, citizenship, and professional standing into account. Governments needed a way to consolidate traditional screening categories with several new ones, and to cross-check everything to make sure that each traveler could tick off all of the necessary boxes.

For Your Health

Health Pass applications give border agents the ability to track all of those entry requirements. The apps (including SITA’s own Health Pass) were built during the pandemic to enable people to verify their health status without compromising their privacy. To that end, they allow people to store their vaccination records (or the results of COVID-19 tests) on a mobile device. Individual travelers can then present the app at the border to prove that they are free of COVID-19 without disclosing any other personal information – at least during that in-person interaction.

The catch is that airlines and border agencies already have access to a wealth of personal information, covering everything from name and country of origin through to their facial biometrics (in many cases). Health Pass apps link someone’s health information to their digital identity, and governments are checking both when someone wants to enter their country.

All Aboard

In practice, Health Pass apps give governments the ability to be much more nuanced with their screening criteria. They could welcome tourists from countries with low infection rates, or bar access to unvaccinated citizens from infection hotspots. The tech should help kickstart the economy because it creates safe pathways for tourism and commerce, while still giving administrators the tools they need to prevent coronavirus spread. For instance, governments could change their criteria at a moment’s notice to block travel if there is an unexpected outbreak or some other threat.

The question is to what extent governments will actually use that ability – and whether it will extend beyond the airport. The same app that lets someone get on a plane could be used to determine whether or not they get a seat at a table in a restaurant, and the tech is so intuitive that the customer experience would be roughly the same in both situations.

However, the widespread use of such an app does raise concerns about privacy. In the case of the restaurant, the proprietor might know that their patrons are free of COVID-19, but may not know much else. They are trusting the app to verify the identities of its users, and trusting the testing lab to make sure that each test is assigned to the proper individual. Governments, on the other hand, will likely have access to much more complete digital identity profiles. With so much information, they could theoretically set criteria (both for travel and other programs) that are biased or otherwise invasive in a number of different ways. If they want to realize the efficiency benefits of Health Pass apps, they will need to build trust with the public and guarantee that their information will be used responsibly.

July 28, 2021 – by Eric Weiss

The post How Health Pass Apps are Transforming Travel and Border Security appeared first on FindBiometrics.


auth0

Accelerated Your Road Map? Go Fix Workforce and App Security

Rapid change introduces workforce and app risks. Auth0 Security team’s advice on what to fix now.

Microsoft Identity Standards Blog

Might we Re-charter SCIM? - Find out on July 29

How many of you have looked at the SCIM specifications (IETF RFC 7643 / 7644) and thought “could they be made simpler or clearer”?  Here is your chance to make a difference.  The IETF’s 111th Plenary Meeting is running virtually as we speak, and this Thursday (July 29th) one of the events in the plenary is a

How many of you have looked at the SCIM specifications (IETF RFC 7643 / 7644) and thought “could they be made simpler or clearer”?  Here is your chance to make a difference.  The IETF’s 111th Plenary Meeting is running virtually as we speak, and this Thursday (July 29th) one of the events in the plenary is a “Birds of a Feather” (or BoF) meeting for taking new steps with SCIM (the session identifier is sins).  We hope to convince the IETF Area Directors and the community that there is further work to be done in this area, and no matter what your opinion is, you should bring that opinion to the BoF meeting and be heard!   You don’t have to be a standards person – if you are working with SCIM and just getting stuck, that is important implementer feedback that we want to hear. 

  The Lead Up 

The topic has been discussed in informal bi-weekly meetings for the last two months (we call it the SCIM Interest Group), with strong participation and lots of healthy opinions.  We started by reviewing the many different draft extensions that are out there for SCIM, and a lot of those reviews are available on the SCIM IG Youtube channel if you are interested.  We applied for the BoF meeting to get feedback from a larger audience and to judge whether we have momentum, and now we are going to find out. 

  Microsoft View 

When it comes to SCIM, Microsoft is interested in participating for several simple reasons: 

Operational Clarity 

At the time SCIM was born, the cloud was still new and the possibilities were not known.  The very common assumed implementation pattern was a push model from on-premises to cloud, and this led to assumptions about who would be pushing what data where.  These days however, the combinations of push and pull, client and server, are much more varied. We think that some simple updates in language and better profiling of common implementation roles could make the specification much more intuitive to adopt – thus helping overall interoperability across the identity community and getting us closer to making SCIM a must-have cross-domain interface. 

End to End Automation as a First Order Goal 

The Identity world is now increasingly powered by automation and AI.  Connectivity is no longer just about web single sign-on; the new and growing requirement today is for automated corporate oversight, including governance, provisioning, risk detection and threat intelligence. And we don’t only want to automate the connections, we want to automate the establishment of the connection.  We are hoping to research whether there is additional metadata or even schema that could facilitate a more seamless bootstrapping of various protocols in the multi-cloud world, including protocols like Shared Signals and Events (SSE)/CAEP or FastFed, with a plan to lay the groundwork for the explosion in multi-cloud automation that growing industry verticals like CIEM exemplify. 

Security Best Practice & Multi-cloud Updates 

In the API Security world, a lot has happened since the first SCIM work began. OAuth 2.0 has become the preferred mechanism for protecting APIs and a number of extensions have evolved that increase security during access token presentation. We want the negotiation of industry-best security to be well specified for identity data.
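To make that concrete, here is a minimal, hypothetical sketch (it is not part of any SCIM draft or Microsoft product; the endpoint URL and token value are placeholders) of a client presenting an OAuth 2.0 bearer access token when calling a SCIM Users endpoint:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ScimCallSketch
{
    static async Task Main()
    {
        // Placeholder values: a real client would obtain the token through an
        // OAuth 2.0 flow and use its own provider's SCIM base URL.
        var accessToken = "<access-token-from-oauth-2.0-flow>";
        var usersEndpoint = "https://example.com/scim/v2/Users";

        using var http = new HttpClient();
        // Present the access token as an OAuth 2.0 bearer credential.
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        // SCIM resources (RFC 7643/7644) are exchanged as JSON over ordinary HTTP verbs.
        http.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/scim+json"));

        var response = await http.GetAsync(usersEndpoint);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}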

 

How to Get Involved 

The most important thing to do is to sign up for the IETF Birds of a Feather meeting.   Registration is here: https://registration.ietf.org/111/.   There is no membership requirement, but there is a fee – a day pass costs $125 but there are fee waivers available if that cost is too great. 

You can also join our merry band in the SCIM Interest Group – connection data is on our wiki: Explanation of our goals, how to get involved, and pages on our dedicated work efforts (github.com). We meet every 2 weeks, at two different times of day to encourage global participation. For updates, the easiest way to stay informed is to subscribe to the SCIM mailing list at the IETF.

We hope your excitement about this standards work is as great as ours and we cannot strongly enough encourage your participation – the best specifications are made from diverse input, and the more breadth we have in implementation experience and point of view, the better we will do.  Join us! 


Infocert (IT)

InfoCert and Fido sign a new partnership; DigitalScore will be integrated into TOP

On 20 July 2021, InfoCert signed an important commercial partnership with Fido, the innovative digital credit scoring platform that uses new types of data and artificial intelligence to verify the identity of online consumers and assess their reliability. The agreement provides for the integration of Fido's DigitalScore into the Trusted Onboarding Platform (TOP), the InfoCert solution

On 20 July 2021, InfoCert signed an important commercial partnership with Fido, the innovative digital credit scoring platform that uses new types of data and artificial intelligence to verify the identity of online consumers and assess their reliability.

The agreement provides for the integration of Fido's DigitalScore into the Trusted Onboarding Platform (TOP), the InfoCert solution for digitizing and managing customer identification and contracting processes online, already adopted by more than 120 organizations – in Italy and in 8 other countries – in the finance, insurance, telco and utility sectors.

How does the DigitalScore & TOP integration work?

Fido's algorithm can analyze hundreds of digital signals and create a customer score that, once integrated with InfoCert's identity validation services, provides a more complete and detailed view of the people requesting activation of online services.

“The integration of Fido's digital signals into the TOP process allows us to broaden the scope of our identity corroboration capabilities, adding the analysis of the 'digital risk' of each individual transaction alongside the document and biometric verification stages. Thanks to this integration, InfoCert positions itself as a one-stop shop for digital onboarding.”

Daniele Citterio, CTO of InfoCert

“We are proud of this collaboration with InfoCert and especially pleased that the market is recognizing the value of our solution, which is fully complementary to traditional identity validation and fraud prevention methods.”

Paolo Mardegan, Chief Commercial Officer of Fido.

The post InfoCert and Fido sign a new partnership; DigitalScore will be integrated into TOP appeared first on InfoCert.


auth0

Building an Identity Solution — Quantity Doesn’t Equal Quality

Why adding more developers to an identity project doesn’t always work out as intended

1Kosmos BlockID

Voice Authentication: How It Works & Is It Secure?


ValidatedID

Digital signatures, a fast track to digital transformation in the real estate sector

The latest real estate trend reports show how the pandemic has accelerated the use of technology and the implementation of trends such as teleworking and digitisation of processes. Find out how digital signatures are revolutionising the industry.

Improve the patient experience with electronic signatures

Electronic signatures are the perfect solution for compliance-based information processing.

KuppingerCole

Microsoft further strengthens Identity and Security offerings by CloudKnox Security acquisition

by Martin Kuppinger Microsoft last week announced another acquisition, a few days after announcing the acquisition of RiskIQ. This next acquisition is CloudKnox Security, a vendor offering unified privileged access and cloud entitlement management. These technologies, sometimes also referred to as CIEM (Cloud Infrastructure Entitlement Management), are essential for getting a grip on entitlements

by Martin Kuppinger

Microsoft last week announced another acquisition, a few days after announcing the acquisition of RiskIQ. This next acquisition is CloudKnox Security, a vendor offering unified privileged access and cloud entitlement management. These technologies, sometimes also referred to as CIEM (Cloud Infrastructure Entitlement Management), are essential for getting a grip on entitlements and access to cloud resources in multi-cloud and multi-hybrid environments.

Permission Management for hybrid cloud infrastructures

The CloudKnox Cloud Permissions Management Platform delivers a range of capabilities for managing privileges and access for cloud resources. This includes discovery and assessment of the state of privileges. It supports managing and optimizing privileges, for enforcing principles such as least privilege and need to know. It supports monitoring, auditing, and reporting, as well as forensics and incident response. Support spans the major clouds, but also virtualization environments such as VMware NSX, and management of server operating systems.

Extending PAM and hybrid cloud support in Microsoft 365

With that acquisition, Microsoft strengthens their Azure Active Directory offering, by further extending the reach into multi-cloud and multi-hybrid environments, but also extending the capabilities for managing privileged access entitlements and thus the breadth of its PAM (Privileged Access Management) capabilities. We expect to see integration of these technologies into the existing Microsoft stack of identity and security solutions soon.

With these acquisitions, Microsoft demonstrates that identity and security have long since ceased to be just a by-product and are now one of the core pillars of its portfolio. Specifically, Microsoft is consistently extending its scope of solutions beyond its own ecosystem, serving the breadth of today’s heterogeneous IT environments.


Okta

How to Toggle Functionality in C# with Feature Flags

Toggling functionality using feature flags in .NET Core apps is quite straightforward, especially with a neat little feature flag management service. In this post, I’ll walk you through how to build a simple web application using Okta for user authentication and how to use ConfigCat to manage and access feature flags. What Are Feature Flags? Feature flags (aka. feature toggles) are a rela

Toggling functionality using feature flags in .NET Core apps is quite straightforward, especially with a neat little feature flag management service.

In this post, I’ll walk you through how to build a simple web application using Okta for user authentication and how to use ConfigCat to manage and access feature flags.

What Are Feature Flags?

Feature flags (a.k.a. feature toggles) are a relatively new software development technique that enables development teams to turn features on and off on the fly. You can think of them as a remote control for your application, or an actual on/off switch for a feature. You can do many great things with feature flags. Typically, the primary purpose of feature flags is to decouple feature releases from code deployments so developers can both deploy a new feature that is turned off (or hidden from the user), and turn on a new feature as needed (i.e., when testing is complete, when the marketing team is ready, or whenever you, the developer, feel confident in the feature).

Some Cool Things Possible with Feature Flags

Canary Releases and Phased Roll-outs

After deploying a new feature, you can turn it on for just a handful of your users to gather feedback without risking your reputation on a broader audience. Sometimes, it’s a good idea to release a new feature in smaller phases to see how the infrastructure handles the load.

A/B Testing

Sometimes it’s hard to decide if we want to go forward with feature A or feature B, so we need to test first. Testing on live users provides high-quality data. Showing feature A to one group of users, while showing feature B to another group enables developers to measure which feature the user prefers.

Dogfooding

Tech companies, Okta included, usually use their products internally. So, whenever a new feature is on the horizon, it’s a good idea to test it on your organization first, to be sure that you are satisfied with the user experience and quality.

Emergency Kill Switch

“Never release on Friday!” - often said by experienced developers. Major issues tend to arise over the weekend, when it is difficult to get hold of the development team and roll back. This toggle comes in handy when you need to immediately turn the latest feature off.

The Anatomy of a Feature Flag

At the end of the day, a feature flag in your code is a Boolean variable. A common practice is to place the new feature in one of the statements of a conditional.

if (isMyNewFeatureEnabled)
{
    doTheNewThing();
}
else
{
    doTheOldThing();
}

Where Does the Feature Flag Value Come From?

The feature flag value can come from several different places. In some cases, you can determine its value based on other parameters in the application. For example, you can decide that you want a feature to be enabled only in the Staging environment, but not in Production. Sometimes, it is a good idea to put your feature flags in a configuration file, just beside the application like appsettings.json. There are also a growing number of feature flag management services (such as ConfigCat), which usually provide a nice UI to manage your flags.
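For the configuration-file option, a minimal sketch of reading a flag from appsettings.json might look like the following (the FeatureFlags section and key name are hypothetical; the binder API is the standard IConfiguration.GetValue that ships with ASP.NET Core):

// appsettings.json (hypothetical):
// {
//   "FeatureFlags": {
//     "TwitterFeedVisible": true
//   }
// }

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class HomeController : Controller
{
    private readonly IConfiguration _configuration;

    // IConfiguration is supplied by ASP.NET Core's built-in dependency injection.
    public HomeController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public IActionResult Index()
    {
        // Read the flag from configuration; fall back to false if the key is missing.
        var twitterFeedVisible = _configuration.GetValue("FeatureFlags:TwitterFeedVisible", false);
        return View(twitterFeedVisible);
    }
}

The trade-off is that changing a flag in appsettings.json usually means redeploying or restarting the application, which is exactly what a hosted flag service avoids.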

How to Implement Feature Flags in .NET Core

Now, I will demonstrate how to implement and use feature flags in .NET Core using a sample application created with the Okta CLI. For managing and accessing feature flags from my code, I’m going to use the ConfigCat SDK for .NET.

In this example, my new feature will be Okta’s Twitter feed, embedded to the home page of the application. The feed should only be visible if its feature flag is turned on.

Before You Get Started

Set Up a Sample Application with the Okta CLI

Okta is a secure and customizable solution to add authentication to your app. Okta supports any language on any stack and lets you define how you want your users to sign in. Each time a user tries to authenticate, Okta will verify his/her identity and send the required information back to your app.

The Okta CLI provides turnkey application templates with configured login flows.

Install the Okta CLI by following the steps on the GitHub page.

If you already have an Okta Developer account, run the following command:

okta login

If you don’t, you can register a new account by running:

okta register

The Okta CLI is now set up and ready! Run:

okta start

to launch the CLI.

Select the ASP.NET Core MVC + Okta option when prompted.

A ready-to-run application will be created with the Okta login flow; the client secrets are written into appsettings.json and the new client is added to your Applications on the Okta Developer Console.

You can test run the application by hitting F5 or by entering dotnet run in your command line. A browser window will open, and you should be able to log in using your Okta credentials.

I have uploaded my working application to GitHub in case you get stuck: Okta .NET Core 3 Feature Flag Example.

Set Up a ConfigCat Account

Go to ConfigCat.com and sign up for a free account.

You will see your first feature flag, My awesome feature flag, already created.

You can work with this feature flag, but I’m going to add a different one for my Twitter feed feature.

Accessing feature flags from .NET Core

Installing the ConfigCat SDK

In our application code, we’d like to know if the Twitter feed feature should be turned on or off. For that, we need the ConfigCat SDK for .NET to access and download the latest value of our feature flag.

First, install the NuGet package or type:

Install-Package ConfigCat.Client

Getting the Feature Flag Value

In HomeController.cs, add the following lines. First, create the ConfigCat client. The client will automatically poll and download the latest feature flag values every 60 seconds. You can customize this polling interval if you’d like to.

var client = new ConfigCatClient("YOUR-SDK-KEY");

YOUR-SDK-KEY is under the SDK Key tab on the ConfigCat Dashboard.

The GetValue() call will return whether the Twitter feed should be enabled or not.

var twitterFeedVisible = client.GetValue("twitterFeedVisible", false);
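If you prefer non-blocking calls, a roughly equivalent asynchronous sketch is below (this assumes the ConfigCat SDK version you are using exposes GetValueAsync, and it needs using System.Threading.Tasks;):

public async Task<IActionResult> Index()
{
    var client = new ConfigCatClient("YOUR-SDK-KEY");
    // Evaluate the flag without blocking the request thread.
    var twitterFeedVisible = await client.GetValueAsync("twitterFeedVisible", false);
    return View(twitterFeedVisible);
}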

Here is how my controller looks in one piece:

public IActionResult Index()
{
    var client = new ConfigCatClient("tNHYCC8Nm0OPXt2LxXT4zQ/k-5ZmLLd10isguXVF6PrTw");
    var twitterFeedVisible = client.GetValue("twitterFeedVisible", false);
    return View(twitterFeedVisible);
}

And my Index.cshtml.

@model bool
@{
    ViewData["Title"] = "Home Page";
}
<div class="text-center">
    <h1 class="display-4">Welcome</h1>
    @if (Model)
    {
        <div class="w-50">
            <a class="twitter-timeline" href="https://twitter.com/okta?ref_src=twsrc%5Etfw">Tweets by okta</a>
            <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
        </div>
    }
</div>

Targeting Users with Different Features in .NET Core

Sometimes, our product managers want to be able to set different features for different user groups. In these cases, the feature flag value depends on certain properties of our logged-in users. For example, I’d like to turn on Okta’s Twitter feed only for users whose email addresses end with @okta.com. Let’s see how this will look in the code.

First, we need access to the email address of each logged-in user. Since I’m using Okta authentication, I can access this via HttpContext.

var userEmail = HttpContext.User.Claims.Where(claim => claim.Type == "email").Select(claim => claim.Value).FirstOrDefault();

Then, we’ll make a user object and add the email address as a property.

var user = new User(userEmail) {Email = userEmail};

The first parameter is a required identifier. In this example, the email address works perfectly as a unique identifier.

Finally, let’s pass the user object to the GetValue() call for an evaluation.

var twitterFeedVisible = client.GetValue("twitterFeedVisible", false, user);

NOTE: Don’t worry about data privacy; because the feature flag evaluation is on the client-side, sensitive user information will never leave your system.

On the ConfigCat Dashboard, click the TARGET SPECIFIC USERS button and add a targeting rule. In this case, the rule is: if the email address of the logged-in user contains @okta.com, the Twitter feed feature should be enabled.

Run your application and play around with different targeting rules. They are quite fun!

Takeaway

Feature flags are quite simple to implement and add great value to your application’s flexibility. Whether you choose to go with a feature flag service provider or implement your own solution, I’m confident your product team will love the idea of decoupling feature releases from code deployments, as it eliminates a major source of stress from the equation. Also, A/B testing capabilities and percentage-based roll-outs will save your brand reputation and increase confidence when going to production with a new idea.

Learn More About Feature Flags and Authentication

Martin Fowler’s blog post from 2017 on the topic is a must-read
Okta CLI, the simplest way to create secure application templates
Get Started with ASP.NET Core + Okta
ConfigCat SDK for .NET manual and docs with advanced use cases

If you have any questions about this post, please add a comment below. For more awesome content, follow @oktadev on Twitter, like us on Facebook, or subscribe to our YouTube channel.


SWN Global

Our CEO Phantom Seokgu Yun on NFTs

We are currently witnessing various products becoming NFTs in various fields. Everyone in the crypto space says that NFTs will be “Something” in the digital future, especially in the era of the Metaverse. Our CEO, Phantom Seokgu Yun, agrees, but there are a few things that need to be defined before achieving this dream. Let’s hear from Phantom, our CEO, about what conditions are needed for a proper NFT

We are currently witnessing various products becoming NFTs in various fields. Everyone in the crypto space says that NFTs will be “Something” in the digital future, especially in the era of the Metaverse.
Our CEO, Phantom Seokgu Yun, agrees, but there are a few things that need to be defined before achieving this dream. Let’s hear from Phantom, our CEO, about what conditions are needed for a proper NFT to be created.

Link: https://cointelegraph.com/news/blockchain-tech-is-holding-nfts-back-because-of-these-three-design-flaws

About MetaMUI Blockchain

Phantom Seokgu Yun is the CEO of the Sovereign Wallet Network and currently leads the MetaMUI project. MetaMUI Blockchain is a fourth-generation blockchain and the world’s first identity-based blockchain.
MetaMUI is a blockchain with significant strengths in CBDC, cross-border payment, asset tokenization, and NFTs. Because MetaMUI is the only identity-based blockchain, it is legally possible to transfer ownership of assets and NFTs at the time of transaction. In the future, MetaMUI can solve a variety of problems, such as proof-of-ownership issues in the real world or the Metaverse, and it can also handle millions and billions of transactions in a few seconds with unlimited scalability.

Tuesday, 27. July 2021

auth0

Adding Auth0 Authentication to a Ktor HTTP API

Learn how to integrate Auth0 with a Ktor application using JWT Authentication

State Identity Solutions: Build? Buy? Or Both?

Evaluating Identity Management in the Public Sector

Magic Labs

Magic Product Updates: July Edition

What’s new with Magic? This month has been full of exciting momentum, so let’s dive right in! In this post, I’ll cover the latest highlights and improvements. 🚀 Series A Last week, we announced that Magic has closed a $27M Series A funding round to future-proof authentication! Our leading investor, Northzone, also participated in our seed round. To-date, we’ve raised $31M from

What’s new with Magic?

This month has been full of exciting momentum, so let’s dive right in!

In this post, I’ll cover the latest highlights and improvements.

🚀 Series A

Last week, we announced that Magic has closed a $27M Series A funding round to future-proof authentication!

Our leading investor, Northzone, also participated in our seed round. To date, we’ve raised $31M from institutional investors including Tiger Global, Placeholder, SV Angel, Digital Currency Group, CoinFund, and Cherubic. In addition, our network of stellar angel investors has expanded to reflect exceptional founders and operators across tech, developer-first tools, crypto, design software, social media, creator platforms, and more.

Our CEO, Sean, goes into the details of what this all means in his latest post.

With the need for cybersecurity and a seamless onboarding experience growing more important each day, our vision to build the passport of the internet is clearer than ever. Overall, we’re grateful for the support and motivated by the new challenges.

✨ Magic community

Big shout-out to everyone who’s been building with Magic since day one, and also those who have joined us recently. Community is a core part of what makes Magic what it is today and will continue to be integral as we head into the next chapter.

Our team is focused on making the developer experience top-notch.

With that, here are some of the most recent product updates.

We made some key upgrades to our search bar, so it’s easier to find the guides you’re looking for.

Check out our newest templates and tutorials:

How to add Magic Link to a Gatsby site
How to add Magic Link to a Svelte application

Kudos to Sean Mullen, who created the Svelte guide as our first guest author! For anyone who wants to write for Magic, tell us more about the tutorial you’d like to create by completing this form.

We also want to provide more insight into the cutting-edge trends in development. That’s why we’re creating the space to connect with and learn from thought leaders.

For our latest Close-up Magic meetup, we welcomed Jason Lengstorf, VP of Developer Experience at Netlify, to talk about:

What inspired him to start developing using the Jamstack architecture
How companies of all sizes can adopt Jamstack
His #1 suggestion for someone starting out in Developer Relations

In case you missed it, here’s the Twitter Spaces recording.

Next up, we’re inviting Praveen Durairaju, Sr. Developer Advocate at Hasura, to chat with us about his experiences in the pre- and post-Hasura days.

If you’re curious about what it’s like building a backend with GraphQL and a database from scratch vs. The Hasura Way, this meetup is for you.

Save your spot early and sign up here.

👋 Join us on Discord

Did you know the Magic community has a Discord server? It’s a new spot for us to gather to chat, as well as help answer any questions you might have.

Whether you’re a developer or just keen to join the discussion on auth, decentralized identity, or other modern infra like blockchain — come and say hi!

🔮 The future

Magic passwordless auth — with future-proof identity tech under the hood — will be made possible for developers right out of the box. That means everyone on the planet could sign up and log in to apps with decentralized user identity, improving trust for the internet at large.

This is just the start of our exciting journey ahead.

P.S. We’re hiring! If you’re keen to help build a safer, more authentic internet, join us.

Magic Product Updates: July Edition was originally published in Magic on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mattereum

Vinay Gupta at EthCC 2021: How to Use Ethereum to Save the Planet

At the latest Ethereum Community Conference, Mattereum CEO Vinay Gupta gave a presentation on how Mattereum is building the path for Ethereum, and crypto at large, to truly establish itself in the world of global trade. The following is an edited transcript of Vinay’s talk at the 2021 Ethereum Community Conference, where he presents Mattereum as a legal-technical infrastructure provider for

At the latest Ethereum Community Conference, Mattereum CEO Vinay Gupta gave a presentation on how Mattereum is building the path for Ethereum, and crypto at large, to truly establish itself in the world of global trade.

The following is an edited transcript of Vinay’s talk at the 2021 Ethereum Community Conference, where he presents Mattereum as a legal-technical infrastructure provider for bridging Ethereum with the physical economy, thus securing blockchain’s role in global trade. Topics discussed include the legal-technical structure of physical asset NFTs, Trust Communities, examples of live physical NFTs, as well as the role of decentralized insurance protocols in the Mattereum project.

Physical NFTs and Asset Passports mentioned in presentation:

Gold Bars

Marketplace:

Lohko PAMP Gold Bar 1 oz (S/N: 74707) - Lohko NFT | OpenSea

Asset Passport: https://passport.mattereum.com/lohko.1oz.gold.pamp.74707/

William Shatner Memorabilia with Third Millennia

Marketplace:

Third Millennia Physical NFT Collection #1, powered by Mattereum Marketplace on OpenSea: Buy, sell, and explore digital assets

Asset Passport: https://passport.mattereum.com/jtkdress.5000247880.62880.2408/10k/

Mattereum Art & Antiquities Collection

Marketplace:

Mattereum Fine Art and Antiquities Collection Marketplace on OpenSea: Buy, sell, and explore digital assets

Asset Passport: https://passport.mattereum.com/mtrm.tvg.handaxe.ach.berg.220/

*Note: the transcript does not include the Q&A from the talk since the livestream only featured Vinay’s audio.

Introduction

What I’m going to be talking about today is managing physical assets using NFTs. So the idea here is that when you buy the NFT, you get an enforceable, practical, legal right that will allow you to take physical possession of the object.

And that is a problem that has a legal layer, where we have to be able to say that you actually own the thing. It has technical layers, in that we have to make sure we don’t wind up, for example, double-spending physical objects, so that you have two claims on a single gold bar, for example. And the third thing that we ought to be dealing with is physical custody, to make sure that the physical assets that we’re dealing with are where we think they are when we go and try and collect them.

So the objective is to kind of build a whole system here where the legals, the technicals and the physical custody all align in a way that allows you to buy an NFT for a physical thing and get the physical thing. So what I’m going to start with is a few slides that just kind of illustrate how we got here and kind of what the idea is and then we can actually take a look at some NFTs for hopefully desirable, physical property that you can buy and sell right now.

So the company is called Mattereum. A little bit about my background: I’m basically a climate guy. I spent basically off and on 20 years figuring out how to relocate tens or hundreds of millions of people as part of climate change. Went into industry in 2014 after a long time in military think tanks. And in the long run, what I’m very interested in is using this kind of blockchain based resource allocation technology for managing very scarce, physical resources and environments like refugee camps, but also potentially space stations, moon base, Mars space.

Because this problem of, you know, who’s got the 10 millimeter wrench and can I borrow it so I can get the satellite dish installed in very austere environments using software to manage scarce physical resources makes a ton of sense to me. And at the kind of 10 year, 15 year horizon, that’s the kind of direction that I’m traveling.

Should also say that I was the release coordinator for the Ethereum chain and my background in crypto goes back into the mid 1990s.

Bringing Ethereum to the Physical World

The bottom line is that we’re trying to get Ethereum into a position where Ethereum can break out of the digital sphere and get right into the physical sphere in the same way that, for example, USDT and USDC gave us access to the dollar economy and allowed us to do price stabilization inside of smart contracts.

What we want to achieve with Mattereum is the same thing, but for houses, cars, gold bars, collectibles, fine art, clothes, whatever it is.

One of our critical sort of conceptual breakthroughs we actually got on Twitter from a fellow by the name of “gmoneyNFT” who basically said NFTs start to make a lot more sense once you realize that everything in the world except money is an NFT. And that was kind of where the lights went on. I’m like, yeah okay, this is exactly what we’re doing. We’re just modeling everything that you can’t get to with the ERC-20s you can get to as an ERC-721. So it’s that same extension kind of like USDT, kind of like USDC, but for everything, represented as NFTs rather than represented as currency instruments. Does that kinda make sense? It’s a big conceptual jump but let’s start there.

The reason for doing this is the world of physical goods is extremely large. I mean the numbers are completely eye watering. And if we can begin to manage this kind of volume using the blockchain at that point, it’s very, very clear what the longterm economic model for something like Ethereum is.

Right now, we have a bit of a problem in that we have layers of speculation on speculation on speculation on speculation, so when the Ethereum price wobbles, the entire economy wobbles. That is not where we want the world’s global computing infrastructure to be. We want the computing infrastructure to remain rock solid, so if you’ve got some kind of financial speculation over here, the entire edifice doesn’t rock. And we get there by putting say tens of trillions of dollars of physical assets into the Ethereum ecosystem so that 99% of the value in the Ethereum ecosystem is physical goods which have their property rights controlled with Ethereum rather than virtual goods, which are all denominated in value in Ether.

To me, that is the point at which we’ve kind of finally broken through. We’re in the real world. We’re deployed in the real world. And things really begin to change socially, culturally, and in tricky areas with regulation like climate change, this is how we get the leverage on those kind of tricky areas.

So that’s the kind of goal state, right? The goal state is to get out into the physical world, stabilize the economy by using the digital layer of all the smart contracts as a way of controlling value in the physical world rather than having the value only be locked inside of the smart contracts.

Crypto: An Economic Engine for Securing Global Trade

The economic lever for doing this is counterfeiting.

So in areas like wine, for example, roughly 8% of all the wine that was imported to Germany is fake. In high-end wine markets like auction markets, that number can be 20 or 30%. And remember, wine is $360 billion annually. So when you start talking about 8% of a $360 billion market, if you can use the NFTs to assert credible provenance and prove the wine is real, then what that gets you is an economic engine in which preventing the fraud drives the use case for the blockchain.

And this is a very, very direct powerful economic driver that moves real-world transactions from the kind of website e-commerce paradigm where everything is done with credit cards up into the blockchain space where you do the same transactions for the same wine, but this time you use crypto plus NFTs rather than doing the transactions in the kind of credit card economy.

And you do that specifically because you can eradicate the fraud. That economic engine applies to a gigantic percentage of the world’s trade. I mean, it says here 3.3%, and that’s averaged out against all commodities. It’s very spiky: some things are massively fraud-ridden, other things are relatively fraud-free, but that’s enough of a margin to pay for the transaction cost of moving onto a secure platform.

In this respect what we’re looking at here is a kind of an economic gradient between a low trust economy on credit cards and a high trust economy on crypto and NFTs with a kind of 3.3% of global GDP lever between those two economies. So this is how we kind of get world trade to bump off the credit card system up and onto the Ethereum rails.

I apologize if that’s kind of abstract. We’re very deeply soaked in this stuff. So, you know, it can be a little jargony, I apologize.

Trust Communities: Webs of Trust Around Physical Goods

So the way that we do this jump is we essentially assemble a thing which is kind of like a miniature DAO, and we call this a Trust Community.

So take a physical asset. Here we have the infamous Shatner stuff. Our first customer was William Shatner, the actor from Star Trek. Here we have some memorabilia that Shatner signed, and the verification of these physical assets is driven by a community of trust around each asset. So what you have is multiple different parties who are each staking value guaranteeing that those assets are real. And that multi-party authentication structure also allows for different kinds of truth.

So for example, if we’re looking at these Shatner toys, maybe one person comes along and says, “I was there when they were signed.” Another person comes along and says, “I’ve got records of when these things were manufactured.” Another person comes along and says, “I’ve done a CO2 calculation for how much energy was consumed in making these toys.” And you’ve got carbon offsets that pay for that, right?

So that’s the model: we’ve got all of these different individuals who are all staking value around a single asset, but each one is picking up a different facet of the truth. This is very much like a DAO, and you can also imagine a syndicate where you might get 20, 30, 40, 50 people that were all simultaneously staking behind the opinion of a single expert.

And what this allows us to do is spread the trust and spread the risk associated with physical asset purchases across networks: individuals, staking pools, potentially insurance networks. And we are working very closely with UNN.Finance to actually build staking pools that will allow people to participate in providing the warranty infrastructure around the purchase of physical goods on Ethereum.

So much as I’m historically quite down on DAOs because of issues like joint and several liability, if you do it correctly and you’re very careful not to create something which looks like a share, you can still use a lot of the concepts from DAOs, but what you’re doing is spreading risk through essentially promissory instruments rather than things which entitle people to a share of a future profit.

It’s quite delicate, but it’s important to stay outside of, for example, securities law.

A God’s Eye View of Product Lifecycles

So the final point that I want to make here is that this is not a simple closed system. The gold bars that we sell, for example, have an estimate of the amount of CO2 involved in manufacturing the gold bar and the amount of CO2 that was involved in creating the NFT.

And then we’ve got partner company Nori who has sold us blockchain tokens to cover that gold carbon burden.

What we have here is an integration of the authenticity data about the gold, the legal transfer right instantiated as an NFT and also tokenized carbon to basically take away the environmental load.

What we’re really doing is we’re embedding these physical goods inside of their total context, and more and more and more of the total context of the goods becomes visible in the blockchain as we do things like importing data from anti-slavery databases or anti-slavery researchers to verify that, for example, gold bars were produced without any slave labor being involved.

And we think that’s fundamentally important because again, you know, if the blockchain really is kind of the world computer, we need to be able to model the externalities of trade in that world computer so that we can accurately price goods. Right? If something comes in contaminated with a whole bunch of, you know, slave centric manufacturing processes, we don’t want to be in a position where that thing is being sold right beside something that was produced artisanally and extremely carefully to make sure no one was exploited. We need to be able to show these distinctions so that we can apply economic or potentially regulatory pressure to get the garbage out the global supply chain.

Examples of Live Physical Asset NFTs

So here we have an OpenSea page for three physical objects: an etching by Salvador Dali, a half-a-million-year-old stone hand axe made by an unknown hominid, probably a human, but maybe not, and a Tibetan bronze statue of a god called Acalanatha. These are NFTs on OpenSea. You can go find them if you look for Mattereum. This thing currently has a bid on it for just about $6,850 — sorry, $8,650.

So these things are being sold right now. The physical handaxe is stored in a vault in London. And when you buy the NFT, you get the right to take the handaxe out of the vault, or you can leave the handaxe in the vault and just hold the NFT. And every year you pay a storage fee and that covers the storage in the vault for another year.

Now why might you want to do this? Well, maybe you want to own a handaxe, and you don’t want to have it shipped to your house and stuck on a shelf where your kids will decide they’re going to use it for, you know, opening the box of cereal. You want to own the thing because you’re interested in the heritage and the history; you don’t need it physically in your house. More likely, you’re somebody that is interested in doing something like a museum show, and you want to lock in ownership of all of the assets you want to show simultaneously as a batch.

And then once you’ve got the property rights secured for all of these things, you could put the logistics into play to get all the stuff to one place, to have it on display. And then it can go back into vaulting. But, I mean, there’s no denying it’s a stunt, right? It’s a thing which is there just to show how concrete and tangible these things are.

The Value Flows within Trust Communities

Lohko PAMP Gold Bar 1 oz (S/N: 74707) - Lohko NFT | OpenSea

Let’s go take a look at a gold bar, which is maybe a little more practical. So here we have a one-ounce gold bar currently owned by Singh Capital, sold by a customer of ours, Lohko. So same thing: this bar is in a vault in Singapore. Ownership of the NFT gives you the right to transfer the physical bar.

Let me talk through the legal magic that makes this possible. So when you buy the NFT, OpenSea takes a 2% payment out of the NFT transfer. So I as the buyer have paid 98% of the money to the seller of the NFT. 2% is being collected by OpenSea and that 2% is then relayed to a set of people in this Trust Community who are authenticating the bar.

So that Trust Community is mapped in a structure called the Asset Passport. So all of these people here, right? You can see we’ve got data, terms of service, vault, NFT, and carbon. Each one of those five entities is being paid a percentage of this 2% fee by the buyer immediately on purchase of the NFT. So this creates a legal obligation between say the vault warranty provider and the new owner of the NFT. And those legal obligations are represented entirely on chain.

So, you can’t directly verify on chain that the bar is physically there. You know, there is a video camera watching the bar where you could go and get the feed, and even if there were, you couldn’t trust the camera. What we can represent fully on chain is a legal liability to say that if the bar isn’t where you said it is, then you will pay. And so what we need to do then is assess the liability on each one of those people and their ability to pay. And that serves as a proxy for the physical presence. So on chain, you can make the estimate about whether people actually have the money to pay their debts. And if they do, you can use that by extension to validate the physical goods.

And that is a very solid contractual relationship with the NFT buyer, because if you accepted money from the NFT buyer to say that you were signing up to provide this data and the data is wrong, you have an extremely clear basis for litigation, and Mattereum also handles that litigation.

Let’s go take a look at the vault claim. A party, in this case Lohko Wallet, who are our customer and our partner producing the gold, say they will take a quarter of a percent of the NFT sale price of the bar. And in return, they will accept a liability which is the gold value of the bar, specified by the spot price from the London Bullion Market Association, plus 5%.

So they are on the hook, as soon as you buy the NFT, for the full price of the bar plus 5%. Now notice you’re not buying the NFT from them. You’re buying the NFT from Singh Capital, who are the current owner of the NFT. So you pay the money to Singh Capital. You also pay money to Lohko. From Singh Capital you purchase the NFT. From Lohko, you purchase a warranty on the NFT, and this is also true for the other people in this warranty provider network.
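As a rough worked example of that warranty economics (the sale price and spot price below are made-up numbers, not figures quoted in the talk): the warranty provider's fee is a quarter of a percent of the sale, while its exposure is the LBMA spot value of the bar plus 5%.

import java.math.BigDecimal;
import java.math.RoundingMode;

public class VaultWarrantyExample {
    public static void main(String[] args) {
        BigDecimal salePrice = new BigDecimal("2000.00"); // hypothetical NFT sale price (USD)
        BigDecimal spotPrice = new BigDecimal("1800.00"); // hypothetical LBMA spot price for 1 oz (USD)

        // Warranty fee: 0.25% of the NFT sale price.
        BigDecimal warrantyFee = salePrice.multiply(new BigDecimal("0.0025"))
                .setScale(2, RoundingMode.HALF_UP);

        // Liability if the bar is not as warranted: spot value plus 5%.
        BigDecimal liability = spotPrice.multiply(new BigDecimal("1.05"))
                .setScale(2, RoundingMode.HALF_UP);

        System.out.println("Warranty fee earned on this sale: " + warrantyFee); // 5.00
        System.out.println("Liability accepted toward the buyer: " + liability); // 1890.00
    }
}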

This is what I mean by Trust Community. The trust structure continues to pay out every time the NFT is transferred and that creates a continuous legal container for the NFT. It’s a very, very durable structure.

Now say that you have to make a claim. You want to say to Lohko, “Hey, you know, I asked for the bar to be physically shipped to me and it wasn’t physically shipped to me. I’m making a claim.” There is a contract. And this is a big, old, scary legal document. Every time a new asset is added, new contracts are generated. The fields in yellow are basically pulled out of a data structure on IPFS. So all of the underlying documentation is stored in IPFS. We pull that information out of IPFS and we use it to prepare these documents, which verify the legal obligation. So everything is very, very tight on the backend. All the documents are stored and identified by their hashes. They can’t be modified. They’re uploaded into IPFS at the same time the NFT is created. The NFT links directly to those documents in IPFS.
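The tamper-evidence here comes from content addressing: a document is referenced by the hash of its bytes, so any modification produces a different identifier. A minimal sketch of that idea, using plain SHA-256 rather than IPFS's actual CID format, with hypothetical document text:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ContentAddressingSketch {
    static String digest(String content) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha256.digest(content.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String original = "Vault warranty: 1 oz PAMP gold bar, S/N 74707";
        String tampered = "Vault warranty: 1 oz PAMP gold bar, S/N 74708";

        // The NFT stores the hash of the original document; a tampered copy no longer matches it.
        System.out.println("original -> " + digest(original));
        System.out.println("tampered -> " + digest(tampered));
    }
}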

The whole system is extremely tight, and we’re working on making this stuff easier to understand. It’s a bit tricky to document because there’s so much detail. So at the point where the NFT is transferred, this is the contract which is being executed between the NFT buyer and Lohko. And it’s the network of those executed contracts that provides the proof that the asset is there.

Here’s some Star Trek stuff. These were the first set of NFTs we did. Exactly the same kind of structure, this time for ownership of a collection of Star Trek toys signed by William Shatner. These have digital authentication chips on them. We’re still being a little secretive about how that works; there’ll be a big announcement about it probably in the next two or three months. But we have the ability to put a key pair in a physical chip directly on the assets in a vault. The whole system is basically as tight as a drum. It’s a very, very clean, clear model.

The Role of Decentralized Insurance Networks in the Mattereum Protocol

There’s a lot of stuff here which is still to do. I mentioned UNN.finance. So some of these warranties could draw not against an individual’s assets, but they could draw against a pool which is manifested as a smart contract. So that in the event that the goods are not as described, you make a claim on the pool and the pool then pays out. And anybody can put their funds into that pool and collect revenue streams by virtue of the fact that their money is at risk covering these physical goods and it’s covering those transactions.

Once that system is set up, that’s really when it goes from being slightly manual and a little bit legal-flavored to being dramatically more crypto-native and digital. And I would expect that we would have gold with that setup attached to it probably within two months. We’re getting pretty close to it. It might even be a little sooner. And once that’s done, what you get is a situation where one or more experts state their opinion about why something has value and maybe put some of their own money on the line, kind of like a deductible in an insurance contract. And then behind that, a lot of other people can pile in behind that expert and put their money into a benefit pool, which will then cover anything that isn’t covered by the expert’s own funds inside of this deductible framework.
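A back-of-the-envelope sketch of that layered coverage (all figures hypothetical, not drawn from the talk): the expert's own stake absorbs a claim first, and the pooled funds absorb whatever remains.

import java.math.BigDecimal;

public class LayeredCoverageSketch {
    public static void main(String[] args) {
        BigDecimal claim = new BigDecimal("10000.00");      // hypothetical claim amount
        BigDecimal expertStake = new BigDecimal("2500.00"); // expert's own funds at risk ("deductible")

        BigDecimal paidByExpert = claim.min(expertStake);
        BigDecimal paidByPool = claim.subtract(paidByExpert); // pool covers the remainder

        System.out.println("Paid from expert's stake: " + paidByExpert); // 2500.00
        System.out.println("Paid from the pool: " + paidByPool);         // 7500.00
    }
}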

So at that point, it becomes possible for people who believe these gold bars are genuine to profit from the sale of each bar, because they put value at risk backing the fact that this gold is actually there. That structure is basically the lever that we’re going to use to get, you know, enormous volumes of assets into Mattereum and into the Ethereum world, so that you would finally have things like smart contracts doing things like owning gold. Or buying houses. You can imagine something like a property hedge fund where you simply buy NFTs for real estate, and the whole thing is secured by a huge pool, which is just taking a percentage of each transaction in return for making sure that if one of the properties is fraudulent, or was double-sold by somebody, that risk isn’t passed on to the hedge fund but is absorbed by the insurance pool.

And as those models really begin to spin out, I think you’re going to see a very dramatic change in people’s understanding of what blockchain is because the fastest and cheapest, easiest way to buy a house will be to buy an NFT of a house and then walk around to the real estate agent, pick up the keys literally two blocks later.

I expect us to do the first real estate using the system in Q3 of this year. It’s probably only going to be a parking space in London, but once the principle is established that you can do real estate, after that the fun really begins. And that’s basically it. I’m not going to delve down into the guts of the environmental work that we’ve been doing. There’s also a whole bunch of stuff here about circular economy and carbon management, but I think that’s a topic for another talk, and I’ll leave it there.

END.

The Mattereum Protocol is the culmination of decades spent thinking about how to manage scarce resources on the planet such that people’s basic necessities of living are secured, even in the worst of situations.

Vinay distilled his insights on the technological and social transformation of material culture in the book, The Future of Stuff, where he explores how things came to be and how we can build a better path forward where humanity can live in equilibrium with the planet.

The Future of Stuff

If long-reads aren’t your preferred media, our podcast extends this discussion to other builders and thinkers who are shaping material culture across a range of industries.

The Future of Stuff

Vinay Gupta at EthCC 2021: How to Use Ethereum to Save the Planet was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.


Global ID

GiD Report#170 — A “bombshell” in the battle for the future of crypto

GiD Report#170 — A “bombshell” in the battle for the future of crypto

Welcome to The GiD Report, a weekly newsletter that covers GlobaliD team and partner news, market perspectives, and industry analysis. You can check out last week’s report here.

This week:

What SEC commissioners really think about the Ripple lawsuit
What people are saying
Everything Ripple/SEC
This week in crypto
Chart of the week — "The 81 countries exploring central bank digital currencies"
This week in antitrust
Stuff happens

1. What SEC commissioners really think about the Ripple lawsuit

Ripple CEO Brad Garlinghouse, Photo: TechCrunch

Crypto’s in a quiet phase at the moment, but there are big rumblings on the regulatory front. One of the key battlegrounds is the ongoing lawsuit between the SEC and Ripple over whether sales of XRP amounted to unregistered securities offerings.

Last week, Ripple and the two individual defendants in the case, Chris Larsen and Brad Garlinghouse, all cited a recent letter penned by SEC commissioners Hester Peirce (AKA Crypto Mom) and Elad Roisman in relation to the recent settlement with Coinschedule.

You might call it a bombshell revelation.

In the letter, Peirce and Roisman pointed out the continued lack of clarity around what’s a security and what isn’t under existing guidance and regulatory frameworks:

There is a decided lack of clarity for market participants around the application of the securities laws to digital assets and their trading, as is evidenced by the requests each of us receives for clarity and the consistent outreach to the Commission staff for no-action and other relief. The test laid out in SEC v. W.J. Howey Co., 328 U.S. 293 (1946), is helpful, but, often, including with respect to many digital assets, the application of the test is not crystal clear. Although the Commission staff has provided some guidance,[1] the large number of factors and absence of weighting cut against the clarity the guidance was intended to offer.”

And rather than bring clarity, the SEC’s current preference for enforcement as a means of regulation has only further muddied the waters:

“In this void, litigated and settled Commission enforcement actions have become the go-to source of guidance. People can study the specifics of token offerings that become the subject of enforcement actions and take clues from particular cases; however, applying those clues to the facts of a completely different token offering does not necessarily produce clear answers. Providing guidance piecemeal through enforcement actions is not the best way to move forward; if the Commission intends to continue to do so, then we should at least be clear about which tokens we have identified to have been sold pursuant to securities offerings.

It’s almost as if the SEC is trying to be as vague as possible! The only thing they’re certain about?

…the only certainty we see is that people have questions about how to comply with the applicable laws and regulations.

Indeed, even when presented with an opportunity for clarity, the SEC backed away, as Peirce and Roisman highlight:

We nevertheless are disappointed that the Commission’s settlement with Coinschedule did not explain which digital assets touted by Coinschedule were securities, an omission which is symptomatic of our reluctance to provide additional guidance about how to determine whether a token is being sold as part of a securities offering or which tokens are securities.

So even if you wanted to be a law-abiding citizen in the space, it isn’t clear how. Rather than address that lack of clarity, the current SEC status quo merely creates an environment of fear and uncertainty. (The commissioners also propose a safe harbor rule — essentially a sandbox that crypto startups can play in as a way to promote innovation.)

Peirce and Roisman conclude:

Whether we decide that all or a subset of token offerings are securities offerings, providing clear regulatory guideposts and then bringing enforcement actions against people who ignore them is a better approach than the clue-by-enforcement approach that we have embraced to date and that today’s settlement embodies. In short, we know folks have questions and confusion persists in the marketplace; it is important that we start providing clear and timely answers.

Sounds reasonable.

2. What people are saying

Roslyn Layton, who has consistently covered the lawsuit over at Forbes:

Another portent of doom for the SEC’s legal team is a recent missive on a settlement with crypto exchange Coinschedule from SEC commissioners Peirce and Roisman, two Republicans who likely voted against the Ripple case driven by former Republican SEC Chair Jay Clayton. The Commissioners were essentially whistleblowers from inside the agency’s top leadership, attesting that the SEC fails to provide clear guidance and fair notice on digital assets.
The Commissioners’ statement probably does not surprise the SEC’s legal team. They likely know that Hinman’s answers to Ripple’s questions could sink their case and their credibility as an agency. It invites a ruling that could bring the SEC’s farce of fair notice to an end.

/gregkidd:

Without the ethical practice of fair notice, the Howey test is displaced by the Tony Soprano test: Jay Clayton’s infamous statement of saying ‘if it’s a security then we’ll regulate it’ and then not doing so or giving fair notice for 7 years”.
Hopefully this will stop SEC racketeering and will motivate them to provide clear rules based regulation rather than operating via shakedown enforcement actions
3. Everything Ripple/SEC

SEC.gov | In the Matter of Coinschedule
The SEC’s Fair Notice Farce, Starring William Hinman
Ripple Cites SEC Commissioners’ Remarks to Support Dismissal of Case — CoinDesk
Attorney Hogan Talks About the Smoking Gun BOMBSHELL Letter From the SEC Itself in SEC v. Ripple!

4. This week in crypto

EU Proposes Ban on Anonymous Cryptocurrency Transactions
EU to tighten rules on cryptoasset transfers
New Jersey orders BlockFi cryptocurrency firm to stop offering interest-bearing accounts
Via /junhiraga — Crypto Exchange FTX Valued at $18 Billion in Funding Round
JPMorgan just became the first big bank to give retail wealth clients access to cryptocurrency funds
Via /antoine — Elon Musk, Jack Dorsey, & Cathie Wood Discuss Bitcoin: B Word Conference

5. Chart of the week — "The 81 countries exploring central bank digital currencies"

Axios:

Central bank digital currency (CBDC) is probably not top of mind for most global consumers. But we may soon have no choice but to think about it — since 81 countries, representing over 90% of global GDP, are now exploring the development of one, Axios’ Kate Marino writes.
Driving the news: The Atlantic Council, a think tank that will testify at a July 27 Congressional hearing on CBDCs, gave Axios a first look at its new interactive map showing just how many world governments are now considering it.
6. This week in antitrust

President Biden Announces Jonathan Kanter for Assistant Attorney General for Antitrust | The White House
Biden chooses a tough top antitrust cop
Biden to Name a Critic of Big Tech as the Top Antitrust Cop
Conservative courts could rescue tech

7. Stuff happens

Via /gregkidd — BBVA says that it is shutting down banking app Simple, will transfer users to BBVA USA — TechCrunch
Via /gregkidd — Are Americans More Trusting Than They Seem?
Via /gregkidd — “on ID” Buying Beer — SNL
Via /junhiraga — Decentralized identity authentication platform Magic raises $27M
Via /j — Passwordless authentication startup Magic raises $27M in funding
This week: NFT’s: The Rise of Digital Ownership — Zoom
Visa acquires Currencycloud, which makes APIs for remittances and currency transfers, in a $963M deal — TechCrunch
“Visa notes that some 43% of all small businesses globally carried out some form of international trade in 2020.”
Exclusive: Swiss privacy app ProtonMail wades into U.S. antitrust war
Review: Why Facebook can never fix itself
Via /ctomc — Despite the hype, iPhone security no match for NSO spyware
Via /j — Identity Information SEARCH & API for Professionals | Pipl
Inside TikTok’s Highly Secretive Algorithm — Investigation: How TikTok’s Algorithm Figures Out Your Deepest Desires
How the Creator Economy Is Transforming Hollywood (Guest Blog)
Via /m — Venmo Curbs Visibility on Payments So Strangers Can’t See Them
Via /jvs — Online Checkout Startup Bolt Valued at $4 Billion
YouTube’s newest monetization tool lets viewers tip creators for their uploads — TechCrunch

GiD Report#170 — A “bombshell” in the battle for the future of crypto was originally published in GlobaliD on Medium, where people are continuing the conversation by highlighting and responding to this story.


Gravity Earth

Gravity, Tykn advance interoperability of two decentralized ID solutions for the humanitarian…

Gravity, Tykn advancing interoperability of two decentralized identity solutions for the humanitarian sector

Gravity and Tykn are pleased to announce our continued collaboration to advance the interoperability of our two distinct decentralized identity solutions for improved identity management in the humanitarian sector. Recently, Gravity and Tykn successfully conducted an interoperability Proof of Concept between Gravity’s digital ID protocol built on Tezos blockchain and Tykn’s Cloud Wallet built on Sovrin blockchain within the context of the Dignified Identities in Cash Programming (DIGID) project.

The DIGID Project provides one of the first examples — if not the first! — of attempts at interoperability between two different decentralized identity technologies in the humanitarian context.

To date, there have only been a few attempts at demonstrating interoperability within decentralized identity that we are aware of.* With our novel approach, Gravity and Tykn’s collaboration is an exciting first step towards achieving interoperability between different decentralized identity wallets based on two distinct protocols that leverage very different standards and networks.

To better explain the importance of interoperability of digital identity solutions within the humanitarian context, this article includes the following sections:

What is interoperability?
Why is interoperability important?
What are the benefits of interoperability between decentralized ID solutions in the humanitarian sector?
How are Gravity and Tykn addressing interoperability within the context of the DIGID Project?

What is interoperability?

Interoperability refers to the basic ability of different systems (for example, the two decentralized identity protocols from Gravity and Tykn) to readily connect and exchange information with one another. In terms of decentralized identity, interoperability involves a scenario where Alice, who has a decentralized identity wallet based on protocol A, can share credentials with Bob, who uses a wallet based on protocol B. With full interoperability, Bob can easily decrypt, read, and verify Alice’s credentials. Interoperability between different protocols is an important argument in favour of decentralized identity.

Why is interoperability important?

One of the key use cases envisioned for the use of decentralized identity in humanitarian aid is that NGO A can issue credentials to a beneficiary who can then easily share these credentials with NGO B. Interoperability allows NGOs A and B to use different decentralized identity protocols for issuance and verification, while still allowing a beneficiary to easily share credentials between the two organizations to gain access to different services.

Interoperability is important because it reduces the risk of vendor lock-in and siloed data management solutions.

Using a decentralized identity protocol with a high degree of interoperability may help mitigate risks arising from being dependent on a single vendor, both for humanitarian organizations and for beneficiaries receiving assistance. For example, decentralized identity solution Vendor A could experience a system-wide fault or suddenly cease activity. With siloed data management solutions, beneficiary data would all be lost in this scenario. However, with interoperable decentralized identity solutions, beneficiary data may still be available from decentralized identity solution Vendor B.

To date, there have only been a few attempts at demonstrating interoperability within decentralized identity. As such, the interoperability test conducted between Gravity and Tykn within the context of the DIGID project is one of the first of its kind that we are aware of.

What are the benefits of interoperability between decentralized ID solutions in the humanitarian sector?

Interoperability between decentralized ID solutions:

facilitates the implementation of multiple digital ID solutions with no vendor lock-in
removes inefficiencies and barriers present in current siloed beneficiary databases
increases collaboration within and among organizations working to support similar target populations
provides greater opportunities for market innovation within the digital ID space, driving digital ID market growth

How are Gravity and Tykn addressing interoperability within the context of the DIGID Project?

Launched by a consortium of some of the largest international NGOs in the world, the DIGID Project aims to address the issue of lack of official or recognized identity for recipients of humanitarian assistance by piloting digital identity solutions in Kenya.

This project strives to give control and ownership of personal data back to individuals, and at the same time increase collaboration between NGOs and their beneficiaries, with user consent as a key.

Earlier this year, Gravity’s digital ID solution powered the DIGID Project with the Kenya Red Cross Society (KRCS) to help vulnerable Kenyans without a proof of identity create digital identities on Gravity’s platform and access much-needed KRCS cash assistance. While the DIGID project leverages Gravity’s identity stack built on Tezos blockchain with beneficiaries creating Gravity identity wallets, the aim is to ensure that services that are at the intersection between Gravity’s ID platform and third-parties are also able to issue and read credentials from other decentralized identity protocols.

For sustainability purposes, it is critical that any digital ID solutions that are built for the long term are as interoperable as possible.

Thus, a key next step in the DIGID Project post-pilot phase following the creation of Gravity digital IDs for KRCS aid beneficiaries was to test interoperability between Gravity’s digital ID solution and another third-party solution (in this case, Tykn’s Cloud Wallet) to show how NGOs using different decentralized identity technologies could increase collaboration between humanitarian aid organizations and their beneficiaries.

The interoperability proof of concept demonstrated how two NGOs can use different decentralized identity platforms to register and deliver assistance to the same set of beneficiaries. This is made possible by associating a beneficiary’s different Decentralized Identifiers (DIDs) with each other. This needs to be done because each DID is specific to one of the two decentralized identity platforms being used. Allowing these DIDs to be associated with one another means that NGOs can use both sets of DIDs and issue credentials to both decentralized identity platforms. This is useful because it allows NGOs to share data with each other and beneficiaries without having to agree on a common identifier (such as a phone number, national ID number or system-generated ID number) to identify a beneficiary across two separate systems.
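One simple way to picture that association is a registry record linking a beneficiary's DIDs from the two platforms, so either NGO can look the beneficiary up by the DID it knows. This is a toy sketch only; the DID values, class names, and data model below are hypothetical placeholders, not the actual DIGID implementation.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A toy "DID association" registry: both NGOs can resolve a beneficiary
// from either DID and issue credentials on whichever platform they use.
public class DidAssociationRegistry {
    private final Map<String, List<String>> associations = new ConcurrentHashMap<>();

    public void associate(String didA, String didB) {
        List<String> pair = List.of(didA, didB);
        associations.put(didA, pair);
        associations.put(didB, pair);
    }

    public List<String> didsFor(String anyKnownDid) {
        return associations.getOrDefault(anyKnownDid, List.of());
    }

    public static void main(String[] args) {
        DidAssociationRegistry registry = new DidAssociationRegistry();
        // Hypothetical identifiers on the two networks used in the PoC.
        registry.associate("did:tz:tz1ExampleGravityBeneficiary", "did:sov:ExampleTyknBeneficiary");
        System.out.println(registry.didsFor("did:sov:ExampleTyknBeneficiary"));
    }
}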

Beyond this initial interoperability test within the context of the DIGID Project, Gravity and Tykn look forward to continuing to innovate together to make interoperability in the decentralized identity and humanitarian aid spaces possible for real-world use.

Gravity and Tykn are grateful to the DIGID consortium of international NGOs for making this interoperability test in the context of the DIGID Project possible.

To learn more about Gravity, visit our website at www.gravity.earth or follow us on LinkedIn and Twitter.

*For example, a collaboration between two decentralized identity providers based on the Sovrin network allowed for interoperability between their respective digital identity wallets in the context of the Swiss pharmaceutical industry. Another example is an initiative to build a universal interface that allows for the verification of credentials issued using different decentralized identity protocols.

Gravity, Tykn advance interoperability of two decentralized ID solutions for the humanitarian… was originally published in Gravity on Medium, where people are continuing the conversation by highlighting and responding to this story.


Blockchain Commons

2021 Q2 Blockchain Commons Report

It was another busy quarter for Blockchain Commons, with a focus on work on our Gordian reference apps, which demonstrate our architectural models and specifications. However, we had numerous other releases as well, including a lot of documentation to let everyone know what we’re doing. Our major work included: Overviews Releasing a video overview of our specifications and technologies; Publishing

It was another busy quarter for Blockchain Commons, with a focus on work on our Gordian reference apps, which demonstrate our architectural models and specifications. However, we had numerous other releases as well, including a lot of documentation to let everyone know what we’re doing.

Our major work included:

Overviews

Releasing a video overview of our specifications and technologies; Publishing our list of Gordian Principles;

Reference Apps

Releasing Gordian QR Tool and Seed Tool through the Apple App Store; Debuting our Sweeptool command-line tool; Experimenting with Timelocks for the next generation of #Smart Custody;

Coding Processes

Increasing our focus on the Rust programming language; Working with a sponsor on our first security review;

New Docs

Publishing docs on our UR and SSKR specifications; Kicking off two translations of Learning Bitcoin from the Command Line;

Other Priorities

Beginning work with our summer interns; Continuing to testify for the Wyoming legislature; Celebrating the fifth anniversary of self-sovereign identity; and Talking about the future of the BTCR DID.

(Also see our previous Q1, 2021 report.)

Some Overviews

Blockchain Commons is getting big, so we produced some overviews of our work!

Video Overview. Over the last few years, Blockchain Commons has produced a number of specifications, including our fundamental work on Uniform Resources (URs), our other innovations such as Lifehashes and Object Identity Blocks, and our updates of sharding technology with SSKR. If you’re trying to get a handle on the technologies we’re using and the specifications that we’re creating with our Airgapped community, we invite you to take a look at our technology overview video, which runs through our core conceptual and specification work one element at a time.


The Gordian Principles. We’ve also begun publishing some descriptions of what the Gordian architecture means to us. It’s focused on four principles that we believe are crucial to digital-asset management: independence, privacy, resilience, and openness. They’re all about ensuring that you’re the controller of your digital assets. Besides writing a short overview, we’ve also begun adding descriptions to each of our reference apps, discussing how they embody those principles.

And, those reference apps were our biggest news for the quarter …

Reference App Work

Our reference apps show off how Blockchain Commons’ Gordian Principles work in real life. We made great progress on them this quarter.

Gordian Releases. Where we had three reference apps available for beta testing in Q1, in Q2 we advanced to having two available for release from Apple.

Gordian Seed Tool is the app previously known as Gordian Guardian. It allows for the management of your cryptographic seeds in a way that’s secure and resilient. You store your seeds in Seed Tool, ensuring that the data is encrypted and that it’s redundantly backed up to iCloud, and then you derive and export keys as they’re needed. As with all of the Gordian reference apps, this one demonstrates the usage of a number of Blockchain Commons’ specifications, including SSKR, UR, and even our new request/response airgap methodology that we just debuted this February.

Gordian QR Tool is a simpler tool that similarly allows for the storage of QR codes in a secure and resilient way. Thus, if you had a largely unused seed or key, you could export it as a QR and store it here. QR Tool can also be used to store other private information such as 2FA seeds or the brand-new Smart Health Cards. QR Tool recognizes many categories of QR codes, including the UR types that Blockchain Commons has defined, allowing easy categorization and sorting of your cryptographic data. Our QR Tool was slightly held back by inadequacies we found in Apple’s QR creation functions, which can produce overly large QRs. We’ve already begun work translating a better QR library into Swift and expect to integrate that into v1.1 of QR Tool in the future.

QR Tool and Seed Tool also offer the first demonstration of the interactions possible in a Gordian architecture. One of the most powerful examples involves Blockchain Commons’ SSKR specification. Seed Tool allows a user to not only shard a seed using SSKR, but also to encode those shares as QR codes. The encoded shares can then be given to friends and family who have QR Tool, ensuring the safe storage of all the shares required to restore your seed!

We have more reference app releases planned for the future. Gordian Cosigner remains available as a Testflight beta release, and is our reference app for demonstrating how to conduct airgapped signing. We are also beginning work on the Gordian Recovery app, which we announced last quarter; it’ll demonstrate methodologies for recovering assets held by third-party wallets.

Sweeptool Debut. Recovering funds held in a third-party HD wallet can be tricky since you don’t always know which addresses were actually used. We’ve also been attacking that problem with a command-line tool called sweeptool, which sweeps funds anywhere in the hierarchy defined by a descriptor. Sweeptool is the work of one of our intern graduates, and is already available as an alpha release.

Timelock Experiments. Our current architecture for #SmartCustody depends on multisigs to allow for the partitioning of keys and the creation of resilience. However, we’re already thinking about the next generation of #SmartCustody, which will allow the use of timelocks to recover funds in the case of the incapacitation of a principal. We did some deeper investigation into miniscript this quarter, which allows for easier integration of timelocks into Bitcoin addresses, and also mapped out some new architectures for progressively sweeping funds forward to keep ahead of timelocks. However, we discovered that at this point miniscript isn’t yet integrated with the descriptor-wallets that are another of the core elements of the Gordian architecture. This prevents integration with our Gordian apps.

We’ve published a preliminary paper on the usage of Timelocks, but it isn’t finalized yet because of these issues. Meanwhile, one of our interns has begun work on a Rust-based Timelock project called mori-cli. This work is all meant to ensure we’re ready when miniscript is added to Bitcoin Core, or when timelocks are integrated into descriptors in some other manner.

Coding Processes

Releasing apps is just one step of a larger coding process, which requires careful consideration of what languages (and libraries) to use and careful reviewing of their security.

Rust Focus. Sweeptool is written in Rust, which is an increasing focus at Blockchain Commons. We’ve also been using it for our musign-cli work and some of our torgap work and have touched upon it in Learning Bitcoin from the Command Line. We think that Rust is an important language for the future of Bitcoin development in large part because of its safety guarantees, including memory safety, which address the class of bugs behind roughly 70% of security problems. Our main barrier for wider use of Rust has been the fact that C and C++ are the main languages used by Bitcoin Core, iOS, and Android, but we’ve already seen the start of changes there, and hope to be able to integrate Rust even more fully in the future. The fact that the Rust-based Bitcoin Dev Kit is actually in advance of Bitcoin Core in some features is very encouraging.

Security Review. Bitmark, one of our Sustaining Sponsors, has hired Radically Open Security to conduct a security review of our SSKR libraries, bc-shamir and bc-sskr. This is a crucial element in making Blockchain Commons’ specification work widely available, since it would be improper for us to review our own security code. As our first partners planning to use one of our libraries in a shipping project, as opposed to just our specifications, Bitmark is stepping up to the plate to fund the review of our core SSKR libraries. We expect this pattern to repeat with future partners and libraries. It shows the power of our open libraries: future companies will be able to depend on our SSKR libraries thanks to Bitmark’s contribution, and then they’ll be able to make contributions of their own that will cost much less than if they’d had to security review an entire suite of libraries themselves.

New Docs

The blockchain infrastructure is empowered by users and developers who know how to use it properly. Teaching them is the goal of our documentation projects.

UR & SSKR Docs. The ultimate purpose of our reference applications is to demonstrate how our new specifications can be used in actual applications. Hand-in-hand with that, we’re also producing documentation that lays out in more detail how those specifications work. We’ve begun collecting many of those docs in the documents area of our crypto-commons repo. This quarter we added to our docs repo with several documents about our Sharded Secret Key Reconstruction system, including SSKR for Users and SSKR for Developers. We also released a series of articles on Uniform Resources (URs) intended to show developers how they’re constructed and used.

We are also planning further support for our interoperable specifications such as UR and SSKR at a virtual interoperable wallet specification workshop. We’re currently working on locking down a date.

Learning Bitcoin Translations. Our best-known tutorial at Blockchain Commons is our Learning Bitcoin from the Command Line course, which we pushed to v2.0 last year and which has 1,700 stars and 100 watchers on GitHub. We’re thrilled that translations are now ongoing into both Portuguese and Spanish thanks to a half-dozen core volunteers. Some of Blockchain Commons’ own programmers and interns entered the blockchain industry thanks to this course, so we’re looking forward to these translations opening the doors even wider.

Other Priorities

Finally, we have a lot of more varied projects that all saw some progress.

Our Summer Interns Have Begun Work. Thanks to a grant from the Human Rights Foundation, we have a class of a dozen interns this summer. We’re engaging them with weekly calls to introduce them to crucial use cases and to let them talk with Bitcoin and Lightning experts from the industry and human-rights experts, most recently including John Callas from the EFF and Alex Gladstein from HRF.

Some of our human-rights-focused intern projects include a self-sovereign donation app, scripts to automate the setup of privacy and/or bitcoin services, documents on managing pseudonymity, and tracking of blockchain-based legislation.

Some of our more general intern projects include deployment of a full Esplora node for Blockchain Commons, the creation of scripts to make it easier for others to do so, documentation on bitcoin fee-estimation use cases, improvements to our Spotbit server including expansion of the Spotbit API and client app, and the aforementioned work on Mori and on Learning Bitcoin translations.

Our first completed intern ‘21 work is a short i2p chapter for Learning Bitcoin from the Command Line, talking about an alternative (or supplement) to traditional Tor privacy.

Wyoming Testimony. Christopher’s testimony has continued in Wyoming, talking about DAO and identity, to improve and extend both laws. The most recent testimony on May 28 continued to work through the ramifications of the definitions of principal authority for self-sovereign identity.


Self-Sovereign Identity. Speaking of which, it was the fifth anniversary of the concept of self-sovereign identity, which first suggested that we should control our identities on the internet, not be beholden to centralized agencies. Christopher wrote an article for Coindesk celebrating that anniversary, talking about how far we’ve come and reflecting on how things could be improved.

BTCR. Finally, we’ve also been talking about another of our blockchain projects that originated in Rebooting the Web of Trust: BTCR, a self-sovereign decentralized identifier that uses the Bitcoin blockchain. Recently Christopher and the other creators of BTCR talked with The Rubric about the past and future of the DID.

As you can see, we’ve got lots of ongoing work that continues to expand the ideas of responsible key management and self-determination on the internet. If you’d like to support our work at Blockchain Commons, so that we can continue to design new specifications, architectures, reference applications, and reference libraries to be used by the whole community, please become a sponsor. You can alternatively make a one-time bitcoin donation at our BTCPay.

Thanks to our sustaining sponsors, Bitmark, Blockchainbird, and Unchained Capital, and our new project sponsor Human Rights Foundation(@HRF), as well as our GitHub monthly sponsors, who include Flip Abignale (@flip-btcmag), Dario (@mytwocentimes), Foundation Devices (@Foundation-Devices), Adrian Gropper (@agropper), Eric Kuhn (@erickuhn19), Trent McConaghy (@trentmc), @modl21, Jesse Posner (@jesseposner), Protocol Labs (@protocol), Dan Trevino (@dantrevino), and Glenn Willen (@gwillen).

Christopher Allen, Executive Director, Blockchain Commons

Monday, 26. July 2021

auth0

The Hacker Mindset

How thinking like a hacker can increase your cybersecurity (really)

Global ID

EPISODE 10 — How to decentralize identity and empower individuals

EPISODE 10 — How to decentralize identity and empower individuals

If the internet decentralized information and crypto decentralized money and payments, then verifiable credentials will decentralize identity. In this episode, we chat with Dev Bharel, the software architect leading the charge around verifiable credentials at GlobaliD.

Past episodes:

EPISODE 09 — Understanding GlobaliD’s identity platform
EPISODE 08 — Owning your identity and data with VP of Design Antoine Bonnin
EPISODE 07 — Understanding the future of fintech with Ayo Omojola
EPISODE 06 — Establishing trust and safety in tomorrow’s networks
EPISODE 05 — How ZELF combines the power of payments and messaging
EPISODE 04 — The future of blockchain with the creator of Solana
EPISODE 03 — Should we trust Facebook?
EPISODE 02 — JP Thieriot on why Uphold isn’t delisting XRP
EPISODE 01 — The SEC’s crypto turf war and why XRP isn’t a security

Have a question for us? A topic you’d like covered? A guest you’d like to see? Let us know!

GlobaliD on Twitter Dev on Twitter

EPISODE 10 — How to decentralize identity and empower individuals was originally published in GlobaliD on Medium, where people are continuing the conversation by highlighting and responding to this story.


Anonym

The Links Between Privacy and Disinformation: Julia Angwin


Privacy, as we know, is a complex topic. When it converges with technology, it becomes more complicated. 

One aspect of this complexity is where privacy fits in with disinformation—false or misleading information spread intentionally to confuse or manipulate people. Disinformation is also called bad information, “weaponized information” and “fake news”. 

Disinformation is different from misinformation—false or misleading information spread unwittingly—but the two together have eroded trust in information to the point that we’re in an “information crisis”. (The WHO characterizes the spread of mis- and disinformation and the mistrust in information about COVID-19 as an “infodemic”.)

Since our remit is protecting information privacy, we were interested to hear how Julia Angwin, award-winning investigative journalist, editor-in-chief of The Markup and author of Dragnet Nation, linked privacy to disinformation in a recent podcast with Aspen Digital‘s Executive Director Vivian Schiller. The episode is part of Aspen Digital’s Disinfo Discussions, an initiative of the Aspen Institute’s new Commission on Information Disorder exploring disinformation and misinformation in the information crisis.

Angwin links privacy to disinformation in two ways

1. Lack of privacy generates disinformation

Angwin says relentless data surveillance has forced individuals to protect their privacy by engaging in some degree of information obfuscation: “You can’t avoid being seen, so you change how you’re being seen. This is disinformation.”

When Angwin wrote her landmark book, Dragnet Nation, she tried all the ways she could to escape what she calls “dragnet surveillance”. She says: “I wasn’t trying to evade the FBI; I was just trying to get out of indiscriminate tracking that is everywhere. I did all sorts of things like getting a burner phone and fake identities and different types of accounts, and what I found was that I was really engaging in quite a bit of disinformation.” 

“The world of relentless surveillance has actually forced individuals to try to protect their privacy through misdirection and obfuscation. This is literally a coping technique for a world of relentless surveillance … the world of surveillance creates a need and a requirement for everyone to engage in a bit of disinformation,” she said.

Angwin says this has set the standard for a level of disinformation we haven’t seen before.

2. Lack of privacy subjects people to disinformation

Angwin says digital technology has allowed the creation of propaganda (a form of disinformation) on a mass scale. Once the purview of governments, she says, propaganda has become an industry, and vulnerable communities are most at risk from the disinformation it spreads.

“The industry needs to know who to distribute propaganda to—and that’s where the data exploitation market comes in. You can find vulnerable people because you can buy lists of them. When people used to send junk mail, these lists were called ‘sucker lists.’ These were people who are going to fall for your scam. Now, of course, you can buy any kind of sucker list on Facebook or Instagram.” And “any kind” is right. These lists can now be granularly targeted by various factors including a person’s political and religious leanings, household income, age, location and more.

Angwin believes propaganda weaponizes disinformation and lies “because they can be sent directly to the people who are most vulnerable to receive them. News is being filtered and tailored to people.”

She points out that Facebook sells “sucker lists” via its ads, for example. Users can buy ads targeted to certain people, based on behavioural data. 

“We are served information that they believe might be attractive to us. There are millions of people who might be susceptible to dis and misinformation,” Angwin says.

What’s the fix?

Asked whether it’s possible to individually control this tracking and data exploitation, Angwin says no. “Honestly as an individual you have very little control because we need to use these technologies to participate in daily life. Ultimately your ability to decline participation in the data exploitation market is pretty minimal. Most you can do is minimize around the margin but not opt out entirely.” (We’d recommend you use MySudo.)

Angwin says most platforms are only offering “an illusion of control” over our personal data. “You go into some sort of privacy menu, and you can turn all these knobs and dials. The studies have shown that the more knobs and dials there are, the more you feel you’re in control and the more willing you are to accept privacy violations. So they offer you these very confusing settings, and you move them around and you think, ‘Oh, I’ve really solved this.’ The truth is, not really, you haven’t.” 

She concedes some platforms are getting more aggressive at turning off tracking, and cites Apple in this, but says often platforms are simply swapping one form of tracking for another.

As such, Angwin believes there aren’t remedies, so much as considerations. She implores people to think about “whether the benefits of microtargeting really outweigh the risks”. 

“The idea of categorizing all human behavior and then allowing commercial interests to use those to target individuals: Are the risks to society higher than the benefits to advertisers?” she asks.

Listen to the full podcast episode, ‘Privacy in the Age of Disinformation’

You can mitigate the risks to your personal privacy and your exposure to disinformation by using the MySudo app, and you can rapidly produce branded cybersecurity solutions for your customers to do the same through our complete privacy toolkit, Sudo Platform.

Photo by Timothy Hales Bennett on Unsplash

The post The Links Between Privacy and Disinformation: Julia Angwin appeared first on Anonyome Labs.


Affinidi

Selective Disclosure: Share What You Want

Selective Disclosure: Share Just What You Want

Selective disclosure is one of the pillars of self-sovereign identity as it enables individuals to share just what they want with others. It empowers the owner of a piece of data to disclose parts of a large data set, so the receiving entity knows just what’s needed.

As a result, an individual has the freedom to choose what is shared and no longer has to choose between an all-or-none approach. Undoubtedly, selective disclosure enhances an individual’s privacy and level of control over his or her personal information, including deciding who gets to see what data and how this data is used by others.

It also aligns with privacy and regulatory frameworks such as the GDPR, California Consumer Privacy Act, and more.

Lastly, selective disclosure supports minimal and progressive disclosures as well.

It is best implemented through verifiable credentials.

Affinidi provides building blocks for an open and interoperable Self-Sovereign Identity ecosystem. Reach out to us on Discord or email us if you want to build VC-based applications using our tech stack.

Follow us on LinkedIn, Facebook, or Twitter. You can also join our mailing list to stay on top of interesting developments in this space.

Selective Disclosure: Share What You Want was originally published in Affinidi on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Crypto Regulatory Affairs: US Puts Stablecoins Atop the Policy Agenda


Last week the top brass from across US regulatory agencies met to discuss the future of stablecoins - and to advance the development of a stablecoin regulatory framework. 


Indicio

Building a Hyperledger Indy Network – A technical overview

The post Building a Hyperledger Indy Network – A technical overview appeared first on Indicio Tech.

Panel: Start Simple to Scale Decentralized Identity

The post Panel: Start Simple to Scale Decentralized Identity appeared first on Indicio Tech.

Okta

Easy Distributed Tracing with Spring Cloud Sleuth


Spring Cloud Sleuth allows you to aggregate and track log entries as requests move through a distributed software system. In a monolithic system, it’s relatively easy to track requests as they move through the codebase because all requests can easily be logged to the same log file. You can generally just filter the log by the thread ID. But in a distributed system, a single client request may sprawl across any number of discrete cloud services. Any given service may have multiple instances handling different parts of the request. There is no single log file, with a request spread across multiple server instances. How do you use logs in this situation? How do you trace a request flow across a service mesh?

In this tutorial, you will see how Spring Cloud Sleuth can be integrated into a Spring Boot application. The Spring Boot application will be secured using Okta as an OAuth 2.0 & OIDC provider. You’ll use the Okta CLI to configure Okta and Spring Boot. You’ll also download and run a Zipkin server to collect the Spring Cloud Sleuth entries and visualize them.

Table of Contents

What is Spring Cloud Sleuth?
Bootstrap a Spring Boot App Using the Spring Initializr
Configure Spring Boot for OIDC Authentication
Create a Security Configuration Class
Make an Example Spring Cloud Sleuth App
Launch a Zipkin Server
Launch Two App Instances
Create a Valid JWT with OIDC Debugger
Confirm Spring Cloud Sleuth Works
Learn More About Spring and Spring Boot

What is Spring Cloud Sleuth?

Spring Cloud Sleuth’s solution is to inject span and trace IDs into log entries. A trace ID is the unique identifier that an entire request flow will share. It’s like the glue that sticks all of the log entries together. A span is more local: one is defined for each “request received” and each “request sent” event. Spans mark particular interaction points.

The initial span, or root span, is generated when a client request is received from outside the distributed system. This request lacks trace and span information. The root span’s ID becomes the trace ID for the rest of the request flow through the system.

The diagram below shows how Sleuth span and trace generation would work through a hypothetical service network.

In practice, the span and trace IDs look like the following in log entries (the bracketed section after the INFO). Notice how the span ID and trace ID are the same? That’s because this is the root span, the beginning of the request tree identified by that particular trace ID.

service1.log:2016-02-26 11:15:47.561 INFO [service1,2485ec27856c56f4,2485ec27856c56f4] 68058 --- [nio-8081-exec-1] i.s.c.sleuth.docs.service1.Application : Hello from service1. Calling service2

These are the Sleuth span and trace IDs associated with the service name.

[service1,2485ec27856c56f4,2485ec27856c56f4]
[SERVICE NAME,TRACE,SPAN]
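If you ever want those IDs programmatically rather than just in the log pattern (say, to return a trace ID in an error response), Spring Cloud Sleuth 3.x exposes a Tracer abstraction you can inject. This isn’t part of the demo app you’ll build below; it’s a minimal sketch, and the controller and endpoint path are made up for illustration.

import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.Tracer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class TraceIdController {

    private final Tracer tracer;

    TraceIdController(Tracer tracer) {
        this.tracer = tracer;
    }

    @GetMapping("/trace-id")
    String currentTraceId() {
        Span span = tracer.currentSpan();
        // Outside an active trace there may be no current span, so guard against null.
        return span == null ? "no active span" : span.context().traceId();
    }
}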

Once you have all these nifty log entries with IDs in them, you need a log aggregation and analytics tool to make sense of them. You can use any of the freely available tools: Kibana, Splunk, Logstash, etc. if you want. In this tutorial, you’ll use Zipkin. Zipkin is a Java-based distributed tracing system designed for this use case and seamlessly works with Spring Cloud Sleuth.

In this tutorial, you will create a simple, example Spring Boot service with two endpoints. You’re going to run two different instances of this service and use HTTPie to make a request to one instance of the service. This service will make a request to the second instance of the service, which will return a reply to the first service, which will return a reply to your original request. This request process will be logged, and the Sleuth log entries automatically pushed to a local Zipkin server, and you will be able to visualize the request flow using the Zipkin server.

The point of the example is to demonstrate how to integrate Spring Cloud Sleuth, how it allows you to track request flow across different services, and how the whole process can be secured using Okta and JSON Web Tokens.

Before you get started, you need to have a few things installed.

Java 11: This project uses Java 11. OpenJDK 11 will work just as well. Instructions are found on the OpenJDK website. OpenJDK can also be installed using Homebrew. Alternatively, SDKMAN is another excellent option for installing and managing Java versions.
Okta CLI: You’ll be using Okta as an OAuth/OIDC provider to add JWT authentication and authorization to the application. You can go to our developer site to learn more about Okta. You need a free developer account for this tutorial. The Okta CLI is an easy way to register for a free Okta developer account or log in to an existing one and configure a Spring Boot app to use Okta as an auth provider. The project GitHub page has installation instructions.
HTTPie: This is a powerful command-line HTTP request utility that you’ll use to test the Spring Boot server. Install it according to the docs on their site.

Bootstrap a Spring Boot App Using the Spring Initializr

Spring has a great project called Spring Initializr. Why no “e”? Because it’s cool, that’s why. Joking aside, it’s pretty great. You can go to start.spring.io and quickly configure a starter project. You can browse the project online before you download it. You can get a link you can share and save for the configured project. And you can even use a REST API, which is what you’ll do below.

Open a bash shell and run the command below. This will download the project as a compressed tarball, untar it, and navigate you into the project directory.

curl https://start.spring.io/starter.tgz \
  -d dependencies=web,cloud-starter-sleuth,okta,cloud-starter-zipkin \
  -d baseDir=spring-cloud-sleuth-demo | tar -xzvf -
cd spring-cloud-sleuth-demo

The command uses a lot of the default settings. It uses Maven as the dependency manager. It uses Java as the programming language. It uses Spring Boot 2.4.5 (the current release at the time of writing this tutorial). It creates a JAR as the build target. Finally, it specifies Java 11 (again, at the time of this tutorial).

The most important things it configures are the four dependencies listed below.

Spring Web - adds Spring MVC for building RESTful web applications using Tomcat as the default server
Spring Cloud Sleuth - adds the basic dependencies to write Sleuth-compatible log entries
Zipkin Client - adds the ability to write Sleuth entries to a Zipkin client
Okta Spring Boot Starter - adds Okta’s Spring Boot Starter, which helps configure Spring Boot for use with Okta as an OIDC and OAuth 2.0 provider

The demo application can be run with ./mvnw spring-boot:run. However, it has no endpoints defined and doesn’t do anything except start. So, before you make it do something more exciting, first you need to configure Okta and JWT auth.

Configure Spring Boot for OIDC Authentication

OIDC is an authentication protocol that, along with OAuth 2.0, provides a spec for a complete authentication and authorization protocol. This is the protocol that Okta and Spring implement to provide the secure, standards-compliant JSON web token (JWT) authentication solution that you’ll use in this tutorial. Creating an OIDC application on Okta configures Okta as an authentication provider for your Spring Boot application.

Open a bash shell and navigate to the demo project root directory.

Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create. Select the default app name, or change it as you see fit. Choose Web and press Enter.

Select Okta Spring Boot Starter. Accept the default Redirect URI values provided for you. That is, a Login Redirect of http://localhost:8080/login/oauth2/code/okta and a Logout Redirect of http://localhost:8080.

What does the Okta CLI do?

The Okta CLI will create an OIDC Web App in your Okta Org. It will add the redirect URIs you specified and grant access to the Everyone group. You will see output like the following when it’s finished:

Okta application configuration has been written to: /path/to/app/src/main/resources/application.properties

Open src/main/resources/application.properties to see the issuer and credentials for your app.

okta.oauth2.issuer=https://dev-133337.okta.com/oauth2/default
okta.oauth2.client-id=0oab8eb55Kb9jdMIr5d6
okta.oauth2.client-secret=NEVER-SHOW-SECRETS

NOTE: You can also use the Okta Admin Console to create your app. See Create a Spring Boot App for more information.

Open your src/main/resources/application.properties file. You should see something like the following.

okta.oauth2.issuer=https\://dev-123456.okta.com/oauth2/default
okta.oauth2.client-secret={yourClientSecret}
okta.oauth2.client-id={yourClientId}

While you’ve got the properties file open, add a couple of new properties.

spring.application.name=${APP_NAME}
server.port=${APP_PORT}

These two properties will allow you to specify the application name and port from the command line when you boot the app. You need to be able to do this because you'll run two distinct application instances with different names on different local ports.

Create a Security Configuration Class

To configure the Spring Boot application as a resource server, create a SecurityConfiguration class. The following class tells Spring Boot to authenticate all requests and use JWT-based auth for the resource server.

src/main/java/com/example/demo/SecurityConfiguration.java

@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests(authorizeRequests -> authorizeRequests.anyRequest().authenticated())
            .oauth2ResourceServer().jwt();
    }
}

Make an Example Spring Cloud Sleuth App

Replace the DemoApplication class with the code listed below; I'll explain it afterward.

src/main/java/com/example/demo/DemoApplication.java

package com.example.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    RestTemplate myRestTemplate() {
        return new RestTemplate();
    }

    @RestController
    static class SimpleRestController {

        private static Logger log = LoggerFactory.getLogger(SimpleRestController.class);

        private final RestTemplate restTemplate;

        SimpleRestController(RestTemplateBuilder restTemplateBuilder) {
            this.restTemplate = restTemplateBuilder.build();
        }

        @Value("${APP_NAME}")
        private String appName;

        @GetMapping("/a")
        String a(@RequestHeader(name = "Authorization") String authToken) {
            log.info("Handling a - " + appName);

            // Forward the caller's JWT to service B so that request is also authorized
            HttpHeaders headers = new HttpHeaders();
            headers.add("Authorization", authToken);
            HttpEntity request = new HttpEntity(headers);

            ResponseEntity<String> response = restTemplate.exchange(
                "http://localhost:8082/b",
                HttpMethod.GET,
                request,
                String.class
            );
            String result = response.getBody();
            log.info("Reply = " + result);

            return "Hello from /a - " + appName + ", " + result;
        }

        @GetMapping("/b")
        String b(@RequestHeader(name = "Authorization") String authToken) {
            log.info("Handling b - " + appName);
            return "Hello from /b - " + appName;
        }
    }
}

The demo app creates two endpoints, cunningly named a and b. To demonstrate how Sleuth works across services in a network of microservices, you will run two instances of this application. Using HTTPie, you’ll call endpoint a, which will call endpoint b. Endpoint b will return a result to endpoint a, which will return a combined result to your shell.

The logging statements are important. This is where the Sleuth span and trace markers are going to get injected into your logs and sent to the Zipkin server (which you’ll set up in a moment).

In endpoint a, notice that the code grabs the JWT from the Authorization header (via @RequestHeader) and uses Spring's RestTemplate to make a request to endpoint b, including the token in that request. Both endpoints require the token for authentication, so this is important. It's also important that you use Spring's RestTemplate and inject it as a bean, because this allows Spring Cloud Sleuth to automatically include the Sleuth trace ID in the outgoing request, which is the main point here: tracking request flow across different services.

If you don’t inject the RestTemplate as a bean but instead instantiate it directly in the method or use a different HTTP client, you will need to manually add the Sleuth trace ID.

Launch a Zipkin Server

Before you start the two instances of the Spring Boot app, you need to launch your Zipkin server. Because the project includes the Zipkin Client dependency (along with the Spring Cloud Sleuth dependency), Spring Boot is configured to send logging information to a Zipkin server at the default port 9411. The Zipkin server includes a graphical interface that allows you to search and view log traces.

For more information on the Zipkin server, take a look at their quick start page. The Zipkin server you’re launching uses a default, in-memory store that does not persist data. When you restart the service, all data will be lost. This is great for testing. For production, you might want to use a persisting store. Zipkin supports Cassandra, Elasticsearch, and MySQL. See the GitHub page for more information.

There is a Spring Boot annotation, @EnableZipkinServer, that launches a Zipkin server for you. However, this annotation is deprecated. Instead, they suggest downloading the server as a JAR file and launching that.

Open a new bash shell. Use the following command to download the latest Zipkin server as a JAR file.

curl -sSL https://zipkin.io/quickstart.sh | bash -s io.zipkin:zipkin-server:LATEST:slim zipkin.jar

Run the server and leave it running while you’re working on the tutorial.

java -jar zipkin.jar

You’ll see some output on the console ending with the following.

2021-06-21 10:19:20:587 [armeria-boss-http-*:9411] INFO Server - Serving HTTP at /0:0:0:0:0:0:0:0%0:9411 - http://127.0.0.1:9411/

There’s nothing to see yet, but you can open the Zipkin dashboard in a browser: http://localhost:9411.

Launch Two App Instances

Now you’re ready to launch the instances of your app. First, you need to run the following commands in two separate shells. Notice that you’re using the environment variables to pass the application name and port to the Spring Boot app.

Service A:

APP_NAME="Service A" APP_PORT=8081 ./mvnw spring-boot:run

Service B:

APP_NAME="Service B" APP_PORT=8082 ./mvnw spring-boot:run

You should see console output like the following for both apps.

...
2021-06-21 10:26:41.539 INFO [Service A,,] 126840 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-06-21 10:26:41.769 INFO [Service A,,] 126840 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8081 (http) with context path ''
2021-06-21 10:26:41.792 INFO [Service A,,] 126840 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 2.781 seconds (JVM running for 2.975)

Try a test request. Open a new bash shell (the fourth, for those counting). Use HTTPie to make a GET request at the /a endpoint.

http :8081/a

You’ll get denied.

HTTP/1.1 401

This is expected. You need to include a valid JWT.

Create a Valid JWT with OIDC Debugger

An easy way to get an access token is to generate one using OpenID Connect Debugger. First, you must configure your application on Okta to use OpenID Connect’s implicit flow.

Run okta login and open the resulting URL in your browser. Go to the Applications section and select the application you just created. Edit its General Settings and add Implicit (Hybrid) as an allowed grant type, with access token enabled. Then, make sure it has https://oidcdebugger.com/debug in its Login redirect URIs. Click Save and copy the client ID for the next step.

Now, navigate to the OpenID Connect Debugger website. Fill in your client ID, and use /oauth2/default/v1/authorize for the Authorize URI. The state field must be filled but can contain any characters. Select token for the response type.

Scroll down and click SEND REQUEST.

You should get a success page with an access token. Then, if you’re curious, you can inspect the decoded token.

Copy the token value to the clipboard.

In the bash shell that you will make the HTTP requests from, save the token value as a shell variable.

TOKEN=eyJraWQiOiJJdXVaRG00RTg5WUg5U1JoeC1tbWl...

Confirm Spring Cloud Sleuth Works

Great. At this point, you should have four bash terminals going.

Spring Boot service A
Spring Boot service B
Zipkin server
Bash terminal for making requests with HTTPie (where you just stored your JWT)

You’re going to make a simple GET request to service A on endpoint /a. Service A will log the request and make a call to service B on endpoint /b. Service B will log that request and return a reply back to service A. Service A will log that it received a reply and will return the reply to your bash terminal.

This example is a little contrived, but the point is to show you how each log event will have a unique Sleuth span value while the entire request flow will share the same trace value.

Make a request to service A endpoint /a using the JWT you just created.

http :8081/a "Authorization: Bearer $TOKEN"

If all went well, you should see this.

HTTP/1.1 200
...
Hello from /a - Service A, Hello from /b - Service B

Take a look at the console output from the two services.

Service A

2021-06-21 15:28:38.800 INFO [Service A,ef0fe81ff18325ff,ef0fe81ff18325ff] 14429 --- [nio-8081-exec-1] e.d.DemoApplication$SimpleRestController : Handling a - Service A
2021-06-21 15:28:39.577 INFO [Service A,ef0fe81ff18325ff,ef0fe81ff18325ff] 14429 --- [nio-8081-exec-1] e.d.DemoApplication$SimpleRestController : Reply = Hello from /b - Service B

Service B

2021-06-21 15:28:39.556 INFO [Service B,ef0fe81ff18325ff,76b95441dd8d0300] 14466 --- [nio-8082-exec-1] e.d.DemoApplication$SimpleRestController : Handling b - Service B

All of these log entries have the Sleuth span and trace IDs injected into them, along with the service names.

[Service A,ef0fe81ff18325ff,ef0fe81ff18325ff]
[SERVICE NAME, TRACE, SPAN]

Notice how the first ID is the same for all three entries. That’s the Sleuth trace ID that ties the entire request sequence together. Also, notice that for the entries for service A, the span and trace IDs are actually the same. That’s because this is the initial Sleuth logging event that kicks off the request tree, so that ID is the ID of the root span, which becomes the trace ID for the rest of the tree.

Take a look at the Zipkin dashboard at http://localhost:9411.

Click Run Query. You’ll have one result.

Click on Show, and you’ll see a detailed summary of the request tracing and logging.

If you look at the detailed graph, you’ll see three spans. The original GET request from the bash shell is the first one. Within that, a span encompasses the GET request to service B from service A and a third span encompasses service B receiving the GET request.

Learn More About Spring and Spring Boot

In this tutorial, you learned a little about Spring Cloud Sleuth and how it can trace requests through service meshes built with Spring Boot. You created an example application that you started two instances of and used Spring Cloud Sleuth to track an example request through the service network. Next, you secured the services using Okta JWT OAuth 2.0 and OIDC. Finally, you ran a local Zipkin server that allowed you to visualize the Sleuth span and trace entries in your logs.

You can find the source code for this example on GitHub in the okta-spring-cloud-sleuth-example repository.

We have a slew of other posts on Spring Boot you might like:

Build Native Java Apps with Micronaut, Quarkus, and Spring Boot
R2DBC and Spring for Non-Blocking Database Access
How to Use Client Credentials Flow with Spring Security
Kubernetes to the Cloud with Spring Boot and JHipster

If you have any questions about this post, please add a comment below. For more awesome content, follow @oktadev on Twitter, like us on Facebook, or subscribe to our YouTube channel.


How to Write a Secure Python Serverless App on AWS Lambda


Modern authentication systems generate JSON Web Tokens (JWT). While there are several types of JWTs, we’re concentrating on access tokens. When a user successfully logs in to an application, a JWT is generated. The token is then passed in all requests to the backend. The backend can then validate the token and reject all requests with invalid or missing tokens.

Today, we are going to build a simple web application that uses the Okta authentication widget to log users in. The access token will be generated and sent to an API written in Python and deployed as an AWS Lambda function, which will validate the token. Let’s get started!

Table of Contents

Install AWS Serverless CLI, Python 3, and Tornado
Create an Okta Account and Application
CORS and Effect on AWS Lambda
Build a Simple HTML and JavaScript Client
Build a Web Server in Python
Create an AWS Lambda Function in Python
Validate a JWT Offline in a Python Lambda Function
Learn More About Python, JWTs, and AWS

NOTE: The code for this project can be found on GitHub.

Install AWS Serverless CLI, Python 3, and Tornado

If you haven’t already got an AWS account, create an AWS Free Tier Account.

Next, install the AWS SAM CLI.

Next, if you don’t already have Python installed on your computer, you will need to install a recent version of Python 3.

Now, create a directory where all of our future code will live.

mkdir aws-python
cd aws-python

To avoid issues running the wrong version of Python and its dependencies, it is recommended to create a virtual environment so that the commands python and pip run the correct versions:

python3 -m venv .venv

This creates a directory called .venv containing the Python binaries and dependencies. This directory should be added to the .gitignore file. This needs to be activated to use it:

source .venv/bin/activate

You can run the following command to see which version you are running.

python --version

Finally, you need to install the Tornado Python library to build a web server for the front end.

pip install tornado

Create an Okta Account and Application

Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create. Select the default app name, or change it as you see fit. Choose Single-Page App and press Enter.

Use http://localhost:8080 for the Redirect URI and accept the default Logout Redirect URI of http://localhost:8080.

What does the Okta CLI do?

The Okta CLI will create an OIDC Single-Page App in your Okta Org. It will add the redirect URIs you specified and grant access to the Everyone group. It will also add a trusted origin for http://localhost:8080. You will see output like the following when it’s finished:

Okta application configuration:
Issuer:    https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta Admin Console to create your app. See Create a Single-Page App for more information.

CORS and Effect on AWS Lambda

As an important aside, we need to make a design decision as to how to pass the access token from the web front end to the Python backend Lambda function. There are several ways this can be done: it can be passed in an authorization header, in a cookie, or as a query or POST parameter.

As the backend will be implemented as an AWS Lambda function, this limits our choice due to Cross-Origin Resource Sharing (CORS) restrictions. Web pages are hosted on a web server that has a domain name, called the origin domain. When a web page needs to communicate with a backend API, a JavaScript function makes an HTTP request to the backend server. If the domain name, or even the port number, of the backend server differs from the origin domain, then the browser will refuse the response due to CORS.

In order to overcome CORS restrictions, the backend server needs to set response headers that give the browser permission to accept the response data. The most important header is the one that specifies which origin domain can receive the response:

Access-Control-Allow-Origin: http://www.example.com

It is also possible to allow access from any origin domain:

Access-Control-Allow-Origin: *

Do be careful about allowing any domain, as it will almost certainly be flagged at a security audit, and may be in violation of an information security regulation.

CORS also adds further restrictions on which request HTTP headers are allowed. In particular, the Authorization header is forbidden. The restriction can be overcome by adding a second response header:

Access-Control-Allow-Credentials: true

There is however a further complication. The browser doesn’t know whether the server will allow the authorization header to be sent. To overcome this, the browser will make a preflight request to the server to determine whether the actual request will be allowed. The preflight request is an HTTP OPTIONS request. If the response contains the correct CORS headers then the actual request will be made.

The application we are going to build has a backend API implemented in Python, which will validate the access token on each request. This function is deployed as an AWS Lambda function. Unfortunately, the container in which the Lambda function is deployed will receive the preflight request. It will attempt to validate the token in the authorization header. This will fail as the container doesn’t have the public key required to validate the token, resulting in a 403 Forbidden response.

We can’t use the authorization header, and cookies are often blocked, so we will send the token as a POST parameter.

Build a Simple HTML and JavaScript Client

We will start by building a simple web front end in HTML and JavaScript. It will be served by a web server written in Python.

First of all, create a directory called client which will contain static content.

Next, create a file called client/index.html with the following content:

<html>
  <head>
    <meta charset="UTF-8" />
    <title>How to write a secure Python Serverless App on AWS Lambda</title>
    <script src="https://global.oktacdn.com/okta-signin-widget/5.7.3/js/okta-sign-in.min.js" type="text/javascript"></script>
    <link href="https://global.oktacdn.com/okta-signin-widget/5.7.3/css/okta-sign-in.min.css" type="text/css" rel="stylesheet"/>
    <link href="style.css" rel="stylesheet" type="text/css" />
    <script src="control.js" defer></script>
  </head>
  <body>
    <h1>How to write a secure Python Serverless App on AWS Lambda</h1>
    <div id="widget-container"></div>
    <div class="centred">
      <form id="messageForm">
        Message: <input id="message" name="message" type="message"/>
        <input type="hidden" id="token" name="token"/>
        <input type="button" value="Send" onclick="onmessage()"/>
      </form>
      <textarea id="messages" name="messages" rows="10" cols="50">Messages</textarea><br/>
    </div>
  </body>
</html>

The Okta Sign-In Widget's JavaScript and CSS files are loaded from Okta's global CDN (content delivery network). The widget-container will be replaced by the login form when the page loads. The page contains a simple form that has a text box for a message and a hidden input that will hold the access token. The text area at the bottom will display responses from the server.

Now, create a stylesheet called client/style.css. Here is an example:

body {
  background-color: #ccccff;
  text-align: left;
}
h1 {
  text-align: center;
  font-size: 50pt;
  font-style: italic;
  color: #0000FF;
  clear: both;
}
h2 {
  text-align: center;
  font-size: 30pt;
  font-style: normal;
  color: #0000FF;
  clear: both;
}
.centred {
  text-align: center;
  display: block;
  margin-left: auto;
  margin-right: auto;
}

Next, create a file called client/control.js with the following JavaScript:

var accessToken = null;

var signIn = new OktaSignIn({
  baseUrl: 'http://{yourOktaDomain}',
  clientId: '{yourClientId}',
  redirectUri: window.location.origin,
  authParams: {
    issuer: '/oauth2/default',
    responseType: ['token', 'id_token']
  }
});

signIn.renderEl({
  el: '#widget-container'
}, function success(res) {
  if (res.status === 'SUCCESS') {
    accessToken = res.tokens.accessToken.accessToken;
    signIn.hide();
  } else {
    alert('fail');
  }
}, function(error) {
  alert('error ' + error);
});

function onmessage() {
  const url = "http://localhost:3000/api/messages";
  if (accessToken != null) {
    document.getElementById('token').value = accessToken;
  }
  fetch(url, {
    method: "POST",
    mode: 'cors',
    body: new URLSearchParams(new FormData(document.getElementById("messageForm"))),
  })
  .then((response) => {
    if (!response.ok) {
      throw new Error(response.error)
    }
    return response.text();
  })
  .then(data => {
    messages = JSON.parse(data)
    document.getElementById('messages').value = messages.join('\n');
  })
  .catch(function(error) {
    document.getElementById('messages').value = error;
  });
}

Let’s see what this JavaScript does. It declares a variable that will hold the access token. It then creates an OktaSignIn object. Replace {yourOktaDomain} and {yourClientId} with the values from the Okta CLI.

The renderEl() function displays the login form and performs the authentication process. On successful login, the access token is extracted from the response and saved. The login form is then hidden.

The onmessage() function is called when the user hits the submit button on the form. It stores the access token in the hidden input on the form and then makes a POST request to the backend server. It writes the response from the server into the text area.

Build a Web Server in Python

Now you are going to build a web server in Python to serve the static content. A web server is required because some of the JavaScript will not work if you simply load the page into a browser.

You will make the server a Python package, which is simply a directory (in this case called server) containing Python code. Python packages require a file called __init__.py. This is run when the package is loaded and is often just an empty file.

mkdir server
touch server/__init__.py

Next, create a file called server/FileHandler.py containing the following Python code:

from tornado.web import StaticFileHandler

class FileHandler(StaticFileHandler):
    def initialize(self, path):
        self.absolute_path = False
        super(FileHandler, self).initialize(path)

It uses the Python Tornado framework and implements a static file handler that serves any files from the directory specified in the path constructor parameter.

Next, create a file called server/__main__.py containing the following Python code:

import signal
import sys

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import define, options
from tornado.web import Application, RequestHandler

from server.FileHandler import FileHandler

define("port", default=8080, help="Listener port")
options.parse_command_line()

application = Application([
    ('/()$', FileHandler, {'path': "client/index.html"}),
    ('/(.*)', FileHandler, {'path': "client"}),
])

http_server = HTTPServer(application)
http_server.listen(options.port)
print("Listening on port", options.port)

try:
    IOLoop.current().start()
except KeyboardInterrupt:
    print("Exiting")
    IOLoop.current().stop()

This first looks for a command-line parameter called --port to obtain the port number to listen on, which defaults to 8080.

An Application object is created, which implements a Tornado web server. The application is constructed with a list of tuples. Each tuple has two or more values. The first value is a URI pattern, which can be a regular expression; any URI component in parentheses is captured as a path parameter and passed to the handler methods. The second value is a Python class that handles requests for matching URIs; a new instance of the class is created for each request. The optional third value is a dictionary of arguments passed to the handler's initialize() method. In this case, it specifies the file or directory containing the static content.
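As an aside, here is a minimal, self-contained sketch (not part of the tutorial app; the handler and route are made up for illustration) showing how a captured group and an initialize() argument reach a Tornado handler:

# Illustrative only: not part of the tutorial's server package.
from tornado.web import Application, RequestHandler

class GreetingHandler(RequestHandler):
    def initialize(self, greeting):
        # 'greeting' comes from the dict in the route tuple
        self.greeting = greeting

    def get(self, name):
        # 'name' is the value captured by the (.*) group in the URI
        self.write(f"{self.greeting}, {name}!")

app = Application([
    (r'/hello/(.*)', GreetingHandler, {'greeting': 'Hello'}),
])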

Finally, the server is created and started.

Now, the front end can be tested by starting the server and pointing a web browser at http://localhost:8080.

python -m server --port=8080

You should be able to log in using your Okta credentials. You will not be able to send a message at this stage as there is currently no backend.

Create an AWS Lambda Function in Python

You need to create a basic AWS Lambda function. You can use the SAM CLI to build, run, and deploy the application. Lambda functions can be built and run locally in a Docker container which emulates the AWS environment.

First of all, create a directory called auth-app, and create a Python package called messages inside it:

mkdir -p auth-app/messages
touch auth-app/messages/__init__.py

Next, create a file called auth-app/messages/requirements.txt containing a list of packages to be loaded by pip:

jwt
requests
tornado

Next, create a simple Lambda function. Create a file called auth-app/messages/messages.py containing the following Python code:

def message(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello World!'
    }

The function has two parameters, which are both dictionaries. You will be using the event map later to extract request parameters. The function has to return a dictionary containing the HTTP response code and the response body.
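To make the event parameter more concrete, here is a rough sketch of the kind of API Gateway proxy event the handler receives and how the form body can be read from it. The field values below are made up, and a real event carries many more keys:

# Hypothetical, trimmed-down API Gateway proxy event for illustration only.
example_event = {
    'httpMethod': 'POST',
    'path': '/api/messages',
    'headers': {'content-type': 'application/x-www-form-urlencoded'},
    'body': 'message=hello&token=eyJraWQiOi...'
}

def message(event, context):
    # The URL-encoded form data arrives as a single string in event['body']
    body = event.get('body') or ''
    return {'statusCode': 200, 'body': 'Received: ' + body}

print(message(example_event, None))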

Next, you need to create the deployment template file auth-app/template.yaml:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  auth-app
  Sample SAM Template for auth-app

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 10

Parameters:
  OktaKeys:
    Type: String

Resources:
  MessagesFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      Environment:
        Variables:
          OKTA_KEYS: !Ref OktaKeys
      CodeUri: messages/
      Handler: messages.message
      Runtime: python3.7
      Events:
        Messages:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /api/messages
            Method: post

The globals section defines a timeout. This is the maximum time the function is allowed to take to handle a request.

The resources section defines one or more Lambda functions. It defines any environment variables that will be passed to the function. The CodeUri defines the Python package containing the function. The Handler defines the Python function to call. The Runtime defines the language and version of the executable environment, in this case, Python 3.7. The events define the API, the path is the request URI, and the method is the HTTP request method.

Next, build the application in a Docker container. The first time this command is executed, SAM will pull a Docker image. This can take some time to download.

cd auth-app
sam build --use-container

The build creates a directory called auth-app/.aws-sam. This should be added to your .gitignore file.

Now, you can run the application locally:

sam local start-api

Then test it using curl:

curl -i -X POST http://localhost:3000/api/messages

Validate a JWT Offline in a Python Lambda Function

Offline JWT validation requires a public key. Authentication providers, such as Okta, provide a URL that returns a public key set (JWKS). The key set is a JSON array. We are going to base64 encode the JSON array to make it more manageable. Issue the following command to get the base64 encoded keys:

curl https://${yourOktaDomain}/oauth2/default/v1/keys | base64
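If you are curious what the keys endpoint returns before encoding, a JWKS is a JSON object with a keys array of JSON Web Keys. The snippet below is an illustrative, made-up example (real modulus values are far longer), written as a Python dictionary for readability:

# Illustrative only: a trimmed, made-up JWKS for an RSA signing key.
example_jwks = {
    "keys": [
        {
            "kty": "RSA",
            "alg": "RS256",
            "use": "sig",
            "kid": "IuuZDm4E89YH9SRhx-mmi",
            "e": "AQAB",
            "n": "jXkVv3...base64url-encoded-modulus...",
        }
    ]
}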

Next, create a file called env in the auth-app directory that overrides environment variables in the template file:

{ "MessagesFunction" : { "OKTA_KEYS": "base64 string from key provider" } }

Next, you are going to extract the public key from the key set. There can be multiple keys, but I will assume that there is only one, which is often the case. Add the following Python code to auth-app/messages/messages.py:

import base64
import json
import os

from jwt import (JWT, jwk_from_dict)
from jwt.exceptions import JWTDecodeError

instance = JWT()   # JWT helper used by verify() below
public_key = None
messages = []      # in-memory list of messages returned by message() below

def get_keys():
    # OKTA_KEYS holds the base64-encoded JWKS; decode it and keep the public key
    global public_key
    keys = base64.b64decode(os.environ['OKTA_KEYS'])
    jwks = json.loads(keys)
    for jwk in jwks['keys']:
        public_key = jwk_from_dict(jwk)

get_keys()

NOTE: This post uses local validation of JWTs rather than using the introspect endpoint to validate them remotely. This is done for efficiency.

The function gets called when the file is loaded. It extracts the JWKS from the environment variable and does a base64 decode to get the JSON string. This is then turned into a Python dictionary. It then calls jwk_from_dict(), which extracts the public key.
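If your key set ever contains more than one key, you can select the right one by matching the kid in the token's header to the kid of each JWK. The helper below (pick_key is a hypothetical name, not part of the tutorial code) is a minimal sketch using only the standard library:

import base64
import json

def pick_key(token, jwks):
    # A JWT is three base64url segments; the first segment is the JSON header.
    header_b64 = token.split('.')[0]
    padded = header_b64 + '=' * (-len(header_b64) % 4)   # restore stripped padding
    header = json.loads(base64.urlsafe_b64decode(padded))
    # Return the JWK whose kid matches the token's kid, if any.
    for jwk in jwks['keys']:
        if jwk.get('kid') == header.get('kid'):
            return jwk
    return None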

Next, add a verify function, which validates the token using the public key:

def verify(token):
    result = {}
    try:
        decoded = instance.decode(token, public_key, False)
    except JWTDecodeError:
        result = { 'statusCode': 403, 'body': 'Forbidden '}
    return result

You also need a helper function to extract the URL encoded POST form data:

def get_post_data(body):
    postdata = {}
    for items in body.split('&'):
        values = items.split('=')
        postdata[values[0]] = values[1]
    return postdata
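Note that this simple helper does not URL-decode the values, so a message containing spaces or special characters will arrive still encoded. If that matters for your use case, one possible alternative (an assumption on my part, not part of the original post) is to use urllib.parse from the standard library:

from urllib.parse import parse_qs

def get_post_data(body):
    # parse_qs handles URL decoding and repeated fields; keep only the first value
    return {key: values[0] for key, values in parse_qs(body).items()}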

Finally, you need to modify the main message() function to do validation and return the messages or an error:

def message(event, context):
    body = get_post_data(event['body'])
    result = verify(body['token'])
    if not bool(result):
        messages.append(body['message'])
        result = {
            'statusCode': 200,
            'headers': {
                'Access-Control-Allow-Headers': 'Content-Type',
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Methods': 'OPTIONS,POST,GET',
            },
            'body': json.dumps(messages)
        }
    return result

NOTE: Notice that the response includes the CORS headers.

You now have a complete application. Build and start the AWS backend:

sam build --use-container
sam local start-api --env-vars env

Start the frontend webserver:

python -m server --port=8080

Now, point a web browser at http://localhost:8080. Type in a message and submit the form. You should get a 401 error message displayed. Log in using your Okta credentials. Now send another message. You should see a list of messages.

TIP: When working with complex web applications, always have the developer console open on the browser. It will save a lot of time diagnosing JavaScript and network errors.

Learn More About Python, JWTs, and AWS

You have now built an application that uses Okta authentication to obtain a JWT access token. A Python API validates the token using a public key before processing any requests.

You only did local deployment as a proof of concept. To deploy a Lambda function into the cloud use:

sam deploy --guided

This will prompt and guide you through the deployment process and give you the URL to the deployed function.

While writing this post, I experienced first-hand how confusing and overly complicated Amazon documentation can be. As you can see, the actual minimal code to make things work is quite simple.

There are some downsides. The functions have to be started when requests arrive. This can add latency. Also, you have no control over which instance of a function will handle a request. Typically each request will be handled by a different instance of a function. Any data which needs to be available across requests must be stored in cloud storage or a database.

Serverless applications are definitely the way forward. The beauty is that you can simply deploy a function into a cloud, and not have to create any server environment to host the function. The functions can be written in a number of programming languages including Go, Java, and Python.

The cloud replicates the functions depending on demand. They scale to zero, meaning that they use no resources, and hence incur no costs when not being used.

You can find the source code for this article on GitHub in the okta-aws-python-example repository.

If you enjoyed this post, you might like related ones on this blog.

Build and Secure an API in Python with FastAPI
Building a GitHub Secrets Scanner
The Definitive Guide to WSGI
Build a CRUD App with Python, Flask, and Angular
Build a Simple CRUD App with Python, Flask, and React

Follow us for more great content and updates from our team! You can find us on Twitter, Facebook, subscribe to our YouTube Channel or start the conversation below.

Sunday, 25. July 2021

KuppingerCole

Analyst Chat #86: Zero Trust Means Zero Blind Spots


The path toward a Zero Trust architecture to improve cybersecurity for modern enterprises in a hybrid IT landscape often seems overly complex and burdensome. Alexei Balaganski is this week's chat partner for Matthias and he draws attention to an often overlooked benefit of such an infrastructure. One key idea of Zero Trust is to actually reduce complexity and unnecessary effort and instead focus on what really needs to be protected.




Identosphere Identity Highlights

Identosphere 42 • Life the Universe and Decentralized ID • UK ID and Attributes Consult • Ethical Design of DID

The latest news and updates in decentralized identity! European Digital Identity, Blockchain ID, information and educational content surrounding the SSI sector
Welcome and Thanks to our Patrons!

Support Identosphere via Patreon — Get Exclusive Content!!

Read previous issues and Subscribe : https://newsletter.identosphere.net

Content Submissions \ Business Inquiries \ Help Offering : newsletter@identosphere.net

We created a Group on Tru.net

Upcoming

Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa • ♾️

1/2 Day IIW Virtual Event The Business of SSI • 8/4

Pravici- Verifiable Credentials — Bring students and employees back to school and work while respecting privacy • 8/18

Turing trustworthy digital identity conference • 9/13

IIW 33 • 10/12-14

Digital Trust World 2021 10/4-7 ‘the Conference for Authentication, Biometrics, Fraud & Security and Identity,’

Call for Solutions
GovTech Lab Luxembourg (@GovTechLab_LU)

The #GovTechLab launches a new #callforsolutions "Trust My Data"! Do you have an idea to #digitise and #secure the exchange of state certified #data? Find out more and apply for the #innovation partnership

Big Picture
Bright Story: Self Sovereign Identity Brightlands (EN)

In this May 2019 blog post, the benefits (The good) of SSI are illustrated with a range of examples, comparing SSI-based business transactions to their current, more cumbersome non-SSI-based equivalents.

In addition, examples are given of the disadvantages (The bad); how SSI technology can be misused by unscrupulous organizations and how a combination of technology, knowledge and legislation could mitigate this risk.

Finally, we give examples of the harmful side (The ugly); how SSI technology can be used by criminals and what countermeasures are possible.

The challenge for self-sovereign identity (SSI)

The reason why we have seen less uptake in SSI solutions is because the people behind these solutions fail to recognise the design principles that will be most important to its success. Instead, we see people focusing on technological nirvanas like blockchain or an over-emphasis on governance.

Public Sector
New Directions for Government in the Second Era of the Digital Age Kuppinger Cole

The Blockchain Research Institute™, in collaboration with the Washington DC based Chamber of Digital Commerce and other experts have produced a 120-page report on how the Biden-Harris administration could reimagine US technology strategy and policy—and take action to implement it.

Digital identity and attributes consultation Gov UK

Digital access to the attributes these documents contain can solve these issues. It can also have benefits such as improving inclusion. If you do not have a passport, perhaps another government service can validate your age. There are also opportunities for data minimisation by disclosing only that information which is required (for example, that you’re over 18), rather than full disclosure of your data, including your date of birth, name, or address.

Plans for governing body to make digital identities as trusted as passports Gov.UK

The consultation sets out how the government can build confidence in digital IDs so they have a similar status in law as physical proofs of identity that businesses and individuals already trust.

The consultation is open to any member of the public and closes on 13 September.

European Digital Identity
European Self-Sovereign Identity Consortium (ESSIC) esatus, Danube Tech, TNO

Our approach connects various identity networks. The interoperability between those identity networks on a technical and governance level will thrive, the exchange of identity data between the networks will be simplified and cross-border use cases will be enabled. Our focus is to provide a solution to these current challenges.

Where Stands the Sovereign Self? Kuppinger Cole

Doc Searls, Co-founder and board member of Customer Commons, and Director of ProjectVRM, is to deliver a keynote entitled Where Stands the Sovereign Self? at the European Identity and Cloud Conference 2021. [...] we asked Doc some questions about his planned presentation.

Lord Holmes discusses state of digital identity in the UK

The next iteration of the framework mentioned earlier is due to be published this summer and I look forward to that. It will be essential for that work to not only be underpinned by the twelve guiding principles but also to swiftly ‘sandbox’, stand up parallel proofs in specific sectors and proceed with pace.”

Healthcare "Member as API" - The Interoperability and Patient Access final rule and Verifiable Credentials

The Interoperability and Patient Access final rule (CMS-9115-F) delivers on the government's promise to put patients first, giving them access to their health information when they need it most and in a way they can best use it. As part of the MyHealthEData initiative, this final rule is focused on driving interoperability and patient access to health information by liberating patient data using CMS authority to regulate Medicare Advantage (MA), Medicaid, CHIP, and Qualified Health Plan (QHP) issuers on the Federally-facilitated Exchanges (FFEs).

Advances in health "must ensure self-sovereign identity" Healthcare Global

Meanwhile a new report has found that the majority of the British public is willing to embrace digital healthcare tools  such as apps and digital therapies prescribed by a trusted healthcare professional. 

Shaw adds: “The vital point to make is this: innovations in health technology must ensure self-sovereign identity.

Standards Work
DIF Grant #1: JWS Test Suite

DIF announces its first community microgrant, sponsored by Microsoft and rewarding the timely creation of a comprehensive test suite for detached-JWS signatures on Verifiable Credentials

How a combination of Federated identity and Verifiable Credentials can help with Customer onboarding Pranav Kirtani

Before we dive into how Federated systems like OIDC and SAML along with Verifiable Credentials (VC) can help improve customer onboarding to your application, let us first understand what are the current methods being used for onboarding.

Use Case Implementation Workstream usecaseCCI@covidcreds.groups.io

This is the Use Case Implementation Workstream of the COVID Credentials Initiative (CCI). This workstream identifies privacy-preserving verifiable credentials (VCs) that are most useful to the COVID-19 response and provides a forum and platform for those who are implementing COVID VCs to present their projects/solutions.

Company Updates
cheqd is launching a self-sovereign identity network on Cosmos this year

It's Verim in New clothes. Kaliya still doesn’t like this model. Requiring verifiers to pay issuers is really really privacy problematic.

We want to provide a common and public infrastructure easily accessible to anyone and any organisation that provides B2B and B2B2C payment rails between issuers, holders, and receivers of trusted data. [...] cheqd is not and does not plan on dictating a single payment model. Rather, our product vision is to enable each ecosystem to decide this on their own through Layer 1 vs Layer 2 mechanisms and customisable tokenomics.

MyData
What is the Me2B Respectful Tech Specification?

The Me2B Respectful Tech Specification is a sorely needed ethical and safety standard for the internet. It consists of a series of tests that objectively measure the behavior of a connected product or service. The Specification helps people (“Me-s”) understand how technology is treating them, and helps businesses (“B-s”) build technology that is safe and respectful for the people that use it.

Why all data governance needs to consider children’s rights Emmaday

Last month, UNICEF published a Manifesto on Good Data Governance for Children, an initiative that was the result of a year of collaboration between a working group of 17 experts, many of them affiliated with the Berkman Klein Center for Internet & Society and UNICEF.

Research Papers
Ethical Design of Digital Identity: Environmental Implications from the Self-Sovereign Identity Movement

In a world that is becoming more digital, it is relevant to find some guidelines for organizations to design digital identity more ethically. A universal identity system on the internet is still missing and there are no clear standards for organizations to design digital identity. With this research, knowledge and insights have been obtained to advance organizations to design digital identity more ethically. A contribution has been made by proposing the conditions to enable improvements for a more ethical design. 

Sovereignty, privacy, and ethics in blockchain‑based identity management systems

Self-sovereign identity (SSI) solutions implemented on the basis of blockchain technology are seen as alternatives to existing digital identification systems, or even as a foundation of standards for the new global infrastructures for identity management systems. It is argued that ‘self-sovereignty’ in this context can be understood as the concept of individual control over identity relevant private data, capacity to choose where such data is stored, and the ability to provide it to those who need to validate it. 

Blockchain, Self-Sovereign Identity and Digital Credentials: Promise Versus Praxis in Education

This article is primarily interested in the affordances of the technology as a public good for the education sector. It levers on the lead author’s perspective as a mediator between the blockchain and education sectors in Europe on high-profile blockchain in education projects to provide a snapshot of the challenges and workable solutions in the blockchain-enabled, European digital credentials sector.

Identity Not SSI
Police in Latin America are turning activists’ phones against them

Experts say that seized devices have become a trove of information for authorities cracking down on social movements and opposition leaders.

Calls for New FTC Rules to Limit Businesses’ Data Collection and Stop Data Abuse

“I want to sound a note of caution around approaches that are centered around user control. I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions … I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.

Huge data leak shatters the lie that the innocent need not fear surveillance

Few pause to think that their phones can be transformed into surveillance devices, with someone thousands of miles away silently extracting their messages, photos and location, activating their microphone to record them in real time.

Such are the capabilities of Pegasus, the spyware manufactured by NSO Group, the Israeli purveyor of weapons of mass surveillance.

NSO rejects this label. It insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”

10 assertions about the future of social

We can’t solve identity. There will never be a single identity that we use across the web. Instead, there may be open protocols that allow us to auth with different providers.

Blockchain + Identity
Blockchain: A Holochain Perspective

The architects of Holochain began with a basic question: what if everyone could actually just hold their own data and share it with the network as needed? If everyone could just host themselves rather than relying on mining nodes to do it? We could avoid all this massive replication, which would obviously be much more efficient. We would just need to do it in a way that still ensures data integrity. We would have to be completely confident that, as everyone represents their own data to the network, there is no way for people to misrepresent their data.

Self-Sovereign Identity (w/ Fabian Vogelsteller & Constantin Kogan)

Constantin Kogan joins Fabian Vogelsteller, Ethereum developer, LUKSO founder, creator of Mist browser, web3.js, Feindura (CMS), ERC20, and ERC-725 protocols, and author of Meteor.js. 

Magic Raises $27M to Future-Proof Authentication

Magic makes it plug and play for developers to add secure, passwordless login, like magic links and WebAuthn, to their applications. Users are no longer exposed to password-related risks from the very start.

Thanks for Reading!

Read more \ Subscribe https://newsletter.identosphere.net

Support this publication https://patreon.com/identosphere

Contact \ Submission: newsletter [at] identosphere [dot] net

Saturday, 24. July 2021

Caribou Digital

Digital Public Goods for Development

Photo by Olumide Bamgbelu on Unsplash

There is a growing focus on the idea of digital public goods (DPGs) largely driven by a recognition that the process of digital transformation needs to be influenced to achieve development outcomes. Endorsed by the UN Secretary General’s Roadmap for Digital Cooperation, the Digital Public Goods Alliance defines digital public goods as “open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals (SDGs).” DPGs are a growing part of discussions about how to ensure that digital transformation delivers on development outcomes and goals.

Caribou Digital convened a group of senior policymakers, donors, and practitioners to build on the important work being done to shape the concept of digital public goods, and how they can be advanced to meet the challenges presented by increasingly pervasive and rapid digital transformation. What follows is a collated and editorialised account of the major points discussed during the conversation in the form of four key takeaways and two concrete actions.

Digital transformation is neither good, nor bad, but never neutral.

Digitalisation is transforming the world faster than ever imagined; thus there is value in discussing the concept of DPGs. In the words of one participant, ‘we need to shape it in a way that is more acceptable to us, that serves the world, rather than a few corporations’. Focusing on DPGs is a response to that — and to the recognition that the private sector is not always equipped to deliver public sector outcomes and that the interests of the public and private sector are not always aligned. For example, at the 2018 Annual Meeting of the ID4Africa Movement, a poll of the delegates identified vendor lock-in as their main concern. This emphasis on challenges of public procurement of digital infrastructure has led to a focus on open source platforms for the public sector and public sector service delivery. This is reflected in the United Nations’ Secretary General’s Roadmap towards Digital Cooperation, the Digital Impact Alliance’s work on DPGs, and the Digital Public Goods Alliance (DPPGA).

Beyond DPGs as open-source platforms.

Discussion around DPGs is dominated by the use of open source platforms to address challenges such as vendor lock-in identified by those African governments from the survey. These solutions take the form of tools such as the open-source ID platform MOSIP or the interoperable financial infrastructure Mojaloop; the DPPGA’s database currently lists nearly 600 platforms that qualify as DPGs. However, critically, it’s not the platforms that define DPGs but rather the standards that determine whether a platform is a public good, such as data security, privacy, and legal compliance. But DPGs need to be viewed beyond open-source models. As participants in the discussion flagged, open-source technologies do not inherently serve the public good, as they are frequently used by the private sector to serve private interests rather than the public good, such as mobile operator MTN using the open-source financial service platform Mojaloop in Malawi. Ensuring that digital infrastructure serves the public good goes beyond questions around hardware and software to include questions of governance, security, and the relationship between the public and private sector. These issues of digital governance are critical: defining digital goals, standards, codes, and accountability mechanisms. The development sector has few shared policies, standards, and codes; the DPGs conversation should drive consistency in governance as much as in systems and platforms.

Context eats technology for breakfast.

While technologies — open source and otherwise — can help address critical issues and progress towards the SDGs, the particularities of context demand flexibility and responsiveness. Thus, an approach to DPGs that emphasises standards and governance is critical to ensuring that we can build frameworks that can help influence processes of digital transformation towards the public good. This approach must stress protection to mitigate the harms that we increasingly recognise can be amplified through digital transformation of public goods.

Expand beyond government to society.

The development community needs to expand its conception of public goods beyond the current focus on their role in government and state infrastructure, such as digital identity and financial rails. The understanding of public goods needs to be expanded to meet critical challenges faced by people in both the Global South and North: the challenges of work, such as disintermediating employment platforms (see our Platform Livelihoods work and others such as the Fairwork Foundation), and the erosion of a public sphere through collapsing media models (see our work on alternatives to ad-based business models and work such as the Public Media Stack). The DPPGA’s work on content, such as the Global Digital Library, is also helpful here.

Concrete steps to advance the role of Digital Public Goods in development Define our problems.

DPGs are a theme of our time, but they are clearly an evolving conversation. It’s critical that the development community starts from a definition of the core problems it seeks to address and articulates what they are trying to solve for, in order to provide guidance for how to best solve them. For example, we need to be precise about where the market failure is across the delivery value chain. Because if we don’t define our problem, we’ll waste effort and resources building things that are not as effective as they need to be.

Define standards for public goods.

To move beyond systems to focus on standards, governance, and regulations that can support the public good, we need to bring stakeholders together to agree on shared vision, values, and goals that can be supported by appropriate standards. For example, there is growing concern around the protection aspects of technologies used in cash transfers and digital identity, yet there are no shared standards across the development and humanitarian sector to guide approaches to these critical technologies. If we want to start talking about things like Open Banking, protection must be right at the core of the conversation.

This conversation on public goods brought together policy makers and practitioners from across a wide variety of contexts who spoke frankly and freely about their approach to and work on DPGs. These conversations, like the ones we remember from post-conference and seminar coffee, drinks, and dinner, are the conversations that our series of diagnostic discussions facilitates.

Participants:

Chris Locke : Caribou Digital

Jonathan Wong : UNESCAP

Anand Varghese : DAI

Chris Burns : USAID

Emily Middleton : Public Digital

CV Madhukar : Omidyar

Ben Ramalingam : Humanitarian Innovation Hub

Greta Bull : CGAP

Kari Jacobsen : NORAD

Tomicah Tilleman : New America Foundation

Jason Munyan : UN Office of Special Envoy for Tech

Emrys Schoemaker : Caribou Digital

Digital Public Goods for Development was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Matterium


The Future of Stuff Podcast Ep. 4: Ending Modern Slavery with Technology & Social Movements, with Helen Burrows

How can we effectively combat slavery in global trade when policy and governments fall short? Human rights lawyer and activist Helen Burrows joins the show to discuss how corporations and consumers can bring an end to slavery and labor abuses through better technology and social awareness.

Episode Notes:

On this episode, I'm joined by Helen Burrows, a human rights lawyer turned activist. In recent years she's focused on combating slavery in the supply chains that give us the clothes on our backs, the phones in our pockets, and so much more.

We discuss the scale of this dark facet of material culture, how it flourished with trade globalization, how governments have failed to do anything about it, and how a combination of technological innovation and social action between corporations and consumers can create a better path forward. There were some technical difficulties during recording, so please forgive the audio quality. Nevertheless, I hope this conversation illuminates your understanding of the real costs of our stuff and how we can make it better, for everyone.

All music courtesy of Zoe Keating. Track: “Optimist”

Episode Outline

Introduction [00:01:09]
Our Slavery Footprint [00:02:41]
Corporate versus Social Action | Satyagraha and Holding Firmly to Truth [00:04:13]
Breaking Open Corporate Silos with Blockchain [00:08:45]
Removing the Veil of Ignorance [00:12:05]
Hacking the Profit Mindset for Social Good [00:19:44]
Mattereum: Satyagraha-as-a-Service [00:21:42]
Understanding as a Prelude to Change [00:23:29]
Automated Morality [00:28:25]
The Flaws of Trade Globalization [00:33:05]
On Helen's Work with Mattereum [00:35:45]

Introduction [00:01:09]

Garrison: All right. Let’s just get into it. Cool. Would you like to introduce yourself to the listeners?

Helen: My name is Helen Burrows. I am a lawyer by training, but for the past 20 years I've worked in legal and judicial reform with around 50 countries around the world, on anything from access to justice to anti-corruption, human rights, and human trafficking.

In the last five or so years, I've deepened my interest and work in the anti-slavery field. Having worked on that and related areas for a lot of years in lots of countries, I've seen how completely ineffective governments have been at doing anything to eliminate this, which currently impacts around 40 million people around the world. And I've realized that it's companies and consumers that are the answer to this issue, I think, and that we can see more traction in less time by focusing in that direction, rather than relying on governments to get the law and its enforcement right.

So my interest has been in: how do we decrease the opacity of supply chains and make them more transparent, so we know where our stuff comes from, how it's made, and what harm or not it may have done along the way to our doorstep.

Our Slavery Footprint [00:02:41]

Garrison: Anti-slavery work, on the supply chain front is something I’ve been aware of for a few years now. And you hear a bunch of stories. You hear a bunch of pilot programs from basically every supply chain company under the sun, but you never really see any sort of a huge fundamental shift in how people do things.

Before we get into the weeds, I want to relay something, and something I would suggest listeners do: there's a website called slaveryfootprint.org. It has information on the subject in terms of its scope, and it also has a survey that you can take. It'll ask you about your diet and the overall composition of your household: how many cars you have, how many bedrooms or home offices, your wardrobe composition, whether or not you have leather shoes, electronics, and it gives you a projection of roughly the number of slaves that are working for you to secure your lifestyle.

When I took the survey, it projected 26. Instantly in my head, I imagined what about like my neighbors and then everyone else in the town and then in the state, in the country. And then I just imagine that number just like just rocketing up. It is quite striking.

Corporate versus Social Action | Satyagraha and Holding Firmly to Truth [00:04:13]

Garrison: This is something I planned on talking about later in the conversation, but I liked that you mentioned it right off the bat: that you've lost hope, unfortunately, that government would enact the necessary policy to get more transparency out of the corporate supply chain, have it be a mandated thing, an industrial standard, and that you think it's going to come down to consumers and awareness and a social movement from the consumer side. I think that difference between corporate action and the action of people as a whole is a really interesting thing, because it's very different. And it's very difficult.

There are instances of this happening in the past very effectively.

There’s a concept I like called Satyagraha, which was Gandhi’s driving philosophy in terms of how he conducted his social movements. And it was a direct influence on the civil rights movement with Martin Luther King and also with Nelson Mandela, fighting apartheid in South Africa.

And what Satyagraha means is "holding firmly to truth," or an insistence on truth. You show institutions of power what they're actually doing to people, and then it's on them. You know this without a doubt. It's irrefutable. What are you going to do? And it can be remarkably effective. So I wonder, do you think that would be possible now, that kind of very thorough systemic change?

Helen: I think we’re sitting on the cusp of it. And I dunno whether that’s a kind of wince answer but I don’t think we’re there yet. I think that the movements that you’ve mentioned, this is a little bit different because it’s often been the oppressed rising up and not taking any shit anymore. Whereas slavery by its nature is hidden. Slaves cannot get out in the streets and voice their concern about the status quo and demand emancipation. That’s not how the mechanics of this industry functions. So we’re not looking at the same type of movement, but the same type of energy is absolutely required.

The same type of commitment is required, because obviously it wasn't just Martin Luther King and his people. The movement was massive and included people from everywhere, of every color. So it's not just that, but the nucleus of those movements is different to the nucleus of this movement. And I don't think consumers, society, are there in enough numbers yet to produce any kind of tipping point to transformation. So I think it's also going to take the convergence of industry saying, "We don't want to do any harm. We're trying our best not to, but we actually don't know the harm that we're doing and we don't know a better way of doing business."

Holding onto the truth, holding firm to the truth: what we want to do is provide that truth, because at the moment the truth is, you know, a little bit colored in here and there, but there are plenty of gaps that we don't know about. So it's the democratization of truth, actually, to be able to see it on a broader scale.

So companies can say, "We're not done with that. We want to be sustainable environmentally and with regard to social and labor standards. And we're not okay with this, so we're going to change how we operate," and be upfront about that. And consumers can say, "Now that I know where my product comes from and how it's made at every point along the way, I can make an informed choice about whether or not I want to buy that thing."

So I think the movement is coming, but we have to preempt the movement by providing the foundations for this democratization of knowledge, and therefore truth, to put more fuel underneath the energy that exists. But I mean, we have seen different movements. The vegan movement, for example, is fundamentally different from the civil rights movement.

That’s one that’s done great things. So it’s not like it’s the only type of movement that can succeed absolutely, but it depends a lot on the commitment and the energy of the people at the nucleus. So yeah, I think that we’re not there yet. We’re on the cusp of it

Breaking Open Corporate Silos with Blockchain [00:08:45]

Garrison: On this aspect of putting the onus on industry to adopt these standards and really put an effort into doing this: you're interested in the application of blockchain technology here. Would you care to go into that in more detail?

Helen: Ah, so I think the benefits of blockchain technology offer themselves in a couple of different realms in this space. One is, I think, philosophically again, the sort of democratization of information and access to it, and the concept of the disintermediation of information and the breaking down of silos, because that's been a huge problem across the board. As it relates to slavery, a company will keep its information about its supply chain, and what it might have found in terms of labor abuses, really secret, of course. And then this government department keeps its information secret, and the NGOs don't have outlets to communicate what they know, and workers themselves don't have outlets to communicate. So breaking down all of those kinds of barriers, and I'm not talking about blockchain technology itself, but the philosophy behind removing silos, is something that is uniquely innovative about the approach. And in terms of the technology itself: the immutability of information. Obviously, if you put junk in, you get junk out, as we always say, so it's a case of ensuring that we have adequately robust sources of triangulated, checked, verifiable data going in. And as long as that process can be assured, then what ends up on the blockchain cannot be changed, obviously. And that's a radical departure from what we've ever been able to do before in terms of the collection and presentation of information about this side of how things work.

Garrison: Yeah. In some sense it's, yeah, a clever hack for truth, or I guess consensus is a better word. It's not capital-T truth in this sense. But one thing that I've noticed, being active in this space for about four years now, is there have been so many supply chain blockchain projects. So many of them. Entire consortiums have been formed and they do these track and trace pilot programs, but they don't go anywhere.

Helen: No, they don’t because track and trace is so two-dimensional. It just tells you that this flask that I’m holding, that nobody can see: where it originated and then it was tagged and then we can trace it as it travels all over the world. And so it ends up being recycled I thought. It doesn’t tell me anything about the labor conditions of the people who manufactured the aluminium. It doesn’t tell me where the paint was made and who painted it and how noxious it may be. It doesn’t tell me where the plastic came from. It doesn’t tell us anything that we need to know about this product in order to make ethical choices about whether I want to buy it.

Removing the Veil of Ignorance [00:12:05]

Garrison: In some of the writing I've been doing for Mattereum, I describe it as a zero-history problem. This concept of zero history in our stuff is really problematic, because, like you said, we don't really know where it comes from. We don't know about the labor conditions that brought it into the world. And it's part of this larger systemic problem of managing production and consumption in society. That cycle is so bloated. It's an ever-hungry machine that keeps churning.

Helen: Exactly, it wasn’t that long ago that there were four seasons in the fashion year. Now there’s 52. So, you know, just the exponential increase of what we’re told we have to buy. And I’m just using fashion as one example of how it is across the board. We have to have this schedule and we have to have that, this will make our lives easier, quicker, faster, whatever. And we get caught up in it certainly.

It’s really in our faces. It’s a gluttonous consumer and level of consumerism that we don’t even realize that’s what we’re a party to fueling. We ripped off too. There’s lots of products out there that will have lovely green labels on them and tell us that um their cotton is organic, but it doesn’t tell us whether that cotton was picked by child slaves in Kazahkstan. The cotton’s organic so that’s all right. Or, it, it might say that this tuna was caught without a net or these prawns were sustainably farmed, but it doesn’t tell you that the guys on the boat doing the fishing have been on vessels for the last 25 years, passed on the high sea from boat to boat, completely enslaved, and their only chance of escape is to get sick and die and be thrown over the board. It doesn’t tell you anything like that. So as consumers, we don’t even know what we need to know. Because we’re not being told, we’re being told the cotton’s organic, there were no nets involved in his fishing and we think done deal, that’s it.

And we’ve been greenwashed and whitewashed. And we don’t look underneath the curtain. We’re not interested in looking underneath the hood enough to know more, to find out more, but I don’t really put the faults of that at the feet of consumers necessarily because we do rely on authority, including governments to do the right thing.

Even in the UK, where arguably they've got one of the strongest pieces of anti-slavery legislation in the world, the level of dilution that was required in order to get that piece of law over the line meant that all companies have to do is pop the hood and have a quick look to see if anything looks out of place. And if nothing does, then that's fine. There's no obligation to delve right down to the bottom, or up to the top, of your supply chain and tell us exactly what's going on in micro detail. There's nothing like that. So, you know, we rely on government to tell us what the standard is, what the requirements ought to be, and what we need to know, but they're not doing that job because of the various divergent pressures that pull them in a different direction. Then we get this sort of green- and whitewashing layered on top of that, inside this grotesquely huge consumerist society that we are. And it's just a recipe for continued disaster.

Garrison: And the culpability here is multi-dimensional. At a fundamental level, there's also a part of it where our desire, our capacity for desire, what we desire, is being manipulated. And there's a self-awareness on the part of corporations that have multimillion-dollar marketing budgets to sell us things, or that use algorithms to get us to market things to ourselves in a weird feedback loop. The idea is: oh, people want things that are green, they want things that are sustainably sourced, let's give it to them. Even though there have been plenty of cases where that was actually completely false. It's fancy packaging that looks organic and earthy, but there's still, in fact, horrible stuff happening.

Helen: Often they don’t know. I remember having a conversation with the person who was at the time the head of sustainability at McDonald’s and of 80,000 supply chains that they had to keep an eye on. They said that their their due diligence was really comprehensive and robust. And in the last 12 months they had found one case of labor exploitation among 80,000 supply chains across multiple countries. You know, If you only find one case of exploitation in that many supply chains and loads of countries, your system doesn’t work. That’s the only explanation.

So companies often don’t know. And companies don’t have access at the moment to the capabilities that they need to get right down there in supply chains. If you think a mobile phone, how many components are in a mobile phone, trying to trace all of those diffuse supply chains around to their source, and then you get to the cobalt. And you’re like, we got as far as the smelter and then we were screwed. We didn’t know where it came from. There’s lots of really logistical, legitimate, as well as illegitimate reasons why companies don’t actually know, but instead of facing up to that and saying we’ve got gaps here in our knowledge, they just stick a label on and say, “Our cotton’s organic.”

And we go, oh, great. Organic cotton. I can sleep at night now that I bought that organic cotton and didn't buy that rubbish pesticide-laden one. Or, 10% of the profits from this company go to help women rescued from trafficking and prostitution in India. Great. I can sleep well. We rely on that one box tick, whatever their marketing people choose that box to be.

I think it’s the whole story. But again, we don’t know what we don’t know. And it’s getting out there, what we do need to know about these products. And in the environmental space, that’s incredibly complicated because it vastly depends on the product and its processes. Consumables in the anti-slavery space are a little easier in some respects, given the standards sort of universal of how you should treat people who work for you.

And you know what international law says about all of that. But it's not easy in terms of accessing the information, because this is a $200-billion-a-year industry. There's a lot invested in the status quo, and it will take a lot to deconstruct all of that. So there's a lot invested in making sure that people don't know what actually happens in various parts of various supply chains, because it is terrible.

So it’s not an easy thing to try and deal with. We need to start and lots of project, as you said, with the track and trace work is there’s just lots of anti-slavery and environmental protection work going on globally. As of course, we all know, but what’s missing is that straight line truth. That complete knowledge, or where there isn’t complete knowledge, the courage to say these are gaps and who can fill them for us. We’d like to know, and then whatever the result is, we’ll work with it and we’ll do what we can to make it better. And, you know, with that level of commitment and consumers behind them, then that’s what could produce this change.

Hacking the Profit Mindset for Social Good [00:19:44]

Garrison: Yeah, absolutely. I also wonder: corporations operate under certain incentives. Profits are what they optimize for. So I wonder if there's a way to almost Trojan-horse this insistence on truth at the systemic level by saying, "You'll have all these incredible efficiency gains; it'll actually be more valuable for you to do this."

Helen: Yeah. Yeah. There’s a number of ways that this is financially at a level of value attractive to companies. Obviously we’re not going to go and start by trying to twist the arms of companies. You’d start with the companies that are really making, have made great inroads and demonstrated great levels of investment and commitment. You’d start there cause they’re the ones that want to do the right thing, but yet there’s massive efficiencies to be gained.

Look at all of the money that's wasted in supply chains because of time wastage and lags. So much goes wrong in the supply chain that could be massively tightened up to create huge time and cost efficiencies, as well as anything else. And the flip side of it is that, what is it, about 98% of a company's value is its reputation? And when companies like Zara get called out because machinists have sewn notes into the hems and pockets of coats saying, "Please help us. We haven't been paid," that wipes a lot off the slate in terms of that company's value. And a lot of companies get called out. Obviously there's a risk analysis to be done about whether you allow that risk to be on the table. That has the capacity to wipe a lot of value off of companies.

So for companies wanting to manage that risk, and also to increase cost and time efficiencies in how they do business, then absolutely, having a comprehensive line of sight down their supply chain is a very smart thing to do.

Mattereum: Satyagraha-as-a-Service [00:21:42]

Garrison: Absolutely. This podcast is presented by Mattereum, so for listeners who may not be aware of Mattereum but are probably curious, here's how we approach this kind of challenge.

One of the core inventions within the project is the Asset Passport. Just as people have passports in order to travel freely internationally and cross borders, it's a digital identity for an object instead. This identity, this passport, has very detailed information about the object: where it came from, what it is, its weight, whether it's vaulted somewhere, and any other manufacturing or production data that's necessary to have a very firm grasp on identifying this item. It also has legal guarantees baked in, so that there's access to recourse if there's a commercial fallout: arbitration and dispute resolution and so on, all baked into an object that's attached to this global marketplace by default.

We could have these digital identities created within supply chains at the point of production. So as soon as something is built, before it goes into the vast machinery of commerce, there's a provenance record, making sure there's no zero history there: from the raw material to the finished product, there's a history that you can follow.
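For readers who want a concrete picture, here is a minimal sketch of what an Asset Passport record could look like as a data structure. It is based only on the description above; the field names and layout are illustrative assumptions, not Mattereum's actual schema.

```python
# Illustrative sketch only: hypothetical fields inspired by the Asset Passport description above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProvenanceEvent:
    stage: str          # e.g. "raw material", "smelting", "assembly"
    location: str
    timestamp: str      # ISO 8601 date of the event
    attestation: str    # who certified this step and on what evidence

@dataclass
class AssetPassport:
    asset_id: str                       # stable identifier for the physical object
    description: str
    weight_grams: Optional[float] = None
    vault_location: Optional[str] = None
    provenance: List[ProvenanceEvent] = field(default_factory=list)   # history from raw material to finished product
    legal_warranties: List[str] = field(default_factory=list)         # recourse, arbitration, dispute-resolution terms

# Example record for the flask mentioned earlier in the conversation.
passport = AssetPassport(
    asset_id="flask-0001",
    description="Aluminium flask",
    weight_grams=310.0,
    provenance=[ProvenanceEvent("raw material", "bauxite mine, site A", "2021-01-10", "third-party audit")],
    legal_warranties=["disputes resolved by arbitration under the marketplace rules"],
)
print(passport.asset_id, len(passport.provenance), "provenance event(s) recorded")
```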

Vinay described it recently in a way that I like: he described it as Satyagraha-as-a-service. You're seeing the truth of the objects that you produce as a company, or that you consume or buy as a consumer.

Understanding as a Prelude to Change [00:23:29]

Garrison: There was a book that I wasn't able to read the whole of before the call, called Blood and Earth (by Kevin Bales).

He gives a really clear description of seeing this thing: actually going to the source, where they're mining minerals that end up going into cell phones, and seeing it in person. And it's just there. These operations are simply operating in parts of the world without a care in the world. But one thing he said that I think is the key to this is that once you know this, once you're aware of this, you can't forget it. You can't ignore it. Or at least, you can daydream all you want, but once the facts are presented to you, you have to do something. And I think that's the idea: if people knew without a doubt that there were literally millions of slaves existing in the world today. We think of slavery as a bygone era, this dark past, but it's happening in either similar or different ways. It's obscured, you know, in the giant machinery of trade.

I think one of the challenges with driving awareness of this and trying to get traction on the social front is that, with social media and the way the media landscape is nowadays, we're just constantly bombarded with information. Everything is happening all at once. Being connected as we are is both a beautiful and a terrifying thing, depending on how that manifests. So do you think there's a concern about effectively reaching the necessary number of people, building social awareness to the extent that actual change could be implemented?

Helen: I don’t think access with people will be the problem. It will be the approach that needs to be different from any approach that’s been used before. Because the approach that we often see is the picture of poverty, the physical child, woman, in a distressing situation. And that’s meant to tug on our heart strings adequately for us to take action.

All it does is make us feel bad and disconnect us, because we don't know that person, who is one of 40 million, which is too big for us to contemplate in terms of how can I make a difference to 40 million people. We have to connect it, human to human, and have it resonate between your life and their life. My life and their life. We have to make the connection. Also, most people don't know that 80% of the stuff they buy has been touched somewhere along the line by a slave, at least one. They don't know that, but even if they did know, we would still say maybe this one was in the 20% that wasn't touched by a slave, and I can still sleep at night.

So we’re so disconnected from the reality, but it happens to someone, somewhere else, that I can’t influence because it’s too far, too disconnected from my reality. And just too big. We get overwhelmed and we disconnect. So the approach must be different in order to engage people. Because I think we now have the platforms where we can reach people with the right message.

We can do it very quickly and I think more effectively than before.

Garrison: Yeah. It’ll certainly be a challenge. You’re dealing with complexity on the business front, the technology side of things, the social dynamics. It’s a genuine mission. And it has to be compelled by a desire to see things become better for people and just reducing suffering in the world. That has to be the north star that people —

Helen: It does.

And I think that the vast majority of people are not bad. They don't want to do harm. And I believe that many businesses that do harm didn't mean to. The exigencies of business and the logistics of operation forced them down a road. Obviously they agreed to it, but maybe they didn't set out to; I don't believe that people set out to do harm. So I do believe that there are enough of us, both consumers and businesses, that do want to do the right thing, and with the right tools in our hands and the right information in our hands, we can. Now we have the capacity to be able to provide that, but I don't pretend that it's a quick fix.

This is generational attitudinal change that is required. But I do believe that we’re very close to having enough people on board as consumers and businesses, to be able to start making a dent.

Automated Morality [00:28:25]

Garrison: And I think showing people what's possible is really important, even as design artifacts, being able to show people: here's what this would be like in practice. It's really difficult for people to change how they do things, how they operate in their daily lives. But one example that Vinay gives is the concept of automated morality. What if you could configure, through some mobile app integrated with your payments or your payment card or whatever it is, something that's able to detect if there's a lack of transparency, or even actual documented evidence of bad labor conditions or slavery, and it tells you, it notifies you, before you make the purchase?

Helen: Yeah, exactly. We have the capacity now to predict when slavery may be more likely to occur. Imagine, for example, what happened a few years ago with the conflict in Syria: mass migration into certain parts of Turkey. That was very auditable. We already knew, for example, that there were some dodgy factories close to the border in Turkey. You've got desperate people fleeing across the border who need to earn money: bang.

I know that’s a very simplistic example, but still the fact remains that you can use the knowledge we have to predict when things are going to go south, when things are going to go wrong. And when things could result in human or environmental harm. And we can begin to do that so people could be warned like you say, when something actually happens or when the sort of geopolitical socioeconomic indicators start showing us red flags. But we can also automate morality by building in buttons into our Amazon whatever search is saying, do you want slave free tech? Do you want an environmental harm-free tech? And then you get your products filtered by those things as well. So there’s definitely ways of making it super easy for us as consumers to make the right choices. But obviously then what sits beneath that needs to be robust and comprehensive. It can’t be that the picked cotton was organic, but we know nothing about the 99% other parts of that product.

So we have to be really on point about everything that sits underneath the water of that iceberg.
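As a rough illustration of the "automated morality" idea discussed above, here is a minimal sketch of a pre-purchase screening check. The data shapes, field names, and rules are hypothetical assumptions for illustration, not an existing product or API.

```python
# Illustrative sketch: screen a prospective purchase against its provenance record before checkout.
from typing import Dict, List

def screen_purchase(provenance: List[Dict], required_stages: List[str]) -> str:
    """Warn if a product's provenance shows documented harm, or if it lacks
    coverage of the supply-chain stages the buyer cares about."""
    covered = {event["stage"] for event in provenance}
    flagged = [e for e in provenance if e.get("labor_risk") == "documented_harm"]
    if flagged:
        return "WARNING: documented labor abuse at stage(s): " + ", ".join(e["stage"] for e in flagged)
    missing = [s for s in required_stages if s not in covered]
    if missing:
        return "CAUTION: no information for stage(s): " + ", ".join(missing)
    return "OK: full provenance, no documented harm"

# Example: the cotton is fine, but assembly has a documented problem and dyeing is undocumented.
print(screen_purchase(
    provenance=[{"stage": "cotton picking", "labor_risk": "none"},
                {"stage": "garment assembly", "labor_risk": "documented_harm"}],
    required_stages=["cotton picking", "dyeing", "garment assembly"],
))
```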

Garrison: And that integration has a very clever side effect. If you did this and it affected people's willingness to purchase from a company because of the obscurity, or whatnot, at that point it's like: okay, well then prove it, and then you'll be a more popular brand.

Helen: Exactly. And I think a lot of the approaches that have been taken to date have been finger-pointing at companies, going: you're bad. Like Zara, when it all came out about notes sewn into coat pockets. You're bad. You're a terrible company. And so companies felt backed into a corner, not wanting to be forthcoming with information that they have about their supply chains, because they're scared of being ridiculed. And, you know, I totally understand that. Whereas we could come from the perspective of: we don't know exactly what's going on, but we want to find out, and when we do find out, we want to deal with it, so we're going to be upfront and open. And that's the real transformation of culture that is required in order for businesses to feel confident enough to do that, to air their dirty laundry.

Garrison: There’s like a consumer literacy there like just knowing and understanding things. Everything is so complex these days. Like no one understands how the global financial system works. It’s incredibly important. It drives how resources are allocated around the planet, but not even the top top professionals in global finance could tell you how exactly does it work because there’s so much automation. There’s so many automated tooling from like high-frequency trading systems and such where like it’s all these processes that are just happening often autonomously and that’s happening further. So the thing we need to do is to automate this insistence on truth and sustainability so that these kind of darker aspects of supply chains that exist now isn’t just automated and further kind of optimized to the flow of goods, as it is.

The Flaws of Trade Globalization [00:33:05]

Helen: Yeah. Yeah. I think that underneath all of this is a fundamental restructuring of global business, actually, and a ceasing of our current approach, which is: let's just go and source that from a country where we can get cheaper labor. That approach is adopted pretty much universally. We have to shift that culture, alter that approach, because otherwise we are going to go and seek out and exploit the most vulnerable, people already under socioeconomic pressure in whatever context they live in. If you add international commercial pressure on top of that, it becomes unavoidable.

It’s the culture of how we do business internationally, that also needs to be considered. That’s the bigger picture.

Garrison: And there’s definitely a connection between resilience, economic resilience and scale of commerce. Like globalization a few decades, like rapidly scaled commerce around the world. But because of that, because it had to fracture and spread into this complicated matrix of distributors and manufacturers. It became less centralized within a certain region or like a state in the United States, for example, it became this global machine and it’s very easy for things to slip through the cracks.

Helen: Exactly. And that’s the problem is the complexity is just in so many cases, seemingly impossible to uncover. Global trade liberalization was marketed to us as a really great thing for countries to make friends and be closer. You know, we never were told about well, this is the flip side of that, that we’re going to go to China where we can get labor for X or India, or this country in Africa where labor’s less so we’re going to make them work really hard for not very much money so we can bring in cheap stuff. We were given the good news story about global trade deals or international trade deals and the like.

A chain is only as strong as its weakest link. So even if you've got one corruptible, dodgy-feeling state involved, that is the only opportunity required to sour and taint that whole supply chain. And with the number of countries that we go to and use for the production cycle of various consumables, every single least developed country in the world is being flogged.

On Helen’s Work with Mattereum [00:35:45]

Garrison: I think we’ve covered quite a bit. Is there anything in particular that you’d want to cover? Oh would you like to inform listeners what you’re working now?

Helen: With Mattereum?

Garrison: Yeah.

Helen: So I am working with Mattereum to develop the approach that we will take to this, because what we want to do is add rails of data onto Asset Passports to complement the information that's currently provided in them, to inform people about the environmental and social provenance of particular products or particular articles. So we're developing the standards, the approach, how we will screen and test claims, so we're not going to be part of the greenwashing movement, so we can clearly distinguish ourselves from that and offer something that really is different and much better than what has come before, and working out what we need the technology to do for us in order to operationalize that.

Garrison: This is definitely one of the better undertakings that I think anyone can be doing now. It's a difficult subject to talk about, but I'm optimistic about the future.

Helen: Good. I think we have to be, otherwise reality becomes a bit —

Yeah. Yeah. It's good to be optimistic. I think we have reason to be optimistic.

The Future of Stuff Podcast Ep. was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 23. July 2021

Cognito

Why Manual Review for Authentication Is Outdated

Last year, many companies had to quickly adjust their customer authentication strategies to digital. Those with a system already in place were in a good spot. Those who didn’t had to catch up quickly — or fail. Manual authentication review is now outdated. Although manual verification was once completely necessary, we now have the tools […] The post Why Manual Review for Authentication Is Outdat

Last year, many companies had to quickly adjust their customer authentication strategies to digital. Those with a system already in place were in a good spot. Those who didn’t had to catch up quickly — or fail. Manual authentication review is now outdated. Although manual verification was once completely necessary, we now have the tools and customer verification solutions to do it remotely and...

Source


IDnow

How to ensure a player-friendly ID verification during sports tournaments

Economic experts believe that the EURO 2020 boosted the economy of participating countries and other European countries alike in ways including the expected bookmakers – but also food delivery services, small businesses, and pubs saw a surge in order intakes during this month. As an ID verification platform provider, our angle on the EURO 2020 […]

Economic experts believe that the EURO 2020 boosted the economy of participating countries and other European countries alike in ways including the expected bookmakers – but also food delivery services, small businesses, and pubs saw a surge in order intakes during this month.

As an ID verification platform provider, our angle on the EURO 2020 and its economic impact lay primarily on the gaming sector and its tremendous spike in player verification throughout the tournament.

Especially in the UK, the EURO 2020 enjoyed high viewing figures while the bets being placed skyrocketed:

ITV (UK TV channel) pulled in 27.6m viewers during England's 2-1 extra-time win on Wednesday the 7th of July, making it the most-watched football match ever shown on one channel in England. There were a reported 30 million pounds' worth of gamblers across some of the biggest gambling operators in the UK. Entain, which also owns brands such as Ladbrokes and Coral, reported that this was the biggest sports event yet, with an estimated 250 million pounds' worth of bets! Some operators in the industry reported having seen more players betting on the England-Italy final than on any other sports event during the tournament.

While keeping a lookout especially for our gaming customers in Germany and the United Kingdom, we watched closely to see exactly when the number of player verifications spiked and how to respond to it adequately with ID verification.

Encouraging players to verify themselves even before the EURO 2020

Operators predicted that they would see up to 80% of players signing up between June 10th and June 17th. For example, if 50,000 customers signed up, only a few of them would hit the deposit limit; operators would then reach out to players in the first week of July to verify them following background checks.

Well before the kick-off of the tournament, many operators and businesses were advertising and creating campaigns to have players verify themselves before the EURO 2020 kicked off, to improve the user experience.

Although the Euros kicked off officially on the 11th of June, we started seeing spikes in player verification from the first week of June. Driven by the events themselves, certain promotions were targeted at the beginning of the Euros for a better and more seamless customer experience.

Leveraging AutoIdent to enable a quick and user-friendly onboarding

Gambling operators improved the overall conversion rate for end-users by giving players both AutoIdent and VideoIdent options, taking into consideration the background checks that take place before ID verification and the browsers and devices players might use, in order to meet customer demand. However, for many of our operators, IDnow's automated solution AutoIdent was the recommended route for players to get themselves verified as quickly as possible.

When Germany was no longer part of the tournament (29th of June), we observed players going through the withdrawal stage sooner than expected. One-off bettors' volumes did spike during the period when German games were on; however, this volume has since dropped.

Cutting down the ‘30-day grace period’ because of Germany’s new gaming regulation

There were other curveballs waiting for gambling operators: as of the 1st of July, a new State Treaty on Gambling 2021 (Glücksspielstaatsvertrag, GlüStV) was introduced and activated in Germany, with an undeniable impact on the user experience.

Usually, gambling operators allow a grace period in which players can complete identity verification with ID documents within 30 days of registration. However, some operators had to reduce the 30-day grace period to 3 days for a short while to help them remain compliant with the new regulations that came into force.

What I observed in my work managing gambling clients at IDnow was that new players casually spending low amounts during the Euros allowed many operators to run a soft KYC check, so they could let players enjoy remote gambling and have fun responsibly, bearing in mind that deeper KYC verification needs to be done within a 3-30 day grace period. Operators were right to predict that the spikes in player verification would be higher in July, after the 1st of July regulations went live.

Fighting criminal activities and underage gambling with ID verification

IDnow was prepared for the EURO 2020 volume, and player verification spiked by 114% compared to the previous month. This was expected behaviour for many operators; however, we also found increased attempts from organised crime groups.

Organised crime groups often use gambling platforms to launder money. There were many steps IDnow had to think about to fight against these activities and protect our clients and operators from these groups.

The money earned and moved through gambling operators can have severe consequences, funding anything from embezzlement to trafficking. We are also observing an all-time high in gambling addiction among underage gamblers.

Studies shared by several UK newspapers have shown that in the past two years more than 50,000 children have become addicted to online gambling, and the nation's Gambling Commission found that 450,000 children aged 11 to 16 gamble regularly. This is why it was very important to understand that what we wanted to achieve went beyond enabling a great customer experience: we also had to protect minors and comply with the regulations set in these strict markets.

Identity verification has not always been mandatory. In the past 10 years, we have seen markets like the UK and Australia, and in the last couple of years the USA, becoming more mindful and rethinking how the remote gambling business is impacting youth and adults.

When IDnow picked up a higher number of fraudulent activities across several clients during the EURO 2020 tournament, IDnow fraud agents found patterns of what could be organised crime attacking specific organisations, for example through money mules (people recruited by criminals to receive money into their bank accounts).

In these cases, it was highly challenging to rely solely on machine learning and automated solutions to catch fraudulent IDs being processed. At IDnow we have a hybrid model for processing IDs: having humans review the automated decisions has helped increase confidence in verifying players and end-users and gets the verification methods as close to right as possible.
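To illustrate the hybrid (automated plus human) pattern described above, here is a minimal sketch of how automated results might be routed to manual review. The thresholds, field names, and flags are illustrative assumptions only, not IDnow's actual system.

```python
# Illustrative sketch: route automated ID-check results to auto-decision or human review.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AutoCheckResult:
    document_score: float            # 0.0-1.0 confidence from the automated document check
    fraud_flags: List[str] = field(default_factory=list)   # e.g. ["mule_pattern", "tampered_mrz"]

def route_verification(result: AutoCheckResult, approve_threshold: float = 0.95) -> str:
    if result.fraud_flags:
        return "manual_review"       # suspected organised-crime patterns always get a human agent
    if result.document_score >= approve_threshold:
        return "auto_approve"
    if result.document_score < 0.5:
        return "auto_reject"
    return "manual_review"           # uncertain middle band goes to a fraud agent

print(route_verification(AutoCheckResult(document_score=0.97)))                              # auto_approve
print(route_verification(AutoCheckResult(document_score=0.97, fraud_flags=["mule_pattern"])))  # manual_review
```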

Player verification behaviours

Some players had also been verified previously, during the Grand National, as a one-off fun activity; however, these players are not sports bettors who use online platforms to bet regularly. This is the same behaviour we expected at IDnow for EURO 2020: players who participated during the tournament will not continue to use these online platforms after it ends.

Even though there was a spike during the Euros, and we now observe a drop in verifications back to a normal level of betting, we are likely to see player verification increase again in August as the domestic leagues start up.

Can your company handle player spikes during sports tournaments? Find out more in our Fact Sheet.

By

Razia Ali
Senior Account Manager Global Gambling
Connect with Razia on LinkedIn


Infocert (IT)

InfoCert's international growth continues. The new Romanian portal is available from today

Since June 2020, InfoCert has had a commercial presence in Romania to develop the local market, which is considered to have high potential for digitization. The decision is part of InfoCert's strategic international expansion plan, in particular in the Eastern European area, where the company is already active in countries such as Slovenia, Croatia, Poland and Hungary. Today, in line with […]

Since June 2020, InfoCert has had a commercial presence in Romania to develop the local market, which is considered to have high potential for digitization. The decision is part of InfoCert's strategic international expansion plan, in particular in the Eastern European area, where the company is already active in countries such as Slovenia, Croatia, Poland and Hungary.

Today, in line with this strategy, we are pleased to announce the publication on our website infocert.digital of a translated section dedicated to the Romanian market, which can be browsed at this link:

InfoCert RO

The Romanian digital market is one of the most dynamic and fastest growing in the Eastern Europe area, and InfoCert covers it with its solutions, TOP (Trusted Onboarding Platform) and GoSign.

Through the dedicated contact form, Romanian companies can also sign up for the InfoCert Partner Program, which offers important advantages to those who want to integrate InfoCert solutions with their own services or provide them to their customers.

The post InfoCert's international growth continues. The new Romanian portal is available from today appeared first on InfoCert.


Infocert

InfoCert, continues its growth at international level. From today available the new Romanian website

Since June 2020 InfoCert has launched a commercial presence in Romania for the development of the local market, which is considered to have high potential for digitization. The decision is part of InfoCert’s strategic international expansion plan and, in particular, in the Eastern European area where the company is already active in countries such as […] The post InfoCert, continues its growth a

Since June 2020, InfoCert has had a commercial presence in Romania to develop the local market, which is considered to have high potential for digitization. The decision is part of InfoCert's strategic international expansion plan, in particular in the Eastern European area, where the company is already active in countries such as Slovenia, Croatia, Poland and Hungary.

Today, in line with this strategy, we are pleased to announce the publication on our website infocert.digital of a section dedicated to the Romanian market:

InfoCert RO

The Romanian digital market is one of the most dynamic and fastest growing in the Eastern Europe area, and InfoCert covers it with its solutions, TOP (Trusted Onboarding Platform) and GoSign.

Through the dedicated contact form, Romanian companies can also subscribe to the InfoCert Partner Program, which offers important advantages to those who want to integrate InfoCert solutions with their own services or provide them to their customers.

The post InfoCert, continues its growth at international level. From today available the new Romanian website appeared first on InfoCert.


Find out the e-signature legally recognized by your country

There are different types of electronic signatures (e-signature, advanced e-signature, qualified e-signature), each of which may have a different legal validity depending on the country in which it is used. Thanks to the research conducted by our Business Compliance experts, on the page "GoSign international coverage", InfoCert offers GoSign users the possibility to easily discover […] The post

There are different types of electronic signatures (e-signature, advanced e-signature, qualified e-signature), each of which may have a different legal validity depending on the country in which it is used.

Thanks to the research conducted by our Business Compliance experts, on the "GoSign international coverage" page, InfoCert offers GoSign users the possibility to easily discover which types of electronic signatures are recognized globally, country by country.

Find out now which types of digital signatures are legally recognized in the country of your interest and discover the GoSign solution (Desktop & PRO or Business, also available as a mobile app for Android and iOS) that best suits your needs.

Learn more about

GoSign international coverage

Visit the InfoCert Shop to discover the

GoSign Suite

or contact us for more information about our offer.

The post Find out the e-signature legally recognized by your country appeared first on InfoCert.

Thursday, 22. July 2021

Dock

Historical Data Available from Dock’s Proof of Authority Chain

Since the successful transition to Proof of Stake on July 7th, 2021, Dock’s mainnet has been working smoothly with over 434,000 blocks produced at the time of this post’s publishing. The key statistics for the network can be viewed real-time on our explorer. As mentioned in an

Since the successful transition to Proof of Stake on July 7th, 2021, Dock’s mainnet has been working smoothly with over 434,000 blocks produced at the time of this post’s publishing. The key statistics for the network can be viewed real-time on our explorer.

As mentioned in an earlier post, the transition to Proof of Stake involved changing the consensus mechanism of our network from Aura to BABE, and this has made it necessary for us to retire the existing Proof of Authority network and start a new chain. However, we have taken precautions to minimize the impact of switching to a new chain on our users, by preserving the existing chain’s state in the new Proof of Stake network’s genesis block, as well as working with Subscan, our explorer provider, to make the historical data from the Proof of Authority chain available for our users. Today we are pleased to announce that these historical data can now be viewed at https://dock-poa.subscan.io/. Here, you can leverage the explorer’s robust search functionality to look up the details of the blocks, events, extrinsics, and transfers on the retired chain.
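For developers who prefer to query the retired chain's data programmatically rather than through the explorer UI, here is a minimal sketch using Subscan's HTTP API. It assumes the dock-poa instance exposes the standard Subscan /api/scan/block endpoint at a dock-poa API host and that you have a Subscan API key; the exact host and endpoint availability should be confirmed against Subscan's API documentation.

```python
# Illustrative sketch: look up a block on the retired Proof of Authority chain via Subscan's API.
import requests

API_HOST = "https://dock-poa.api.subscan.io"   # assumed API host for the dock-poa explorer instance
API_KEY = "<your-subscan-api-key>"             # placeholder; obtain a key from Subscan

def get_block(block_num: int) -> dict:
    """Fetch block details (events, extrinsics, hash) for a given block number."""
    resp = requests.post(
        f"{API_HOST}/api/scan/block",
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
        json={"block_num": block_num},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = get_block(100_000)
    print(data.get("data", {}).get("hash"))
```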

We will follow up shortly with more information on how to participate in the further decentralized governance of the new Proof of Stake mainnet. In the meantime, please head to our blog and docs to learn more about the new network, including how to get involved as a staker (also referred to as "nominator") or a validator. You can also reach out to us via Twitter or Discord.


Coinfirm

Anti Money Laundering Authority: the EU Commission’s AML Regulator Proposal

On 20 July 2021, the EU Commission published a package of 4 AML regulatory proposals. One of these is the institution of a supervisory body to specifically cover AML, the Anti Money Laundering Authority (AMLA). The proposal references AML failures from EU credit institutions as a major factor in the recommendation to create the AMLA....

IBM Blockchain

Making permissioned blockchains interoperable with Weaver

Distributed ledger technology (DLT) has gone beyond its experimental phase and is now actively managing several enterprise workflows around the world in areas like trade logistics, export finance, inter-bank payments, and regulatory compliance. But this has not led to convergence, either to a default technology stack or to a single global network that everyone runs […] The post Making permission

Distributed ledger technology (DLT) has gone beyond its experimental phase and is now actively managing several enterprise workflows around the world in areas like trade logistics, export finance, inter-bank payments, and regulatory compliance. But this has not led to convergence, either to a default technology stack or to a single global network that everyone runs […]

The post Making permissioned blockchains interoperable with Weaver appeared first on Blockchain Pulse: IBM Blockchain Blog.


Magic Labs

Magic Raises $27M to Future-Proof Authentication

Today, we’re thrilled to announce that Magic has raised $27 million in Series A funding, bringing our total funding to $31 million. This round is led by Northzone, with participation from Tiger Global, Placeholder, SV Angel, Digital Currency Group, CoinFund, and Cherubic — along with a roster of more than 80 stellar angel investors, including: Alexis Ohanian — Co-founder of Reddit, Initial

Today, we’re thrilled to announce that Magic has raised $27 million in Series A funding, bringing our total funding to $31 million.

This round is led by Northzone, with participation from Tiger Global, Placeholder, SV Angel, Digital Currency Group, CoinFund, and Cherubic — along with a roster of more than 80 stellar angel investors, including:

Alexis Ohanian — Co-founder of Reddit, Initialized Capital
Balaji Srinivasan — Ex-CTO at Coinbase, Co-founder of Earn.com
Ben Pruess — President at Tommy Hilfiger, Ex-VP at Adidas
Casey Neistat — YouTuber (12M subscribers)
Guillermo Rauch — CEO of Vercel & Next.js
Jacob Jaber — CEO of Philz Coffee
Jason Warner — CTO of Github
Kayvon Beykpour — Head of Consumer Product at Twitter, Periscope
Naval Ravikant — Co-founder of AngelList
Roham Gharegozlou — CEO of Dapper Labs
Ryan Hoover — Founder of Product Hunt, Weekend Fund
Sahil Lavingia — CEO of Gumroad
Scott Belsky — CPO of Adobe, Author of "The Messy Middle"
Soona Amhaz — General Partner at Volt Capital / TokenDaily
Varsha Rao — CEO at Nurx, Ex-Head of Global Ops at Airbnb

This new capital will help us double down on empowering developers and future-proofing our technology, to ensure Magic is the most secure, seamless, and scalable way to onboard users sans passwords.

Since launching on Product Hunt in April 2020, Magic has been in hyper-growth mode. This year, we went from a few people in a San Francisco loft to a 30+ all-remote team spread around the world. We’ve over 10X’d the number of developers building with Magic and our community continues to grow at a fast clip each month. Now, we’re securing millions of user identities for companies of all sizes and verticals.

Trailblazing customers like UserVoice, Decrypt, Polymarket, Fairmint, and more integrate Magic as their core auth flow. We’ve helped our customer base expedite time-to-market, boost conversion rates, reach more audiences, level up security, and reduce cost. And we’re just getting started.

Our vision is to build the passport of the internet in order to safeguard the trust between users and internet services.

The legacy model

User trust is one of the biggest challenges of the internet. Despite explosive growth in the number of people now connected to the internet — over 5.1 billion users, 67% of the planet — user trust is at an all-time low. Why?

The current user trust model of the internet is fundamentally broken.

A majority of the internet ecosystem has been trading user security, trust, and privacy in exchange for convenience and unsustainable profit growth. These dynamics at play resemble a teetering Jenga tower about to collapse.

We are ensnared in a cybertrust paradox: relying on both a handful of mega-corporations and relative geopolitical stability for access to vital online services — sometimes forcefully so.

These corporations may:

Go out of business and stop providing services
Get hacked and cause massive damage to businesses and users
Restrict critical access due to geopolitical motivations
Exploit user privacy and compete with businesses built on their own platform due to misaligned incentives
Ignore compatibility with modern tech stacks like Jamstack, blockchain, and other forms of decentralized infrastructure

Big tech companies become centralized custodians, amassing troves of user identity data, creating single-points-of-failure with “too big to fail” level risks. With motivations to expand and maintain growth at all costs, they acquire more companies and absorb even more user identities. Close to 80% of all recorded acquisitions happened in the last 8 years alone.

This problem compounds itself. One password leak makes other compromises easier, and the rate of lost or stolen passwords is only accelerating, as more companies are moving online due to the pandemic. Facebook’s most recent data breach compromised phone numbers and personal data, making it easier for hackers to impersonate users and scam them into handing over login credentials. In this instance, over 500 million users’ data were leaked.

To hedge against these risks, companies are under more pressure to keep data safe and act swiftly and transparently in a cyberattack. So they turn to their developers to implement authentication in-house. This often ends up being extremely expensive, involving building large teams to continuously address a multitude of security, compliance, infrastructure, reliability, and scale challenges. Despite these resources, 66% of breaches took months or even years to discover in the first place.

Data breaches and lost/stolen passwords are a looming challenge of our times. Traditional forms of authentication haven’t changed much in decades and passwords are already obsolete.

Now more than ever, we need digital identity infrastructure that’s secure and sustainable — that scales with modern internet ecosystems.

The future

At Magic, we believe the solution starts with developers.

Instead of deferring responsibilities to end-users to improve their own security hygiene with solutions like password managers, Magic makes it plug and play for developers to add secure, passwordless login, like magic links and WebAuthn, to their applications. Users are no longer exposed to password-related risks from the very start.

So, what makes Magic authentication unique? Instead of usernames and passwords, Magic uses public and private keys to authenticate users under the hood. A decentralized identifier is signed by the private key to generate a valid authentication token that can be used to verify user identity.

Traditionally, usernames are publicly recognizable identifiers that help pinpoint a user, whereas passwords are secrets that were created by the user and are supposed to be something only they know.

You can think of public and private keys as materially improved versions of usernames and passwords. The public key is the identifier and the private key is the secret. Instead of being created by users and prone to human error (e.g. weak/reused passwords), the key pair is generated via elliptic curve cryptography that has proven itself as the algorithm used to secure immense value sitting on mainstream blockchains like Bitcoin and Ethereum.
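As a rough illustration of that key-pair pattern, here is a minimal sketch using Python's cryptography library and the secp256k1 curve. It shows the general technique only; it is not Magic's actual DID token format, and the challenge string and flow are assumptions.

```python
# Minimal sketch of key-pair authentication: the public key is the identifier,
# the private key is the secret that signs a challenge. Not Magic's real token format.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Generated once, e.g. when the user first signs up; the private key never leaves the user.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# Login: the service issues a challenge and the user signs it with the private key.
challenge = b"login-challenge-nonce"          # hypothetical challenge value
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The service verifies the signature against the stored public key (the "username").
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated")
except InvalidSignature:
    print("rejected")
```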

Using blockchain key pairs for authentication gives Magic native compatibility with blockchain, supporting more than a dozen blockchains. This enables blockchain developers to use Magic SDK to provide user-friendly onboarding experiences to mainstream users and tap into the potential of the rapidly expanding blockchain industry, which is growing 56.1% year over year and projected to reach $69.04 billion by 2027.

The key pairs are also privacy-preserving (no personally identifiable information) and exportable. This allows user identity to be portable and owned by users themselves (self-sovereignty). The world is already moving in this direction, with novel solutions from companies like Workday and Microsoft.

We’re first committed to enabling a passwordless future, by providing developers with the easiest way to integrate passwordless login methods into their applications, paving the way to eventually encourage worldwide adoption of decentralized identity.

We’re hiring!

To accelerate our momentum, we are growing our team! We are hiring across the board — engineering, product, research, and marketing.

We are a diverse team with experience working at leading tech companies such as Stripe, Docker, Amazon, Auth0, Box, and Apple.

You can check out our openings on our careers page. If you’re excited about our mission and don’t see a role that matches your skills, please write to careers@magic.link and share how you can help.

To our customers, community, and investors: we’re incredibly grateful for your support. Absolutely thrilled to be on this journey together and can’t wait to share what’s in store for you all!

Onward! 🔥

Learn More About Magic ✨

Website | Documentation | Careers | Press | GitHub | Twitter

Magic Raises $27M to Future-Proof Authentication was originally published in Magic on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

EIC Speaker Spotlight: Dr. Barbara Mandl & Dr. Angelika Steinacker on the Women in Identity Workshop


by Andrea Beskers

Women in Identity exists to foster new relationships and mentorship between women in technical fields and beyond. At EIC 2021, Dr. Angelika Steinacker and Dr. Barbara Mandl will hold a pre-conference workshop on Monday, September 13 starting at 9:00 am.

To give you a sneak preview of what to expect, we asked Angelika and Barbara some questions about their planned workshop.


Which audience do you hope to attract for your joint workshop on Women in Identity?

We are hoping for a very diverse audience. We are addressing everyone who is currently working, or planning to work, in the space of Identity and Access Management and/or Security. Our agenda is put together in such a way that it is meant for a very diverse audience, and the workshop itself is meant to be inclusive, regardless of position, gender, or nationality. It should be interesting for everyone: managers in Identity and Access Management, technical specialists, salespeople, and students. As it is a workshop, we would love to have people who join from the beginning to the end, who really like to participate, who enjoy lively discussions, and who are very open-minded.


What are the goals of Women in Identity?

Women in Identity is a registered non-profit membership organization. Everyone at every level within Women in Identity works on a voluntary basis, and our teams are located across the globe. We want to inspire, elevate, and support a more diverse workforce in the digital identity industry. I am the Women in Identity ambassador for the DACH region (Germany, Austria, and Switzerland). We are now a group of more than 25 women in the DACH region who want to make identity inclusive, exchange ideas, and discuss further steps to encourage women to get involved in STEM.


What will the agenda look like?

The agenda will be very interesting. We will have short presentations and a fireside chat with discussions with all attendees. The agenda will also include breakout sessions to discuss various topics in depth, for example technical IAM achievements, the future of Identity and Access Management, agile methods in IAM programs, and we will present the results of each of the breakout sessions to the entire audience.


What would you like the takeaways to be?

Mainly, we would like our participants to be excited about the new ideas and thoughts presented in the workshop and discussed in the breakout sessions: ideas and thoughts that nurture a new impulse to contribute, along with knowing how to contribute, to create inclusiveness in the Identity and Access Management and Security workplace.

We would like people to have a refreshed motivation and a new sense of curiosity for Identity and Access Management and Security through new ideas and thoughts, which will be discussed and presented at the workshop.


OWI - State of Identity

SheerID: ID vs. Identity


This week, join OWI's Managing Director and host, Cameron D'Ambrosi, and Sai Koppala, Chief Marketing Officer at SheerID, as they put protecting consumer privacy front and center. They tackle hot topics like Google's stance on third-party cookies and the evolution of a new gold standard towards "zero-party data." You'll learn how SheerID digitally verifies 2.5 billion customers via more than 9,000 authoritative data sources, providing insights from hundreds of global brands without the need to share or sell customer data.


KuppingerCole

EIC Speaker Spotlight: Doc Searls on Decentralized Identity


by Raj Hegde

Doc Searls, Co-founder and board member of Customer Commons, and Director of ProjectVRM, is to deliver a keynote entitled Where Stands the Sovereign Self? at the European Identity and Cloud Conference 2021.

To give you a sneak preview of what to expect, we asked Doc some questions about his planned presentation.


Why does decentralized identity matter to companies today?

You as an individual have sovereignty and agency when you put your shoes on in the morning, when you drive your car, when you ride your bicycle. [But] we don't have that much [agency] yet on the internet. We don't have our own bicycle. We don't have our own car. We have a different one for every website that we go to. We have no more privacy than what websites and services provide separately. We don't have any scale across all of them. We're always agreeing to terms that other parties are proffering to us, giving us very few choices about how we answer and no records of that. And that's because we're always thinking that whatever we have as an individual is up to a company. And there are so many companies that we're dealing with as individuals that we don't have scale. We don't have full agency. We have no more agency than what the companies provide for us.

That's too much responsibility for companies, and it's not responsibility that they necessarily want. It's too much. Why should you have to keep records of all these preferences, all these different things? It doesn't make any sense. You should be able as an individual - because you have agency - to change your address, to change your last name, to have a shopping cart you could take from site to site for the good of the companies that you're dealing with. You know, that's something that works for people and it works for companies as well. And that's why we need agency. We haven't had it yet online. We created the online environment more or less on the old industrial model where we had to have scale.

Every company separately needed all the scale they could get across all the customers that they wanted, and to hold that scale exclusively, so customers had no way to move from company to company or site to site with their own tools where they could deal at scale. They have a browser, but the browser is all they got really. After that, it's up to the company to remember who was that? That's a cookie I put in your browser. The customer should have more independence than that, and that independence should pay off for the companies. And we don't have it yet. That's a big “to do”.


What is the business value of decentralized identity?

The business value of decentralized identity, which I think of as distributed identity, by the way, is independence for the customer. It's independence for the individual. It's that everybody has the capacity to obey all of Kim Cameron's original seven laws of identity: Minimum disclosure for a constrained use, justifiable parties, plurality of operators [and technologies etc].

Those things are all really good for business, and we don't have them yet. And we don't have the tools on our side. We need wallets of our own, or something that is like a wallet, that is our instrument for making verifiable claims and presenting verifiable credentials that can be verified easily and disclosed minimally. There's a lot of thought and a lot of development going into how we make this work on the corporate side. I haven't seen enough on my side, on the individual side, that makes me say: I just have to have that wallet. That's a great wallet. I have that. What can I put in that wallet? Have I seen this company before? What's my relationship with them? Where did I get these credentials? What are those? I shouldn't have to think about any of that.

I think in the abstract, it all works, but we need to make it work in the concrete and it’s never going to work unless we have something that is an app on the front page of our phones; something that's a body function for us like our wallets are in our back pockets and our purses. And like apps are on our phones. We need those. There are some in the world now. There are some SSI wallets, but we need ones that are what I call “inventions” that mother necessity. You have to have them - take one look at it and say, I got to have that. I gotta have that. That's going to save me all kinds of trouble.


What can organizations do to kickstart decentralized initiatives?

I would suggest that companies go to their employees and say: don't think of yourself as an employee, think of yourself as a customer or a user (if you don't have customers, you just have consumers; they don't pay anything, but you have them). What would you want that works not only for this company, but for every company you deal with? What gives you scale? What can we do, in cooperation with every other company out there, that's going to give you scale as a person, not just as an employee, but as a person? And especially think of how we avoid trapping customers. How do we not put them into a walled garden or into a silo? How do we liberate them so they can deal in a more efficient and useful way with all the companies they deal with? And look at what the fundamentals of SSI are, and how they would work with that.

But how do we put that to use in the world, for you as an individual, and get all of the input you possibly can, because I guarantee that the employees of your company know more about what they would want, as individuals operating in the world, than the company could begin to guess at.


What are the risks at play if organizations don't work on decentralized initiatives soon?

The main risk is getting left behind on something that will be universal, eventually. It's like 1991, and we don't have the browser yet, but the internet is already here. We are all going to be using the internet. Something like the browser is going to show up. Everybody's going to have one. Are you just going to put a brochure on the web, and say I'm going to hand it off to the marketing department? Or are you going to base the whole world – your entire company - on what the internet is going to do for you, that everybody can see through their browser?

That's the future we're going to have with SSI. We're going to have self-sovereign customers. They are going to present verifiable credentials to your company and other companies. And you're going to participate in that. How do you get involved with that? We don't have the browser yet; we don't have the equivalent of the browser for SSI. We're going to have the equivalent of a browser, probably a number of things that might be like a browser, but something will play the role of a browser. Maybe a better example is an email client. You know, we all had, if you were fluent in basic computerese, Mutt and Pine, and things that worked in the command line. And then later, say, Eudora, which you could use as an email client. This is long before Google sort of normalized it with Gmail, but you could still take your mail off Gmail and put it on your own server.

And people are going to have their own instruments for expressing their self-sovereign identities and really identifiers. Again, I have verifiable credentials that I'm going to present. I go to the concert and all you need to know is I have a ticket. I just flash something on my phone. You've got a ticket, you get in. We have some of that right now with QR codes. I went to a game the other day and showed a QR code, got in no problem. That starts to get there. That’s a form of SSI, but that's one company's way of presenting. That’s one B2B solution that happens to work in a B2C way.

We need the C2B solutions. Browsers are C2B. They're not B2C. We don't have a different browser for every company we deal with. We have one browser we use with every company, one email account we use with every company. We have one SMS client that we use to deal with every company. We have one phone number that we give when a phone number is required as an identifier. Those are examples of scale. We should have many, many other kinds of scale, but you can't begin to see that if you're only looking at it in an individualized B2C way. You're going to have to imagine a future where every one of us has self-sovereign ways to present verifiable credentials that work across the whole world in a normalized way. And where does your company play in that? I think right now the way to play in it is to jump in with a whole lot of other companies that are doing base-level work on common protocols, the standards, and ways to make this new industry scaffold itself up. Get together with other companies on that base-level stuff.


The title of your keynote is 'Where stands the Sovereign Self?' What do you hope to achieve with your keynote at EIC 2021?

I hope to achieve, as I've tried to do I think with every keynote I've given there going back to the beginning, full respect for what can only be solved from the individual side. There are so many business problems that can only be solved from the customer side. And, I think with SSI especially, we finally have a lot of activity around that. But I'm still not seeing enough that starts with the individual, that says how do we get this person’s scale? How do we make this work across entire markets and across the entire world? How can we start equipping them - individuals – with ways to manage their lives? [We] have barely thought that out. Google and Apple provide calendars and contacts. And so does Microsoft, but they're not exactly compatible. They're very siloed.

That's our personal data. We're always talking about personal data [and things like] harvesting personal data. The personal data that matters are: my health, my finances, what I own, my contacts, my calendar. Nobody has solved these yet. And [there are] standards lying around in the world. Phil Windley is here [at EIC]. He'll tell you about picos and other things. I suggest you talk to him. If I were 25 years old right now, I would start with picos. I would start with things that people own. How do we make owning things a lot easier? How do we make it easier to communicate with companies about what we own, outside of companies' separate silos?

That plays into SSI. I go to the store and I say, yeah, I'm a member of this. I already have this. I already bought this one. You know, I go: did I buy this? Did I buy this keyboard? I have a box over there that has three Apple keyboards. I didn't even know I had these. If I had known, my life would [have been] simpler. I wouldn't have bought another one the other day. But I don't know those things because I don't have a record of that. When I get the receipt, I should put it through something that remembers it. Nobody's thinking about these things, or if they are, they're not gaining any scale.

Give people scale around controlling their lives. We have the internet now. We have digital technology now. We're all digital beings. Give us ways that we can manage our lives. Fire marketing, get them out of the room, because they're still only thinking: how do we find out all about what the customer wants? For one company apiece. Get rid of that. Just think about what the customer needs. You're a customer. Everybody here is a customer. Think about what you could use in your life that you don't have right now, that gives you scale across every company you deal with, in ways that are useful to those companies, who are going to appreciate getting just a verifiable credential that says: I bought this before. I belong to this. I have a warranty. Lots of things like that, but in a standard way across all the companies you're dealing with. That is the frontier.


Infocert

Tinexta enters the French market through the acquisition of the majority stake of CertEurope, a leader in the local Certification Authority market


The company will be acquired by InfoCert and will join the Digital Trust Division. The consideration for 60% of the capital is equal to Euro 43.8 million.

Rome, 21 July 2021. Tinexta S.p.A., a leading company in Digital Trust, Cybersecurity, Credit Information & Management and Innovation & Marketing services, through its subsidiary InfoCert S.p.A., has today finalised a Binding Offer, in the form of a Put Option Agreement, to the French company Oodrive S.A.S. for the acquisition of 60% of CertEurope S.A.S.'s capital. Oodrive has been backed since 2017 by Tikehau Capital, the global alternative asset management group.

CertEurope, headquartered in Paris, is one of the three largest Certification Authorities in France with a well-known brand and a market share of around 40% in the eIDAS certificate sector. The company has the authorisations and accreditations to issue all types of certificates required by the French market, in compliance with the technical requirements established by the National Agency for the Security of Information Systems (ANSSI).

Through the acquisition, Tinexta will enter the French market, the second largest in the European Community, and InfoCert, the largest Certification Authority in Europe, will be able to sell its solutions in that market. CertEurope's well-established business relationships with some important trade associations (lawyers among others) and with the major national retailers (resellers of digital services) represent a potential accelerator for the penetration of InfoCert's solutions into the French market.

The agreement provides for the purchase of 60% of CertEurope’s capital for a total consideration of Euro 43.8 million (which includes an earn-out of Euro 3.8 million depending on the 2021 and 2022 result performances), assuming zero net financial debt at closing.

The option right inherent in the minority interests in the company's capital may be exercised in 2023, on the basis of specific Put/Call agreements[1]. The discounted value of the Put/Call option of the minority interest is estimated at approximately Euro 28.4 million.

The investment for 100% of the capital is estimated at Euro 72.2 million, composed as follows:

€ million
Initial Cash-Out: 40.0
Earn-out Debt*: 3.8
Put Options Debt*: 28.4
Total Investment: 72.2
*Discounted value, non-interest bearing.

The acquisition of CertEurope will be financed with the existing liquid assets. The Enterprise Value of the company is equal to Euro 66.7 million, at a multiple of between 12x and 13x the 2020 EBITDA proforma for the acquisition of the majority stake and at a multiple of between 12x and 13x the expected 2022 EBITDA for the exercise of the option right on the remaining shares in 2023.

In the 2020 fiscal year, CertEurope recorded revenues of Euro 14.1 million, up 6.9% compared to the previous year, and a proforma[2] EBITDA of Euro 5.2 million, with an operating EBITDA margin of 37% on revenues.

In accordance with the French legal system, the conclusion of a final agreement would occur after the seller has conducted the information-consultation process of the works council. Closing is expected by the fourth quarter of 2021. This operation is subject to the completion of the foreign investment control procedure in France.

The total value of the Digital Trust market in France is estimated[3] at approximately Euro 150 million, with a growth forecast equal to 23% per year over the next few years, reaching Euro 500 million in 2025. The competitive environment is made up of a few major brands (including CertEurope, with around 10% market share, the third largest player) and a large number of smaller competitors.

Enrico Salza, Chairman of Tinexta S.p.A., said: “The Group continues growing, taking a further step towards international development and progressively consolidating its position abroad, confirming its vocation as a European player in digital identity”.

Pier Andrea Chevallard, CEO of Tinexta S.p.A., commented: “Through CertEurope, Tinexta group will be able to operate in France, a large market with great potential, also in regulated activities, and will have a high standing local platform and a brand of undoubted recognition to boost the international expansion.

We embarked on this process with determination because we are confident that this is a highly strategic operation, which can be a steppingstone to expand our presence in Europe. The high degree of complementarity with InfoCert’s expertise and services will enable us to achieve much value from this operation”.

"This agreement opens up significant opportunities," added Danilo Cattaneo, CEO of InfoCert. "In CertEurope we found a group of skilled and motivated people who are already working with us to tailor top-selling InfoCert solutions to the specific needs of French companies and professionals, who will have all of the Group's solutions available with a local expert presence.

We share the vision, as a group, of further investing resources in R&D to provide expertise and valuable solutions for the digital transformation that will be a leading trend in Europe for the years to come."

In the acquisition of CertEurope, Tinexta was assisted by Roland Berger for support in preparing the business plan, by PWC for the financial and tax due diligence activities, and by DLA for the legal due diligence and support in negotiating the contractual texts.

TINEXTA S.p.A.

Tinexta, listed on the STAR segment of the Milan Stock Exchange, reported the following consolidated results as of 31 December 2020: revenues of EUR 269.1 million, EBITDA of EUR 77.9 million and net profit of EUR 37.9 million. Tinexta Group is one of Italy’s leading operators in its four business areas: Digital Trust, Cyber Security, Credit Information & Management, Innovation & Marketing Services. The Digital Trust Business Unit provides, through the companies InfoCert S.p.A., Visura S.p.A., Sixtema S.p.A. and the Spanish company Camerfirma S.A., products and services for digitisation, electronic invoicing and certified e-mail (PEC) for large companies, banks, insurance and financial companies, SMEs, associations and professionals. The Cyber Security Business Unit operates through the companies Yoroi, Swascan and Corvallis and constitutes one of the national poles in the research and provision of the most advanced solutions for data protection and security. In the Credit Information & Management Business Unit, Innolva S.p.A. and its subsidiaries offer services to support decision-making processes (Chamber of Commerce and real estate information, aggregated reports, synthetic ratings, decision-making models, credit assessment and recovery) while RE Valuta offers real estate services (appraisals and evaluations). In the Innovation & Marketing Services Business Unit, Warrant Hub S.p.A. is a leader in consultancy in grants, loans and tax relief as well as industrial innovation, while Co.Mark S.p.A. provides Temporary Export Management consultancy to SMEs to support them in their commercial expansion. As of 31 December 2020, the Group had 1,403 employees.

Website: www.tinexta.com, Stock ticker: TNXT, ISIN Code IT0005037210

[1] This option, although classified as debt under IFRS/IAS, does not entail any financial expense prior to its exercise, which may not occur until 2023.
[2] The perimeter of the transaction refers to the legal entity CertEurope S.A.S. after a carve-out and carve-in process that will be completed before the closing. More specifically, with the carve-out some assets and 13 Human Resources will be transferred, while after the carve-in 24 Human Resources will join CertEurope.
[3] Source: Grand View Research, company information, interviews with market participants, desk research, Roland Berger.

CONTACTS

Chief Investor Relations Officer
Josef Mastragostino
investor@tinexta.com

Chief External Relations & Communication Officer
Alessandra Ruzzu
alessandra.ruzzu@tinexta.com

Press Office
Carla Piro Mander
Tel. +39 06 42 01 26 31
carla.piro@tinexta.com

Media Advisor
Barabino & Partners S.p.A.
Foro Buonaparte, 22 – 20121 Milano
Tel.: +39 02 7202 3535
Stefania Bassi: +39 335 6282 667
s.bassi@barabino.it

Specialist
Intermonte SIM S.p.A.
Corso V. Emanuele II, 9 – 20122 Milan
Tel.: +39 02 771151

The post Tinexta enters the French market through the acquisition of the majority stake of CertEurope, a leader in the local Certification Authority market appeared first on InfoCert.


Infocert (IT)

Tinexta enters the French market through the acquisition of the majority stake of CertEurope, a leader in the local Certification Authority market


The company will be acquired by InfoCert and will join the Digital Trust Division. The consideration for 60% of the capital is equal to Euro 43.8 million.

Rome, 21 July 2021. Tinexta S.p.A., a leading company in Digital Trust, Cybersecurity, Credit Information & Management and Innovation & Marketing services, through its subsidiary InfoCert S.p.A., has today finalised a Binding Offer, in the form of a Put Option Agreement, to the French company Oodrive S.A.S. for the acquisition of 60% of CertEurope S.A.S.'s capital. Oodrive has been owned since 2017 by Tikehau Capital, a global alternative asset management group.

CertEurope, headquartered in Paris, is one of the three largest Certification Authorities in France, with a well-known brand and a market share of around 40% in the eIDAS certificate sector. The company holds the authorisations and accreditations to issue all types of certificates required by the French market, in compliance with the technical requirements established by the National Agency for the Security of Information Systems (ANSSI).

Through the acquisition, Tinexta will enter the French market, the second largest in the European Community, and InfoCert, the largest Certification Authority in Europe, will be able to sell its solutions in that market. CertEurope's well-established business relationships with some important trade associations (lawyers among others) and with the major national retailers (resellers of digital services) represent a significant potential accelerator for the penetration of InfoCert's solutions into the French market.

The agreement provides for the purchase of 60% of CertEurope's capital for a total consideration of Euro 43.8 million (which includes an earn-out of Euro 3.8 million depending on the 2021 and 2022 result performances), assuming zero net financial debt at closing.

The option right inherent in the minority interests in the company's capital may be exercised in 2023, on the basis of specific Put/Call agreements[1]. The discounted value of the Put/Call option of the minority interest is estimated at approximately Euro 28.4 million.

The investment for 100% of the capital is estimated at Euro 72.2 million, composed as follows:

€ million
Initial Cash-Out: 40.0
Earn-out Debt*: 3.8
Put Options Debt*: 28.4
Total Investment: 72.2
*Discounted value, non-interest bearing.

The acquisition of CertEurope will be financed with existing liquidity. The Enterprise Value of the company is Euro 66.7 million, at a multiple of between 12x and 13x the 2020 proforma EBITDA for the acquisition of the majority stake and at a multiple of between 12x and 13x the expected 2022 EBITDA for the exercise of the option right on the remaining shares in 2023.

In the 2020 fiscal year, CertEurope recorded revenues of Euro 14.1 million, up 6.9% on the previous year, and a proforma[2] EBITDA of Euro 5.2 million, with an EBITDA margin of 37%.

In accordance with the French legal system, the final agreement will be formalised after the seller has conducted the information-consultation process of the works council. Closing is expected by the fourth quarter of 2021. The transaction is subject to the completion of the foreign investment control procedure in France.

Also under French law, the signing is expected in September, at the end of the Hamon Law process involving the Human Resources of the acquired company, which starts today; the transaction is likewise subject to the completion of the Antitrust and Golden Rule procedures in France.

The total value of the Digital Trust market in France is estimated[3] at approximately Euro 150 million, with a growth forecast of 23% per year over the next few years, reaching Euro 500 million in 2025. The competitive environment is made up of a few major brands (including CertEurope, with around a 10% market share, the third largest player) and a large number of smaller competitors.

"The Group," said Enrico Salza, Chairman of Tinexta S.p.A., "continues to grow, taking a further step in its international development and progressively consolidating its position abroad, confirming its vocation as a European player in digital identity."

"Through CertEurope," commented Pier Andrea Chevallard, CEO of Tinexta S.p.A., "the Tinexta group will be able to operate in France, a large market with great potential, including in regulated activities, and will have a high-standing local platform and a highly recognisable brand to accelerate its international expansion.

We embarked on this process with determination because we are convinced that this is a highly strategic operation, capable of being a springboard for expanding our presence in Europe. The high degree of complementarity with InfoCert's expertise and services will enable us to extract great value from this operation."

"The agreement announced today opens up significant opportunities," added Danilo Cattaneo, CEO of InfoCert. "In CertEurope we found a group of skilled and motivated people who are already working side by side with us to provide French companies and professionals with InfoCert solutions integrated with CertEurope products and adapted to specific local needs and regulations.

As a group, we share the vision of investing further in research and development in order to provide all the expertise and solutions needed for the digital transformation that will take place in Europe in the coming years."

In the acquisition of CertEurope, Tinexta was assisted by Roland Berger for support in preparing the business plan, by PWC for the financial and tax due diligence activities, and by DLA for the legal due diligence and support in negotiating the contractual texts.

[1] This option right, although classified as debt under IFRS/IAS, does not entail any financial expense prior to its exercise, which may not occur until 2023.
[2] The perimeter of the transaction refers to the legal entity CertEurope S.A.S. after a carve-out and carve-in process that will be completed before the closing. More specifically, with the carve-out some assets and 13 Human Resources will be transferred, while with the carve-in 24 Human Resources will join CertEurope.
[3] Source: Grand View Research, company information, interviews with market participants, desk research, Roland Berger.

TINEXTA S.p.A.

Tinexta, listed on the STAR segment of the Milan Stock Exchange, reported the following consolidated results as of 31 December 2020: revenues of Euro 269.1 million, EBITDA of Euro 77.9 million and net profit of Euro 37.9 million. Tinexta Group is one of Italy's leading operators in its four business areas: Digital Trust, Cybersecurity, Credit Information & Management, Innovation & Marketing Services. The Digital Trust Business Unit provides, through the companies InfoCert S.p.A., Visura S.p.A., Sixtema S.p.A. and the Spanish company Camerfirma S.A., products and solutions for digitisation: digital signatures, digital identity, customer onboarding, electronic invoicing and certified e-mail (PEC) for large companies, banks, insurance and financial companies, SMEs, associations and professionals. The Cybersecurity Business Unit operates through the companies Yoroi, Swascan and Corvallis and constitutes one of the national poles in the research and provision of the most advanced solutions for data protection and security. In the Credit Information & Management Business Unit, Innolva S.p.A. and its subsidiaries offer services to support decision-making processes (Chamber of Commerce and real estate information, aggregated reports, synthetic ratings, decision-making models, credit assessment and recovery), while RE Valuta S.p.A. offers real estate services (appraisals and valuations). In the Innovation & Marketing Services Business Unit, Warrant Hub S.p.A. is a leader in consultancy on subsidised finance and industrial innovation, while Co.Mark S.p.A. provides Temporary Export Management consultancy to SMEs to support their commercial expansion. As of 31 December 2020, the Group had 1,403 employees.

Website: www.tinexta.com, Stock ticker: TNXT, ISIN Code IT0005037210

InfoCert SpA

InfoCert, a Tinexta Group company, is the largest Certification Authority in Europe, active in more than twenty countries. The company provides digitisation, eDelivery, Digital Signature and digital document preservation services and is an AgID-accredited manager of digital identity within SPID (the Italian Public Digital Identity System). InfoCert invests significantly in research and development and in quality: it holds a significant number of patents, while its ISO 9001, 27001 and 20000 quality certifications attest to its commitment to the highest levels of service delivery and security management. InfoCert's Information Security Management System is certified ISO/IEC 27001:2013 for EA:33-35 activities. InfoCert is the European leader in Digital Trust services fully compliant with the requirements of the eIDAS Regulation (EU Regulation 910/2014) and the ETSI EN 319 401 standards, and aims to grow further internationally, including through acquisitions: it holds 51% of Camerfirma, one of the main Spanish certification authorities, and 16.7% of Authada, a cutting-edge German Identity Provider. Finally, InfoCert owns 80% of the shares of Sixtema S.p.A., the technology partner of the CNA system, which provides technological solutions and consultancy services to SMEs, trade associations, financial intermediaries, professional firms and other organisations.

For more information:

InfoCert
Press Relations Advisor
BMP Comunicazione for InfoCert
team.infocert@bmpcomunicazione.it
Pietro Barrile +393207008732 – Michela Mantegazza +393281225838 – Francesco Petrella +393452731667
www.infocert.it

Tinexta S.p.A.
Chief Investor Relations Officer
Josef Mastragostino
investor@tinexta.com

Chief External Relations & Communication Officer
Alessandra Ruzzu
+39 331 622 4168
alessandra.ruzzu@tinexta.com

Press Office Manager
Carla Piro Mander
Tel. +39 06 42 01 26 31
carla.piro@tinexta.com

Media Advisor
Barabino & Partners S.p.A.
Foro Buonaparte, 22 – 20121 Milano
Tel.: +39 02 7202 3535
Stefania Bassi: +39 335 6282 667
s.bassi@barabino.it

Specialist
Intermonte SIM S.p.A.
Corso V. Emanuele II, 9 – 20122 Milano
Tel.: +39 02 771151

The post Tinexta enters the French market through the acquisition of the majority stake of CertEurope, a leader in the local Certification Authority market appeared first on InfoCert.


Elliptic

Customer Success Story: Paysafe Integrate with Elliptic

Elliptic selected by Paysafe to support cryptocurrency payments compliance

Leading specialized payments platform Paysafe adopts Elliptic’s blockchain wallet screening capabilities, Elliptic Lens, to further strengthen compliance for transactions to and from their crypto partners.  Here’s their story.


PingTalk

Two-factor Authentication: What It Is and What You Need To Know


Cybersecurity sometimes feels like an endless game of cat and mouse, with hackers and companies battling over access to your business systems and the sensitive data you store, including your users’ identity information and passwords. From online banking apps to digitized school records, more and more enterprises are securing access by implementing 2-factor authentication of users instead of relying on a simple password.

 

Let’s explore 2-factor authentication in more detail, including a close look at how it works and the value it provides for users and companies alike.
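As a concrete (if simplified) illustration of one common second factor, the sketch below derives a time-based one-time password (TOTP, RFC 6238) from a shared secret using only Python's standard library. It is a generic example of the technique, not Ping Identity's implementation, and the secret shown is an arbitrary placeholder.

```python
import base64, hmac, hashlib, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server and the authenticator app share `secret_b32`; the password is the first
# factor, and a matching TOTP code is the second.
print(totp("JBSWY3DPEHPK3PXP"))
```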
 


Aergo

BananaClips on AERGO


As all of you reading this article already know, a blockchain's architecture makes it very difficult to manipulate or discard data once it has been recorded in the network. Because of this property, user assets created and stored on a blockchain network are protected more securely than in any other system. BananaClips is an online marketplace that utilizes these features and leverages them. The AERGO blockchain is mainly used as a platform to create value and assets for customer-uploaded videos and to improve trust in transactions. Today, I would like to explain how BananaClips uses AERGO and why this makes both AERGO and BananaClips more valuable systems and services.

Uploading Videos on BananaClips

BananaClips is also a platform where anyone can publicly upload valuable videos they have created. When users upload a video to BananaClips, the following process is performed internally.

1. Saving the original video file to a secure archiving location
2. Automatic video analysis
3. Extracting copyright information
4. Extracting search information
5. Detecting unsafe contents
6. Monetizing contents

During this process, the blockchain is utilized in steps 2, 3, and 6.

Video Analysis and Extracting Copyright Information

When a video file is analyzed, various pieces of information, called metadata, are extracted, such as the resolution of the video, geographic location information, whether the video is horizontal or vertical, and so on. It is very important to keep this information safe because it can easily be edited or deleted; this is the first step in securing copyright.

BananaClips keeps this extracted metadata on the AERGO blockchain network to prevent modification or deletion of customers' video content. In addition, by providing the metadata for free through the AERGO network, we are preparing, step by step, for the standardization of copyright protection and the revitalization of the community.

Digital Watermark

Simply extracting metadata and storing it on the blockchain network would be useless if it could not be utilized. For this reason, BananaClips uses its own digital watermark technology to store invisible encrypted information, i.e. steganography, in the video content itself. This steganographic data becomes a compass from which you can locate the metadata extracted above and stored securely on the AERGO network. In other words, no matter how the video content is altered or misused, as long as users and systems can check the metadata and the related licenses, they can further protect the copyright.
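BananaClips' watermarking technology is proprietary, so the following is only a generic sketch of the least-significant-bit steganography idea it alludes to: hiding a small payload (for example, a pointer to the metadata hash stored on AERGO) in the lowest bit of each pixel of a frame. The payload string and sizes are invented for illustration.

```python
import numpy as np

def embed_bits(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each pixel value (a toy watermark)."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()
    assert bits.size <= flat.size, "frame too small for payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits      # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def extract_bits(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

payload = b"metadata-hash-pointer"                            # hypothetical payload
frame = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)  # one grayscale video frame
marked = embed_bits(frame, payload)
print(extract_bits(marked, len(payload)))
```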

Smart Contract

The extracted metadata is added through a smart contract created by BananaClips and deployed on the AERGO network, and anyone can view and check the metadata via the AERGO network. However, if anonymous users or systems could add or modify metadata, incorrect data and misguided behavior would cause confusion. Therefore, for the present, only BananaClips can save and edit the metadata in the smart contract. In the future, we plan to establish a decentralized autonomous organization (DAO) so that various members and partners can participate in this smart contract, and we will work to standardize it.
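To make the pattern concrete, here is a hedged sketch of the idea in plain Python rather than an actual AERGO smart contract, with invented names: metadata is canonicalized and hashed, only a designated owner may write the record, and anyone can recompute the hash to verify it.

```python
import hashlib, json

class MetadataRegistry:
    """Toy stand-in for an on-chain registry: only the owner may write, anyone may read."""
    def __init__(self, owner: str):
        self.owner = owner
        self.records = {}                      # video_id -> metadata hash

    def set_metadata(self, caller: str, video_id: str, metadata: dict) -> str:
        if caller != self.owner:               # mirrors the "only BananaClips can save/edit" rule
            raise PermissionError("only the contract owner may write metadata")
        canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
        digest = hashlib.sha256(canonical.encode()).hexdigest()
        self.records[video_id] = digest
        return digest

    def verify(self, video_id: str, metadata: dict) -> bool:
        canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
        return self.records.get(video_id) == hashlib.sha256(canonical.encode()).hexdigest()

registry = MetadataRegistry(owner="bananaclips")
meta = {"resolution": "1920x1080", "orientation": "horizontal", "gps": "37.5,127.0"}
registry.set_metadata("bananaclips", "video-123", meta)
assert registry.verify("video-123", meta)      # anyone can re-derive and check the hash
```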

Monetizing contents

BananaClips issues an NFT with new value on the AERGO network for uploaded video content that has been prepared through the processes above. However, in order to improve the user experience, new tokens are issued and provided to users in a different way from other NFT services.

It is not an easy task for customers who want to upload and monetize content to understand and prepare personal wallets and coins for issuing. If you think back to when you first tried to understand blockchains and coins, I am sure you will agree that this process is not easy. Making this process simple and easy is one of the biggest challenges, and one of the differentiating factors, that BananaClips has prepared for you.

Fewer steps and simplifications do not compromise security at all. Rather, a role called the "confirmer", which is not present in the existing ARC2 standard, has been added to prevent invalid transactions that could result from an attack on the smart contract. This additional role and confirmation step has been included to give our system a higher level of security.
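The "confirmer" role is specific to BananaClips' extension of ARC2, so the sketch below is only a guess at the general shape of such a flow, with invented names and structure: a mint request queued by the service only takes effect once a separate confirmer key approves it.

```python
class TwoStepMinter:
    """Toy two-phase mint: a request must be confirmed by a separate 'confirmer' key."""
    def __init__(self, minter: str, confirmer: str):
        self.minter, self.confirmer = minter, confirmer
        self.pending = {}                       # request_id -> (owner, token_uri)
        self.tokens = {}                        # token_id -> owner

    def request_mint(self, caller: str, request_id: str, owner: str, token_uri: str):
        assert caller == self.minter, "only the minter may request"
        self.pending[request_id] = (owner, token_uri)

    def confirm_mint(self, caller: str, request_id: str) -> str:
        assert caller == self.confirmer, "only the confirmer may finalize"
        owner, token_uri = self.pending.pop(request_id)   # an unconfirmed request never mints
        token_id = f"nft-{len(self.tokens) + 1}"
        self.tokens[token_id] = owner
        return token_id

m = TwoStepMinter(minter="bananaclips-service", confirmer="bananaclips-confirmer")
m.request_mint("bananaclips-service", "req-1", owner="alice", token_uri="aergo://video-123")
print(m.confirm_mint("bananaclips-confirmer", "req-1"))
```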

Additionally, BananaClips users can withdraw their NFT assets at any time through AERGO or Ethereum (we plan to support more blockchain networks) and can obtain their video content again at any time after withdrawal via our service. Since our marketplace can be used again by re-depositing NFT assets, it will provide usability at a more advanced level than other services in terms of scalability across various platforms.

The NFT smart contract was developed according to the ARC2 standard. The additional APIs, provided after research and development by our engineering team, will help deliver a higher level of service to BananaClips customers. Soon, the AERGO community will be able to meet the advanced NFT open marketplace.

BananaClips on AERGO was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Holochain

Blockchain: A Holochain Perspective

#AHolochainPerspective
Two Different Approaches to Decentralized Data Integrity

Holochain is quite different from blockchain, but because they are designed to solve some of the same problems – and because people try to understand Holochain in terms of blockchain all the time – we figured it would be a good idea to frame at least one key aspect of Holochain in comparison to blockchain.

A complete primer on Holochain and blockchain would need a good deal of detail about what blockchain really is and how it works, and we’d probably be addressing a lot of common technical misconceptions about blockchain in the process. This is not that article.

Instead, this piece focuses on the approach each technology takes to solving an important challenge, which is really the fundamental challenge of decentralized computing: how to ensure that data is accurate and tamper-proof in a way that is efficient enough to scale.

We’ll look at blockchain’s approach, then Holochain’s.

Blockchain and Global Consensus

Blockchain is a cryptographically secured, decentralized ledger of data. You can think of a blockchain as a record of events: the things people said, who agreed to what, who sent money to whom, and so on. Up until blockchain was invented, these sorts of records were pretty much always stored in centralized databases, such as those held by government entities or private companies. Blockchain was created as a way for people to interact and transact without needing to trust such intermediaries.

What makes blockchain secure and trustable – in other words, what ensures its data integrity – is that the data is not just cryptographically protected but also replicated to many different computers (called nodes) controlled by different people or organizations. Only when a piece of data is adopted by the multitude of nodes is it considered factual, at which point it’s committed to the record. For someone to alter the record, they would have to not only break cryptographic barriers but also change most of the copies that are floating out there – a pretty much impossible task.

The way that nodes reach ‘consensus’ on what data to commit to the record varies from blockchain to blockchain, but it typically involves some type of competition among the nodes to write the next chunk of entries, or ‘block’, to the chain. Ultimately the selection of the winning node is random, so it’s not exactly consensus in the way that people mean the term in the real world. But the important takeaway is that, one way or another, the blockchain nodes come to terms on a global state of data, where all nodes hold a replica of all the same data.

And so here we come to the scalability problem: it requires tremendous computing work for all the nodes to write and hold the same data. This makes blockchains notoriously slow processors: the Bitcoin network processes just a few transactions per second, while the Ethereum network currently processes dozens. Users are accustomed to waiting minutes, sometimes up to an hour, for a single transaction to be confirmed.

The problem gets worse as you try to make blockchain do more things, which has been its aspiration, more or less, since Ethereum positioned itself as a decentralized world computer when it was announced in 2014. That's the point at which blockchains began to be able to store not just transaction records but all kinds of files and even executable application code that performs functions when accessed, resulting in new data that also gets written to the blockchain. It's been an attractive idea to imagine that much of what we do on the web today could be hosted on blockchain networks as decentralized applications (dApps). In reality, though, apps like social networks, communication platforms, travel-booking systems, ride-sharing systems, calendar systems, and so on need much faster throughput, by many orders of magnitude, than blockchain can provide. Can you imagine waiting several minutes (and paying a gas fee!) to get a message through on a chat platform? Or for an edit to show up on a collaborative editing tool like Google Docs? Can you imagine how much computation and storage would be required to accommodate on a blockchain all of the photos and videos on social media, with all nodes needing to write all of that data and keep it forever? It doesn't work. Facebook receives over 4 million likes every minute and currently stores over 250 billion photos.

Efficiency is such a challenge in blockchain that there are entire companies, including some of the most talked-about crypto projects today, dedicated to figuring out how to make blockchain scale better. Some of them focus on ‘layer-1’ solutions, which attempt to increase the throughput of blockchain protocols themselves, while others are ‘layer-2’, which perform computations or store data off-chain and then periodically merge records into the blockchain. Most of these solutions, though, seem to be setting their sights on low-throughput applications such as financial transacting as opposed to live collaboration apps, social networks, media platforms, and so on. And the few that do seem capable of handling a broader set of applications make compromises on decentralization, concentrating hosting and consensus mechanisms among centrally authorized nodes.

Still, the accomplishment of blockchain is not to be taken lightly. It's more resilient to corruption than any ledgering or value-storage system that has ever existed before, and it is changing global finance as a result, with lots of room for growth still. But what's probably even more important is the awareness it has sparked of what's possible. Its aspirations have infected broad communities of people with a sense that we could communicate and transact without centralized intermediaries. Blockchain's scalability challenges may ultimately limit its utility, but it has already revolutionized how humans think about interacting.

Holochain and Embodied Local States

The architects of Holochain began with a basic question: what if everyone could actually just hold their own data and share it with the network as needed? If everyone could just host themselves rather than relying on mining nodes to do it? We could avoid all this massive replication, which would obviously be much more efficient. We would just need to do it in a way that still ensures data integrity. We would have to be completely confident that, as everyone represents their own data to the network, there is no way for people to misrepresent their data.

That is fundamentally what Holochain does. Holochain is a framework for ensuring data integrity within a decentralized application without relying on anyone other than the users themselves.

A Natural Solution

At this point in the conversation, people familiar with blockchain are often skeptical. What’s to prevent people from lying about their state? From, say, spending the same money in two different places? (Holochain supports applications far more diverse than just currencies, but it’s often useful to use currencies as an illustration.)

We’ll get to some of the mechanics that make this possible in a moment. First, let’s look at the principles behind the mechanics, by way of analogy to nature… starting with some of its smallest objects.

Consider the covalent bonding of a chlorine atom and a hydrogen atom to create a molecule of hydrogen chloride. This requires the hydrogen atom to have a free electron available, i.e. not shared with any other atom. How does the chlorine atom ‘know’ whether the hydrogen atom has an electron available? It’s simply apparent. The hydrogen atom embodies whether a free electron exists in its state. It’s not able to misrepresent whether there’s a free electron, and it’s not able to ‘double-spend’ its electron, because the availability of an electron is evident to other relevant atoms upon inspection. There is global visibility, on demand, of local state.

It would be ridiculous to believe that, in order to know whether an atom has a free electron, there should be a global, synchronized ledger of the whereabouts of all electrons in the universe. Or – to use a natural example with somewhat larger objects – that the status of the trillions of cells in our bodies should be registered on a global body tracking system. The cells already embody the changes: the levels of oxygen in the blood cells, for example, determine whether they offer oxygen to organ tissue cells in exchange for carbon dioxide. Then the reverse happens once the blood cells reach the lungs, where they exchange carbon dioxide for fresh oxygen. These interactions are determined on a cell-to-cell basis, without reference to any body-wide ledger of blood cell oxygen levels.

Holochain’s premise is that it’s equally unnecessary for all nodes in a decentralized application to hold a record of everyone’s state, as happens in blockchain, or for nodes to reach consensus before a user commits a state change to their own record. The local embodiment of state can act as its own authority, as long as the structure of data is tamper-proof. Also, only information necessary for larger-scale coordination needs to be widely shared, with all shared data strongly tied to where it came from. In this way, Holochain is an agent-centric system for decentralized computing: the users (agents) themselves are the definitive source of information in the system.

Okay, with those principles established, let's look briefly at some of the architecture that makes Holochain's data structure tamper-proof and scalable.

Holochain’s Core Components

Source Chain. Each user hosts their own data on a source chain: a cryptographically signed record of everything they have ever done or said within Holochain applications, stored locally on their machine. Source chains, like blockchains, are hash chains, which associate a cryptographic fingerprint (or ‘hash’) with every record (or ‘entry’). Hashes are unique to the particular data they represent: changing just one comma to a period in a thousand-page book would result in a completely different hash.
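
To make the hash-chain idea concrete, here is a minimal, hypothetical sketch in plain Python (not Holochain’s actual API): each record’s hash covers both its content and the previous record’s hash, so altering any earlier entry breaks every later link.

```python
import hashlib

def entry_hash(content: str, prev_hash: str) -> str:
    """Fingerprint an entry together with the hash of the entry before it."""
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

# Build a tiny three-entry chain.
chain, prev = [], "0" * 64  # placeholder 'genesis' hash
for content in ["joined the app", "sent 100 units to Bob", "sent 50 units to Carol"]:
    h = entry_hash(content, prev)
    chain.append({"content": content, "prev": prev, "hash": h})
    prev = h

# Changing even one character in an old entry yields a completely different hash,
# so tampering is evident to anyone who holds the later records.
tampered = entry_hash("sent 900 units to Bob", chain[1]["prev"])
print(tampered == chain[1]["hash"])  # False
```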

DHT. Data that needs to be shared with the network is published to a shared environment called a distributed hash table, or DHT. Your tweets and comments in a Twitter-like app, your ride requests in an Uber-like app, your edits in a collaborative document editor… all of these are on the DHT. (Data that doesn’t need to be shared can remain private to your source chain.) Each user running a Holochain app stores a tiny slice of the app’s DHT, in addition to hosting their own data.
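
As a rough, hypothetical illustration of how each user ends up holding only a small slice of the shared data (plain Python, not Holochain’s real networking code), published entries can be assigned to the few agents whose addresses fall ‘closest’ to the entry’s hash:

```python
import hashlib

def addr(value: str) -> int:
    """Map any string into a shared integer address space."""
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

agents = [f"agent-{i}" for i in range(20)]  # users running the app
REPLICAS = 3                                # illustrative replication factor

def holders(entry: str) -> list:
    """Pick the agents nearest to the entry's address to store that entry."""
    a = addr(entry)
    return sorted(agents, key=lambda agent: abs(addr(agent) - a))[:REPLICAS]

print(holders("tweet: hello world"))  # only a handful of agents store this entry
```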

DNA. Each application’s rules for sharing data are written into the application code itself, known as DNA. The DNA is what says that this is an application for tweeting (which involves sharing data with a certain structure) versus calling rides or co-editing documents (which involve sharing different data with different structures). It also defines who can join the app’s network: can anyone join, do you need an invitation code or to pay, or are there other criteria? A copy of the DNA is hosted by every application user, which means that any user is able to validate whether data being shared to their slice of the DHT conforms to the application’s rules.
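
Because every user holds the DNA, every user can run the same deterministic validation rules against data arriving in their slice of the DHT. A hypothetical rule for a Twitter-like app, sketched in plain Python rather than Holochain’s actual Rust-based HDK, might look like this:

```python
def validate_tweet(entry: dict) -> bool:
    """Every node applies the same rule before agreeing to hold an entry."""
    return (
        isinstance(entry.get("author"), str)
        and isinstance(entry.get("text"), str)
        and 0 < len(entry["text"]) <= 280
    )

print(validate_tweet({"author": "alice", "text": "hello"}))  # True: accepted
print(validate_tweet({"author": "alice", "text": ""}))       # False: rejected by peers
```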

But What Makes It Tamper-Proof?

Okay, cool structure maybe, but why can’t someone simply alter their source chain and misrepresent their data to others?

You can think of a source chain like a diary: each page contains a header, which identifies the fact that something happened and when it happened, and an entry, which contains the content of what happened (such as “I sent 100 units of currency to so-and-so”). Some of these entries might have been published to the DHT and others might not have, but in all cases the headers, which contain the hashes of the entries, are shared to the DHT. In other words, I may or may not have published the contents of a given diary page, but everyone is able to see that I wrote something on the page, and they are able to see the unique fingerprint that corresponds to the contents of the page (which would completely change if I were to ever modify the contents even slightly).
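
Here is a minimal sketch of that diary structure (hypothetical field names, plain Python, not Holochain’s real record format): the header, which is always published, carries a timestamp, a link to the previous header, and the entry’s hash, while the entry content itself may stay private on your machine.

```python
import hashlib
import json
import time

def make_record(content: dict, prev_header_hash: str) -> dict:
    entry_hash = hashlib.sha256(
        json.dumps(content, sort_keys=True).encode()
    ).hexdigest()
    header = {
        "timestamp": time.time(),
        "prev_header": prev_header_hash,
        "entry_hash": entry_hash,  # the fingerprint is public even if the content is not
    }
    return {"header": header, "entry": content}

record = make_record({"memo": "sent 100 units of currency to so-and-so"}, "0" * 64)
published_to_dht = record["header"]  # everyone can see that *something* was written
kept_private = record["entry"]       # the page's contents may remain local
```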

Let’s say you and I are doing some transaction such that I need to send you 250 units. The app’s rules (encoded in the DNA) will say that in order for this transaction to go through, you need to verify my account balance, which means that I need to show you enough information from my diary in order for you to do so. (Remember, there is global visibility of local state, to whatever degree is necessary for a given action to be validated.) Your computer can very quickly add up all the pluses and minuses on the pages in my transaction diary, my source chain. You know that I’m not hiding any pages because you can check the DHT and see exactly how many pages have writing on them. And there’s no way I could have altered a previous page without making it obvious I’ve done so, because every action I take is a new timestamped event with a new header and new unique hash that also gets shared to the DHT. Plus a system of header monitoring by ‘neighbors’ ensures that I’m unable to fork or roll back my source chain without getting flagged. If anything doesn’t add up, or if it seems like something has been obscured, the transaction simply fails the validation rules and does not take place.
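
As a toy illustration of that validation step (hypothetical data shapes, not the actual Holochain API), the receiving party replays the disclosed entries, checks them against the number of headers already published to the DHT, and only then accepts the new spend:

```python
def spend_is_valid(disclosed_entries: list, published_header_count: int, amount: int) -> bool:
    """Validate a spend against the counterparty's disclosed source chain."""
    # 1. The sender must disclose as many pages as the DHT says they have written.
    if len(disclosed_entries) != published_header_count:
        return False  # a page is being hidden
    # 2. Add up all the pluses and minuses they have disclosed.
    balance = sum(entry["delta"] for entry in disclosed_entries)
    # 3. The new spend must be covered by that balance.
    return balance >= amount

entries = [{"delta": +300}, {"delta": -40}]
print(spend_is_valid(entries, published_header_count=2, amount=250))  # True
print(spend_is_valid(entries, published_header_count=3, amount=250))  # False: hidden page
```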

But What Makes It Scalable?

Most of blockchain’s challenges with scalability are really challenges of managing global consensus. Since Holochain maintains data integrity without the need for consensus, it doesn’t run into the same limitations.

There is no need for universal agreement. Keeping with our currency example for a moment: how many computers need to confirm our transaction in order for it to be executed? If this were blockchain, all nodes would need to come to terms with one another and keep a record of our transaction forever. In Holochain, the transaction is complete when just two computers have written it: yours and mine. Then, afterward, we publish the data to the DHT, and randomized groups of nodes store it so that others can confirm for themselves, later as the need arises, that we’re representing our states accurately. Data validation is party-to-party, just like for all the cells in our body, just like for all the atoms in the universe. This feature alone eliminates all of the computing required to reach global consensus.
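
A hypothetical sketch of that two-party completion (plain Python; HMAC over a per-agent secret stands in here for the real public-key signatures Holochain uses): both counterparties sign the same transaction entry and each appends it to their own chain, with publication to the DHT happening afterwards for later auditing.

```python
import hashlib
import hmac
import json

def sign(agent_secret: bytes, payload: dict) -> str:
    """Stand-in signature: HMAC over the canonical JSON form of the payload."""
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(agent_secret, message, hashlib.sha256).hexdigest()

tx = {"from": "alice", "to": "bob", "amount": 250}
countersigned_entry = {
    "tx": tx,
    "signatures": {
        "alice": sign(b"alice-secret", tx),  # each party signs the same entry...
        "bob": sign(b"bob-secret", tx),      # ...and appends it to their own chain
    },
}
print(countersigned_entry["signatures"])
```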

There is no need for universal state. It’s true that many types of data do need to be published to the network – tweets and comments, ride requests, document edits, and so on, to keep with our earlier examples. It’s also true that an app sometimes needs system-wide tracking to monitor aspects of overall activity, just as the body has ways of monitoring and responding to changes in blood oxygen levels overall; this is another reason that the DHT often needs to store some amount of shared data. Unlike on a blockchain, though, each piece of data on the DHT is replicated only enough times to make sure the needed data is always available, including when the author might be offline. We’re talking about maybe dozens of replications in Holochain rather than potentially thousands or more in blockchain. And this limited replication is strategically distributed across all the users participating in the app, which means that each user performs just a little bit of extra work to hold a very small portion of the DHT.

Each DHT contains data for only one application. A blockchain contains all the data from all the applications running on that chain: every Ethereum node, for example, contains all the historical data for all the dApps running on Ethereum. In Holochain, each app has its own shared storage space in the form of its DHT. As Holochain architect Arthur Brock put it recently, “If I just want to run a Twitter-like app, why should I also have to run your crypto exchange, gambling app or collectible cartoon animals? On Holochain, users only run the apps they actually use.”

Each new user to an application adds storage and computing capacity. In blockchain applications, where miners and stakers write and store data, the network capacity is constant no matter how many nodes are added, so increased user activity increases the strain on computing resources. Holochain applications are entirely hosted by the users themselves, so as the demand for the app grows, so does the computing power to run it.

A Truly Peer-to-Peer Network

Let’s use one more example, a social networking application similar to Facebook or Instagram, to summarize the different approaches to data integrity taken by Holochain and blockchain. Let’s also add in the approach that today’s centralized social networks take, as a point of additional comparison. In this social network, you do all the things you’re accustomed to: posting text and images and videos, commenting on other people’s posts, and chatting privately with friends.

In the centralized scenarios common today, all of the data – including your photos and comments and messages – is held by the company that owns the platform. The social network is supposedly secure and scalable by virtue of its being centralized: the company takes responsibility for maintaining the network, and they are paid to do so in one or more ways. As we have seen, though, our data is often not as secure as these companies might like us to believe: data breaches are extremely common (since data stored in one centralized place makes a honeypot for hackers), plus our data is routinely sold to advertisers or leaked to other third parties (the Cambridge Analytica scandal was but one extreme example).

A blockchain version of a social network would theoretically be hosted by the miners or stakers running the blockchain nodes. Integrity would be ensured by virtue of your data being written to the blockchain only after a consensus of nodes determines the data to be legitimate, then being replicated across all nodes and stored by them forever. But the data load would be extremely high in this scenario, creating a major scalability problem. Many blockchains and blockchain apps try to solve this problem by reducing the number of nodes that need to reach consensus, or by doing much of the computing or hosting work off-chain on centrally authorized servers. These approaches all compromise on decentralization and point us back, in one way or another, toward the first, centralized scenario.

(In point of fact, even a truly on-chain social network would be only nominally decentralized, since the miners and stakers, who need to be paid to do their job just like a centralized company does, effectively become a new kind of intermediary. But this is the subject of another article.)

In a Holochain version, the entire network – all of the data and even the application code – is hosted by the users themselves. It’s truly peer-to-peer. Data integrity is ensured through global visibility of local state, established on an as-needed basis. You share your photos, comments, and messages to the network through a shared table called a distributed hash table (DHT), but you remain the primary authority on everything you’ve published. The network is scalable by the fact that no global consensus is necessary, by the fact that the DHT involves only as much data and replication as necessary, and by the fact that every user shares a small piece of the load. The more users join the network, the more capacity it has for scale. And there are no intermediaries at all – no one who needs to be paid or trusted to write and store your data on your behalf.

If a peer-to-peer approach is that much better, why hasn’t everyone been doing it this way all along? One factor is probably the technological complexity involved, but another is probably that it’s difficult for people to imagine that everyone could host their own data and not be able to misrepresent themselves. Even though Holochain has been around in some form for several years, its approach to data integrity is enough of a departure from blockchain’s that developers and users are only just beginning to understand its potential, similar to how it took several years for Ethereum’s capabilities to be widely understood.

That does seem to be changing, however, especially since Holochain’s refactored state model went live and is proving to be many times more performant than previous versions – and also since so many new applications are preparing to launch on Holochain. And we can expect greater and greater awareness of Holochain as more applications go live in the coming months.

At a time when blockchain still has so much potential for growth, it may seem odd to be already talking about a post-blockchain application space. But given the scalability challenges blockchain faces and Holochain’s readiness to leapfrog these issues, it might be time to begin thinking outside the blocks.

One way to stay tuned about Holochain and Holo is to sign up for the occasional newsletter.

Wednesday, 21. July 2021

ShareRing

Transcript: Beyond Bitcoin – DeFi with Algorand, Algomint, and MeldVentures

In our very first episode we interviewed two of the industry’s leading DeFi experts to explore their views on current and future use-cases, as well as the impact on business and consumer markets. This episode is available with English, Spanish, Chinese, Korean, and Japanese subtitles.

Full Transcript: Beyond Bitcoin – DeFi with Algorand, Algomint, and MeldVentures

Leyla: Hello everyone and welcome to Beyond Bitcoin, a new monthly series where we break down the latest trends in cutting edge technology, blockchain, and beyond- And why it matters to you!

My name is Leyla and I’m co-hosting on behalf of BSN Global. And joining me is ShareRing’s Communications Manager Timothy Kingery.

Timothy: Thank you Leyla! 

and with us today for our series premiere we’d like to welcome our industry experts:

Summer Miao, Head of Community Asia Pacific for Algorand Foundation

and Michael Cotton, Founder of MeldGold and Co-Founder of MeldVentures and Algomint, here to share their insights and answers to the real questions that matter about DeFi, thank you both for taking time out of your busy schedules to join us!


Summer: Thank you

Michael: Thanks for having us


Timothy: Starting off, can you tell us a little about yourselves and how you came to work for one of the top blockchain companies in the world?

Michael: Thanks Timo, 

I started in the gold industry straight out of high school; with the money I’d saved up working through high school, I started building my own gold business.

That business went on to become one of the largest gold companies here in Australia; it now employs 60 staff, is spread across the country, and includes one of the highest-rated security vaults for precious metals in Australia.

And then around 5 years ago I started looking at blockchain tech and really loved the concept and the idea. For me, I saw the true innovation and the future it really held, and then we started building out this concept of bringing gold to blockchain and really creating a gold currency once more.

Since then we’ve gone on to partner with companies such as MKS PAMP, the largest gold trader in the world, based in Switzerland, Heraeus out of Asia, and Perth Mint here in Australia.

And then I head up MeldVentures, an Australian-based technology investment company. And MeldGold is really moving its way toward becoming the digital gold standard across the world.

Leyla: Wow, that is incredibly impressive, Michael

Your passion for gold and innovation have clearly paid off, so thank you so much for sharing your experience with us today.

Michael: Thank you

Leyla: Summer, how about you? What brought you here?

Summer: 

Okay so actually, it’s an interesting story with a series of happy coincidences-

So I joined the blockchain space in early 2018 – back then I was still working in the traditional education industry. 

Then there happened to be a blockchain startup looking for a senior marketing fellow. That was the first time I heard the word “blockchain”, and I did some research out of pure curiosity.

You see, 2018 was the year when Bitcoin turned 10; however, many were still skeptical towards Bitcoin and the technology behind it. And after extensive research, I gained a better understanding of blockchain, and I was amazed by the beauty of the idea and the great potential behind it.

So I thought to myself, okay, “This might be the next generation of the internet, and look how the internet has changed our lives and become something that you cannot live without. This might be something.” So with these expectations, I decided to change the direction of my career and took a leap into the blockchain industry.

Then, one year after, another happy coincidence happened; Algorand Foundation started seeking a team member in Asia for local community building. That’s when I seized my moment and luckily I became part of this great team.

Timothy: Excellent story, I really resonate with that view of it being like the new internet! Thank you for sharing that with us.

And judging by their popularity and lead in this industry, it’s easy to say joining Algorand was a wise choice!

Your team has climbed to the top of the charts in terms of market cap and has established itself as an industry-leading DeFi platform of choice for many new DeFi blockchain companies, with Algomint here being just one of many promising projects.

But for those of us in the industry two years ago, DeFi wasn’t really in the headlines at all. What stuck out to you about Algorand, of all blockchain companies? 

Summer: Okay so without a doubt, Algorand is a world-renowned project which has attracted global attention with everything it’s launched, and as you might know there is intense competition in the public chain sector.

So to me, Algorand stood out from the rest for a few big reasons:

Firstly, and maybe most importantly, is the Algorand team and its leadership:

Our Founder, Silvio Micali, is also a Turing Award winner. He is the co-inventor of many widely used cryptographic protocols, such as Zero-Knowledge Proofs, Verifiable Random Functions, and Probabilistic Encryption.

Many of these have also served as the foundation of modern cryptography, and under his leadership we’ve got some of the greatest minds in the industry leading us. That alone gives me confidence that Algorand will succeed.

Then the technical aspects:

Algorand has managed to find an approach that solves the blockchain trilemma without any compromise.

So, a little background here: the blockchain trilemma is one of the main challenges that blockchains have faced throughout the decade, whereby the three key objectives:

Security

Scalability

And Decentralization

[Diagram of the blockchain trilemma. Source: SEBA BANK AG]

Summer: Couldn’t be fully achieved without sacrificing at least one of them.

So Algorand, by employing a novel pure Proof-of-Stake consensus algorithm, solved the trilemma and created a truly secure, fast, and scalable blockchain platform.

One fun fact: the name Algorand is a combination of two words, Algorithm and Random – so we call our approach Algorand because we crucially rely on algorithmic randomness for its efficiency.

And it’s also worth mentioning that Algorand has a fast-growing ecosystem, and many use cases are from traditional industries. So, a number of companies, actually so far seven hundred have chosen to build on Algorand. So all of these reasons certainly helped my decision to apply, yeah.

Timothy: That’s a fantastic explanation, and with that in mind I can see how it was an easy choice!

And that’s really impressive regarding your Founder too, Silvio, I didn’t know he played such a big role in developing those protocols! 

Zero-Knowledge Proofs, Verifiable Randomness, that kind of stuff is used by so many different blockchains, that is really fundamental to this industry, that’s amazing.

We’ll include a link in the description below if you’d like to learn a little more about these after the show. But before we get too far, we’d like to break down in the simplest terms possible what exactly IS DeFi? 

And what impact will this have in people’s lives?

Summer: Okay, I’ll take this question-

DeFi is short for “decentralized finance,” a term for a variety of financial applications in cryptocurrency or blockchain aimed at disrupting financial intermediaries.

To put it more specifically, DeFi stands for financial services with no central authority. So DeFi takes out traditional elements of the financial system, such as exchanges, or banks; and it replaces the middleman with a blockchain smart contract.

Okay, so you may ask: So no intermediaries to steal a part of my cake, what other advantages does DeFi have compared to traditional finance? And what impact will it have for the world?

So, first of all, DeFi will make financial services more accessible and everyone will have a fair chance in the financial system. The information required when creating a bank or brokerage account, such as a government-issued ID, Social Security number, or proof of address, is not necessary to use DeFi. So this is especially helpful for the unbanked in Africa and Southeast Asia who have no access to basic financial services.

Second, no central party or authority can reverse transactions or turn off the services. So you can control your assets independently, all transactions are publicly traceable, and you know exactly where your money is all the time.

And last but not least, transactions are enforced and governed by the “smart contract.” Basically, money is programmable according to predefined computer code, which is publicly available for anyone to audit. With these, DeFi can be applied in real-life services. 

Take insurance as an example – just think of regular insurance – except that the rules of insurance events are encoded and the claims are paid out automatically – how convenient is that!? 

Michael: And to add to what Summer is saying as well, I think the other piece of this is not just allowing users to access these services, but it’s really giving some of the brightest minds in the world the freedom to build out these products.

And the barriers to entry for financial products are extremely high, whereas in the DeFi space and the blockchain space, you know, those with the best ideas and the strongest capability can actually deliver valuable products with a lot more ease.

Leyla: Well, that is truly fascinating, thank you for breaking that down for us

So, you explained DeFi includes a number of financial products and services, 

Could you tell us about some of the most popular and how they compare to their counterparts in traditional finance?

Summer: Okay so, imagine trying to invest a single dollar in a way that would have any tangible return as an asset without being high risk? It’s virtually impossible. Even with financial inclusion, price point accessibility is not enough.

Through DeFi solutions, users will be given the chance to take part in trading global tech stocks through a digital tokenization model that divides one share of a traditional equity into individual micro-equity tokens, each representing 1/10,000 of the share’s value.

This will advance microfinance by moving beyond micro-loans into micro-equity that can create wealth regardless of socioeconomic standing. So, this is actually the focus of the work of MESE, one of our ecosystem partners; they are providing accessible wealth-creation tools to the underbanked. MESE is a micro-equity stock exchange powered by the Algorand blockchain, and they aim to let users invest very small amounts in selected stocks.

This is made possible because the negligible transaction fees on Algorand allow MESE to provide micro-equity token shares that would not be viable on a chain with substantial transaction fees.

So you will be able to hold one crypto-based micro-equity token of big companies like Microsoft, Apple, Tesla, Twitter, Amazon, or Google, which is to hold a cryptocurrency token that represents a 1/10,000th share in a real stock.
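
To put a rough number on the 1/10,000 figure (illustrative prices only, not MESE’s actual pricing):

```python
share_price = 2_000.00        # e.g. roughly one share of a large tech stock, in USD
fraction = 1 / 10_000         # one micro-equity token represents 1/10,000 of a share
token_value = share_price * fraction
print(f"one micro-equity token is worth about ${token_value:.2f}")  # about $0.20
```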

Michael: And to add to that as well, and I think Summer touched on it briefly in terms of micro-loans, the other really interesting part this opens up is the ability to take micro-shares or assets in companies like Google or Apple and actually access loans against those assets.

In the traditional space that’s an incredibly difficult task and in most cases non-existent. But, within DeFi, the ability to do that in a trustless way, in under a minute, and access a loan at really competitive interest rates is a really fascinating piece of the puzzle.

Because, number one, it allows people to access opportunities they might have before them without having to sell their current holdings on other platforms, yet still take advantage of this opportunity –

Or, alternatively, they might just need a short-term loan, they might be short on funds, or have an emergency, you know, it happens to the best of us.

So the ability to access those funds quickly, easily, and at a much better rate is really important. 

Tim: Very Cool!

Some of those stocks are prohibitively expensive, the last I checked you can’t buy a share of Google for under $2,000.

So will this be as easy as buying stocks with any other app? Or will it still require some cryptocurrency knowledge?

Summer: 

MESE aims to make the investment as easy as possible, so the experience will be no different than using any other app.

I believe the key to mainstream adoption is to make blockchain invisible for users. While the user is using the service based on blockchain technology, they won’t need to understand or even realize it’s blockchain related. 

Just like nowadays, we are using all the services enabled by the internet, but for most of the users, they don’t quite care what technology is behind this.

Tim: Well said,

Yeah, I think a lot of us would agree with you there, most don’t really care how the TV works on the inside, we just want the entertainment that it provides. 

I think as more blockchain products become more user-friendly focused like ordinary apps, we’ll see a huge increase in adoption.

So these DeFi loans seem like a no-brainer, you know, if you do the math comparatively with their traditional finance counterparts, but if you’re already savvy in Bitcoin you’re probably not too excited at the prospect of low double-digit returns on the year.


But for the millions, or billions of people who don’t necessarily enjoy the thrill or inherent risks of investing, when do you think DeFi loans would be in the hands of ordinary users, like my neighbor Bill? 

Is anyone working on making this like, really mainstream?

Summer: Yes, that’s what we are heading to, right? 

I think the key question behind this is “when will blockchain technology obtain mainstream adoption?” or “Is blockchain technology business ready?” 

So while the benefits of blockchain are clear, for blockchain use cases such as DeFi to be welcomed into mainstream adoption and become an alternative for real-world use cases, they have to solve issues like high execution costs, scalability and speed, and security.

To advance, the industry needs a next-generation, open-source, permissionless blockchain that is decentralized, scalable, and secure, with built-in transaction speed and low fees to power real-world use cases.

I believe Algorand is one of the ideal platforms to push blockchain into mainstream adoption. And hopefully your neighbor Bill and my friend Jane will soon enjoy the blockchain services with ease. 

To be honest, it will take some time but it will be worth the effort.

Michael: I think too, as Summer said, blockchain really does need to be invisible for broader adoption,

I think it also creates opportunities for people like Bill to understand the technology and really come to terms with the idea that it’s secure, and it’s an incredible tech, but it’s also not that scary.

Another benefit of DeFi is that the usual roadblocks I mentioned earlier for entrepreneurs and developers to create interesting products are dramatically reduced.

So there is definitely a great variety of products both live and coming, and as we do start to see new generations of Blockchain technology coming to market like Algorand, we are seeing more secure options arise, which inherently means broader adoption is coming.

For example, at Meld Gold we are working with several new projects launching on Algorand blockchain to create yield with Gold, something that’s almost unheard of in the traditional sector. 

But the brilliance of Algorand and several of the products building on top of this incredible technology is they are making it easier and easier for people like Bob to access these kinds of opportunities.

Timothy: Gold Yielding!?

If I’m following you correctly, that’s a way to earn an Annual Percentage Yield from gold you own that you would otherwise be just sitting on? 

Can you tell us a little more about this?

Michael: Yeah, of course, traditionally assets like physical gold have been cumbersome to deal with, store, and move. 

But with Blockchain technology, once that gold is digitized, you can now own that gold. It can be vaulted and stored, trustlessly accounted for, and have a digital representation of that gold on blockchain, which opens up a whole new world of opportunities. 

In reality, it’s essentially unlocking the value within gold that traditionally would not be accessible. Now with your gold represented digitally you can utilize DeFi via platforms, like some of the lending platforms coming to Algorand to earn yield on that gold like you would in dollars or other currency.

Timothy: Wow, yeah that’s the first I’m hearing of anything like this, that’s brilliant!

So aside from the consumer products and benefits we’ve discussed, are there any DeFi products or services that you think would be useful for businesses?


Michael: Most definitely, all the advantages Blockchain brings to consumers play out for businesses as well. As you can imagine they are just larger transactors and larger equity holders, so the value is just multiplied. 

By the same token, the ease of use for businesses is also key; it needs to be invisible for businesses just like it is for consumers. And with all the advantages Blockchain brings and how incredibly powerful the tech is, imagine if it were as user-friendly as the apps we are using today.

Summer: Yes, It’s a wonderful thing to see the convergence between the DeFi products and the traditional business. 

I would like to share a good example: 

Opulous, it’s one of the DeFi use cases building on Algorand. 

It’s a decentralized peer-to-peer DeFi loan platform that provides artists with a more direct and accessible way to secure funding. Also, it offers everyone a fair chance to invest in one of the world’s fastest-growing and most lucrative industries: music. 

Opulous loans are backed by future music royalty earnings and copyrights. Unlike cryptocurrency, these assets have predictable future values and carry no risk.

The platform will analyze a borrower’s music data and copyright assets to ascertain how much money they are eligible to borrow. It will then establish a custom loan repayment plan based on their predicted future earnings.

Musicians currently earning a steady royalty income will be eligible to receive up-front loans equal to the value of revenue they generate over 12 months, paying as little as 4% in interest rates.
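
As a rough worked example of those terms (illustrative numbers only, not Opulous’s actual pricing):

```python
monthly_royalties = 1_000            # illustrative steady royalty income, USD per month
loan_cap = monthly_royalties * 12    # up-front loan equal to 12 months of revenue
annual_interest = loan_cap * 0.04    # "as little as 4%" interest on that principal
print(loan_cap, annual_interest)     # 12000 480.0
```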

Michael: And beyond just Bob we’re also now seeing institutions and startups and corporations looking at what DeFi can offer. 

For example, on the start-up front, a tech startup raises $5,000,000 to develop or expand their project, and rather than letting it sit earning bank interest, they can utilize DeFi to generate stronger, meaningful returns to add to their development fund, really increasing their runway and their likelihood of success.

And I agree with Summer about Opulous, you know, as one of the more interesting use cases, allowing artists to digitize future music royalties on blockchain and reducing that friction in market entry while delivering value both to creators, investors and fans is really interesting.

And I think this really opens up what we’ll see in some of this next generation, which is the kind of concept where, for example, farmers could take their future crop yields, digitize them, and gain loans at better interest rates with less friction.

Leyla: So fascinating, well, you both touched on a lot there, but I’m intrigued about the applications to music.

Is this something that would combat the piracy problem causing musicians to lose revenue since file-sharing and torrents became so popular?

Michael: Great question, Blockchain technology via NFTs does help solve this. 

This is a fairly new area with a lot of development happening as we speak, but even right now, a song can be loaded into an NFT via a hash code and played back through a decryption step, which means only an original can be played.

But, the brilliance of that is if you get sick of a song, you can actually trade and sell it on the secondary market.

And then what we see happening is royalties can then be attached to those NFTs and the actual original creating artist can still receive royalties on-chain, completely trustlessly, over time.

And with Algorand this can all happen at lightning-fast speed, at incredibly low cost, and with each component in a trustless manner.

Timothy: Wow, that is a fantastic concept, I think a lot of musicians would be thrilled to learn about this, but I think we’ll have to save a deep dive into NFTs for another episode!

Moving forward though, with this new array of DeFi financial products on the way, do you see this causing more banks to downsize or close? 

Are banking jobs or loan officer jobs being eliminated?

Michael: Look, I think it needs to be looked at from a different lens, and I think that’s the innovation lens. 

If DeFi becomes truly a challenger to traditional finance, which I think it is well on its way to being, we won’t see the banking industry die out.

I think we’ll see them be forced to change: adapt, innovate themselves. The difference is that normally the changes aren’t this drastic. This is one of the rare occurrences where we see huge leaps forward and the market trying to catch up while being wary of what it holds.

Summer: Yeah, I agree with Michael, so for central banks, cryptocurrencies are now considered a potential threat. However, we can see that when facing the threat, banks are making positive changes to adapt to a new technology trend. 

As one would say, If you can’t beat them, join them!

Banks and governments hold the most power in the world; it may be naive to think that they will stand by as crypto and blockchain replace them.

At this point, nearly all world powers have considered releasing a digital version of their currency from their central bank — the main reason is to keep Bitcoin and crypto from gaining too much momentum.

The new reality looks like centralized authorities adopting blockchain solutions to avoid the risk of becoming extinct.

Leyla: I think a lot of us would be happy to see financial services across the board improve, even if they’re simply forced to, to remain competitive!

How do you see DeFi maturing over the next 5 years? 

And what sort of products do you think will go mainstream and how will that affect the current players in the industry?

Michael: I think we’ll see a lot of the current players adapting to meet the market, so we’ll see a lot more adoption from the financial sector and the traditional sector coming to blockchain.

With these additional funds and power coming into the marketplace, we’ll see further accessibility and broader acceptance. 

As always, you know, additional competition brings additional innovation; the more competitive the marketplace is, the harder those within it work to retain an edge, stay competitive, and meet the market’s demands.

In terms of interesting new products and concepts coming into the marketplace, there’s a huge spectrum of great things coming, but one of the ones that I like most is trustless funds management. 

This is kind of a new concept, and the idea is:

Rather than giving your money to an institution where you deposit and you wait to see what the sort of interest levels are and really trusting a third party to hold those funds and deliver that return–

It’s actually done via smart-contracts, which is a trustless vehicle that allows you to:

One, see all the investments that are being made, but also then earn a return via that vehicle without having to trust that third party, really removing that element of counterparty risk; someone disappearing with the money isn’t actually possible in a vehicle like this.

So it really is a new level of transparency for investments.

Timothy: Wow, yeah that’s really interesting, so just to clarify for our audience, you said there is no institution to deposit my money with, so this is a non-custodial solution?

So the money is in my possession and in my control the whole time rather than with the fund manager, but still being traded?

Michael: Yeah so, Tim it’s actually non-custodial in the sense that it’s not actually controlled by anyone else,

It’s actually controlled by the smart contracts, so what it means is the only one who can actually access and withdraw the funds is you. 

I think the other piece to add to that, which I think is really interesting as well, is that with this new generation of automated or autonomous funds management any size of investment can be made,

it’s not a matter of investing a million dollars; you could invest as little as $10 in a project like this, which is another interesting element that really changes how we’ll see the new financial structures.

Leyla: I’m always wary of trusting anyone with my money, and I think a lot more people would be receptive to the idea of managed investing if they could maintain control of their funds at all times.

While we’ve been talking about more of the basic level of use cases, what interests you the most about the DeFi space?

And is there any sort of milestone or advancement you’re really excited about? 

Summer: Okay, I’d like to share with you the recent updates regarding Algorand:

We will continue to grow our ecosystem in various aspects including DeFi and NFT, and especially the stablecoin ecosystem. 

So, there have been a lot of stablecoins issued on the Algorand blockchain, such as USDT, USDC, the Euro (Monerium), the Canadian dollar (QCAD), the Brazilian Real (BRZ), and the Turkish Lira (BiLira).

We are also collaborating with our partners on digital versions of local currencies, such as the Mexican Peso, the Indonesian Rupiah, and the Japanese Yen.

As a project we are investing a lot of our resources and partnerships into working on various stablecoins beyond the US dollar.

This way, it will enable more DeFi use cases with massive social impact. And we expect to unleash the potential of stablecoins and accelerate the convergence between decentralized and traditional finance.

Timothy:  Very cool! 

For those of our audience who may not know what stablecoins are, these are digital tokens backed by and redeemable 1-for-1 for their fiat counterparts.

The primary purpose of this is to make them easily tradable across all cryptocurrency exchanges and DeFi platforms.

While there are some tokenized forms already available for most common fiat currencies, such as the US Dollar with USDT,  USDC– Algorand is forging ahead to add many more, so that’s definitely something to keep an eye on!

And Michael, what are you most excited about coming up?

Michael: We have a really strong focus on what the future of DeFi looks like, what does the next 10 years look like for this next generation of finance?

Working really closely with a lot of the Algorand building community, I get a first-hand view of what’s coming and what’s being developed.

So for me, you know, talking with all these other projects, the focus for us is on creating the most collaborative and powerful ecosystem we can, one that fills in all the pieces of what a traditional financial ecosystem requires, and ensuring that in that development we’re also making sure it’s a really inclusive environment.

This is something the entire world can adopt and participate in, and it doesn’t exclude any particular party, which I think is a really important part of where we’re heading.

Leyla: That sounds very exciting, Can you expand on the ecosystem you envision?

And what sort of other DeFi financial products do you see becoming commonplace in the next 10 years?

Michael: 

I think the key thing to keep in mind is we shouldn’t force technology like blockchain where it doesn’t belong. Likewise, where it is more valuable than the alternative option, it should 100% be adopted.

In broad terms, at its core, anything that removes the need to hand over control of your assets, such as dollars, or some of the new generation of assets, such as your personal information or medical history, is something where blockchain fits really perfectly.

The other side of this is removing friction and cost, adding speed, or creating certainty in a transaction. This is another set of components that bring real value within the blockchain space.

This ecosystem could expand into things like controlling your own personal identity. This would allow data to be delivered to you as you like it, but also control who actually gets to see your data. 

For example, getting paid to see ads for products you are actually interested in, versus advertising companies getting paid for that same information.

Taking the money you’ve earned onto a loan platform to generate great yield, while verifying your identity on that platform via a KYC token in your wallet without actually giving up your personal data, is a really good stepping stone toward what we can see coming from this.

And then finally, taking that yield you’ve just earned and, via an atomic transaction (which is like a trustless swap with a third party that you can verify is a good actor through their KYC token), exchanging it in the same transaction is something really useful and something you could not do before…

and this is just the beginning!


Leyla: Incredible!

Well guys, honestly, I have many more questions but we are out of time!

For myself and those viewing who would be interested in learning more about DeFi and the types of products we’ve talked about, where would you recommend we start?

Michael: Thanks very much for having me, Tim, Leyla, and BSN Global, really appreciate it – and for making this possible.

For anyone interested you can visit the Meld Ventures website where you can see all the projects we are working with.

You can also view Algomint on Medium and Twitter; we just released a really good article about DeFi and why it fits so perfectly within the Algorand ecosystem.

And we have a whole series of other articles coming out, so we have regular content coming to market.

Summer: And thanks for having me in this very first session of Beyond Bitcoin. 

I really like the idea of putting the blockchain in a way that is easy and fun. 

I’ve been in the industry for around 3 years – not long, but enough to witness the rapid and amazing development happening every day. I am also glad to see the shift in people’s opinion towards blockchain and the growing interest coming from traditional industries.

I’d like to take this chance and ask everyone to join the journey to witness the development of this groundbreaking technology. Maybe you missed the early stage of the internet boom, but you can still catch the trend of blockchain!

So follow Algorand on Twitter, Facebook, and LinkedIn; you will be amazed by what you can find.

Stay tuned! Thank you guys!

Timothy: Thank you both again for coming on the show, we look forward to seeing how your companies continue to shape the future of DeFi and the larger blockchain industry.

And for all our viewers, thank you as well for joining us. And before you go: if you have any remaining questions for our guests about tonight’s episode, and you can’t find the answer in any of the links below–

Leave them in the comments below!

Those with the most likes will be short-listed for personal replies by tonight’s guests! And If you liked this episode, give us a thumbs up and subscribe with the bell icon to tune into the next one!

Thank you guys!

The post Transcript: Beyond Bitcoin – DeFi with Algorand, Algomint, and MeldVentures appeared first on ShareRing.Network.


Cognito

A New Era: Electronic Know Your Customer Verification

Technology has evolved so rapidly over the past few decades, it can be hard sometimes for businesses to keep up. KYC is an essential part of many industries, but the slow nature of manual or in-person KYC doesn’t mesh with the demands of the modern customer. Efficiency and speed are the norm, and everyone is used to nearly instantaneous results. With eKYC, or electronic know your customer...

Source


KuppingerCole

Martin Kuppinger Dr. Srijith Nair David Doret: Panel Session - The Future of Access Management




Katryna Dow: Everything Will Be Tokenized: The Future of Identity

We're on track towards a world where everything that can be, will be tokenized. Tokenization plays a critical part in enabling more equitable value creation for people, organisations and things. Providing the means to issue and store value, trace provenance, and most importantly achieve consensus to instantly trust.

However, in order for this tokenized world to emerge we first need the infrastructure for people and their digital twins to participate in equitable and fair ways. This will include digital identity, verifiable credentials and payments.

This session will feature some of the use-cases, practical steps, insights and learning along the way.




David Doret: Visualizing IAM




Josh Green Paul Fisher: Expert Chat: Interview with Josh Green




Josh Green: The Path To Going Passwordless




Martin Kuppinger: The Future of Access Management: Beyond passwords, beyond static entitlements

In his talk, Martin Kuppinger will deconstruct the term Access Management and look at the various elements and concepts behind it. Access Management is multi-faceted and includes many concepts. On the other hand, many of the areas we should find supported in Access Management are still missing in most implementations. So: What does it take for a modern, comprehensive Access Management? How will this look different from now? Will we get rid of the burden of annoying authentication procedures or reviewing static entitlements we don’t understand? Which role should policies play? Could we move forward to just-in-time entitlements? And will we finally get rid of passwords?

Martin Kuppinger will cover trends that are already visible, options you can take today, but also evolutions that are just visible at the horizon and innovations vendors should focus on today.

He will deliver you a high-level playbook for tactical and strategic steps for evolving what you have in Access Management towards a broader, future-proof solution.




Christian Tilly: Sabotage protection - How to double-secure internal company secrets

How do you protect secret information from sabotage? You should consider two possible scenarios when answering this question: Sabotage can be caused from the outside as well as from the inside. In principle, a potential threat can also come from people within your own company.
An essential step is therefore to make sensitive documents and directories accessible only to employees who really need them for their work: Following the need-to-know principle.
In the case of facilities that are vital to life or defense, these employees must also be instructed in how to protect themselves against sabotage.
Consistent checks to ensure that protection instructions have been given are therefore part of the administrator's duties, which in turn requires additional time and organizational capacities.
In this practical presentation, you will learn how automated permission management can relieve IT administrators and at the same time reduce errors caused by manual processes while ensuring compliance with special requirements for the assignment of rights, e.g. through separate data protection instructions.




IBM Blockchain

Making fungible tokens and NFTs safer to use for enterprises

Before cryptocurrencies, blockchain technology was unknown to most people. It was blockchain’s unique ability to manage the ownership of (virtual) currency in a decentralized and reduced-risk manner that made all the difference. The introduction of permissioned blockchains made the same functionality even more appealing to the enterprise world in the context of decentralized business asset […]

The post Making fungible tokens and NFTs safer to use for enterprises appeared first on Blockchain Pulse: IBM Blockchain Blog.


Finicity

Finicity and Green Dot Announce Secure Data Access Agreement to Deliver More Accessible, Seamless and Secure Money Management to Customers

Latest API integration enables leading digital bank’s customers to securely connect to approved third-party apps and accounts

PASADENA, Calif. — July 21, 2021 — Finicity, a Mastercard company and leading provider of open banking solutions, announced today that it has signed a data access agreement with Green Dot (NYSE: GDOT), a digital bank and fintech focused on delivering trusted, best-in-class banking and payment solutions that seamlessly connect people to their money. 

“We’re excited to announce Green Dot as the next in our data access agreements lineup,” said Steve Smith, CEO of Finicity. “The company is a big proponent of client empowerment, giving their banking clients the control to utilize their financial data to benefit them. This aligns well with Finicity’s mission to bring greater transparency to consumers to improve financial health and inclusion.”

The Finicity direct API experience will first be available through Green Dot’s flagship digital bank GO2bank and will allow customers to link their accounts to third-party apps that use Finicity’s secure data network and financial data – further enabling them to connect, manage and move their money in a secure and seamless environment. 

“This integration enables customers to put their financial data to work for them by offering secure, seamless connections to tools and features that can have a meaningful impact,” said Abhijit Chaudhary, SVP and GM, Direct to Consumer Products, Green Dot. “The majority of Americans, and particularly low- to moderate-income consumers, can benefit from tools that help them feel more in control and connected to their money, and our partnership with Finicity is an exciting step in that direction.”

Through this agreement, Finicity is extending its leadership in direct data access through the use of an application programming interface (API). Finicity’s signed data access agreements with many of the nation’s largest financial institutions, credit card companies, and wealth management institutions currently cover 63% of their open banking platform traffic with direct API access. In addition, Finicity works with many of the most popular PFM (personal financial management) tools, as well as the largest lenders and most innovative payment providers, among other services.

To learn more about Finicity data services and their commitment to fast, reliable, and high-quality data, visit www.finicity.com

About Finicity

Finicity, a Mastercard company, helps individuals, families, and organizations make smarter financial decisions through its safe and secure access to fast, high-quality data. The company provides a proven and trusted open banking platform that puts consumers in control of their financial data, transforming the way we experience money for everything from budgeting and payments to investing and lending. Finicity partners with influential financial institutions and disruptive fintech providers alike to give consumers a leg up in a complicated financial world, helping to improve financial literacy, expanding financial inclusion, and ultimately leading to better financial outcomes. Finicity is headquartered in Salt Lake City, Utah. To learn more or test drive its API, visit www.finicity.com

About Green Dot

Green Dot Corporation (NYSE: GDOT) is a financial technology and registered bank holding company committed to transforming the way people and businesses manage and move money, and making financial wellbeing and empowerment more accessible for all. Green Dot’s proprietary technology enables faster, more efficient electronic payments and money management, powering intuitive and seamless ways for people to spend, send, control and save their money. Through its retail and direct bank, Green Dot offers a broad set of financial products to consumers and businesses including debit, prepaid, checking, credit and payroll cards, as well as robust money processing services, tax refunds, cash deposits and disbursements. The company’s Banking as a Service (“BaaS”) platform enables a growing list of America’s most prominent consumer and technology companies to design and deploy their own customized banking and money movement solutions for customers and partners in the US and internationally. Founded in 1999 and headquartered in Pasadena, CA, Green Dot has served more than 33 million customers directly, and now operates primarily as a “branchless bank” with more than 90,000 retail distribution locations nationwide. Green Dot Bank is a subsidiary of Green Dot Corporation and member of the FDIC. For more information about Green Dot’s products and services, please visit www.greendot.com.

The post Finicity and Green Dot Announce Secure Data Access Agreement to Deliver More Accessible, Seamless and Secure Money Management to Customers appeared first on Finicity.


Anonym

Calls for New FTC Rules to Limit Businesses’ Data Collection and Stop Data Abuse

Recent acting chair of the US Federal Trade Commission (FTC) Rebecca Slaughter wants new rules to shift the burden for avoiding data abuse away from consumers and firmly onto businesses.

Slaughter argues that the current model of giving users choice and control over their personal data, while important, isn’t working to stop data privacy and market competition abuse. Instead she wants businesses to stop excessively collecting and abusing consumer data in the first place, including through the use of dark patterns. And she points out the FTC doesn’t have to wait for Congress to act on this matter.

Speaking at the Centre for Economic Policy Research (CEPR)’s “Privacy and Antitrust: Integration Not Just Intersection” panel discussion in June, Slaughter said:

“I want to sound a note of caution around approaches that are centered around user control. I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions … I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions. 

“So I really worry about a framework that is built at all around the idea of control as the central tenet or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.”

New FTC chair, Lina Khan, was sworn in on June 15, 2021, so we’ll watch this space.

In the meantime, we point to our complete privacy toolkit Sudo Platform as a way for businesses to be proactive about limiting the amount of customer data they collect and manage. 

Sudo Platform is a powerful collection of identity protection capabilities delivered as a vast toolkit of easy-to-use APIs and SDKs that make it simple to:

quickly integrate identity protection and privacy capabilities into a brand’s new and existing products and services
give users control over their personal data and identity
offer next-generation privacy tools for communications, browsing, payments and more
deliver your customers unprecedented privacy, security, and trust.

Find out more here. Listen to the entire proceedings of the “Privacy and Antitrust: Integration Not Just Intersection” panel here.  

Photo By Chinnapong

The post Calls for New FTC Rules to Limit Businesses’ Data Collection and Stop Data Abuse appeared first on Anonyome Labs.


Ocean Protocol

OceanDAO Round 7 Results

Over 21.7M OCEAN voted in the latest community funding cycle OceanDAO Grants

Hello, Ocean Community!

DAOs have been capturing the crypto zeitgeist over the last two quarters at record speed. As the term DAO becomes commonplace in the blockchain-verse, it is incredible that we, as an OceanDAO community, are at the forefront of these novel coordination experiments.

With the shared goal of leading Ocean Protocol down the path of self-sustainability, the quality and the turnout in OceanDAO Round 7 has been tremendous. Round 7 saw the most proposals submitted to date! This turnout comes as no surprise to our active community who have been continually collaborating on discord and engaging in our weekly Ocean Town Hall calls.

For up-to-date information on getting started with OceanDAO, we invite you to get involved and learn more about Ocean’s community-curated funding on the OceanDAO website.

The goal is to grow the DAO each round. We encourage the Ocean ecosystem to apply or re-apply AND to vote! Thank you to all of the participants, voters, and proposers.

OceanDAO Round 7 Results

Round 7 Rules

Proposals with 50% or more “Yes” votes received a grant, in descending order of votes received, until the “Total Round Funding Available” was depleted.

35% of “Total Round Funding Available” was earmarked for New Projects. Earmarked proposals were eligible for the entire “Total Round Funding Available”; returning (general) grants were eligible for 65%.

The grant proposals from the snapshot ballot that met these criteria were selected to receive their $OCEAN Amount Requested to foster positive value creation for the overall Ocean ecosystem.
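To make the selection mechanics more concrete, here is a minimal sketch of how such an allocation could be computed. This is illustrative only, not OceanDAO's actual tooling; the Proposal type, ordering by Yes votes, and the handling of the 65% general budget are simplifying assumptions.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a simplified model of the Round 7 allocation rules described above.
public record Proposal(string Name, double YesVotes, double NoVotes, double OceanRequested, bool IsNewProject);

public static class GrantAllocator
{
    public static List<(Proposal Proposal, double Granted)> SelectGrants(
        IEnumerable<Proposal> proposals, double totalFunding)
    {
        double remaining = totalFunding;
        // Returning (general) proposals can only draw from 65% of the round's funding;
        // earmarked (new project) proposals can draw from the full amount.
        double generalBudget = totalFunding * 0.65;
        var funded = new List<(Proposal, double)>();

        // Only proposals with 50% or more "Yes" votes are eligible,
        // processed in descending order of Yes votes (an assumption about "votes received").
        foreach (var p in proposals
                     .Where(p => p.YesVotes >= p.NoVotes)
                     .OrderByDescending(p => p.YesVotes))
        {
            double cap = p.IsNewProject ? remaining : Math.Min(remaining, generalBudget);
            double granted = Math.Min(p.OceanRequested, cap);
            if (granted <= 0) continue;

            funded.Add((p, granted));
            remaining -= granted;
            if (!p.IsNewProject) generalBudget -= granted;
        }
        return funded;
    }
}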

Voting opened on July 8th at midnight GMT
Voting closed on July 12th at 12:00 GMT

Proposal Vote Results:

22 proposals submitted
14 funded or partially funded
69 unique wallets voted
345 voters across all proposals (same wallet can vote on multiple proposals)
315 total Yes votes
30 total No votes
17,788,048.6 $OCEAN voted Yes on proposals
4,004,355.107 $OCEAN voted No on proposals
21,792,403.71 $OCEAN tokens voted across all proposals

Recipients

Congratulations to the grant recipients! These projects have received an OceanDAO grant in the form of $OCEAN tokens.

See all the expanded proposal details on the Round 7 Ocean Port Forum!

If your Proposal was voted to receive a grant, if you haven’t already, please submit a Request Invoice to the Ocean Protocol Foundation (OPF) for the Ocean Granted amount.

Proposal Details

Earmarked Grants

DataX is a decentralized exchange dedicated to trading datasets in Ocean Market. The vision is for the project to be a pioneer of the Data Data movement and kickstart adoption to a new data economy led by Ocean Protocol.

Nano Sensor Powered Smart Cities will onboard and establish a new dataset on the OCEAN marketplace with regular input and updates from nanosensors in the urban landscape to measure hidden asset layers and enable predictive maintenance.

Onshore OCEAN proposes creating a data science-centric community within Ocean Protocol. The initial focus is on building a landing page with PoC Jupyter Notebooks for exploratory data analysis & training simple deep learning models using Ocean.

Local Network Egress Traffic aims to aid in the adoption of Compute-to-Data functionality for small programming teams & entry-level/junior programmers with complimentary documentation featuring clear coding samples that can be used as a template.

oceanSPICE Simulations creates a web solution to simulate the creation and economy of data assets with dynamic pricing on Ocean Protocol marketplaces via TokenSpice in the backend.

Cluster Finance aims to be a toolkit for developers around the world to easily build data-centric apps like data unions and social networks on Ocean Protocol.

General Grants

Data Whale

Data Union App

Clean Docs

RugPullIndex

VisioTherapy

Ocean Pearl

Ocean Ambassadors

Ocean Academy

OceanDAO Ecosystem

Continue to support and track progress on all of the Grant Recipients here!

Much more to come — join our Town Halls to stay up to date, and see you in Round 8. Thank you!

Follow Ocean Protocol on Twitter, Telegram, LinkedIn, Reddit, GitHub & Newsletter for project updates and announcements. And chat directly with other developers on Discord.

OceanDAO Round 7 Results was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Ontology Weekly Report (July 14–20, 2021)

Highlights

Last week, Ontology announced a partnership with BlockBank, an AI-powered crypto application that combines traditional mobile banking with cryptocurrency and blockchain. We are also continuing to promote our Harbinger Program v2 and are excited for new Harbingers to join us!

Latest Developments

Development Progress

We have completed development of Ontology’s EVM, designed to make Ontology fully compatible with Ethereum smart contract templates. We are 50% done with testing.
We have completed ETH RPC support and we are 90% done with testing.
We have completed 95% of Ontology’s new Ethereum account system.
We have completed 90% of the transaction optimization flow logic from the network layer to the transaction pool. This will significantly improve the transaction pool processing performance.
We’ve completed 90% of the EVM/OEP-4 asset seamless transfer technical solution, which facilitates the conversion between OEP-4 assets and EVM assets.

Product Development

ONTO mobile app v3.9.2 was released, adding BSC cross-chain functionality.
Ontology hosted AMAs with Piggy Finance and Polygon, attracting thousands of attendees.
Ontology hosted an NFT lottery with TWINCI. The NFTs sold out in 30 minutes and the promotional video was viewed over 2,000 times.
Ontology also hosted an NFT giveaway with dFuture, attracting over 5,000 participants.

On-Chain Activity

118 total dApps have been launched on MainNet; 6,647,155 total dApp-related transactions, an increase of 5,851 from last week.
16,005,514 total transactions, an increase of 34,313 from last week.

Community Growth

1,196 new members joined our global community this week. Our community is continuing to grow, and we encourage anyone who is curious about what we do to join us.
The Harbinger Program v2 continues and we are excited for new Harbingers to join us!
We held a Twitter Spaces session, “Building Thriving Blockchain Communities”, hosted by Humpty Calderon. ApeSwap and Idena attended as guest speakers.
As always, we’re active on Twitter and Telegram where you can keep up with our latest developments and community updates.

Global news

Ontology partners with BlockBank

Ontology announced a partnership with BlockBank, an AI-powered crypto application that combines traditional mobile banking with cryptocurrency and blockchain. ONT ID will be utilised in BlockBank’s V2 App, as well as DDXF, to help users stay in control of their data and privacy!

On July 19, ONT was listed on Japanese FSA licensed exchange, DeCurret. Users can now trade the ONT/JPY pair and enjoy a Taker fee discount.

Ontology in the Media

Forkast — Why decentralization is essential for protecting user data and privacy

Centralized data management is outdated and vulnerable to hacking, says Li Jun, founder of Ontology. Here is an excerpt from his recent article published on Forkast:

Traditionally, different sectors have used paper or centralized spreadsheets on internal computers to house information and data sets. However, as we have seen, increasingly sophisticated technology has led to a higher risk of threats to private information, especially when housed on centralized systems. Moving forward, it is essential that businesses and individuals invest in decentralized solutions to store and manage data. On a macro scale, end-to-end technology run on blockchain enables private information to be shared securely, while users remain in full control of their data. From an individual perspective, investment in decentralized digital identity solutions can help users take back control of how and when their data is shared, especially when the current systems are not taking adequate precautions to protect their privacy.

ChainNews — From casting to circulation, comprehensively combing the NFT industry value chain

On June 21st, Alipay released two limited-edition NFT skins, which sold out in minutes. NFTs are growing in popularity because they allow real-world assets to be tokenized and quantified. At the same time, NFTs are highly liquid, which makes them easy to purchase and transfer.

On the third anniversary of Ontology’s MainNet, we launched 10,000 limited edition NFTs. This follows the launch of other Ontology-based NFTs such as HyperDragons, Ontology 2.0 Medals, and Citizen NFTs. Ontology is also providing DID solutions to the music NFT platform, ROCKI. In the future, we will continue to explore the opportunities to combine NFTs with DeFi as well as more traditional industries.

Want more Ontology?

You can find more details on our website for all of our decentralized solutions across identity and data, or keep up with us on Twitter. Our Telegram is for discussion, whereas the Telegram Announcement is designed for news and updates if you missed Twitter!

Other Resources

LinkedIn / Medium / Facebook / Reddit / Discord / YouTube

Ontology Weekly Report (July 14–20, 2021) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Affinidi

Streamlining Global Supply Chains with Verifiable Credentials


Global supply chains are hugely complex as they span across companies and countries. As a result, there’s also a high chance of forgeries, inefficiencies due to multiple authentications, and delays that can, in turn, have a compounding effect on other companies’ operations and even a nation’s economy as a whole.

The good news is that a lot of these issues can be resolved with Verifiable Credentials.

Digitizing Credentials

The paper credentials currently used in the supply chain industry are slow, as they require physical verification, and they are also easily forgeable.

Since VCs are machine-verifiable and tamper-proof, they can be used to securely manage paper credentials and to track the movement of goods across international borders. Check out this PoC to better understand how you can digitize credentials for the supply chain using Affinidi’s stack.

Tapping into a Verifiable Global Marketplace

Verifiable credentials can be used to establish a trustworthy trade connection with another entity on the global marketplace to buy and sell goods across countries and to handle payments, shipping, and the logistics that come with it.

The advantage with VCs is that they are built on JSON-LD and hence, can be used to verify the credentials of buyers and sellers between different systems and data models. This interoperability of JSON-LDs opens up enormous opportunities for everyone involved.
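For readers unfamiliar with the format, the sketch below shows the rough shape of a minimal W3C verifiable credential as JSON-LD, built here as a plain dictionary and serialized with System.Text.Json. The SupplierCredential type and goodsCategory field are hypothetical placeholders; credentials issued through Affinidi's stack carry additional fields and a cryptographic proof.

using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical illustration of the minimal W3C Verifiable Credential shape (JSON-LD).
var credential = new Dictionary<string, object>
{
    ["@context"] = new[] { "https://www.w3.org/2018/credentials/v1" },
    ["type"] = new[] { "VerifiableCredential", "SupplierCredential" }, // SupplierCredential is a made-up type
    ["issuer"] = "did:example:buyer-org",
    ["issuanceDate"] = DateTime.UtcNow.ToString("o"),
    ["credentialSubject"] = new Dictionary<string, object>
    {
        ["id"] = "did:example:supplier-org",
        ["goodsCategory"] = "F&B" // illustrative claim only
    }
};

// Print the credential as indented JSON to see the JSON-LD structure.
Console.WriteLine(JsonSerializer.Serialize(credential, new JsonSerializerOptions { WriteIndented = true }));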

Trustana is a B2B trade platform built on Affinidi’s SDKs, and it connects buyers and sellers in the F&B space.

Building Reputation

Another key benefit of verifiable credentials is that it helps to build an organization’s reputation over time. For example, if a potential supplier can look at the prompt payments and associations of a buyer, it may be inclined to partner with that company. In this sense, verifiable credentials create trust among companies as the underlying data is accurate, authentic, and verifiable.

In turn, this can iron out the inconsistencies and uncertainties that may come up in the global supply and demand market such as pandemics, weather-related events, political instabilities, and more.

For a buyer, this also opens up more opportunities and reduces dependence on just a handful of suppliers.

From a supplier’s standpoint, there is the option to dynamically redirect goods, products, and services to new areas or areas of high demand, since the supplier’s credentials can be established easily.

Here is a PoC built on Affinidi’s stack that uses VCs to enhance the reputation of a company and its products.

Reducing Forgery

Transforming documents into verifiable credentials addresses the basic concern of forgery and removes the need for repeated physical verification.

This is because forging products and documents is impossible with VCs, as every entity in the supply chain has a unique Decentralized Identifier (DID) that is immutable and secured with public-key cryptography.

These credentials are compliant with the W3C standards as well.

In all, VCs help to create a transparent supply chain, where you can track the movement of goods, ensure compliance, know the reputation of companies you’re dealing with, and reach out to more players in a trusted way.

To learn more about how you can leverage Affinidi’s stack to build a transparent supply chain network, reach out to us on Discord or email us.

Follow us on LinkedIn, Facebook, or Twitter. You can also join our mailing list to stay on top of interesting developments in this space.

The informational material contained in this article is for general information and educational purposes only. It is not intended to constitute legal or other professional advice.

Streamlining Global Supply Chains with Verifiable Credentials was originally published in Affinidi on Medium, where people are continuing the conversation by highlighting and responding to this story.


Coinfirm

Jobs: Software Developer (Blockchain)

At Coinfirm, we are an international company centred around blockchain technology, providing AML and transaction security solutions for the financial and cryptoasset industries. Coinfirm is full of professionals with experience in compliance, finance and IT powering the mass adoption of blockchain. With actionable intelligence, we support our partners and clients around the globe, including industry heavyweights such...

EU Commission’s ‘Travel Rule’ Proposal

The EU Commission published their legislative package on AML and CTF on June 20, 2021. The package comprises four proposals, one of which is the transposition of FATF’s Recommendation 16 – aka the ‘Travel Rule’ – into EU regulations. The provisions of the Travel Rule will be transposed into the existing EU wire transfer Regulation...

KuppingerCole

Cisco Future Hybrid Cloud


by Mike Small

In early June 2021 Cisco announced its vision for the Future Cloud. This vision comprises two distinct elements – UCS, a unified hyperscale computing infrastructure and tools to provide end to end observability of hybrid cloud services.

Hybrid Management Challenge

As organizations adopt a hybrid IT delivery approach, the challenges of managing and securing the different elements increase; some are delivered as cloud services and some in other ways. Usually, these different elements need different management and security tools and, where hundreds of applications are being delivered, this imposes a significant management burden.

In response, some cloud vendors have started to offer tools that extend beyond their own cloud.  A more widely adopted approach is the use of VMware; this is supported by a wide range of clouds and provides common tools that can be used wherever it is deployed.

Cisco Infrastructure for Hybrid IT

Cisco’s approach to this challenge is UCS.  According to Cisco – this is a programmable self-aware, self-integrating system based on the concept that infrastructure is code. Servers are designed to be stateless, with their identity, configuration, and connectivity extracted into variables that can be set through software. This enables management tools to help guarantee consistent, error-free, policy-based alignment of server personalities with workloads. 

UCS is supported by Cisco Intersight, a SaaS management tool that provides a single control point for the complete hybrid UCS estate. On paper this provides a solution to a real problem; however, it depends upon the customer choosing Cisco products to get all the benefits.

The Network for the Hybrid Cloud

The network is the key enabler of the hybrid cloud. The availability of wide-area, high-performance networks is what has made the remote delivery of business-critical services practical. Indeed, the large CSPs (Cloud Service Providers) have all invested heavily in their own private networks as a critical part of their service infrastructure.

In addition, the new 5G mobile networks are widening the availability of high-performance connectivity into new use cases including manufacturing, logistics, and travel.

The app on the end user device is just the tip of the iceberg – the app depends upon the work done by the distributed services that provide the functionality.  Therefore, the end user experience is becoming dominated by the performance of the networks that connect these services.  While the performance of CPU, RAM and storage are still important for the individual services, the end user experience is determined by the end-to-end performance of the networks.

Governing the Hybrid Cloud Network

While the performance of hybrid IT services is dependent upon the network much of the network involved will be outside of the direct control of the business.  This makes it important to take a governance-based approach with clear service level agreements supported by measurement of delivered performance.

The end-to-end network traversed by a transaction delivered from a hybrid deployment may involve a mobile network (radio and backhaul), the public internet, the in-cloud network, the connection to the organization’s data centre as well as within it.  Identifying where unacceptable delays, that are outside of the agreed service levels, are occurring is not a simple task. 

This challenge is increased by modernised applications which exploit multiple distributed containerised services. What used to be a subroutine call on a server is now a service request, and the performance of this depends upon the network. One business transaction now involves a web of service interactions. This architecture also increases the attack surface, since these services themselves are points of vulnerability if not properly protected.

Cisco ThousandEyes

Cisco’s response to these challenges is their ThousandEyes Platform. According to Cisco this combines a variety of active and passive monitoring techniques to provide insight into the end users’ experience across the applications and services. As well as monitoring enterprise network performance through locally installed agents, it also exploits pre-deployed software vantage points across the globe to provide real-time internet outage detection. The platform enables enterprises to monitor and measure the end-to-end network performance to identify bottlenecks. It can also help to detect abnormal traffic patterns that could indicate security compromises.

What about SASE?

There are various vendor definitions of SASE (Secure Access Service Edge). Cisco defines it as “Secure access service edge combines networking and security functions in the cloud to deliver seamless, secure access to applications, anywhere users work.” Although Cisco did not focus on SASE in these announcements, the distributed nature of a service-based architecture means that the service edge, if it exists at all, is highly complex and hence will be increasingly difficult to manage and secure.

Opinion

Cisco has correctly identified some of the key challenges arising from the hybrid IT delivery model that is now commonplace in organizations. Finding a common approach to managing and securing the heterogeneous components of this infrastructure is a major headache.

However, UCS adds yet another flavour to an already crowded field and Cisco Intersight depends upon the use of UCS.  Organizations need to consider UCS in the context of the wider market including vendors such as VMware, popular open-source solutions such as OpenStack, other hyperconverged infrastructure products, as well as emerging hybrid IT management tools from the major cloud vendors. 

In contrast, Cisco has correctly identified that the network is a critical element of hybrid deployments and that it has the potential to become the major performance and security concern for modernised application architectures. In this context its ThousandEyes platform is very relevant to the governance of networks in a service-oriented hybrid architecture.

 Some relevant links:

Stairway to Cloud Security: A Step-by-Step Guide to Mitigate Risks and Achieve Strong Security
A Cloud for All Seasons
Analyst Chat #84: Hybrid IT 3 - Managing and Governing

Dark Matter Labs

DM Note #5

Mission Holding at DM: The Case of Nature-based Solutions

This is the fifth in a series of DM notes, in which we write about insights from our work on the ground, following internal learning sessions called DM Downloads that are organized every two weeks or so. The aim is to make our practice more legible, for us as well as for you.

DM Note #5 is a reflection on how we build and organize missions internally, taking Nature-based Solutions as a case study.

Mission Oriented Innovation: An Introduction

One of the most interesting aspects about the mission to send a human to the moon and back launched by President John F. Kennedy was the design of the Defense Advanced Research Projects Agency — commonly known as DARPA. Its relationship to the government bureaucracy, connection to Department of Defense procurement, and organizational structure all enabled it to work towards the breakthrough innovations that made the mission a success. Inspired by that we ask: how do we build, shape and hold the missions to address the kind of complex systemic challenges we have to face collectively today that are beyond the realm of technology alone?

Mission-oriented innovation is itself an emergent practice. Building on a theoretical background grounded in the industrial strategy and innovation systems literature, to date it has primarily been used by local and national governments seeking to address the grand challenges they face by taking a more proactive role in economies.

DML has been working to put a mission-oriented innovation approach front and centre in its work around various domains: Nature-based Solutions, Spatial Justice, Mental Wealth and other emerging ones. These missions fit within the broader mission of DML as an organisation, which we currently define as follows:

In a context of climate breakdown and technological disruption, Dark Matter Labs focuses on accelerating societal transition towards collective care, shared agency, long-termism and positive interdependence. Our aim is to discover, design and develop the institutional ‘dark matter’ that supports a more democratic, distributed and sustainable future. Our daily work ranges from policy and regulation to finance and data, from governance and democratic participation to organisational culture and identity. To keep that transparent, we undertake open work in collaborative partnerships to discover and provoke alternative visions of the future, designing how they might look in practice, and experimenting in context to reveal how they could work and enable the necessary change.

Mission-oriented innovation implies a few shifts, moving:

○ from market fixing to market shaping (rationale)
○ from incremental to transformational change (ambition)
○ from analysis to experimentation (approach)
○ from sectors to challenges (domain)
○ from independent to collaborative work (relationships)

Innovation is central — but not necessarily technological innovation. It could be financial, legal, regulatory, institutional, democratic or cultural, which increases the importance of broad participation.

According to IIPP’s Missions: A Beginner’s Guide, here’s how to set good missions:

1. Be bold and inspirational, with a clear narrative, high engagement and wide societal relevance.
2. Set a direction with clear, concrete targets that are measurable and time bound.
3. Be highly ambitious, requiring research and innovation, but with a credible chance of success.
4. Be cross-disciplinary and cross-sectoral.
5. Drive multiple, bottom up solutions.

However, there are big differences between a government driving a mission that is technological, such as landing a human on the moon, vs. an organisation like DML working to drive a mission internally and with its collaborators. See below an emergent typology of missions; DM’s approach being closer to the last one described.

Source of the visual: Rowan Conway

We recognize that a mission only deserves to be called as such when it has a clear directionality. As such, we have focused on our most advanced internal mission, which we are developing across and beyond each singular commission, contract or grant, to make more legible how we are currently doing this work, what we are learning in the process, and how best we can move forward.

Our Nature-based Solutions Mission

Now, more than ever, we need to learn to collaborate in symbiosis with nature. Institutions are recognizing the urgency to advance this ambition, setting net-zero targets and announcing ambitious tree-planting and other nature-restoration commitments. But we are struggling to reach implementation. This is a multi-faceted problem that can be linked to, among other things, the lack of funding, the disconnect between data and mapping, or the complexity of turning policy into reality when confronted with land competition.

Mission framing

DML’s Natural Assets mission focuses on providing the infrastructure components for radical collective action to regenerate, enhance and scale natural or modified ecosystems for human and non-human thriving.

Urban landscapes — NbS

We are working collaboratively to identify strategic risks within the status quo and how to address them across landscapes (e.g. peatlands, urban nature, watersheds, rivers) through financial instruments and mechanisms; regulatory and outcomes frameworks; inclusive governance; asset management tools; and smart contracts.

Deep Code Innovations — NbS

We are now developing various proofs of possibilities with partners; amongst others, in Québec/ Canada, Scotland/ UK, Madrid/ Spain, Milan/ Italy, and Sejong/ South Korea. For each project, various shared deep code innovations are being developed; we therefore work across verticals (mission, initiatives, etc.) as well as across horizontals (finance, regulation, etc.).

Proofs of possibilities being deployed across the NBS mission and deep code innovations. On the left, you find the categories of the deep code innovations (described in the previous visual), as well as which project is working to develop each of them.

This year, we will focus on building the technological data platforms and investment vehicles to start assembling the infrastructures for a thriving planet. This work will be advanced thanks to the considerable support we have received recently for Trees As Infrastructure (TreesAI) from Morgan Stanley, Google.org, Climate-KIC and the World Economic Forum.

TreesAI Website

TreesAI is currently developing a transparent, verifiable financial platform to grow Nature-based Solutions for resilient cities. Through novel accounting and contracting models we value trees as assets, reorienting administrative procedures and attention toward the diverse environmental benefits and behaviours of urban forests.

The components of the TreesAI platform that will be tested could then also be adapted and applied either in support of another initiative in the same NbS mission, or to support another mission in a different context or region.

To read more about why municipalities are struggling to reach tree-planting targets, click here. To read more about our proposition for supporting cities to transition towards resilient urban forest infrastructures, click here.

TreesAI Framework

Key Reflections on Mission Building at DM

Here are some of the key learnings captured from our conversations during the last DM Download.

Mission sequencing — There are different levels of maturity for a mission and therefore different requirements at each stage: Mission-building (early stage), Mission-shaping (building an initial portfolio), Mission-holding (critical mass). Each of these stages has different types of activities involved in moving it forward. We are currently working to define them in more detail.

Community of practice — It requires a certain critical mass of projects and resources for a mission to sustain a community of practice. With enough experiments on the ground, it allows the mission constituents to better benefit from peer-to-peer learning, mutual accountability, collective credibility, and shared capabilities that go beyond the impact of any single project. At the early stage of mission building, these multiple interactions can be burdensome. However, regular sharing around theory and framework can still be valuable.

Shared deep code innovations — Some of the deep code innovations that are being developed (across regulation, finance, contracts, etc.) can be adapted, bundled and implemented across various initiatives and missions, which raises the question of how our entire portfolio is linked and how we create lines of overlap and compound learning across the organization (see a very preliminary sketch of that below).

Preliminary framework partially representing DML’s cross-mission portfolio dashboard

Mission pollinators — We need people to translate and pollinate content across projects that sit together within a mission and/or across other domains. Learning and adaptation across initiatives (for instance between our NbS mission projects Trees As Infrastructure and Nouveaux voisins) — at the level of strategic risks, co-benefits and other key questions — cannot be done without people actively working in both projects and with the necessary brain space to spot patterns.

Deep code pollinators — We also need people who are more specialized in one deep code innovation (e.g. finance and capital innovation, or risk & contracting innovation) to work across various missions. They are able to spread and compound relevant learning across different domains that share similar deep code challenges.

Place-based anchors — Place-based cultural sensitivity, knowledge and networks are essential for a mission to be adapted and implemented in a new region. Working internationally requires us to find a balance between having some of our team members working across geographies, some anchored in local contexts, and also partnering with solid and complementary local stakeholders.

Mission-wide funding — To actively craft a mission — creating spaces for sharing, compounded learning, and actively making the system visible — requires resources for a few people to take on the pollination, facilitation and sense-making activities. Since funding is normally provided at a project level, funding this cross-project connectivity is a challenge that we need to be proactive in addressing.

Towards civic-led missions — Organizationally and institutionally, we’re as deeply problematized by the monopoly power of the state as we are by the monopoly of the private sector, which raises the question of how we can move to a mission that is fundamentally civic-led, beyond state and market actors, in a way that is legitimate and drives public good.

Emergent order — The framework and practices of how to organise in a mission-led approach depend on the actors involved and the resources available, and emerge over time. The current mission framework at DM, for instance, has been an emergent one that is still being defined organically as we go, driven by principles rather than by fixed structures.

Evolving roles — As with any emergent system of organising, as a mission grows the people building it shed roles and take on new ones to adapt to the emerging needs.

Get in touch

If you enjoyed this 5th DM Note, please also read our previous note here, follow us on Medium for more to come and “clap” the article to show appreciation. And please feel free to reach out and share your thoughts on this as we continue to grow a community of interest/ practice/ impact around the world.

Carlotta Conte / NbS + TreesAI
carlotta@darkmatterlabs.org
Dark Matter Laboratories UK

Jonathan Lapalme
jonathan@darkmatterlabs.org
Laboratoires de Matière sombre / Dark Matter Labs Canada

DM Note #5 was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Crypto Regulatory Affairs: White House Ransomware Task Force to Tackle Crypto Payments

Last week reports emerged that the Biden administration has formed an interagency ransomware task force - and that the role of cryptocurrencies in ransomware will be right atop the policy agenda. 



Okta

Easy Xamarin Forms Auth with PKCE


OAuth 2.0 is a protocol that controls authorization to access a secured resource such as a native app, web app, or API server. For native applications, the recommended method for controlling access between your application and a resource server is the Authorization Code flow with a Proof Key for Code Exchange (PKCE). In this article, you will learn how to build a basic cross-platform application with Xamarin.Forms and implement Authorization Code flow with PKCE using Okta Xamarin SDK. You will also learn about the Xamarin Community Toolkit - a collection of common elements for mobile development with Xamarin.Forms.

To continue, you will need:

A basic knowledge of C#
Visual Studio 2019 or Visual Studio for Mac
An Okta Developer Account (free forever, to handle your OAuth needs)
An Android or iPhone device, or an emulator for testing (For the article, we show Android as an example, but this will work for either platform.)

Create a Xamarin.Forms App with Xamarin’s Community Toolkit

Xamarin is a framework from Microsoft used to build cross-platform mobile apps from one shared source code; it’s written in C# and compiled for each platform’s runtime. Xamarin.Forms is an abstraction on that, enabling developers to share the UI code as well.

Xamarin.Forms is now included in Visual Studio! Create a new project by clicking File>New Project in Visual Studio, select Mobile App (Xamarin.Forms), and click Next. Name the app MovieExplorer and click Create. On the next window, select Flyout and click Create.

Visual Studio will automatically create a solution with three projects: MovieExplorer.Android for Android, MovieExplorer.iOS for iOS, and MovieExplorer for shared code and user interface. To install Xamarin Community Toolkit, either run Install-Package Xamarin.CommunityToolkit -Version 1.1.0 in all three projects or right-click on the MovieExplorer solution and click Manage Nuget Packages to add the library to each project:

I will also add the TMDbLib library to access The Movie Database API for retrieving movie information.

Explore Movies with Xamarin.Forms and Xamarin Community Toolkit

In the MovieExplorer shared project, rename ItemsPage.xaml to MoviesPage.xaml and replace its content with the following:

<?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:xct="http://xamarin.com/schemas/2020/toolkit" x:Class="MovieExplorer.Views.MoviesPage" Title="{Binding Title}" xmlns:local="clr-namespace:MovieExplorer.ViewModels" xmlns:views="clr-namespace:MovieExplorer.Views" x:Name="BrowseItemsPage"> <xct:TabView TabStripPlacement="Bottom" TabStripBackgroundColor="Blue" TabStripHeight="60" SelectedIndex="0" TabIndicatorColor="Yellow" TabContentBackgroundColor="Gray"> <xct:TabViewItem Icon="cinema.png" Text="Popular" TextColor="White" TextColorSelected="Yellow" FontSize="12"> <RefreshView x:DataType="local:ItemsViewModel" Command="{Binding LoadMoviesCommand}" IsRefreshing="{Binding IsBusy, Mode=TwoWay}"> <CollectionView x:Name="PopularMoviesView" ItemsSource="{Binding PopularMovies}" SelectionMode="None"> <CollectionView.ItemTemplate> <DataTemplate> <views:MovieCell /> </DataTemplate> </CollectionView.ItemTemplate> </CollectionView> </RefreshView> </xct:TabViewItem> <xct:TabViewItem Icon="movie.png" Text="Discover" TextColor="White" TextColorSelected="Yellow" FontSize="12"> <Grid> <Label HorizontalOptions="Center" VerticalOptions="Center" Text="TabContent2" /> </Grid> </xct:TabViewItem> </xct:TabView> </ContentPage>

Note the http://xamarin.com/schemas/2020/toolkit namespace that is necessary in order to use components from Xamarin’s Community Toolkit.

The movies page consists of two tabs: One to show popular movies and a second to search for a movie. The TabView control from Xamarin’s Community Toolkit is a fully customizable control with lots of customization options. The popular movies tab displays items from the PopularMovies collection using the CollectionView control, and it uses RefreshView control for pull to refresh functionality. The layout of each movie is defined in the MovieCell template:

<?xml version="1.0" encoding="UTF-8"?> <Grid xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:local="clr-namespace:MovieExplorer.ViewModels" xmlns:xct="http://xamarin.com/schemas/2020/toolkit" xmlns:tmdb="clr-namespace:TMDbLib.Objects.Search;assembly=TMDbLib" x:Class="MovieExplorer.Views.MovieCell" Padding="10" x:DataType="tmdb:SearchMovie" ColumnDefinitions="*, *" RowDefinitions="Auto,Auto, Auto, Auto"> <Image Grid.RowSpan="3" Aspect="Fill" Source="{Binding Path=BackdropPath, StringFormat='https://image.tmdb.org/t/p/w500{0}'}"/> <Label Grid.Row="0" Grid.Column="1" Text="{Binding Title}" FontAttributes="Bold" Style="{DynamicResource TitleStyle}" /> <xct:Shield Grid.Row="1" Grid.Column="1" HorizontalOptions="Start" Status="{Binding VoteCount}" Subject="{Binding VoteAverage}" StatusTextColor="Black"></xct:Shield> <Label Grid.Row="2" Grid.Column="1" Text="{Binding ReleaseDate, StringFormat='Release Date: {0:dd-MM-yyyy}'}" FontSize="16" VerticalOptions="CenterAndExpand" VerticalTextAlignment="End"></Label> <Label Grid.Row="3" Grid.ColumnSpan="2" Text="{Binding Overview}" Style="{DynamicResource SubtitleStyle}" VerticalOptions="End" /> <Grid.GestureRecognizers> <TapGestureRecognizer NumberOfTapsRequired="1" Command="{Binding Source={RelativeSource AncestorType={x:Type local:ItemsViewModel}}, Path=ItemTapped}" CommandParameter="{Binding .}"> </TapGestureRecognizer> </Grid.GestureRecognizers> </Grid>

For each movie, the app will show the movie’s backdrop image, title, overview, and release date. It will also use a Shield control to show the movie’s vote average and the number of votes.

Next, rename ItemsViewModel to MoviesViewModel and replace it with this:

public class MoviesViewModel : BaseViewModel
{
    private SearchMovie _selectedItem;

    public ObservableCollection<SearchMovie> PopularMovies { get; }
    public Command LoadMoviesCommand { get; }
    public Command<SearchMovie> ItemTapped { get; }

    public MoviesViewModel()
    {
        Title = "Movies";
        PopularMovies = new ObservableCollection<SearchMovie>();
        LoadMoviesCommand = new Command(async () => await ExecuteLoadMoviesCommand());
        ItemTapped = new Command<SearchMovie>(OnItemSelected);
    }

    async Task ExecuteLoadMoviesCommand()
    {
        IsBusy = true;
        try
        {
            PopularMovies.Clear();
            var popularMovies = await MoviesService.GetPopularMovies();
            foreach (var movie in popularMovies)
            {
                PopularMovies.Add(movie);
            }
        }
        finally
        {
            IsBusy = false;
        }
    }

    public void OnAppearing()
    {
        IsBusy = true;
        SelectedItem = null;
    }

    public SearchMovie SelectedItem
    {
        get => _selectedItem;
        set
        {
            SetProperty(ref _selectedItem, value);
            OnItemSelected(value);
        }
    }

    async void OnItemSelected(SearchMovie item)
    {
        if (item == null)
            return;

        await Shell.Current.GoToAsync($"{nameof(MovieDetailPage)}?{nameof(MovieDetailViewModel.MovieId)}={item.Id}");
    }
}

The BaseViewModel contains common functionality such as INotifyPropertyChanged implementation, IsBusy property, and it exposes MoviesService that LoadMoviesCommand uses to get a list of popular movies.
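The BaseViewModel itself is not shown in the article; it comes from the Xamarin.Forms Shell/Flyout template. A simplified sketch of what it typically contains is below (the article's version additionally exposes the MoviesService property used above, which is omitted here because its implementation is specific to this sample):

using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// A simplified sketch of the template-generated BaseViewModel, for reference only.
public class BaseViewModel : INotifyPropertyChanged
{
    bool isBusy = false;
    // Bound to RefreshView.IsRefreshing in the pages above.
    public bool IsBusy
    {
        get => isBusy;
        set => SetProperty(ref isBusy, value);
    }

    string title = string.Empty;
    // Bound to the page Title.
    public string Title
    {
        get => title;
        set => SetProperty(ref title, value);
    }

    // Updates a backing field and raises PropertyChanged only when the value actually changes.
    protected bool SetProperty<T>(ref T backingStore, T value, [CallerMemberName] string propertyName = "")
    {
        if (EqualityComparer<T>.Default.Equals(backingStore, value))
            return false;

        backingStore = value;
        OnPropertyChanged(propertyName);
        return true;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = "") =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}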

Finally, open MoviesPage.xaml.cs, and make sure to wire up the MoviesViewModel as a data source for the MoviesPage:

public MoviesPage()
{
    InitializeComponent();
    BindingContext = _viewModel = new MoviesViewModel();
}

Now, run the app, and you will see a list of popular movies:

Next, let’s add a second tab to search for movies. The second tab will use MoviesViewModel too, so add both a collection for movie search results and a command to search for movies:

public ObservableCollection<SearchMovie> DiscoverMovies { get; }
public Command SearchMoviesCommand { get; }

async Task ExecuteSearchMoviesCommand(string text)
{
    IsBusy = true;
    try
    {
        DiscoverMovies.Clear();
        if (!string.IsNullOrEmpty(text))
        {
            var searchResult = await MoviesService.DiscoverMovies(text);
            foreach (var movie in searchResult)
            {
                DiscoverMovies.Add(movie);
            }
        }
    }
    finally
    {
        IsBusy = false;
    }
}

Next, add the markup for a second tab to the MoviesPage.xaml:

<xct:TabViewItem Icon="movie.png" Text="Discover" TextColor="White" TextColorSelected="Yellow" FontSize="12">
    <StackLayout Orientation="Vertical">
        <SearchBar x:Name="SearchBar" Placeholder="Discover Movies">
            <SearchBar.Behaviors>
                <xct:UserStoppedTypingBehavior Command="{Binding SearchMoviesCommand}"
                                               CommandParameter="{Binding Text, Source={x:Reference SearchBar}}"
                                               StoppedTypingTimeThreshold="500"
                                               MinimumLengthThreshold="3"
                                               ShouldDismissKeyboardAutomatically="True" />
                <xct:EventToCommandBehavior EventName="TextChanged"
                                            Command="{Binding SearchMoviesCommand}"
                                            CommandParameter="" />
            </SearchBar.Behaviors>
        </SearchBar>
        <RefreshView x:DataType="local:MoviesViewModel" IsRefreshing="{Binding IsBusy, Mode=TwoWay}">
            <CollectionView x:Name="SearchMoviesView" ItemsSource="{Binding DiscoverMovies}" SelectionMode="None">
                <CollectionView.ItemTemplate>
                    <DataTemplate>
                        <views:MovieCell />
                    </DataTemplate>
                </CollectionView.ItemTemplate>
            </CollectionView>
        </RefreshView>
    </StackLayout>
</xct:TabViewItem>

The SearchBar control is a user input control for initiating a search. I also use UserStoppedTypingBehavior from Xamarin’s Community Toolkit to ensure that the search command executes only after the user stops typing for 500 milliseconds and has entered at least three characters.

Re-run the app and try searching for movies:

Finally, I will add a page to show movie details when you click on a movie on either of the two tabs.

First, rename ItemDetailPage.xaml and ItemViewModel.cs to MovieDetailPage.xaml and MovieDetailViewModel.cs.

The movie details page will display a poster image of the movie and the cast. It will also let you watch a trailer of the movie (with the help of MediaElement from Xamarin’s Community Toolkit):

<?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:views="clr-namespace:Xamarin.CommunityToolkit.UI.Views;assembly=Xamarin.CommunityToolkit" xmlns:viewModels="clr-namespace:MovieExplorer.ViewModels;assembly=MovieExplorer" xmlns:tmdb="clr-namespace:TMDbLib.Objects.Movies;assembly=TMDbLib" x:Class="MovieExplorer.Views.MovieDetailPage" x:DataType="viewModels:MovieDetailViewModel" Title="{Binding Title}" Padding="5" BackgroundColor="Gray"> <Grid ColumnDefinitions="*, 2*" RowDefinitions="3*, 2*"> <CollectionView ItemsSource="{Binding Movie.Credits.Cast}" VerticalOptions="Start"> <CollectionView.Header> <StackLayout> <Label HorizontalTextAlignment="Start" Text="Cast:" FontSize="Medium" FontAttributes="Bold" /> </StackLayout> </CollectionView.Header> <CollectionView.ItemTemplate> <DataTemplate> <Label Padding="10,2" x:DataType="tmdb:Cast" Text="{Binding Name}" TextColor="White" FontSize="Medium"></Label> </DataTemplate> </CollectionView.ItemTemplate> </CollectionView> <Image Grid.Column="1" Grid.Row="0" VerticalOptions="FillAndExpand" Aspect="Fill" HorizontalOptions="EndAndExpand" Source="{Binding Path=Movie.PosterPath, StringFormat='https://image.tmdb.org/t/p/w500{0}'}"></Image> <views:MediaElement Source="{Binding VideoUrl}" Grid.Row="1" Grid.ColumnSpan="2" ShowsPlaybackControls="True" AutoPlay="False"/> </Grid> </ContentPage>

The MovieDetailViewModel is simple: it loads the movie details from TMDB and builds a full URL for the trailer:

public class MovieDetailViewModel : BaseViewModel
{
    private int movieId;
    private string videoUrl;

    public string VideoUrl
    {
        get => videoUrl;
        set => SetProperty(ref videoUrl, value);
    }

    public int MovieId
    {
        get { return movieId; }
        set
        {
            movieId = value;
            LoadMovie(value);
        }
    }

    public Movie Movie { get; private set; }

    public async void LoadMovie(int itemId)
    {
        Movie = await MoviesService.GetMovie(itemId);
        Title = Movie.Title;
        OnPropertyChanged(nameof(Movie));
        if (Movie.Videos.Results.Any())
        {
            VideoUrl = await GetYouTubeUrl(Movie.Videos.Results[0].Key);
        }
    }

    public async Task<string> GetYouTubeUrl(string videoId)
    {
        var videoInfoUrl = $"https://www.youtube.com/get_video_info?html5=1&video_id={videoId}";
        using (var client = new HttpClient())
        {
            var videoPageContent = await client.GetStringAsync(videoInfoUrl);
            var videoParameters = HttpUtility.ParseQueryString(videoPageContent);
            var playerInfo = JObject.Parse(WebUtility.HtmlDecode(videoParameters["player_response"]));
            return playerInfo["streamingData"]["formats"][0]["url"].Value<string>();
        }
    }
}

Run the app, click on a movie, and you will be able to watch the trailer in the app:

Finally, it’s time to secure the app with Okta!

Add Authentication with Okta’s Xamarin SDK

I will use Okta to quickly and securely implement user authentication so that I don’t have to implement it from scratch or roll it into my own identity management system. Okta supports user authentication, multi-factor authentication, social authentication, and all OpenID Connect flows out of the box - it essentially takes care of any scenario you would ever need!

The Authorization Code flow with PKCE requires us to generate a code verifier - a cryptographically secure random string - and a code challenge created from the verifier. The app opens an external browser tab and passes the code challenge to the Okta authorization server, which stores the challenge, authenticates the user, and redirects the user back to the app with a temporary authorization code. Next, the app requests to exchange the authorization code for tokens and passes along the code verifier generated at the start. The authorization server recomputes the code challenge from the verifier, compares it with the stored challenge, and if the two values match, Okta returns the access and ID tokens to the app.

Okta’s Xamarin SDK implements the Authorization Code flow with PKCE so that you do not need to build it yourself.
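You never have to write this yourself when using the SDK, but as an illustration of what happens under the hood, the standard RFC 7636 construction of a verifier/challenge pair looks roughly like this (a sketch for reference, not Okta SDK code):

using System;
using System.Security.Cryptography;
using System.Text;

public static class Pkce
{
    // Generates a high-entropy code verifier (RFC 7636 allows 43-128 characters).
    public static string CreateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Base64UrlEncode(bytes);
    }

    // The code challenge is the Base64URL-encoded SHA-256 hash of the verifier.
    public static string CreateCodeChallenge(string codeVerifier)
    {
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
            return Base64UrlEncode(hash);
        }
    }

    // Base64URL encoding: standard Base64 with padding stripped and URL-safe characters.
    static string Base64UrlEncode(byte[] data) =>
        Convert.ToBase64String(data).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}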

To get started, I will use Okta CLI to set up an Okta application. Run okta apps create if you already have an Okta account, or run okta start to create an account first. Note the OrgUrl, as you will need it in a couple of minutes. Enter MovieExplorer for the app name and select 3: Native App (mobile) for the type of application. For Redirect URI, add .login to the suggested reverse domain name, and add .logout for the Post Logout Redirect URI.

The Okta CLI will configure a new OIDC application and output its Client ID, which you will also need:

Next, add the Okta.Xamarin package to all projects and add Okta.Xamarin.Android and Okta.Xamarin.iOS to your respective projects.

Now, you need to make a platform specific configuration. Create an OktaConfig.xml file in the Assets folder of the Android project, and add the following content:

<?xml version="1.0" encoding="utf-8" ?>
<Okta>
  <ClientId>{ClientId}</ClientId>
  <OktaDomain>{yourOktaDomain}</OktaDomain>
  <RedirectUri>com.okta.dev-7462271.login:/callback</RedirectUri>
  <PostLogoutRedirectUri>com.okta.dev-7462271.logout:/</PostLogoutRedirectUri>
</Okta>

Make sure to replace {ClientId} and {yourOktaDomain} with the correct values. Also, RedirectUri and PostLogoutRedirectUri must match the values you’ve entered in the Okta cli when configuring the app.

Next, open the MainActivity.cs and change the MainActivity class to inherit from OktaMainActivity<App>:

public class MainActivity : OktaMainActivity<App>

Override OnSignInCompleted and OnSignOutCompleted methods:

public override async void OnSignInCompleted(object sender, SignInEventArgs signInEventArgs)
{
    await Shell.Current.GoToAsync("//MoviesPage", true);
    var user = await OktaContext.Current.GetUserAsync<UserInfo>();
    ((AppShell)Shell.Current).User = user;
}

public override void OnSignOutCompleted(object sender, SignOutEventArgs signOutEventArgs)
{
    Shell.Current.GoToAsync("//LoginPage", true);
}

Finally, add two new activities to intercept login and logout redirects:

[Activity(Label = "LoginCallbackInterceptorActivity", NoHistory = true, LaunchMode = LaunchMode.SingleInstance)]
[IntentFilter(actions: new[] { Intent.ActionView },
    Categories = new[] { Intent.CategoryDefault, Intent.CategoryBrowsable },
    DataSchemes = new[] { "com.okta.dev-7462271.login" },
    DataPath = "/callback")]
public class LoginCallbackInterceptorActivity : OktaLoginCallbackInterceptorActivity<MainActivity>
{
}

[Activity(Label = "LogoutCallbackInterceptorActivity", NoHistory = true, LaunchMode = LaunchMode.SingleInstance)]
[IntentFilter(actions: new[] { Intent.ActionView },
    Categories = new[] { Intent.CategoryDefault, Intent.CategoryBrowsable },
    DataSchemes = new[] { "com.okta.dev-7462271.logout" },
    DataPath = "/callback")]
public class LogoutCallbackInterceptorActivity : OktaLogoutCallbackInterceptorActivity<MainActivity>
{
}

Again, the values in the DataSchemes must match the values that you entered in the Okta cli when configuring the app.

This concludes the configuration for the Android app. Now, let’s prompt the user to sign in with Okta.

Right click on the Views folder in the shared MovieExplorer project, click Add->New Item, select Content Page, enter StartupPage as a name and click Add. Open StartupPage.cs and add OnAppearing method:

protected override async void OnAppearing()
{
    base.OnAppearing();
    string token = "";
    try
    {
        // should check for valid login instead
        token = OktaContext.Current.GetToken(TokenKind.AccessToken);
    }
    catch (Exception)
    {
    }
    finally
    {
        // only open Login page when no valid login
        if (string.IsNullOrEmpty(token))
        {
            await Shell.Current.GoToAsync($"//{nameof(LoginPage)}");
        }
        else
        {
            await Shell.Current.GoToAsync($"//{nameof(MoviesPage)}");
        }
    }
}

When the StartupPage appears, the user is redirected to the login page if the access token is not present.

The login page contains a login button that invokes the LoginCommand and kicks off the authentication process:

public class LoginViewModel : BaseViewModel
{
    public Command LoginCommand { get; }

    public LoginViewModel()
    {
        LoginCommand = new Command(OnLoginClicked);
    }

    private async void OnLoginClicked(object obj)
    {
        await OktaContext.Current.SignInAsync();
    }
}

When the user signs in, the OnSignInCompleted method fires in MainActivity, and the app will navigate to the MoviesPage.

Next, add the StartupPage to the AppShell.xaml so that it is the first page that the app loads upon launch:

<TabBar>
  <ShellContent Route="StartupPage" Shell.FlyoutBehavior="Disabled" ContentTemplate="{DataTemplate local:StartupPage}" />
</TabBar>

Also, add an event handler to the Logout menu item in AppShell.xaml.cs:

private async void OnMenuItemClicked(object sender, EventArgs e)
{
    await OktaContext.Current.SignOutAsync();
}

Finally, add a header to the AppShell.xaml to display the currently logged in user details:

<Shell.FlyoutHeaderTemplate>
  <DataTemplate>
    <StackLayout>
      <Label Text="{Binding User.Name}" TextColor="Black" Margin="0,5,0,0" FontSize="Large" HorizontalTextAlignment="Center" VerticalTextAlignment="Center" />
      <Label Text="{Binding User.PreferredUserName}" TextColor="Black" Margin="0,0,0,10" HorizontalTextAlignment="Center" VerticalTextAlignment="Center" />
    </StackLayout>
  </DataTemplate>
</Shell.FlyoutHeaderTemplate>

Run the app, click the Login button, and sign in with your Okta credentials. On the Movies page, open the flyout menu, and you should see your user account details.

That’s it! Okta’s Xamarin SDK simplifies using Authorization Code flow with PKCE to a couple of method calls.
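
To put that claim in concrete terms, here is a minimal recap sketch. It is not part of the tutorial project; it simply collects the SDK calls already used above, and it assumes that OktaContext has been initialized with your Okta org settings and that the using directive for the Okta Xamarin SDK namespace is in place.

// Recap sketch: the essential Okta Xamarin SDK calls used in this tutorial.
// Assumes OktaContext.Current has already been initialized with your Okta org settings.
using System.Threading.Tasks;
using Okta.Xamarin; // assumed namespace for the SDK types used below

public class OktaFlowRecap
{
    public async Task SignInThenOutAsync()
    {
        // Launches the browser-based Authorization Code flow with PKCE.
        await OktaContext.Current.SignInAsync();

        // After OnSignInCompleted fires, tokens are available locally.
        string accessToken = OktaContext.Current.GetToken(TokenKind.AccessToken);

        // Retrieve the signed-in user's profile claims.
        var user = await OktaContext.Current.GetUserAsync<UserInfo>();

        // Sign the user out; OnSignOutCompleted fires when the browser round trip finishes.
        await OktaContext.Current.SignOutAsync();
    }
}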

Learn More About Xamarin, OpenID Connect, and Okta

I hope this tutorial was interesting to you and that you enjoyed it. You can get the full source code of the project from GitHub. For more Xamarin and Okta articles, check out these posts:

Easy Xamarin Essentials with Web Authenticator
Build Login in Xamarin with Xamarin.Forms
Implement the OAuth 2.0 Authorization Code with PKCE Flow
Okta Xamarin SDK

Make sure to follow us on Twitter and subscribe to our YouTube Channel so that you never miss any excellent content!


Indicio

The decentralized identity revolution is powered by machine-readable governance

Machine-readable governance is the key to making decentralized identity accountable, effective, and trustworthy.

Much is talked, much is written about governance in decentralized identity. “Trust frameworks,” “trust marks,” “transitive trust” all form part of a lexicon that can be confusing to those looking to enter the field, develop a product or service, or find a simple solution to a problem. In part, this is because the word “governance” is used to cover a lot of ground, from corporate structure and network policy, to responsibilities, values, policies, and legal agreements. Corporate and technical governance can overlap to the point where it’s easy to end up thinking that governance is a problem that needs to be solved before we can use the technology.

There are many reasons why this isn’t true. From a design perspective, it’s impossible to lay out all the rules of an information ecosystem in advance, and unwise to try and do so, given that an ecosystem must be able to grow organically to become an ecosystem. The architecture of decentralized identity has clear and simple constants defining its essence: privacy by design, security by design, permanent and portable identity for everyone and everything. The key to sustainable innovation is to combine these constants with flexible parameters. This allows a simple set of rules to generate an infinite number of variations. If all these varieties are interoperable, we end up with a network of networks.

All of this is not to understate the value of paperwork—and the need for common agreement around issues like interoperability. Mars will need a constitution—and a great one. But we must get there first.

Machine-readable governance enabled by Agents
The thing is, we have a way to enable clear, flexible governance right now through Agents and machine-readable governance. Agents are the software that allow data to be shared and authenticated by consent between parties, and this makes them the most important governance entities in decentralized identity. That's because they are programmable with governance rules that define how digital information can be accepted, exchanged, verified, updated, and revoked. They enable people and organizations to trust an exchanged credential.

When Indicio deployed its decentralized ecosystem for a pilot on digital COVID credentials in Aruba (the ecosystem is now called Cardea and is a project of Linux Foundation Public Health), Agents containing machine readable governance made the whole system work:

Tourists downloaded a “digital wallet” agent to accept a COVID test credential
The COVID testing lab used an agent to issue a test credential
The government used an agent to verify the test credential and then issue a proof-of-test credential to the tourist
Hospitality businesses across the island used downloadable verifier agents to scan and verify the government issued credential when the tourists showed up

Machine-readable governance allowed the government to implement its COVID test policies quickly within an architecture for privacy-preserving technology. People knew what was needed. People knew that changes could be quickly dialed into the governance if information changed. And above all, people knew who to trust. Instead of starting with top-down rules, the combination of Agents and machine-readable governance enabled bottom up governance to get the job done.

Start local, scale global
This is a critically important point. While decentralized identity is a new and unfamiliar concept to many people, most have an intuitive grasp of decentralized governance. In politics, it is synonymous with the view that local knowledge will drive better governance for the governed than a decision taken far away; in the European Union, this is the principle of subsidiarity: Power should be exercised as close to the citizen as possible.

When it comes to decentralized identity, this is why figuring out what works at the local or hyperlocal level becomes a powerful source of trust. Because specific, local knowledge is essential to building transparent, competent, and reliable solutions—the three distinguishing features of trustworthiness, according to philosopher Onora O’Neill—the technology is more likely to be trusted and adopted when it can incorporate specific, local knowledge.

When we have learned what works to solve a myriad of local problems, we can then scale up to governance at a global level. The governance frameworks created as we deploy these solutions are not going to be mutually exclusive; but the search for one framework to rule them all now is likely to be fruitless at best and a hindrance at worst.

Instead, we need to focus on maximal privacy through minimal rules, and let the interplay of agents and machine-readable governance handle all the pre-existing, national, international, sector-specific, and institutional governance frameworks that will need to be accommodated as decentralized identity expands across the globe.

If you want to know how Indicio can deploy machine-readable governance to solve your identity and authentication needs, contact us.

The post The decentralized identity revolution is powered by machine-readable governance appeared first on Indicio Tech.

Tuesday, 20. July 2021

Caribou Digital

Researching Gender and Platform Livelihoods in Ghana


By: Akosua K. Darkwah & Nana Akua Anyidoho, University of Ghana

Ads for Glovo (food delivery) and Bolt (ride-hailing) services in Accra. (Photo credit: Nana Akua Anyidoho & Akosua K. Darkwah)

Over the course of 2021, the University of Ghana and Caribou Digital, with the support of the Mastercard Foundation, will undertake a study to understand the impact of COVID-19 on young women’s experiences working and selling through online platforms in Ghana.

Women in Ghana have a long history of participation in the economy as workers and owners of enterprises. Therefore, the nature of Ghanaian women’s work, including the extent to which it is empowering, has long been of interest to researchers. With the increasing digitization of work, more so in the wake of the COVID-19 pandemic, questions about the content and impact of women’s work have gained more urgency. This background informs our study of women’s platform livelihoods in Ghana, using a female empowerment approach. The study also builds on previous work on platform livelihoods by both Caribou Digital and researchers at the University of Ghana.

An Overview of Ghana’s Digital Economy

Ghana’s mobile phone and internet penetration rates are 140% and 66.8% respectively, and its social media penetration rate is 23%. On the African continent, Ghana ranks third in private sector digital platforms after Nigeria and South Africa; it has a total of 72 platforms, 42% of which are local in origin. Ghana’s score on the UN’s E-Government Development Index (EGDI) is also impressive. The EGDI is a weighted average of normalized scores on three dimensions of e-government, specifically the scope and quality of online services, the extent of development of telecommunication infrastructure, and the inherent human capital. In the 2018 survey, Ghana was the only African country to transition from a middle-level to a high-level EGDI.

Ghana’s achievement on these indices can be partly attributed to a range of government policies aimed at improving the e-print of the country over the last decade and a half. Among these policy initiatives is the National ICT for Accelerated Development Policy designed to promote the participation of young people in Ghana in the global digital economy. The government has also rolled out training programs to provide young people with skills to participate in the gig economy. One such initiative, the Digital Marketing and Entrepreneurship Program, offered a three-month training course for 3000 young people who were then to be absorbed by Ecobank, a Pan-African financial company. In the digital transformation of public services, such as in the renewal of national health insurance, the government itself offers a source of employment for young people with digital skills.

There are also non-state actors that support young people interested in digital technology. A pioneer in this respect is Meltwater Entrepreneurial School of Technology (MEST), which was set up in 2008 to provide training, mentoring and financial support to potential tech entrepreneurs. More recent examples include iSpace Ghana which received an award from Google in recognition of its diversity and inclusion programming. Many of these training organizations are, however, located in Accra, limiting opportunities for individuals living outside the capital. The city is also the site of the Accra Digital Centre, a technology park, and Google’s first AI centre on the continent.

While a 2019 World Bank Group report describes Ghana as an average performer in terms of global entrepreneurship and innovation, some of its digital companies have gained global recognition, including mPedigree, mPharma, Logicel, Rancard Solutions and SoftTribe, the last of which is owned by Herman Chinery-Hesse, a man who has been called Africa’s Bill Gates. Currently, there are 96 active digital commerce platforms in Ghana, about half (53) of which are of African origin. The most common of these are freelancing platforms and rental platforms such as Airbnb. Indigenous platforms that have made an imprint on the landscape include Esoko, which started as a service to provide content support to farmers. There is also a range of e-commerce shops, some of which are businesses designed entirely with digital platforms in mind, such as Wear Ghana, and others that are additions to traditional stores. Since the pandemic, some businesses have been organizing webinars that allow them to operate in a safe manner and also harness skills and knowledge worldwide. An example is the virtual trade fair organized for the fruit and vegetable industry.

The country has a fairly robust electronic payment infrastructure to support online commerce. By 2017 estimates, mobile money transactions are used primarily for remittances (43%), payment of utility bills (8%) and wages (7%). Nonetheless, close to 99% of transactions still involve cash, although the pandemic led to an increase in cashless financial transactions, with a 50% growth in the amount of money sitting on mobile accounts between June 2019 and June 2020. The availability of mobile money transactions has encouraged the development of other services such as micro-insurance for low-income earners in the informal economy.

Who is engaged in platform livelihoods in Ghana?

Platform livelihoods are “active human efforts, sometimes combined with tools or assets, deployed to create value outside of the constructs of a stable employer-employee relationship, mediated by the infrastructure and accompanying logic of digital platforms”. Platform livelihoods are made up of platform work (e.g. Glovo food delivery or working on Upwork.com) and platform sales.

A study by Research ICT Africa in Rwanda, Tanzania, Kenya, Mozambique, Ghana, Nigeria and South Africa suggests that, in general, people with university education are not attracted to platform livelihoods because of lower earnings relative to other work opportunities available to them. However, this depends on labour market conditions in individual countries. Even in Ghana, where platforms are dominated by workers and sellers with secondary education, platform work may be seen as a viable livelihood opportunity even by university graduates. One study describes a young Ghanaian graduate who quit his job to work full-time on Upwork.com, a freelancing platform. While he earned a higher income than he might have in non-platform employment, he reported working 48 hours without sleep to complete a task on time, mindful that poor client ratings could greatly undermine his future work options.

Although the number of individuals involved in platform work in Ghana is fairly low, women are well-represented in this form of employment. We do not have sex-segregated data across the different forms of platform livelihoods in Ghana, but there are some statistics that give us some indication from studies of specific forms of platform livelihoods. The survey by Research ICT Africa in seven countries in Africa estimates that 2% of Ghana’s population is involved in platform work and, further, that slightly more women (2.1%) are involved than men (1.9%). However, men and women tend to be engaged in different forms of platform livelihoods. Uber is a good example of this phenomenon. The company set up shop in Accra in June 2016 and extended its operation in 2017 to Kumasi, the second largest city. By 2018, it had registered over 3,000 driver partners on its platform. Although the company publicly declared its commitment to recruiting female driver partners, its driver partners remain almost exclusively male.

The few existing studies on platform livelihoods in Africa indicate that it is a sector dominated by young people. While we do not have the statistics to confirm this for Ghana, young people are often the target of various initiatives by the state, business owners, and development partners to improve their access to platform work. With funding from the Mastercard Foundation and Solidaridad, the Springboard Road Show Foundation is running a Coronavirus Recovery and Resilience Program (CoRE). This program offers young people a guide to sustainable livelihoods in a post-pandemic world. Others have gone a step further to support women in setting up online businesses. For instance, the Mastercard Foundation, through its Young Africa Works program, has partnered with the Ghana Enterprise Agency and Lokko House to set up an online retail shop that will enable women-owned and -led enterprises to promote their products and connect with customers internationally.

Researching Gender and/in Platform Livelihoods

Despite their participation in the sector, Ghanaian young women's experiences of platform livelihoods, and the implications for their lives, are an underexplored subject. Our study will examine the extent to which participation in platform livelihoods empowers Ghanaian women. It will also explore the impact of the COVID-19 pandemic on women's participation in platform work and sales. The focus will be on young women aged between 18 and 35 years living in urban communities where the digital infrastructure is fairly well developed. What are women's experiences of working on platforms? How has this changed in the wake of the global pandemic? Above all, what are the enabling and constraining factors for meaningful and dignified work for female platform workers and sellers in Ghana? Specifically, how successful are policy and programmatic initiatives in boosting young women's participation in this sector and in ensuring that their experiences are empowering?

In order to answer these questions, we will use a mixed-methods approach. In addition to content analysis of online newspapers, we will employ a range of qualitative methods including expert interviews, in-depth interviews, and focus group discussions with young female platform workers and sellers. We will also survey young women performing platform work across different segments.

We will adopt participatory video storytelling as part of the research methods, to allow us to dig deeper into the lived experiences of select female workers. We will provide them with the tools (a phone, video training and mentoring) to tell their story in their own words. See our previous participatory video storytelling work here.

We are looking forward to this research and would welcome any feedback at anyidoho@ug.edu.gh and adarkwah@ug.edu.gh. You can also follow us on Twitter on @cariboudigital and @NAnyidoho.

Researching Gender and Platform Livelihoods in Ghana was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Zero Trust Through Identity-Based Segmentation

As workers become more mobile and workloads move into the cloud, the traditional model of enforcing security at the network perimeter becomes ineffective. A Zero Trust model of strict identity verification and access control for every user or workload offers an alternative that secures data while ensuring it is accessible to those who need it.





Finicity

Balance Insights for Money Movement


You need to move money, but with that comes risk. How do you know a customer has enough funds available in their account? The answer: balance checks. With balance checks, you can guarantee that a customer has sufficient funds to facilitate confident money movement. And with live balance checks, you can get the most accurate, most up-to-date account information. Let’s dig deeper into how that works.

What Are Balance Checks?

Many payment solutions rely on balance checks to ensure that an account has the funds necessary to make a payment. For card payments, this comes in the form of a network authorization. For ACH transfers, balance checks help mitigate financial risks like payment failures, overdraft fees, and NSF fees, all of which can happen if an account holder tries to move money they don’t have. 

Why Implement Balance Checks?

Balance checks deliver visibility into a customer’s bank account balance so payment facilitators can move money confidently and mitigate risk. Implementing balance checks into your workflow can produce benefits like:

Streamlined payment experiences
Reduced ACH payment errors
Higher rate of payment success
Less money lost due to payment failures
Better decision-making thanks to more accurate, up-to-date data
Improved consumer experience with fewer fees and more successful payments

Altogether, balance checks help payment facilitators build better experiences for consumers that in turn yield better business outcomes.

How Finicity Enables Live Balance Checks for Streamlined, Lower-risk Account Funding

As part of our Finicity Pay solution set, we offer several balance check options so that payment facilitators can enhance the payment experience and move money confidently. Some balance checks are cached, meaning that you receive lightning fast account balance data with timestamps. For the greatest accuracy, live balance checks deliver real-time, instant balance information at the moment of the request. This might be ideal for higher value or riskier payments.

Our live balance call delivers real-time balance information (both cleared balance and available balance) complete with time stamps directly from a consumer’s financial institution. And thanks to the speed of our high-quality data connections, you can get that important balance information in seconds.
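
As a rough illustration of what calling such an endpoint could look like from application code, here is a hypothetical sketch. The URL path, header names, and identifiers below are assumptions for illustration only and should be checked against Finicity's current API documentation; this is not a definitive description of the Finicity Pay interface.

// Hypothetical sketch only: the endpoint path, header names, and token values are
// illustrative assumptions, not a definitive description of Finicity's API.
using System.Net.Http;
using System.Threading.Tasks;

public static class LiveBalanceSketch
{
    public static async Task<string> GetLiveBalanceAsync(
        HttpClient http, string customerId, string accountId, string appKey, string accessToken)
    {
        // Assumed URL shape for a live (real-time) balance request.
        var url = $"https://api.finicity.com/aggregation/v1/customers/{customerId}" +
                  $"/accounts/{accountId}/availableBalance/live";

        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("Finicity-App-Key", appKey);        // assumed header name
        request.Headers.Add("Finicity-App-Token", accessToken); // assumed header name
        request.Headers.Add("Accept", "application/json");

        // The JSON response would carry the available and cleared balances plus a timestamp.
        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}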

For use cases that may not require a live balance check, we also offer balance checks that occur throughout the day at regular intervals so you can retrieve lightning fast results.

These balance check solutions rely on our open banking platform, which is founded on secure data connections that ensure a seamless, low-friction experience, and on consumers permissioning their data for each use. At Finicity, we empower the consumer to benefit from their financial data. That means they remain in control of their data and get a trustworthy experience.

Live balance checks aren’t the only payment-experience-improving solutions that Finicity offers. To complement live balance checks, our Finicity Pay solution set also enables you to verify:

Account and routing number
Account owner and address

Live balance checks are the key to more successful, more confident money movement. To see Finicity’s instant balance insights in action, request a demo today.

The post Balance Insights for Money Movement appeared first on Finicity.


Global ID

GiD Report#169 — It’s all about creators

GiD Report#169 — It’s all about creators

Welcome to The GiD Report, a weekly newsletter that covers GlobaliD team and partner news, market perspectives, and industry analysis. You can check out last week’s report here.

This week:

It’s all about creators
Facebook gets it, but they’re playing catchup
What’s up with Discord?
Chart of the week
More Facebook clones
Stuff happens

1. It’s all about creators

Tech YouTuber Linus Sebastian of Linus Tech Tips. Photo: Linus Media Group

And when it comes to the creator economy, it’s about tools, not the actual influencers.

Here’s the NYTimes:

The creator economy, which provides digital tools to influencers and helps them run their businesses, is a huge, largely unexplored market. The venture capital firm SignalFire estimates that 50 million people around the world consider themselves content creators, while the technology news site The Information estimates that venture capital firms have invested $2 billion into 50 creator-focused start-ups so far this year.
Subscription services like OnlyFans and Patreon, where fans pay creators for access to premium content, also helped investors realize there was a strong business case for building tools for creators. Now the word “creator” has become a buzzword, appended to all types of businesses to attract investors. So much so that Alexander Finden, a tech entrepreneur, coined the term “creator washing.”
“There are more creator economy start-ups than creators,” Turner Novak, founder of Banana Capital, which invests in early-stage tech start-ups, joked on Twitter in April.

It’s never been easier to be a creator — from content creation to discovery and distribution to monetization. The next big phase though will be all about giving creators better and direct access to their trusted communities and giving them ultimate control of their creations.

Hello, Content Creators. Silicon Valley’s Investors Want to Meet You.

2. Facebook gets it, but they’re playing catchup

Here’s Vox on Facebook’s concerted effort to attract (and pay) creators:

Facebook has nearly 2.9 billion users, so lots of people use Facebook to reach that giant audience. But Facebook wants even more people posting more stuff on its platforms, so it’s going to pay out $1 billion by the end of 2022 to encourage creators — people who make internet content for fun and profit but generally aren’t running full-fledged media companies — to make stuff for Facebook and Instagram. The impetus here is clear: Facebook wants more engaging stuff on its apps, and it’s also trying to compete with the likes of TikTok, Snapchat, and YouTube.
But it’s worth noting that there’s a meaningful difference between Facebook’s newest gambit and the one that Google’s YouTube has been using to great success: Facebook, for now, is giving creators a lot less money.
When you make stuff for YouTube, you get a chance to make money the same way YouTube makes money — from ads that run next to the videos you upload to YouTube. At Facebook, though, there are two different pools of money: One is generated by ads connected to the videos and photos you post on Facebook, and the other is generated by ads everywhere else on Facebook. The first pool is the one that Facebook’s creators can access. The other one is really, really, big. And that’s the one Facebook is keeping all for itself.
This is one of those that’s a little easier to understand with visual aids. So: Here’s a YouTube video by Mr. Beast, the site’s most popular creator. YouTube gets paid for the ads that run before and during the clip, and Jimmy Donaldson, the 23-year-old behind Mr. Beast, gets 55 percent of the revenue those ads generate.
YouTube can also make money other ways, like selling banner ads on its homepage. But the vast majority of its money comes from ads attached directly to the videos it shows to more than 2 billion people every month. So YouTube is directly aligned with the people who generate the stuff that powers YouTube.
Which means there’s a real money gap for creators who thrive on Facebook versus those on YouTube; it’s why top YouTube creators like Donaldson stick with YouTube instead of trying to branch out onto other platforms. And it’s why YouTube says it paid out $30 billion to its content partners over the last three years.

So while Facebook has users and communities, that lack of alignment will continue to hurt its ability to attract top creators to its platform.

Facebook wants creators, but YouTube is paying creators much, much more
Inside Facebook’s Data Wars

3. What’s up with Discord?

It’s not all about advertising though — a line Discord has stuck with.

Here’s the WSJ:

Discord nearly tripled its revenue last year solely by selling subscription access to exclusive perks for users. By contrast, the companies behind other free online hangouts — including Facebook Inc., Twitter Inc. and Snap Inc. — primarily sell targeted ads built around sharing users’ personal information.
In an interview, Discord co-founder and CEO Jason Citron, 36 years old, said the company has balked at the advertising model favored by its peers because ads would be too intrusive. People use Discord to hold conversations in real time, he said, as opposed to passively reading, making or commenting on posts. He also said he thinks that consumers in general dislike ads and don’t want their data shared with brands.
“We really believe we can build products that make Discord more fun and that people will pay for them. It keeps our incentives aligned,” he said.

Again with the aligned incentives.

What Is Everybody Doing on Discord?
Discord buys Sentropy, which makes AI software that fights online harassment — TechCrunch

4. Chart of the week

5. In the meantime, get ready for more Facebook clones

Like privacy focused, subscription based MeWe — as per Axios:

MeWe, a subscription-based social media platform that bills itself as a privacy-focused alternative to Facebook, is looking to raise two new rounds of cash to help fuel its growth, its founder and chief evangelist Mark Weinstein tells Axios’ Sara Fischer.
The big picture: MeWe is part of a growing class of alternative social media networks that have gained traction among users who feel they’ve been unjustly censored by mainstream social platforms like Facebook and Twitter.
6. Stuff happens

Via /j — Judge allows Ripple to depose SEC official who decided ETH is not a security
Via /j — Datakeeper Nederland on LinkedIn: Datakeeper is live!
Via /junhiraga — Blockchain-Based Fantasy Soccer Platform Sorare to Raise $532M in Funding: Report — CoinDesk
Apple, Amazon, Google and Facebook Face at Least 70 Antitrust Probes, Cases
Via /junhiraga — Will conversational commerce be the next big thing in online shopping?
Via /rcb — Affirm shares fall on report Apple is working with Goldman Sachs to offer buy now, pay later service
The World Bank, IMF and BIS push for central bank cryptocurrencies to improve cross-border payments
Powell: A CBDC Would Make Cryptocurrencies Obsolete — Blockworks
Via /toddjcollins — Ex-Plaid employees raise $30M for Stytch, an API-first passwordless authentication platform — TechCrunch
Transmit Security raises $543M Series A to kill off the password — TechCrunch
Google “bought off Samsung” to limit app store competition, 36 states allege
Evolving RippleNet for a Tokenized Future | Ripple
Can Ripple Be a Player in the NFT World? — Decrypt
What Biden’s data portability push means for banks
BlockFi Releases Visa Rewards Credit Card to US Clients — Blockworks
Visa Partners with 50 Crypto Platforms on Card Programs — Blockworks
Sen. Warren warns of cryptocurrency risks, presses SEC on oversight authority
Via /vs — Voice cloning of growing interest to actors and cybercriminals

GiD Report#169 — It’s all about creators was originally published in GlobaliD on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

Creating an HTTP API with Ktor and Kotlin

Explore the details of a simple Ktor application exposing an HTTP API

KuppingerCole

EIC Speaker Spotlight: Gerald Horst on Effective Integration of Marketing, CRM, and Customer Identity and Access Management


by Warwick Ashford

Gerald Horst, PwC’s Digital Identity team leader in EMEA, is to deliver a presentation entitled Modernizing the Customer Journey at the European Identity and Cloud Conference 2021.

To give you a sneak preview of what to expect, we asked Gerald some questions about his planned presentation.




Why is there an increasing interest in digital transformation and how has the pandemic impacted digital transformation projects?

Well, every year, PWC surveys CEOs across the globe about their strategic priorities in this “new normal”. This year, actually more than 5,000 CEOs, which is more than ever before, responded, and by the way, half of them were our CEOs from Europe. And first of all, we found out, most of them are very positive about the economy and the business outlook for their respective firms. And almost half of them are investing with double digit numbers in digital transformation, especially since most of these companies started their transformation efforts due to the pandemic. By the way, digital transformation, we defined it as any large-scale change to the way a company operates their business, and that is supported by the development of an enabling technology. So, why are companies investing in digital transformation?

We learned from the investigation (and many of them are already on a transformation journey) that they are hoping to realize different benefits in two years from now, including cost savings, revenue growth, product innovation, product enhancement, gathering data and improving their decision-making. But what is apparent is that the organizations want to transform to make the best of technology advantages. There is a lot changing: AI, 5G, and they want to use digital to enhance the customer experience, provide a more competitive set of products and services to their customers, generate data and improve their offering by creating customer insights and really acting upon the information they have at hand.


What are companies doing right when transforming successfully?

It's interesting because we also wanted to know, by surveying companies and understanding their digital transformation, who the winners actually are. And winning means growing in a disruptive market, achieving the objectives that they set out for themselves at the beginning of the transformation, and we asked them specifically about what the success factors are. And we got over 500 responses from CEOs, and there were five key things: First and foremost is that the winning companies actually put [the] customer at the center of the transformation. They really, really relentlessly focus on the customer experience. They know their customers very well and they understand the customer needs and behaviors.

Secondly, they also understand that trust is a key market differentiator. So the winning companies have a clear purpose in life, and they keep their promises to their customers and to their employees.

Thirdly, the winning companies are the ones that are investing in enterprise-wide process and data capabilities. They connect from front-end with back-end. They secure their critical infrastructure and enable all that through key technology implementations. And this is probably one of the most difficult tasks because doing company-wide activities [and] going after a company wide objective, requires companies to really break down the silos. Both functional and political. So that's all about stakeholder management.

Fourthly it's about leadership. I mean, CEOs who sponsor digital transformations, they need to play an active role in the transformation and they should not delegate. And then last but not least, of course, the role of technology. We must acknowledge that as much as technology is supporting the vision, it is also the ability to differentiate themselves by using that technology well. Like I said, much focus is on AI and 5G, yet winning companies have more focus on technologies that enable the digital experience for both their customers and their employees than to focus on technologies that drive efficiencies and savings.


Given that there is increased investment in driving online business, why is Consumer Identity & Access Management a critical success factor in any digital transformation?

So, like I said, companies that are going through a digital transformation and who actually are the “winning” companies, they put the customer in the center. Relentless focus on customer behavior [and] on customer needs. If you're in the industry, you know what Customer Identity and Access Management is all about. Companies that are successful in transforming are clearly looking for a common set of factors when they select technologies: ease of use and faster processes are considered the most important attributes to encourage adoption of technologies by customers, and [that] makes them the most essential features to get really right upfront. And implementing Customer Identity and Access Management is providing exactly that, and moreover, it optimizes security and privacy capabilities, which is adding to the trust factor. Like I said before, trust is a competitive factor, and this brings us to the topic of Zero Trust.


What is the relevance/value of a Zero Trust approach to security in this context?

Well, identity really is the new perimeter. If you look at it from a customer's perspective, I was doing business online, but also it's the new perimeter from an organization's perspective that is securing its business. It has been investigated that by 2023, I believe 75% of security failures will result from inadequate management of identity. So access and privileges. This is growing. I mean, we can see a staggering amount of hacks, attacks etc. happening every day. We are learning fast from many breaches that cyber-crime has actually become an industry, and the breaches are mostly the direct result of weak or compromised access and identity management processes. So I've discussed the topic of Zero Trust over the past year quite a few times in webinars. It's what actually stands for "trust nobody" [or trust] nothing [that] comes from both internal and external networks. So from this, identity is the new perimeter. We need to verify the identity of the requester before granting access – [that] is clear. And we need to make sure that any access provided is compliant with rules and regulations.


How should organizations go about balancing user experience, cost, and security requirements?

Right, now we know from experience, that our clients need to find the correct balance between customer experience, security, and costs when they are transforming the front end and when they're introducing consumer identity and access management solutions. This needs to involve a lot of stakeholders, ranging from financial, to IT, to marketing, to compliance and security. And as such, you need to really break down those silos in organizations. Functional [and] political silos, to get them all aligned, and to enable their company-wide transformation. And this means that you have to work on having a common vocabulary and have the stakeholders understand each other. For that purpose, we at PWC, we really looked at this at this challenge and we developed over the past two years a customer identity game, which is a form of a gamification that is very innovative and accelerates the start and the facilitation of the identity programs that we are doing with our clients and the technology implementation that come from it. And by playing games with our stakeholders, they do actually come to this common vocabulary, and they really understand each other. They understand the role of digital identity and the strategy they have to automate related [to] onboarding as an example, much faster than before. And the design and implementation of solutions is also much done, much more effectively and playing games is fun.


What types or organizations are leading successful projects in terms of integrating marketing, CRM, and CIAM?

Well, I mean, my answer would say that all business to consumer companies are adapting to changing customer and market conditions. So, they are really implementing consumer identity and access management programs. They're integrating it - well, they're implementing CRM solutions, whether they're integrating or not, that's I think a little bit further down the road. But of course, you've seen that in financial services, and also in retail, but especially in financial services, banks, insurance companies, they have been implementing Consumer Identity and Access Management way before others due to competitive pressures, but also [due] to [the] compliance needs they have.


Do you see CIAM becoming increasingly important in future for online business models?

Well, absolutely. I think, you know, like I said before, what you see is that companies are going through digital transformations for a number of reasons, that the companies that are actually doing that successfully, need to put the customer in the center of that transformation.

They need to understand what their customer actually needs. It's all about ease of use, it's about building trust. It's about automating processes and making it agile and speeding up those processes. So yes, CIAM is going to be increasingly important for that, because basically that's what CIAM is offering. It's, it's offering ease of use, it's offering faster processes, it's offering trust by means of security and privacy capabilities. Yes, so I do believe that CIAM is increasingly gaining importance. It actually is a critical success factor for any digital transformation that is going to be successful.


Do you think this will be influenced by that fact that platforms like Salesforce and ServiceNow are increasingly taking over Identity & Access processes?

Yes, of course. Clearly, platforms like Salesforce and ServiceNow that you mentioned are increasingly supporting clients to transform their businesses into online business models, which is why these providers themselves also are offering wider and deeper IAM and CIAM capabilities than ever before. And as PWC, we have both, you know, both these platform teams and skills, and we have these identity teams and skills in house. We get more questions from clients that are utilizing Salesforce platforms, for example. And they ask us to investigate whether a CIAM from Salesforce can actually deliver the ROI that they are looking for.


What can attendees of EIC expect in your presentation?

I look very much forward to being in Munich. I hope that it will be a physical event and I really hope to present [on site] and I look forward to that. I will, in my presentation, make clear that implementing powerful Customer Identity and Access Management and integrating marketing and Customer Relationship Management functions with Customer Identity and Access Management, if done well, of course, is [a] critical success factor for any digital transformation. And it will help business owners achieve the ROI that they're looking for. I will also share how to get there in terms of an effective way to implement CIAM and integrate a CRM. And especially I will focus on how to break down those silos, those functional and political silos - bring stakeholders together, manage their expectations, keep them aligned. And thirdly, I will discuss a very interesting client case, basically to clarify my messages, all that in 20 minutes, making identity matter, together with PwC. I look forward to it. Thank you.



Ocean Protocol

OceanDAO Round 8 is live!

What you need to know for Round 8 of OceanDAO

OceanDAO Grants

Hello, Ocean Community!

For those new, OceanDAO is a community-curated funding system directed towards projects building in the Ocean Protocol ecosystem.

The Ocean ecosystem becomes self-sustainable as the builders of the Web3 data economy leverage Ocean Protocol to create products, services, and resources that the community finds valuable.

Grant Funding Categories:

Building or improving applications or integrations to Ocean
Community or developer outreach (grants don’t need to be technical in nature)
Unleashing data
Building and/or improving core Ocean software
Improvements to OceanDAO itself

For up-to-date information on getting started with OceanDAO, we invite you to get involved and learn more about Ocean’s community-curated funding on the OceanDAO website.

The goal is to grow the DAO each round. We encourage the $OCEAN ecosystem to apply or reapply AND to vote! Thank you to all of the participants, voters, and proposers.

OceanDAO Round 8 Announcements and Guidelines

Round 8 Rules

There is $275,000 USD in grant funding available in Round 8. The max request limit per proposal is $17,600 USD.

Please Note: The amount requested is in USD, but the amount paid is in OCEAN token. The conversion rate is the market price on the Proposal Submission Deadline (August 3rd at midnight GMT). This determines how many OCEAN will be awarded if a proposal is voted to receive a grant.

Proposals with 50% or more “Yes” votes receive a grant, in descending order of votes received, until the “Total Round Funding Available” is depleted.

35% of the “Total Round Funding Available” is earmarked for New Projects. Earmarked proposals are eligible for the entire “Total Round Funding Available”; returning (general) grants are eligible for 65%.

As a builder, submit your project proposal on the Round 8 Ocean Port Forum. As a voter, this is where you can see all the expanded proposal details to make an informed vote decision.

Additionally, you can check out the OceanDAO Round 8 Proposal Submission Dashboard to see all the proposals in one place as they are submitted.

The grant proposals from the snapshot ballot that meet these criteria are selected to receive their Funding Amount Requested (in OCEAN) to foster positive value creation for the overall Ocean Protocol ecosystem.

Timing

Proposals submission deadline is August 3rd at midnight GMT
A 2-day proposal due diligence window, ending Aug 5, 2021 at 23:59 GMT
Voting Starts & Add OCEAN to Voting Wallet By Aug 5, 2021 at 23:59 GMT
Voting Ends on Aug 9, 2021 at 12:00 PM GMT

If your Proposal is voted to receive a grant, please submit a Request Invoice to the Ocean Protocol Foundation (OPF) for the OCEAN granted amount.

OceanDAO Ecosystem

Continue to support and track progress on all of the Grant Recipients here!

Much more to come — join our Town Halls to stay up to date. Thank you!

Follow Ocean Protocol on Twitter, Telegram, LinkedIn, Reddit, GitHub & Newsletter for project updates and announcements. And chat directly with other developers on Discord.

OceanDAO Round 8 is live! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


SWN Global

Our CEO Seokgu Yun became a CBDC expert at the Digital Euro Association!

Dear MetaMUI family,

We are so glad to announce that our CEO Seokgu (Phantom) Yun has joined the Digital Euro Association as a CBDC expert.

Seokgu (Phantom) Yun has been a pioneer of digital currency and distributed systems since 1992. He is the CEO and Chief Scientist of SWN Global and has over 25 years of experience in cryptography, algorithms, and security architectures. Phantom has provided security solutions to the DAVOS and G20 Summits, LG, Samsung, and Yahoo, among other multinational corporations. Following his professional achievements and academic research at Korea Advanced Institute of Technology and the University of Southern California, Phantom founded Sovereign Wallet Network (SWN).

He currently leads the MetaMUI CBDC & NFTs Platform, the first identity-based blockchain, and is building the Internet of Sovereign Digital Currencies.

Hopefully, he will contribute to increasing awareness of alternative blockchain technology as a solution to CBDC implementation issues and will thereby add significant value to the DEA's efforts towards a successful Digital Euro implementation.

#DEA #MetaMUI #CBDC #DigitalEuro #Cross_Border_Payments #NFT
https://home.digital-euro-association.de/community/en


IDnow

Balancing regulation and customer experience in Europe


Our latest article gives you a quick overview of the European eKYC space and what solutions offer the best user experience.

Understanding the eKYC mix in Europe

The digital banking revolution in Europe has ushered in a new way for banks and fintechs to acquire customers and create a good end-to-end user experience. As new digital experiences removed customer pain points from sending, saving, and investing money, customers soon became accustomed to frictionless digital banking journeys. In addition to maintaining a frictionless customer experience when it comes to KYC, banks and fintechs must also adhere to and navigate the sometimes disparate national and EU AML regulations while using the latest and most flexible technologies that fit within their tech stack. Fortunately, several eKYC solutions, such as bank identity, AI-based identity, and agent-based identity, offer a superior user experience, adhere to regulation, and utilize future-proof technologies.

eKYC in Europe: what to consider

Approaching KYC in Europe requires a range of considerations. The first core lens, regulation, requires banks and fintechs to focus on both a national and an EU level. While the EU level serves more of a “one size fits all” model, AML requirements at the national level vary. For example, in the UK, banks and fintechs can use automated solutions to perform eKYC, but in Germany, the regulator requires a video-based solution for eKYC. At the EU level, banks and fintechs must adhere to AMLD5 and AMLD6. AMLD5 came into force in the EU in 2018 and required all EU member states to transpose the directive into their own legislation by the 10th of January 2020. This directive expands the scope of AML requirements and aims to streamline the AML procedure across member states.

Another core consideration is if the customer already has an existing banking relationship. Customers applying for their first financial product in a country often require a branch visit. But, customers who have a current banking relationship can avail of quicker onboarding services, such as MobileID in Sweden or ID confirmation via the current banking account number in Spain. 

How can a bank or fintech offer a compliant, superb user experience onboarding?

Understanding the many lenses of the customer or user experience is critical when assessing the right eKYC in digital financial services. A good user experience is characterized by several factors and attributes such as efficiency, visible and relevant help and support, error prevention, and consistency. Within Europe, a number of eKYC options are available that not only adhere to local and regional regulations, but allow banks and fintechs to offer a superior onboarding user experience. 

The most common type of eKYC, which provides a high level of usability, is agent-based authentication. This type of authentication, either run in-house or via specialized eKYC companies, sees the applicant speak to a qualified agent who completes the identity and document check. From a user experience perspective, it offers an incredibly high amount of convenience, as applicants can typically undertake the agent-based video call at a time that is convenient for them. This solution is increasingly popular and relevant as new customer segments feel more comfortable with digital banking and completing tasks in digital channels that were once only available in the physical world before Covid-19. 

One solution with high potential in Europe is using existing bank account identification to fulfill eKYC requirements. The solution leverages an applicant's existing bank account details and identity, backed by open banking technology. To complete the three-step process, a customer presents a valid ID, completes a micro-transfer within their existing online banking platform, and lastly confirms their details. The bank account identity solution is an efficient process, provides little room for error, and provides a high level of consistency. The entire experience can occur within one or two applications.

AI-based identification often combines machine learning, AI, NFC, and biometrics capabilities to provide a highly accurate and quick eKYC solution. This type of identification is often the quickest to complete, and is future-proof as the solution is able to evolve based on millions of data points feeding the algorithms. AI-based identification is a great solution for banks or fintechs already employing AI or data-driven products and experiences because it provides a high level of brand and usability consistency.

Finding the right balance in between UX & banking regulations

The digital financial service space is growing more competitive by the day, and offering customers a seamless, innovative and relevant user experience is quickly becoming a hygiene factor. While designers and product owners are inspired by the user experience of non-regulated tech companies, such as Spotify or Netflix, banks and fintechs must ensure they abide by not only national but regional regulation around AML. AML requirements, specifically KYC and customer identification, have historically been a cumbersome, manual and time-consuming process in physical and digital onboarding, and therefore risk disrupting an otherwise exceptional user experience.

However, a range of eKYC options, including AI-verification, bank account verification, and agent verification, are increasingly being utilised across Europe to fulfill AML requirements while offering seamless onboarding, with innovative solutions that align with forward-thinking brands. These solutions prove to be a win-win-win: for the bank or fintech, for the customer, and for keeping the financial services industry a safer place.

By

Meaghan Johnson
CX & UX Consultant for fintech and digital banking
Connect with Meaghan on LinkedIn


Aergo

Inflation: The Great Impact On Cryptocurrency


Introduction: Inflation And Economics

Inflation Has Been A Major Topic Of Discussion Globally Specifically In The United States, But What Is Inflation?

Our globalized and homogenized economy has witnessed excessive inflation since the Great Recession, which began in 2008, and its impact cannot be overstated. But what is inflation? Inflation, by definition, is the process of sustained increases in costs over a period of time. Inflation occurs when a currency like the dollar or the euro loses value over a specific time-frame; simultaneously, the prices of goods rise as a result of this economic occurrence. Inflation has, over the course of the last forty years, seen substantial debate, with participants arguing its negative elements as much as its positives. Economists throughout the last century have argued that inflation occurs when the supply of money is greater than the demand for money. Inflation at a moderate rate has been argued to boost consumer demand and consumption, under the premise that higher levels of spending are crucial for economic growth and expansion. Even the famous economist John Maynard Keynes argued that inflation helps prevent the Paradox of Thrift, which leads to delayed consumption. But how can inflation be a drag on and a negative for the economy? Let's find out.

Purchasing Power Of The U.S. Dollar: Is This Economic Paradigm Sustainable?

Inflation, accompanied by a sustained increase in prices, leads by design to the erosion of purchasing power. Essentially, if a consumer does not consistently expand their purchasing power through the acquisition of monetary instruments, inflation can cause that purchasing power to plummet over time. An increase in prices has a ubiquitous impact throughout all sectors of the economy: this includes the cost of doing business, borrowing money, mortgages, government bond yields, and much more. If inflation has both positive and negative impacts on the traditional economy, how would this economic phenomenon affect a new, digitized, homogenized economy such as cryptocurrency? Inflation has caused setbacks to economic systems since the dawn of civilization. Can Bitcoin and cryptocurrencies fix inflation, and how does inflation in traditional markets affect cryptocurrencies?
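
To make that erosion concrete, here is a back-of-the-envelope sketch; the $100 amount and 3% annual rate are purely hypothetical illustrations, not claims about any actual currency or period.

// Hypothetical illustration: how much a fixed $100 buys after years of steady inflation.
// Real purchasing power equals the nominal amount divided by the cumulative price level.
using System;

class PurchasingPowerExample
{
    static void Main()
    {
        double nominal = 100.0;        // dollars held in cash
        double annualInflation = 0.03; // assumed 3% inflation per year

        for (int year = 0; year <= 20; year += 5)
        {
            double priceLevel = Math.Pow(1 + annualInflation, year);
            double realValue = nominal / priceLevel;
            Console.WriteLine($"Year {year,2}: $100 buys what ${realValue:F2} bought at the start");
        }
        // At 3% a year, roughly a quarter of the purchasing power is gone after 10 years
        // and over 40% after 20 years.
    }
}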

Bitcoin And Cryptocurrency: An Economic Savior?

Bitcoin VS. The Dollar In Real Time: A Question Of Value And Inflation

It is evident that the US dollar and many fiat currencies have lost substantial value over their respective lifetimes. Can Bitcoin and other cryptocurrencies be the answer to this economic dilemma? In terms of valuation, Bitcoin has consistently proven itself to be a perfect hedge against fiat inflation: though Bitcoin has seen massive, erratic downtrends, its trajectory over time remains upward. Bitcoin, by design, is meant to combat inflation through two mechanisms: its total supply will never exceed 21 million, and every four years the amount of new Bitcoin that can be mined is cut in half. Investing in and holding Bitcoin and other blockchain projects has been an enticing prospect for individuals who recognize the damage that high inflation rates can do to purchasing power. Because of inflation, the fiat in a consumer's bank account consistently depreciates in value over time, whereas many cryptocurrency projects have created deflationary measures to prevent runaway inflation: burning excess token supply has been one answer to this problem. Bitcoin and Ethereum have offered investors an alternative to this loss of purchasing power, but how are these cryptocurrencies less prone to runaway inflation? It is crucial to understand.

A Brief History Of Inflation: The Complexity Of Bitcoin: The Anti-Inflationary Paradigm: Decentralized Control

Cryptocurrency Design: The Coexistence And Equilibrium Of Healthy Inflation And Deflation

History has shown that countries have had their economies ruptured by their own national governments. During the late 1780s, France under King Louis XVI experienced rapid runaway inflation resulting from the monarchy consistently printing money to finance the American Revolution. This was a major factor in how King Louis XVI came face to face with the guillotine, and the aftermath of the French Revolution brought political, economic and social upheaval across the entire continent. In the 21st century, the global community has watched many nations nearly collapse because of government manipulation of their economies; Venezuela and Zimbabwe are the first that come to mind. Bitcoin fixes this.

Bitcoin, by design, is tamper-proof, immutable, decentralized, trustless and free from government manipulation. Bitcoin cannot fall prey to state and federal governments adjusting interest rates or printing more money to pursue political or monetary policies. Bitcoin, like gold, is seen as a digital store of value that allows decentralized peer-to-peer transfers across the globe at extremely low cost. Bitcoin is an incredibly scarce commodity by design: only 21 million will ever be created, and there are almost 8 billion people on the planet in 2021. Scarcity, combined with intrinsic value, leads to upward growth over time. At the same time, it is easy to predict when a given amount of Bitcoin will have been mined; by the year 2140, all the Bitcoin in the world will be mined, whereas gold, although scarce, can keep being discovered at any time. Bitcoin's design is a paradox by nature, combining the positives of inflation and deflation: inflation occurs because more Bitcoin is mined, but as the amount of new Bitcoin mined falls over time, its inflation rate also decreases. However, it is important to recognize that not all cryptocurrencies share Bitcoin's economic paradigm of inflationary and deflationary design.
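To make the arithmetic behind that 21-million cap concrete, here is a minimal sketch in Ruby (an illustration only; the real protocol counts integer satoshis and rounds rewards down, so the true cap is fractionally below 21 million) of how a block reward that halves every 210,000 blocks bounds the total supply:

initial_reward = 50.0      # BTC created per block in the first era
blocks_per_era = 210_000   # blocks between halvings (roughly four years)

total_supply = 0.0
reward = initial_reward
33.times do                # after ~33 halvings the reward has effectively reached zero
  total_supply += reward * blocks_per_era
  reward /= 2.0
end

puts total_supply          # => approximately 21,000,000

Because each era adds half as much new supply as the last, the inflation rate trends toward zero even though new coins keep being issued until around 2140.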

Stablecoins And Alts: Inflationary Impacts On Cryptocurrency

Stablecoins Are Cryptocurrencies That Are Pegged To The U.S. Dollar Or Another Form Of Fiat: How Does This Impact This Category Of Cryptocurrency?

Cryptocurrency has become an ecosystem of ecosystems, driving the expansion of diverse digital economic models and tokenomics that have produced lush and vibrant digitized economic systems. But how does inflation affect stablecoins and "alt-coins"? An increasingly popular category of cryptocurrencies, known as stablecoins, is pegged to the U.S. dollar and is incredibly useful in times of market corrections, collapses and downtrends. But how does being pegged to a fiat currency affect your investment? Since these stablecoins are pegged to fiat currencies, they are at the mercy of the inflation rates those currencies succumb to. In essence, your investment will be directly correlated with the inflation rates of traditional currencies. However, many stablecoins like USDC and DAI have created paradigms that combat this inflation through economic incentives and rewards.

Many stablecoins are designed to counter the recurring onslaught of inflation by offering annual percentage yields, similar to what banks used to offer their clients and customers. At the same time, DeFi technologies have allowed investors to lend their stablecoins at an annual interest rate and enjoy compounding interest. In Nigeria and Zimbabwe specifically, people have been turning to stablecoins for protection against inflation, which has run rampant in both countries over the last ten years; Nigeria's inflation rate has averaged a staggering 12% over that period. These stablecoins also enable users to send digital currency with near-zero transaction fees. Stablecoins allow individuals exposed to runaway inflation in their own nations to protect their savings through yield-bearing stablecoins pegged to the U.S. dollar. Stablecoins, a category of "alt-coins", have created a bastion of economic trust for individuals who seek refuge from the inflationary onslaught.

Altcoins such as AERGO, Cosmos, Algorand and Quant have proven during the latest downtrend that economic incentive and interest in these cryptocurrencies remain vibrant and expansive; in a time of constant downward price spirals across the cryptocurrency markets, these coins have not lost nearly as much of their respective all-time-high valuations as other cryptocurrencies, and that is worth noting. Although alt-coins can be riskier as investment prospects, the yields they can deliver through price discovery during bull markets cannot be overstated. For many retail investors, altcoins are the driving force toward wealth if invested in properly, in projects that have utility and hype behind them.

Conclusion: Bitcoin And Cryptocurrencies: The Great Inflation Hedge

The Great Inflation Range: A Vision Of A Sub One Bitcoin Inflation Rate

Currencies throughout history have succumbed to and witnessed the onslaught that runaway inflation can inflict on nations, empires, monarchies, republics and dynasties. But today we have a technology that specifically combats runaway inflation through a variety of methods, and that currency is Bitcoin. By design, Bitcoin aims to combine the positives of inflation and deflation, preventing governments from manipulating it for political or monetary gain, and it combats runaway inflation through the economic principle of scarcity. Stablecoins have also become a bastion protecting the net worth of hundreds of thousands of people throughout the world, especially in the developing world where economic aid and preservation are needed most. It is evident that cryptocurrencies have become a hedge for investors around the world since Bitcoin's inception in 2009, one year after the grand economic collapse known as the Great Recession.

Disclaimer: Cryptocurrency investing and gambling carry substantial risk; do not invest or gamble more than you can afford to lose! I am not a financial adviser and I am not responsible for any of your trades. I am an investor in Icon Coin, and the information in this article represents my own thoughts and opinions. Always do your own research before investing in anything!

Inflation: The Great Impact On Cryptocurrency was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

The Rails Guide to Securing an API


In this tutorial we are going down a different track than our last Ruby post (bad pun intended). Instead, we're diving into building a very simple API that, of course, we will secure with access tokens minted by our very own Okta OAuth server. We'll make requests to this API via Postman to keep things nice and simple. Now let's get chugging along. (OK, that's the last pun for a bit.)

Prerequisites for this blog post include:

Postman or PostmanCanary
A text editor (I am using VS Code in my examples)
Rails 6
An Okta Developer Account (free forever, to handle your OAuth needs)

Now let’s get started!

Build the API

Open up the terminal and create a brand new Rails application:

rails new okta_protected_api
cd okta_protected_api

Now let’s install the JWT gem. Open your gemfile and add this one line:

gem 'jwt'

Like so:

Now let’s run bundle install in the terminal:

bundle install

Now let’s create a route. For this post, I’ll have an API that returns anime I am watching or I am excited about, so I will call it animes. Add this line to the config/routes.rb.

resources :animes, only: [:index]

And now let’s create a controller.

cd app/controllers
touch animes_controller.rb

Now provide the code for the controller.

class AnimesController < ApplicationController
  def index
    animes = ["Haikyu", "The Great Pretender", "Jujutsu kaisen", "Dr. Stone", "Attack on Titan"]
    render json: { animes: animes }.to_json, status: :ok
  end
end

Now let’s do an arbitrary test to make sure our API works. Run the Rails app and navigate to http://localhost:3000/animes

rails s

You should see:
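(The original post shows a browser screenshot here; given the controller above, the JSON response should look roughly like this:)

{
  "animes": [
    "Haikyu",
    "The Great Pretender",
    "Jujutsu kaisen",
    "Dr. Stone",
    "Attack on Titan"
  ]
}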

Add Security methods

This looks nice! A great list of stuff to watch. Anime aside, let’s secure this API. To do this, we will add some security methods to our application controller. We are going to add a private method that will use the JWT library.

private

def valid_token(token)
  unless token
    return false
  end

  token.gsub!('Bearer ','')
  begin
    keys = []
    JWT.decode(token, nil, true, { algorithms: ['RS256'], jwks: { keys: keys } })
    return true
  rescue JWT::DecodeError
    render json: { errors: ['Not Authenticated'] }, status: :unauthorized
  end
  false
end

Note: The above is a function that will expect an OAuth token from an Okta OAuth server. Our code should look something like this in the application controller:
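(The screenshot from the original post isn't reproduced in this feed; as a sketch, app/controllers/application_controller.rb at this stage holds just the private helper, with require_jwt and the before_action added in the next steps:)

class ApplicationController < ActionController::Base
  private

  # Decodes and verifies an Okta-issued access token.
  def valid_token(token)
    unless token
      return false
    end

    token.gsub!('Bearer ','')
    begin
      keys = [] # filled with the JWKS keys from your Okta org in a later step
      JWT.decode(token, nil, true, { algorithms: ['RS256'], jwks: { keys: keys } })
      return true
    rescue JWT::DecodeError
      render json: { errors: ['Not Authenticated'] }, status: :unauthorized
    end
    false
  end
end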

But wait a second. We are still missing a couple of pieces of key logic! For one, when do we call this valid token method? So let us fix that first. We are going to create a public method that will check HTTP headers for a valid JWT. It should look like so:

def require_jwt
  token = request.headers["HTTP_AUTHORIZATION"]
  if !token
    head :forbidden
  end
  if !valid_token(token)
    head :forbidden
  end
end

Now let’s add that code to our application controller as well.

Now let’s add a before_action to our controller with our new method. Add this code right under ApplicationController:

before_action :require_jwt

Now the finished result of the application controller should look like this as pure code:

class ApplicationController < ActionController::Base
  before_action :require_jwt

  def require_jwt
    token = request.headers["HTTP_AUTHORIZATION"]
    if !token
      head :forbidden
    end
    if !valid_token(token)
      head :forbidden
    end
  end

  private

  def valid_token(token)
    unless token
      return false
    end

    token.gsub!('Bearer ','')
    begin
      keys = []
      JWT.decode(token, nil, true, { algorithms: ['RS256'], jwks: { keys: keys } })
      return true
    rescue JWT::DecodeError
      render json: { errors: ['Not Authenticated'] }, status: :unauthorized
    end
    false
  end
end

Find and Add Your Keys Endpoint

However, one thing is still missing. The keys value is an empty array when it should reflect our public keys from an Okta OAuth server. We can find our JWKS endpoint by going to https://{yourdomain}.okta.com/oauth2/default/v1/keys.

When you go there you should see something like this:

Copy the JSON key-value for “keys” and paste it into the keys array variable in your code.

For example, my end result looks like this:
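(That screenshot isn't included here either. As a hedged sketch, after pasting the "keys" value from your org's /v1/keys response, the array holds one or more JWK hashes shaped like the following; the kid, n, and other values below are placeholders, not real key material:)

keys = [
  {
    "kty" => "RSA",
    "alg" => "RS256",
    "kid" => "YOUR_KEY_ID_FROM_OKTA",
    "use" => "sig",
    "e"   => "AQAB",
    "n"   => "YOUR_PUBLIC_KEY_MODULUS_FROM_OKTA"
  }
]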

Test your API

Now it is time to test if your API is protected. Start up the server.

rails s

Now let’s open http://localhost:3000/animes.

You should now see:

Sweet! We have secured our API, but to be sure let’s test with Postman! We are going to create a new request for our anime API:

Click on Send. You should still get the unauthorized access denied screen.

To fix this, we will need to get an access token from Okta and send it to our API. Create a new request and point at http://localhost:3000/animes as a GET request. Then select Auth, and select Authorization code from the dropdown for Grant Type, and select Send client credentials in the body for Client Authentication. For the header prefix, write Bearer. Replace the authorization URL with the authorization URL and token URL from your Okta Developer Account.

The format of the URLs should be something like below:

Authorization URL: https://{yourdomain}.okta.com/oauth2/default/v1/authorize
Token URL: https://{yourdomain}.okta.com/oauth2/default/v1/token

In Postman, it should all look like the following. (If yours looks different, try to make it look like below with your own values.)

Now let’s get our Client ID and Secret from Okta. Go to Okta Applications and create a new web app. Feel free to name it whatever you want. Make sure you add the Postman URL to the Base URI and the redirect URI. The Base URL should be https://oauth.pstmn.io and the redirect URI should be https://oauth.pstmn.io/v1/callback. It should look something like this:

Click Done.

On the next page, you should see the Client information like so:

Now take your Client ID and Client secret and copy it into Postman. In Postman, click Get New Access Token:

It should redirect you to an Okta login page:

Just sign in as a user and you should see something like this:

Note: Make sure you allow popups from Postman. Otherwise, you might get stuck on a loading screen in Postman.

If everything works according to plan, you should see this dialog:

All that’s left is to send our token to our API:

Try clicking Send, and you should see our data come back:
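(If you prefer the command line to Postman, the equivalent request, with the access token from the previous step pasted in place of the placeholder, is simply:)

curl -H "Authorization: Bearer <your-access-token>" http://localhost:3000/animes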

Oh yeah, it’s working! Now, what if we want to add some extra validation, like only tokens with a certain scope can hit our API? Let’s modify our code on the application controller. We are going to replace some code with this snippet that checks for the profile scope in the token and returns a boolean if it is present in the token or not.

token_payload = JWT.decode(token, nil, true, { algorithms: ['RS256'], jwks: { keys: keys } })
scopes = token_payload[0]["scp"]
return scopes.include? 'profile'

The application controller will now look like this:

class ApplicationController < ActionController::Base
  before_action :require_jwt

  def require_jwt
    token = request.headers["HTTP_AUTHORIZATION"]
    if !token
      head :forbidden
    end
    if !valid_token(token)
      head :forbidden
    end
  end

  private

  def valid_token(token)
    unless token
      return false
    end

    token.gsub!('Bearer ','')
    begin
      keys = []
      token_payload = JWT.decode(token, nil, true, { algorithms: ['RS256'], jwks: { keys: keys } })
      scopes = token_payload[0]["scp"]
      return scopes.include? 'profile'
    rescue JWT::DecodeError
      render json: { errors: ['Not Authenticated'] }, status: :unauthorized
    end
    false
  end
end


Now restart your server.

Get a new token with just OpenID and email. Postman reference request below:

You should now get an unauthorized message when you hit Send again:

Now let’s get a new access token with the profile scope again:

Your request should now be successful:

If you just want to clone and try it, use the GitHub repo.

Happy coding!

Learn More About Ruby on Rails and OAuth

For more Ruby on Rails and Okta articles, check out these posts:

Easy Authentication for Ruby On Rails Login
An Illustrated Guide to OAuth and OpenID Connect
Simple Authentication with Rails and OmniAuth
Is the OAuth 2.0 Implicit Flow Dead?

Make sure to follow us on Twitter, subscribe to our YouTube Channel and check out our Twitch channel so that you never miss any awesome content!


PingTalk

What is Identity and Access Management (IAM)?


Identity and access management (IAM) is a security framework that helps organizations identify a network user and control their responsibilities and access rights, as well as the scenarios under which the privileges are issued or denied. IAM typically refers to authorization and authentication capabilities like:

Single sign-on (SSO) so you can give users the ability to sign on once and with a single set of credentials to gain access to multiple services and resources

Multi-factor authentication (MFA) so you can gain a greater level of assurance of a user’s identity by requiring the user to provide two or more factors as proof of identity

Access management so you can make sure only the right people gain access to the right resources


Urbit

On Christopher Alexander

![](https://media.urbit.org/site/posts/essays/timelesswaybridge.png)

_A well-designed house not only fits its context well but also illuminates the problem of just what the context is, and thereby clarifies the life which it accommodates — Notes on the Synthesis of Form_

There is an unofficial design canon at Urbit. If you were a fly on the wall, you would hear the name "Christopher Alexander" regularly mentioned around the office. Christopher Alexander is an architect, albeit one who isn't taught at most architecture schools. However, his work encompasses far more than what is usually understood as "architecture". Alexander's work is concerned with the production of functional beauty. What's remarkable about his books is that they are not empty philosophical manifestos—Alexander gives specific, actionable guidance on creative problems. His best books are manuals for how to construct forms that are "alive". Alexander threads the needle between highly specific instruction and generalized abstract concepts—his success at this task can be seen in his influence on fields outside of architecture, from computer science to network theory. In his most recent work, Alexander touches upon subjects ranging from religion to math and connects it all into a theory about how the universe functions. But people read him for less lofty reasons: to build a house that is beautiful, design a comprehensive programming language, or figure out how to approach a complex problem. In this post, I'll give an overview of Alexander's work and hear from some Urbit staff about his relevance to software development.

Alexander's first book, _Notes on the Synthesis of Form,_ detailed a new way to approach the design process. Released in 1964, it incorporated networks, set theory, language, and math into a new method for accurately modeling and solving design problems. Although Alexander was an architect, the book was concerned with fundamental issues: how do you solve complex problems with hundreds of interconnected variables? The answer lies in decomposing the problem into subsystems and then "diagramming" those subsystems back into a singular integrated form. The process of decomposing a set into [highly-interconnected subsets](https://en.wikipedia.org/wiki/Community_structure) involves a significant amount of computation—Alexander wrote a computer program for the IBM 7090 mainframe which was described in his 1962 paper _HIDECS 2: A Computer Program for the Hierarchical Decomposition of a Set with an Associated Graph._ But this was only half of his design process; after the creation of subsets, which are the fundamental units of the design problem, the subsets must be diagrammed. This second half of the process involves integrating and synthesizing all of the variables within a single subset into a conceptual diagram. These fundamental diagrams are then integrated and combined _themselves_ into a single diagram which is the final result of the process. The diagram below is Alexander's solution to the problem of designing a village in India with a huge number of variables and constraints.

![](https://media.urbit.org/site/posts/essays/villagediagram.png)

_Notes on the Synthesis of Form_ can be seen as embodying many of the tensions in 1960s America. Computers were seen as a suspicious technology which embodied the institutionalized science that created the nuclear bomb and cold war era tensions.
These suspicions were understandable: computers were massive esoteric machines housed in entire rooms of university and government buildings. They were physically and mentally inaccessible to the average American citizen, while the elites who did use them were held in contempt by the populace. Student protests against being just another cog in "the machine" or "IBM card" were partly right—at this point in the early 60s, many intellectuals were interested in [cybernetics](https://en.wikipedia.org/wiki/Cybernetics:_Or_Control_and_Communication_in_the_Animal_and_the_Machine) and the [mathematical modeling of "human" problems](https://en.wikipedia.org/wiki/Theory_of_Games_and_Economic_Behavior), and Alexander drew from these fields frequently in his book. However, when _Notes on the Synthesis of Form_ came out in 1964, computer culture was shifting towards an alliance with the counterculture—a cultural and technological process that would culminate with these massive pieces of institutional technology becoming "personal computers".

He criticized those who wished to keep math and computers away from human creativity: he saw the possibility of computers yoked to creative human beings. He believed this symbiotic approach to technology was necessary—without a new approach to design, the world would continue to produce the ugliness endemic since the industrial revolution. According to Alexander, a designer could only successfully solve this problem by combining his creativity with the analytic and computational power of machines. Through his books and [presentations](https://www.youtube.com/watch?v=98LdFA-_zfA), Alexander would have a lasting impact on object-oriented programming and [pattern based thinking in software design](https://en.wikipedia.org/wiki/Design_Patterns).

In _Notes on the Synthesis of Form_, Alexander applauds the buildings and towns of many primitive societies but he notes that there is no hope of a return to this "unselfconscious" state. Therefore, modern people must develop a new design process to reach the same end. But what is this end that Alexander is trying to achieve? Alexander states his aim negatively in _Notes on the Synthesis of Form_ as "the absence of misfits between a form and its context". An [apophatic](https://en.wikipedia.org/wiki/Apophasis) statement might be good enough for mystics but, for someone like Alexander who was concerned with precision, a positive statement was necessary. His positive statement wouldn't come until 1979 with the publication of _The Timeless Way of Building_.

In 1975, Alexander would begin publishing his most famous book series which included _The Timeless Way of Building_ and _A Pattern Language_. In these works Alexander took a more holistic approach to creative production and placed less emphasis on design models. This new approach cemented the importance of culture as the substrate that supports a people's inherited "pattern language". Just like a spoken language, a pattern language is developed incrementally over time by people who adapt to the same local environment. Alexander wrote that "a pattern language gives each person who uses it, the power to create an infinite variety of new and unique buildings, just as his ordinary language gives him the power to create an infinite variety of sentences". _A Pattern Language_ is an attempt to create a new language from observing the dialects of buildings around the world.
Alexander recognized that there were large-scale patterns between people and the buildings they found beautiful—a productive tension between specificity and more general patterns common to all human beings. Alexander's pattern language is therefore more general than historically generated languages. The "pattern language" approach is now native to many software developers: seeing solutions as complexes of layered and interconnected systems.

_A Pattern Language_ is the book that is closest to an instruction manual. It is meant to take the prospective builder all the way through the process of designing and constructing a building, neighborhood, or city. Alexander writes about the macro-patterns of cities and the micro-patterns of buildings, from road design to doorknobs. Alexander wanted to enumerate patterns that interlock and produce beauty at different scales while still being as precise and actionable as possible. In _Notes on the Synthesis of Form_, Alexander was concerned with how to best model reality in order to solve a problem, while _A Pattern Language_ focused on the production of a vernacular that would allow people to produce beautiful buildings.

![](https://media.urbit.org/site/posts/essays/patternlanguageexamples.png)

Underlying these different pattern languages is a deep, precise, and objective appreciation of an important quality of human existence—what Alexander calls "the quality without a name". This "quality" is the ultimate goal of Alexander's work. Greater efficiency and instrumentality cannot be ends in themselves. For Alexander, it is a moral imperative to save the world from ugliness and to make human culture match the natural world in beauty, complexity, and aliveness. It is described in _The Timeless Way of Building_ as "the root criterion of life and spirit in a man, a town, a building, or a wilderness. This quality is objective and precise, but it cannot be named". The "quality" cannot be named not because it is a vague or airy concept, but because "the quality is too particular, and words too broad". Nevertheless, Alexander makes an attempt to circumscribe the concept with the words alive, whole, comfortable, free, exact, egoless, eternal.

Alexander would say that a Japanese temple and a German church are both approaching the "quality" from different histories and cultures. They are examples of two vernacular languages developing in parallel. The similarity between two modern cities is an example of how we've been severed from our pattern languages and therefore the "quality"; we are forced to design from barren earth. Global homogeneity is a symptom of disconnect from both temporal and spatial order—it is the death of our inherited languages.

![](https://media.urbit.org/site/posts/essays/timelesswayexamples.png)

_The Nature of Order_ is Alexander's most recent book series and explores the metaphysical implications of Alexander's thought. According to Alexander, the failure of the modern built world is because we have learned to view the world as fundamentally dead. Alexander makes an argument that is close to [hylozoism](https://en.wikipedia.org/wiki/Hylozoism) or [IIT](https://en.wikipedia.org/wiki/Integrated_information_theory): all structures have varying amounts of aliveness (an aspect of the "quality"). Because of the modern worldview that sees all structure as dead we have become blind to this deeper aspect. The basic components which make structures alive are strong and varied centers.
Furthermore there are fifteen fundamental attributes of centers which allow them to network and become alive. These fifteen abstract units are what patterns are composed of—patterns that are then woven into languages (this process of formalizing implementations as higher-level abstractions should be familiar to any computer scientist). _The Nature of Order_ is the most philosophical and [spiritual](https://www.firstthings.com/article/2016/02/making-the-garden) of Alexander's work and ties together his overarching preoccupations: how to solve complex human problems and why the task of making the world beautiful is important.

You might be wondering how Christopher Alexander is relevant to Urbit. That depends on who you ask.

**Galen (~ravmel-ropdyl)**

> I suppose the main thing is that most designers tend to become fixated on the mechanics of what they're building. Most design theory is about how things are made, not what it's like to live inside of them or actually rely on them on a daily basis. Alexander is much more concerned with the first-person experience of inhabiting a place or a building. I guess his approach is just much more empathetic which means it's also more practical. He's concerned with _how people actually live_ and how their environment evolves alongside them.
>
> We live, work, and develop on Urbit. We're often the first people to notice bugs because we are the end users ourselves. Alexander makes the point that old buildings were responsive to feedback because the architect was also the inhabitant. This tight feedback means that the system as a whole is dynamic and alive. We want Urbit to be a house that you live in and make yours, because unlike all the other software you use, your Urbit is actually _yours_.

**Josh (~wolref-podlex)**

> _The Nature of Order_ has had a big influence on my thinking. At first, the idea of all physical structures being alive sounded like something out of a new age bookstore. But not only do I think that Alexander justifies this claim; it started to make intuitive sense when I looked closely at my experience.
>
> When I first read _The Nature of Order_ I was living in an industrial neighborhood of San Francisco—the scale of the buildings and streets felt inhuman and everything was made of metal, glass, and concrete. I chose to leave. I moved into a 100+ year-old flat and the difference was immediate and visceral. The space felt like it was designed from experience rather than _ex nihilo_. The apartment was meant to be lived in, surrounded by public spaces where strangers could safely spend time outdoors together.
>
> If physical objects can have the quality of "aliveness" so too can information objects. Alexander's book is alive. Many of the software projects I've worked on have been cold fish. Keeping this quality in mind is an important part of my decision making. I want Urbit to be a living structure.

**Édouard (~fabled-faster)**

> I enjoy telling people unfamiliar with Urbit and interface design with respect to Urbit, that the object of our work is essentially that of designing an environment "like a piece of paper".
>
> Consider how paper as a constrained environment is one that yet can bear nearly any visual expression imaginable—apart from containing drawings or human-made marks, paper can be used to display photographs, paper can be multiplexed to form lenticular holograms, paper can be folded into a crane, etc.
> The "sheet of paper and its resulting forms" I manifest as a design ideal here is what Alexander might refer to as a "semilattice": a mathematical term he co-opted to describe an ideal structural relationship between elements in a system, as opposed to a "tree", which is a top-down structural system that forces elements into rigid categorical constraints. [In his own words](https://www.patternlanguage.com/archive/cityisnotatree.html):
>
> > _For the human mind, the tree is the easiest vehicle for complex thoughts. But the city is not, cannot and must not be a tree. The city is a receptacle for life. If the receptacle severs the overlap of the strands of life within it, because it is a tree, it will be like a bowl full of razor blades on edge, ready to cut up whatever is entrusted to it. In such a receptacle life will be cut to pieces. If we make cities which are trees, they will cut our life within to pieces._
>
> A city is a good ideal to maintain for the construction of a new internet and notion of personal computing, but I've tended towards my own interpretation of a piece of paper as a semilattice-embodying object due to its accessibility and ability to be imagined by anyone, designer or not. Our object of design is that of a receptacle of life, of deep expression, of multiplexed possibilities. When I imagine an Urbit in the not-so-distant future, I imagine it as a _method_ (to contrast against those who pin software's expression down to "tools" or "functions") by which I can become "whole" again in a digital context, in all my complexity.

Christopher Alexander wanted to collapse the distinction between those who live in buildings and those who design them. Urbit wants to do the same thing with software—in order to not produce ugliness, we should live in what we build. Although we have personal computers now, all of our data is housed in enormous warehouses that make the industrial computers of the 1950s look tiny. What we think of as “our information” actually belongs to someone else and, more often than not, is weaponized against us. Alexander’s moral imperative can be taken and applied to our digital structures. We need to become close with our software again.

Monday, 19. July 2021

Finicity

NavyFederal.org: Navy Federal Puts Consumers in Control of Their Data


Navy Federal Credit Union’s VP of omni-channel strategy and innovation Ryan Fairley discusses APIs, personal data and how open banking benefits members so they can control their personal information, manage how it’s accessed by external parties like third-party apps, and receive more transparency into the data sharing and open banking process.

Navy Federal recently signed its first data sharing agreement with open banking platform provider,  Finicity, a Mastercard® company. This allows Navy Federal customers to securely link their accounts to applications that use Finicity’s open banking platform.

You can read the full post Putting you in control of your personal data with a new API on the Navy Federal site.

The post NavyFederal.org: Navy Federal Puts Consumers in Control of Their Data appeared first on Finicity.


Caribou Digital

Platform Livelihoods: Working, trading, renting, and engaging in digital marketplaces

Marketplaces have been around for centuries, but as they become digital, they have drastically expanded in scope and variety. Photo via Wikimedia: a Javanese market place in Indonesia

Recently at Caribou Digital we have been working on several projects exploring youth employment, entrepreneurship, and digital financial inclusion in a changing global economy — an economy shaken by the pandemic and increasingly shaped by emerging digital technologies.

One of the biggest shifts in the economy has been the rise of digital platforms — digital marketplaces in everything from goods and services to ideas and software, or, in the parlance of the industry, everything from e-commerce and gig work to social media and app stores. Some of the most massive companies in the world are in the business of platforms, which now have literally billions of users. As such, millions of people depend on these platforms to make a living.

This post updates and refines our approach to “platform livelihoods,” an umbrella term (or lens) we use to connect the broad structural transformations associated with the rise of platforms to the experiences, challenges, and opportunities facing workers, entrepreneurs, and the self-employed. This approach intentionally draws on several elemental activities to join and cross-pollinate between distinct debates in the digital development literature about gig work, e-commerce, the sharing economy, and the attention economy, for while each of these new “economies” is different, each has platformization at its core.

The content of this Medium post will also appear on our microsite www.platformlivelihoods.com, which contains a literature review and details of several of our recent and ongoing projects. We shared an earlier version of this concept in a post called “The Platform Livelihoods Framework” in October 2020, written as part of our project with Qhala and with the support of the Mastercard Foundation on the Quality of Youth Digital Livelihoods in Kenya.

A broad look at “platform livelihoods”
In our view, platform livelihoods are the ways people earn a living by working, trading, renting, or engaging in digital marketplaces.

It’s an intentionally broad concept, and many terms work together to create it:

The term “platform” is common in industry and specialized discussions about the digital economy. There are many typologies of platforms available, but at the broadest level, Cusumano, Gawer, and Yoffie (2019) make a useful distinction between (a) innovation platforms — software (like an operating system or suite) that runs other third-party software and (b) marketplace platforms — digital hosts, usually businesses themselves, that connect buyers and sellers of goods, services, or labor in two-sided or multi-sided markets. Social media platforms can be understood either as a special kind of marketplace platform (b) or as (c) a close cousin — monetizing attention to user-created or creative content via advertisements.

Platformization is powerful and increasingly common, but never neutral. With the platformization of almost every kind of market (including labor markets) comes new “logics” — new structures, rules, incentives, and pressures. Platformization makes new winners and losers, scrambles value chains, and alters age-old relationships among labor, capital, and the state. For the digital development discussion, we are more interested in marketplace and social media platforms, as shown in our approach’s emphasis on “digital marketplaces.”

“Livelihoods” draws on a broader discussion in the development literature. As a concept, it is broader, more fluid, and more flexible than “jobs” or even “work.” “Livelihoods” better encompasses part-time, casual, and informal work, and is sensitive to combinatory practices in which several activities and skills are necessary for survival. Rather than emphasizing a specific role or job title, livelihoods involve making a living as a set of strategies to combine assets, capabilities, labor, and social resources.

Here’s how “platforms” and “livelihoods” connect: when combined, they refer to more than gig work. The broad umbrella of “platform livelihoods” suggests that there are many ways to earn a living and many ways in which platforms are becoming involved. To convey this heterogeneity, we’ve settled on a view which focuses on a few elemental livelihood activities — on what people do — rather than on what platforms provide. These are working, trading, renting, and engaging.

Working links most closely to “gig work” and is perhaps the most prominent and hotly debated example of platform livelihoods. Individuals rely on platforms to match their labor to compensation outside the contexts (and any protections) of employer-employee relationships.

Trading maps onto e-commerce and social commerce. Individuals or small enterprises offer products and services to customers via marketplace platforms and/or social media. Almost every microenterprise involves trading — wholesalers, retailers, manufacturers, service providers, and even artisans and artists can sell their products and services via platforms small and large.

Renting is the monetization of assets via a platform. Lending or leasing a tractor or truck by the hour or day, or offering a room of one’s house on Airbnb all fall under this kind of asset utilization, as does lending (renting) money on peer-to-peer loan platforms.

Engaging has, literally, captured the online world’s attention: Instagram influencers, YouTube and TikTok content creators, even affiliate marketers getting commissions for well-placed clickable ads. All of these can be seen as engaged in platform livelihoods, too. The key element distinguishing this platform livelihood activity from the others is that it involves at least three parties besides the platform: the creators, the audience, and the advertisers. Through a remarkable, though often problematic, combination of platform design, scale, algorithmic weighting, and (often) targeting data, the result is an “attention economy,” in which engagers can prosper by being compensated for bringing attention to content (and/or to the ads alongside that content).

This view of “many livelihoods” touches almost every sector of the economy. Indeed, many familiar roles are actually combinations of these activities. Take, for example, the ride-hailing driver who owns their car, drives nine hours a day via two different platforms, and rents their vehicle to another driver in the evenings. Their efforts are better understood as a combination of renting (assets) and working (labor) rather than as the application of labor alone. In this view, it is best to avoid assuming that people are exclusively renters or traders or engagers; instead, many will combine these practices in distinct ways to pursue their distinct livelihoods.

We situate these activities “in” instead of “by,” “via,” or “for” platform marketplaces. These platforms aren’t merely tools that workers and sellers can use, but structures they must navigate. Nor are these individuals engaged in activities “for” the platforms as employees or agents; individuals and small firms operate without employer-employee contracts. For now, “in” best describes how individuals pursue livelihoods within the contexts (affordances and constraints) at least partially established by platforms.

Advantages of the platform livelihoods lens

The lens is broad, making it easier to see connections between different livelihoods

The breadth of “working, trading, renting, and engaging in digital marketplaces” is part of what makes the concept of platform livelihoods useful.

There are robust and ongoing policy conversations about platform workers on labor platforms (summarized, for example, in this recent ILO report) and about platform sellers and e-commerce (summarized, for example, in this recent UNCTAD report). But the literature and policy conversations about platform work and platform sales rarely seem connected, despite commonalities and overlaps in how platform logics change markets, and how power and prosperity are allocated within them. To be clear, we don’t want to challenge or replace the frames employed by these existing inquiries. Instead, we aim to complement and put them in dialogue, drawing new connections between them.

Meanwhile, economic developments, such as the mega-merger of Gojek and Tokopedia in Indonesia, mean that in some cases, workers, sellers, sharers, and creators are bound up together in the same platform ecosystems. The “superplatform” or “super app” presents a different environment for livelihoods, collapsing several economic sectors onto a singular interface with interconnected data about users accruing to the platform itself, and may require different policy responses than distinct, sector-specific platforms.

The lens allows assessments of people’s experiences.

Together with Qhala, we conducted a review of 75 primary research studies of platform workers, traders, renters, and engagers from throughout the majority world/emerging markets. We drew on frameworks from the ILO, Richard Heeks, and Julie Zollmann and Catherine Wanjala, coding studies to identify “twelve elements — the kinds of experiences that individuals share and value when discussing their livelihoods with friends, family, and even the occasional researcher. They are a mix of economic, subjective, and broader human development experiences”.

These elements can prompt attention not only to earnings and access, but also to how the experience of platform livelihoods intersects with human development goals such as gender inclusion, social protection, and opportunities for advancement and fulfillment.

Figure 1: Twelve experience elements of platform livelihoods

The lens helps count and compare roles and vocations

When compiling the literature review, we offered “a landscape of nine illustrative types of platform livelihood,” noting that “these are roles that individuals or small enterprises can fill, rather than ‘business models’ or the names of specific platforms.” Our argument remains that these nine types, while not comprehensive, “represent enough of the diversity in platform livelihoods to make two key distinctions. These types mix local and global (digital only) markets,” and “some of these roles are for individuals seeking work and offering their labor. Some of these roles are for small enterprises and even small farms, looking for new sales channels and new ways to connect with markets.”

But this map is only one of many we could make to list, identify, and contrast different kinds of platform livelihoods. Notice the switch in the illustrative map from elemental activities (working, trading, renting, engaging, etc.) to more specific roles, industry sectors, and types. These labels better approximate how we might count and differentiate among those with and those without platform livelihoods.

Figure 2: Nine illustrative platform livelihood types

Further complicating this enumeration are the ways in which people fractionally mix livelihood activities. If a person works as a teacher during the week and a freelancer via a platform on the weekend, is that person a platform worker? If a restaurant relies on delivery platforms for 30% of its sales, is that business a platform trader? What if the same business had to stop in-person dining during the pandemic, and was getting 90% of sales through delivery platforms? What if that business uses three different delivery platforms, deriving fractional earnings from each? As we detailed in our literature review, several studies point to how platforms may play a partial rather than exclusive role in many people’s livelihoods.

Similarly, sometimes seemingly individual accounts are actually “hidden MSEs” or otherwise represent hidden hierarchies. Freelancers can maintain a single profile with high ratings, and then subcontract work. Airbnb hosts may employ a housekeeper. Small online retail shops may have an assistant or two. Are these other workers, each more or less invisible to the buyers and the platforms themselves, pursuing platform livelihoods as well?

Indeed, given how broad these activities are, and how they govern the variety of products and services digital giants like Facebook and Google provide, it can be difficult not to classify anything done for work digitally as a platform livelihood. For example, people can learn their crafts via watching YouTube videos or run an entire business on a Google suite, but these activities don’t involve directly matching buyers and sellers. We think it is best to restrict platform livelihoods to the activities that directly involve exchanges of value — transactions — between buyers and sellers. That is likely how the platforms see their role — they don’t always expect a 100% share of anyone’s livelihood, but they do seek to provide the infrastructure to facilitate as many transactions as they can, to generate revenue through advertisements, transaction fees, subscriptions, etc.

That said, we acknowledge that there are other elements of people’s livelihoods that aren’t easily understood as transactions at all — for example, cultivating family support and connections, gifting instead of exchanging, supporting instead of lending — that are also core parts of social life, and are increasingly also mediated by platforms, whether via mobile money like MPESA, social sites like Facebook, or purpose-built sites like GoFundMe. (For recent work in this space see Sibel Kusimba’s Reimagining Money). We would be interested in exploring and discussing whether these behaviors might be better considered a fifth elemental livelihood activity.

Regardless, this discussion of specific vocations or livelihood types underscores how any efforts to enumerate those involved in platform livelihoods will face definitional challenges — tough decisions of where to draw the lines between platform livelihoods, other digital livelihoods, and livelihoods more broadly. On the other hand, this lens helps bridge the arbitrary gap between an individual worker and a small enterprise, and helps us be sensitive to how platform livelihoods often coexist with fractional and/or non-digital ways of earning a living.

Our assessment in late 2020 was that “the early research and policy literature has been concentrated in platform work, especially ride-hailing, freelancing, and microwork, but in the longer run, platform sales [trading] (whether via marketplaces, social commerce, or search and discovery) may end up altering the livelihoods of a greater number of people around the world.” We’re excited to test this hypothesis in Indonesia. With the support of the Bill and Melinda Gates Foundation, DFS Lab, and Rise Indonesia, we will be fielding a study in 2021–22 that, among other things, will estimate the proportion of people nationwide involved with platform livelihoods, capturing platform working, trading, renting, and engaging in the same study, perhaps for the first time.

The lens allows for deeper dives into new digital practices

A recent essay by our colleague Bryan Pon helpfully contrasts classic platform e-commerce, where platform hosts set the rules on dedicated e-commerce sites, and bottom-up, appropriated social commerce, where individuals use more flexible social media platforms to sell goods and services. (There is significant attention to this segment among the financial inclusion community, for example, by Women’s World Banking and by CGAP, which calls these practices informal e-commerce.) We see social commerce as a channel or a modality: a kind of platform livelihood activity that can be applied to working (job-seeking), trading (selling), renting (asset-sharing), and, in the case of social media influencers, engaging.

Don’t forget maps, searchers, reviews, and ads. Paid for the click, rewarded for the link, punished by a bad review — people are cultivating their skills to navigate the platformed dynamics of algorithms. Engagers get paid for clicks, traders and renters promote their brand/business identity, “and as we noted in the previous post, there is often (still) search and discovery. Millions of small businesses pay for advertising on Facebook and Google and other social media platforms. Many work to refine how their businesses appear on maps, review sites, or other digital databases and apps. These activities, too, support platform livelihoods.”

This wide range of activities points to the complexity and content dependency of digital literacies as craft. For example, our colleague Jessica Osborn shared a great illustration of this in a debrief of a 2019 Caribou Digital Live Learning visit in China; the team met with several farmers and manufacturers who have had to hire social media managers to keep up with the demands of social commerce.

Figure 3: Farmers in China describing their social media and e-commerce initiatives

But for those without the wherewithal to hire social media specialists, the “digital literacies” or platform practices demanded of a small business may be as diverse as the platform landscape itself — the success of even a tiny microenterprise may increasingly depend on a mix of placing ads and appearing in searches, on conducting authentic social commerce, and on navigating formal digital marketplaces. To link back to the “livelihoods” perspective, this represents a set of new skills and strategies that gig workers, the self-employed, and entrepreneurs must all acquire, refine, and deploy in ways that earn them a living in platformed markets.

Conclusion

There are many organizations and researchers involved in studying and building policy around the activities this post calls “platform livelihoods.” The community of practice is broad, as are its perspectives and frames.

Our goal with the lens of “platform livelihoods” isn’t to refute or diminish the existing frames and debates about gig work, e-commerce, or any other element of an increasingly platformed economy, but rather to bring together disparate parts of this arena, to spark conversations and share knowledge. Therefore, we hope the concept is useful to you.

Please reach out to us with questions and suggestions for the next revision, and watch for new studies on youth livelihoods, gender, platform agriculture in Kenya, and the quantitative Indonesia research to be posted to our website soon. All the materials in this lens are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. We ask only that you provide attribution if this turns out to be useful to your research.

To sum up, a variety of platform types (marketplaces for labor, goods and services, assets and attention, and platforms for innovation) have emerged in every sector of the economy (freelancing to farming) to support a variety of elemental yet combinatory livelihood practices (working, trading, renting, and engaging).

Ultimately, few analog livelihoods may remain untouched by digitalization and platformization. The world is in the midst of a shift in how markets function, with implications beyond the digital development discussion. It is our hope that this lens, and the contrasts and commonalities it may reveal, will inform conversations and efforts to make the digital economy more inclusive.

Platform Livelihoods: Working, trading, renting, and engaging in digital marketplaces was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

How to Make a Twitter Bot in Python using Tweepy

A reply-to-mention Twitter bot that replies with an auto-generated image to every tweet that mentions it along with a specific keyword.

KuppingerCole

Sep 14, 2021: CyberArk Virtual Roundtable

The world has changed – a rapidly accelerated and sustained move to remote/hybrid work, an increase in cloud workloads and infrastructure, and DevOps tools have driven digital transformation as businesses modernize and enhance productivity. This has resulted in an explosion of identities – and these identities are now the attack surface.

IBM Blockchain

Brewing a more traceable and sustainable beer industry with blockchain


Whether you’re at Oktoberfest, a craft brewery or your local pub, the beer in your glass typically begins with four key ingredients: grain, water, yeast and hops. The variety comes from mixing those ingredients in different proportions and adding others to create distinct flavors. The beer itself tells a story of how those ingredients were […]

The post Brewing a more traceable and sustainable beer industry with blockchain appeared first on Blockchain Pulse: IBM Blockchain Blog.


Elliptic

REvil Revealed - Tracking a Ransomware Negotiation and Payment

What actually happens during a ransomware attack? We follow a real case involving the REvil ransomware - from initial infection and negotiation, through to the cryptocurrency payment and laundering of the funds.



KuppingerCole

Microsoft’s Threat Intelligence Play is Good News for Customers in Fight Against Ransomware


by Paul Fisher

This week, Microsoft made official its agreement to acquire Threat Intelligence vendor RiskIQ in a deal rumoured to be worth around $500m. It is not an unusual event; Microsoft has absorbed five businesses already in 2021, and usually it is to acquire a discrete technology it deems useful or sometimes to push into emerging markets.

This latest acquisition falls into both camps. While Threat Intelligence is not an emerging technology in the normal sense, it has acquired a new importance in the last 18 months as global cyber-attacks reached a new level of intensity. The fact that this has run in parallel with the global pandemic is no coincidence – criminal gangs thrive on vulnerability. In this case they have also thrived on the sudden switch to remote working and the opening up of millions more internet-facing endpoints.

Why make the acquisition now?

As the world’s largest supplier of business software, Microsoft will be fully aware of the threat to its customers from this current wave of ransomware; more sophisticated and aggressive than previously recorded. Governments around the globe have now listed ransomware as a systemic threat to national security and economic wellbeing. It is deadly serious.

Ransomware attacks have been successful in part because of the openness of today’s IT infrastructures. But business needs that openness, along with the complexity of applications, architectures, IoT connectors, and digital identities running across multiple clouds and hybrid cloud environments. This is a positive development: it means that organizations are embracing leading-edge technologies to support collaborative working, automation, and innovation.

Microsoft has been working to ensure that its platforms are compatible with the new openness and remote working, and it takes security seriously. It makes sense to acquire a Threat Intelligence capability, a move that is as commercially driven as it is about adding capability that will help its customer base defend against ransomware and other forms of cyber-attack.

What is RiskIQ?

RiskIQ is among several leading Threat Intelligence companies that have emerged in the last decade; other well-known names include FireEye, InfoBlox, LookingGlass, IntSights and Recorded Future. All offer analysis and interpretation of global threat traffic and enable advance warning of possible attacks, adding an extra layer of cyber defence for organizations. Key to this is the tracking of internet-facing endpoints targeted by cyber criminals.

RiskIQ was a good fit in that it already had close ties with Microsoft through connectors and support for Azure Sentinel SIEM – it is also relatively cheap if the $500m purchase price is accurate. The RiskIQ mission statement of “safely bringing people together, connecting people across the world” might sound a little trite but it’s a laudable ambition and chimes well with Microsoft’s own goals. 

Microsoft already had an endpoint protection play with 365 Defender, but RiskIQ offers much more than monitoring activity at endpoints and applications. RiskIQ helps customers discover and assess the security of their entire enterprise attack surface through connections in the cloud, AWS, other clouds, on-premises, and from the supply chain. Its Internet Intelligence Graph deploys “virtual users" that simulate human-web interactions to map relationships between internet-exposed infrastructure worldwide.

What does the acquisition mean for Microsoft customers?

Several Microsoft customers will already use a Threat Intelligence platform from one of the existing vendors, but for those that do not, and particularly for SMBs, the integration of RiskIQ technology and intelligence gathering into O365, Azure, Active Directory, Teams etc. is something to be welcomed. Microsoft has a good record on integration, and this should be quickly offered as part of the Microsoft universe to all customer levels. For Microsoft it means a great new marketing tool to convince customers that it has their back in the fight against ransomware, now and in the future. Stopping ransomware by plotting its path and patterns before it can hit home is a great defensive move – once ransomware is in and activated, it’s pretty much game over, as we are seeing across the world.

Will it be a success?

In short, yes. Integrating RiskIQ across the Microsoft stack will add undoubted cyber security value for its customer base. Even better for Microsoft, it will allow the company to offer a robust and trusted integrated Threat Intelligence platform in competition with RiskIQ’s former rivals. More worrying for those rivals, the Redmond software giant can easily afford to offer Threat Intelligence as an integral part of its overall security marketing strategy for O365 and beyond, in the name of keeping the Internet safe for business. Quite an option to have.

For more on this topic:

Leadership Compass: Unified Endpoint Management
Executive View: Sentinel Security Platform
Market Compass: Endpoint Protection Detection and Response
Insight: Cloud Security
Blog: Microsoft Acquires Nuance to Drive AI-Based Workplace Innovation
Blog: Microsoft Adding New Capabilities to Azure Active Directory

 


Okta

Discovering macOS Settings with PlistWatch


In the Apple operating systems macOS and iOS, software applications store essential configuration data in information property list (plist) files. The plist files are managed by the operating system. Although macOS does have utilities for reading and writing plist files, they are low level. Working with plist files is a manual and time-consuming process.

There is, however, a little-known tool called PlistWatch that enables changes to plist files to be monitored in real time. The tool is written in Go, so some Go-specific knowledge is required to build and run it.

Today, we are going to set up and use PlistWatch. We will also discover how it works. Let’s get started!

Prerequisites to Building and Installing a Go Application

First things first, you can only follow this article on a Mac computer, as plist files are macOS-specific.

If you don’t already have Go installed on your computer, you will need to download and install Go.

What Is a Property List File?

Applications running on macOS and iOS devices have one or more plist files. They take the form of a dictionary of key-value pairs stored as an XML document. The files are usually created in the Xcode IDE as part of application development. Some properties are required by the operating system. Xcode automatically sets important properties. Applications that need to access system resources, such as location services, will need to define a property that requests the required level of access. System tools can change property values to change the behavior of applications. For more information see “About Information Property List Files.”

Property list files can be found in the directories /Library/Preferences and ~/Library/Preferences. The file names take the form of a reversed domain name, an application name, and a .plist file extension. Examples are: com.apple.dock.plist and com.google.Chrome.plist.

A typical macOS system has several hundred plist files each containing many key-value pairs. Finding a particular property for a particular application can be time consuming.

Using macOS Commands to Manage Property List Files

The main command line tool for managing plist entries is defaults. Try it now:

defaults read

You will see a large amount of output. This is the combined content of all of the plist files for all installed applications.

If you know what you are looking for you can specify a domain and optionally a key within the domain. For example, to find the orientation of the Dock, type:

defaults read com.apple.dock orientation

There is also the command line tool /usr/libexec/PlistBuddy. This allows the management of individual plist files. The user interface is not at all intuitive.

The Xcode IDE has a plist editor. Open the plist file /Library/Preferences/com.apple.dock.plist in Xcode and use the editor to view and change values.

How to Install and Run PlistWatch

PlistWatch is a tool, written in Go, that makes it much easier to manage plist files. While running, it monitors the plist files for changes. Whenever an entry changes, the command that caused the change is displayed. PlistWatch is available as source code on GitHub.

First of all, clone the GitHub repository:

git clone https://github.com/catilac/plistwatch.git
cd plistwatch

Go is a compiled language. The next step is to build an executable binary called plistwatch from the source code:

go build

Next, move the binary to a directory on the PATH:

mv plistwatch /usr/local/bin

Now, run the tool. You will see no output until a plist entry gets changed.

plistwatch

Let’s change the position of the Dock to see some output. Select Apple > System Preferences > Dock & Menu Bar.

Now make some changes, such as moving the Dock and moving it back by clicking the Position on screen options. You should see the changes being reported by plistwatch. You may also see other events being reported.

The output is a sequence of defaults commands that were executed by the UI.

defaults write "com.apple.dock" "orientation" 'left'
defaults write "com.apple.dock" "orientation" 'bottom'

The system preferences actually used the defaults command to change the orientation property in com.apple.dock.plist. Many changes to properties do not take effect until the application is restarted. The system preferences UI actually restarted the Dock when its orientation was changed.

NOTE: The single and double quotes in the output are not actually required and they can be ignored.

You can end the plistwatch program by typing control+C.

How Does PlistWatch Work?

The main.go code is quite simple. Every second it executes the command defaults read and captures the output. It then decodes the output into a map structure, compares the previous map structure with the current one, and outputs any differences. Let’s drill down into the decoding and map comparison operations.
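To make that loop concrete, here is a minimal, self-contained sketch of the polling approach in Go. It is not the plistwatch source itself: it only detects that the combined defaults read output has changed, whereas the real tool decodes the output into maps and prints the equivalent defaults commands.

package main

import (
    "fmt"
    "log"
    "os/exec"
    "time"
)

func main() {
    var previous string

    // Poll once per second, mirroring the loop described above.
    for range time.Tick(1 * time.Second) {
        // Capture the combined preferences of every application.
        out, err := exec.Command("defaults", "read").Output()
        if err != nil {
            log.Fatal(err)
        }

        current := string(out)
        if previous != "" && current != previous {
            // plistwatch diffs the decoded maps here; this sketch only notes a change.
            fmt.Println("preferences changed at", time.Now().Format(time.RFC3339))
        }
        previous = current
    }
}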

The output from the defaults read command is not in any standard format, such as JSON or XML. It is a nested set of key = value pairs enclosed in {}. There is a lot of code in the PlistWatch project to handle the decoding and encoding of the output. It basically turns the output into a map, keyed by the domain, which is the plist file name without the .plist extension. The values are key-value pairs, lists and maps.

The comparison is performed in a Diff() function in diff.go. It takes two parameters, the current plist map, and the previous plist map.

It first iterates over the keys of the current map, which are the domain names.

Then, it checks whether the domain is a key in the previous map. If that key isn’t present, it means that the plist has been added, and it prints out a defaults write for the domain.

Then, it extracts the values for the domain from the current and previous maps. The values are themselves maps of key-value pairs.

Then, it iterates over the keys of the previous value map. If a key isn’t present in the current value map, it means that the key has been deleted, and it prints out a defaults delete for the domain and key.

Then, it iterates over the keys and values of the current value map. The value can be a string, an integer, a list, or a map. It determines the value type by reflection and does a type-specific comparison of the value. If the values differ, it prints out a defaults write for the key and changed value.

Finally, it iterates over the keys of the previous map, which are the domain names. If the key isn’t in the current map then it means that the plist has been deleted and it outputs a defaults delete for the domain.
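As a rough illustration of that flow (and not the project’s actual diff.go), the sketch below assumes each plist has already been decoded into a map of key-value pairs keyed by domain, and prints simplified defaults commands for additions, changes, and deletions.

package main

import (
    "fmt"
    "reflect"
)

// plists maps a domain (the plist file name without .plist) to its key-value pairs.
type plists map[string]map[string]interface{}

func diff(current, previous plists) {
    for domain, currValues := range current {
        prevValues, ok := previous[domain]
        if !ok {
            // Domain only exists in the current snapshot: a new plist.
            fmt.Printf("defaults write %s ...\n", domain)
            continue
        }
        // Keys present before but missing now have been deleted.
        for key := range prevValues {
            if _, exists := currValues[key]; !exists {
                fmt.Printf("defaults delete %s %s\n", domain, key)
            }
        }
        // New or changed values produce a defaults write.
        for key, value := range currValues {
            if prev, exists := prevValues[key]; !exists || !reflect.DeepEqual(prev, value) {
                fmt.Printf("defaults write %s %s %v\n", domain, key, value)
            }
        }
    }
    // Domains present before but missing now mean the plist was deleted.
    for domain := range previous {
        if _, exists := current[domain]; !exists {
            fmt.Printf("defaults delete %s\n", domain)
        }
    }
}

func main() {
    previous := plists{"com.apple.dock": {"orientation": "bottom"}}
    current := plists{"com.apple.dock": {"orientation": "left"}}
    diff(current, previous)
    // Prints: defaults write com.apple.dock orientation left
}

The real Diff() also performs type-specific comparisons for strings, integers, lists, and maps, as described above.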

Conclusion

macOS applications use plist files for configuration and other properties such as strings to be displayed in user interfaces. Plist files store information in a dictionary structure. The keys are strings and the values can be strings, integers, lists, and dictionaries. There are command line tools for managing plist files, but they are low level and not easy to use.

PlistWatch is an application that looks for changes in all of the plist files every second. When a change is detected it prints out the defaults command that would reproduce the change. Note that plist files can be changed by various applications, so the displayed defaults command may not have been used to make the change. PlistWatch provides a user-friendly means of monitoring changes in plist files. It also makes it easy to determine which properties are changed by operations.

One use case for using PlistWatch would be to capture the property changes required to customize the screen layout. The output can be used to create a script that applies the changes to another computer.

If you liked this post, you might enjoy these others on Apple:
Build an iOS App with Secure Authentication in 20 Minutes
Ionic + Sign in with Apple and Google
What Apple’s App Tracking Changes Mean for Developers

As always, if you have any questions please comment below. Never miss out on any of our awesome content by following us on Twitter and subscribing to our channel on YouTube!

Sunday, 18. July 2021

Identosphere Identity Highlights

Identosphere #41 • Smart Property • Privacy War @ W3C • Verifiable Open Badges

We curate the latest news and updates in decentralized identity, with upcoming events, standards talk, use-cases and business strategy.
Welcome and Thanks to our Patrons!

Support Identosphere via Patreon — Get Exclusive Content!!

Read previous issues and Subscribe : https://newsletter.identosphere.net

Submissions \ Inquiries : newsletter at identosphere dot net

Coming Up this WEEK! 1/2 Day IIW Virtual Event: User-Experience • 6/22

A European Framework for Decentralized Digital Identity Wallets • 7/15

1/2 Day IIW Virtual Event: The Business of SSI • 8/4

Turing trustworthy digital identity conference • 9/13

IIW 33 • 10/12-14

Big Picture Identity gets a new look: examining the W3C Verifiable Credentials standard Information Age

David Chadwick, product director at Crossword Cybersecurity, discusses what the W3C Verifiable Credentials standard   

Bosch: Digital identity – enabling secure collaboration with blockchain technology

anyone could issue an identity, access authorization or even a certificate to anyone. For example, “I might give my neighbor a permit to access my garage or certify that he can use my car,” Nik says. The neighbor would then store this evidence in his e-wallet.

Affinidi Role of Public Key Cryptography in Self-Sovereign Identity Indicio: The decentralized identity revolution is powered by machine-readable governance

we have a way to enable clear, flexible, governance right now through Agents and machine-readable governance. Agents are the software that allow data to be shared and authenticated by consent between parties, and this makes them the most important governance entities in decentralized identity. 

How to Gain Control Over Our Digital Selves NGI Community

Digital identity must switch from common silos to a set of containers, where identity transforms into identifiers that are no longer centralised and linked to one another, and over which the citizen can always be not only the controller, but the ruler.

Podcasts Looking Ahead Two Insiders Look at the Future of the Identity Industry Ping Identity

The days of centralized control are fading.

"Whatever we have going forward can’t be centralized. It has to be distributed and out where it's being used."

The Future of Stuff Podcast: Ep.1 What is Smart Property? with Vinay Gupta

some widely different use cases (wedding planning?), the relationship with the sharing economy, the distribution of power in such a system, some privacy and security concerns, and the prospect of programming social interaction and value flows around goods and services.

Frontier Talk #4 | Doc Searls - Building the Future of Identity

the two disciplines that really matter are the individual and the company; how do we work for both of those at the same time

Company Updates Evernym July 2021 Release Notes

The organization of the Verity code repository is undergoing significant changes, as we are moving to a multi-node Kubernetes deployment using Helm charts and Terraform. This will make it easier for us to scale and manage Verity, reducing downtime during maintenance and upgrades. But it will also make it easier for interested developers to run the Verity code outside of AWS.

Spherity leads the GAIA-X Identity and Trust group to accelerate the Energy Transition in Europe

GAIA-X is creating a proposal for the next generation of a European data infrastructure: a secure, federated system that meets the highest standards of digital sovereignty while promoting innovation.

Indicio.Tech Node Operator Program

Join a group of innovative companies building the future of decentralized identity.

Learn what it takes to run a decentralized network. Learning is doing.

Approved Node Operators are eligible to receive one year of complimentary business level service support while running your node with full best practice checks, your own private Slack channel for Q&A, specialized guidance based on your individual needs, and other exclusive business level support benefits.

Litentry & Friends Episode 3: Litentry & Integritee

the idea is that Litentry will build a TEE sidechain, a so-called trusted execution environment sidechain, using Integritee's framework to store identity-linking information in a privacy-preserving and secure manner

Microsoft Identity Issue and accept verifiable credentials using Azure Active Directory | Azure Friday

Sydney Morton joins Scott Hanselman to show how verifiable credentials enable you to own and prove who you are in the digital world.

Microsoft Is Using Bitcoin to Help Build a Decentralized Internet Reason

Daniel Buchner, a self-described libertarian, is the project lead for ION. Reason sat down with him at the Bitcoin 2021 conference in Miami to talk about how this technology could usher in a new era of hyper-individualized online identity and privacy.

Standards Work The Future of Open Badges is Verifiable Kerri Lemoie

When Open Badges was kicking off ten years ago, it was conceived to be a recognition infrastructure for skills attained and achievements accomplished anywhere at any time. Badges could assert skills learned informally, formally, really in any aspect of and through life. 

Someone Tweeted a pointer to this - the Design Goals of DID in the W3C Spec.

Decentralized Identifiers are a component of larger systems, such as the Verifiable Credentials ecosystem [VC-DATA-MODEL], which influenced the design goals for this specification. The design goals for Decentralized Identifiers are summarized here.

The Privacy War inside the W3C Protocol.com

new entrants from the ad-tech industry and elsewhere aren't just trying to derail standards that could hurt their businesses; they're proposing new ones that could actually enshrine tracking under the guise of privacy. "Fortunately in a forum like the W3C, folks are smart enough to get the distinction," Soltani said. "Unfortunately, policymakers won't."

Public Sector The future of digital identity in Australia

We now have a rare opportunity to break through the challenges of fidelity and provenance that plague cyberspace and, with a gentle course correction, deal with a bigger and yet more manageable problem than digital identity, namely the quality and reliability of data itself.

Towards a universal, self-sovereign and privacy preserving digital identity for the public sector

The EU-funded IMPULSE focuses on building a decentralised Self-Sovereign Identity (SSI) model by combining two of the most promising technologies available today, Artificial Intelligence and blockchain networks, with the aim of addressing the limitations of existing electronic identification systems in the public sector.

RaonSecure & RaonWhiteHat to continue paving the way for widening adoption of Decentralized Identity-based services

The DID-based technology developed by RaonSecure and RaonWhiteHat has already been deployed in public institutions. For example, the technology is currently being leveraged in the authentication process of the MMA’s online service, which has been open since January 2020. In this regard, a total of 283,000 DIDs have been issued as of May 2021 and the DID-based simple authentication service has been used over 2.25 million times. The efficiency of the service has allowed the corresponding public institutions to drastically reduce the cost of their authentication processes.

SSI In Africa #180 The Blockchain Potential in Somalia: Education + Adoption-Blockchain in Africa

”I think the verification of identity is the biggest win and the lowest hanging fruit." Abbas Gassem

Using Decentralized Digital IDs and Blockchain to Help Millions in Africa Get Identified

We have a Layer 2 network built on top of the Algorand blockchain called the Flex Network (FN). Every issuing authority and verifier organization will be required to either run or use a SaaS API, to interact with a FN node. The node acts as a trustless way to create, update, and fetch information about digital identifiers (DIDs) from the blockchain. This enables verifiable credentials to be independently issued and verified against the public keys associated with the issuer and holder DIDs.

Healthcare Blockchain, Interoperability, and Self-Sovereign Identity: Trust Me, It’s My Data

This lack of management is critical as studies have shown that “patients only visited their primary care physicians 54.6% of the time when seeking care. Where do they go for that other 45.4%? Patients receive care from other organizations where that provider may not have access to the patient’s medical records.”

COVID-19 Covid health passes revived for bigger role in keeping economies open DigiMe

Venues such as restaurants, bars and pubs are to be encouraged to use Covid health passports as a key part of moves to not just open up the economy for a short burst, but enable it to stay open longer term.

SITA: Lessons from the Lab

Using the Aruba Health App, visitors to the island who provided the required health tests to the Aruba government were issued a unique trusted traveler credential using blockchain technology. This credential was then verified by hotels, restaurants, and entertainment venues through the unique QR code on a visitor's mobile device without sharing any private data. The digital credential also enabled the Aruba government to restrict visitors from leaving their hotel rooms until they had received a negative PCR test result.

Business Case SSI Business Models and Go-to-Market Stepan Gershuni

Historic development of online businesses was tightly associated with selling user data, traffic or ads. Marc Andreessen notes that this is not due to some intrinsic superiority of such models — but rather just due to the fact that there was no secure and reliable online payment method during the first 10–15 years of the internet era.

How Verifiable Credentials can Create New SaaS and Recurring Revenue Models Mathieu Glaude

There’s an opportunity to begin issuing credentials that can be reused within a particular ecosystem, and at various touch-points during the consumer lifecycle. Credentials unlock serious value for subscription.

MyData Governance as a game changer aNewGovernance

we put a great emphasis on our international dimension: we believe we are contributing to building a new paradigm that should be shared beyond the European borders. The objective is to encourage human-centric fair use of data (primary, secondary…), moving away from both All-State and Platform-centric (Winner takes all) models.

Walking the Talk of Respectful Marketing Me2BAllliance

Our challenges are in the myriad of status quo marketing tools and techniques that don’t align with the new model of respectful marketing. Can we use analytics to be more effective yet still respect our visitors? Can we ethically use tools provided by big tech companies if their principles don’t seem to align with ours? While this is challenging, it is also exciting to help pioneer this new inevitable model.

Research Papers  Decentralised digital identity: what is it, and what does it mean for marginalised populations?

Margie Cheesman, digital anthropologist, and doctoral candidate at the Oxford Internet Institute, explores the subject of decentralised digital identity, what it is and what it means for marginalised populations.

An Accessible Interface Layer for Self-Sovereign Identity

The mechanisms and evolving standards collectively known as self-sovereign identity (SSI) offer the prospect of a decentralized Internet by providing a central pillar for a human-centered data ecosystem (HCDE)

Identity not SSI The Working Principles of 2FA (2-Factor Authentication) Hardware

In 2019, the World Wide Web Consortium (W3C) announced the Web Authentication API (WebAuthn), the new global standard comprising the implementation of FIDO U2F security keys in most modern browsers. By that time, Google had already eliminated phishing attacks by having its 85,000 employees use hardware tokens

Beyond Identity Working With Intelligent Machines Jon Udell

Here is Copilot’s tagline: “Your AI pair programmer: get suggestions for whole lines or entire functions right inside your editor.” 

Thanks for Reading!

Read more \ Subscribe https://newsletter.identosphere.net

Support this publication https://patreon.com/identosphere

Contact \ Content Submissions: newsletter [at] identosphere [dot] net


KuppingerCole

Analyst Chat #85: Hybrid IT 4 - Hyperconverged, Edge and Cloud in a Box

This episode concludes the four-part series on hybrid IT. To wrap things up, Mike Small and Matthias focus on the latest developments in hybrid infrastructures, between containers, hyperconverged, edge and cloud in a box.

This episode concludes the four-part series on hybrid IT. To wrap things up, Mike Small and Matthias focus on the latest developments in hybrid infrastructures, between containers, hyperconverged, edge and cloud in a box.




Meeco

KC:Live: Exploring the Symbiotic Relationship between Security and Risk

Managing the granting and revoking of access is paramount to enabling effective risk management. Enforcing distinct access control requires an interconnected access management system that aligns with use case policies, regulations and commercial models. As we move towards a digitally connected everything, policies and risk management are becoming increasingly complex. What we used to do for government and enterprise, we will soon be doing for households as people contend with smart homes, cars, appliances, wearables and the range of identities involved.

Join this KCLive event to explore the symbiotic relationship between security and risk. Thought leaders and senior practitioners come together to discuss access management strategies that are the need of the hour to kick-start the digital transformation journey.

Everything will be tokenized

Our CEO & Founder Katryna Dow will be joining the agenda to discuss what access and risk mean in a world where “everything will be tokenized”. We’re on track towards a world where everything that can be, will be tokenized. Tokenization plays a critical part in enabling more equitable value creation for people, organisations and things, providing the means to issue and store value, trace provenance, and most importantly achieve consensus to instantly trust.

However, in order for this tokenized world to emerge, we first need the infrastructure for people and their digital twins to participate in equitable and fair ways. This will include digital identity, verifiable credentials and payments. Over the past year Meeco has been working on the increasing connections between enterprise and the decentralized world. This session will feature some of the use cases, practical steps, insights and learning from along the way, including case studies from eftpos and Hedera Hashgraph.

This free virtual event is on Wednesday, July 21 2021 at 2PM CEST. Register now: https://www.kuppingercole.com/events/2021/07/access-management-playbook

We look forward to seeing you and exploring how we can secure and manage the risk in this exciting new world!

The post KC:Live: Exploring the Symbiotic Relationship between Security and Risk appeared first on The Meeco Blog.

Friday, 16. July 2021

Cognito

The Fundamentals of Verifying Know Your Customer Information


KYC, or know your customer, is an essential part of the customer onboarding process. Modern businesses need to make sure they’re complying with the latest KYC regulations. But what is KYC verification, exactly? How has it changed for the modern landscape? Is KYC the same in 2021 as it was in 2020? These are all good questions to be asking, since it’s extremely important to stay on top of these...

Source


Spherity

Spherity leads the GAIA-X Identity and Trust group to Accelerate the Energy Transition in Europe

Spherity leads the GAIA-X Identity and Trust group to accelerate the Energy Transition in Europe

Spherity supports the interoperability and portability of data for a future data exchange model in the European energy market

GAIA-X is a project initiated by Europe for Europe and beyond. GAIA-X is creating a proposal for the next generation of a European data infrastructure: a secure, federated system that meets the highest standards of digital sovereignty while promoting innovation.

As part of GAIA-X, Spherity supports a secure data infrastructure for the energy industry. Photo by Sigmund

The “energy data-X” consortium has been qualified as one of the 16 winning consortia for the next step in the GAIA-X funding competition. The core principle of the consortium is to build a secure data space for the energy industry that enables secure and sovereign data use and establishes a foundation for innovative business models. Spherity leads the Identity and Trust group to build an infrastructure for a future data exchange model in the European energy market. Establishing a large and comprehensive energy data space enables interoperability and portability of data and data-driven applications within and across various sectors.

The “energy data-x” project aims to build a prototype data space for the energy industry within the European Energy and Climate Policy frame. This enables secure and sovereign data usage while enabling data-based business models for market partners. Four focal points will be central within the project in the upcoming years:

AI technologies will be used to utilize data for grid operation processes such as power and load flow forecasts.
Algorithms for preventive maintenance and plant availability forecasts will be optimized based on plant and operating data.
Smart meter gateways are being further developed at the endpoints of the energy system to communicate sensor data as an edge device in a star-shaped manner in the GAIA-X data space.
New approaches to the realization of market communication should enable more time synchronization, efficiency, and transparency.

The project consists of 15 strong energy ecosystem partners contributing to data-driven innovations that also support European energy and climate policy goals. The four transmission system operators 50Hertz, Amprion, TenneT, and TransnetBW, as well as the distribution system operator Energienetze Mittelrhein, are participating as consortium partners from the grid sector. ARGE Netz covers the subject area of plant operation. With PPC, SAP, and Spherity, three technology experts are on board. The Fraunhofer Institutes IEE and IOSB-AST cover the research area.

The distribution grid operators E.ON and enercity Netz, Steag and Viessmann in the area of plant operation, and the International Data Spaces Association for networking with other European Data Spaces are represented in the consortium as associated partners.

About GAIA-X

The GAIA-X project was initiated with the objective of creating a next generation of a European data infrastructure: a secure, federated system that meets the highest standards of digital sovereignty while promoting innovation. This project is the cradle of an open, transparent digital ecosystem, where data and services can be made available, collated and shared in an environment of trust. Representatives from several European countries and further international partners are currently involved in the project and the GAIA-X is in continuous exchange with the European Commission.

About Spherity

Spherity is a German decentralized digital identity software provider, bringing secure identities to enterprises, machines, products, data and even algorithms. Spherity provides the enabling technology to digitalize and automate compliance processes in highly-regulated technical sectors. Spherity’s products empower cyber security, efficiency and data interoperability among digital value chains. Spherity is certified according to the information security standard ISO 27001.

Stay sphered by joining Spherity’s Newsletter list and following us on LinkedIn. For press relations, contact communication@spherity.com.

Spherity leads the GAIA-X Identity and Trust group to Accelerate the Energy Transition in Europe was originally published in Spherity on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

Tokeny’s Talent|Luis’s Story

Ernane Luis is a Full Stack Developer at Tokeny Solutions.

Who are you?

Brazilian guy with a lot of stamps in the passport, as well as a Software Engineer with 4 years of experience in producing Ethereum DApps. I also have a Master’s degree in Computer Science. I am a highly motivated and efficient professional with a strong entrepreneurial spirit.

How did you land at Tokeny Solutions?

During the process of writing my Master’s thesis, I realized my passion for blockchain, which led me to search for blockchain projects and eventually brought me to work at Tokeny.

How would you describe working at Tokeny Solutions?

When I started working at Tokeny, the product was in its early stages, and that was the challenge I was looking for. Working on the product, from inception to delivery in production, was a great sense of accomplishment. My day-to-day work goes from reviewing code, to actual development, to integrating microservices, to fixing bugs. In the end, we are producing creative and effective customer solutions by aligning technology goals with business goals.

What are you most passionate about in life?

Freedom. Having lived and visited more than 35 countries, I have gained a taste of freedom, and that’s what I will search for, to live a life as free as possible. And blockchain technology just fits perfectly for this lifestyle. So I am very happy to be part of the revolution of the internet and decentralized technology.

What is your ultimate dream?

People on this planet could live a life that fulfills their sense of freedom. There, people could get off their own hamster wheel forever. That’s a very personal sensation, but one we all share as humans.

What would you change in the world if you could?

Nothing. I believe in decentralization and that chaos is necessary to create order. I might just make things worse. A single mind making decisions for all humans is a bad idea. We all have different needs, and what is good for me might not be good for you. I’d rather see mankind make progress based on their mistakes.

He prefers:
Coffee over Tea
Movie over Book
Work from home over the office
Both Cats and Dogs
Text over Call
Burger over Salad
Ocean over Mountains
Beer over Wine
City over Countryside
Slack over Emails
Casual over Formal
Crypto over Fiat
Night over Morning

The post Tokeny’s Talent|Luis’s Story appeared first on Tokeny Solutions.


digi.me

Covid health passes revived for bigger role in keeping economies open


Venues such as restaurants, bars and pubs are to be encouraged to use Covid health passports as a key part of moves to not just open up the economy for a short burst, but enable it to stay open longer term.

While cases of the Delta variant are surging in England, with predictions they might hit 100,000 a day by mid-August, the vaccination programme has significantly weakened the link between getting ill and hospitalisations and deaths.

Continue reading Covid health passes revived for bigger role in keeping economies open at Digi.me.


Europechain

How NFTs Can Benefit From Digital Identities

Proving the origins of something can often be very difficult, especially in the anonymous digital space. Can digital identities combat this problem?

In the 17th century, the Dutch Republic (present-day Netherlands) was one of the world’s leading economies. The country ranked among the top financial powerhouses of the day, boasting the highest per capita income in the entire world for about 120 years, from 1600 to 1720. (The Dutch financial system was, in fact, the most advanced of its time.) Then, circa 1630, something odd happened.

Source


SelfKey

SelfKey Wallets Latest Versions Released and KEY & KEYFI Airdrop Distribution Completed 🚀


SelfKey Weekly Newsletter

Date – 14 July, 2021

The latest versions of the SelfKey Wallet have been released, and the KEY & KEYFI airdrop for Binance Hodlers is complete.

The post SelfKey Wallets Latest Versions Released and KEY & KEYFI Airdrop Distribution Completed 🚀 appeared first on SelfKey.


Okta

A Quick Guide to Elasticsearch for .NET


Implementing search functionality in your .NET Core apps doesn’t have to be hard! Using Elasticsearch makes it easy to develop fast, searchable apps.

In this post, I’ll walk you through building a simple web application using Okta (for user authentication), Elastic Cloud (the official Elasticsearch hosting provider), and the fabulous Elasticsearch NEST SDK.

Why Use Elasticsearch?

Elasticsearch is an analytics and search engine based on the Apache Lucene library. It is developed in Java, following an open-core business model. Deep down within Elasticsearch lies a schema-free, JSON document-oriented database. This means that data is stored as documents, similar to rows in a relational database like MySQL. A document’s data is separated into fields, like columns in a relational database. Documents are essentially just JSON objects. Elasticsearch indexes these documents and searches them quickly. So, even if you have to search through millions of documents, searches are still going to be lightning fast.

Set Up a Sample Application with the Okta CLI

Okta is a secure and customizable solution to add authentication to your app. Okta supports any language on any stack and lets you define how you want your users to sign in. Each time a user tries to authenticate, Okta will verify their identity and send the required information back to your app.

There are now two options to get started with Okta. You can create an account and set up a sample .NET Core application with Okta authentication using the Okta Developer Console and by following the quick start guide. Or you can use the recently released Okta CLI, which provides turnkey application templates configured and ready to use after completing a few simple steps in the command line.

Install the Okta CLI by following the steps on the GitHub page.

Simply run the following command to start the CLI:

okta start

If you already have an Okta Developer account, you can run:

okta login

Select the ASP.NET Core MVC + Okta option when prompted.

A ready application should be created, with the Okta login flow already set up—this includes writing secrets to appsettings.json and adding the new client to your Applications on the Okta Developer Console.

You can test run the application by hitting F5 in Visual Studio or by entering dotnet run in your command line. A browser window will open and you should be able to log in using your Okta credentials.

I have uploaded my working application to GitHub in case you get stuck: Okta .NET Core 3 Elasticsearch Example.

Set Up an Elasticsearch Instance in the Cloud

Go to Elasticsearch Cloud and register for a new account. A free trial will be enough for us here. Elastic Cloud is an Elasticsearch hosting provider that will save you the trouble of learning how to set up, configure, and deploy Elasticsearch locally or in production.

Select Elastic Stack and leave everything else on default.

Hit Create deployment.

Make sure you save your deployment credentials for later.

Once the deployment is complete, click Open Kibana. You may have to log in with the credentials from above.

Click Add Data.

On the Sample Data tab, select the Sample eCommerce orders card.

After this, you have an Elasticsearch database set up and filled with sample data with eCommerce orders. Time to start searching.

Open the Elasticsearch Deployments page. Under the Applications header, click on Copy endpoint next to Elasticsearch. Paste this somewhere for later. It should be something like: https://c0fd6d13bb70485c9b062af5d1a24a82.us-central1.gcp.cloud.es.io:9243. You will need it, along with the credentials from earlier, to connect from your sample application.

Add Elasticsearch to a .NET Core Application

The plan here is to add a search bar to the application and query the Elasticsearch database of sample orders. The search will result in a list of orders where the name of the customer matches our search condition.

Make sure the application that you created earlier works properly. Open the solution in a preferred code editor and run it.

Log in with your Okta credentials to see Okta is configured as expected. You will provide the search functionality for logged-in users only.

Install the Elasticsearch NEST Client

NEST is an official high-level client created by Elasticsearch that maps all requests and responses as types and provides an interface to execute queries.

Let’s add the NEST Client from NuGet via package manager or by running the following command from the directory where the .csproj file lives:

dotnet add package NEST

Configure the Elasticsearch NEST Client

Open appsettings.json and add your deployment endpoint and credentials.

"ElasticCloud": { "Endpoint": "https://c0fd6d13bb70485c9b062af5d1a24a82.us-central1.gcp.cloud.es.io:9243", "BasicAuthUser": "elastic", "BasicAuthPassword": "YourPassword" }

Create a new file called SearchClient.cs in the folder where the .csproj file lives. Add the following code to add an ISearchClient interface.

using System;
using Microsoft.Extensions.Configuration;
using Nest;
using okta_aspnetcore_mvc_example.Models;

namespace okta_aspnetcore_mvc_example
{
    public interface ISearchClient
    {
        ISearchResponse<Order> SearchOrder(string searchText);
    }
}

Make a SearchClient implementation and initialize the ElasticClient in the constructor just above the ISearchClient declaration, so that your final SearchClient.cs file looks like this:

using System;
using Microsoft.Extensions.Configuration;
using Nest;
using okta_aspnetcore_mvc_example.Models;

namespace okta_aspnetcore_mvc_example
{
    public class SearchClient : ISearchClient
    {
        private readonly ElasticClient client;

        public SearchClient(IConfiguration configuration)
        {
            client = new ElasticClient(
                new ConnectionSettings(new Uri(configuration.GetValue<string>("ElasticCloud:Endpoint")))
                    .DefaultIndex("kibana_sample_data_ecommerce")
                    .BasicAuthentication(
                        configuration.GetValue<string>("ElasticCloud:BasicAuthUser"),
                        configuration.GetValue<string>("ElasticCloud:BasicAuthPassword")));
        }

        public ISearchResponse<Order> SearchOrder(string searchText)
        {
            return client.Search<Order>(s => s
                .From(0)
                .Size(10)
                .Query(q => q
                    .Match(m => m
                        .Field(f => f.CustomerFullName)
                        .Query(searchText)
                    )
                ));
        }
    }

    public interface ISearchClient
    {
        ISearchResponse<Order> SearchOrder(string searchText);
    }
}

The SearchClient class takes IConfiguration to read the configuration values (Elastic endpoint URL, user name, password) from appsettings.json so it can access the Elastic instance.

The SearchOrder() function takes a searchText parameter to search within the orders by the full name of the customer and lists the first ten results. See the documentation on how to write more complex queries.

Using the Elasticsearch NEST Client

First, you need to add the SearchClient in Startup.cs as a Singleton object. Add the following line of code to the bottom of the ConfigureServices() method:

services.AddSingleton<ISearchClient, SearchClient>();

Now, it is ready to be initialized by the HomeController’s constructor.

public class HomeController : Controller
{
    private readonly ISearchClient searchClient;

    public HomeController(ISearchClient searchClient)
    {
        this.searchClient = searchClient;
    }

    // ...
}

Then add the Model to represent the search text and the search result orders. Add a new file in the Models folder called SearchResultsModel.cs:

using System.Collections.Generic;

namespace okta_aspnetcore_mvc_example.Models
{
    public class SearchResultsModel
    {
        public string SearchText { get; set; }
        public List<Order> Results { get; set; }
    }
}

Also add an Order.cs:

using System;
using Nest;

namespace okta_aspnetcore_mvc_example.Models
{
    public class Order
    {
        public int Id { get; set; }

        [Text(Name = "customer_full_name")]
        public string CustomerFullName { get; set; }

        [Date(Name = "order_date", Format = "MMddyyyy")]
        public DateTime OrderDate { get; set; }

        [Number(Name = "taxful_total_price")]
        public decimal TotalPrice { get; set; }
    }
}

Add Search.cshtml with a search bar and some logic to display the result orders in a table.

@model SearchResultsModel
@{
    ViewData["Title"] = "Search Page";
}

<div class="text-center">
    <form class="form-inline" asp-action="Search" asp-controller="Search">
        <div class="form-group">
            <input class="form-control m-2" asp-for="SearchText" placeholder="Search"/>
        </div>
        <button type="submit" class="btn btn-primary m-2">Search</button>
    </form>
    @if (Model?.Results != null)
    {
        <div>
            <table class="table">
                <thead>
                    <tr>
                        <th>Customer Name</th>
                        <th>Order Date</th>
                        <th>Total Price</th>
                    </tr>
                </thead>
                <tbody>
                    @foreach (var result in Model.Results)
                    {
                        <tr>
                            <td>@result.CustomerFullName</td>
                            <td>@result.OrderDate</td>
                            <td>@result.TotalPrice</td>
                        </tr>
                    }
                </tbody>
            </table>
        </div>
    }
</div>

Add a Search(string searchText) function to your HomeController to handle the search text and retrieve the results.

[Authorize]
[HttpPost]
public IActionResult Search(string searchText)
{
    var response = searchClient.SearchOrder(searchText);
    var model = new SearchResultsModel { Results = response.Documents.ToList() };
    return View(model);
}

Adding the [Authorize] attribute is a good idea here to make sure this endpoint is only accessible to logged-in users. Okta will handle the rest.

Run and Test

Let’s see what we have done. Run the application and search for Mary. You should see something like this:

Takeaways

Elasticsearch is great for numerous use cases, including searching for semi-structured or unstructured data. Elastic provides NEST, a high-level, easy-to-use client for .NET that maps all requests and responses as types. It takes advantage of specific .NET features to provide higher-level abstractions, such as auto-mapping. By combining Okta and Elasticsearch, you can build secure and efficient applications with minimal coding and debugging.

Learn More About Using Elasticsearch and Okta in .NET Core
Elasticsearch open-core repository on GitHub
How to tune Elasticsearch for faster search speeds
Kibana, an awesome data visualization tool for Elasticsearch
Okta CLI, the simplest way to create secure application templates
Get Started with ASP.NET Core + Okta

If you have any questions about this post, please ask in the comments below. For more tutorials like this one,, follow @oktadev on Twitter, like us on Facebook, or subscribe to our YouTube channel.

Thursday, 15. July 2021

KuppingerCole

Identity Verification: Why It Is Needed and How It Can Benefit the Business


The COVID-19 pandemic has transformed the way customers engage with brands and led to increased digital interaction. But this has increased the incidence of fraud during the account creation process. As a result, businesses now face the challenge of verifying customer identity and verifying that those entities interacting with their brand are human and who they claim to be. Join identity experts, Martin Kuppinger, Principal Analyst at KuppingerCole and Armin Ebrahimi, head of Distributed Identity at Ping Identity, as they discuss how best to tackle these challenges without adding unnecessary friction to the user experience to reduce the risk of customer drop off and lost business. 




Dark Matter Labs

Property rights and wrongs: micro-treaties with the Earth

Droits et travers en matière de propriété : Micro-traités avec la Terre Repenser nos responsabilités envers la nature grâce à l’intendance des terres

[AN ENGLISH VERSION OF THIS TEXT IS AVAILABLE HERE

Living law on a living planet — preface by John Borrows
We must reinvigorate our relationships with the land.
On the farm where I grew up, I learned that we depended on the natural world. The soil, the seeds, the snow cover, the rain, the heat, the insects, the birds and a hundred other elements shaped how the plants grew and the animals thrived, as well as how we lived.
My grandparents' experiences and teachings on the reserve only reinforced this idea. As in other Indigenous systems, Anishinaabe knowledge was passed from one generation to the next to help us live well. My grandparents taught us that fish, deer, frogs, otters, turtles, rabbits, medicinal plants, food and human health are inextricably linked in a complex web that is essential to sustaining life.
My experience on the farm and on the reserve led me to understand that our own efforts are essential to nurturing and caring for these relationships. We have an active role to play in fostering healthy environments.
The information in the publication below helps us recognize the work that lies ahead.
The deep codes needed to sustain life are still very much present in the land, and we can weave them into our contemporary lives. Our legal relationships can accommodate the changes that must be made to recover an ecologically rich and diverse life.
But we must act now. The Earth is suffering from our abuse and neglect. The suffering we experience as human beings stems from our failure to live by the lessons our ancestors learned in many parts of the world. Indeed, we cannot act as though we were not part of the natural world.
John Borrows
Canada Research Chair in Indigenous Law
Faculty of Law, University of Victoria
Self-destructing in order to live

The many crises that mark our era, especially those related to climate and housing, point to a deeper problem: our broken bond with the land.

Today, we all live by paradigms of self-destruction, whether we are Indigenous or not. Indigenous peoples have been systematically marginalized and have endured great suffering as a result of colonialism and the Indian Act. And the ways we inhabit the land now generate ecocides and feed an increasingly greedy and unequal socio-economic system — for Indigenous and non-Indigenous people alike.

Many agree that COVID-19 is only the tip of the iceberg. As habitat and biodiversity loss accelerates across the planet, the appearance of the coronavirus may mark the beginning of repeated mass pandemics. The emergence of such viruses is perhaps the most powerful example of this self-destruction (which is now reaching our bodies).

It is therefore imperative to rethink our laws and governance systems in order to transform the systemic and spiritual bonds we maintain with the land and with nature. And we need one another to get there.

This publication draws on several workshops and conversations we have held with Indigenous leaders across Canada over the past two years. It looks at how to rethink property rights (and responsibilities) and environmental protection at the intersection of civic and Indigenous futures.

« From an Indigenous perspective, reconciliation between Indigenous and non-Indigenous Canadians also requires reconciliation with the natural world. If human beings resolve the problems between themselves but continue to destroy the natural world, reconciliation will remain incomplete. »
― Honouring the Truth, Reconciling for the Future: Summary of the Final Report of the Truth and Reconciliation Commission of Canada, 2015
Laws are laden with history: the relationship between the lived world and the law

The tools we invented to make it possible to 'own' the planet through private property tell a story about our worldview and our values. Across eras and cultures, various legal traditions have been established to solidify that vision, but it still tells that original version of a relationship to the world, of the place of human beings within nature, and of our relationship with (and division from) the environment around us.

Legal traditions, and the laws or constitutions they inspire, are not a source of original truth. They are not divine. As John Borrows notes in his book Law's Indigenous Ethics, we should not ignore the 'initial' claims of laws. Better understanding the 'origin' of our legislative frameworks lets us grasp the worldviews they support and ask whether those worldviews are still appropriate today.

It is therefore important to recognize and make explicit the diverse histories, lived worlds and constitutional orders that underpin Western and Indigenous perspectives.

The law tells the story of a lived world.

When it comes to reconciliation, the dialogue between Indigenous resurgence and the decentring of Western perspectives must recognize the existence of radically different lived worlds. That dialogue will thus remain open and ongoing.

In the manner of the Haudenosaunee Two Row Wampum treaty, we must aim for mutual understanding without interfering or forcing reconciliation. We must leave room for the emergence of distinct civic and Indigenous futures, and of shared ones.

Taking stock: what lies behind Western laws when it comes to land?

Western legal systems such as the common law and civil law are rooted in liberalism. Enlightenment thinkers, notably Thomas Hobbes, John Locke and Jean-Jacques Rousseau, laid their philosophical foundations. At its origin, liberal thought holds that the 'law' must defend liberty, consent and equality.

Western legal systems rest on an assemblage of foundational concepts that dictate a particular relationship with the world around us:

If you do not hold rights, you are held.
Property is expressed through (written) agreements between sovereign, autonomous human beings.
The Earth is "the backdrop against which human beings live out history." (Mills, 2016)

A passive land: the default Western view of nature

The various legislative tools concerned with the natural world mostly seek to regulate human practices, alternating between protection and exploitation. Think, for example, of the many laws and regulations attached to the extractive, agricultural or tourism industries.

By regulating extractive practices such as mining, hydraulic fracturing and pollution, the current body of environmental law is not really designed to protect nature. Rather, it aims to regulate human use and abuse of nature.

At best, our laws defend a nature to be 'set apart'; at worst, they set out the rules for exploiting a nature to be 'used'.

Taken together, our laws thus define nature as passive, silent and fragile.

An abstract land: the default Western view of property

Both the common law and civil law frame property as a bundle of rights rather than of responsibilities. Land is held and titles are transferred from one person to another. It is an agreement between persons.

The land is seen as abstract and stripped of physical meaning. It is absent from the legal agreement and therefore from the relationship.

Land: a history of colonization

Private property, as a legal and contractual mechanism, has been and remains an important instrument of colonization. The colonial power of private property acts as a means of dispossession and as a mechanism leading to the erasure of Indigenous political and legal systems.

The deep architecture of our systems crystallizes power dynamics. Removing colonization from the very heart of our systems means, among other things, giving the land a voice again in our legal and contractual instruments.

Recognizing that land and nature are not mere backdrops but present actors that "speak" destabilizes the primacy of the human being, a concept at the centre of Western liberal thought. In other words, it calls into question the basic premises of the world created by the common law and civil law.

Could liberal constitutional systems even survive a relationship in which nature is considered active?

At this moment in our history, the moment of the climate crisis, we have no choice but to find out.

As many acres of land as a man can plough, sow and cultivate, and whose fruits he can use for his sustenance, so much is his own property. By his labour he makes that good his particular possession and distinguishes it from what is common to all. […] In the beginning, all the world was like an America, and even much more so in the state I have just supposed than this newly discovered part of the earth is today. For then money was nowhere known.
― John Locke, Two Treatises of Government, 1689
« The first aim of the Government should be to give all Indians their individual property, and to do so as soon as possible. »
― Letter from the Lieutenant-Governor and Superintendent of Indian Affairs, November 11, 1878
Decentring the default Western view / Putting our own house in order

Western legal systems already contain pathways that could lead to a more dynamic relationship with the land.

By following them, the Western and Indigenous legal worlds could begin a productive dialogue.

Legal pluralism

The liberal legal orders of the Western world can give rise to legal pluralism. The interaction of civil law (in Quebec) and the common law (in the rest of Canada) is a good example. However, great care must be taken when thinking about how settler law can interact with Indigenous legal orders, especially where the relationship to land is concerned.

Legal incommensurability

The liberal legal orders of the Western world can give rise to genuine legal pluralism among systems that share a common constitutional framework. However, this may also create legal incommensurability (that is, incompatibility) between settler laws and the various Indigenous legal orders, since they stem from radically different constitutional families and, ultimately, radically different worldviews.

In seeking to desacralize Western legal viewpoints and to explore the new relationships with the land that could flow from doing so, we must set aside our expectations of what a dialogue between Indigenous legal orders and a more relational Western legal framework might look like.

Put differently, the intersection of civic and Indigenous futures may not be a self-contained inquiry, but rather an endless riddle.

A different relationship with nature

The emergence of the Rights of Nature proposes a transformation of our relationships with nature within the current legal order.

This approach seeks to confer legal personhood on nature. By giving nature an active and autonomous legal status, it offers a more ecocentric model of law that considerably increases the land's presence in our legal world. In this respect, the Rights of Nature are a very real advance in the attempt to decentre the Western world's default legal views.

However, nature remains caught in a rights discourse that stems from the idea that we can choose to enter into a legal agreement with it. In that sense, the Western default view is alive and well.

The Rights of Nature approach also highlights the artificiality of human-designed cadastres, since ecosystem boundaries are elusive: where does a river that has been granted legal personhood begin and end?

Since these rights must nonetheless be carried by human beings acting as defenders, one may ask whether this could affect the death toll among people who already stand up for the environment against sometimes brutal economic forces. Indeed, 160 land defenders were killed in 2018, a reality that is only one of the visible consequences of the abuses committed worldwide against those who defend their lands for the good of their communities and of future generations: criminalization, harassment, and so on.

Some examples of the Rights of Nature

So far, Rights of Nature have only been granted to vast areas. But what if we took a micro-massive approach to establishing Rights of Nature, that is, through mass mobilization around much smaller plots of land?

A different relationship with property

Community land trusts offer an alternative to ownership. The legal entity takes the form of a non-profit organization that stewards the land under a perpetual trust so that it can be used by a community in perpetuity.

A few observations:

The land is genuinely present within a community land trust. It is a person-to-person-to-place agreement, and we find in it the traces of a multi-actor contractual mechanism.
Beyond rights, there is a responsibility for the land's preservation and affordability in perpetuity. The land is present in the agreement, and human beings hold a stewardship responsibility towards it.
The land is still perceived as something that can be held through written agreements.

Bearing these observations in mind…

What would happen if we gave the land an even greater active presence? Would our contractual mechanisms of ownership still be relevant?

What would happen if we added responsibilities towards, among others, the other species present on these lands?

What is property anyway? Beyond the illusion of absolute rights

Property rights are not absolute.

In Canada, property rights fall under provincial jurisdiction. They were deliberately excluded from the Canadian Charter of Rights and Freedoms (Part I of the Constitution Act, 1982), which means they enjoy no constitutional protection.

More importantly, despite culturally widespread stories, property rights are not absolute. They exist alongside other laws and mechanisms and are limited by their interactions with them.

In other words, property is not just a bundle of rights; in theory, it is already also a bundle of responsibilities: municipal, provincial, federal, constitutional, international and Indigenous.

What does Aboriginal title mean for private property?

What is Aboriginal title?

A (constitutional) common law doctrine. Section 35 of the Constitution Act, 1982 recognizes and affirms Aboriginal rights, without defining or creating Indigenous rights.
Sui generis: it is an inherent right flowing from past or present legal, social and political systems that maintain a relationship with ancestral lands (prior to colonization). It is "recognized", not "granted" (that is, it is neither defined by nor a product of the colonial legal system).
A body of case law (see Calder, Delgamuukw-Gisdayway, Campbell, Haida, Mikisew Cree or Tsilhqot'in) has been produced to interpret the nature and scope of these rights.
It is the antithesis of the Indian Act.

What recent case law can teach us about private property

In Tsilhqot'in Nation v. British Columbia (2014), the Nation obtained a declaration that "Aboriginal title confers ownership rights similar to those associated with fee simple, including the right to decide how the land will be used, the right of enjoyment and occupancy of the land, the right to possess the land, the right to the economic benefits of the land, and the right to pro-actively use and manage the land." (Tsilhqot'in Nation, Supreme Court judgment, paragraph 73.)

The growing body of case law on the recognition and definition of Aboriginal title under section 35 (a constitutionally protected right) moves us towards a tipping point where Aboriginal title and private property rights could intersect, and where the former could come to limit the latter.

The following question could then arise…

« Will Aboriginal title displace private beneficial interests in land, or will private land prevent Aboriginal title from being recognized over that land? »
- John Borrows, 2015

As such, Aboriginal title is an "empty shell". Within section 35, Indigenous legal orders give content and meaning to Aboriginal title.

There are more than a hundred different Indigenous legal orders in Canada that have, or will have, something to say about "private property" and about the relationship human beings maintain with the land.

Greater recognition of Aboriginal title in Canada, combined with the resurgence of Indigenous legal orders, could well turn these "empty shells" into legal wormholes (in other words, shortcuts between two parallel legal universes).

Because Canadian Aboriginal case law will take us there sooner or later… Because the dominant Western legal system gives rise to paradigms of self-destruction… And, more importantly, because we have a responsibility of repair and reconciliation…

We must renew our (legal) relationship with the land.

7 points of convergence: A compass for making kin
Well-rooted proofs of possibility

The Civic-Indigenous 7.0 initiative has been a grounding exercise for us: a place to root a renewed conception of ecological and socio-economic initiatives, along with the related financial and regulatory mechanisms, at the intersection of Western and Indigenous worldviews.

In the rest of this post, we explore how the seven points of convergence developed in the previous post can serve as anchors for building Nouveaux voisins, an initiative focused on reconciliation with nature on private lots with a view to creating a new form of micro-treaty.

As we strive to bring together past, present and future worldviews through these proofs of possibility, we know there may be tensions and even dissonance between more technocratic and more relational approaches.

If you have ideas on how to approach these tensions, or on how the seven points of convergence could be used to develop another initiative, please don't hesitate to share them with us.

Nouveaux voisins: reconciliation with nature begins in our own backyard

Nouveaux voisins is currently working with a growing ecosystem of partners to regenerate lawned lots in order to increase biodiversity, mitigate some of the consequences of climate change, and help rethink our problematic relationship with nature from a municipal standpoint (cities <> suburbs <> rural areas).

So far, Nouveaux voisins and a few partners have focused on three things: 1. narrative-building (extending the notion of neighbourhood to other living beings and questioning the stigma that upholds the perfect lawn as the default landscaping option); 2. developing alternative landscaping approaches through pilot projects (such as this one); 3. creating tools to support this socio-ecological transformation (web platform and toolkit, regulatory sandboxes, financial and fiscal mechanisms to account for collectively created ecological services, etc.).

The quest for the perfect lawn is deeply rooted in our Western culture. Lawns are the largest irrigated crop in North America, and their harmful environmental consequences are numerous. The socio-ecological transformation proposed by Nouveaux voisins and other similar actors could therefore have a far-reaching impact.

It would also allow us to move away from a vision in which nature is managed by humans, towards recognizing the benefits that nature and its ecosystems provide to humans and, ultimately, towards understanding nature's self-sovereignty, within which metabolic exchanges take place between species (including humans).

Consulting with nature: towards 1 million interconnected backyards

To date, Rights of Nature have mostly been granted to a specific site (a lake, a river, etc., together with a specific community) or by a state (Bolivia, Ecuador, Uganda), sometimes with little impact on the ground. A similar observation can be made about ecological conservation efforts.

Some preliminary, hypothetical scenarios for each implementation stage.

By starting with informal, cumulative, smaller-scale actions, initiatives like Nouveaux voisins could create a bottom-up collective movement of ecological restoration / Rights of Nature.

The impact would be very limited if only a few property owners took part. But it could become an excellent means of cooperating with nature if millions of people across Canada decided to join this transformation, and if tools and infrastructure existed to support it.

Instruments could, for example, be designed to improve collective decision-making about which species to save first based on extinction rates, or which ecological services to support in a given area, all through the choices we collectively make and encourage in our backyards.

We explored possible options along these lines in this post.

Redefining beauty together to value usefulness

Today, most people cannot afford to hire a specialist to transform their yard (for example, in one of Nouveaux voisins' pilot projects, regenerating a 40-square-metre lot in Outremont [a Montreal neighbourhood] cost about $4,000). On the other hand, simply letting the lawn grow and regenerate without human intervention brings its own challenges, such as allergy risks or the number of years needed to reach a wild look that is both interesting and acceptable by today's cultural norms.

To move beyond lawns and return to a more natural state, we may need a new generation of civic financial instruments and models capable of accounting for multidimensional and longitudinal forms of value (carbon sequestration, increased biodiversity, heat island reduction, etc.) and of mobilizing the private and institutional capital needed to meet the environmental challenges of rewilding.

We need a better understanding of the different benefits within a single yard (carbon sequestration, increased biodiversity, public health, quality of life, community participation, food security, etc.), as well as more information on the differentiated values created at different scales (a single yard versus a street versus a neighbourhood versus a region).

Indigenous worldviews often carry a philosophy of interdependence and belonging, as expressed in the Mohawk phrase Akwe Nia'Tetewá:neren (all my relations). The understanding is that we are all related to one another and to all life on Earth. How can we show that kind of respect for all living species in our backyards? How can we connect the scientific understanding of ecological systems acquired over centuries with the Indigenous wisdom acquired over thousands of years of intimate relationship with the territory we live on today?

Preliminary map of Nouveaux voisins' co-benefits
Embedding responsibilities towards nature in current legal orders

Based on the research and conversations we have conducted with various stakeholders, the municipal and provincial regulatory levels appear to be the best entry points for recognizing our responsibilities towards nature on a given private property.

At the municipal level: Rather than mandating lawn maintenance (e.g. plant cover kept below a certain maximum height in centimetres), municipalities could set the minimum percentage (e.g. 50%) of a yard's surface that should be dedicated to ecological services, and design tools to help their communities reach that goal.

At the provincial level: Current heritage laws seem like a good entry point, either by recognizing the rights of nature within conservation easements (as has been done with the Community Environmental Legal Defense Fund), or by offering voluntary legal conservation options, as the Government of Québec does (e.g. a nature reserve, a conservation easement, the donation or sale of property, and the designation of a plant habitat).

The latter come with income or a reduction in municipal taxes, but they generally apply only to large properties. A more collective and democratic approach to conservation and restoration is needed.

Indigenous laws: It is possible to integrate Indigenous wisdom into other legal orders and thereby prepare the ground for an eventual recognition of legal plurality. However, it has been suggested that, in the short term, we should start by "putting our own house in order" before engaging directly with an Indigenous legal order as a non-Indigenous person.

That said, an Indigenous man told us that when a City of Montreal official came to his home with a ruler to measure the height of his lawn following a neighbour's complaint, he explained that he was following his people's ancestral law and what it says about the relationship with the land. The official left the property immediately. This anecdote nicely illustrates legal plurality and how the negotiation between Indigenous and colonial legal traditions can play out in micro-decisions on the ground.

Next step: A conversation with representatives of the different levels of government needs to take place to clarify how responsibilities towards nature can be embedded in current legal orders.

Working towards the reconciliation of our regulations

Regulation is one of government's most important tasks. It refers to a codified relationship between commerce, the state, civil society and nature, ensuring adequate protections and room for action in every domain. At a moment when we find ourselves at a crossroads facing unprecedented challenges but also unprecedented possibilities, it is important to understand that the game is changing and that the rules must therefore change too, including with respect to the properties we rent or own.

Here are a few ideas of how that could unfold.

Property as a (micro) treaty: seeing beyond ownership
Can we see property as a treaty made with a place, rather than as possession of it?

Can we, at the outer edges of Western legal thought, see property as a treaty rather than as a deed aimed at private possession? Can we prototype a digital micro-treaty (a smart treaty), for instance, that could interact in real time with all of its relations and responsibilities, including the limits that already apply to private property under settler law?

Owning a home and caring for the land it sits on could then become an act of deep reconciliation or, at the very least, an act of "preparation" so that a dialogue can take place between the current notion of private property and Indigenous legal orders through Aboriginal title. While this would not guarantee a smooth integration of Aboriginal title and private property rights, it could nonetheless stimulate the interface… or our imagination.

The paths that lead beyond ownership remain elusive. However, specific sparks of wonder arising from a first contact with Indigenous legal thought could help us stop simply doing "business as usual" and take a more active part in decentring the default Western view. Here are a few of those sparks of wonder to carry the reflection further:

« As city dwellers, part of our difficulty is learning how to live with the force of the Earth and to let ourselves be counselled by it. Go and talk to the trees and the plants to understand nature's language, stories, science and treaties. And because the Earth is alive, history cannot be reduced to "once upon a time in 1701". It is about what is happening today and what can happen in the future. Section 35 is an integral part of Canadian law and can help us create laws that do not come only from parliaments or legislatures, but that also come from us, that are part of us. It is not only lawyers and judges who can practise law. We can all do it. And we can begin by recreating the bond between people and their lands. This is about us. We do not have to wait for it. The bond is there. It surrounds us. »
― John Borrows, excerpt from his teaching by the pond, Toronto, 2019
Get in touch

We are still in the early stages of building Civic-Indigenous 7.0, which means we are open to any feedback that could inform our future work. We are also looking for people who would like to join us on this journey, so please don't hesitate to get in touch.

Jayne Engle — McConnell Foundation
jengle@mcconnellfoundation.ca

Jonathan Lapalme — Dark Matter Labs
jonathan@darkmatterlabs.org

This article is also available as a PDF (in English). Contact us to get a copy.

Acknowledgements

This article was originally written in English by Marie-Sophie Banville and Jonathan Lapalme of Dark Matter Labs, both based in Tiohtià:ke/Montreal, in collaboration with Des villes pour tous / the McConnell Foundation, the Canada Research Chair in Indigenous Law at the University of Victoria Faculty of Law, the Center for First Nations Governance and Nouveaux voisins. The graphics were created by Hyojeong Lee of Dark Matter Labs and by the Nouveaux voisins team. The text was translated from English by Julie Lanctot and revised by Emile Forest.

We would also like to thank several other collaborators who took part in Civic-Indigenous 7.0 workshops and in the conversations that led to this article. They belong to the following organizations: Waterloo Institute for Social Innovation and Resilience (WISIR), MaRS Discovery District, Evergreen / Future Cities Canada, Mi'kmaw Native Friendship Centre (Halifax), Centre for Indigenous Innovation and Technology / Troon, Federation of Canadian Municipalities (First Nations – Municipal Community Economic Development Initiative), Cando (CEDI), Center for Democratic and Environmental Rights, Community Environmental Legal Defense Fund (CELDF), Centre québécois du droit de l'environnement (CQDE), the David Suzuki Foundation and several others.

References

Droits et travers en matière de propriété : micro-traités avec la Terre was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Holochain

Three Holo Networks

Holochain Dev Pulse 100

In the wake of our HoloPort Alpha Program announcement, I'd like to take a look at the program's guts — specifically the separate networks and why they matter.

(Also, this is the hundredth Dev Pulse! An exciting milestone! 🎉 Huge thanks to Amanda, Micah, Klaye, Paulii, Jarod, Lucas, Mary, the dev teams, and all the other people who’ve ever worked on it.)

The Holo network: actually three separate networks

The Holo hosting dev team will assign all hosts in the HoloPort Alpha Program to one of three networks:

The alphaNet is an alpha release of the production-ready Holo network. From the HPOS to the centralised infrastructure, it runs on code that's known to be stable — appropriate for production use, though not necessarily performant, feature-complete, or bug-free. It sees a slow update cycle, only incorporating changes as they're proven to work right.
The flexNet runs on code that includes features and bugfixes that are candidates for promotion to the alphaNet but need pre-release testing. It is a mixture of stable and unproven components that will eventually make it into the release once they're considered reliable. Components could come from the alphaNet, the devNet, or a special branch of the codebase that focuses on just one feature.
The devNet tracks the current state of development and is very unstable. This is where the daily work shows up, and it is only used and tested by Holo team members. Automated tests are run on the devNet each night on new features and bugfixes, as well as to catch 'regressions' — previously working features that have been broken by new code.

The networks are almost completely separate, from web browser to HoloPort. All the centralised components — the web server that serves up the Chaperone script to the user, the matchmaker service that finds the user a HoloPort to connect to, the Holo router that connects the user to their chosen HoloPort, and the proxy and bootstrap services that connect HoloPorts to each other — are replicated so they isolate the alphaNet from the devNet. This keeps bugs from leaking out into the experience of users on the alphaNet, and it prevents resource-heavy automated tests from slowing the alphaNet down.

The Holo devs are describing this as a huge improvement in their ability to test changes — not only does it contain bugs to a single network, but it also vastly simplifies the testing and deployment setup. Each network can be deployed automatically — from the HoloPort, through the centralised infrastructure, to the scripts that run in the user's browser. These are the kinds of quality assurance upgrades we need right now as we prepare for public alpha testing.

Explaining the requirements for the HoloPort Alpha Program
Remote SSH enabled

Because the Holo hosting dev team will be the ones choosing how many HoloPorts they need for each network, SSH will be enabled by default on all HoloPorts for this stage of the program. This is a big departure from our previous approach, so I want to clarify three things:

This is only for the duration of the HoloPort Alpha Program.
We'll only be using it to switch your HoloPort between networks and perform support diagnostics.
In the future, you'll be able to turn SSH off any time you like from the Host Console Settings.

If you disable it, your HoloPort will not be used in the program — that is, it won't be serving hApps to users, and you won't be eligible to receive HoloFuel.

95% uptime

In the future, web users’ source chains will be backed up by multiple HoloPorts with an uptime of at least 90%. We’re aiming for 5× redundancy, which will give a total uptime of 99.999%, allowing users to access their app even in the unlikely event that four of their assigned HoloPorts are offline.
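As a quick sanity check on that figure — a rough sketch, assuming outages are independent and each HoloPort sits exactly at the 90% floor:

$$P(\text{all 5 offline}) \le (1 - 0.9)^5 = 10^{-5} \quad\Rightarrow\quad \text{availability} \ge 1 - 10^{-5} = 99.999\%$$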

For now, though, each web user only gets assigned to one HoloPort. If it happens to go offline, all the users it’s serving can no longer access their app. That’s why the HoloPort Alpha Program requires 95% uptime as one of its eligibility criteria. This will be adequate for testing purposes; anything less will give an unpleasant user experience. We think this is worth paying for, which is why you’ll be getting a stipend in HoloFuel if you join the program.

If you’re eager to get involved, head over to the Holo forum and watch for announcements in the coming weeks.

An invitation: help improve the developer documentation!

The folks at Sprillow, a Holochain dev shop that focuses on good user experience (you should check out their Acorn hApp), also care a lot about developer experience. They've invited everybody with some technical background to join their 24-hour Documentation Drive, starting today at 12:00 UTC (sorry for the late notice). You don't need to be a Holochain expert yourself; you just need to be willing to dig a little and improve things just a bit.

It’s no great secret among hApp developers that the learning journey is a bit rough and wild; there’s a lot that could be improved. And the success of the Holochain ecosystem will rest on hApp developers, so this is a low-commitment, high-impact way to help out!

Join the documentation effort by filling out this form.

Thanks, Sprillow, for organising this!

Dev Camp update: 1000 registrants!

We've been floored by the level of interest in the upcoming Holochain Dev Camp — 1,000 registrants so far. That's a 4× increase since the last Dev Pulse; interest is accelerating!

If you haven’t signed up yet, here’s the registration form.

Breaking changes in develop: decryption function and must_get

If you’re developing your hApp against the develop branch of Holochain, two new breaking changes have landed.

One decryption function, x_25519_x_salsa20_poly1305_decrypt, had its parameters mistakenly reversed behind the scenes. The fix isn't actually a breaking change; the host function signature and parameter order haven't changed. But if you discovered that you had to swap sender and recipient in order for it to work, you'll need to swap them back to what the documentation says.
All DHT retrieval functions have been disabled in validation callbacks and replaced with new deterministic ones. If your validation callbacks need to load dependencies from the DHT, use the new must_get_* host functions to protect validation from sources of non-determinism. These functions are guaranteed to retrieve something; if there's something wrong with the dependency, they'll short-circuit in the host with the proper error.
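To make the second change concrete, here is a minimal sketch of a validation-time helper written in the deterministic retrieval style described above. It assumes the hdk prelude exposes must_get_entry, ExternResult and ValidateCallbackResult roughly as in the 0.0.x releases; the helper itself, and the idea that the entry under validation points at its dependency by EntryHash, are illustrative rather than part of the Holochain API.

```rust
use hdk::prelude::*;

// Sketch only: resolve a dependency inside a validation callback with the
// deterministic must_get_* family instead of get()/get_details(), which can
// return different results on different nodes at different times.
fn check_dependency(dependency_hash: EntryHash) -> ExternResult<ValidateCallbackResult> {
    // must_get_entry either returns the entry or short-circuits the callback
    // in the host with the appropriate error (e.g. a missing dependency),
    // so the callback itself stays free of non-deterministic branches.
    let _dependency = must_get_entry(dependency_hash)?;

    // Any further checks on the retrieved dependency would go here; if they
    // all pass, the entry under validation is considered valid.
    Ok(ValidateCallbackResult::Valid)
}
```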

Cover photo by Paul Carmona on Unsplash


Evernym

July 2021 Release Notes


Our monthly developer update details what's new and what's coming next on our product roadmap.

The post July 2021 Release Notes appeared first on Evernym.


IBM Blockchain

Blockchain newsletter for July: Blockchain and other technologies make for powerful solutions


With blockchain’s growing maturity, innovators of all stripes are finding new opportunities in combining blockchain with other technologies. Blockchain + AI. Blockchain + IoT. Blockchain + confidential computing. In all these synergistic combinations, the formula adds up to greater value overall. EXCLUSIVE video preview: IBM’s Jerry Cuomo on Blockchain and AI Hear from Jerry Cuomo, […]

The post Blockchain newsletter for July: Blockchain and other technologies make for powerful solutions appeared first on Blockchain Pulse: IBM Blockchain Blog.


KYC Chain

KYC for Market Makers: A Case for Advanced Identity Proofing and Institutional Verification

With KYC-Chain, our market maker clients quickly build detailed insight into the potential clients and projects they are considering onboarding. Advanced ID proofing brings the transparency, oversight and risk mitigation the space needs. The post KYC for Market Makers: A Case for Advanced Identity Proofing and Institutional Verification appeared first on KYC-Chain.

Dark Matter Labs

Financing sustainability transitions: in search of a new, civic-led orthodoxy

Credit: CIVIC SQUARE — construction of a neighbourhood grow room
Finance at the Climate Crossroads

Since the signing of the UNFCCC’s Paris Agreement in 2015, $3.8 trillion has been invested in fossil fuels, and much of the money set aside to ‘build back better’ will not address climate change. The role of finance in both the causes of climate breakdown and how we pull our societies back from the brink is a crucial and hot topic that many individuals and organizations pursuing a low-carbon, climate-resilient, just, and inclusive future are grappling with.

While Sustainable Finance has emerged in response and has seen significant expansion over the past decade, with a further 55% growth forecast for 2021, the practice of taking environmental, social and governance considerations into account when making investment decisions is unlikely to lead to transformative change of the kind required. Not only do ESG investments manifest mostly as a "do less harm" exercise with a focus on risk reduction instead of value creation, but they also hardly do better than conventional funds in directing investment to sustainable activities.

The micro-shifts of ESG investing are a step in the right direction but wholly insufficient in the face of the climate and ecological crisis. Mainstream approaches leave much of the prevailing investment logic — the paradigms, structures, and practices of investing — untouched. Incrementalism and tweaks at the fringes of the finance system leave these sustainable investments steeped in many of the same finance orthodoxies that got us to this point of crisis in the first place. Put bluntly, moving our pensions from companies involved in oil exploration to less carbon-intensive industries such as biotech, financial services, or digital technology is not going to drive the transformational changes we need locally and globally. This is why ESG investing has come under increasing criticism in recent months for failing to deliver on its impact promise.

The TransCap Approach

The shortcomings of the Sustainable Finance movement have been noted by many already, not least one of the partners in this collaboration: the TransCap Initiative.

“Transformation capital is an investment logic at the intersection of systems thinking and finance practice. It has been designed to catalyse sustainability transitions in the places that matter most for human prosperity — such as cities, landscapes, and coastal zones — as well as in value chains and other real-economy systems. It recognises the world as a complex adaptive system and embeds systems thinking, human-centred design, and sensemaking in all stages of the investment process.”
Credit: TransCap Initiative — transformation capital white paper

This new investment logic presents a critique of the ESG mindset and argues for why we need to push the world of finance to think beyond the singular goal of investment as a means for capital to multiply itself, and instead to focus on increasing the effectiveness by which Sustainable Finance is deployed. It argues that viewing the world through a systems lens and focusing on real assets in the places where we live, work and play — instead of in stock markets — is more aligned with our shared ambition to build not just a low-carbon but also a climate-resilient, just, and inclusive society.

“Transformation capital intends for investors to deploy capital primarily to create change dynamics that propel a (real-economy) system in a specific direction, both in order to set the system on an environmentally and socially sustainable footing as well as to enable the continued multiplication of capital in the long run.”
How this manifests locally

But how does that manifest locally? And how does it interact with those who live within the systems these investments seek to change? How can we better connect the financial system to communities in a way that serves them rather than relegates them to a source of value extraction? How can we ensure relationships move beyond the continued multiplication of financial capital as the primary driving force and instead prioritise wellbeing, social justice, and ecological regeneration?

Conventional investment portfolios systematically avoid opportunities to generate compound value (both private and public) by remaining purposefully divorced from the tangible and intangible assets embedded in communities themselves. Furthermore, they are not composed in a systemic manner through which individual assets can be brought into synergistic relationship with each other so that they generate multiplier effects that amplify their value to the community. A simple example might be a residential fund investing in a rental portfolio in a neighbourhood. The fund manager knows that the attractiveness of the place and their success is partly based on the sense of community and safety as well as the quality of nearby parks, schools, shops and transport connections. But there is no mechanism or incentive for these connections to be made.

While financial capital is a powerful lever in driving a system’s behaviour, there are other types of capital important to society’s sustainability transition. One of them is ‘community capital’, which manifests in different ways. It can take the form of the relationships of trust within communities that facilitate coordination and cooperation for mutual and civic benefit. It can be the organisational and institutional infrastructure in which deliberative and decision-making capacity lies within a community and amongst its members. It can even be the combined financial and social investment potential of a community to generate multiple shared value from collective civic action.

Credit: CIVIC SQUARE — #TheBigLunch weekender at The Front Room, South Loop Park, Birmingham

These types of community capital are rarely aligned in a purposeful way with financial capital flows, sustainable or otherwise. And yet these types of capital and human-to-human interactions are the bedrock of real value creation in our neighbourhoods and communities. Worse yet, there has been a historic and turbulent relationship between financial capital and the people and places it shapes. Extractive models of financial investment have bred distrust and often directly contributed to the inequities at the heart of the crises we face. Whether it’s through housing developments that privilege profits over affordable, green and comfortable living for all, or high-street chains motivated by shareholder interests over those of the community, the long term risks associated with these investments are socialised and the rewards are privatised — the 2008/2009 Global Financial Crisis serving as a powerful reminder.

Credit: CIVIC SQUARE — exploring modern making methods, small sites and community builds

As technology advances, these harmful power imbalances risk being replicated further and faster. The incentives, values and ownership models baked into finance will play crucial roles in the means of creation and distribution: how behavioural data from mobile phones, public spaces and smart homes is owned and used; how Artificial Intelligence is developed and applied to our everyday lives; how gene editing is applied and owned in our food system; or even how social media and other online platforms that breed polarising content, unsustainable consumption patterns and mass disinformation structure their business models. Whatever the scope of the transition, how it is financed matters.

This makes it all the more crucial to see sustainability transitions and the finance that drives them through an economic, political, cultural and social justice lens and recognize that these transitions happen alongside — and not isolated from — other structural shifts in society. In other words, sustainability transitions are not just about carbon neutrality, but rather a need to move away from the prevailing logic of an environmentally and socially extractive economy towards a regenerative and inclusive economy capable of fundamentally addressing the most tangible and pressing challenges of the 21st century. As the Climate Justice Alliance puts it,

“The transition itself must be just and equitable; redressing past harms and creating new relationships of power for the future through reparations. If the process of transition is not just, the outcome will never be. Just Transition describes both where we are going and how we get there.”
Rewiring capital and value

Rebalancing this relationship between communities and finance requires more than just new capital formations, new funds and new grant structures. It requires the systemic rewiring of capital itself, away from asymmetric power, control and value extraction, towards more distributed and democratised forms.

It also requires a rewiring of how we see value. Reconfiguring the relationships between financial capital and community capital can unleash the multiplicity of equitable returns and co-benefits, recognising that value flows from investments can take many forms, but too few are incorporated into conventional financing models. Reductions in liabilities, for instance the health risks associated with fuel poverty, are rarely accounted for fully. Longitudinal and intangible outcomes, such as increased social resilience from one generation to another, fail to be valued fully when investment decisions are made. The failure to capture these values reinforces a warped view of what constitutes a ‘good’ or ‘bad’ investment.

Credit: Dark Matter Labs — Bridging the Impact Investment Gap: Infrastructuring Tomorrow

If our finance systems are to go beyond capital formations motivated largely by short-term, narrow economic returns alone (financial ROI / social ROI that use conventionally static or negatively biased discount rates), and begin to recognise the multiple civic returns possible, we need to explore new ways of measuring, accounting for and financing civic value creation, in service of a new symmetry of power and mutuality. It is in this rewiring of capital and value that we can create the systemic conditions for just transitions.

Our TransCap x CivicValue x CommCap exploration

The intersection of new investment logics (transformation capital), new forms and flows of value (civic value) and the tangible and intangible assets in our communities (community capital) is where our shared exploration sits. It is where we seek to further unpack and explore the tensions and contradictions that their combination surfaces by testing and validating key components of each, and how they intersect.

The aim is to generate insights and learnings relevant to a diverse range of place-based transitions. While the transitioning of, for example, urban neighbourhoods towards a just carbon neutrality and universal wellbeing will be different to transitioning rural land systems towards sustainable and ecologically enriching uses, there will be common challenges and relevant points of connection to draw from.

We have purposefully brought together a collaboration that spans a diverse set of vantage points, knowing that diversity is strength and that to work in complexity is to embrace entanglements and often conflicting perspectives.

The TransCap Initiative is a collaborative space for developing, demonstrating, and scaling systemic investing in some of the places that matter most for human prosperity — such as cities, landscapes, and coastal zones — as well as in value chains and other real-economy systems. We apply systems thinking, human-centred design, and the principles of mission-driven innovation to explore how financial capital can catalyse the transformation of systems in service of a low-carbon, climate-resilient, just, and inclusive future. The TransCap Initiative is currently convened by the Climate-KIC International Foundation.

Thirty Percy Foundation and Lankelly Chase Foundation are philanthropies passionate about systemic change, climate justice and place-based transitions to a future that is regenerative and just. We have a history of supporting ambitious, values-aligned partners through grant funding, and of challenging systems that enable excessive accumulation of wealth at the expense of social and ecological wellbeing.

Dark Matter Labs is a strategic discovery, design and development lab working to transition society in response to technological revolution and climate breakdown, focusing on the institutional innovation that sits between the systems and structures that shape society. This includes reimagining new place-based institutions with capacity for collective sense-making, implementing, and financing of regenerative projects, to re-coding wider value models through contracting in order to harness the potential of placed-based transitions.

CIVIC SQUARE is based at the heart of a neighbourhood in Birmingham and is focused on the 21st-century regenerative civic and social infrastructure that underpins how we respond together, and equitably, to our growing societal challenges. Through our key mission areas — the public square, the neighbourhood lab, and the creative and participatory ecosystem — we work in a number of ways alongside our neighbours and local community: making the invisible visible by uncovering the dark matter, investing in the radical reimagination of our systems through the dream matter, and giving it playful, participatory, everyday, accessible form in the ordinary matter.

The strength of this collaboration is that each partner brings perspectives and assumptions that have some common ground but aren’t completely aligned. The point is to collaboratively explore the tensions that arise and, from a position of empathy and humility, work towards reconciliation that can further the cause we broadly share.

Working out-loud

We want to recognise our privilege in this work and avoid falling into the trap of building a knowledge base in isolation of wider stakeholders, including those within neighbourhoods and communities who will help determine the shape and feel of each place-based transition. Our intention is for this exploration to build upon existing knowledge and inspire a wider ecosystem of stakeholders to continue to develop and test out new ideas in a way that builds a ‘movement-of-many’ capable of displacing dominant systems of investment and finance.

To that end, we will be ‘working out-loud’ over the course of this collaboration. Partners will take it in turns to write a WeekNote, sharing learnings as we go.

You can find the TransCap x CivicValue x CommCap WeekNote here.

Our shared enquiries

To guide us over the next 6 months, we have co-developed a set of shared core enquiries that capture what we feel are fundamental questions that need further exploration.

1. How can we best challenge the modus operandi of investors and guide those with aligned intentions towards new logics, practices and instruments adaptive to different contexts, communities and scales of transition?
2. How can we best build the capacity for change amongst communities, intermediaries and investors that address historical tensions between capital and societal interests, as well as legacy lock-ins that remain deep-rooted causes of the crisis?
3. How can we best develop new financial, technical and participatory innovations, which not only systemically rebalance the agency of change from financial capital towards community capital, but are transferable / translatable across multiple different boundaries and scales of ownership (e.g. urban neighbourhoods but also rural land / bio-regions)?

What sits behind these three leading enquiries is a set of cascading enquiries that represent our collective ‘hunches’ for where further work could have value.

How might we best build the capacity for change amongst communities, intermediaries and investors that address historical tensions between capital and societal interests, as well as legacy lock-ins that remain deep-rooted causes of the crises we face?
How might we navigate the tension between urgency for action and the deeper shifts required to deliver equitable systemic change that is grounded in historic and cultural inequities?
How might an ecosystem of investors be nurtured to provide different types of capital that serve local, just climate transitions in different ways?
How might new investment logics address ownership, wealth and power concentrations that drive extractive models of financing, while supporting new economic thinking relating to revenue, risks and returns?
How might ‘participatory imagining’ practices help establish investments with shared intention, in a way that builds legitimacy across all stakeholders involved?
How can new ways of measuring the quality of investments drive rigour and legitimacy (and comprehensively account for their multiple benefits beyond financial ROI)?
How can minimum viable prototypes, proxies and proofs of possibilities help make abstract concepts and ideas around new investment logics tangible to a diverse group of stakeholders?

Towards tangible actions and longer-term proofs of possibility

As a collective, we are committed to working towards concrete action in the near term, while maintaining a broader perspective for ‘movement-building’. Our principles for implementation are set out below:

Short term impact with long term horizons

This initial exploratory phase will inform what can be implemented in the next 12–18 months. It recognises that the transformational investments needed are urgent and long overdue, and that communities want to see positive changes sooner rather than later. But these short-term actions must also be deliberate in how they act as steps towards a radically different future, rather than extending the status quo.

Testing approaches at multiple layers of systems

Emerging ideas and actions will focus on investment logics and fund structures, but also on institutions and people. Crucially, to support innovation at those levels, ideas will also include innovation of the technical instruments that can enable these new approaches.

Place-focus

Given the very broad ambitions for this collaboration, we plan to initially focus on one place — Ladywood in Birmingham — to help ground our thinking. This is partly because CIVIC SQUARE is rooted here, is working alongside neighbours in the area, and has long-term commitments to exploring new models of social infrastructure, community retrofit and more; and partly because, as in many neighbourhoods facing similar challenges, the transition needs are great. That's not to say that we won't be considering other local contexts and types of transitions concurrently and over time.

Matching top-down and bottom-up

We must find better ways of linking finance — which typically wants a standardised model to which large amounts of investment can be applied — with communities, who typically require a bespoke approach tailored to their needs and may need relatively small amounts of capital at a time. We will explore other examples of how this challenge has been overcome and that have been able to deliver meaningful, scaled outcomes.

Creating a long-lasting platform for sustainability transitions

In times of adversity, community capital has a direct relationship to the ability of neighbourhoods to adapt and cope with change. Therefore, implementing the changes required to make sustainability transitions happen will require accountable, empowered and securely-resourced local partnerships or even new institutions to be established. We will explore how these institutions might emerge and also be sustainable over the long term and not be entirely dependent on potentially changeable political agendas.

Get in touch

If you're interested in learning more about this collaboration, or are exploring similar themes and issues relating to the topics outlined above, please feel free to reach out to our designated points of contact:

Dominic Hofstetter, TransCap Initiative dh@transformation.capital

@dhofstetter_x

Immy Kaur, CIVIC SQUARE

immy@civicsquare.cc

@ImmyKaur

Tom Beresford, Dark Matter Labs

tom@darkmatterlabs.org

@t_bez12

Jen Hooke, Thirty Percy Foundation

jen@thirtypercy.org

@jenhooke

Dominic Burke, Lankelly Chase Foundation

dominic@lankellychase.org.uk

@_dominicburke

Financing sustainability transitions: in search of a new, civic-led orthodoxy was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


OWI - State of Identity

Ping Identity: The Privacy Paradox


Join host Cameron D'Ambrosi and Chief Customer Information Officer at Ping Identity, Richard Bird as they discuss safeguarding remote workplaces as a huge shift of activity has been driven to the cloud due to the pandemic, vaccine passports, portable medical results, and records being something that humans are empowered to hold and manage, and the privacy paradox from GDPR to CCPA. Get your questions answered in this can't miss podcast!


Affinidi

Role of Public Key Cryptography in Self-Sovereign Identity


You must have heard the words “cryptography” and “public-private key” in the context of data security. But what are they really? And why should you care?

To answer these questions, let’s get a bit into the basics of Public Key Infrastructure (PKI) and its application in a digital identity framework called Self-Sovereign Identity (SSI).

What is Public Key Infrastructure?

In simple terms, public key infrastructure is a framework for encryption and digital signatures that ensures secure communication between two entities. The use of encryption algorithms and private-public key pairs makes this framework well suited to the secure transfer of data.

Encryption

Encryption uses a mathematical algorithm that takes readable text (plaintext) as its input and transforms it into a scrambled output (ciphertext) that is computationally infeasible to reverse without the corresponding key.

Asymmetric encryption is a kind of encryption implemented with a private-public key pair.

Public and Private Keys

A pair of keys, called the public-private key pair, works in tandem: data encrypted with one key can only be decrypted with the other.

As the name suggests, the private key is kept private by an entity while the public key is shared with the recipient(s).

Let’s understand this with an example where John and Mary are communicating with each other.

1. John initiates a conversation with Mary and asks for her public key, which Mary shares with him.
2. John writes a message, encrypts it with Mary's public key, and sends it to her.
3. Since Mary has the associated private key, she decrypts John's message using her private key and reads it.
4. To respond, she asks for John's public key, which he shares.
5. Mary encrypts her reply with John's public key and sends it to him.
6. John decrypts the message with his private key and reads it.

Since a message cannot practically be decrypted without the associated private key, an intercepted ciphertext is of little use to an attacker. This is what makes PKI a secure way of communicating over open networks like the Internet.
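
To make the exchange above concrete, here is a minimal sketch of asymmetric encryption using the Python `cryptography` package. The names, message, and RSA/OAEP parameters are illustrative choices for this example, not part of any particular SSI product.

```python
# A minimal sketch of the John/Mary exchange using RSA-OAEP
# from the Python `cryptography` package (illustrative only).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Mary generates her key pair and shares only the public key
mary_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
mary_public_key = mary_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# John encrypts a message with Mary's public key...
ciphertext = mary_public_key.encrypt(b"Hello Mary", oaep)

# ...and only Mary's private key can decrypt it
plaintext = mary_private_key.decrypt(ciphertext, oaep)
assert plaintext == b"Hello Mary"
```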

Role of PKI in SSI

Self-Sovereign Identity (SSI) is a philosophy where you own your data, store it in a place of your choosing like a digital identity wallet, and share it with just the entities you want to.

There are many ways to implement SSI using cryptographic keys, and in this article, let’s look at two methods.

Using a Unique Digital Identifier

In this implementation, a unique digital identifier (typically a large number) is also generated along with your private-public key pair. You must register the public key and the unique digital identifier on the blockchain while the private key can be stored in a digital wallet.

At the time of authentication, the system looks up the unique identifier and the associated public key while you prove the ownership of this identifier with your private key.
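
As an illustration of that lookup-and-prove step, the following sketch uses an Ed25519 key pair and a random challenge. The in-memory `registry` dictionary and the identifier format are assumptions made for the example and stand in for a blockchain lookup; they do not describe any specific ledger.

```python
# Hypothetical sketch: proving ownership of an identifier via a signed challenge.
import secrets
from cryptography.hazmat.primitives.asymmetric import ed25519

holder_private_key = ed25519.Ed25519PrivateKey.generate()
identifier = secrets.token_hex(16)                        # unique digital identifier (assumed format)
registry = {identifier: holder_private_key.public_key()}  # stands in for the on-chain registration

# Verifier issues a one-time challenge; holder signs it with the private key
challenge = secrets.token_bytes(32)
signature = holder_private_key.sign(challenge)

# Verifier looks up the public key for the identifier and verifies the signature
registry[identifier].verify(signature, challenge)  # raises InvalidSignature if it doesn't match
print("ownership of", identifier, "proven")
```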

Using Digital Signatures

Another implementation is through decentralized identifiers and verifiable credentials.

There are three parties to this process and they are:

Issuer — The entity issuing a credential such as a government ID.
Holder — The owner of the credential, i.e., the entity on whom the issuer generates the credential.
Verifier — The entity that checks the validity and authenticity of the credential presented by the holder.

The issuer signs a verifiable credential with its private key, encrypts it with the holder’s public key, and sends it to the holder. Next, the holder stores this VC in an identity wallet and can decrypt it using his or her private key to access the VC.

At the time of verification, the holder creates a verifiable presentation that can include one or more credentials and digitally signs it with his or her private key, and encrypts it with the verifier’s public key.

The verifier decrypts the verifiable presentation with its private key and checks its authenticity using the public keys of both the holder and the issuer.
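
For illustration only, here is a minimal sketch of the signature side of this flow (the encryption steps are omitted for brevity). It uses Ed25519 signatures and a plain JSON payload; real verifiable credentials follow the W3C data model and proof formats, which this example does not attempt to reproduce.

```python
# Hypothetical sketch of issuer -> holder -> verifier signatures (no encryption shown).
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

issuer_key = ed25519.Ed25519PrivateKey.generate()
holder_key = ed25519.Ed25519PrivateKey.generate()

# Issuer signs a credential about the holder
credential = json.dumps({"type": "GovernmentID", "holder": "Mary", "over18": True}).encode()
credential_signature = issuer_key.sign(credential)

# Holder wraps the credential in a presentation and signs it
presentation = json.dumps({"credential": credential.decode(),
                           "credential_signature": credential_signature.hex()}).encode()
presentation_signature = holder_key.sign(presentation)

# Verifier checks both signatures with the respective public keys
holder_key.public_key().verify(presentation_signature, presentation)  # holder really presented it
issuer_key.public_key().verify(credential_signature, credential)      # issuer really issued it
print("presentation and credential verified")
```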

These are two of the ways in which the public key infrastructure framework is used to implement SSI-based applications.

Affinidi provides building blocks for an open and interoperable Self-Sovereign Identity ecosystem. Reach out to us on Discord or email us if you want to build VC-based applications using our tech stack.

Follow us on LinkedIn, Facebook, or Twitter. You can also join our mailing list to stay on top of interesting developments in this space.

The information contained in this article is for general information and educational purposes only. It is not intended to constitute legal or other professional advice.

Role of Public Key Cryptography in Self-Sovereign Identity was originally published in Affinidi on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Oct 19, 2021: Meeting the Identity and Access Challenges in a Multi-Cloud World

Multi-cloud deployments are becoming increasingly common as organizations seek to remain competitive in the digital economy and address demands for increased remote working. But while cloud migration is enabling business success, it is not without its identity and access challenges.

EIC Speaker Spotlight: Annie Bailey on the Future of Artificial Intelligence


by Andrea Beskers

Annie Bailey, Senior Analyst at KuppingerCole, will host the Artificial Intelligence stream at European Identity and Cloud Conference 2021.

To give you a sneak preview of what to expect, we asked Annie some questions about the future of AI.




What is AI's future? 

We can think about the future of AI in three different ways. So we could of course think about it from the technological advancements, which are happening right now, and really speeding up at a rate that we haven't seen before. So there are advancements in AI in conversation and cognitive use cases, towards analytical use cases, towards integration and automation for all sorts of moving things: into cars, into robots, into drones, and then of course connecting all of these together on the processing side. In just about any industry you can think of, there are really strong and sometimes surprising advancements here in AI. And so that future is really driving forward at a rate that's quite exciting. We could think about the future of AI from the regulations standpoint too, and that's really going to have a huge effect on what AI will become and how it's going to be integrated into our businesses and our lives.

There are of course, many different frameworks around the world. There's not all that much legally binding regulation that's out there, but on the horizon, there is a recommendation for a regulation which has been put out by the European Commission. We'll talk about that a bit more in depth later. We can think about the future of AI as it's being shaped by business. There are so many use cases here that are really working towards the process optimization side and assisting in decision-making, assisting in transferring information into a way that can support fast decision-making. So these three things are really creating the future of AI - so we could say technology, regulation, and its application in business.


What do the regulations indicate about AI's future? 

If we consider the journey that AI has been on and the journey that regulation has been on concerning AI, it's gone from frameworks and ethical statements about how AI should relate to and be a part of human society to where we are now. And today, there is a clear-cut proposal for regulation from the European Commission, and this could be adopted sometime in the next year to two years, so we're of course not in the place where we have legally binding regulation in the European Union yet, but it's on the table. If we think about what this regulation is actually trying to do, it's working to harmonize the use of AI and its development across the European Union. So it's very typical of EU regulations. There needs to be a standard and interoperability between all member states.

What's quite interesting is they go into a tier-based system on the risk that any AI system could potentially pose to any of its users or any stakeholders around it. It breaks it down into regulations for systems which have an unacceptable risk and how to deal with those, and what counts as an unacceptable risk, and then down to high risk, limited risk, and finally to minimal risk. And so we're moving towards a little more clear-cut understanding of how AI is going to be used in our societies, what's going to be acceptable and not, and what the penalties for that will be. So that's obviously very good in terms of removing uncertainty, but it's of course laying down some boundaries, and with boundaries it's going to bring oversight and a need for governance and accountability. You can argue both sides of this: on one hand the gates swing wide, and without so much uncertainty the future of AI is much brighter because it's clear what is acceptable and what is not. On the other hand, [there's] a more limited take on the future of AI, because of course there are horizons and boundaries which should not be crossed.


How are enterprises getting AI into their business? 

Businesses have a few options for bringing AI into their systems and processes. The first could be point solutions, and these are AI systems that have been developed and trained to do one very clear, specific job, or to meet one very clear, specific use case. This could be anything from the health industry detecting malignant tumors in images and scans to supply chain management, to anomaly detection for a specific use case. These are all wonderful and meet a very specific, defined need, but of course, if you think about the spread of companies that will need to be developing these solutions, it's endless. Every situation, every scenario that a company may encounter and would like AI to solve, step in on, or support is so unique that we're never going to come to the end of different point solutions.

Another option has turned up in what we're calling AI service clouds. Now these are cloud solutions that deliver the whole range of machine learning development and governance tools, as well as the computing power and the cloud power behind it. These options tend to be really good for organizations which want to create their own point solution for their own scenario, and so these are becoming quite interesting. AI is being brought into the product itself to help automate and structure the process of selecting a model and going through training and validation, then through the lifecycle management and the ML ops [Machine Learning Operations] side. With that, businesses are able to either go for the point solution, something which is already designed for a very clear, specific use case, or they can be a bit more creative and design it themselves with the support of an AI service cloud.


Flip the script, what does AI mean for our future? 

We've been talking about what the future of AI is, and of course what's impacting that, but we have to think about this as a relationship. What does the impact of AI mean for our future? And for us, it's going to mean a much higher dependence on data, and we're already in this data-driven world. Every day it's bringing us closer to having more data-driven insights, to being able to collect data from across the organization to break down silos and use this in really smart ways. AI is going to accelerate that and bring quite measurable benefits from that, which is only going to speed up the process. With that higher dependence on data, we have to have more resilient data value chains and governance around data. This is absolutely essential and will be even more so as we speed up this process.

And of course, AI is many, many things, and it's such an ambiguous term, but you can think of it as a form of communication because AI is bringing information into a form which can be used to make decisions. And so that's a form of communication, or of course you have the natural language aspects where you're being able to communicate your needs in a more natural way, rather than coding, querying or just using keyword searches. All of these methods of communication and bringing communication more into our daily business processes is going to bring to light the need for transparency. There's going to be more opportunities to ask the questions of how was that data collected, or why was that decision made, or why does this impact me in the way that it does? Those aspects, the dependence on data, the need for then governance and security of that data, and then transparency into our heightened ability to communicate, are going to be some outcomes of AI in our lives.



Indicio

First Indicio Certificate Alumni Event

The post First Indicio Certificate Alumni Event appeared first on Indicio Tech.

Aergo

DApp Contest Update


Dear AERGO Community,

Here is an update on our DApp contest. Many developers from all over the world have signed up for the contest, and the overwhelming feedback has been a request for additional time to develop the DApps. We may have underestimated the time necessary to create a great DApp. To ensure that all developers have adequate time to develop and test their DApps, we are extending the deadline to complete and submit them to September 30, 2021. Winners will be announced on October 15, 2021. We have informed all parties, including the Samsung Blockchain team, who are part of our judging panel. Thank you for your support and participation.

Detailed contest rules can be found here:

Blocko will host a hackathon contest to find and award the best DApps from the developer community. The Samsung Blockchain team will be part of the judging panel and lend their support to listing DApps on the millions of Samsung mobile devices. Sponsored by the AERGO Foundation, AERGO will award token prizes to the top DApps.

Prize: 500,000 AERGO token purse
First place: 250,000
Second place: 100,000
Third place: 50,000
Fourth place: 20,000
Fifth place: 10,000
5,000 each for 14 participants picked at random

Judging:
Blocko
AERGO Foundation team
Samsung Blockchain team

Contest Rules:
DApps need to comply with Samsung Blockchain Wallet requirements. Please refer to the SDK here: https://developer.samsung.com/blockchain
DApp can be webapp format (mobile optimized) or Android app
Restricted categories: no adult themes, no schemes promising high investment returns, no gambling
Encouraged categories: DeFi, Games, Social, Other

Teams may submit as many DApps as they would like

Individuals or companies may participate

Samsung will determine which DApps will be listed on the Samsung Blockchain DApp Marketplace

Schedule:

Applications will be accepted starting March 11, 2021

DApp submission deadline is September 30, 2021

Winners will be announced on October 15, 2021

All date and times are Korea Standard Time

Submission:

Project submission form

Any questions regarding the contest, please contact dapp@aergo.io

DApp Contest Update was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Blockchain Commons

Gordian Seed Tool Reveals the Foundations of Cryptography


Blockchain Commons has released Gordian Seed Tool, a new iOS app for the creation, storage, backup, and transformation of cryptographic seeds, now available on the Apple App Store. It is an independent, private, and resilient vault that can protect the most important underlying secret used by most cryptocurrencies: the cryptographic seed.


Though most wallets focus on the private keys used to unlock their cryptocurrency transactions, Gordian Seed Tool takes a step back and allows you to manage the fundamentals upon which private keys are built: entropy and seeds. You can use coin flips, die rolls, card draws, or iOS randomness as the entropy to generate your seeds, or you can import them from other applications. Using those seeds, Gordian Seed Tool can then derive unique, non-correlatable public and private keys as you need them.
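
As a rough illustration of turning physical entropy into a seed (and a mnemonic backup), here is a minimal sketch using the Python `mnemonic` package. The hashing step and the die-roll string are assumptions made for this example; they do not describe Gordian Seed Tool's internal algorithm.

```python
# Hypothetical sketch: condensing die rolls into 128 bits of entropy,
# then encoding the result as BIP39 words and expanding it into a seed.
import hashlib
from mnemonic import Mnemonic  # pip install mnemonic

die_rolls = "3621545126435216634125163452"                   # user-collected entropy (assumed input)
entropy = hashlib.sha256(die_rolls.encode()).digest()[:16]   # condense to 128 bits

m = Mnemonic("english")
words = m.to_mnemonic(entropy)          # human-readable backup phrase
seed = m.to_seed(words, passphrase="")  # 64-byte seed for key derivation
print(words)
print(seed.hex())
```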

The focus of Gordian Seed Tool is to protect your seeds. It does so by supporting new approaches to security such as QR-based airgaps, where you store your secrets on a closely held device that isn't fully networked, such as an offline device in airplane mode, or a device with strongly protected hardware, like the Secure Enclave in Apple's iPhones and more modern Macintoshes. You then communicate with that device primarily through QR codes or text that can easily be transmitted across that gap of air. Your seeds are thus protected by modern mobile-device security such as data encryption, trusted hardware, and biometric access protection.

That data encryption comes about through Apple's trusted encryption routines. Seed Tool then adds resilience through integration with iCloud. If you choose for your mobile iOS device to have a network connection, its seeds will be stored in iCloud using end-to-end encryption. If you lose your device, you can retrieve those seeds with a replacement device logged into the same Apple account. (If you prefer, you can also use a non-cellular device, such as an iPod Touch or wifi-only iPad, kept in airplane mode, to ensure that your seeds never leave your device.)

You’ll be able to easily identify your seeds using the Blockchain Commons object identity block, which includes a visual lifehash, a human-readable name, an icon, and an abbreviated digest. Put them together and you should be able to recognize each seed at a glance.

Besides generating seeds, storing them, and backing them up, Gordian Seed Tool can also transform your seeds. It can derive popular Bitcoin and Ethereum public and private keys from your seeds, answer requests for other derivations, encode seeds as bytewords, BIP39 words, or hex, or shard them into SSKR shares and save them to different locations or as social recovery with your family, friends and close colleagues. Your seed is the basis for a whole tree of secure data, and Gordian Seed Tool gives you access to all of it.
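
As one concrete example of deriving keys from a seed, the BIP32 convention computes a master private key and chain code from an HMAC-SHA512 over the seed. The short sketch below illustrates only that first derivation step; it is not drawn from Gordian Seed Tool's source, and the example seed value is arbitrary.

```python
# Hypothetical sketch of BIP32 master key derivation from a seed (illustrative only).
import hmac
import hashlib

seed = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # example seed value
digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
master_private_key, chain_code = digest[:32], digest[32:]  # per the BIP32 specification
print("master key:", master_private_key.hex())
print("chain code:", chain_code.hex())
```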

Gordian Seed Tool is just one of several Blockchain Commons apps that demonstrate the Gordian principles of independence, privacy, resilience, and openness. The Gordian QR Tool offers a way to store confidential QRs such as 2FA seeds, SSKR shares, and (for that matter) cryptoseeds. Our forthcoming Gordian Cosigner will demonstrate how to eas