Last Update 6:44 PM November 23, 2020 (UTC)

Identosphere - Organization Blog Feeds

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Monday, 23. November 2020

Federal Blockchain News

Tracking Oil & Gas across the border: Mavennet CEO Patrick Mandic

Mavennet CEO Patrick Mandic talks about tracking oil & gas imports across the US/Canada border with blockchain technologies.

Saturday, 21. November 2020

decentralized-id.com

KERI - Key Event Receipt Infrastructure


Website - Resources - GitHub - Identifiers & Discovery WG

KEY EVENT RECEIPT INFRASTRUCTURE (KERI) DESIGN Samuel M. Smith Ph.D. v2.54 2020/10/22, v1.60 2019/07/03 [arXiv]

An identity system based secure overlay for the Internet is presented. This includes a primary root-of-trust in self-certifying identifiers. It presents a formalism for Autonomic Identifiers (AIDs) and Autonomic Namespaces (ANs), which are part of an Autonomic Identity System (AIS). This system uses the design principle of minimally sufficient means to provide a candidate trust spanning layer for the internet. Associated with this system is a decentralized key management infrastructure (DKMI). The primary roots-of-trust are self-certifying identifiers that are strongly bound at issuance to a cryptographic signing (public, private) key-pair. These are self-contained until/unless control needs to be transferred to a new key-pair. In that event, an append-only chained key-event log of signed transfer statements provides end-verifiable control provenance. This makes intervening operational infrastructure replaceable, because the event logs may therefore be served up by ambient infrastructure. End-verifiable logs on ambient infrastructure enable ambient verifiability (verifiable by anyone, anywhere, at any time). The primary key management operation is key rotation (transference) via a novel key pre-rotation scheme. Two primary trust modalities motivated the design: a direct (one-to-one) mode and an indirect (one-to-any) mode. In the direct mode, the identity controller establishes control via verified signatures of the controlling key-pair. The indirect mode extends that trust basis with witnessed key event receipt logs (KERLs) for validating events. The security and accountability guarantees of indirect mode are provided by KERI's Agreement Algorithm for Control Establishment (KA2CE) among a set of witnesses.
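The pre-rotation scheme and append-only key event log described in the abstract can be sketched in a few lines. This is an illustrative toy, not the KERI wire format: random bytes stand in for real Ed25519 key pairs, and all names are hypothetical.

```python
# Toy sketch of self-certifying identifiers with pre-rotation.
# Not the KERI spec: real implementations use Ed25519 signatures
# and KERI's CESR encoding. Standard library only.
import hashlib
import os

def digest(data: bytes) -> str:
    return hashlib.sha3_256(data).hexdigest()

def new_keypair():
    # Placeholder keys: random bytes stand in for a real signing key pair.
    private = os.urandom(32)
    public = hashlib.sha3_256(private).digest()
    return private, public

def inception_event(public: bytes, next_public: bytes) -> dict:
    return {
        "type": "inception",
        "identifier": digest(public),        # self-certifying: bound to the key
        "key": public.hex(),
        "next_digest": digest(next_public),  # pre-rotation commitment
    }

def rotation_event(prior: dict, new_public: bytes, next_public: bytes) -> dict:
    # A rotation is valid only if the new key matches the prior commitment.
    assert digest(new_public) == prior["next_digest"], "rotation not pre-committed"
    return {
        "type": "rotation",
        "identifier": prior["identifier"],
        "key": new_public.hex(),
        "next_digest": digest(next_public),
        "prior": digest(repr(prior).encode()),  # chain to the previous event
    }

# Usage: incept with key A, pre-commit to key B, then rotate A -> B.
_, pub_a = new_keypair()
_, pub_b = new_keypair()
_, pub_c = new_keypair()
icp = inception_event(pub_a, pub_b)
rot = rotation_event(icp, pub_b, pub_c)
print(rot["identifier"] == icp["identifier"])  # True: identifier is stable
```

Because the commitment to the next key is published before that key is ever used, an attacker who compromises the current key still cannot rotate to a key of their choosing.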

Decentralized key management Sam Smith (Manning)

● Why any form of digital key management is hard
● Standards and best practices for conventional key management
● The starting point for key management architectures: roots-of-trust
● The special challenges of decentralized key management
● The new tools that verifiable credentials (VCs), decentralized identifiers (DIDs), and self-sovereign identity (SSI) bring to decentralized key management
● Key management for ledger-based DID methods
● Key management for peer-based DID methods
● Fully autonomous decentralized key management with Key Event Receipt Infrastructure (KERI)

UNIVERSAL IDENTIFIER THEORY

Abstract—A universal theory for identifiers is presented. This theory is based on a unified model of identifiers that includes cryptographic autonomic identifiers (AIDs) and legitimized (authorized) human-meaningful identifiers (LIDs). This model provides truly decentralized trust bases, each derived from the cryptographic root-of-trust of a given AID. An AID is based on a self-certifying identifier (SCID) prefix. Self-certifying identifiers are not human meaningful but have strong cryptographic properties. The associated self-certifying trust basis gives rise to a trust domain for associated cryptographically verifiable non-repudiable statements. Every other type of identifier, including human-meaningful identifiers, may then be secured in this resultant trust domain via an end-verifiable authorization. This authorization legitimizes that human-meaningful identifier as an LID through its association with an AID. The result is a secured trust-domain-specific identifier couplet of aid|lid. AIDs are provided by the open standard key event receipt infrastructure (KERI). This unified model provides a systematic methodology for the design and implementation of secure decentralized identifier systems that underpin decentralized trust bases and their associated ecosystems of interactions.

Key Event Receipt Infrastructure (KERI): A secure identifier overlay for the internet – Sam Smith – Webinar 58 SSI-Meetup

Presentations

KERI Overview - Key Event Receipt Infrastructure - Samuel M. Smith Ph.D. - sam@keri.one - https://keri.one - version 2.54 2020/10/22

Separation of Control
Shared (permissioned) ledger = shared control over shared data.

Shared data = good, shared control = bad. Shared control between controller and validator may be problematic for governance, scalability, and performance.
KERI = separated control over shared data. Separated control between controller and validator may provide better decentralization, more flexibility, better scalability, lower cost, higher performance, and more privacy at comparable security.
The Duplicity Game: or why you can trust KERI

Inconsistency vs. Duplicity

inconsistency: lacking agreement, as two or more things in relation to each other
duplicity: acting in two different ways to different people concerning the same matter

Internal vs. External Inconsistency

An internally inconsistent log is not verifiable. Log verification from a self-certifying root-of-trust protects against internal inconsistency. A log that is externally inconsistent with a purported copy, where both are verifiable, is duplicitous. Duplicity detection protects against external inconsistency.
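The duplicity idea above lends itself to a small sketch: two copies of a hash-chained event log that are each internally consistent but diverge at some event expose the controller as duplicitous. This is an illustrative toy, not the KERI log format; all names are hypothetical.

```python
# Toy duplicity detection over hash-chained event logs.
import hashlib

def chain(events):
    """Build a hash-chained log: each entry commits to the prior digest."""
    log, prior = [], ""
    for e in events:
        entry = {"data": e, "prior": prior}
        prior = hashlib.sha3_256(repr(entry).encode()).hexdigest()
        entry["digest"] = prior
        log.append(entry)
    return log

def first_divergence(log_a, log_b):
    """Index of the first event where two verifiable logs disagree, or None."""
    for i, (a, b) in enumerate(zip(log_a, log_b)):
        if a["digest"] != b["digest"]:
            return i
    return None

log_a = chain(["icp:key1", "rot:key2", "ixn:seal-x"])
log_b = chain(["icp:key1", "rot:key2", "ixn:seal-y"])  # same history, then forks
print(first_divergence(log_a, log_b))  # 2 -> duplicity at the third event
```

Either copy alone verifies cleanly; only by comparing the two does the fork, and hence the duplicity, become visible.
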
Key Event Receipt Infrastructure (KERI) Model for a Universal DKMI - December 2019

KERI Nomenclature

self-certifying identifier: includes public key
digital signature: unique non-repudiable (cypher suite known)
digest: collision resistant hash of content
signed digest: commitment to content
controller: controlling entity of identifier
message: serialized data structure
event: actionable message
key event: key management operation
inception event: unique self-signed event that creates identifier and controlling key(s)
rotation event: self-signed uniquely ordered event from a sequence that changes the set of controlling keys
verifier: cryptographically verifies signature(s) on an event message
witness: entity that may receive, verify, and store key events for an identifier; each witness controls its own identifier used to sign key event messages; controller is a special case of witness
receipt: event message or reference with one or more witness signatures
key event log: ordered record of all self-signed key event messages
key event receipt log: ordered record of all key event receipts for a given set of witnesses
validator: determines current authoritative key set for identifier from at least one key event (receipt) log
judge: determines current authoritative key set for identifier from the key event receipt logs from a set of witnesses
pre-rotation: commitment to next rotated key set in previous rotation or inception event
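The witness, receipt, and judge roles above can be illustrated with a minimal sketch: a judge accepts an event as authoritative once enough designated witnesses have receipted it. The simple threshold rule here is an assumption for illustration, not KERI's actual agreement algorithm.

```python
# Toy judge: count receipts from designated witnesses against a threshold.
def receipts_for(event_digest, receipts):
    """Set of witnesses that have receipted the given event."""
    return {r["witness"] for r in receipts if r["event"] == event_digest}

def judge(event_digest, receipts, witnesses, threshold):
    """Accept the event once enough *designated* witnesses receipt it."""
    seen = receipts_for(event_digest, receipts) & set(witnesses)
    return len(seen) >= threshold

witnesses = ["wA", "wB", "wC"]
receipts = [
    {"event": "e1", "witness": "wA"},
    {"event": "e1", "witness": "wB"},
    {"event": "e1", "witness": "wX"},  # not a designated witness; ignored
]
print(judge("e1", receipts, witnesses, threshold=2))  # True
```
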
KERI for Muggles IIW #31 Day 1 - Session #220 October 2020

KERI is a new approach to decentralized identifiers and decentralized key management that promises significant benefits for SSI (self-sovereign identity) and ToIP (Trust over IP) infrastructure

Verifiable Trust Bases Samuel M. Smith Ph.D. sam@keri.one https://keri.one version 2.53 2020/10/20 - Renewing the Web of Trust
KERI enables cryptographic proof-of-control-authority (provenance) for each identifier. A proof takes the form of an identifier’s key event receipt log (KERL). KERLs are end verifiable: the end user alone may verify, with zero trust in intervening infrastructure. KERLs may also be ambient verifiable: anyone may verify any log, anywhere, at any time. KERI = self-cert root-of-trust + certificate transparency + KA2CE + recoverable + post-quantum.
GitHub: decentralized-identity/keri - Key Event Receipt Infrastructure - the spec and implementation of the KERI protocol
KERI Whitepaper
Implementation Notes for KERI [HackMD]

The interpretation of the data associated with the digest or hash tree root in the seal is independent of KERI. This allows KERI to be agnostic about anchored data semantics. Another way of saying this is that seals are data agnostic; they don’t care about the semantics of its associated data. This better preserves privacy because the seal itself does not leak any information about the purpose or specific content of the associated data. Furthermore, because digests are a type of content address, they are self-discoverable. This means there is no need to provide any sort of context or content specific tag or label for the digests. Applications that use KERI may provide discovery of a digest via a hash table (mapping) whose indexes (hash keys) are the digests and the values in the table are the location of the digest in a specific event. To restate, the semantics of the digested data are not needed for discovery of the digest within a key event sequence.
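The digest-indexed discovery table described above might be sketched as follows; the mapping layout (digest to identifier and event sequence number) is an assumption for illustration, not a KERI-specified structure.

```python
# Toy discovery index: digests are content addresses, so an application
# can locate the event that anchors some data without knowing anything
# about the data's semantics.
import hashlib

def seal_digest(data: bytes) -> str:
    return hashlib.sha3_256(data).hexdigest()

# index: digest -> (identifier, event sequence number)
index = {}

def anchor(identifier: str, sn: int, data: bytes):
    """Record where a seal for this data appears in a key event sequence."""
    index[seal_digest(data)] = (identifier, sn)

anchor("did:keri:abc", 3, b"some credential bytes")
# Later, holding only the data, anyone can locate the anchoring event:
print(index[seal_digest(b"some credential bytes")])  # ('did:keri:abc', 3)
```
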

decentralized-identity/keriox - Rust implementation of the KERI core library
decentralized-identity/keripy - Python implementation of the KERI core libraries
decentralized-identity/kerigo - Go implementation of KERI (Key Event Receipt Infrastructure)
decentralized-identity/kerijs - JavaScript (node.js) implementation of the KERI core library

Background Resources: SmithSamuelM/Papers - Whitepapers - Presentations

Self-Certifying Identifiers

Girault, M., “Self-certified public keys,” EUROCRYPT 1991: Advances in Cryptology, pp. 490-497, 1991
Kaminsky, M. and Banks, E., “SFS-HTTP: Securing the Web with Self-Certifying URLs,” MIT, 1999
Mazieres, D. and Kaashoek, M. F., “Escaping the Evils of Centralized Control with self-certifying pathnames,” MIT Laboratory for Computer Science, 2000
Mazieres, D., “Self-certifying File System,” MIT Ph.D. Dissertation, 2000/06/01
TCG, “Implicit Identity Based Device Attestation,” Trusted Computing Group, vol. Version 1.0, 2018/03/05

Autonomic Identifiers

Smith, S. M., “Open Reputation Framework,” vol. Version 1.2, 2015/05/13
Smith, S. M. and Khovratovich, D., “Identity System Essentials,” 2016/03/29

Smith, S. M., “Decentralized Autonomic Data (DAD) and the three R’s of Key Management,” Rebooting the Web of Trust RWOT 6, Spring 2018
Smith, S. M., “Key Event Receipt Infrastructure (KERI) Design and Build,” arXiv, 2019/07/03
Conway, S., Hughes, A., Ma, M. et al., “A DID for Everything,” Rebooting the Web of Trust RWOT 7, 2018/09/26
Stocker, C., Smith, S. and Caballero, J., “Quantum Secure DIDs,” RWOT10, 2020/07/09

Certificate Transparency

Laurie, B., “Certificate Transparency: Public, verifiable, append-only logs” (https://queue.acm.org/detail.cfm?id=2668154), ACM Queue, vol. 12, issue 9, 2014/09/08

Google, “Certificate Transparency”
Laurie, B. and Kasper, E., “Revocation Transparency”
Related: W3C DID Security Concerns, 2020/01/14

Certificate Transparency Solution

Public end-verifiable append-only event log with consistency and inclusion proofs
End-verifiable duplicity detection = ambient verifiability of duplicity
The event log is third-party infrastructure, but it is not trusted, because logs are verifiable.
Sparse Merkle trees for revocation of certificates (related: EFF SSL Observatory)
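The append-only log with inclusion proofs mentioned above can be sketched with a toy Merkle tree. This is a simplification (power-of-two leaf counts, no domain separation between leaves and interior nodes) rather than the full RFC 6962 construction.

```python
# Toy Merkle tree: root over log entries plus an inclusion proof for one
# entry, verifiable by anyone who holds only the entry and the root.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, idx):
    """Sibling hashes (with side flags) from leaf idx up to the root."""
    proof, level = [], [h(x) for x in leaves]
    while len(level) > 1:
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

leaves = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
root = merkle_root(leaves)
print(verify(b"cert-c", inclusion_proof(leaves, 2), root))  # True
```

A consistency proof (showing one root extends another) works on the same principle, which is what makes the log verifiably append-only.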

Non Conformist Innovation Summit Closing Keynote #2 - Sam Smith

The Economics of Its & Bits - Digital Identity - Freedom Privacy Control Security

Friday, 20. November 2020

Hyperledger Foundation

Weekend Update: This Week’s Round-up of Remote Blockchain Learning Resources


Welcome to the Weekend Update. Our goal with this post is to share quick updates about online education, networking and collaboration opportunities and resources for the open source enterprise blockchain community. This week’s edition is a double issue, covering events over the next two weeks as many will be offline over the Thanksgiving holiday in the U.S.

If you have suggestions for resources or events that we should spotlight in a future Weekend Update, let us know here using #HLWeekendUpdate. 

Five Years of Hyperledger

Over the next two weeks, there will be two panels from the series celebrating Hyperledger’s 5th anniversary:

November 24 at 4:30 pm SGT/8:30 am GMT: Examining Blockchain’s Transformative Role in Digitising Trade and Trade Finance
December 1 at 1:00 pm EST: Blockchain’s Role in the Face of Disruption

Blockchain Expo Europe

The Blockchain Expo series returns virtually on November 25-26 to host its fourth annual Europe event. It will bring together key industries from across the globe for two days of top-level content and discussion across five co-located events covering Blockchain, IoT, Cyber Security & Cloud, AI and Big data.

On November 25 at 10:50 am CET, Hyperledger’s Marta Piekarska-Geater will moderate the live keynote panel “Moving into the next phase – Blockchain in action.” 

For more details, go here.

Calling Developers Looking to Get Involved in the Hyperledger Labs Community

We’re seeking contributors to BAF, a Hyperledger lab that serves as an automation framework for rapidly deploying production-ready DLT platforms. It leverages Kubernetes and Red Hat Ansible. 

Learn more here.

Hyperledger Jobs Board

Ready to start a new job or career in enterprise blockchain? The Hyperledger Jobs Board is a great place to start. See openings from members all around the world.

Virtual Meetups

Saturday, November 21, at 8:30 UTC / 14:00 IST: Hyperledger India Chapter hosts “Blockchain Techfest 2020 – Part 3”
Tuesday, November 24, at 10:00 UTC / 19:00 JST: Hyperledger Kansai hosts virtual meetup (Japanese)
Wednesday, November 25, at 17:30 UTC / 18:30 CET: Hyperledger Milan hosts “Italian Chapter proposal & Trade Finance Highlights” (Italian)
Wednesday, November 25, at 18:00 UTC / 19:00 CET: Hyperledger Madrid hosts “Hyperledger Cactus: An integration framework for DLT platforms” (Spanish)
Thursday, November 26, at 16:00 UTC / 17:00 CET: Hyperledger Vienna hosts “Hyperledger Avalon: Secure off-chain program logic” (German)
Thursday, November 26, at 17:00 UTC / 18:00 CET: Hyperledger Barcelona hosts “Introduction to blockchain technology and 27 use cases” (Spanish)

See the full Virtual Meetup schedule here



decentralized-id.com

Self Sovereign Identity 101

Self Sovereign ID 101 - Curated tweets by DecentralizeID

GS1 - The Global Language of Business


GS1 introduced the barcode in 1974. We are a global, neutral, non-profit standards organisation that brings efficiency and transparency to the supply chain. Our standards are proven by industry and can help you achieve your public policy goals.

Designed by consensus, our standards are proven, open and benefit from collaboration with respected global companies as well as local SME’s. GS1 tools help organisations exchange critical data - from manufacturing all the way to the consumer - creating a common language that underpins systems and processes all over the world.

ID over HTTPS - GS1’s decentralized approach to resolving identifiers over HTTPS (IIW30)
Decentralized resolution of identifiers with HTTPS - DNS doesn’t need to be evil (not that evil anyway)
11 Transferable Principles from GS1 Digital Link - Phil Archer, Mark Harrison, Henri Barthel (firstname.secondname@gs1.org)

The GS1 Digital Link standard [DL] offers a means through which identifiers that exist offline can be resolved to multiple, related online resources. In the simplest example, a barcode is scanned to extract the identifier, which is then resolved to a Web page that describes the barcoded item. This superficial example only scratches the surface of a much more powerful underlying mechanism. It was designed to serve the needs of the GS1 community (manufacturers, supply chains and retailers), but the principles do not depend on the GS1 system and can readily be transferred to other identification systems.

History GS1 - How we got here

Since 1973, we have opened offices in over 110 countries and amassed more than 2 million members using supply chain standards that make business easier. Learn about key dates in our history.

1973: The barcode standard is agreed
1974: The first barcode is scanned
1977: The GS1 system is launched
1983: Barcodes are used on wholesale multi-packs
1989: GS1 moves beyond barcodes - With wide area networks making an impact on supply chains, we create our first international standard for electronic data interchange.
1990: Responsibilities grow - The US and international arms of GS1 come together formally, creating a single organisation with a presence in 45 countries.
1995: First healthcare standards created
1999: The GS1 DataBar arrives
2000: 90th local office opens - In just ten years, we double the number of countries in which we have a local presence.
2002: Global standards forum launched - Our Global Standards Management Process is launched. This global forum gives GS1 members one place to discuss standards.
2004: The first standard for RFID is created
2007: GS1 enters the business-to-consumer world - As ecommerce grows, we begin to create open standards that give consumers direct access to key product information.
2013: A 40-year celebration - With a presence in over 100 countries and more than a million members, we celebrate 40 years of the global language of business.
Standards

In a world of growing data, GS1 standards help you single out what really matters. They give you a common language to identify, capture and share supply chain data, ensuring important information is accessible, accurate and easy to understand.

Standards development

A neutral participant, GS1 facilitates dialogue and the development of standards-based solutions among business and technical people from nearly sixty countries. Industries represented include retail and consumer goods, fresh foods, healthcare, transport and logistics, governments and many more.

The GSMP (Global Standards Management Process) is a community-based forum for businesses facing similar problems to work together and develop standards-based solutions. Standards created by industry, for industry.

How we develop standards

Our standards development team guides the regular upgrading of our standards through a document development life cycle whose rules everyone agrees to. The Global Standards Management Process (GSMP) enables us to reach consensus around the creation and adoption of new standards smoothly and rapidly.

GSMP Manual

The GSMP 4-Step Process is designed to ensure that business needs and requirements are understood before standards and guidelines are developed, and that supporting materials are created afterward. Each step culminates in the completion of one or more outputs, created through a consensus-based process within a working group and with larger consensus confirmed through community review and eBallot.

Global Data Model (GDM) Governance Manual

The retail landscape is changing at an unprecedented rate. In this connected world, consumers increasingly rely on product information for purchasing decisions. The purpose of the Global Data Model (GDM) is to simplify and harmonise the exchange of master data. The GDM will identify and define—in a globally consistent way—the set of foundational attributes needed to list/order, move, store and sell a product, both digitally and physically. By harmonising foundational data across the industry around the globe, it will enable an improved consumer experience and reduce complexity by delivering more reliable and complete product information to consumers.

GSMP Value Statement

Are business challenges holding back your company’s full potential and growth? You are not alone. Business is easier when you speak the same language as your customers, suppliers and partners. Though we all work in our own way, problems and differences become solutions when we all work together. That’s where GS1 can help.

The GSMP is a community-based forum for businesses facing similar problems to work together and develop standards-based solutions to address them. Standards created by industry, for industry.

Work Request System

You can shape GS1 global standards by submitting a request to develop a new standard or enhance an existing one.

Introduction to GS1 Work Request
Templates for submitting Work Requests
Global Working Groups

Standards Maintenance Groups (SMGs) improve existing standards

GSMP Data Accuracy SMG

This group processes all maintenance Work Requests for the GS1 Package Measurement Rules Standard and the Package Measurement Rules Implementation Guideline. It acts as a pool of experts for all Data Accuracy SMG work requests and coordinates with associated Mission Specific groups as defined in the GSMP Manual. The work of the Data Accuracy SMG allows our community to increase savings throughout the supply chain by synchronising accurate dimension and weight data, which enables better transportation, logistics and retail shelf planning.

GSMP Electronic Data Interchange (EDI) SMG

This group maintains and improves GS1 EDI (Electronic Data Interchange) standards. Examples of standards maintained by this group include (but are not limited to):

EANCOM®
GS1 XML
GS1 UN/CEFACT XML
GSMP Global Master Data (GMD) SMG

The group maintains and improves the GS1 Master Data standards. Examples of standards maintained by this group include (but are not limited to):

Master Data Standards
GS1 Attributes, definitions, code lists, and guidance definitions
GDSN solutions and GDSN Validation Rules
GS1 Web Vocabulary
Global Data Model Standards and Attribute Definitions for Business
GSMP Global Product Classification (GPC) SMG

The GPC Standards Maintenance Group maintains and improves the GS1 Global Product Classification (GPC) standard.

The GS1 Global Product Classification (GPC) standard helps global trading partners to group products in the same way, everywhere in the world. The resulting common business language is clear and instantly understandable.

GSMP Identification SMG

The ID Standards Maintenance Group maintains and improves the GS1 Automatic Identification and Data Capture (AIDC) standards including Identification Keys, Barcodes, Electronic Product Code, and Radio-Frequency Identification (RFID) standards.

The GS1 General Specification is the core foundational GS1 standard that defines how identification keys, data attributes and barcodes must be used in business applications.
GS1 Identification Keys provide companies with efficient ways to access and share information about items in their supply chains.
Barcodes are symbols that can be scanned electronically using laser or camera-based systems.
The Electronic Product Code™ (EPC) is a syntax for unique identifiers assigned to physical objects, unit loads, locations, or other identifiable entities playing a role in business operations.
GS1’s EPC Tag Data Standard (TDS) defines the Electronic Product Code (EPC), including its correspondence to GS1 keys and other existing codes. TDS also specifies the data carried on Gen 2 RFID tags, including the EPC, User Memory data, control information, and tag manufacture information.
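As a concrete taste of GS1 identification keys, the standard check digit calculation for a GTIN-13 can be sketched: counting positions from the left, odd positions weigh 1 and even positions weigh 3, and the check digit brings the total to a multiple of ten.

```python
# GS1 check digit for a GTIN-13: computed over the first 12 digits.
def gtin13_check_digit(data12: str) -> int:
    assert len(data12) == 12 and data12.isdigit()
    # enumerate is 0-based, so even indexes are the odd (weight-1) positions.
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(data12))
    return (10 - total % 10) % 10

# Usage: the first 12 digits of GTIN 4006381333931 yield check digit 1.
print(gtin13_check_digit("400638133393"))  # 1
```

The same modulo-10 scheme covers the other GS1 keys (GTIN-8/12/14, GLN, SSCC), differing only in length and weight alignment.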
GSMP Images, Digital & Electronic Assets (IDEAs) SMG

This group will maintain and improve the GS1 Digital Assets Standards. Examples of standards maintained by this group include (but are not limited to):

GS1 Product Image Specification Standard
GS1 Pharmaceutical Image Implementation Guideline
GS1 Mobile Ready Hero Images Guideline
GSMP Traceability and Event Sharing Standards Maintenance Group SMG

The SMG maintains updates to the GS1 EPCglobal standards that support physical event capture and sharing and the Global Traceability Standard that supports tracking and tracing of goods and information about the goods. This includes all simple work requests indicated as impacting the event data sharing and traceability standards.

In addition, the group acts as a pool of experts for all Mission Specific Work Groups that are related to the SMG, as defined in section 3.4, Work Groups, of the GSMP Manual.

Mission-specific Working Groups (MSWGs) develop new standards

GSMP Digital Signature MSWG

Provide a GS1 standard solution approach to digital signatures. Without it, there would be no open, brand-owner-determined digital signatures available as an alternative to proprietary digital signature use in barcodes with GS1 standards.

GSMP EPCIS & CBV 2.0 MSWG

Since its initial ratification as an EPCglobal standard in 2007 and its subsequent integration into the GS1 “Share” portfolio, EPCIS and its companion standard the Core Business Vocabulary (CBV) have been updated twice (2014 and 2016) and published by ISO (as ISO/IEC 19987 and 19988, respectively). In the meantime, EPCIS and the CBV have gained importance as visibility enablers in various industries. Updates are needed to ensure the relevance of these standards into the coming decades.

GSMP GS1 Digital Link MSWG

This group will define a standard structure for URIs that enables reliable encoding of GS1 identifiers and sub-identifiers, regardless of the domain name, such that those keys can be extracted without looking up information on (or even being connected to) the web.
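The key extraction this group standardises can be illustrated with a simplified parser: a GS1 Digital Link URI carries the GS1 key in its path (application identifier "01" for a GTIN), so the key parses out regardless of the hosting domain. This sketch handles only the simplest URI shape; the actual standard covers many more application identifiers and qualifiers.

```python
# Simplified GS1 Digital Link parsing: pull the GTIN out of the URI path
# without resolving or even contacting the domain.
from urllib.parse import urlparse

def extract_gtin(uri: str):
    """Return the GTIN from a Digital Link URI path, or None if absent."""
    segments = [s for s in urlparse(uri).path.split("/") if s]
    # Path is alternating AI/value pairs, e.g. /01/<gtin>/21/<serial>.
    for ai, value in zip(segments[::2], segments[1::2]):
        if ai == "01":  # application identifier for GTIN
            return value
    return None

# The same key parses out regardless of the hosting domain:
print(extract_gtin("https://id.gs1.org/01/09506000134352"))         # 09506000134352
print(extract_gtin("https://example.com/01/09506000134352/21/A1"))  # 09506000134352
```
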

GSMP GLN Modernisation MSWG

This work group will update the GLN Standards to be clearer and more concise and will provide guidance to enable industry partners to create, manage, share, and use the GLN to meet their party and location use case needs in a scalable, standardised manner.

GSMP Pharmaceutical Clinical Trial Processes MSWG

Develop a GS1 standard and/or guideline that would detail the best practice approach to the implementation of GS1 standards in the pharmaceutical clinical trials supply chain. This would include identification of products, locations, patients and caregivers.

GSMP RFID Low-Level Reader Protocol (LLRP) MSWG

This group will define and develop a revised version of the Low Level Reader Protocol (LLRP) Standard to clarify its use within the RFID community, identify and add new features needed to align with the Gen2V2 Air Interface standard while ensuring new features do not cause any disruption—and that the revised version of LLRP is backwards-compatible with existing deployments. Any additional functionality not currently included in the Gen2V2 standard is considered out of scope of this project.

GSMP Scan4Transport MSWG

This Work Group will review the business requirements identified by industry and develop a GS1 standard to enable the industry to encode the minimum required transport data in a 2D barcode on a logistics label. The group will consider emerging standards such as the uniform resource identifier (URI) for addressing the business needs.

Thursday, 19. November 2020

Oasis Open

Invitation to comment on ebXML Messaging Protocol Binding for RegRep v1.0 from the ebCore TC


First opportunity for public review. Ends December 19th.

We are pleased to announce that ebXML Messaging Protocol Binding for RegRep Version 1.0 CSD01 from the OASIS ebXML Core (ebCore) TC is now available for public review and comment. This is the first public review for this work.

This specification defines a messaging protocol binding for the Registry Services of the OASIS ebXML RegRep Version 4.0 OASIS Standard. This binding is compatible with both the versions 2.0 and 3.0 of ebMS as well as the AS4 profile and complements the existing protocol bindings specified in OASIS RegRep Version 4.0.

The documents and related files are available here:

ebXML Messaging Protocol Binding for RegRep Version 1.0
Committee Specification Draft 01
23 October 2020

Editorial source (Authoritative):
https://docs.oasis-open.org/ebcore/ebrr-ebms/v1.0/csd01/ebrr-ebms-v1.0-csd01.odt
HTML:
https://docs.oasis-open.org/ebcore/ebrr-ebms/v1.0/csd01/ebrr-ebms-v1.0-csd01.html
PDF:
https://docs.oasis-open.org/ebcore/ebrr-ebms/v1.0/csd01/ebrr-ebms-v1.0-csd01.pdf

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open.org/ebcore/ebrr-ebms/v1.0/csd01/ebrr-ebms-v1.0-csd01.zip

How to Provide Feedback

OASIS and the ebCore TC value your feedback. We solicit feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.

This public review starts on 20 November 2020 at 00:00 UTC and ends 19 December 2020 at 11:59 UTC.

Comments on the work may be submitted to the TC by following the instructions located at:

http://www.oasis-open.org/committees/comments/form.php?wg_abbrev=ebcore

Feedback submitted by TC non-members for this work and for other work of this TC is publicly archived and can be viewed at:

http://lists.oasis-open.org/archives/ebcore-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with the public review of these works, we call your attention to the OASIS IPR Policy [1], applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about this specification and the ebCore TC may be found on the TC’s public home page:

https://www.oasis-open.org/committees/ebcore/

Additional information related to this public review can be found in the public review metadata document [3].

Additional references:

[1] http://www.oasis-open.org/policies-guidelines/ipr

[2] http://www.oasis-open.org/committees/ebcore/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#RF-on-Limited-Mode
RF on Limited Terms Mode

[3] Public review metadata document:
– https://docs.oasis-open.org/ebcore/ebrr-ebms/v1.0/csd01/ebrr-ebms-v1.0-csd01-public-review-metadata.html

The post Invitation to comment on ebXML Messaging Protocol Binding for RegRep v1.0 from the ebCore TC appeared first on OASIS Open.


Hyperledger Foundation

#RadicalCollaboration | Hyperledger CA2 SIG leading a working group at Open Climate Collabathon


When looking at climate change, it seems like there are thousands of factors that need to be considered. The truth is, there is really just one: human collaboration. No matter if looking at behavior or trying to find a solution, human collaboration makes all the difference.

The question is how can we collaborate? How do we come together to work on a shared cause? How do we scale? How do we sustain the effort? 

There are many possible answers to these questions. In this article, we will explain what true collaboration looks like for us. We share the same vision, value each other, and each of us is solely here because of an intrinsic motivation: to tackle climate change. “We” are the Hyperledger Climate and Accounting (CA2) Special Interest Group (SIG).

CA2 SIG 

The CA2 SIG fosters and engages a multi-stakeholder network to exchange ideas, needs, and resources in order to develop and consolidate open source distributed ledger technology (DLT) solutions for common climate accounting mechanisms and frameworks. Our recent post Tackling Climate Change with Blockchain: An Urgent Need, Ready Opportunity and Call to Action provides more in-depth information about the CA2 SIG and our different working groups (WG).

One WG to highlight in this blog post, which kicked off this summer, is the Carbon Accounting and Certification Working Group. The mission of this WG is to identify how DLTs could improve corporate or personal carbon accounting and make carbon accounting and certifications more open, transparent, and credible. Our first prototype is now ready: it calculates customers’ utility emissions according to the Greenhouse Gas Protocol Scope 2 and stores the emission records immutably on a Hyperledger Fabric blockchain.
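As a back-of-the-envelope illustration of what a Scope 2 (location-based) calculation involves (the helper function and the emission factor below are hypothetical, not taken from the prototype’s codebase), electricity usage in kWh is multiplied by a grid-specific emission factor to obtain CO2-equivalent emissions:

```python
# Sketch of a GHG Protocol Scope 2 (location-based) calculation.
# Hypothetical helper; the emission factor is illustrative, not an official figure.

def scope2_emissions_kg(usage_kwh: float, grid_factor_kg_per_kwh: float) -> float:
    """CO2e (kg) = electricity used (kWh) * grid emission factor (kg CO2e/kWh)."""
    if usage_kwh < 0 or grid_factor_kg_per_kwh < 0:
        raise ValueError("usage and emission factor must be non-negative")
    return usage_kwh * grid_factor_kg_per_kwh

# Example: 1,200 kWh on a grid with an assumed 0.45 kg CO2e/kWh factor.
emission_record = {
    "utility": "example-utility",       # placeholder identifier
    "usage_kwh": 1200.0,
    "emissions_kg": scope2_emissions_kg(1200.0, 0.45),
}
print(emission_record["emissions_kg"])  # 540.0
```

A record like this is what would then be written immutably to the ledger.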

Purpose-driven

Our vision is to build an open climate accounting system that covers all human-caused greenhouse gas emissions. A system like this would make real trust and transparency possible for the first time. We are convinced that only then do we have a true shot at tackling climate change as a human society. 

In case you are still wondering why we invest our time in the CA2 SIG: we are doing our part in the bottom-up approach of Article 6 of the Paris Agreement, working to prevent the irreversible damage our planet faces if warming reaches 1.5 degrees.

To drive our vision at a larger scale, the Carbon Accounting and Certification WG is taking part in the Open Climate Collabathon from November 9th to 23rd, 2020. 

The Open Climate Collabathon is an international open source initiative to establish a global platform for climate pledges, action tracking, carbon mitigation, and finance, that reflects the current state of planet Earth (i.e., temperature increase, resilience, etc.). The project focuses on leveraging emerging digital technologies like blockchain & distributed ledgers and other innovations to create a globally shared digital hub for coordinating a timely climate change response, internationally.

Collabathon = radical collaboration + hackathon

Our process

The Carbon Accounting and Certification WG is not just a purpose-driven team but also a multicultural community of experts, students, developers, and climate activists. Together, over an iterative process during the past year, we worked toward the idea of a multi-channel data architecture and created the Open Business Application Framework. The Open Business Application Framework is a generic model with a layered architecture that can be applied to many scenarios in the climate space, especially projects that need a distributed ledger as a common data layer and want to benefit from the W3C-defined Decentralized Identifiers (DIDs) (Hyperledger Indy) and Verifiable Credentials (Hyperledger Aries) standards.

What we do

With this idea, we joined the Open Climate Collabathon to lead a working group with two different tracks – technical and non-technical – during the two-week sprint, to get more people involved.

From the technical side, we concentrate our resources on the Common ID & Agents and the Common Data Layer, as well as the interoperability between them. The first stream integrates the Hyperledger Labs project TrustID into the existing Hyperledger Fabric deployment. The TrustID project develops a chaincode and an SDK that, together, enable identity management in Hyperledger Fabric as a decentralized alternative to CAs by using the DID standard specified by the W3C. In a second stream, we move our local Hyperledger Fabric deployment to a distributed, multi-cloud Hyperledger Fabric network consisting of three independent organizations. The private permissioned ledger would be just one part of the Common Data Layer. In the future, we strive for interoperability with additional ledgers like Hyperledger Besu or the Ethereum Mainnet.
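For readers unfamiliar with the W3C DID data model that TrustID builds on, a minimal DID document can be sketched as follows (the `did:example` method and the key material are placeholders for illustration, not TrustID’s actual on-ledger format):

```python
import json

# Illustrative only: a minimal W3C DID document binding a decentralized
# identifier to a public verification key. The "did:example" method and the
# key material below are placeholders, not TrustID's actual format.
did = "did:example:org1-user42"
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": did + "#key-1",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "controller": did,
        "publicKeyHex": "02a1b2c3...",   # placeholder key bytes
    }],
    "authentication": [did + "#key-1"],
}
print(json.dumps(did_document, indent=2))
```

A resolver (here, the TrustID chaincode) maps the identifier to such a document, so that signatures made with the referenced key can be verified without a central CA.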

Last but not least – maybe even most importantly – what we do is not only technical but covers the business issues of using DLT for climate action. During a breakout session at the collabathon focused on the business issues, we came up with follow-up tasks such as:

- Explore how a Distributed Autonomous Organization (DAO) or digital currency could be used to motivate climate action.
- Find reliable emissions factors for energy in countries beyond the USA.
- Develop a business plan for a virtual renewable energy network. This would allow members who cannot get solar panels directly, because they rent, move frequently, or have houses that are not structurally eligible, to get the benefits of renewable energy and offset their emissions as well.
- Develop a business plan for energy efficiency with blockchain.

Start collaborating

Of course, we are always looking for further collaborators. If one of these tasks speaks to you, if you are an expert in one of the areas, or if you just want to experience radical collaboration to make a difference, join us at:

- Hyperledger CA2 SIG | Carbon Accounting and Certification Working Group
- Hyperledger CA2 SIG | Mailing List
- Open Climate Collabathon | Wiki Hyperledger Working Group
- Open Climate Collabathon | Discord Channel

We are looking forward to welcoming you.

The post #RadicalCollaboration | Hyperledger CA2 SIG leading a working group at Open Climate Collabathon appeared first on Hyperledger.


decentralized-id.com

ESSIF - European Self Sovereign Identity Framework

European Self-Sovereign Identity Framework (ESSIF)

This section contains the documents generated as technical specification for the ESSIF Use Case. These documents act as the base for the architecture definition for ESSIF V1.

The content of these documents should be taken as high-level, conceptual technical documentation. These documents will be updated as the use case implementation evolves.

ESSIF: The European self-sovereign identity framework

The European self-sovereign identity framework (ESSIF) is part of the European blockchain service infrastructure (EBSI). The EBSI is a joint initiative from the European Commission and the European Blockchain Partnership (EBP) to deliver EU-wide cross-border public services using blockchain technology.

The EBSI aims to become a “gold standard” digital infrastructure to support the launch and operation of EU-wide cross-border public services. It is a multi-blockchain network with multiple use cases such as notarization of documents, ESSIF, certification of diplomas and trusted data sharing. While there is an EBSI wallet, it’s for test purposes only and not for the public. Consensus on the permissioned network will be achieved via proof of authority (POA) with one node per member state.

Within the European Blockchain Partnership, NL-GER-BE started an initiative on a European Self-Sovereign Identity framework (eSSIF):
○ How to facilitate cross-border interaction with SSI.
○ How to make/keep national SSI projects interoperable.
○ How to integrate/align existing building blocks such as eIDAS, e-delivery, and once-only with SSI.
○ How to conceptualize and build an identity layer in the new European Blockchain Services Infrastructure.
○ How to preserve European/democratic values in the implementation of self-sovereign identity.

SSI-Meetup: Understanding the European Self-Sovereign Identity Framework (ESSIF) – Daniël Du Seuil and Carlos Pastor – Webinar 32

Daniël Du Seuil, Programm manager and blockchain architect with the Flemish public service, and Carlos Pastor, from BME in Spain, give an overview of the vision, objectives, and approach of the European Self-Sovereign Identity Framework (ESSIF).

Self-Sovereign Identity Framework and Blockchain

The Techruption Blockchain Project is a public-private partnership project in the Netherlands, within which large corporates, small companies, startups and scientific institutions collectively create disruptive technological innovations around distributed ledger (blockchain) technologies (DLT). DLTs are particularly useful in business and governance situations that involve multiple parties that do not necessarily trust one another to negotiate and execute electronic business transactions. In many cases such transactions require the ability to establish and validate identities and identity attributes, or to check whether or not they have been revoked.

Seven participants of the project (Accenture, APG, Brightlands, Chamber of Commerce, De Volksbank, Rabobank, and TNO) are developing a self-sovereign identity framework (SSIF) for the creation, validation and revocation of such identities that can be used in conjunction with blockchain technologies and the (disruptive) applications they enable. The goal is to specify, validate and ultimately build a trustworthy, open digital infrastructure for self-sovereign identities that is secure, decentralized, open source, supports privacy (e.g., GDPR compliance) in multiple roles, and lacks a single point of failure or a large information honey-pot. We aim to follow well-established requirements for user-centric identity systems.

EU PROJECT ESSIF-LAB, AIMED AT FASTER AND SAFER ELECTRONIC TRANSACTIONS VIA THE INTERNET AS WELL AS IN REAL LIFE, OPEN FOR START-UPS AND SMES

The project will award 62 subgrants in two types of open call: one infrastructure-oriented open call targeting technical enhancements and extensions of the SSI framework and two business-oriented open calls targeting SSI business and social innovations and applications. The infrastructure-oriented open call (open to any type of innovator) and the first business-oriented open call (limited to start-ups and SMEs) are expected to open in March 2020. The exact opening date as well as the terms of reference will be available around February 2020 at essif-lab.eu.

ESSIF-Lab Consortium The eSSIF-Lab Project

The European Self-Sovereign Identity Lab (eSSIF-Lab) views itself as an ecosystem of parties that work together to make existing (and new) Self-Sovereign Identity (SSI) technology into a scalable and interoperable infrastructure that businesses can use very easily for conducting (business) transactions with other businesses and individuals alike.

NGI ESSIF-LAB EUROPEAN SELF-SOVEREIGN IDENTITY FRAMEWORK LAB

eSSIF-Lab is a project funded by the European Commission and aims to advance the broad uptake of SSI as a next generation, open and trusted digital identity solution for faster and safer electronic transactions via the Internet, as well as in real life.

In this project, €5.6M in EU funds will be made available to European innovators, including academic research groups, SMEs and start-ups, that want to build or improve SSI (Self-Sovereign Identity) components. The aim of the eSSIF-Lab is to create a range of interoperable, open-source SSI components that will be used within Europe and possibly worldwide.

SIMPLER AND SAFER DIGITAL LIVING WITH SELF-SOVEREIGN IDENTITY (TNO)

On 1 November 2019, the EU cascaded funding project ‘eSSIF-Lab’ (European Self-Sovereign Identity Framework Lab) started. In this project, EU funds will be made available to SMEs and start-ups that want to build or improve SSI components. The aim is to create a range of interoperable, open-source SSI components that people will actually use, not just in the Netherlands but specifically also within Europe and perhaps worldwide. The first opportunities for SMEs and start-ups to contribute are expected in March 2020. For more information about these opportunities, contact Oskar van Deventer.

[VIDEO] Gataca asks the Expert: Daniël Du Seuil

Gataca chats to Daniël Du Seuil about his efforts leading ESSIF, a European Framework for Self Sovereign Identity as a cornerstone of EBSI, the European Blockchain Services Infrastructure. EBSI is a joint initiative from the European Commission and the European Blockchain Partnership (EBP) to deliver EU-wide cross-border public services using blockchain technology.

In this interview we answer key questions such as: “When will decentralized identities be a reality in Europe?”, “How will adoption unfold?” and “How has Covid changed EU identity priorities?”

Wednesday, 18. November 2020

OpenID

OpenID Foundation Follow-up to ACDS on CDS


November 18, 2020

Mr Andrew Stevens
Chairman, Consumer Data Standards Australia

Mr Paul Franklin
Executive General Manager, Consumer Data Right, Australian Competition and Consumer Commission

Ms Kate O’Rourke
Principal Advisor, The Treasury, Australia

Mr Daniel McAuliffe
Project Lead, Consumer Data Right, The Treasury, Australia

 

RE: OpenID Foundation Follow-up to ACDS on CDS

 

Dear Mr Stevens, Mr Franklin, Ms O’Rourke, and Mr McAuliffe,

This communication follows the letter I sent on August 13, 2019, as Chair of the OpenID Foundation’s Financial-grade API (FAPI) Working Group. In my prior communication, I noted that the Foundation had performed an analysis of the Australian Consumer Data Standards (ACDS) that highlighted some deviations from the OpenID Connect and Financial-grade API (FAPI) standards. Now that the majority of those deviations have been removed, CDR Data Holders and Data Recipients can demonstrate their technical conformance with the FAPI standards. This increases the reliability of systems leveraging these standards, the repeatability in successive systems, and trust among all stakeholders in the ecosystem.

The ACDS’s adoption of FAPI enables the members of your community to leverage the OpenID Foundation certification program. It is a mature model, in use today at scale via the Open Banking Implementation Entity in the UK, through which regulators, identity providers (Data Holders) and relying parties (Data Recipients) self-certify their OpenID Connect and FAPI deployments. The tests are available to all, today, at no cost, at any time. The test suite can be run by participants themselves locally on their infrastructure or by using the OIDF’s hosted service. At a time of their choosing, participants’ test results are checked by the OIDF for a modest fee, and they are added to a publicly available list of organizations that have demonstrated conformance to the FAPI standard. This greatly assists participants of all types, large and small, in achieving ACDS compliance and interoperating globally.

The Foundation recently updated the FAPI conformance suite to ensure that servers following the CDR standards comply with the underlying FAPI specifications. A number of Australian organizations have run these tests against their CDR environments. Interoperability and security issues found in the deployments of Data Holders and Data Recipients could then be fixed well before they caused concerns.

The success of “testing the tests” allows the Foundation to launch a FAPI compliance service for CDR data holders. This new service also optionally covers the new pushed authorization specification that CDR plans to start introducing in November 2020. This is timely and important, given that this is a new protocol without the benefit of existing test suites and with few vendor implementations.

The purpose of this communication is to gauge the interest of the Australian Competition and Consumer Commission (ACCC), and relevant internal parties, in supporting the OIDF’s launch of a FAPI technical conformance service for CDR participants. Your support would help expand its value to Accredited Data Recipients and shape the future evolution of the service. We welcome ACCC’s involvement in the FAPI Working Group at any time. Any feedback on the FAPI CDR compliance service is welcome, especially prior to launch.

The Foundation’s considerable investment in its certification program ensures trusted implementations of open standards. The return is measured in positive impacts on interoperability and security. The UK’s use of the Foundation’s test suites has resulted in reduced engineering costs for all parties and facilitated market entry for new participants. This becomes particularly important as CDR is expanded to more entities. It highlights the importance that standards like FAPI evolve within their working groups.

OIDF’s certification program has proven its value to UK OpenBanking. It has revealed and assisted in resolving a significant number of interoperability and security problems in production systems in the nine largest UK banks while reducing integration costs for all. The certification and FAPI teams continue to work to ensure the tests reflect the intent of the specification authors and the needs of users.

We have run a series of joint workshops with the OpenBanking Implementation Entity in the UK, the Financial Data Exchange in the US to increase understanding of the standards and the benefits of the certification tools.  We hope we could run similar workshops with the assistance of the appropriate Australian entities.

Please consider engaging with the OpenID Foundation on the launch of the CDR testing service. Your involvement benefits the community at large by alignment with ACCC and ACDS goals. We would be happy to arrange a call to answer any questions you might have. Thank you for your consideration.

 

Regards,

Nat Sakimura
Chair, OpenID Foundation
Co-Chair FAPI Working Group

The post OpenID Foundation Follow-up to ACDS on CDS first appeared on OpenID.


Hyperledger Foundation

Perun, a blockchain-agnostic state channels framework, moves to Hyperledger Labs


We are excited to announce that Perun, a joint DLT Layer 2 scaling project between the Robert Bosch GmbH’s “Economy of Things” project and the Perun team of Technical University of Darmstadt (TUDa), joins Hyperledger as a Labs project. The project’s goal is to make blockchains ready for mass adoption and alleviate current technical challenges such as high fees, latency and low transaction throughput. 

The Perun Hyperledger Labs project implements cryptographic protocols invented and formally analyzed by cryptography researchers at TUDa and the University of Warsaw. Designed as a scaling solution, the Perun protocol can be used on top of any blockchain system to accelerate decentralized applications and lower transaction fees. The payment and state-channel technology of Perun protocol is especially useful for high-frequency and small transactions. By providing a cheap, fast, and secure transaction system, the Perun protocol is a major step forward in the mass adoption of blockchain applications. 

Overview of the Perun Protocol

The Perun protocol allows users to shift transaction and smart contract execution away from the blockchain into so-called payment and state-channels. These channels are created by locking coins on the blockchain and can be updated directly between the users and without any on-chain interaction. This makes state-channel-based transactions much faster and cheaper than on-chain transactions. The underlying blockchain guarantees that all off-chain transactions will be enforced on-chain eventually. In comparison to other channel technologies like the Lightning Network, the Perun construction offers the following unique features:

Perun’s state-channel virtualization: To connect users that do not have a joint open state-channel, existing state-channels can be composed to form so-called virtual channels. These virtual channels are created and closed off-chain over the state-channel network intermediaries. Once opened, the virtual channel is updated directly off-chain between the two connected end users.

Blockchain-agnostic: Its modular design enables the flexible integration of Perun’s state-channel technology into any Blockchain or traditional ledger system. 

Interoperability: The blockchain-agnostic design and state-channel virtualization enable transaction and smart contract execution even across different blockchains (cross-chain functionality).

High security: The Perun protocol specifications have been mathematically proven using the latest methods of security research (see perun publications).

The Perun protocol can be used for a wide range of applications in different areas such as finance/FinTech, mobility, energy, e-commerce, telecommunication and any other use case where direct microtransactions are needed.
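To make the channel mechanics above concrete, here is a deliberately simplified sketch of off-chain payment-channel updates (illustrative only: it omits signatures, disputes and on-chain deposits, and does not reflect go-perun’s actual API). Each payment produces a new state with a higher version number; only the latest mutually signed state would ultimately be settled on-chain.

```python
from dataclasses import dataclass, replace

# Simplified model of an off-chain payment channel between two parties.
# Illustrative only: real Perun states carry signatures and on-chain deposits.

@dataclass(frozen=True)
class ChannelState:
    version: int
    balance_a: int  # coins held by party A
    balance_b: int  # coins held by party B

def pay(state: ChannelState, amount: int, a_to_b: bool) -> ChannelState:
    """Produce the next off-chain state; the chain only sees the final one."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    src = state.balance_a if a_to_b else state.balance_b
    if amount > src:
        raise ValueError("insufficient channel balance")
    delta = -amount if a_to_b else amount
    return replace(state, version=state.version + 1,
                   balance_a=state.balance_a + delta,
                   balance_b=state.balance_b - delta)

# Open with 100 coins each, then two off-chain payments.
s = ChannelState(version=0, balance_a=100, balance_b=100)
s = pay(s, 30, a_to_b=True)   # A -> B: 30
s = pay(s, 10, a_to_b=False)  # B -> A: 10
print(s)  # ChannelState(version=2, balance_a=80, balance_b=120)
```

Note that the total locked in the channel never changes off-chain; that invariant is what the on-chain contract can enforce at settlement.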

The Hyperledger Labs Project

As a first step, we are developing a secure and efficient standalone payment application within the Perun Hyperledger Labs project. The labs project currently consists of the following main parts that together form the Perun Framework:

perun-eth-contracts: Provides the Ethereum smart contracts required for implementing the Perun protocol.
go-perun: An SDK that implements core components of the Perun protocol (the state-channel proposal protocol, a state machine that supports persistence, and a watcher) and an Ethereum blockchain connector. It is designed to be blockchain agnostic, and we plan to add support for other blockchain backends.
perun-node: A multiuser node that uses the go-perun SDK to run the Perun protocol and provides an interface for users to manage their keys/identities, handle off-chain networking, and open, transact on, and settle state-channels.

The Perun framework is built with flexibility in mind and can be integrated into many different environments, since most components, like networking, logging or data persistence, are interchangeable and follow state-of-the-art software architecture practices that ensure flexibility and crypto agility.

Since joining Hyperledger Labs, we’ve been very active developing the software and have released the first couple of versions. At the current stage, the following functionality is available:

Two-party direct payment channels on Ethereum
Fully generalized state channel functionality
Command line interface

Features on the roadmap — also depending on community response:

Virtual channels
SSI integration with Hyperledger Aries
Additional blockchain backends
Cross-chain channels

We are thrilled to be part of the Hyperledger community and are looking forward to your feedback and contributions. We are hoping to jointly build exciting projects on top of Perun to unleash its true potential and build towards a decentralized future. Check out the Perun Lab repositories to see the code and start contributing. Feel free to contact us through the Hyperledger channel #perun if you have any questions.

The post Perun, a blockchain-agnostic state channels framework, moves to Hyperledger Labs appeared first on Hyperledger.

Tuesday, 17. November 2020

Digital Identity NZ

Get involved with DINZ this November!


Next week we’re celebrating our first Aotearoa Digital Identity Hui Taumata.  We’re particularly excited to be bringing you a Kapa Kōrero session with Kaye-Maree Dunne, Jane-Renee Retimana, Belinda Allen and Ben Tairea.  The quartet will be exploring perspectives from Te Ao Māori, and the relevance of Te Tiriti in our collective work on digital identity.
 
The Digital Identity Transition Team will also be presenting – updating us on the next steps for the Interim Trust Framework, as well as the team’s approach to introducing the Digital Identity Bill.
 
We will hear from David Birch on what breaks digital identity ecosystems.  He will explore some stress tests for digital identity, and outline some of the fundamental use cases that must be considered for digital identity to succeed.  Bianca Lopes will then take us on a multisensory trip around the globe, looking at how different countries have addressed privacy challenges, and what lessons we might apply in New Zealand.
 
Register now to join us next week, and spread the word amongst your colleagues.  Please also contact us if you know of anyone who would like to attend, but for whom finances are a challenge.  We have some generous supporters who are happy to assist.
 
Looking ahead to a busy few weeks…
Following on from our earlier kōrero on the 2020 Trust and Identity Research and AML Reliance report, we have two member working group initiatives kicking off.  Each group will meet twice before the end of the year:

AML Reliance kōrero – Monday 30 November and Monday 14 December
Trust & Identity Education (research follow-up) – Monday 7 December and Monday 21 December

 
We will also bring the DINZ community together for our Annual Meeting on 10 December.  Please register if you are interested in attending.  Here, we’ll be announcing our four new (or re-elected) Executive Council members.  Thank you for the wonderful response to the call for nominations – we have eleven highly skilled and passionate people who have been put forward.
 
It’s a great time to get involved, and to spread the word!

Ngā Mihi,

Andrew Weaver
Executive Director

To receive our full newsletter including additional industry updates and information, subscribe now

The post Get involved with DINZ this November! appeared first on Digital Identity New Zealand.


OpenID

OpenID Foundation Executive Director Job Description


The OpenID Foundation is seeking an Executive Director with the experience, skills, strategic vision, and commitment to advancing the Foundation’s open standards initiatives. This is a unique opportunity to lead a well-respected, member-driven, vendor-neutral, international standardization organization.

The OpenID Foundation Executive Director (ED) reports directly to the Foundation’s Board of Directors and interacts extensively with the entire Board, Foundation contractors, Foundation members, and other organizations and individuals advancing open digital identity and open payments initiatives worldwide. The Executive Director has general oversight and executive management responsibilities for the business of the OpenID Foundation, with a strong emphasis on evolving the strategic mission of the organization and its international outreach and growth. The ED represents the OpenID Foundation in the international arena, is an advocate for open standards and adoption, and develops new business opportunities to extend the breadth and depth of the OpenID Foundation’s contributions to useful open standards worldwide.

Responsibilities:

Advance the adoption of open standards-based secure digital identity and payment systems worldwide.
Provide strategic vision to develop, update, and execute the OpenID Foundation’s strategic plans.
Establish, grow, and maintain global liaisons with members, other standards bodies, industry groups, governments, and other strategic standards and ecosystem contributors.
Work with the Foundation’s Board of Directors and community participants to provide a high-quality environment to facilitate the development and adoption of the Foundation’s specifications and initiatives.
Drive new business opportunities by expanding OpenID Foundation activities relative to the standardization life cycle, including certification.
Identify and advance new revenue opportunities for the organization consistent with its open standards mission.
Drive new membership growth worldwide while maintaining high satisfaction among the organization’s current members.
Ensure that the OpenID Foundation is well represented and influential at strategic industry meetings and events by demonstrably meeting the needs of the industry.
Ensure that the OpenID Foundation is sensitive to the challenges of working across many countries and cultures.
Work closely with the Foundation’s board and staff on overall organization operations, including the development and management of the yearly budget. This includes developing and reporting metrics assessing organizational and operational health.

Qualifications:

The ideal candidate is transparent with high integrity leadership and should have 5+ years Executive Director/CEO/CIO/COO/VP/executive management experience and demonstrated experience with and knowledge of open standards development and adoption. An ideal candidate will have direct experience with technology standards bodies and will be a recognized contributor/leader in open standards, with experience supporting a Board of Directors. While the OpenID Foundation is based in the United States with its primary working language being English, candidates living in other countries with knowledge of other languages and/or experience of working in other countries and multinational organizations will be strongly considered.

Additionally, strong business and communication skills, demonstrated experience in delivering keynote presentations at conferences, and the ability to serve as the OpenID Foundation spokesperson globally are required. Experience working with non-profit and volunteer organizations is a plus. Recognition as a leader in the international open standards community is strongly preferred.

A primary goal of the OpenID Foundation’s Board of Directors in this leadership transition is to encourage and identify as diverse a group of applicants for the Executive Director position as possible.

To Apply or Recommend Candidates:

Those interested in the position should send a note to edsearch@oidf.org and include a Curriculum Vitae (CV). Suggestions for people to approach about the position are also welcomed there.

The post OpenID Foundation Executive Director Job Description first appeared on OpenID.


decentralized-id.com

Zug ID

Blockchain-based digital ID now available for all residents 11/2017 (Official)

Update (as of end of July 2020)

The IT department of the city of Zug is currently developing the eZug app in collaboration with Swiss providers. With the eZug app, Zug residents can use the city of Zug’s online services digitally, especially when they are out and about on mobile devices.

The cantonal ZugLogin is integrated into the eZug app for secure identification. This means that the ZugLogin services available today can also be called up.

The initial offer from the city of Zug, which is expected to be available on eZug from autumn 2020, is based on today’s demand on the city’s website and in a first phase includes services of the residents’ registration office and the debt collection office. Paid offers can be paid for directly in the eZug app. Encrypted documents such as a debt collection extract, a confirmation of residence or a home ID are sent directly to the app and can only be viewed by the addressee. All online services offered via the eZug app remain available offline as well, i.e. on paper or at the counters of the city of Zug.

The pilot project with the digital, blockchain-based ID is now complete. However, blockchain technology still has great potential. Future applications, for example in connection with voting, should remain possible. Therefore, the blockchain verification also exists in the product backlog of the eZug solution so that such applications can also be implemented with the new solution.

Introducing civil identity on the blockchain (Consensys)

Zug leveraged uPort, a decentralized identity platform to create the world’s first live implementation of a self-sovereign government-issued identity project on the Ethereum blockchain, along with the city of Zug, the Institute for Financial Services Zug (IFZ) of the Lucerne University, along with integrator TI&M for the platform and Luxoft to implement voting. In the summer of 2017, they launched a pilot program to register resident IDs on the public Ethereum blockchain. After the pilot program, Zug officially launched the program in November 2017.

AirBie “Crypto-E-Bikes” have been used more than 1500 times so far. 7/2019

The city of Zug has issued about 300 electronic IDs so far. Anyone who has such a blockchain-based E-ID has been able to borrow e-bikes since last November. The basis for this is a platform launched by the city of Zug in collaboration with the Zurich start-up AirBie. “Crypto-Bike-Sharing” is the name of the pilot project. According to Martin Gabriel, Project Manager IT of the City of Zug, the aim is to offer a service for the E-ID.

Digital identities a reality in Greater Zurich Area 4/2019

Digital identities are already a part of everyday life in two trailblazing regions in Switzerland: citizens in the canton of Schaffhausen and the city of Zug have access to a type of digital passport via an app. The services are continuously being extended, which in turn streamlines processes.

Crypto Valley Association

Crypto Valley

The Crypto Valley Association is an independent, government-supported association established to take full advantage of Switzerland’s strengths to build the world’s leading blockchain and cryptographic technologies ecosystem.

We support and connect startups and established enterprises through policy recommendations, projects across verticals, initiating and enabling research, and organizing conferences, hackathons, and other industry events.

With active connections to similar hubs around the world, we also ensure Crypto Valley’s participation in the global efforts to foster blockchain and cryptographic technology innovation.

Schaffhauser

Schaffhauser eID+

Thanks to the Schaffhausen eID+, canton residents can set up an electronic identity on their mobile phone and have the data recorded in it confirmed by the residents’ office. The identity created in this way then enables secure and easy access to various electronic government services without additional logins and passwords. In addition, the eID+ app allows documents to be stored securely on the mobile phone so that they are always at hand.

In addition to further government services, eID+ is also to be used as an electronic identification medium in the private sector. Corresponding inquiries from various companies were brought to KSD and are currently being examined. With a view to the draft of the E-ID Act approved by the Federal Council, the official introduction of the Schaffhausen eID+ enables Procivis and the canton’s core e-government team to gain further experience and knowledge.

History

Self Sovereign Identity for Government Services in Zug, Switzerland - Case Study: October 2018, Andrew Young and Stefaan Verhulst

Wuermli, Zug’s city clerk, notes the need for “innovative access to local services” as well as “increased security by keeping private data under complete control of individuals.” ConsenSys’s Kohlhaas points to the 2017 Equifax hack as an example of the vulnerability of centralized identity databases.

Swiss Government Using uPort to Register Zug Citizens 7/2018

Digital transformation is set to upend the way government IDs work today. A live example is set by the Swiss government: citizens living in the canton of Zug have their own digital, decentralised, sovereign identity. This identity can be used to take part in all government-related activities, such as casting votes and proving identity.

Zug plans voting using blockchain Digital ID 6/2018

The city of Zug in Switzerland will make Swiss history on June 25th when it tests the first blockchain-based vote.

Zug is also known as “Crypto Valley” for its cryptocurrency acceptance and numerous start-ups.

Swiss City of Zug issues Ethereum blockchain-based eIDs 02/2018

Switzerland is having a hard time with electronic identity, the City said in its first announcement on the Zug eID. There is an indisputable need for an electronic identity system, and soon too, if we want to catch the momentum of digitalisation. Ever more digital applications in the private and public sectors require an unambiguous, forgery-proof identification that is not based solely on a password. Currently the focus is only on centralised solutions, including those pursued by the federal government with external partners, like the Suisse ID [1, 2], for example. But so far these solutions have not become accepted. This is mainly because they are relatively complicated to use — and even today they are technically obsolete.

So Zug has decided to go its own way and started a pilot project. “We want a single electronic identity, like a digital passport, for all kinds of applications,” said mayor Dolfi Müller. “And we want this digital ID to reside not centrally on the City’s premises, but on the blockchain. Our role is only to verify and confirm the identity of a person.”

Zug ID: Exploring the First Publicly Verified Blockchain Identity 12/2017

How it works: Mechanics of the Zug ID

Alice is a resident of Zug and hears of the new Zug digital identity system. She downloads the uPort ID app from the Apple App Store and creates an account. At that moment, the uPort app creates a unique private key on her phone and deploys two smart contracts on the Ethereum network that act as the user’s identity hub (currently identities are being deployed on the public testnet Rinkeby; Main-net support will follow soon).

More specifically, Alice’s private key manages a controller contract, which allows her to recover access to her identity should she lose access to her phone. The controller contract in turn controls her identity (proxy) contract, or permanent identifier. With this setup, Alice is now in complete control of her identity and all its associated data and can’t lose access due to loss of her private key. You can read more about uPort’s core architecture and identity contracts here.
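The controller/proxy relationship described above can be sketched as a toy model. This is illustrative Python, not uPort's actual Solidity contracts; all class and key names are hypothetical. The point it demonstrates is that recovery replaces the device key while the proxy (the permanent identifier) never changes.

```python
# Toy model of the uPort controller/proxy pattern. Names and logic are
# illustrative, not the real contract code.

class ProxyContract:
    """Permanent identifier; only its controller may act through it."""
    def __init__(self, controller):
        self.controller = controller

    def forward(self, caller, action):
        if caller is not self.controller:
            raise PermissionError("only the controller may act via the proxy")
        return f"executed: {action}"

class ControllerContract:
    """Holds the current device key and a recovery key."""
    def __init__(self, device_key, recovery_key):
        self.device_key = device_key
        self.recovery_key = recovery_key

    def call_proxy(self, proxy, key, action):
        if key != self.device_key:
            raise PermissionError("unknown device key")
        return proxy.forward(self, action)

    def recover(self, recovery_key, new_device_key):
        # Losing the phone loses the device key, not the identity: the
        # recovery key installs a new device key, and the proxy address
        # (the identifier) never changes.
        if recovery_key != self.recovery_key:
            raise PermissionError("bad recovery key")
        self.device_key = new_device_key

controller = ControllerContract(device_key="phone-key-1", recovery_key="seed")
identity = ProxyContract(controller)

assert controller.call_proxy(identity, "phone-key-1", "attest") == "executed: attest"
controller.recover("seed", "phone-key-2")  # phone lost, new device enrolled
assert controller.call_proxy(identity, "phone-key-2", "attest") == "executed: attest"
```

In this sketch, the identity object (and thus Alice's identifier) survives the key rotation, which is the property the article attributes to the controller contract.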

First official registration of a Zug citizen on Ethereum 11/2017

On November 15, 2017, the first digital Zug ID will officially be registered on the Ethereum blockchain in front of a live press audience.

Since June, we have been improving the Uport platform and working with our Swiss partners at ti&m to prepare the city for an official launch. This launch is today. This major milestone demonstrates the power of Ethereum such that a city government can issue to its citizens a digital verification of their citizenship.

What is a uPort identity? 2/2017

In more detail, a uPort identity is a complete digital representation of a person (or app, organization, device, or bot) that is able to make statements about who they are when interacting with smart contracts and other uPort identities, either on-chain or off-chain. This ability to make statements about themselves, without relying on centralized identity providers, is what makes uPort a platform for self-sovereign identity.

Blockchain identity for all residents 7th July 2017 (Official ANN)

From September 2017, the city of Zug will be the first municipality in the world to offer all residents the opportunity to acquire a digital identity. This is based on an app that secures personal information using blockchain technology and links it to a crypto address. The residents register their identity independently via the app. The identity is then certified at the residents’ registration office in Zug. The city of Zug intends to gain further experience with blockchain applications by September and is expected to carry out an “e-voting” consultative vote among residents in spring 2018.


Hyperledger Foundation

Hyperledger began in 2015 when many different companies interested in blockchain technology realized they could achieve more by working together than by working separately. These firms decided to pool their resources and create open-source blockchain technology that anyone could use. These far-sighted companies are helping blockchain to become a more popular and industry-standard technology.
An Introduction to Hyperledger

Hyperledger began in 2015 when many different companies interested in blockchain technology realized they could achieve more by working together than by working separately.

These firms decided to pool their resources and create open-source blockchain technology that anyone could use. These far-sighted companies are helping blockchain to become a more popular and industry-standard technology.

Hyperledger was put under the guardianship of the Linux Foundation (for a host of reasons that we’ll talk about later) and has grown rapidly in the last few years.

As of publication date, Hyperledger has more than 230 organizations as members—from Airbus to VMware—as well as 10 projects with 3.6 million lines of code, 10 active working groups, and close to 28,000 participants who have come to 110+ meetups around the world. Through 2017, the project was mentioned in the press an average of 1,500 times a month.

Those of us involved with Hyperledger think the future of blockchain will involve modular, open-source platforms that are easy to use. With Hyperledger, we aim to create an environment that enables us to make this vision a reality.

Tensions Emerge Between Hyperledger Blockchain Group’s Biggest Supporters

From the sidelines of the enterprise blockchain world, it looks like a tug-of-war has broken out, with IBM and its favored Hyperledger implementation, known as Fabric, on the one side, and the Intel-backed Sawtooth on the other. The latter team also has a budding champion in the form of newly appointed TSC chair, and Sawtooth lead maintainer, Dan Middleton of Intel.

Why bankers should care that two rival blockchains linked up

Brian Behlendorf, the executive director of Hyperledger, explains why Hyperledger and the Enterprise Ethereum Alliance have joined each other’s organizations.

Software Giants Microsoft And Salesforce Flock To Hyperledger Blockchain Consortium > “Yes, THAT SalesForce” @by_caballero

Hyperledger Blockchain Consortium is growing to new heights with two software giants, Microsoft and Salesforce, coming aboard. It is a big move for blockchain, but also for the enterprises buying in.

Inaugural Hyperledger Global Forum Showcases Strong Community Momentum

For Hyperledger, a project of The Linux Foundation that started less than three years ago, the event is a time to reflect on milestones. Hyperledger has surpassed 260 members, with more than a dozen new members including Citi and Alibaba Cloud announced today. In the last year, Hyperledger launched its 11th project, Ursa, and released development updates to the Hyperledger Burrow, Hyperledger Fabric and Sawtooth frameworks. Additionally, Hyperledger and the Enterprise Ethereum Alliance jointly announced membership in each other’s communities as a way to further bolster enterprise blockchain adoption.

Hyperledger Blockchain Performance Metrics White Paper – Hyperledger

Hyperledger Identity

@Hyperledger - Identity Working Group - get involved - Identity Standards

Looking back on 2019: Identity, blockchain and Verified.Me

Hyperledger Identity Vendors

Indy, Aries & Ursa

An overview of Self-Sovereign Identity: the use case at the core of Hyperledger Indy

Credential issuers, holders, and verifiers are peers on an SSI network. Any person or organization can play any or all of the roles, creating a decentralized system for the exchange of trustworthy, digital credentials.

Credential issuers determine what credentials to issue, what the credential means, and how they’ll validate the information they put in the credential. Credential holders determine what credentials they need and which they’ll employ in workflows to prove things about themselves. Credential verifiers determine what credentials to accept, and which issuers to trust.
Strengthening Hyperledger Indy and Self-Sovereign Identity

Forrester’s recent “Top Recommendations for Your Security Program, 2019,” testifies to this, describing SSI as a “win” for customers and businesses, and urged chief information security officers (CISO) to “Empower your customers to control their own identities via self-sovereign identity.”

They can do this because exchanging verifiable digital credentials is at the heart of SSI. This ends the need for massive data silos, honeypots, and unsecured data repositories housed at countless corporations and organizations. Instead, anyone can hold secure and verifiable information about themselves, and through Zero-Knowledge Proofs (ZKP), minimize the information they decide to share with others. (ZKPs are an important type of advanced privacy-preserving cryptography now available in the open source community within the recently announced Hyperledger Aries project).

EdX Courses

Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa - EdX - Linux Foundation

Learn how Hyperledger Aries, Indy and Ursa add a necessary layer of trust to the Internet, creating and using independent digital identities rooted on blockchains or other distributed ledgers.

Becoming a Hyperledger Aries Developer

Develop blockchain-based production-ready identity applications with Hyperledger Aries.

Professional Certificate in Developing Blockchain-Based Identity Applications
Understand the problems with existing Internet identity/trust mechanisms today and learn how a distributed ledger, such as Hyperledger Indy, can be used for identity.
Discuss the purpose, scope, and relationship between Aries, Indy, and Ursa and understand how these open source blockchain technologies provide reliable self-sovereign identity solutions that add a necessary layer of trust to the Internet.
Understand the Aries architecture and its components, as well as the DIDComm protocol for peer-to-peer messages.
Deploy instances of Aries agents and establish a connection between two or more Aries agents.
Create from scratch or extend Aries agents to add business logic and understand the possibilities available through the implementation of Aries agents.
Fabric

What is an Identity? (1.1.0)

For an identity to be verifiable, it must come from a trusted authority. A membership service provider (MSP) is that trusted authority in Fabric. More specifically, an MSP is a component that defines the rules that govern the valid identities for this organization. The default MSP implementation in Fabric uses X.509 certificates as identities, adopting a traditional Public Key Infrastructure (PKI) hierarchical model (more on PKI later).
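The MSP idea above can be sketched in a few lines: an identity is valid for an organization if its certificate chains up to that organization's trusted root CA. This is a hedged simplification; real Fabric MSPs validate actual X.509 certificate chains, and plain dicts stand in for certificates here.

```python
# Simplified sketch of MSP-style validation: walk the issuer chain and
# check it terminates at the organization's trusted root. Illustrative
# only; Fabric's real MSP validates X.509 certificates.

ORG_MSP = {"Org1MSP": {"root_ca": "Org1-Root-CA"}}

def issued_by_chain(cert, certs_by_name):
    """Walk issuer links from a certificate up to its self-signed root."""
    chain = [cert["name"]]
    while cert["issuer"] != cert["name"]:
        cert = certs_by_name[cert["issuer"]]
        chain.append(cert["name"])
    return chain

def msp_validates(msp_id, cert, certs_by_name):
    """An identity is valid if its chain ends at the org's root CA."""
    chain = issued_by_chain(cert, certs_by_name)
    return chain[-1] == ORG_MSP[msp_id]["root_ca"]

certs = {
    "Org1-Root-CA": {"name": "Org1-Root-CA", "issuer": "Org1-Root-CA"},
    "Org1-Intermediate": {"name": "Org1-Intermediate", "issuer": "Org1-Root-CA"},
    "alice": {"name": "alice", "issuer": "Org1-Intermediate"},
    "mallory": {"name": "mallory", "issuer": "mallory"},  # self-signed, untrusted
}

assert msp_validates("Org1MSP", certs["alice"], certs)
assert not msp_validates("Org1MSP", certs["mallory"], certs)
```

The takeaway matches the quoted passage: the MSP does not store identities, it defines the rules (here, one trusted root) under which an identity counts as valid for the organization.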

TrustID: A New Approach to Fabric User Identity Management

All the authentication and management of identities in the system is performed on-chain through an “Identity Chaincode.” This chaincode has the following parts:

Chaincode proxy: This receives and routes every TrustID authenticated transaction. It’s responsible for authenticating users, interacting with the ID registries, and routing user calls to external chaincodes. It also implements the access policies desired by the different organizations.
User Registry: This stores every user DID. It implements basic setter and getter operations and enforces the desired access rights per organization.
Service Registry: This plays the registry role for services.
External service chaincodes: This ensures service chaincodes with which users want to interact can be deployed in any channel. Once requests are successfully authenticated, the proxy chaincode is responsible for forwarding transactions to them.
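The proxy flow described above, authenticate first, then route, can be sketched as follows. This is an illustrative model, not TrustID's actual chaincode; the registries, the "signature" check, and all names are hypothetical stand-ins for DID-based authentication.

```python
# Hedged sketch of a TrustID-style proxy: authenticate a transaction
# against the user registry, then route it to a registered external
# service chaincode. A toy equality check stands in for real DID
# signature verification.

user_registry = {}      # DID -> registered key (toy registry)
service_registry = {}   # service name -> chaincode callable

def register_user(did, public_key):
    user_registry[did] = public_key

def register_service(name, chaincode):
    service_registry[name] = chaincode

def proxy(tx):
    # 1. Authenticate: the toy "signature" must match the registered key.
    if user_registry.get(tx["did"]) != tx["signature"]:
        raise PermissionError("authentication failed")
    # 2. Route the call to the requested external service chaincode.
    chaincode = service_registry[tx["service"]]
    return chaincode(tx["did"], tx["args"])

register_user("did:example:alice", "alice-pubkey")
register_service("asset-transfer", lambda did, args: f"{did} transferred {args}")

result = proxy({"did": "did:example:alice", "signature": "alice-pubkey",
                "service": "asset-transfer", "args": "10 tokens"})
assert result == "did:example:alice transferred 10 tokens"
```

Because every call passes through one proxy function, per-organization access policies can be enforced in a single place, which is the design point the article attributes to the chaincode proxy.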
Sawtooth

Identity Transaction Processor Configuration File

The Identity transaction processor configuration file specifies the validator endpoint connection to use.

If the config directory contains a file named identity.toml, the configuration settings are applied when the transaction processor starts. Specifying a command-line option will override the setting in the configuration file.
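The precedence rule stated above, file settings apply at start-up but a command-line option wins, can be sketched like this. The `connect` key is only an illustrative example of a setting; consult the Sawtooth documentation for the actual option names in identity.toml.

```python
# Sketch of config precedence: values as if parsed from identity.toml
# are the baseline, and any command-line option that was actually
# supplied overrides the file value.

def effective_settings(file_settings, cli_options):
    merged = dict(file_settings)      # start from identity.toml values
    for key, value in cli_options.items():
        if value is not None:         # CLI flag was given: it wins
            merged[key] = value
    return merged

file_settings = {"connect": "tcp://localhost:4004"}  # as if read from identity.toml
cli_options = {"connect": "tcp://validator:4004"}    # as if passed on the command line

assert effective_settings(file_settings, {})["connect"] == "tcp://localhost:4004"
assert effective_settings(file_settings, cli_options)["connect"] == "tcp://validator:4004"
```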

Identity Transaction Processor CLI (identity-tp)

The Identity transaction processor CLI, sawtooth-identity, handles on-chain permissioning for transactor and validator keys to streamline managing identities for lists of public keys.

identity-tp

The identity-tp command starts the Identity transaction processor, which handles on-chain permissioning for transactor and validator keys to streamline managing identities for lists of public keys.

Monday, 16. November 2020

OpenID

OpenID Foundation Co-Sponsoring Santander Digital Trust Hackathon


The OpenID Foundation is excited to be a co-sponsor for the Santander Digital Trust Hackathon. This global virtual hackathon will capitalize on Santander’s Digital Trust API and the need to innovate how data is shared, verified and trusted. The goal is to reimagine how we verify identity and validate data, while ensuring users’ privacy.

OpenID Foundation member, yes.com, is contributing the following to the Hackathon:

provide our APIs for identity assurance (OpenID Connect 4 Identity Assurance) in a sandbox so developers can take a look at the emerging standard
support developers in implementing their prototypes
present about OIDC4IDA (including yes.com experience)
promote the Hackathon/OIDF/OIDC via yes.com social network channels

The Hackathon is open to everyone: developers, innovators, marketers, business experts, etc. You may work individually or in teams of 2-5, and companies and nonprofit organizations are also welcome to participate.

There is a prize pool of $15,000 to be given to the top ten winners.

November 10: Registration opens and the Hackathon begins
December 10: Hackathon closes
December 11-17: Jury reviews the projects
December 17: Winners announced!

To register and find more information on the Hackathon: https://santander.devpost.com

Learn more about the Foundation’s Financial-grade API (FAPI) Working Group.

Learn more about the Foundation’s eKYC & IDA Working Group.

New to open banking and FAPI? Start here to learn how FAPI is driving global open banking initiatives including valuable developer resources.

The post OpenID Foundation Co-Sponsoring Santander Digital Trust Hackathon first appeared on OpenID.


Federal Blockchain News

US Customs & Border Patrol Innovates Blockchain for Import Security

Vincent Annunziato, Director of Transformation & Innovation Division of the CBP Office of Trade, talks about shaping blockchain technologies to make imports safer and more secure. CBP's current projects include tracking steel, oil, and natural gas imports from Canada.

decentralized-id.com

Dragonchain & Factor


Guest post by Holly Jolly Jeffrey (Linkedin) and Dragonchain Founder\CEO Joe Roets.

Technology Overview

As enterprises continuously digitize their businesses, there is an increasing need for security and trust built around digital identities. Current identification systems are siloed, and enterprises face major challenges managing Personal Identifiable Information (PII) in an increasingly regulated environment (e.g. GDPR). On top of that, privacy and control of their data is becoming a greater concern for users as well. Factor uses blockchain technology that allows users to authenticate to a website or other service with a secure cryptographic signing mechanism. No username or password is necessary or required for the sign-in process.
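The passwordless sign-in idea above can be sketched as a challenge-response flow: the service issues a fresh challenge and checks a signature over it instead of a password. This is a hedged toy, a SHA-256 over a shared secret stands in for a real asymmetric signature purely to keep the sketch dependency-free; all names are illustrative, not Factor's API.

```python
import hashlib
import secrets

# Toy challenge-response login: no password is ever sent or stored by
# the service; only a signature over a one-time challenge is checked.
# sha256(secret + challenge) is a stand-in for real public-key signing.

def sign(secret_key, challenge):
    return hashlib.sha256(secret_key + challenge).hexdigest()

class Service:
    def __init__(self):
        self.enrolled = {}   # user id -> verification key (toy: shared secret)

    def enroll(self, user_id, key):
        self.enrolled[user_id] = key

    def new_challenge(self):
        return secrets.token_bytes(16)   # fresh nonce defeats replay

    def verify(self, user_id, challenge, signature):
        return sign(self.enrolled[user_id], challenge) == signature

service = Service()
service.enroll("alice", b"alice-device-key")

challenge = service.new_challenge()
assert service.verify("alice", challenge, sign(b"alice-device-key", challenge))
assert not service.verify("alice", challenge, sign(b"wrong-key", challenge))
```

Because each challenge is random and single-use, a captured response is useless for a later login, which is the property that lets such schemes drop passwords entirely.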

Dragonchain itself provides for “protection of business data” which can also be used to protect PII/privacy data for users. We do this by stripping the business payload of every transaction before the business’ block goes to the network for verification. In Dragon Net (see Dragonchain architecture), the proof is decentralized, not the data. This means that all transaction data may be proven in the future to explicit parties without unnecessary exposure.
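The "decentralize the proof, not the data" pattern above can be sketched with plain hashing: the business payload is stripped before the block leaves the business node, only its hash circulates for verification, and the payload can later be proven to a chosen party by re-hashing it. The record structure here is illustrative, not Dragonchain's actual wire format.

```python
import hashlib
import json

# Sketch: strip the business payload from a transaction, publish only a
# deterministic hash of it, and later prove the payload against that hash.

def strip_payload(transaction):
    payload = transaction.pop("payload")
    transaction["payload_hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return transaction, payload   # payload stays on-premises

def prove_payload(public_record, claimed_payload):
    digest = hashlib.sha256(
        json.dumps(claimed_payload, sort_keys=True).encode()).hexdigest()
    return digest == public_record["payload_hash"]

tx = {"txn_id": "42", "payload": {"customer": "Alice", "ssn": "***"}}
public_record, private_payload = strip_payload(tx)

assert "payload" not in public_record            # PII never leaves the business
assert prove_payload(public_record, private_payload)
assert not prove_payload(public_record, {"customer": "Mallory"})
```

The hash lets anyone verify integrity without ever seeing the data, so the payload can be disclosed selectively, "to explicit parties without unnecessary exposure," as the passage puts it.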

Structural approach to privacy and data protection

This separation is a structural approach to the privacy and data protection issue which does not rely on guesses of safety against unknown attack vectors in cryptographic functions. This model allows the business to use best practices devops and existing staff to protect their data.

On Dragonchain, the proof is decentralized, not the data. All transactions are verified in blocks from 5 different contexts in Context Based Verification on Dragon Net. This allows the business to decentralize or share the data as necessary for their particular business model, yet have a proofing system scaled for Enterprise.

On Factor/MyFii DID, the identity factors are decentralized to the individual owners. Individual users hold their own identity information in granular form as “factors”. Factors can be any data attributed to an individual and can be based on verifications by one or more external 3rd parties, or they can be self-declared/self-certified. These factors can then be provided to other parties needing identity verification. These factors can also be held within smart contracts with controlled and metered access.

Compatibility with W3C standards?

Factor was developed prior to W3C standardization, but we expect to provide mappings and support for W3C standards via our MyFii integration. Dragonchain and Factor are designed to be interoperable with traditional and blockchain systems. Dragonchain holds multiple interoperability and scalability patents that are employed in the Factor/DID system. The systems are literally interoperable with any other system via standard RESTful service API integration.

Features & Specifications

Factor is a blockchain based identity and access solution that allows users to take control of their identity and eliminates the risk for enterprises of storing PII (e.g. risk of data breaches, and GDPR compliance). Delivered as a service built on top of the Dragonchain platform, Factor Identity authenticates users to applications using public key encryption. Third party Factor Identity Providers confirm claims or factors about a user without sharing the actual data, de-risking the storage of PII and reducing exposure to data breaches.
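Confirming a factor without sharing the rest of the data can be sketched with per-factor salted commitments: each factor is committed to separately, so one can be opened to a verifier while the others stay hidden. This is an illustrative simplification; real deployments would use verifiable credentials or zero-knowledge proofs rather than bare hash commitments, and none of these names come from Factor's actual API.

```python
import hashlib
import secrets

# Toy selective disclosure: commit to each identity factor separately
# (salted hash), publish only the digests, and open one factor at a time.

def commit(value):
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest, salt          # publish digest; keep (salt, value) private

def verify(digest, salt, value):
    return hashlib.sha256((salt + value).encode()).hexdigest() == digest

factors = {"over_21": "true", "email_verified": "true", "address": "Hidden St 1"}
published, openings = {}, {}
for name, value in factors.items():
    published[name], openings[name] = commit(name + "=" + value)

# Disclose only the age factor; the address commitment is never opened.
assert verify(published["over_21"], openings["over_21"], "over_21=true")
assert not verify(published["over_21"], openings["over_21"], "over_21=false")
```

Because each factor has its own salt and digest, revealing the age opening leaks nothing about the address, which is the granular "factors" behavior the passage describes.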

This identity solution does not store personally identifiable information; instead, hashes of the actions are recorded into our decentralized system. This makes it virtually impossible to compromise all of the business data, because it is not stored in one location. It also provides better access control and management of the data. Individuals or companies can control who has the ability to view and edit data, reducing unauthorized access.

Factor provides a better solution than what federated identity provides: businesses can manage and update a person’s consumable identity components or factors into one place. The verification process of Factor relies on third party Factor Identity Providers who are able to verify sensitive data for businesses. Factor has logic built into its system that can update their data once a Factor Identity Provider verifies the data. This can create ease and reduce costs for businesses and end users, for example in handling KYC between multiple banks for the same customer.

Protection against quantum attacks?

When it comes to legitimate concerns in regards to future quantum attacks, “The answer is in the architecture of the software itself. Considerations of possible Quantum compute attack are very focused on the cryptographic side of the equation. Yet, if you segregate the actual data from exposure on a public chain, the data will be structurally secure. I would never, ever, put sensitive information like customer’s personal information or military and Space Force information anywhere near an actual public blockchain, encrypted or not, as this data will be subject to all manner of attacks. The data should be segmented and separated away from exposure on a public blockchain. The proof could be affected by quantum, but if any of the many other quantum-resistant algorithms work, you can also apply those at any level.

It’s like some of the things that happened during the Cold War. Both sides would attack each other’s data or systems and the other side would not know for years that this had happened. When it comes to blockchain based proof, just knowing that something was stolen or tampered with would be remarkably valuable. At the very least you would be able to measure the level of security. As an example, would my enemy spend three billion dollars to attack this particular part that I’m about to 3D print on a ship?

You can also look back and see direct evidence for simultaneous forks on Bitcoin, Ethereum, Ethereum Classic (and every other Interchain) overlapping the block time for the transaction in question,” explains Joe Roets, Dragonchain Founder & Architect.

Factor benefits:

Determine where data will be stored, including the geographic region, and allow for tighter control over jurisdictions where nodes are operated.
Keep data private and ensure it never leaves the private blockchain unless explicitly granted, or by including references to the underlying data (stored elsewhere) in the form of a hash value in the payload.
Comply with requests to be forgotten. This is accomplished by removing the underlying data (“off-chain”) reference via a hash value.
Filter access control to applications, providing selective exposure of specific factors.
Eliminate liabilities, such as the retention of PII or other sensitive data.

With Factor an individual user can:

Prove you are over 21 without exposing your full Drivers License or Passport.
Prove you reside in a particular region without exposing your address.
Prove you have a valid email, without exposing the address.
Replace identifiers such as a phone number or email address, which were never designed to be safely used for logins.

Case Studies

Case Study 1 - SafePass and Covid-19

In the first quarter of 2020, Dragonchain partnered with Medek Health Systems. The city of Apopka was looking for an application to help employees get back to work as safely as possible. Medek Health Systems leveraged Dragonchain to ensure that individuals’ privacy and personal medical data are respected.

In the application, a medical assessment provides directions on next steps to take: should an individual go into quarantine, or consider getting tested or seeking medical help for Covid-19 symptoms? After completing a questionnaire, users are assigned a medical risk score. Users of the application can even use telemedicine for Covid-19 and have access to medical professionals directly through the app.

Every step along the way is recorded and validated on Dragonchain’s hybrid blockchain protocol. A combination of ledgering and smart contracts is used to trace the workflow and provide proof that the prescribed workflow was followed. The system takes into consideration new methods and tests becoming available over time, making it a flexible solution. Individuals are able to prove to their employer or other authority that the prescribed process has been followed. Within a matter of weeks the application was launched in the Google Play Store and the Apple App Store. Medek Health Systems continues to integrate more blockchain-based solutions into the application, including our decentralized identity solution Factor.

Learn more about SafePass

Case Study 2 - Decentralized Identity in Social Media

Den (social media platform) will integrate with Factor, Dragonchain’s privacy focused decentralized identity technology. Doing this allows for the verification of the identity of Denizens with controlled exposure to Den and the world. The identity system will provide protection against impersonation with cryptographically signed messages and actions.

Lairs will be able to limit use based upon identity Factors, and will allow a community to decide whether their Lair will be made up of users that are identified, anonymous, or a mix of both. This also allows the creation of a model for an anonymous account economy.

Read the Den whitepaper

Resources

Learn more about Factor
Self Sovereign Identity and Decentralized Identity
Quick Take on Decentralized Identity
Dragonchain Spins-off Decentralized Identity Solutions and Services
Interchain™ Between Blockchains and Traditional Systems
Remain Sovereign with Blockchain Interoperability
Quantum computing and data protection with blockchain
Dragonchain architecture and whitepaper

BCGov - Government of British Columbia Canada

"Digital technologies are transforming the way British Columbians live, work and play. To meet their changing expectations, government’s transformation is also underway. It’s about using modern tools and technologies to deliver the services people want and maximizing the power of data to improve the services people need."
gov.bc.ca - The official website of the Government of British Columbia.

The Government of B.C. is made up of ministries, agencies and Crown corporations. Browse ministry, agency and Crown corporation websites for the latest news, service plans, publications and more.

news.gov.bc.ca
publications.gov.bc.ca

British Columbia - Digital Government - B.C.’s digital future (source)


Developers

BCDevExchange - (source)

We’re a supportive community in British Columbia, Canada enabling the government to deliver better digital services.

DevHub (source) - One place that brings together resources to help build digital products for the BC Government

bcgov/mobile-signing-service

This is the Agent component to the BCDevX Mobile App Signing Service. The Signing Service is designed to be a self-serve system that enables development teams to sign and deploy build artifacts in a secure environment.

Shared Resources - source

The Common Components team is working on ways to reduce the time and cost of delivering digital products and services. We plan to achieve this by making it easy to find, onboard to and use components like code and microservices that solve common problems across government. Our work is a priority action in BC’s Digital Framework.

Digital Toolkit (source)

The BCGov Digital Toolkit is the reference repository for digital best practices across our teams

Verifiable Organizations Network

VonX - Global digital trust for organizations

The Verifiable Organizations Network (VON) is a community effort to establish a better way to find, issue, store and share trustworthy data about organizations—locally and around the globe. Community partners are using jointly developed software components to enable the digitization of government-issued public credentials—registrations, permits, and licenses. Currently, VON components are based on Hyperledger Indy distributed ledger technology.

Verifiable Organizations Network - A Production Government Deployment of Hyperledger Indy Presentation by John Jordan and Stephen Curran at HGF 2018 (Transcript)

VON’s founding members are governments who are, by law, trusted issuers of data about organizations. The Province of British Columbia, Province of Ontario and the Government of Canada have come together to create the initial services needed to establish VON.

VON’s founders have delivered new Indy-based open source components which form VON’s backbone. TheOrgBook is a publicly accessible repository of verifiable claims about organizations. VON-X enables services to verify and issue credentials.

bcgov/TheOrgBook - A public repository of verifiable claims about organizations. A key component of the Verifiable Organization Network.

OrgBook BC is a deployment of an underlying software component called a Verifiable Credential Registry (VCR). A VCR is a more general component that can drive OrgBooks (repositories of information about registered organizations) and other repositories of verifiable information across a variety of use cases, including education, government services, public works projects and many more. The first generation of OrgBook BC was built on top of the software whose source code is in this repository. The current iteration of OrgBook BC is powered by the Hyperledger Aries Verifiable Credential Registry (Aries VCR). TheOrgBook was implemented using custom protocols defined locally by the Verifiable Organizations Network (VON) team here in BC; Aries VCR is based on Hyperledger Aries protocols defined by a global community at the Linux Foundation.
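OrgBook BC’s registry data is exposed through a public search API. The sketch below only builds a query URL for the name-autocomplete search; the base URL and endpoint path are assumptions for illustration — consult the bcgov/orgbook-api documentation for the current API surface.

```python
from urllib.parse import urlencode

# Assumed base URL and endpoint path; verify against bcgov/orgbook-api docs.
ORGBOOK_BASE = "https://orgbook.gov.bc.ca/api/v2"

def autocomplete_url(query: str, inactive: bool = False) -> str:
    """Build an autocomplete search URL for registered organization names."""
    params = urlencode({"q": query, "inactive": str(inactive).lower()})
    return f"{ORGBOOK_BASE}/search/autocomplete?{params}"

print(autocomplete_url("Acme"))
# https://orgbook.gov.bc.ca/api/v2/search/autocomplete?q=Acme&inactive=false
```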

bcgov/von - Verifiable Organizations Network

bcgov/von-ledger-explorer - The VON Ledger Explorer

BCOVRIN to Google Sheets Connector

bcgov/issuer-kit-demo-verifier-chat - Issuer Kit Demo Verifier Chat

bcgov/BCSC-SS

Resources to make it easier for public organizations to offer the widely used BC Services Card a secure and verified government issued identity card as a login option for online services.

bcgov/orgbook-api - Autocomplete component + various developer tools and documentation for using the OrgBook API

Demonstrators

Indy Catalyst - AgentBook - Agent to Agent Messaging Technology

bcgov/aries-vcr

Hyperledger Indy Catalyst is a set of application level software components designed to accelerate the adoption of trustworthy entity-to-entity communications based on Decentralized Identity / Self-Sovereign Identity technology and architecture. Indy Catalyst builds upon globally available open standards and open source software. At present, Indy Catalyst builds upon Hyperledger Indy and common enterprise open source software, frameworks and patterns such as PostgreSQL, Python, Angular and RESTful APIs. Efforts will be taken to design the software to facilitate the incorporation of evolving open standards and technology. The impetus for Indy Catalyst came from the Verifiable Organizations Network (VON) project. More information about VON can be found at vonx.io

bcgov/orgbook-configurations - Build and Deployment Configurations for the Indy-Catalyst version of the OrgBook

This repository contains the openshift-developer-tools compatible OpenShift configurations for the indy-catalyst instance of the OrgBook.

BC Gov – Indy Catalyst Agent + Agent Framework: What are they?

Indy Catalyst Agent (Nick/Andrew – BC Gov) (bcgov/indy-catalyst/)

What is an “Agent”?

It acts as a fiduciary on behalf of a single identity owner (or, for agents of things like IoT devices, pets, and similar things, a single controller).
It holds cryptographic keys that uniquely embody its delegated authorization.
It interacts using interoperable agent-to-agent protocols.
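The first two properties can be sketched in a few lines. This toy agent holds a secret key on behalf of one owner and uses it to sign and verify messages; real Aries agents use public-key signatures (e.g. Ed25519), but an HMAC stands in here so the sketch needs only the standard library:

```python
import hashlib
import hmac
import secrets

class MinimalAgent:
    """Toy agent: holds a key that embodies its delegated authorization.
    HMAC is an illustrative stand-in for the public-key signatures
    (e.g. Ed25519) that real agents use."""

    def __init__(self, owner: str):
        self.owner = owner
        self._key = secrets.token_bytes(32)  # held for a single owner

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), signature)

agent = MinimalAgent("alice")
sig = agent.sign(b"hello")
print(agent.verify(b"hello", sig))     # True
print(agent.verify(b"tampered", sig))  # False
```

The third property — interoperable agent-to-agent protocols — is what the Hyperledger Aries work standardizes on top of such key material.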
bcgov/von-bc-registries-agent-configurations - Build and Deployment Configurations for the Indy-Catalyst version of the BC Registries Agent

This repository contains the openshift-developer-tools compatible OpenShift configurations for the indy-catalyst compatible instance of the von-bc-registries-agent.

GreenLight - Decentralized Workflow Technology

bcgov/von-network - A portable development level Indy Node network.

A portable development level Indy Node network, including a Ledger Browser. The Ledger Browser (for example the BC Gov’s Ledger for the GreenLight Demo Application) allows a user to see the status of the nodes of a network and browse/search/filter the Ledger Transactions.

bcgov/greenlight - A demonstration of the verifiable organization network showing a new restaurant gathering the permits necessary to open.

GreenLight (an instance of decentralized workflow) demonstrates a basic application for deploying the VON-X library, in order to enable issuer registration, claims verification, and credential submission to TheOrgBook. It includes Docker tooling for deployment of the application behind a Caddy reverse proxy.

Proof of Concept

Registration

SafeEntryBC

Allowing Businesses and Citizens to create “Safe Entry Points” that require the presentation and proof of a set of digitally verifiable credentials in order to authorize access.

This is an instance of bcgov/dts-esr-demo

BC Essential Services Gateway

Allowing Businesses to register as an Essential Service and in turn be able to issue Essential Service credentials to their employees.

This is an instance of bcgov/dts-safe-entry-demo

Safe Entry Points

instances of bcgov/vc-visual-verifier

Traveller Safe Entry

Demonstrates how digitally verifiable credentials can be used to authorize access to or through a point of entry.

Essential Service Safe Entry

Demonstrates how digitally verifiable credentials can be used to provide essential services workers with authorized access to a location or facility.

IIW Book

So far the killer demo of #IIW 28 is “IIWBook” from @jljordan42 & BC Gov & Streetcred teams. First you get a #verifiablecredential of your verified email address and IIW attendance, then you can create peer-to-peer DID-to-DID connections with any other IIWBook user. Mind blown! pic.twitter.com/2exkD3xdXP

— Drummond Reed (@drummondreed) May 1, 2019
IIW Book

Highlights of Internet Identity Workshop (IIW) #28

The BC Gov team led a killer agent interop demo

Go to https://iiw.vonx.io/ to start the demo
You can use any of 3 different agent/wallet apps
First you get a verifiable credential of your email address
Then you get a VC that you were an IIW attendee
Then you were added to the IIWBook directory
Then you could create your own private peer-to-peer connection with any other IIWBook member
IIW Book! Come get a REAL IIW attendance verification credential and prove it to your IIW friends using your phone!

Key understanding: a demonstration of an identity authentication, issuing, relying party connection and document/chat exchange using authenticated identities.

BC Gov, MATTR, STREETCRED – IIW Book Redux

There was continued discussion about a Connectathon last February where 6 different systems were verified as connecting. Evernym was not among the systems that passed, but this was attributed less to a lack of ability than to the direction of their activities – they had more important features they were working on.

Link to presentation provided by John Jordan on Tuesday (Part 1) & Wednesday (Redux): http://IIW.vonx.io

The VON IIW 28 Demonstration: IIWBook

In late March 2019, the VON team created AgentBook to demonstrate the interoperability of independently created agents. These agents were able to successfully establish DID-based communication channels. Now, with IIWBook, we’ve added a (literal) new layer by extending the core of AgentBook with the ability to exchange verifiable credentials. Even more exciting, through collaboration with Streetcred.id and Spark New Zealand, we have a mobile agent (or two!).

Other Repositories

bcgov/BC-Policy-Framework-For-GitHub - Policy information for BC Government employees using GitHub

This repo’s content is focused on providing information, examples and guidelines to facilitate the creation and governance of BCGov Open Source projects. We don’t want to duplicate the good work GitHub has done with their own online guides. Our focus is giving BC Government people and projects the information they need to get started on GitHub while remaining compliant with BC standards and policies.

bcgov/design-system - British Columbia Government Design System for Digital Services

It’s a collection of digital resources and tools – including a library of reusable UI/interface components and design patterns. The system makes it easier and faster to build custom B.C. government websites and applications.

api-guidelines - BC Government API Guidelines

Purpose : The purpose of these guidelines is to promote consistency and provide guidance around the use of Application Programming Interfaces (APIs) across the BC government, and to enable exchange and integration of data between systems, agencies, businesses and citizens.

digital-principles - A set of principles to guide the Province of BC’s continued Digital Government evolution

The Digital Principles are meant to guide the work of individual public servants and vendor partners as the Province of British Columbia continues to evolve into a Digital Government. This includes everything from the day-to-day work of individuals to the design, development and delivery of digital products and services.

bcgov/digital-policy

BC’s Digital Framework (currently an alpha version) drives a coordinated, intentional approach to support all public service employees as we transition into a digital government that meets the internet-era needs and expectations of British Columbians. A key action outlined in the Digital Framework is to “create a new digital and data policy framework to guide the work of public servants.” A core product team within the Office of the Chief Information Officer (OCIO) is taking an agile, principle-based approach to co-developing a new Digital Policy Framework. This policy framework covers all aspects of BC government information management (IM) and information technology (IT) management.

bcgov/digital_marketplace

The Digital Marketplace is a web application that administers British Columbia’s Code With Us and Sprint With Us procurement programs. It enables (1) public sector employees to create and publish procurement opportunities, and (2) vendors to submit proposals to these opportunities.

Technology Code of Practice

The BC Technology Code of Practice, being developed pursuant to BC’s Digital Framework and the priority actions therein, is a DRAFT set of criteria to help the BC Government design, build, and buy better technology. The Code is envisioned to be used as a cross-government agreed standard in government’s new technology funding review process.

Trust over IP

bcgov/a2a-trust-over-ip-configurations - OpenShift build and deployment configurations for the Access to Audio Trust Over IP components.

This repository contains the openshift-developer-tools compatible OpenShift configurations to customize the builds and deployments of vc-authn-oidc for use with the A2A Trust Over IP project.

Indy

bcgov/indy-sdk-postgres-storage - PostgreSQL plug-in for use with the indy-sdk

bcgov/indy-email-verification

```json
{
  "id": "verified-email",
  "subject_identifier": "email",
  "configuration": {
    "name": "verified-email",
    "version": "1.0",
    "requested_attributes": [
      {
        "name": "email",
        "label": "Verified Email",
        "restrictions": [
          {
            "schema_name": "verified-email",
            "schema_version": "1.2.2",
            "issuer_did": "MTYqmTBoLT7KLP5RNfgK3b"
          },
          {
            "schema_name": "verified-email",
            "schema_version": "1.2.3",
            "issuer_did": "MTYqmTBoLT7KLP5RNfgK3b"
          }
        ]
      }
    ],
    "requested_predicates": []
  }
}
```

Agents

Agent Framework (Tomislav Markovski – Streetcred ID)

Library for building SSI agent services w/ .NET Core - streetcred-id/agent-framework (now hyperledger/aries-framework-dotnet)

bcgov/aries-cloudagent-container - Runnable Docker image for the Hyperledger Aries Cloudagent
bcgov/von-personal-agent - A personal agent for the von network.
bcgov/VON-ESB-DRS-Agent - Piloting the Dispute Resolution Suite with connections to the OrgBook
bcgov/von-agent-template - Template for a von-x based agent
bcgov/von-bc-registries-agent

Components for plugging the BC Registries into the verifiable organizations network.

bcgov/von_agent (forked from PSPC-SPAC-buyandsell/von_agent) - VON agents using indy-sdk
bcgov/von-bc-registries-audit - Audit scripts for aries vcr/orgbook and bc registries issuer
bcgov/aries-cloudcontroller-node

Verifiable Credentials

Essential Services Delivery coordination using Digitally Verifiable Credentials

This repository contains the build, deployment, and application configurations needed to pull a number of separate applications into a single environment and deploy them as a group of interrelated services.

bcgov/vc-visual-verifier - Verifiable Credential Visual Verifier

Verifiable Credential Authentication with OpenID Connect (VC-AuthN OIDC)

This integration defines how holders of verifiable credentials can leverage them to authenticate with other parties. Note that how the holder came into possession of the supported verifiable credentials is out of scope for this integration.

Like any new technology, adoption requires learning new concepts. This particular integration aims to provide an easy integration path that allows parties to start leveraging the power of verifiable credentials for user authentication in a non-disruptive fashion. This is achieved by extending the widely adopted OpenID Connect family of specifications.
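Because the extension point is plain OpenID Connect, a relying party just issues a normal OIDC authorization request. A minimal sketch of building one is below; the `pres_req_conf_id` parameter name (selecting which proof-request configuration the provider should use), the host names, and the endpoint path are assumptions drawn from typical vc-authn-oidc deployments — check the project documentation before relying on them.

```python
from urllib.parse import urlencode

def build_auth_request(base: str, client_id: str, redirect_uri: str,
                       pres_req_conf_id: str) -> str:
    """Construct a standard OIDC authorization request that asks the
    provider to verify a credential presentation instead of a password.
    Parameter names beyond the OIDC core ones are illustrative."""
    params = urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid vc_authn",
        "pres_req_conf_id": pres_req_conf_id,  # assumed extension parameter
    })
    return f"{base}/authorize?{params}"

# Hypothetical deployment values for illustration only.
url = build_auth_request("https://vc-authn.example.ca", "demo-app",
                         "https://app.example.ca/callback", "verified-email")
print(url)
```

The user is then redirected to this URL, scans a proof request with their wallet, and returns through the ordinary OIDC code flow.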

Credential Issuer Services

instances of bcgov/issuer-kit

Unverified Person Issuer

An issuer used to obtain a digital identification credential that is used to authorize access to other services within the PoC.

Health Gateway

An issuer used to obtain a personal health number credential that is used to authorize access to other services within the PoC.

Essential Services - Organization

An issuer used to obtain a business level essential services credential that is used to authorize access to other services within the PoC.

Essential Services - Access

An issuer used to obtain an essential services access credential that is used to authorize access to other services within the PoC.

Med Lab

An issuer used to obtain a “lab result” credential that is used to authorize access to other services within the PoC.

bcgov/devops-credential-issuer

DID

decentralized-identity/did-common-typescript - A common bundle of shared code and modules for working with DIDs, DID Documents, and other DID-related activities

DID-Auth

bcgov/did-auth-extension - DID Auth browser extension.

bcgov/http-did-auth-proxy - DID Auth HTTP proxy.

This is a DID Auth HTTP proxy that uses HTTP Signatures based on Decentralized Identifiers for authenticated HTTP requests.
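The signature scheme can be sketched as follows. This illustrative example builds the signing string and `Signature` header in the style of the HTTP Signatures draft, using `hmac-sha256` (one of the draft’s registered algorithms) so it runs with only the standard library; a DID-based deployment like the bcgov proxy would instead use a public-key algorithm with keys resolved from the DID.

```python
import base64
import hashlib
import hmac

def signing_string(method: str, path: str, headers: dict,
                   header_names: list) -> str:
    """Build the signing string: '(request-target)' plus the listed
    headers, one 'name: value' pair per line, in order."""
    lines = []
    for name in header_names:
        if name == "(request-target)":
            lines.append(f"(request-target): {method.lower()} {path}")
        else:
            lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines)

def signature_header(key_id: str, key: bytes, method: str, path: str,
                     headers: dict, header_names: list) -> str:
    """Sign the signing string and assemble the Signature header value."""
    msg = signing_string(method, path, headers, header_names).encode()
    sig = base64.b64encode(hmac.new(key, msg, hashlib.sha256).digest()).decode()
    return (f'keyId="{key_id}",algorithm="hmac-sha256",'
            f'headers="{" ".join(header_names)}",signature="{sig}"')

# Hypothetical request values for illustration.
hdr = signature_header(
    "did:example:123#key-1", b"secret", "GET", "/resource",
    {"host": "example.ca", "date": "Mon, 23 Nov 2020 18:44:00 GMT"},
    ["(request-target)", "host", "date"])
print(hdr.startswith('keyId="did:example:123#key-1"'))  # True
```

The verifying party rebuilds the same signing string from the received request, resolves the key identified by `keyId`, and checks the signature.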

bcgov/did-auth-relying-party - DID Auth relying party.

This is a DID Auth relying party that can verify incoming DID Auth messages expressed as Verifiable Credentials based on Decentralized Identifiers.

PSPC-SPAC-buyandsell

Public Services and Procurement Canada: buyandsell.gc.ca — Services publics et Approvisionnement Canada : Achatsetventes.gc.ca

PSPC-SPAC-buyandsell/von_tails - Tails file server for von_anchor issuer and holder-prover anchors
PSPC-SPAC-buyandsell/von_base
PSPC-SPAC-buyandsell/von_anchor - VON anchor classes for interaction with sovrin/indy ledger via indy-sdk
PSPC-SPAC-buyandsell/von-image - Standard docker images for building VON components
PSPC-SPAC-buyandsell/von-x - VON-X is a Python library enabling rapid deployment of Hyperledger Indy credential issuer, holder, and verifier services, particularly for integration with TheOrgBook.
PSPC-SPAC-buyandsell/didauth - DID authentication by way of HTTP Signatures for Hyperledger Indy agents
PSPC-SPAC-buyandsell/von_agent - VON agents using indy-sdk
PSPC-SPAC-buyandsell/von_connector - Service wrapper API per agent, via django application
PSPC-SPAC-buyandsell/ReferenceVonActuator - Java implementation of actuator of reference von_connector implementation
PSPC-SPAC-buyandsell/von_conx - Reference implementation (sample) for a VON Connector using tools of VON-X
PSPC-SPAC-buyandsell/demo-agent - Agent and API wrapper code base

Friday, 13. November 2020

Hyperledger Foundation

Weekend Update: This Week’s Round-up of Remote Blockchain Learning Resources


Welcome to the Weekend Update. Our goal with this weekly post is to share quick updates about online education, networking and collaboration opportunities and resources for the open source enterprise blockchain community. 

If you have suggestions for resources or events that we should spotlight in a future Weekend Update, let us know here using #HLWeekendUpdate. 

Five Years of Hyperledger

Hyperledger first launched in December 2015 with founding members IBM, J.P. Morgan, Accenture and DTCC, among several others. Fast forward to today and, 16 projects later, Hyperledger is one of the fastest growing projects at the Linux Foundation.

We have a special lineup of activities, events, and news to help us recognize five years of Hyperledger that kicks off on November 16 and will last through mid December. Five weeks, five discussions, gathering the brightest in our community to address and debate some of the biggest opportunities, challenges, and issues facing enterprise blockchain. 

See all the details here

Webinar: Securing your open source blockchain project with IBM Blockchain Platform

IBM’s Lukas Staniszewski will present an overview of the latest trends in the market, an update on the growth and impact of industry-leading blockchain solutions, and observations about emerging use cases and new network topologies. He’ll share lessons learned from working with hundreds of clients and outline the key challenges in operating networks/solutions at scale. During his talk, he will provide an overview of development and network management tooling, resiliency and security solutions, and how you can distribute and decentralize your network.

Tune in on Wednesday, November 18, at 10:00 am EDT. For more information and to register, go here.

Hyperledger Telecom Special Interest Group Guest Speaker: Nima Afraz, CONNECT Center, Trinity College Dublin

Dial into the Telecom SIG meeting to learn more about Decentralized 5G Marketplaces based on Hyperledger Fabric. The fifth generation of cellular networks or 5G promises revolutionary improvements compared to the previous generation that reaches beyond merely multiplying the bandwidth and reducing latency. 5G is expected to enable a wide range of new internet-based services such as vehicular communications and Smart City infrastructure that, in addition to connectivity, require on-demand fine-grained infrastructure and resource access to operate. This talk will cover how Hyperledger Fabric could facilitate the implementation of a distributed marketplace for 5G network infrastructure sharing that does not rely on a central source of trust.

For more information on the meeting, which is Thursday, November 19, 2020, at 9AM PST, go here.

Recent Research – Deep Dive into Blockchain Accountability

For an academic analysis of Hyperledger Fabric’s accountability, read the paper Accountability in a Permissioned Blockchain: Formal Analysis of Hyperledger Fabric published by researchers from the Institute of Information Security at University of Stuttgart in Germany.

Virtual Meetups

Saturday, November 14 at 10:00 UTC / 13:00 AST: Hyperledger Riyadh hosts “Introduction to Blockchain Network Administration”
Saturday, November 21, at 8:30 UTC / 14:00 IST: Hyperledger India Chapter hosts “Blockchain Techfest 2020 – Part 3”

See the full Virtual Meetup schedule here

The post Weekend Update: This Week’s Round-up of Remote Blockchain Learning Resources appeared first on Hyperledger.


decentralized-id.com

Twitter Collection 2020-11-14

Decentralized Identity - Week 2 November - Curated tweets by DecentralizeID

Thursday, 12. November 2020

Berkman Klein Center

Extended Reality

The Potential of Augmented, Virtual, and Mixed Reality Experiences for Remote Teaching and Learning

By Melyssa Eigen, Sandra Cortesi, & Alexa Hasse
In collaboration with Pedro Maddens Toscano, Maya Malik, Leah Plunkett, & Urs Gasser (PI). Illustrations and animations by Euan Brown, Rebecca Smith, Melanie Tan, & Claudia Thomas

Due to the global pandemic, the past several months have been a whirlwind of changing norms and lifestyle modifications, as well as an increased digitalization of our day-to-day lives (highlighting the importance of high-quality access to digital technologies and areas where such access might be improved). With life unable to “return to normal” anytime soon, we are bound to see more change, especially in the educational context. For some of us, remote teaching is the only option to continue school. There are some guidance documents, but each state, school district, and individual educator has been asked to assume significant leadership in shaping their own remote learning space and defining the technologies available towards creating a successful online experience.

For educators and students with high-quality access to digital technologies and the skills and agency to use these tools and platforms (the pandemic has shed new light on the persistence of digital divides and participation gaps in the learning space and the urgency to close these gaps), there are a number of opportunities available to enhance their remote teaching and learning experiences to help them in their transition. The tools range from learning platforms such as KhanAcademy; social media platforms like Facebook; video sites such as YouTube; storing and sharing platforms like Google Classroom or Dropbox; and more playful platforms like Minecraft and Scratch, or our own Digital Citizenship+ (Plus) Resource Platform (DCPR), that hosts an evolving collection of free and Creative Commons-licensed learning experiences, visualizations, and other educational resources (many of these resources are available in over 35 languages).

Illustration by Claudia Thomas
One emerging set of tools in this space falls under the category of extended reality technology (XR), which is becoming increasingly prevalent in the lives of young people. For example, as part of an ongoing project by Youth and Media at the Berkman Klein Center for Internet & Society at Harvard University, we are exploring the potential roles XR may play in the youth context while also recognizing the various access and usage barriers. A forthcoming publication — sign up for our team’s mailing list here to receive an early copy — addresses XR as it relates to domains such as education, physical and mental health, and social issues, as well as the potential roadblocks that stand in the way of these technologies becoming widely available. Additionally, the publication introduces the reader to over 30 XR experiences that have the potential to enhance teaching and learning across these domains in the youth context.

The XR environment includes multiple technologies, like augmented reality (AR), virtual reality (VR), and mixed reality (MR). It allows users to integrate a variety of creative technologies in order to expand both their real-world and virtual surroundings. With AR, users can overlay virtual effects in their real-world environment, whereas with VR, users can act in a completely virtual environment. Further, MR gives users the ability to use real-world objects when interacting in virtual environments. All in all, there are a lot of possibilities for users under the “umbrella” of XR technologies. In some instances, users may need VR headsets, but in other cases, individuals can access the technologies with just a mobile device or computer and access to the Internet.

There are a variety of XR tools that can be explored in remote environments that may help improve the teaching and learning experience, enhance students’ motivation and engagement, or simply make a learning activity more novel or fun. It’s important to note, however, that while there is an increasing evidence base around using AR and VR to teach different subjects, a great deal of the literature focuses on using these technologies to teach a certain course or topic area. As the following report indicates, however, far fewer studies examine the efficacy of XR technologies compared to the use of non-XR tools in the educational space. As such, we envision XR technologies as one possible tool educators might consider integrating in the classroom — depending upon the affordability and accessibility of these technologies, which we elaborate upon further in this piece.

Preliminary evidence indicates three key areas with respect to potential ways XR technologies may be integrated in the classroom. First, XR technologies can be used to foster skill-based learning, such as learning a language. Research demonstrates that immersion may be helpful in learning a second language, and that VR, for example, can effectively simulate an immersive language experience. Second, XR can expand the range of activities youth can learn from in a hands-on manner. Such technologies can, for example, allow young people to travel inside the human body and explore cells for a biology course, or, for a physics class, examine how charged particles interact with each other. Thus, XR has the potential to expand the range of topics young people can learn from by turning abstract concepts into concrete experiences. And third, XR can allow for new functionalities, or affordances, that let young people learn in ways that have not yet been possible with other technological tools. For students taking vocational classes interested in architecture or construction, for instance, XR can simulate architectural designs that are more realistic than computer-aided designs — allowing individuals to walk through a space and explore various objects in it.

Illustration by Claudia Thomas

In addition to these three areas, there are several other areas connected to the educational space that appear promising in the context of XR. For example, XR technologies may be able to help fill in some of the gaps created by transitioning from in-person to remote learning. Against the backdrop of the global pandemic and disrupted education in regions around the world, field trips, which are often one of the most exciting aspects of formal education, are unlikely to occur in the coming months. While not equivalent to an in-person experience, XR technologies, such as VR, can create the opportunity to explore places that have not been possible for many to travel to. For example, students could explore the depths of the ocean or even take a rocket to outer space. Instead of in-person field trips to museums, students can use XR technologies to explore exhibits — some museums have their own XR-based apps, such as the Louvre’s “Mona Lisa: Beyond the Glass” VR app or the “Anne Frank House VR” experience. Additionally, some XR technologies allow young people to create their own AR and VR exploration experiences.

Illustration by Claudia Thomas

XR technologies have also been designed with the aim to cultivate skills that are relevant in (and out) of the classroom, both for educators and students. For example, there are XR experiences that allow educators to practice their pedagogical skills and those that seek to strengthen students’ presentation skills. In addition to skill-building, XR technologies can simulate different scenarios to prepare educators and students alike for emergency situations. For example, in a remote health class, one could use an XR app to teach students CPR. Other use cases could involve different forms of disaster prevention as well as vocational training.

Another promising application of XR technology centers around addressing social issues in the educational setting, such as how to prevent and respond to bullying. In the context of remote learning, while students may not be in physical contact with one another, this does not eliminate the potential for bullying. Given the isolated nature of remote learning, anti-bullying programming remains vital to the educational setting. In response, there are a number of different XR technologies that help students learn about one another’s differences in an effort to prevent and mitigate bullying, which educators may implement in a remote curriculum. These platforms use a variety of techniques to promote empathy, ranging from stand-up comedy, to placing the user in the role of the bully or bullied, to focusing on training teachers to identify bullying.

Illustration by Rebecca Smith

With respect to the adoption of XR technologies in the educational space, research in the context of higher education suggests two key factors are important. First, the XR technologies need to fit into current curriculum standards and educators’ instructional methods. This finding aligns with research around the diffusion of innovations more broadly. For any innovation to be implemented in a given setting — from the workplace to the classroom — it must fit within systems currently in place. Second, as with the diffusion of other innovations, the adoption of XR is influenced by the cost of these technologies — not only in terms of the monetary value, but, for instance, the cognitive load of learning how to use XR.

While this brief overview focuses on the opportunities that XR technologies may present in the educational space, it’s also important to highlight that these technologies come with challenges related to accessibility, privacy, and safety. In order to take advantage of the affordances of XR, as with other digitally networked technologies, Internet connectivity is needed. This may prevent a significant number of people from leveraging the technology — even more so during the pandemic with many schools and libraries closed. Further, we often associate XR with expensive VR headsets, which can be financially burdensome and inaccessible to many. These barriers are real. At the same time, less costly VR headset options, such as Google Cardboard, as well as free and low-cost VR and AR apps have become available, which students can use on their phones without further equipment.

Illustration by Rebecca Smith

Additionally, as with many other networked technologies, such as artificial intelligence, there are concerns around the extent to which the design of XR incorporates the voices and perspectives of underrepresented groups — whether in terms of age, ethnicity, race, gender and sexual identity, religion, national origin, location, skill and educational level, and/or socioeconomic status. In terms of additional issues around inclusion, XR is more reliant (compared to other digital technologies) on individuals’ ability to control their physical motions (e.g., quickly moving one’s hand). How can these technologies be made more accessible for those with limited mobility?

Second, there are numerous challenges around privacy, data and data protection, and commercial risk, some of which are resonant with general concerns that come with digitally networked technologies, while others are specific to XR. For example — like many digitally connected systems — XR has the ability to collect, aggregate, analyze and monetize users’ data — data which is “durable, searchable, and virtually undeletable.” Given that children and youth are often pioneers in exploring emerging technologies, they may experience XR-related privacy and data protection risks before adults enact strategies to mitigate such concerns. Moreover, the unprecedented amount of data that networked technologies such as XR are able to collect can be sold to third parties, including companies that can target marketing to youth. These targeted messages may put youth at risk for commercial exploitation and exposure to content that may impact their perspectives and behaviors in ways not optimal for healthy development.

Illustration by Rebecca Smith

Moreover, research shows that young people (like many adults) generally lack an adequate understanding of the processes of data collection and resultant commercial profiling and marketing to which they can be subject. The risks associated with commercial profiling are only worsened in the context of COVID, with young people spending increased amounts of time in the digital environment and commerce currently being driven online. In addition to collecting information such as young people’s product preferences and location, XR technologies also have the potential to gather large amounts of data about nonverbal behavior, such as gestures, facial expressions, and eye gaze — even if one is only using the system for short periods of time. Indeed, spending a mere 20 minutes within a VR simulation may capture slightly under two million recordings of one’s body language. In the context of education, researchers have used nonverbal data gathered through VR to predict test scores, and estimate the number of mistakes made while learning a specific task. A young person’s future — from the university they are admitted to, to their employment opportunities and quality of working life — could be (positively or negatively) impacted by the nonverbal data that XR technologies collect. The data captured through XR will also strengthen companies’ efforts to target advertising to youth — a study shows that data about one’s head movements in VR is associated with how positively someone rates the content in a simulation. Legal protections against these and related privacy risks vary across jurisdictions and application areas. 
When it comes to student privacy issues in the context of formal education in the U.S., this publication (pages 4–5) by authors from the Youth and Media team and the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society at Harvard offers both a general point of entry into the privacy analysis, as well as a roadmap for exploring additional information and engaging with decision-makers who might be involved in determining whether, how, with which students, and for what purposes XR technologies can be employed, and what privacy safeguards need to be put in place.

And third, there are safety risks connected to XR technologies. As with many other networked technologies, such as computers and mobile phones, some users may experience eye strain when using XR. Unlike many other Internet-enabled technologies, however, XR may induce side effects such as nausea, dizziness, seizures, and discomfort wearing the needed equipment. Recently, there have been efforts to make VR headsets, for instance, more comfortable to wear for those of different racial and ethnic backgrounds. Additionally, XR offers another online space in which young people can be cyberbullied. Research shows that harassment in XR, such as VR, may take different forms, such as environmental (e.g., throwing virtual objects) or physical (e.g., unwanted physical contact). Given the immersive nature of XR, will these types of harassment be experienced as more intense compared to bullying via text message or chat? And how can educators, parents, and other stakeholders most effectively reduce and prevent these forms of online harassment?

With time, we hope that we as researchers will be even more equipped to understand the impact the global pandemic may have on learners from all backgrounds. By all indications, however, there are several reasons to be concerned. On a global level, students — particularly those from underrepresented communities — are facing serious disruptions in accessing learning opportunities, particularly those that are online. Projects such as UNICEF’s Voices of Youth and UNESCO’s COVID-19 Education Response, which are examining some of the challenges youth are facing in the context of learning, have found that, over the coming months, it will be increasingly crucial to ensure that young people can connect learning with their needs and interests. Engaging in interest-driven learning online with a community of supportive peers and mentors — or, connected learning — can help inspire young people to be more creative and foster their ability to learn online. As with any other educational technology, the value of XR depends on many contextual factors and will require careful assessment before it is introduced in any learning environment — whether online or in-person. However, the technology comes with the promise to provide educators with additional options in terms of remote student engagement, and ways to support and complement efforts to equip youth with the skills to participate and thrive in an increasingly digitally connected economy and society.

Illustration by Claudia Thomas

Extended Reality was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 10. November 2020

ID2020

ID2020 Welcomes BLOK Solutions to the Alliance

ID2020 is proud to welcome UK-based BLOK Solutions as its newest Alliance partner.

Based in London, BLOK Solutions was founded to help enterprises understand and address their business challenges with flexible, innovative solutions that leverage blockchain and other digital ID technologies.

Their solutions range from digital wallets for the banking sector, which enables financial institutions to comply with KYC/AML regulations, to an application developed to help monitor progress toward the UN Sustainable Development Goals by tracking the build-up of plastics in the oceans and changes in land use and the landscape.

Their most recent solution, BLOK Pass, offers individuals a self-sovereign record of their COVID-19 test results and other risk factors. The technology was developed under the company’s biotech arm, BLOK BioScience.

Initially envisioned as a means to help businesses and governments manage the safe and incremental return to public life in the midst of the pandemic, BLOK expects the application to be applied more broadly in the future to support immunization certificates and other medical test results.

BLOK Solutions’ chief technology officer, Areiel Wolanow, has been appointed to serve on the ID2020 Technical Advisory Committee. An expert in distributed ledger technology, financial inclusion, and alternative banking and insurance models, Wolanow has addressed the G20 and other major world forums and advises central banks, financial regulators, and the UK Parliament on issues related to blockchain and financial inclusion.

“We are delighted to welcome BLOK Solutions to the ID2020 Alliance and applaud their commitment to privacy-protecting, user-controlled digital identity,” said ID2020 Executive Director, Dakota Gruener. “We are grateful to Areiel for agreeing to serve on the Technical Advisory Committee and look forward to leveraging his extensive experience in this new role.”

“Times of crisis, like this pandemic, are pivotal for technological advancement. But they also present a concrete risk: that this very technology may be distorted and abused under the guise of combatting a common enemy,” said Wolanow. “We’re seeing this dynamic at play around digital ID, which we consider a basic human right. It’s hard to overstate how crucial it is that we get digital ID right and ID2020 has been a beacon, steering us towards this end. I’m incredibly happy to see BLOK joining the Alliance and thoroughly proud of this appointment to support ID2020 in this cause.”

We are excited to welcome BLOK Solutions to our community and look forward to collaborating with them toward a future in which all people have better forms of ID that are privacy-protecting, user-controlled, and interoperable.

###

About ID2020

ID2020 is a global public-private partnership that harnesses the collective power of nonprofits, corporations, and governments to promote the adoption and ethical implementation of user-managed, privacy-protecting, and portable digital identity solutions.

By developing and applying rigorous technical standards to certify identity solutions, implementing pilot programs, and advocating for the ethical implementation of digital ID, ID2020 is strengthening social and economic development globally. Alliance partners are committed to a future in which all of the world’s seven billion people can fully exercise their basic human rights and reap the benefits of economic empowerment and to protecting user privacy and ensuring that data is not commoditized.

www.ID2020.org

About BLOK Solutions

BLOK Solutions is a global technology company focused on accelerating the implementation of blockchain into real-world enterprises. BLOK Solutions provides a trust layer to ensure authenticity and security to a range of business cases, from Fintech and biotechnology to decentralized identity, transportation, and geospatial data interrogation. The company’s mission is to enable prosperous, more equitable societies while protecting the privacy and freedom of individuals across the planet.

blok.solutions

blokbioscience.com

ID2020 Welcomes BLOK Solutions to the Alliance was originally published in ID2020 on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 09. November 2020

Hyperledger Foundation

Five Years of Hyperledger

Time flies when you are writing code. And building community. And making new markets. #5yearsofHyperledger

The Linux Foundation announced the Hyperledger project on Dec 17, 2015, which means that Hyperledger is on the brink of turning five. This milestone seems a fitting time to take stock of what we’ve done, the impact we’ve had and where we want to go from here. So we’ve asked a number of our community members to join us for a series of webinars covering the past, present and future of Hyperledger and enterprise blockchain.

We invite you to be part of this five-week-long event that will open on November 16 with a Fireside Chat between Forbes associate editor Michael del Castillo and Hyperledger Executive Director Brian Behlendorf and end with a 5th Anniversary Networking Celebration session on December 17.

In between those sessions, we will hold a series of five webinars covering a range of topics and featuring speakers from across the community and around the world. Here’s the full line-up of sessions:

Monday, November 16, at 1:00 pm EST: Opening Fireside Chat: Grading the First Five Years of Hyperledger
Tuesday, November 17, at 1:00 pm EST: Five Years of Hyperledger: What’s the current impact and where will we drive change next?
Tuesday, November 24, at 4:30 pm Hong Kong/Singapore time / 8:30 am UK time: Examining Blockchain’s Transformative Role in Digitising Trade and Trade Finance
Tuesday, December 1, at 1:00 pm EST: Blockchain’s Role in the Face of Disruption
Wednesday, December 9, at 1:00 pm EST: The Future of Money: Bringing CBDCs and Other Virtual Currencies to Reality
Tuesday, December 15, at 1:00 pm EST: The Next Five Years of Enterprise Blockchain, Hyperledger and Beyond
Thursday, December 17, at 1:00 pm EST: 5th Anniversary Networking Celebration

To find out more about these panels, speakers, and other ways that Hyperledger is marking this anniversary, please visit: https://www.hyperledger.org/5-years-2.

The post Five Years of Hyperledger appeared first on Hyperledger.


Oasis Open

Accenture, Splunk, and Others Back Baseline Protocol for Public Blockchains

​Chainlink, Morpheus.Network, Nethermind, Provide, and Unibright Also Sponsor Baseline Open Source Project  

9 Nov 2020 – Industry leaders across the globe are coming together to support the new Baseline Protocol, part of the Ethereum OASIS open source initiative. The Baseline Protocol defines a method for using the Ethereum Mainnet or other public blockchains to coordinate complex, confidential workflows between enterprises–without moving company data out of traditional systems of record. In today’s announcement, Accenture, Chainlink, Morpheus.Network, Nethermind, Provide, Splunk, and Unibright join Baseline’s founding sponsors, the Ethereum Enterprise Alliance, Ethereum Foundation, and ConsenSys.

“We’re seeing a groundswell of excitement for using the Baseline method, as more and more organizations and developers realize the simplicity and security it offers,” said Guy Martin, executive director of OASIS Open, the nonprofit consortium that hosts the project. “The support of Accenture, Splunk, and all the other Baseline sponsors shows how ready the market is for a low-cost blockchain solution that doesn’t compromise data integrity and that can be used by everyone.”

Support for Baseline
“Our clients have significant operational demands of multiparty systems like blockchain,” said Michael Klein, director of blockchain technology at Accenture. “We continue to see innovations in and around blockchain technologies that are rising to meet those demands. For Accenture, the notable work happening around the Baseline Protocol is an example of the next wave of evolution in this space. We see tremendous potential in the Baseline Protocol’s ability to maintain the integrity and uniqueness of privately-held assets— providing ‘double-spend’ protection without compromising confidentiality—and believe this will expand the applicability of blockchain technology to a broader range of enterprise use cases.”

“As a founding sponsor to the Ethereum OASIS Open Project, the EEA is very pleased to see global leaders rallying around the Baseline Protocol to drive Mainnet business use cases,” said EEA Executive Director Dan Burnett, the Ethereum OASIS Open Project Governing Board Chair. “Baseline is enabling the Ethereum Mainnet to be a new tool for enterprise solution development from automation, business integration, finance to supply chain and sustainability efforts. EEA members alongside the EEA Mainnet Working Group will continue to collaborate on how tools like Baseline can deliver value to consumer-facing industries through peer-to-peer transactions and services.”

“Morpheus.Network is really pleased to be a sponsor of the Baseline Protocol! Baseline is an ideal complement to our supply chain SaaS middleware platform. One common challenge within the supply chain industry is encouraging the open flow of communication and information to help facilitate the movement of goods around the world, while balancing that with the need to keep sensitive and confidential information strictly private. Baseline enables this flow of information by allowing companies to use the data in their existing databases and ERP systems, and ensure that it remains synchronized and consistent with other third-party databases, in a completely private way while using the always-on, distributed, censorship-resistant and tamperproof Ethereum mainnet as a common frame of reference,” said Noam Eppel, Morpheus.Network Co-Founder & COO.

“Nethermind is proud to be a sponsor of the OASIS Open Project alongside industry peers. As maintainers of, and the first Ethereum client embracing Baseline, we are excited that the solutions delivered by Nethermind and Provide enable rapid adoption, allowing enterprises to reinforce their integrations with the unique notarization capabilities and liveness of the Ethereum mainnet. There has never been a better time for enterprises to harness its power,  nor has the mainnet ever been as ready as it is now.” said Tomasz Stańczak, Nethermind Founder & CTO.

“The dawn of the Data Age has created monumental business and societal shifts, while forcing businesses to digitize faster than ever before,” said Nate McKervey, Head of Blockchain and DLT, Splunk. “Data is no longer a nice to have, it’s an essential component of nearly every business worldwide. Organizations that can confidently synchronize their data in real-time and empower themselves through their data insights will leap ahead. The Baseline Protocol enables automation of business processes by leveraging data in new or traditional systems while maintaining integrity and confidentiality. This unlocks data in ways previously impossible, which coincides with Splunk’s vision of helping organizations turn data into doing.” 

About Baseline and Ethereum OASIS

Baseline is part of Ethereum OASIS, an OASIS Open Project that provides a neutral forum for supporting open source projects and specifications that advance interoperability for blockchain applications worldwide. Everyone is welcome to contribute to the Baseline Protocol, and suggestions for new Ethereum OASIS projects are encouraged.

The post Accenture, Splunk, and Others Back Baseline Protocol for Public Blockchains appeared first on OASIS Open.


Federal Blockchain News

SecureKey CTO on Replacing the SSN in DHS systems

An interview with CTO Dmitry Barinov of SecureKey Technologies, a Toronto-based firm known for its VerifiedMe platform, on the recent DHS Silicon Valley Innovation Program award to create an "alternative identifier to the Social security number", part of an effort by DHS to reduce collection and use of social security numbers in its operations.

Sunday, 08. November 2020

Decentralized Identity Foundation

DIF & OIDF

Collaborating on next-generation standards

DIF is proud to announce that it has entered into a liaison agreement with the OpenID Foundation (OIDF). This provides a mechanism for both parties to work together on areas of mutual interest, allowing working groups to align and coordinate through dual-members. The first major collaboration, which has already been underway for weeks, is a process for revising the Self-Issued OpenID Connect (SIOP) chapter of the OpenID Connect (OIDC) specification. This revision takes place in the AB/Connect working group of OIDC, which focuses at any given time on extending the core OIDC specification.

Given the foundational nature of this collaboration (and the potential for breaking changes or mental-model shifts in the next major version), the work on DIF’s Self-Issued OpenID Connect Provider DID Profile working draft, referred to colloquially as “did-siop,” has been paused. It is expected to resume, still housed as a work item in DIF’s DID Authentication Working Group, after the work on the new SIOP specification in AB/C is finalized.

DIF members from the group are actively contributing to the new version of the SIOP specification through OIDF’s structure and channels. If you are interested in advancing or contributing to this work substantially, you are very welcome to join the calls, details of which you can find here. If you are not a member of OIDF, you may contact the chairs of the DID Auth working group or DIF’s Liaison Officer to OIDF, who are listed at the bottom of the working group’s webpage.

NOTE: The latest version of the “did-siop” specification has not been approved by the DIF steering committee; it is currently paused at v0.1 and remains an unofficial Working Group Draft at time of press.

DIF & OIDF was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 06. November 2020

Hyperledger Foundation

Weekend Update: This Week’s Round-up of Remote Blockchain Learning Resources

Welcome to the Weekend Update. Our goal with this weekly post is to share quick updates about online education, networking and collaboration opportunities and resources for the open source enterprise blockchain community. 

If you have suggestions for resources or events that we should spotlight in a future Weekend Update, let us know here using #HLWeekendUpdate. 

Open Source Strategy Forum 2020 (November 12-13)

Open Source Strategy Forum (OSSF) is the only conference dedicated to driving collaboration and innovation in financial services through open source software and standards. It brings together experts across financial services, technology, and open source to engage the community in stimulating and thought-provoking conversations about how to best (and safely) leverage open source software to solve industry challenges. This year’s virtual event will take place November 12-13. 

On Thursday, November 12 at 11:55 am EST, Hyperledger’s Karen Ottoni will take the OSSF stage with Makoto Takemiya of Soramitsu to discuss “Open source blockchain’s emerging role as the platform for digital currencies.”

Find out how to register for Open Source Strategy Forum 2020 here.

Open Climate Collabathon Call to Action

The Open Climate Collabathon is an open event mobilizing a global network of universities, civic tech groups, startups and youth to crowd-develop an integrated climate accounting platform, designed to help the world track and achieve the goals of the Paris climate agreement by leveraging state-of-the-art digital technologies. 

Traditionally, the most important Collabathon event of the year happens in November when the UNFCCC hosts the Conference of the Parties (COP). Since COP26 was postponed, the Open Climate Collabathon is organizing a global sprint running virtually from November 9-23. The sprint is part of a “movement of movements” to ensure 2020 maintains an active climate policy and action agenda. It will be a bottom-up event to engage thousands of participants to collaborate on key prompts via open dialogs and hacking sessions.

Find out how to get involved here.

Recent Research – Smart Contract Reliability Report

Researchers from the Budapest University of Technology and Economics, in cooperation with colleagues from the University of Coimbra, authored the research paper Using Fault Injection to Assess Blockchain Systems in Presence of Faulty Smart Contracts, which appeared in IEEE Access.

Virtual Meetups

Saturday, November 7, at 8:30 UTC / 14:00 IST: Hyperledger India Chapter hosts “Blockchain Techfest 2020 – Part 2”
Wednesday, November 11, at 17:00 UTC / 18:00 CET: Hyperledger Madrid hosts “Contribuye en la traducción de la documentación de Hyperledger Fabric” (Spanish)
Wednesday, November 11, at 22:00 UTC / 17:00 COT: Hyperledger Latinoamerica hosts “Casos de Uso Empresariales con Hyperledger: Everis” (Spanish)
Friday, November 13, at 15:00 UTC / 16:00 CET: Hyperledger Budapest hosts “Reimplement your composer business network into Fabric Java chaincode”

See the full Virtual Meetup schedule here

The post Weekend Update: This Week’s Round-up of Remote Blockchain Learning Resources appeared first on Hyperledger.


MyData

MyData view on the leaked EU Data Governance Act, Nov 5th 2020

  We welcome the regulation as a needed common ground for clarifying the role of data intermediaries, building trust in these intermediaries and setting the direction for data governance, similar to what GDPR did for data protection. At the same time, we advocate for careful scrutiny of the articles, as the Data Governance Act will...

Read More

The post MyData view on the leaked EU Data Governance Act, Nov 5th 2020 appeared first on MyData.org.

Thursday, 05. November 2020

Hyperledger Foundation

DEON: A Hyperledger-based DEcentralized Off-grid Network

The DEON project focuses on the application of blockchains to secure data sharing in private networks and was initiated in 2018 in the wireless and sensor networks laboratory at the Yale Institute for Network Science. The use case of off-grid communication networks was identified with the goal of enabling their full decentralization in terms of data management and identity management. Off-grid (communication) networks are peer-to-peer networks that are autonomous, without super nodes and not dependent on the Internet’s physical infrastructure. Several recent developments like goTenna[1] and the Beartooth[2] offer standalone wireless devices that could be used to form local peer-to-peer networks. Other developments like the well-known Guifi community network[3] in Catalonia, Spain, are community-led paradigms in that space. Although these networks promote decentralization, openness, and fairness, they rely on legacy, centralized technologies for specific parts of their architecture like data and identity management. The integration of blockchains into off-grid networks appeared promising since blockchain provides attributes like transparency, privacy, distribution of governance and decentralization that are highly desirable in off-grid networks.

We chose Hyperledger Fabric as the framework to build our architecture because of its flexibility, performance and the potential we saw behind this huge community of enthusiasts and developers to further advance the technology. After we got familiar with the framework, we identified a key missing aspect: a decentralized identity management component. Unlike the rest of Fabric’s architecture, its native identity management is centralized and based on Certificate Authorities. So we came across the following questions: “which of the nodes in the network is going to host and manage the CA?”, “can we stick with the initial plan of having equal nodes?” and “how can we preserve the decentralization of off-grid networks if we rely on centralized nodes?”

Looking at the rest of the Hyperledger ecosystem, we found that Hyperledger Indy and Aries enable decentralized identity management. We could transform the centralized CAs of Fabric to a distributed CA entity, aka the Indy ledger, so that they are accessible by all nodes but not hosted by a specific one. This approach would work like a distributed oracle of trust in the network but necessitated some changes in Fabric and other extensions in the entire stack to bring DIDs into the “Fabric world.”

Approach and proposed architecture

The first Fabric extension identified as necessary for the integration was an Indy-based MSP to verify identities, signatures and transactions signed by DIDs. In addition, we had to enable the Fabric SDK (Go) to sign transactions using DIDs. For the Indy/Aries part, we leveraged the Hyperledger Aries Cloud Agent (aca-py) [4], which is deployed in each node of the network and serves as both the verifier and the issuer in the network. On one hand, it signs Fabric transactions using DIDs and issues credentials to the users of the network, and on the other it verifies proofs and transactions signed by DIDs. The DEON Core Service leverages Fabric private data collections and an interface to IPFS to expose a REST API for secure, transparent, fast and privacy-preserving data storage. An overview of the integration is shown in the figure below. The proposed architecture can be employed by off-grid networks of any kind, from IoT to communication and inter-enterprise consortia networks, for enabling self-sovereign identity and user-centric data sharing.
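The DID-based signing and verification flow described above can be sketched in Go (the language of DEON’s Fabric SDK extension). This is a minimal, self-contained illustration, not DEON’s actual code: the function name, the payload shape, and the hex-based DID derivation are assumptions made for the sketch (Indy conventionally derives a did:sov identifier by base58-encoding the first 16 bytes of the verkey; hex is used here to stay within the standard library).

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// signAndVerify sketches the two roles in the integration: an identity agent
// (the aca-py agent's role) signs a transaction payload with an ed25519
// keypair anchored to a DID, and the verifier (the Indy-based MSP's role)
// resolves the DID to its verification key and checks the signature.
func signAndVerify() (string, bool) {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Simplified DID derivation from the public (verification) key.
	did := "did:sov:" + hex.EncodeToString(pub[:16])

	// An illustrative payload; in DEON the Core Service submits Fabric
	// transactions (e.g., writes to a private data collection).
	payload := []byte(`{"channel":"deon","fn":"putData","args":["sensor-42"]}`)
	sig := ed25519.Sign(priv, payload)

	// In the integrated design this check happens inside the Indy-based MSP
	// after resolving the signer's DID on the Indy ledger; here the verkey
	// is already in hand.
	return did, ed25519.Verify(pub, payload, sig)
}

func main() {
	did, ok := signAndVerify()
	fmt.Printf("signer=%s valid=%v\n", did, ok)
}
```

The point of the sketch is the trust shift: the MSP no longer checks a certificate chain rooted in a single CA, but a signature against a key that any node can resolve from the distributed Indy ledger.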

Figure 1: HL Fabric-Indy/Aries integration

Project outcomes and future work

The work started as a joint effort between the wireless and sensor networks lab of Professor Leandros Tassiulas at the Yale Institute for Network Science and Tata Consultancy Services (TCS), which put its expertise in decentralized identity to work in the development of the identity parts. Part of the work and initial benchmarks of the architecture deployed in off-grid settings are presented in the paper “A Blockchain-based Decentralized Data Sharing Infrastructure for Off-grid Networking[5].”

Currently the Yale team is working on improving the code to make it more usable by others as a standalone solution for integrating Fabric and Indy/Aries. The team is also working on feature enhancements for the DEON platform, such as supporting Fabric v2, updating the DEON identity agents to the latest version of the aca-py agent, extending the DEON REST API with admin functionalities, and looking for new applications of the platform. The code of the project can be found on GitHub: https://github.com/off-grid-block.

[1] goTenna off-grid device: https://gotenna.com/
[2] Beartooth off-grid device: https://beartooth.com/
[3] Guifi.net: https://guifi.net/
[4] aca-py agent: https://github.com/hyperledger/aries-cloudagent-python/
[5] “A Blockchain-based Decentralized Data Sharing Infrastructure for Off-grid Networking”: https://arxiv.org/abs/2006.07521v2

About the author
Harris Niavis is a Research Engineer at Yale University. His research interests lie in enterprise blockchain networks, decentralized identity management, mesh networks and IoT.

 

Cover image by Pete Linforth from Pixabay

The post DEON: A Hyperledger-based DEcentralized Off-grid Network appeared first on Hyperledger.

Wednesday, 04. November 2020

Hyperledger Foundation

Announcing Hyperledger Besu 20.10.0

This release includes new versioning and mainnet-focused advancements The Hyperledger Besu team is excited to announce today’s Hyperledger Besu 20.10.0 release.  You might have noticed that the versioning for this... The post Announcing Hyperledger Besu 20.10.0 appeared first on Hyperledger.

This release includes new versioning and mainnet-focused advancements

The Hyperledger Besu team is excited to announce today’s Hyperledger Besu 20.10.0 release. 

You might have noticed that the versioning for this quarterly release is a little different than prior Hyperledger Besu releases. The Hyperledger Besu community recently decided to switch its versioning to calendar versioning, known as CalVer. Instead of the historic semantic versioning used by Besu and other Hyperledger projects, the Besu team decided to use CalVer moving forward. In all future releases, you will notice the project versions will start with the year and month (YY.M) of the last major release candidate, followed by a patch number for incremental releases, which results in a YY.M.patch, or 20.10.0 for this release. The Besu team believes this will better track the project’s changes and it follows many other successful open source projects that use CalVer, including Splunk and Ubuntu.

Check out the old vs. new versioning in the table below.

| Project | Release | Old Release Versioning | New CalVer Release Versioning |
| --- | --- | --- | --- |
| Hyperledger Besu | Hyperledger Besu Q4 Release Candidate 1 | 1.6.0-RC1 | 20.10.0-RC1 |
| Hyperledger Besu | Hyperledger Besu Q4 Release Candidate 2 | 1.6.0-RC2 | 20.10.0-RC2 |
| Hyperledger Besu | Hyperledger Besu Q4 2020 Quarterly Release | 1.6.0 | 20.10.0 |
| Hyperledger Besu | Hyperledger Besu subsequent bi-weekly release | 1.6.1 | 20.10.1 |
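As a small illustration of the YY.M.patch scheme (the helper below is hypothetical, not from the Besu codebase), a CalVer string can be derived from a release date and patch counter:

```go
package main

import "fmt"

// calVer formats a release as YY.M.patch calendar versioning,
// as adopted by the Besu community (illustrative helper only).
func calVer(year, month, patch int) string {
	return fmt.Sprintf("%02d.%d.%d", year%100, month, patch)
}

func main() {
	fmt.Println(calVer(2020, 10, 0)) // 20.10.0 — the quarterly release
	fmt.Println(calVer(2020, 10, 1)) // 20.10.1 — a subsequent bi-weekly release
}
```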


Now to share what is included in the latest release. The Besu community is excited about the continued advancements of the Hyperledger Besu project featured in this release. 

A few highlights for this release include:

- Flexible Privacy Group Performance Tests
- Mainnet Support Work, including preparing for the Berlin Network Upgrade and EIP-1559 support

Flexible Privacy Group Performance Tests

The ‘add and remove members for privacy groups’ feature was released earlier this year as an early access feature. With privacy groups in Hyperledger Besu, you can add and remove members from a privacy group, creating an improved user experience for private transactions. Privacy groups are built using a private transaction manager, called Orion, to help send private transactions in a permissioned network.

In the 20.10.0 version, privacy groups have been further improved to ensure robust performance. The team performed various tests to ensure the flexible privacy group feature is not a performance bottleneck. 

Flexible privacy groups are now supported when using multi-tenancy. In addition, the team created more library examples and documentation of use cases. 

Mainnet Support Work

Since Hyperledger Besu runs on the public Ethereum mainnet, the Besu community also sought to improve its public chain settings. As a reminder, Hyperledger Besu is the only Hyperledger project that runs on a public chain as well as in permissioned network settings. This optionality makes it unique and a popular project for trying out either public chain or permissioned network options for a use case.

Berlin Network Upgrade

In this release, the community prioritized work ensuring Hyperledger Besu is ready for the next Ethereum hard fork, Berlin, scheduled to happen in the next few months. You can learn more about Ethereum’s hard forks here. For the Berlin hard fork, the Besu and Ethereum communities are broadly focused on implementing EIPs, or Ethereum Improvement Proposals, that will help with the UX of the Ethereum 2.0 deposit contract, add new functionality to the EVM and change gas costs to reflect their execution time more accurately. 

EIP-1559

In addition to its work on the Berlin network upgrade, the Besu team has also been leading efforts to implement EIP-1559. EIP-1559 is a highly anticipated upgrade to Ethereum’s transaction fee market. This EIP’s goal is to make the Ethereum fee market more efficient. You can read more about the current status of EIP-1559 here in a post written by Tim Beiko, one of our Besu team members.
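For readers unfamiliar with the proposal, the heart of EIP-1559 is a per-block base fee that self-adjusts toward a gas usage target: the fee moves by at most 1/8 per block, proportionally to how far gas used strays from the target. A simplified Go sketch of the update rule from the EIP (using uint64 instead of the spec's arbitrary-precision integers) looks like:

```go
package main

import "fmt"

const baseFeeMaxChangeDenominator = 8 // from the EIP-1559 specification

// nextBaseFee computes the base fee for a block from its parent's base
// fee and gas usage, per the EIP-1559 adjustment rule (simplified).
func nextBaseFee(parentBaseFee, gasUsed, gasTarget uint64) uint64 {
	if gasUsed == gasTarget {
		return parentBaseFee
	}
	if gasUsed > gasTarget {
		delta := parentBaseFee * (gasUsed - gasTarget) / gasTarget / baseFeeMaxChangeDenominator
		if delta < 1 {
			delta = 1 // the spec forces at least a 1-wei increase
		}
		return parentBaseFee + delta
	}
	delta := parentBaseFee * (gasTarget - gasUsed) / gasTarget / baseFeeMaxChangeDenominator
	return parentBaseFee - delta
}

func main() {
	// A completely full block (2x the target) raises the base fee by 12.5%.
	fmt.Println(nextBaseFee(1000, 30000000, 15000000)) // 1125
	// An empty block lowers it by 12.5%.
	fmt.Println(nextBaseFee(1000, 0, 15000000)) // 875
}
```

Because users pay (and the network burns) this base fee rather than bidding blindly, fee estimation becomes far more predictable, which is the efficiency gain the EIP targets.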

What’s Next?

The Hyperledger Besu community remains committed to improving its project and making it fit for production blockchain use cases. Watch for new features addressing node hibernation and Bonsai Tries database improvements in our next quarterly release.

Get Involved

Download the latest version of Hyperledger Besu here.

Interested in learning more or curious about how to get started with Hyperledger Besu? Check out the Besu docs, view the tutorials, visit the wiki, or take a look at some open issues on GitHub.

Stay tuned to hear more about our work in Ethereum and Hyperledger and about how Hyperledger Besu is continuing to lead the enterprise blockchain space.

The post Announcing Hyperledger Besu 20.10.0 appeared first on Hyperledger.

Tuesday, 03. November 2020

Oasis Open

Invitation to comment on XACML v3.0 Related and Nested Entities v1.0 from the XACML TC

We are pleased to announce that XACML v3.0 Related and Nested Entities Profile Version 1.0 CSD02 from the OASIS eXtensible Access Control Markup Language (XACML) TC is now available for public review and comment. This is the second public review for this work. Overview: The post Invitation to comment on XACML v3.0 Related and Nested Entities v1.0 from the XACML TC appeared first on OASIS Open.

Second opportunity for public review. Ends November 18th.

We are pleased to announce that XACML v3.0 Related and Nested Entities Profile Version 1.0 CSD02 from the OASIS eXtensible Access Control Markup Language (XACML) TC is now available for public review and comment. This is the second public review for this work.

The eXtensible Access Control Markup Language (XACML) defines categories of attributes that describe entities of relevance to access control decisions. XACML rules, policies and policy sets contain assertions over the attributes of these entities that must be evaluated to arrive at an access decision. Principal among the various predefined entities are the entity that is requesting access, i.e., the access subject, and the entity being accessed, i.e., the resource. However, it is not unusual for access decisions to be dependent on attributes of entities that are associated with the access subject or resource. For example, attributes of an organization that employs the access subject, or attributes of a licensing agreement that covers the terms of use of a resource.

This profile defines two ways of representing these associated entities in the request context – related entities and nested entities – and defines additional mechanisms to access and traverse these entities.
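The following Go sketch illustrates only the underlying access-control idea, not XACML syntax: the decision depends on an attribute of an entity *related to* the subject (here, the employing organization), which is exactly the kind of association the profile lets a policy reference and traverse. All names are hypothetical.

```go
package main

import "fmt"

// organization is a related entity: its attributes, not the subject's
// own, drive the access decision in this example.
type organization struct {
	Name      string
	Certified bool
}

type subject struct {
	Name     string
	Employer *organization // the related entity, reachable from the subject
}

// permit grants access only if the subject's employer holds the
// required certification — an assertion over a related entity's attribute.
func permit(s subject) bool {
	return s.Employer != nil && s.Employer.Certified
}

func main() {
	acme := organization{Name: "Acme", Certified: true}
	fmt.Println(permit(subject{Name: "alice", Employer: &acme})) // true
	fmt.Println(permit(subject{Name: "bob"}))                    // false
}
```

In XACML terms, the profile standardizes how such an employer entity is represented in the request context and how a policy navigates from the access subject to it.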

About the TC

The XACML TC specifies access control standards, based on the Attribute-based Access Control model (ABAC). The core of this work is the specification of the syntax and semantics of a policy language called XACML. Current work in the TC consists mostly of defining additional profiles of various types which build on version 3.0 of the XACML core specification.

The documents and related files are available here:

XACML v3.0 Related and Nested Entities Profile Version 1.0
Committee Specification Draft 02
20 August 2020

Editorial source (Authoritative):
https://docs.oasis-open.org/xacml/xacml-3.0-related-entities/v1.0/csd02/xacml-3.0-related-entities-v1.0-csd02.docx

HTML:
https://docs.oasis-open.org/xacml/xacml-3.0-related-entities/v1.0/csd02/xacml-3.0-related-entities-v1.0-csd02.html

PDF:
https://docs.oasis-open.org/xacml/xacml-3.0-related-entities/v1.0/csd02/xacml-3.0-related-entities-v1.0-csd02.pdf

Additional normative artifacts: XML schema:
https://docs.oasis-open.org/xacml/xacml-3.0-related-entities/v1.0/csd02/schemas/

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open.org/xacml/xacml-3.0-related-entities/v1.0/csd02/xacml-3.0-related-entities-v1.0-csd02.zip

How to Provide Feedback

OASIS and the XACML TC value your feedback. We solicit feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

This public review starts on 04 November 2020 at 00:00 UTC and ends 18 November 2020 at 11:59 UTC.

Comments on the work may be submitted to the TC by following the instructions located at: http://www.oasis-open.org/committees/comments/form.php?wg_abbrev=xacml

Feedback submitted by TC non-members for this work and for other work of this TC is publicly archived and can be viewed at: http://lists.oasis-open.org/archives/xacml-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members.

In connection with the public review of these works, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about this specification and the XACML TC may be found on the TC’s public home page: https://www.oasis-open.org/committees/xacml/

Additional information related to this public review can be found in the public review metadata document [3].

==========

Additional references:

[1] http://www.oasis-open.org/policies-guidelines/ipr

[2] http://www.oasis-open.org/committees/xacml/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#RF-on-Limited-Mode
RF on Limited Terms Mode

[3] Public review metadata document: https://docs.oasis-open.org/xacml/xacml-3.0-related-entities/v1.0/csd02/xacml-3.0-related-entities-v1.0-csd02-public-review-metadata.html

The post Invitation to comment on XACML v3.0 Related and Nested Entities v1.0 from the XACML TC appeared first on OASIS Open.


r@w blog

#WikiShadows (Techno-Political Contours of Knowledge Production on Wikipedia)

Tanveer Hasan & Rahmanuddin Shaik Session Wikipedia is a group project, and people in the group need to have separate pages to discuss changes and improvements to Wikipedia’s content, be that an article, a policy, a help page, or anything else. Reading these discussion pages is a vastly rewarding, slightly addictive experience. Sometimes reading Wikipedia can ruffle feathers. E.g
Tanveer Hasan & Rahmanuddin Shaik Session

Wikipedia is a group project, and people in the group need to have separate pages to discuss changes and improvements to Wikipedia’s content, be that an article, a policy, a help page, or anything else. Reading these discussion pages is a vastly rewarding, slightly addictive experience. Sometimes reading Wikipedia can ruffle feathers.

E.g. 1:

The song, Jana-gana-mana, composed originally in Bengali by Rabindranath Tagore, was adopted in its Hindi version by the Constituent Assembly as the National Anthem of India on January 24, 1950. It was first sung on December 27, 1911 at the Calcutta session of the Indian National Congress. [1]

Whereas Wikipedia entry of National anthem mentions thus:

“Jana Gana Mana is the national anthem of India. Written in highly Sanskritised (Tatsama) Bengali.” [2]

E.g. 2:

Are these beautiful waterfalls on the Kaveri River located in Tamil Nadu — or on the border between Tamil Nadu and Karnataka — or in Tamil Nadu on its border with Karnataka? Or is it really the Cauvery river, and Hogenakal Falls? [3]

Whatever you believe, be sure to bring a (Google) map to the debate, and point out that your opponent’s sources are not RS or NPOV!

E.g. 3:

Born of Serbian parents in a part of the Austrian Empire, which a short time later became a part of the Hungarian half of Austria-Hungary and is now in Croatia. He eventually became a naturalized citizen of the US. [4]

So was he Serbian? Croatian? Austrian? Austro-Hungarian? Istro-Romanian? Jewish? American? Martian? You decide! But don’t forget to leave an edit summary saying how pathetic it is to choose any other version. (Guess who we are talking about?) Clue: he is an inventor par excellence.

In this day and age, where information is often a touch-and-go process, the now-forgotten mode of a solitary quest towards creating knowledge sounds (almost) romantic. Networked collaborations such as Wikipedia, which have created knowledge sites, have led to democratic interpretation and assimilation of such knowledge. As a basic necessity, they have also given rise to various modes of annotating the knowledge thus produced, verifying it, and gauging its utility. After all, why create and hold on to information that nobody really cares about?

Plan

In this discussion session, the co-leaders shall attempt to peel back the benign face of the visible Wikipedia page (there is a hidden world out there) and discuss the political, technological and social contours of information available on Wikipedia. We shall take the participants through the various stages of discussion about a Wikipedia page and show how discussions tend to alter the course of an article: how false consensus is proposed, how consent is manufactured, and how these efforts are usually defeated by ‘Answer People’ and ‘Vandal Fighters’. It is no less of a war than the one between information and mis-information. The discussion on calculus, for instance, was host to some sparring over whether the concept of “limit,” central to calculus, should be better explained as an “average.”

This discussion session brings to the table questions of legitimisation of knowledge and the inherent hierarchies that operate even within open networks of collaboration and offers a critique on consumption oriented knowledge production. The session also aims to ask questions around knowledge as an agent that has levelled some of the earlier existing contours but has introduced some of its own and how that has changed our usages and shapes our experiences.

The session will involve an edit-a-thon on a topic that will be selected by the co-leaders of the session and live commentary on the discussion pages will be tracked for further analysis. The session intends to build a dialogue towards attempting to problematise the questions of the starkly hierarchical and segmented experiences that have played a significant role in production of knowledge in the era of new knowledge practices. The session also will question the ‘best practices’ in building consent in the present global techno-economic contours of the internet, and its effect on academic spaces, creative practice and intervention.

Readings

Using Talk Pages
Talk page guidelines
Tutorial on Wikipedia talk pages
Introduction to talk pages
A Wikipedia Reader (pdf, 6.6 MB)

Audio Recording of the Session

IRC 2016: Day 2 #Wiki Shadows : Researchers at Work (RAW) : Free Download, Borrow, and Streaming : Internet Archive

Session Team

Tanveer Hasan works with the Wikimedia Foundation. He is interested in understanding the politics of language that shapes our understanding of knowledge, and also in exploring the processes of production of knowledge in digital and humanities fields.

Rahmanuddin Shaik, currently working with the NTR Trust, is a Wikipedian and believes in FLOSS. He uses the Internet as a tool to propagate his mother tongue, Telugu. He writes technical and linguistic blog posts and columns in Telugu.

Note: This session was part of the first Internet Researchers’ Conference 2016 (IRC16) , organised in collaboration with the Centre for Political Studies (CPS), at the Jawaharlal Nehru University, Delhi, on February 26–28, 2016. The event was supported by the CSCS Digital Innovation Fund (CDIF).

#WikiShadows (Techno-Political Contours of Knowledge Production on Wikipedia) was originally published in r@w blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 02. November 2020

Oasis Open

Invitation to comment on Exchange Header Envelope (XHE) v1.0 from the BDXR TC

We are pleased to announce that Exchange Header Envelope (XHE) Version 1.0 CSD03 from the OASIS Business Document Exchange (BDXR) TC is now available for public review and comment. This is the fourth public review for this work. The Exchange Header Envelope (XHE) specifies an XML vocabulary for expressing, in machine-processable syntax, the semantics of describing either a header to or an envelo

Opportunity for public review ends November 17th

We are pleased to announce that Exchange Header Envelope (XHE) Version 1.0 CSD03 from the OASIS Business Document Exchange (BDXR) TC is now available for public review and comment. This is the fourth public review for this work.

The Exchange Header Envelope (XHE) specifies an XML vocabulary for expressing, in machine-processable syntax, the semantics of describing either a header to or an envelope for a set of payloads of content. This is distinct from any transport-layer infrastructure header or envelope that may be required to propagate documents from one system to another. The metaphor of a paper envelope in which one places business documents for transport or management is an apt way to describe the role of an exchange header envelope. Concepts of routing, authentication, non-repudiation and concealment all apply in both the metaphor and the electronic equivalent.

XHE is developed jointly by UN/CEFACT and OASIS. It is the successor to the UN/CEFACT Standard Business Document Header (SBDH) Version 1.3 and the OASIS Business Document Envelope (BDE) Version 1.1. The vocabulary is modeled using the UN/CEFACT Core Component Technical Specification Version 2.01 [CCTS 2.01].

The documents and related files are available here:

Exchange Header Envelope (XHE) Version 1.0
Committee Specification Draft 03
07 October 2020

Editorial source (Authoritative):
https://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xhe-v1.0-csd03.xml

HTML:
https://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xhe-v1.0-csd03-oasis.html

PDF:
https://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xhe-v1.0-csd03-oasis.pdf

Additional normative artifacts:
https://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/mod/
https://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xsd/
https://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xsdrt/

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:

http://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xhe-v1.0-csd03.zip

How to Provide Feedback

OASIS and the BDXR TC value your feedback. We solicit feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

This public review starts on 03 November 2020 at 00:00 UTC and ends 17 November 2020 at 11:59 UTC.

Comments on the work may be submitted to the TC by following the instructions located at: http://www.oasis-open.org/committees/comments/form.php?wg_abbrev=bdxr

Feedback submitted by TC non-members for this work and for other work of this TC is publicly archived and can be viewed at: http://lists.oasis-open.org/archives/bdxr-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members.

In connection with the public review of these works, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about this specification and the BDXR TC may be found on the TC’s public home page: https://www.oasis-open.org/committees/bdxr/

Additional information related to this public review can be found in the public review metadata document [3].

==========

Additional references:

[1] http://www.oasis-open.org/policies-guidelines/ipr

[2] http://www.oasis-open.org/committees/bdxr/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#Non-Assertion-Mode
Non-Assertion Mode

[3] Public review metadata document: – https://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xhe-v1.0-csd03-public-review-metadata.html

The post Invitation to comment on Exchange Header Envelope (XHE) v1.0 from the BDXR TC appeared first on OASIS Open.


FIDO Alliance

FIDO Alliance Members Meet Virtually in Inaugural APAC Marketing Forum

Joon Hyuk Lee and Atsuhiro Tsuchiya, APAC Market Development Team [Snapshots of AMF Inauguration Members] As the world struggles to contain the global pandemic, cybercriminals are launching their attacks and […] The post FIDO Alliance Members Meet Virtually in Inaugural APAC Marketing Forum appeared first on FIDO Alliance.

Joon Hyuk Lee and Atsuhiro Tsuchiya, APAC Market Development Team

[Snapshots of AMF Inauguration Members]

As the world struggles to contain the global pandemic, cybercriminals are launching their attacks and taking advantage of the anxiety and uncertainty that people are feeling. They impersonate trusted authorities or brands to mislead their victims. This is not surprising as cybercriminals are always on the lookout for opportunities and vulnerabilities.

Cybersecurity ranks amongst the top ten global risks, and reducing cyber-risk exposure has become a priority for business leaders, according to the World Economic Forum’s 2020 Global Risks Report.

Meanwhile, cybersecurity and technology experts overwhelmingly agree that reliance on passwords should be reduced if not totally scrapped: 80 percent of all data breaches involve weak or stolen passwords, and 29 percent of all attacks leverage the latter.

The use of passwords poses many challenges. As we increasingly live our lives and perform mission critical work online, safe access to connected devices and online services is more important than ever. The need to raise authentication standards and reduce reliance on passwords is now more urgent than ever.

APAC Marketing Forum

Since 2012, the FIDO Alliance has been working with organizations across Asia Pacific (APAC) to reduce the reliance on passwords and encourage the adoption of simpler and stronger approaches to authentication. Today, we have close to 40 members from both the public and private sector in this region.

Recently, more than 30 representatives from these member organizations got together for the very first FIDO Alliance APAC Marketing Forum (AMF). The AMF, held virtually, was an informal, marketing-related discussion.

The event provided a platform for members to connect, learn about each other’s markets and share best practices. It facilitated communication and cooperation amongst members, and the authentication industry as a whole.

Recent Initiatives in APAC

FIDO members in APAC also have made tremendous progress in recent months.

Companies that deployed FIDO authentication include PrivyID in Indonesia, and Japan-based NTT Docomo and KDDI. Furthermore, VinCSS became the first company in Vietnam to develop FIDO2-certified strong authentication servers.

FIDO also was included in official standards documents developed by the Taiwan Association of Information and Communication Standards (TAICS) and SEMI (Semiconductor Equipment and Materials International) Taiwan. 

Also in Taiwan, Cathay United Bank has added the FIDO logo to the latest version of its app, which was released to customers in August.

Additionally, we had successful events like the FIDO Security Key Support Campaign, and 2020 FIDO Hackathon – Goodbye Password Challenge that offered member organizations opportunities to interact with each other despite physical distancing.

Activities in the Pipeline

Moving forward, we aim to organize more of both digital and onsite collaborative marketing events where members can promote their innovations and share case studies. Currently, planned initiatives include:

- FIDO Alliance virtual AMFs to be organized once every quarter, with post-discussion updates shared through the FIDO Blog
- A FIDO Alliance quarterly member newsletter
- Updated FIDO Alliance orientation material with contents customized for the needs of APAC members

We look forward to seeing you at the next virtual meeting in October!

If you wish to take part in these exciting new initiatives, or have any inquiries, please do not hesitate to contact tsuchiya@fidoalliance.org.

By joining AMF, you will not only get to connect with key authentication players in APAC, but also gain benefits of participating in FIDO branded awareness and promotional activities together with global champions.

The post FIDO Alliance Members Meet Virtually in Inaugural APAC Marketing Forum appeared first on FIDO Alliance.


Hyperledger Foundation

The move to a production network: What you need to consider and how Hyperledger Fabric can help

Blockchain is a distributed ledger technology that provides a shared, immutable, and transparent history to the participants in the network of all the actions that have happened. Currently, different types... The post The move to a production network: What you need to consider and how Hyperledger Fabric can help appeared first on Hyperledger.

Blockchain is a distributed ledger technology that provides a shared, immutable, and transparent history to the participants in the network of all the actions that have happened. Currently, different types of blockchain technologies exist, including private permissioned implementations that allow governance of the network participants and secure management of sensitive data. 

Blockchain has already demonstrated its potential in numerous POCs and production implementations in Switzerland and abroad. Deployments are live in various business areas such as supply-chain management, finance (e.g., payments in crypto-currencies, tokenization), healthcare data management, ticketing services, eVoting, and car and plane configuration and maintenance, among others. Increasingly, companies are moving past the initial stage of testing whether blockchain is a good solution to a specific business problem (i.e., developing a POC) and are starting the work of adopting this technology in production networks.

The move from a successful POC implementation to a production deployment brings many added challenges such as ongoing management, integration (on both the technology and business process fronts), security and budgeting. Here are some key areas where advanced consideration and preparation can smooth a production implementation:

Deployment options for running and maintaining the blockchain nodes and other components of the blockchain network.

Blockchain technology is often employed to ensure transparent and secure transactions executed between parties without fully aligned interests. Instead of relying on a centralized “trusted” party, which can become a target for internal and external cyber-attacks, parties opt to set up a distributed network. This blockchain network consists of replicated nodes that execute smart contracts – programs defined by the business logic of the application. Depending on the business requirements, a specific type of blockchain implementation (i.e., permissionless/permissioned, private/public, or hybrid) must be chosen. This choice also impacts the network configuration and maintenance. When the nodes are located on premises, execution of smart contracts can be verified by the parties directly. However, infrastructure maintenance costs and a shortage of specialists “in house” can complicate such an approach. The nodes can also be deployed in the cloud; multiple companies provide such services. Working with service providers simplifies management. However, it also requires careful selection of the provider(s) based on the sensitivity level of the data being processed by the smart contracts, applicable regulations, and the physical location of the hosting data centers.

Set up or integration of identity management approach.

For a permissioned blockchain implementation, the identity management service plays a crucial role. Often, even as part of distributed ledger frameworks, identity management services are implemented in a centralized manner, which poses serious security threats and can become a single point of failure. Setting up a distributed identity management service, using a blockchain-based approach for identity management (such as self-sovereign identity solutions) and implementing privacy-preserving approaches (using advanced cryptographic primitives such as zero-knowledge proofs) are the approaches that must be considered for a real-world implementation.

Transformation of business processes into chaincode (smart contracts)

Transforming paper-based contracts and business processes into computer programs can be challenging and may require compliance with domain-specific regulations (e.g., the Drug Supply Chain Security Act in the US and the Falsified Medicines Directive of the European Union in the pharmaceutical sector). Making sure the business processes are properly transferred and are available for verification and audits is a cornerstone requirement for the successful integration of blockchain. To understand and translate the specifics of a company’s business area into smart contracts, collaboration is required between the company and blockchain specialists with the relevant domain knowledge and legal expertise.

Data management approach.

Replication of code execution between multiple parties brings transparency to blockchain implementations. At the same time, more parties may access the sensitive data required to execute the smart contracts deployed on the nodes. Over the course of blockchain’s development, a number of approaches to hybrid data management have been proposed. These approaches assume that only part of the data is managed within the blockchain network, while most of the data, especially data of a sensitive nature, is stored and processed locally. Private data collections, hardware security modules, and data encryption are examples of techniques that aim to ensure data security and user privacy.

Estimation of the infrastructure and maintenance costs.

Once the above points are considered and preliminary choices are made, it is easier to estimate the effort and cost of implementing and maintaining the system. It is important to ensure integration and interoperability with other, non-blockchain-based components. Detailed specification of SLAs and timelines in the dynamic blockchain environment must not be overlooked.

Working with customers, we have found that Hyperledger Fabric, one of the most mature permissioned blockchain platforms, is well architected to help companies make the leap from POC to a successful production deployment.

Hyperledger Fabric networks can be set up on the customer’s premises and in the cloud, including multi-vendor cloud environments. Deployment and maintenance of robust and secure blockchain-based POCs and real-world systems can be achieved in close collaboration between the company and a vetted service provider with deep experience helping enterprises successfully adopt Fabric.

With Hyperledger Fabric, business logic can be “translated” into chaincode using one of the following general-purpose languages: Go, Node.js, or Java. Fabric also provides a set of built-in tools for sensitive data management, such as private data collections and channels, as well as flexibility in the choice of identity management approach, such as decentralized identity management and anonymous credentials.

Being aware of the challenges, knowing how to address them, and working together with a Hyperledger Certified Service Provider, like Swisscom, are key components to the successful deployment of a blockchain-based project in production.

The post The move to a production network: What you need to consider and how Hyperledger Fabric can help appeared first on Hyperledger.


Federal Blockchain News

Melissa Oh & Anil John of DHS Silicon Valley Innovation Program

Melissa Oh & Anil John of the Department of Homeland Security's Silicon Valley Innovation Program (SVIP) reveal the secret sauce of this one-of-a-kind federal innovation approach to push emerging blockchain products towards interoperability, and create solutions built-to-suit for DHS without suffering vendor lock-in. They discuss building a diverse partnership portfolio with startups run by women and other underrepresented groups, and call on other federal agencies to share their work and lessons learned in blockchain projects. Melissa Oh is Managing Director of the Silicon Valley Office of the DHS Science & Technology Directorate. Anil John is the Technical Director of the SVIP.

Friday, 30. October 2020

Me2B Alliance

Re: Stay connected with us!


Hi friends,

Over the course of this weekend I'll be shutting down the groups.io mailing lists.  Be sure to sign up at https://me2ba.org/membership/ to stay a part of the conversation and our projects to ensure the ethical behavior of technology.

See y'all on the other side!

Lisa


Hyperledger Foundation

Weekend Update: This Week’s Round-up of Remote Blockchain Learning Resources


Welcome to the Weekend Update. Our goal with this weekly post is to share quick updates about online education, networking and collaboration opportunities and resources for the open source enterprise blockchain community. 

If you have suggestions for resources or events that we should spotlight in a future Weekend Update, let us know here using #HLWeekendUpdate. 

Call for students, professors and researchers to get involved in Hyperledger

Universities have a critical role to play in the transformation that blockchain technology is driving. Educational institutions will teach needed skills, research new uses and help shape the direction of this movement. If you are at a university and are interested in understanding Hyperledger and making an impact on the projects, this guide has the information you need to get started.

Blockchain Expo North America

Blockchain Expo North America will be virtual this year. The event will feature two days (November 4-5) of top-level content and thought leadership discussions looking at the blockchain ecosystem. Hyperledger executives will take the stage for two sessions on November 4 for the focus on blockchain for the enterprise:

Keynote: “Success Stories of Deploying Blockchain and Lessons Learned” by Hyperledger’s Brian Behlendorf at 10:00 a.m. MST
Live Keynote Panel: “The Future of Enterprise Technology – Predictions of 2021 & Beyond” moderated by Hyperledger’s Daniela Barbosa at 10:20 a.m. MST

For more details about Blockchain Expo North America, go here

Webinar: How Credit Unions are Using MemberPass to Improve the Member Experience

Data security, particularly around identity, has become a major concern for consumers everywhere due to ongoing data breaches and waves of identity theft. In the financial industry, digital identity protection is especially crucial. Innovative credit unions have collaborated to develop CULedger, a credit union service organization offering MemberPass, a consumer-focused global digital identity solution. CULedger’s Julie Esser will present the credit union industry’s use of MemberPass and provide an update on their deployments and use cases.

Tune in on Wednesday, November 4, at 10:00 am EST. For more information and to register, go here.

Case Study: DLT Labs and Walmart Canada Transform Freight Transport Management

Learn more about how DLT Labs is using Hyperledger Fabric to resolve freight transportation invoice and payment challenges for Walmart Canada in this new case study.

Hyperledger Study Circle

Hyperledger Sweden hosts a Tech Study Circle every other Friday. The session is open to all who want to share learning experiences, clear doubts, educate each other and discuss various Hyperledger certifications.

The group meets this Friday, November 6, at 13:00 GMT/15:00 CEST. More details are here.

Virtual Meetups

Saturday, October 31, at 8:30 UTC / 14:00 IST: Hyperledger Bangalore hosts “Blockchain Techfest 2020”
Thursday, November 5, at 18:00 UTC / 11:00 MST: Hyperledger Denver hosts “Learn About The Blockchain Automation Framework (BAF) & Cactus”
Thursday, November 5, at 23:00 UTC / 18:00 EST: Hyperledger Toronto hosts “Climate Action and Accounting with Hyperledger”

See the full Virtual Meetup schedule here

The post Weekend Update: This Week’s Round-up of Remote Blockchain Learning Resources appeared first on Hyperledger.


OpenID

Second Public Review Period for Three Proposed FastFed Implementer’s Drafts


The OpenID Fast Federation (FastFed) Working Group recommends approval of the following specifications as OpenID Implementer’s Drafts:

FastFed Core 1.0
FastFed Basic SAML Profile 1.0
FastFed Basic SCIM Profile 1.0

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45-day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Implementer’s Drafts. For the convenience of members, voting will actually begin a week before the start of the official voting period.

These drafts incorporate updates to the specifications made in response to feedback during the previous review period. The previous versions did not become Implementer’s Drafts.

The relevant dates are:

Implementer’s Drafts public review period: Thursday, October 29, 2020 to Sunday, December 13, 2020 (45 days)
Implementer’s Drafts vote announcement: Monday, November 30, 2020
Implementer’s Drafts voting period: Monday, December 14, 2020 to Monday, December 21, 2020 (7 days)*

* Note: Early voting before the start of the formal voting will be allowed.

The FastFed working group page is https://openid.net/wg/fastfed/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “FastFed” working group on your contribution agreement), (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-fastfed, and (3) sending your feedback to the list.

— Michael B. Jones – OpenID Foundation Board Secretary

The post Second Public Review Period for Three Proposed FastFed Implementer’s Drafts first appeared on OpenID.


Second Public Review Period for OpenID Connect User Questioning API Specification Started


The OpenID MODRNA Working Group recommends approval of the following specification as an OpenID Implementer’s Draft:

OpenID Connect User Questioning API 1.0

This would be the second Implementer’s Draft of this specification.

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve this draft as an OpenID Implementer’s Draft. For the convenience of members, voting will actually begin a week before the start of the official voting period.

The relevant dates are:

Implementer’s Draft public review period: Thursday, October 29, 2020 to Sunday, December 13, 2020 (45 days)
Implementer’s Draft vote announcement: Monday, November 30, 2020
Implementer’s Draft voting period: Monday, December 7, 2020 to Monday, December 21, 2020 *

* Note: Early voting before the start of the formal voting period will be allowed.

The OpenID MODRNA working group page is https://openid.net/wg/mobile/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “MODRNA” working group on your contribution agreement), (2) joining the working group mailing list at http://lists.openid.net/mailman/listinfo/openid-specs-mobile-profile, and (3) sending your feedback to the list.

— Michael B. Jones – OpenID Foundation Board Secretary

The post Second Public Review Period for OpenID Connect User Questioning API Specification Started first appeared on OpenID.


Public Review Period for Proposed Final FAPI 1.0 Part 1 and Part 2 Specifications


The OpenID Financial-grade API (FAPI) Working Group recommends approval of the following specifications as OpenID Final Specifications:

Financial-grade API – Part 1: Baseline Security Profile
Financial-grade API – Part 2: Advanced Security Profile

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Final Specifications. For the convenience of members, voting will actually begin three weeks before the start of the official voting period because of the holidays, for members who have completed their reviews by then.

The relevant dates are:

Final Specifications public review period: Thursday, October 29, 2020 to Monday, December 28, 2020 (60 days)
Final Specifications vote announcement: Monday, November 30, 2020
Final Specifications voting period: Tuesday, December 29, 2020 to Tuesday, January 5, 2021 (7 days)*

* Note: Early voting before the start of the formal voting will be allowed.

The FAPI working group page is https://openid.net/wg/fapi/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “FAPI” working group on your contribution agreement), (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-fapi, and (3) sending your feedback to the list.

— Michael B. Jones – OpenID Foundation Board Secretary

The post Public Review Period for Proposed Final FAPI 1.0 Part 1 and Part 2 Specifications first appeared on OpenID.

Thursday, 29. October 2020

Omidyar Network

Keep Voting Until We Get it Right


By Mike Kubzansky, CEO, Omidyar Network

2020 has challenged and consumed us — as individuals and as a society. From the pandemic to politics, to the severe economic downturn and the desperate cries for Black lives to matter; too many across the country and around the globe are struggling — financially, physically, and mentally.

We are at times scared, or angry, or resigned, and a pervasive sense of helplessness underpins so much of current daily life, across whole swathes of society. Yet so many people have remained remarkably resilient and resolute. We have donned our masks and taken to the streets to peacefully protest, help at food banks, or simply serve as poll watchers. We have prayed, meditated, eaten (and then tried to exercise) our way through this year of chaos and uncertainty. Several of us even got new kittens or puppies.

While everyone has different coping mechanisms, there is one thing I hope we all do: vote.

As political scientist Larry Sabato said, “Every election is determined by the people who show up.” Whether it’s by mail, drop box, in person, early, or on election day (masked, of course), we must all show up. We believe deeply in democracy. It is the bedrock of this nation, and it will only be of, for, and by all the people if we all vote. That’s why Omidyar Network signed the A Day for Democracy pledge to give our staff in the US time off to vote and do their civic duty.

And when voters do their civic duty, their votes must be counted. We are proud of the work of our sister organization, Democracy Fund, and many of our grantees like Community Change Action, Demos Action, Public Citizen, and Jobs With Justice for their ongoing efforts to dismantle barriers to voting, so that everyone who is eligible can vote with ease — and can trust that election outcomes are legitimate, so that we have genuine majority rule by a full and fully empowered electorate. It’s also why I signed on to World Justice Project’s letter, along with a number of other foundation presidents, demanding that the rule of law is protected to ensure the security and integrity of the elections.

We cannot let expediency outweigh accuracy. We must be patient as election officials do their jobs and count every vote, even if it takes extra time before a winner is declared. Ensuring that every vote is counted will enhance the legitimacy of the election and strengthen our democracy.

While democracy is not a focus area for Omidyar Network — nor is improving its functioning our mission — it weaves through much of our work to strengthen technology and the economy. And pluralism, if true to its ideals, should have spillover benefits that improve democracy.

In Responsible Technology, we know that online platforms with outsized power can ravage our democracy by allowing unfettered hate, bigotry, racism, antisemitism, and disinformation to run rampant. We support organizations leading the #StopHateforProfit campaign to call on Facebook to change its policies that inflame intolerance and undermine democracy. We also support Graphika’s investigations into domestic disinformation and potential foreign interference in the 2020 election, as well as its partnership with the Election Integrity Partnership, which identifies instances aimed to “prevent or deter people from voting or to delegitimize election results.”

In our call to reimagine capitalism, we recognize that democracy and the economy are fundamentally intertwined. We cannot address one without taking into account the other. Concentrated economic power too easily spills over into political power. That has resulted in policies that prevent the economic interests of the many from being fulfilled. We are working to address these imbalances, and support communities, working families, and small businesses to have greater power and voice in the economic decisions that impact their futures.

We also cannot reflect on American democracy without reflecting on race. The US is still not living up to its democratic ideal in which all people are created — and treated — as equal. In addition to the continued horror of the 164 Black individuals who have been killed by police through August, we see every day continued work to disenfranchise and suppress voting by Black and brown voters. So it’s not surprising that some voters drop out of the system and choose to simply not participate — real change has been too slow to come.

We know that change won’t always happen as quickly as we would like — or many times need — but it will happen. Voting is the first step on this journey toward a more just and equal future.

As President Obama said in Philadelphia at a rally last week, “Voting’s about using the power we have, and pooling it together to get a government that’s more concerned, and more responsive, and more focused on you and your lives and your children, and your grandchildren, and future generations. And the fact that we don’t get 100% of what we want right away is not a good reason not to vote. It means we’ve got to vote, and then get some change, and then vote some more, and then get some more change, and then keep on voting until we get it right.”

Keep Voting Until We Get it Right was originally published in Omidyar Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Berkman Klein Center

What Digital Contact Tracing Can Teach Us About Public Trust, Health Equity, and Governance in the…

What Digital Contact Tracing Can Teach Us About Public Trust, Health Equity, and Governance in the United States

Reflections from over a dozen Digital Pandemic Response Working Group meetings hosted by the Berkman Klein Center for Internet & Society at Harvard University.

By Adam Nagy with contributions from Hilary Ross

Screenshot from a visual tutorial on digital contact tracing narrated by DPR Working Group Co-Chair Professor Jonathan Zittrain and produced by the BKC Digital Pandemic Response staff team. This animated sequence illustrated how Bluetooth enabled exposure notification systems function.

As of October 2020, COVID-19 cases are continuing to climb across much of the world. The global economy is in tatters. Education, commerce, worship, and all things in between remain disrupted. Over a million people are dead.

The overall situation was resoundingly similar five months ago when the Berkman Klein Center for Internet & Society launched its Policy Practice focusing on Digital Pandemic Response (DPR), along with a Digital Pandemic Response Working Group (“Working Group”). With generous support from the Ford Foundation, Hewlett Foundation, and MacArthur Foundation, the DPR Program and Working Group emerged as the linchpins of BKC’s response to the crisis. As members of the BKC staff supporting this project, we’ve had the opportunity to join the Working Group as observers, scribes, and contributors: the following is a snapshot of what we considered to be some of the significant learnings, unturned stones, and ongoing debates of the Working Group over the past several months.

About the Working Group

Amidst clouds of uncertainty, group members saw an opportunity to attenuate this unprecedented crisis — if not in whole, at least in part — using the equally unprecedented capabilities of the digital age. We wanted to examine those opportunities together, building bridges and regularly convening experts and policymakers from across sectors to dig into the complex social, technical, and political questions, and sharing learnings as we went. To start, the Working Group focused on contact tracing, an essential pillar of an effective public health response and a key locus for significant human and technological investment.

Composed of representatives from public health, technology, and state government, alongside civil liberties advocates, legal experts, and academics, the Working Group has operated as a weekly venue for issue-spotting, collaboration, and candid, Chatham House Rule discussion with invitees. The group’s attention has largely focused on the use of information and communications technology (ICT) to support or supplement traditional contact tracing efforts. Over the course of this work, the group gravitated towards several key questions around how technology and access to data can support contact tracing — and surfaced important concerns related to technical limitations, public health efficacy, and civil liberties risks. Succinctly, these interrelated domains are the essential relationship between public trust and digital interventions, the risks of uneven costs and benefits of digital interventions, and the urgent need for increased coordination between and among the private sector, public health professionals, researchers, and political leadership.

The Essential Relationship Between Public Trust and Digital Interventions

The pandemic struck at a time when Americans’ trust in government, institutions, experts, and information is at near-record lows. And it is precisely among some of the hardest-hit communities where these deficits are most well-founded — why should an undocumented immigrant trust a government-employed contact tracer? Why should people of color, especially Black Americans, trust that an app won’t use their data to harm them in the future? Why should anybody trust a tech company not to sell, misuse, or otherwise fail to protect their data?

This is an enormous problem, and barring enforced mandates, tools in the public health kit hinge on the public’s willingness to use them. Trust is a key enabler when it comes to the efficacy of different pandemic responses from population-level interventions such as wearing masks and social distancing, to downloading an app or picking up the phone to share information with a contact tracer.

More Privacy or More Data — Debating Difficult Trade-Offs:

Given this environment, many digital solutions, most notably the Apple and Google Exposure Notification system (GAEN), prioritized preserving user privacy at the expense of providing contact tracers with more comprehensive contextual data. Reasonable minds may (and do) disagree about whether any individual approach gets the balance of trade-offs right. GAEN is considered the ‘gold standard’ from a privacy standpoint, but there are a host of non-GAEN digital contact tracing apps in development or in use around the world. What is clear is that robust privacy protections are intended to make adoption more palatable for wary governments and populations. But a privacy-preserving app is certainly not a sufficient condition for high adoption among a population — adoption rates differ significantly across countries even when the underlying technology is identical — and adoption rates, while a crucial metric, are not a perfect proxy for efficacy and actual public health impact.
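The decentralized matching idea at the heart of GAEN can be sketched in a few lines: phones broadcast short-lived identifiers derived from a daily key, a diagnosed user publishes only daily keys, and every other phone re-derives identifiers locally to check for matches, so no central server learns who met whom. The sketch below is greatly simplified; it uses HMAC-SHA256 as a stand-in for GAEN’s actual HKDF/AES derivation, and all quantities are illustrative.

```python
# Greatly simplified sketch of decentralized exposure notification.
# HMAC-SHA256 is an illustrative stand-in for GAEN's real key schedule.
import hmac
import hashlib
import secrets

def rolling_ids(daily_key: bytes, intervals: int = 144):
    """Derive one pseudorandom identifier per 10-minute interval of a day."""
    return {
        hmac.new(daily_key, interval.to_bytes(4, "big"),
                 hashlib.sha256).digest()[:16]
        for interval in range(intervals)
    }

# Alice's phone generates a fresh daily key and broadcasts its rolling IDs.
alice_key = secrets.token_bytes(16)
alice_broadcasts = rolling_ids(alice_key)

# Bob's phone passively records identifiers it overhears nearby.
observed = set(list(alice_broadcasts)[:5])   # Bob was near Alice briefly

# Alice tests positive and uploads only her daily key to the server.
published_keys = [alice_key]

# Bob's phone re-derives IDs from the published keys and matches locally;
# the server never learns whom Bob encountered.
exposed = any(rolling_ids(k) & observed for k in published_keys)
print(exposed)   # True
```

Because the identifiers are pseudorandom and unlinkable without the daily key, observers cannot track users, which is exactly the privacy-for-context trade-off debated above.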

Even once someone has downloaded a digital contact tracing or exposure notification app, opportunities for attrition abound. First, users must activate the app and keep it on. Upon receiving a positive test result or an exposure notification users must be proactive — either uploading their result to warn others or heeding the instructions accompanying the exposure notification. In other words, it is not enough for people to simply download the app. As Working Group member Mary Gray has written elsewhere, solving the “wicked hard tech problem” of notifying exposed individuals in a privacy-preserving way does not solve the critical last mile problem of routing them to trusted care workers and ensuring they self-quarantine with the necessary support to weather COVID-19. Accomplishing that task — under the privacy-protective constraints of GAEN — will require creative, non-technical solutions such as employing trusted local institutions or community members to serve as extensions of public health agencies to encourage participating in contact tracing programs — digital or otherwise — and to connect people who’ve tested positive to necessary resources.

Trust, but Verify: Evaluating Efficacy

Aside from users, there is an additional, crucial layer of trust: do public health authorities have confidence in the contact tracing app? A major unresolved barrier to buy-in concerns the lack of large-scale, replicated studies measuring the efficacy of these digital interventions. While developers have expressed concern with ‘analysis paralysis’ and the slow pace at which jurisdictions adopt these apps — arguing in favor of deploying quickly and making improvements down the line — it is difficult to expect jurisdictions to invest time and resources into still under-proven products. There are significant concerns around the costs of releasing technologies that are not proven as efficacious in terms of spent political capital, eroded public trust, and overall reduced adoption rates. Striking the right balance will remain challenging until there is more data available on the efficacy of these interventions.

In the interim, technologists and developers must work diligently to reduce other areas of uncertainty and pain points for public sector implementers. These include the potential for privacy and security breaches, overall cost, and lack of input from subject matter experts during the design process. Organizations like the Linux Foundation for Public Health (LFPH) have attempted to reduce these challenges by channeling the attention of the OS community and substantial resources to the increasingly widely adopted GAEN-enabled COVID Shield and COVID Green apps, vetting developers and firms for jurisdictions to work with, and producing useful guidance documentation. Google and Apple’s Exposure Notifications Express — turn-key solutions to building and launching a contact tracing app — will expedite adoption while allowing local authorities to customize certain key elements. The willingness of organizations to serve as ‘air-traffic’ controllers — certifying third-party vendors, scaling testing, creating replicable best practices, and working closely with local officials and public health authorities — will be critical drivers of trust and buy-in.

Takeaways and Calls to Action:

State and local public health leaders and policymakers should urgently invest in building and strengthening relationships with locally trusted community groups, which are critical to helping individuals and communities collectively support each other and understand strategies to weather COVID-19. In states that have adopted exposure notification apps, community groups could help members understand privacy-preserving exposure notification tech as part of a suite of holistic supports.

As exposure notification and digital contact tracing technology is so new, we need further research on its efficacy. Who is responsible for evaluating efficacy, potential adverse effects, and the impact of these technologies across a variety of communities? What monitoring structures ought to be put into practice? (For example, Ireland established a COVID Tracker App Advisory Committee to continually assess the efficacy of the app alongside a mechanism to wind-down the app in 90 days if they find it ineffective.) What might cross-sector evaluation of these technologies look like, with companies, researchers, policymakers, and community groups working in collaboration?

A complete cost-benefit analysis for policymakers will require research into a variety of questions — many of which are outlined in this robust research agenda from the University of Zurich. Some of this work is already underway — and it might be a useful exercise to catalog the existing literature.

Health (In)Equities: Uneven Costs and Benefits of Digital Interventions

The impacts of COVID-19 are not distributed equally — its devastating effects fall most heavily along pre-existing lines of socioeconomic and racial inequality. COVID-19 death rates are disproportionately high across communities of color, especially among Black Americans, who are more likely to contract COVID-19, be hospitalized for it, and die of the disease in comparison to other racial groups. Longstanding disparities in access to health insurance, employment, education, quality healthcare, environmental justice, and other rights increase the prevalence of comorbidities in Black, Latino/Hispanic, Native American, and Pacific Islander populations, making them more vulnerable to COVID-19 than other racial groups. In addition to compounding pre-existing medical conditions, social determinants can make transmission events more likely. For example, racial minorities in general, and Black Americans in particular, are more likely than non-Hispanic White populations to use public transit as a primary mode of transport, to live in densely populated areas and in multigenerational households, and to work in essential frontline jobs that preclude isolating at home. The Working Group has grappled with how digital public health interventions account for these disparities and can risk exacerbating them further.

Data Disparities

A major issue of concern raised to the Working Group pertains to gaps in state and federal public health agency data collection regimes, particularly with respect to important demographic information such as race, ethnicity, language, and other factors. In the absence of this data, doctors and epidemiologists have had to build improvised solutions in order to better understand the disparate impacts of COVID-19 and create response strategies. These innovations, while highly impressive, are ad hoc and cannot substitute for improvements in collection, universal data standards, and best practices. Of course, the collection of demographic data is fraught — take, for instance, the legal fight over the inclusion of a citizenship status question in the 2020 Census — and without robust safeguards and public trust in how the data will be used there are risks such as stigmatization or suppressed response among vulnerable populations. Scholars Merlin Chowkwanyun and Adolph Reed Jr. recently explored the risks of relying on demographic data without also considering socioeconomic status data in The New England Journal of Medicine, arguing inter alia that on its own demographic data can erroneously feed perceptions that a social problem is actually a “primarily racial” one, which in turn can be “used to rationalize neglect and funding cuts.” They go on to argue in favor of incorporating data on socioeconomic factors alongside demographic information to help preempt these dangers, discover how risks intersect across different racial and social lines, and also draw attention to racial disparities. Data collection that ignores these key areas limits healthcare professionals’ ability to develop a complete hierarchy of risk on which to base the prioritization of care and resources.

The Working Group also discussed the potential of aggregated mobility data from mobile operators and sources like GPS trace data from social media applications to inform epidemiological models and to provide policymakers with important insights regarding movement patterns and a way to evaluate the efficacy of different strategies. But this data must be interpreted in context: urban and rural populations behave differently, as do communities of differing socioeconomic status. Further, data quality gets worse in rural areas. Each of these factors must be considered in interpreting data, finding and correcting for biases, and comparing across datasets.

Digital Divides

App-based digital contact tracing solutions risk leaving disproportionately vulnerable populations at a further disadvantage due to unequal access to smartphones. Furthermore, false alerts from inaccurate digital contact tracing tools will fall hardest on essential workers — disproportionately people of color — who, unlike many white-collar workers, cannot work from home. Because the costs of a false-positive exposure notification are not borne equally, policymakers ought to consider necessary legal protections and social benefits to support individuals alerted that they should quarantine for 14 days. Overall, there is inadequate investment of key resources to protect communities of color; this lack of investment has both contributed to and revealed inequalities that make populations of color more susceptible to the virus.

Takeaways and Calls for Action:

In order to understand people’s lived experiences of the pandemic and develop equitable mitigation strategies to protect communities, data-collecting organizations — both government agencies and academics — should be collecting relevant disaggregated health data on race, ethnicity, language, socioeconomic status, and environmental factors that contribute to inequities, such as ventilation and levels of crowding in living conditions.

Policymakers, the private sector, and academics should consider how mobility data moves across sectors and ensure that government actors use datasets in a manner that is proportionate and necessary to achieve a legitimate aim. Social media data, adtech data, and data from third-party aggregators are particularly sensitive, and in some jurisdictions privacy regulations are far less rigorous than those imposed on mobile operator data.

People need support to be able to effectively self-isolate and quarantine, but lack of resources prevents many people from being able to do so. Policymakers, community groups, and other relevant institutions should strongly consider offering funding and other community-based support to those who need to self-isolate and quarantine and therefore are unable to work. Policymakers should implement legal protections to protect those who need to self-isolate and quarantine from a broad range of adverse impacts at work, at home, and in their communities.

Domestic Disarray: The Urgent Need for Increased Coordination in the United States

In the United States, the COVID-19 response has been stymied from the start due to the Trump administration’s decision to delegate many decisions to industry, state, or local leaders. The lack of a national strategy on everything from mandatory mask requirements to the allocation of protective equipment has proven disastrous. Federal leadership and coordination of contact tracing efforts — digital or otherwise — continues to be lackluster. Expanding the contact tracing workforce, creating technological components, and bolstering public trust in contact tracing has been left to the states, albeit with guidance, assistance, and funding available from the Centers for Disease Control and Prevention (CDC) and other agencies. What have been the consequences of limited federal coordination of contact tracing efforts across jurisdictions and the lack of standardized tools and practices? What is filling in the gaps? And looking outwards, are there lessons to be learned from international contexts?

Weighing the Costs and Benefits of a Decentralized Approach

Most countries engaged in digital contact tracing adopt a singular, national app. The United States has taken a decentralized approach. Currently, a growing number of states and territories use GAEN-enabled apps; South Dakota, Utah, and Rhode Island use GPS-based apps; and the majority of states do not have an app. Meanwhile, employers, universities, and other private entities are steadily developing or adopting their own hyper-local solutions.

Decentralization gives the public health authorities actually engaged in contact tracing more flexibility with respect to the allocation of their funds and energies, enables them to customize apps according to their needs, and crucially, affords them the freedom to decide between GAEN and competing approaches.

But there are obvious trade-offs to this heterogeneous approach. These include the uneven adoption of digital contact tracing (most states still don’t have an app), interoperability problems, and a disjointed outreach/communications strategy. Thankfully, different GAEN-enabled apps can now communicate exposure events to one another across state lines if they are configured to use the Association of Public Health Laboratories national key server — but not all of them are configured to do so, non-GAEN apps do not work with the server, and it took months (and the generosity of private actors) to get the system up and running.
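The interoperability the shared key server provides can be made concrete with a small sketch. In the GAEN design, a user who tests positive uploads daily exposure keys; every participating app downloads those keys and checks them against the rolling identifiers it heard over Bluetooth. The real system uses the AES-based derivation specified by Google and Apple; the Python below substitutes an HMAC-based stand-in and hypothetical function names purely to illustrate the matching flow, and is not the actual protocol.

```python
import hashlib
import hmac

def derive_rolling_ids(exposure_key: bytes, intervals: range) -> set:
    """Simplified stand-in for GAEN's AES-based rolling proximity
    identifier derivation: mix each 10-minute interval number into
    the daily key. (The real spec uses AES, not HMAC.)"""
    return {
        hmac.new(exposure_key, i.to_bytes(4, "little"), hashlib.sha256).digest()[:16]
        for i in intervals
    }

def find_exposures(diagnosis_keys, observed_ids):
    """diagnosis_keys: (key, interval_range) pairs downloaded from a
    shared key server; observed_ids: identifiers this phone heard
    over Bluetooth. Returns the keys that produced a match."""
    matches = []
    for key, intervals in diagnosis_keys:
        if derive_rolling_ids(key, intervals) & observed_ids:
            matches.append(key)
    return matches

# A phone that heard an identifier derived from a positive user's key
# gets a match regardless of which state's app uploaded that key --
# which is why a single shared server enables cross-state exposure events.
positive_key = b"\x01" * 16
heard = derive_rolling_ids(positive_key, range(0, 3)) | {b"\x00" * 16}
assert find_exposures([(positive_key, range(0, 144))], heard) == [positive_key]
```

Because matching happens entirely on the device, the only coordination the states need is agreement on where diagnosis keys are published — which is exactly the role the APHL server plays.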

Lessons from Europe

A state-by-state model need not, however, force the Federal government to the sidelines. The European Union, which also adopted a decentralized approach, is instructive. Nearly every EU member state has a digital contact tracing app with far higher adoption rates than seen in the United States — and there are significant ways in which the EU has been more proactive than the Federal government.

Standardizing and clarifying privacy protections and requirements is one such aspect. Congress has proposed several COVID-19 privacy bills, but none have passed, so currently no single federal law creates consistent and clearly-applicable privacy protections with respect to contact tracing. The European Commission and the European Data Protection Board, on the other hand, have issued legal guidelines and decisions to EU Member States regarding digital contact tracing (See, 1, 2, and 3), aggregated mobility data (See, 1 and 2) and interoperability.

Providing detailed technical guidance and assistance is another. The CDC guidelines to states around digital contact tracing are high-level and cursory. Federal assets such as 18F or the U.S. Digital Service, which could have been deployed to assist states with the design and roll-out of their own apps, have not been empowered to do so. Meanwhile, EU organs have produced significant technical guidance to Member States as well as concrete technical solutions — most notably creating a single European Federation Gateway Service to make GAEN apps interoperable.

Other entities have attempted to fill the leadership vacuum left by the Federal government. Private non-profit organizations such as the Linux Foundation Public Health or PathCheck Foundation are providing jurisdictions with technical assistance and solidifying best practices. Organizations such as the National Governors Association, the Association of State and Territorial Health Officials, and many others are using their coordinating power and expertise to issue recommendations to members and concentrate efforts.

Takeaways and Calls to Action:

Fully launched in October, the “Federation Gateway” enables European Union Member States with properly configured GAEN-apps to interoperate. Launched in July, the Association of Public Health Laboratories (APHL) National Key Server and Verification Server allows properly configured GAEN-apps in U.S. states and territories to interoperate. While the end results are similar, the associated processes were markedly different. In the EU, the European Commission led work with member states to coordinate on interoperability guidelines, technical specifications, and the adoption of a legal basis for the service. In the United States, the APHL, a non-governmental non-profit, partnered with Apple, Google, and Microsoft to create and host a national exposure key server (as well as a unified verification server). The APHL-led approach brought an interoperable server online more quickly than the European Commission’s effort, but is it possible that the EU’s holistic approach will offer more enduring lessons?

Our goal — if not for this pandemic, then at least the next one — ought to be a globally interoperable system for digital contact tracing. Accomplishing this will require tremendous synchronization across diverse legal and policy architectures. The EU’s effort, which confronted not only the technical question but legal questions as well, is a step in that direction.

Uneven state-level adoption of contact tracing apps opened the door to hyper-local test and trace solutions by private institutions, namely universities and employers. Labeled the “company-town model” by Working Group Co-Chair Prof. Jonathan Zittrain, this approach has had some unsavory real-world consequences. In one extreme example, Albion College deployed a mandatory app with around-the-clock location surveillance that was discovered to have several security flaws, endangering the privacy of their students. As the chief enforcers of privacy protection laws, state attorneys general must remain vigilant about the privacy impacts of these apps — and would do well to issue guidelines to developers to help ensure safeguards are met.

In instances where a private institution has both a large footprint in a community as well as the resources to conduct a rigorous testing and tracing program — as is the case with Harvard University — decision-makers ought to consider how to mitigate any negative community effects reopening may cause.

Concluding Thoughts

The early success of digital contact tracing in countries like South Korea — which had a critical combination of high levels of public trust, a legal framework that enabled sweeping public health surveillance using a variety of data sources, and a robust pandemic response infrastructure — inculcated a belief in mobile technology as a critical tool to fight the spread of COVID. That belief persists in many corners, as evidenced by the slow but steady rollout of different contact tracing apps throughout the US and globally, but it has been tempered.

A silver bullet solution was always unlikely. An app cannot solve for an impoverished public healthcare infrastructure, for inadequate support of the sick and the possibly sick, or for leaders that actively spread misinformation and undermine trust in scientific authorities. But ‘no-app’ doesn’t solve these issues either.

Digital contact tracing solutions are arrows in our quiver, silver or otherwise, and ones that are increasingly affordable and accessible to public health authorities. The challenge now is making them as effective as possible and also, as with any weapon, working to prevent unintended harms. Invested stakeholders can re-invigorate enthusiasm for these solutions by supporting research that investigates the effectiveness of ongoing efforts and ways to improve their effectiveness. There are legitimate concerns that ineffective or insecure tools will erode the public’s trust in government-sponsored technological solutions or that the impacts of false positives will be unequally distributed. Authorities need to have a transparent plan for not only monitoring the effectiveness of these interventions in breaking transmission chains but also to guard against unintended consequences, particularly for already vulnerable or disenfranchised populations.

Until a vaccine is developed and widely dispersed, our work will continue. Despite the fact that many schools and workplaces are reopened (at least partially) and that states are easing restrictions on various activities such as indoor dining, we are still deep in this crisis. As the Working Group looks towards engaging in new topics — such as vaccine distribution schemes — it’s crucial to take stock of what we’ve learned during our examination of digital contact tracing, why we wrestled with these questions, and how we can best use what we’ve learned moving forward.

What Digital Contact Tracing Can Teach Us About Public Trust, Health Equity, and Governance in the… was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 28. October 2020

Kantara Initiative

Spotlight on Airside

1.     In which sector(s) and country(ies) do you operate? Airside’s privacy-based digital identity solutions help individuals and organizations share, verify, and manage personal information. While Airside started with the goal of using mobile technology to drive convenience throughout the travel experience, we believe no one should have to choose between convenience and privacy […]

1.     In which sector(s) and country(ies) do you operate?

Airside’s privacy-based digital identity solutions help individuals and organizations share, verify, and manage personal information. While Airside started with the goal of using mobile technology to drive convenience throughout the travel experience, we believe no one should have to choose between convenience and privacy in any digital exchange of personal information. Our products and services are available and operational globally across a variety of sectors, such as travel, tourism, health, financial services, and more.

2.     What is your current market focus and what needs and areas of concern does your service/product address?

Our new Airside App is facilitating an easy and safe way for airline travelers to share their verified IDs from their mobile device, then self-check their luggage with facial recognition technology at the airport. Our app may also be used to enable other seamless, touchless, biometric experiences in the travel sector. For instance, our interoperable solution allows airports and airlines to implement the ID verification process without asking travelers to re-enroll information for each trip or sacrifice the privacy of sensitive information, enabling airports and airlines to provide the best experience for customers and employees.

Coming soon, Airside will also enable the sharing of verified COVID test results. Individuals can easily add their verified results to the new Airside App directly from a certified lab and securely share the results with an employer, travel operator, venue, and more. These organizations gain confidence that test results have not been tampered with and are compliant with local public health requirements, and worldwide privacy regulations.

Since 2014, our award-winning Mobile Passport App has expedited a safe process through U.S. customs for U.S. and Canadian passport holders. Mobile Passport is the first app authorized by U.S. Customs and Border Protection (U.S. CBP) and is trusted by over nine million people.

3.     In what ways does your service/product push beyond the state-of-the-art?

Privacy and security are at the core of everything we build, and now, more than ever, the integrity of our virtual selves is of the utmost importance. We see how identity theft and identity fraud have skyrocketed with the rapid need for technology during COVID-19. While there may be other ways to verify and share IDs and other personal information, Airside does so in a way that is user-controlled and consent-driven, putting the individual in control of their data. This mitigates health and security risks that come with tedious and often inconsistent paper processes, human intervention and low-grade technology.

Our products and services build a connection of digital trust between people and the organizations with whom they share their ID(s). What’s more, Airside cannot access, does not own, and will not sell the information transmitted using our network. We only keep transaction records (which do not contain any personal information) for audit purposes.

4.     Where do you see the market trending in the mid/long term and how do you see your product/service playing into it?

The paradigm for digital identity has shifted and Airside is uniquely built in such a way to lead the industry with the highest levels of security and privacy standards. Digital identity now serves a critical function for everyday tasks. People are digitally sharing everything, from driver’s licenses and passports to school records and gym memberships. At the same time, the general public is becoming more aware and empowered by well-known privacy regulations, like GDPR and CCPA. Biometrics, for instance, is no longer a topic for tech-savvy early adopters. There is an immediate need for it and the secure transaction of all personal information. So, while people will continue to seek more virtual opportunities to conduct certain transactions, they’ll increasingly look for organizations or businesses that keep them safe, physically and digitally.

A lot of industries hit by COVID-19 are going to rebound and those who commit to personal data rights will do so faster than others. We’re eager to work with industries seeking to keep their employees and customers safe and healthy using our secure, scalable solutions to manage personal information with transparency.

5.     Why were you motivated to join Kantara Initiative and how do you see it helping your organization specifically, and the digital economy as a whole?

Airside has great respect for Kantara Initiative’s work to make our complex, connected lives ones that are user-centric, consent-driven, and ultimately, built on trust. We were motivated to join Kantara Initiative to help us establish an architecture where a clear, unambiguous information exchange can take place. Using Kantara’s Consent Receipt specifications, we are able to offer our partners a service that eliminates the need for centralized databases of personal information and greatly simplifies their compliance requirements. This also provides our network members with clarity of the scope of their personal information and control as to when, how and with whom to share it in a focused, deliberative manner.

It’s a huge step when we can bind our secure and convenient technology with first-class privacy and data protection principles, as standardized by Kantara and echoed in various data protection laws, regulations and international standards. Being together in the global, thought-leading digital economy with Kantara means that Airside’s digital identity privacy network has put the highest levels of privacy assurance and interoperability into real-life practice.

6.     What other significant aspects about your organization (service/product) should our global community know about?

Airside has always believed that privacy is a human right and builds secure and convenient digital identity technology in a manner that protects personal information and meets privacy regulations around the world accordingly. We promise to lead and innovate with our core principle, “Privacy First. Human Always.” Everyone deserves to be treated with equality, respect, and dignity because, ultimately, our true identity is as a human always.

About Airside

Airside is a private company that builds secure and convenient digital identity technology in a manner that protects personal information and meets privacy regulations around the world. Airside keeps privacy-by-design and leading-edge security as its guiding principles for use by individuals and organizations. Individuals are empowered by controlling their personal information with transparent consent protocols, and organizations protect against fraud and ensure a duty-of-care by managing only necessary information. The use of advanced encryption technology keeps all information safe and decentralized, eliminating any source verification database that can be accessed or used by Airside. Airside is headquartered in Arlington, Virginia. Learn more at www.airsidemobile.com or follow us on LinkedIn, Twitter, Facebook, Instagram.

Contact:

Airside Corporate Communications

Email: press@airsidemobile.com

Telephone: (800) 210-6838 x 6


Me2B Alliance

Stay connected with us!

Hi friends,   We're proud to announce a new milestone: the Me2B Alliance is now officially a membership organization! After lots of work behind the scenes, we've made it easy for members to access the full resources of the Alliance community through our online Member Portal.    In order not to miss out on exciting opportunities and events, we invite you to sign up before Monday, November 2.
Hi friends,

 

We're proud to announce a new milestone: the Me2B Alliance is now officially a membership organization! After lots of work behind the scenes, we've made it easy for members to access the full resources of the Alliance community through our online Member Portal. 

 

In order not to miss out on exciting opportunities and events, we invite you to sign up before Monday, November 2. As an Alliance member, you'll be a part of a diverse group of human rights activists and software engineers, business leaders and legal eagles, educators and astute consumers--all contributing their collaborative energies to the respectful technology movement.

 

Also, our email newsletter will be launching soon, so stay tuned!

 

Member Signup: https://me2ba.org/membership/

 

Thanks for the support,

 

Lisa LeVasseur

Executive Director

Tuesday, 27. October 2020

Berkman Klein Center

Data for better lives

An agenda for meaningful connectivity in Africa The World Bank’s annual World Development Report 2021 themed “Data for better lives’’ will be launched in Q1 2021. In our modern world where the impact of data and digital technology has never been more keenly felt, the focus of the World Bank — a major international developmental partner — on improving lives through data and technology could n
An agenda for meaningful connectivity in Africa

The World Bank’s annual World Development Report 2021 themed “Data for better lives’’ will be launched in Q1 2021. In our modern world where the impact of data and digital technology has never been more keenly felt, the focus of the World Bank — a major international developmental partner — on improving lives through data and technology could not have been more timely.

This is particularly true for Africa, the least digitally connected continent. Africa lags behind the rest of the world in Internet access, digital device access, and digital skills. The World Bank’s focus on data and technology in 2021 should rally governments, industry, academia, and civil society to push for the actualization of developmental targets focused on digital penetration and adoption.

Photo: Pixabay

An important facet of the theme “data for better lives’’ is an acknowledgment that in developing countries, the collection, collation, and analysis of development data which forms the basis of national planning is done infrequently and often lacks the granularity required to achieve its objectives. Many developing countries have poorly supported government statistical offices. Within these contexts, digital signals such as from mobile phone usage and Internet use can be useful, when expertly marshaled, to fill in gaps in developmental data needed to inform developmental outcomes. The most recent blog post by the World Development Report 2021 team highlights the poignant example of the use of social media activity to improve traffic safety in Nairobi, Kenya.

What is obvious in the profitable and scalable use of these new sources of data is the need for a critical mass of Internet and digital device users in Africa. Applications of these emerging Big Data sources are most efficient and reliable when there is an abundance of data. The World Bank’s focus on the role of technology and data in national development is another opportunity for national governments, industry, academia, and civil society to rally for increased digital penetration and adoption.

A useful framework for thinking about what constitutes real digital progress for nations is the concept of “meaningful connectivity’’. According to the Alliance for Affordable Internet (A4AI), we have meaningful connectivity when we can use the internet every day (regularly) using an appropriate device (smartphone) with enough data (unlimited broadband) and a fast connection (minimum 4G). Only under these conditions of user experience can the durable benefits envisaged by the World Bank’s “Data for Better Lives’’ theme be realized.
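The A4AI definition is effectively a four-part threshold test, which can be encoded directly. The sketch below is our own illustrative rendering of those thresholds, not an A4AI artifact; the field names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative encoding of the A4AI "meaningful connectivity" thresholds
# described above: daily use, a smartphone, unlimited broadband, and at
# least a 4G connection. Field names are our own, not A4AI's.
@dataclass
class ConnectivityProfile:
    uses_internet_daily: bool
    has_smartphone: bool
    has_unlimited_broadband: bool
    connection_generation: int  # 3 for 3G, 4 for 4G, etc.

def is_meaningfully_connected(p: ConnectivityProfile) -> bool:
    # All four conditions must hold; falling short on any one of them
    # leaves a user outside the "meaningful connectivity" standard.
    return (p.uses_internet_daily
            and p.has_smartphone
            and p.has_unlimited_broadband
            and p.connection_generation >= 4)

assert is_meaningfully_connected(ConnectivityProfile(True, True, True, 4))
# Daily 3G use on a smartphone, for example, does not meet the bar:
assert not is_meaningfully_connected(ConnectivityProfile(True, True, True, 3))
```

Framing the definition this way underscores its conjunctive character: progress on device access alone, or on coverage alone, does not move a population across the threshold.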

From applications ranging from traffic management, mobile money, e-commerce, e-health, software development, and many others, opportunities for individual and national development are opened up when there is a critical mass of digitally connected citizens. “Data for Better Lives’’ thus serves as a rallying point for countries to work towards achieving international developmental goals related to digital connectivity.

Data for better lives was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Breakdown: Foreign Interference and the U.S. 2020 Election

Election interference and platform interventions in the lead-up to November 3 Oumou Ly (left) interviews Naima Green-Riley (right) for The Breakdown from the Berkman Klein Center. Concerns about election interference and disinformation are rampant in the weeks before the U.S. presidential election on November 3. In this episode of The Breakdown, Assembly Staff Fellow Oumou Ly interviews
Election interference and platform interventions in the lead-up to November 3

Oumou Ly (left) interviews Naima Green-Riley (right) for The Breakdown from the Berkman Klein Center.

Concerns about election interference and disinformation are rampant in the weeks before the U.S. presidential election on November 3. In this episode of The Breakdown, Assembly Staff Fellow Oumou Ly interviews Naima Green-Riley, a PhD candidate in the Department of Government at Harvard University.

Ly and Green-Riley review recent foreign interference, the weaponization of social issues, and various platform interventions to mitigate the spread of mis- and disinformation ahead of the election.

Read the transcript, which has been lightly edited for clarity:

Oumou Ly (OL): Welcome to The Breakdown. My name is Oumou. I’m a fellow at the Berkman Klein Center on the Assembly: Disinformation program. I am really excited to be joined today by Naima Green-Riley. Naima’s a PhD candidate at the Department of Government at Harvard University, with a particular focus on public diplomacy and the global information space. She also was formerly a foreign service officer and a Pickering fellow. Welcome, Naima. Thanks so much for joining.

Naima Green-Riley (NGR): Well, thank you so much for having me.

OL: So our conversation today centers on foreign interference in the upcoming election, which is drawing close as of the time of this recording. We’re about two weeks out from November 3rd. One big topic on my mind today is the big threat actors this time around… We know that 2016 was sort of a watershed moment in terms of foreign interference for American democratic processes.

In terms of social media manipulation in particular, how do foreign influence efforts in 2020 look in contrast to active measures we saw in 2016? Have the primary threat actors changed, optimized their methods a little bit, or adopted overall new approaches to influencing public opinion?

NGR: Well, you’re definitely right that 2016 marked the first time that the U.S. started to really pay attention to this type of online foreign influence activity. And during that election year, we saw a series of coordinated social media campaigns targeting various groups of individuals in the United States and seeking to influence their political thoughts and behavior. The campaigns were focused on sowing discord in U.S. politics, mainly by driving a wedge between people on very polarizing topics, so they usually involved either creating or amplifying content on social media that would encourage people to take more extreme viewpoints. So some examples might be that veterans were often targeted. There was this one meme that was run by Russian trolls, basically, that showed a picture of a U.S. soldier, and then it had the text, “Hillary Clinton has a 69% disapproval rate amongst all veterans,” on it. Clearly intended to have an impact on how those people were thinking.

They might also give misleading information about the elections, like they might tell people that the election date was maybe several days after the actual election date and therefore try and ruin people’s chances at using their right to vote. Some disinformation campaigns told people that they could tweet or text their vote in so they didn’t have to leave their homes.

And also there was exploitation of real political sentiment in the U.S., often encouraging divisions and particularly divisions around race. And so there were YouTube channels that would be called things like Don’t Shoot or Black to Live that shared content about police violence and Black Lives Matter, and some racialized campaigns that were linked to those types of sites would then promote ideas like, “The black community can’t rely on the government. It’s not worth voting anyway.”

So that’s the type of stuff that we started to see in 2016, and many of those efforts were either linked to the GRU, which is a part of the General Staff of the Armed Forces of Russia, or the Internet Research Agency, the IRA, of Russia. And many characterized the IRA as a troll farm, so an organization that particularly focuses on spreading false information online.

So since 2016, unfortunately, online influence campaigns have only become more rampant and more complicated. We’ve seen a more diverse range of people being targeted in the United States, so not just veterans and African-Americans, but also different political groups from the far right to the far left. We’ve seen immigrant communities be targeted, religious groups, people who care about specific issues like gun rights or the Confederate flag. So, basically, the most controversial topics are the topics that foreign actors tend to drill deep on to try and influence Americans. And so it’s just gotten more and more complex.

OL: I want to pick up on this point, because so often particularly racial issues form the basis of disinformation and influence campaigns because, like you said, they are the most divisive, contentious issues. In what ways have you seen foreign actors work to weaponize social issues in the United States just this year, since the death of George Floyd?

NGR: Well you know, it’s interesting, because we focus a lot on disinformation as targeted towards the elections, but a number of different types of behaviors and activities have been targeted through disinformation, so we’ve seen people try to manipulate things like Census participation or certain types of civic involvement. And the range of ways that actors are using different platforms is changing too, so we’re seeing text messages and WhatsApp messages being used to impact people in addition to social media.

But after George Floyd was killed, as you might expect, because it’s a controversial issue that affects Americans, absolutely there was sort of this onslaught of misinformation and disinformation that showed up online. So there were claims that George Floyd didn’t die. There were claims that were stoking conspiracy theories about the protests that happened after his death. And I have to say, not all dis- and misinformation is foreign, so that’s why this is such a large problem, because there are many domestic actors that engage in dis- and misinformation campaigns as well. So the narratives that we’ve seen across the space just come from so many different people that sometimes it can be hard to target the problem to one particular actor or one particular motive.

OL: So in 2016, the Russian government undertook really sophisticated methods of influence, certainly for that particular time and for that election, including mobilizing inauthentic narratives via inauthentic users, leveraging witting and unwitting Americans and social media users. How would you contrast the threat posed by Russia’s efforts with other countries known to be involved in ongoing influence efforts?

NGR: Well, I have to say that Russia continues to be a country of major concern. We saw just recently, this week, the FBI announcing that Russia has been shown to have some information about voter registration in the United States. And Russian disinformation campaigns have definitely re-emerged in the 2020 election cycle, but those campaigns only make up a small amount of the overall activities that Russia is engaging in today, all with the goal of undermining democracy and eroding democratic institutions around the world. That being said, we’ve seen other actors emerging in this space. Within the first few months of the COVID-19 pandemic, Chinese agents were shown to be pushing false narratives within the U.S. saying that President Trump was going to put the entire country on lockdown.

Iran has increasingly been involved in these types of campaigns as well. Recently, they used a mass email campaign to affect U.S. public opinion about the elections. And one more thing I want to mention is that this is really a global phenomenon. So these actors, these state actors, often outsource their activity through sort of operations in different countries. So for instance, there are stories of a Russian troll farm that was set up in Ghana to push racial narratives about the United States. And there have also been troll farms that are set up by state actors in places like Nigeria, Albania, the Philippines. And so what’s interesting here is that the individuals who are actually sending those messages are either economically motivated, they’re getting paid, or they might be ideologically motivated, but they’re acting on behalf of these state actors. And that makes this not just a state-to-state issue, but a real global problem that involves many people in different parts of the world.

OL: So turning to the platforms for a second, what are your thoughts on some of the interventions platforms have announced so far? Like limiting retweets and shares via private message, labeling posts and accounts associated with state-run media organizations… The list of interventions goes on.

NGR: Yeah, all of the things that you mentioned are a good start, I would say. At the end of the day, I think it’s got to be a major focus on, how can we inform social media users of the potential threats in the information environment? And how can we best equip them to really understand what they are consuming? So I think that part of the answer is for these tech companies to, of their own accord, continue to create policies that will address this issue, but we also need better legislation, and that legislation has to focus on privacy rights, has to focus on online advertising, political advertising, tech sector regulation. And then we need policies that will enforce this type of thing moving forward. So it can’t all be upon the tech companies without that guidance, because I don’t know that they necessarily have the total will to do all that’s necessary to really get at this problem.

Social media companies have already started to label content. They’re also searching for inauthentic behavior, especially coordinated inauthentic behavior online. But I think that there is particular work to be done in terms of the way that we think about content labeling. So when platforms are labeling content, they are usually labeling content from some sort of state-run media. And if it’s a state-run media, much of the state-run media that they’re looking at is not completely a covert operation. It’s not a situation where this media source just doesn’t want anyone to know that it’s associated with the state, but it might be pretty difficult for the audience to actually determine that that outlet is from a state-run site.

So an example would be RT, formerly known as Russia Today. There’s a reason, I think, that it went from Russia Today to RT. If you go to the RT website, you will see a big banner that says, “Question more, RT.” And then there’s lots of information about how RT works all over the world in order to help people to uncover truth. And then if you scroll all the way to the bottom of the website, you’ll see RT has the support of Moscow or the Russian government… So it’s difficult for people to actually know where this content is coming from.

And this summer Facebook made good on a policy that they had said that they were going to enact for some time, where they now label certain types of content. And basically, they say that they’ll label any content from an outlet that seems to be wholly or partially under the editorial control of a state government. And so lots of Chinese and Russian sites or outlets are included in this policy so far. And according to Facebook, they’re going to increase the number of outlets that get this label. And basically what you see is, on the post, you see, “Chinese state-controlled media,” “Russian state-controlled media,” something to that effect. That’s helpful because now a person doesn’t have to click and then go to the website and then scroll to the bottom of the page to find out that this outlet comes from Russia.

But at the same time, I still think we need to do more in terms of helping Americans to understand why it’s an issue, why state actors are trying to reach them, little old me who lives in some small city or some small town in the middle of America, and how narratives can be manipulated. And so only if that’s done in connection with labeling more of these types of outlets on social media, I think you’d get more impact.

YouTube does something else. In 2018 they started to label their content, but the way they label the content is, they basically label anything that is government-sponsored. So if some outlet is funded in whole or in part by a government, there’s a banner that comes up at the bottom of the video that tells people that. And so you’ll see RT labeled as Russian content, but you’ll also see BBC labeled as British content, so it doesn’t have to do with the editorial control of the outlet.

One final thing on this, because I think this is really important. So I have heard stories of people who, let’s say, for whatever reason have stumbled upon some sort of content from a foreign actor. And so this content might come up because somebody shared something and they watched the video, right? So they watch a video. Let’s say they watch an RT video. Maybe they weren’t trying to find the RT video, and maybe they also aren’t the type of person who would watch a lot of content from RT, but they watch that one video. They continue to scroll on their newsfeed, and then they get a suggestion. “You might enjoy this.” Now the next thing that they get comes from Sputnik. It comes from RT again. So now they’re getting fed information about the U.S. political system that is being portrayed by a foreign actor, and they weren’t even looking for it. I think that’s another thing that we’ve got to tackle, is the algorithms that are used in order to uphold tech companies’ business models, because in some cases those algorithms will be harmful to people because they’ll actually feed them information from foreign actors that might have malicious intent.

OL: Naima, this week the FBI confirmed that Iran was responsible for an influence effort giving the appearance of election interference. And in this particular episode, U.S. voters in Florida and, I think, a number of other states received threatening emails from a domain appearing to belong to a white supremacist group. Can you talk a little bit about what in particular the FBI revealed, and what its significance is for the election?

NGR: Right. So there was a press conference on October 21st in which the FBI announced that they had uncovered an email campaign that was orchestrated by Iran. The emails purported themselves to come from the Proud Boys, which, as you mentioned, is a far-right group with ties to white supremacy, and it was also a group that had recently been referenced in U.S. politics in the first presidential debate. But actually, now we know that these emails came from Iran, and some of the individuals who received the contents of the email posted them online. So they were addressed to the email users by name, and they said, “We are in possession of all of your information, email, address, telephone, everything.” And then they said they knew that the individual was registered as a Democrat because they had gained access to the U.S. voting infrastructure. And they said, “You will vote for Trump on election day, or we will come after you.”

So first of all, they included a huge amount of intimidation. Second of all, they were purporting themselves to be this group that they were not. And third of all, they absolutely were attempting to contribute to discord in the run-up to the elections. It’s dangerous activity. It is alarming activity. It’s something that I think will have multiple impacts for a time to come, because even though the FBI was able to identify that this happened, that goal of shaking voter confidence of course may have been a little bit successful in that instance.

And so one of the things that is good about this is that the FBI was able to identify this very quickly, to make an announcement to the U.S. public that it happened, to be clear about what happened. Unfortunately, what they announced was not just that the Gmail users were receiving this email and there was false information in it, but they also said that they had information that both Russia and Iran have actually obtained voter registration information from the United States, and that’s concerning as well.

There appears to be good coordination between the private sector and the government on this issue. Google announced the number of Gmail users that are estimated to have been targeted through the Iranian campaign. Unfortunately, the number is about 25,000 email users, which is no small amount. And so this is just another instance of how not social media but the internet realm, email, can be used as a way to target American public opinion.

OL: Thank you. Thank you so much for joining me, Naima. I really enjoyed our conversation. I know our viewers will too.

The Breakdown: Foreign Interference and the U.S. 2020 Election was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


WomenInIdentity

DC Area Meet-Up: the official WiD round up

On October 21, Capital One hosted the 2nd Women in Identity DC Area Meetup. Speakers from both Women in Identity and Capital One brought a wealth of knowledge and experience… The post DC Area Meet-Up: the official WiD round up appeared first on Women in Identity.

On October 21, Capital One hosted the 2nd Women in Identity DC Area Meetup. Speakers from both Women in Identity and Capital One brought a wealth of knowledge and experience to the (virtual) table. The entire meetup was moderated by Melissa Heng, Senior Director of Digital Product Management at Capital One.  It included a number of great discussions on the issues of identity and privacy as well as the tools women need to succeed in the industry.

Leadership and Identity

Sara Strauss, the Senior Vice President for US Card at Capital One, kicked off with a look at why it is important to have diversity in leadership positions in general – and specifically within the identity space. Because everyone has implicit bias, it is crucial that all perspectives are represented when developing new technologies. As unique individuals, we all need an environment that encourages us to share our perspectives openly – they are critically important in the identity space.

Sara also shared some tips from her own experience:

Embrace that you are different. Your perspectives matter and what you bring to the table is unique and valuable. Don’t try to look the same and think the same as everyone else, because your uniqueness is what will set you apart.

When you are speaking up, say what you believe, not what you think you are supposed to say.

Find allies. Knowing that you have someone in the room who will support you can give you the courage to stand out.

Networking for Professional Development

Career Development Facilitator Jennifer McCluskey then moved the discussion into a networking and professional development session. Attendees were able to share their anxieties and struggles with networking, and Jennifer highlighted that these are common to ALL of us. What we need to do is shift our perception of networking from a negative necessary evil to a positive tool that can help us foster connections in our careers and everyday lives. Networking can help build our professional capital by giving us access to resources, information and influencers. It can give us the competitive edge we need to succeed. Attendees then took the opportunity to put those tools to the test, networking actively in several smaller breakout groups.

The Intersection of Identity and Privacy Panel Discussion

A panel discussion followed featuring Dr. Jenn Behrens, Becky Heironimus, Maggie Martin, Kimberly Sutherland and Sara Farmer. Panellists addressed several questions about privacy and identity and underscored the importance of communication and transparency when collecting and retaining data. Customers and users have a right to know what their information is being used for and, as a sector, it is our responsibility to make sure that they are informed and comfortable with the security of their data. Dr. Behrens suggested that more user-friendly language around what is happening to our data in real time will help create an atmosphere of transparency.

Looking to the future, we need to provide customers with different authentication options. We can gradually increase the complexity of requirements as consumers become more comfortable with increasingly sophisticated technology.

The Power in YOU

Aparna Sarin, Vice President of Small Business Card at Capital One, then finished our conversation with an inspiring message: “As women, we have strengths that are hugely valuable.” Her personal journey of discovery includes overcoming bad, self-deprecating habits. She referenced How Women Rise by Sally Helgesen and Marshal Goldsmith and described several common bad habits including a reluctance to claim our achievements, putting your job before your career, falling into the perfection trap, and many more. Overcoming these habits will allow us to focus in on our strengths and give us the courage to be confident, assertive, fearless and empowered leaders.

We finished with an overview of Women in Identity including our mission, goals and member activities. This includes an invitation to all participants – and our readers (men and women) – to sign up! https://www.womeninidentity.org/become-a-member.

We had such excellent conversations on identity, privacy and our role as women within the industry. And we thank the many inspiring women who provided attendees with practical and accessible tools to succeed in the identity space.

Thank you to everyone who participated and made this event such a success!

The post DC Area Meet-Up: the official WiD round up appeared first on Women in Identity.


DEI leadership – how to move from ‘why’ to ‘how’. A round up from CusTech 2020

The WiD Leadership team was out in force at KuppingerCole’s CusTech 2020 from 20-22 October. In a series of sessions across the event, we shared practical tips to encourage better… The post DEI leadership – how to move from ‘why’ to ‘how’. A round up from CusTech 2020 appeared first on Women in Identity.

The WiD Leadership team was out in force at KuppingerCole’s CusTech 2020 from 20-22 October.

In a series of sessions across the event, we shared practical tips to encourage better leadership around DEI and considered ways to identify and control unconscious biases when building identity systems that work for ALL customers.

Some of the highlights included two great workshops, both of which prompted follow-up activity. We are actively looking at how we can get more of these sessions out into the public. Watch this space!

Identifying and Breaking Potential Bias in Identity Systems
A great interactive session led by Colette d’Alessandro and Esther Hoeksema where attendees interacted with the WiD leadership team in small group discussions to share ideas on how to remove bias in identity systems.

Learning Path to Identity Diversity
This session, led by Kay Chopard-Cohen, Diane Joyce and Karyn Bright, focused on practical ways to make change. We concluded that change needs to start with our individual personal behaviour. From there we can grow into looking at the way our work groups and organizations operate. Try simple tactics like stopping yourself from speaking (if that is what you do!) and asking for the opinion of someone who often goes unheard. Or physically setting down your pen as a reminder to listen! Often the most subtle acts are the most powerful when building an inclusive environment.

The positive response from audiences was a great indicator that we are seeing a general shift from simply understanding why diversity is important to understanding how we can create diverse environments and champion those that are underrepresented.

Designing for Diversity
The keynote session with Canadian team leaders, Chanda Jackson and Nicole Landry, gave a great foundation for understanding the impact of bias and what it means when you’re looking to develop identity systems that are genuinely intended to work for everyone.

Customers and the Identity Experience
This final panel brought things full circle, reminding us all that ultimately we are trying to build systems that work for ALL our customers – regardless of their race, gender, financial status or their physical or technical ability. Ably moderated by Nicole Landry, Dia Banerji, Diane Joyce and Louise Maynard-Atem reminded us that while employees may be ‘stuck with’ a bad user experience (UX), our customers have a choice! The past year has seen the identity sector move faster than ever thought possible, with products rolling out to meet new sector demands. The team challenged that we may not always fully consider UI/UX needs across the full intersectionality of humanity when doing so. KuppingerCole dubbed this session a “Powerhouse panel!”

This gave the Women in Identity team a great opportunity to work with colleagues across different geographies and time zones – a truly global event.

We look forward to more great opportunities and encourage ALL our members to get in touch if you’d like to be part of future discussions. Interested? Why not update your Women in Identity profile with the topics that interest you most – we’ll be in touch!

The post DEI leadership – how to move from ‘why’ to ‘how’. A round up from CusTech 2020 appeared first on Women in Identity.

Monday, 26. October 2020

Federal Blockchain News

RevoltCypher CEO Mussie Haile

RevoltCypher CEO Mussie Haile shares his experiences delivering blockchain POCs for the federal government, including a Box-integrated platform for document sharing among parties in legal cases, and health record portability for the Veterans Administration.
RevoltCypher CEO Mussie Haile shares his experiences delivering blockchain POCs for the federal government, including a Box-integrated platform for document sharing among parties in legal cases, and health record portability for the Veterans Administration.

Wednesday, 21. October 2020

Omidyar Network

Privacy Front and Center: Meeting the Commercial Opportunity to Support Consumers Rights

By Jesús Salas, Associate, Responsible Technology American consumers are increasingly concerned about privacy and data security when purchasing new products and services, which may give a competitive advantage to companies that act on these consumer values, a new Consumer Reports (CR) study finds. The new study, “Privacy Front and Center: Meeting the Commercial Opportunity to

By Jesús Salas, Associate, Responsible Technology

American consumers are increasingly concerned about privacy and data security when purchasing new products and services, which may give a competitive advantage to companies that act on these consumer values, a new Consumer Reports (CR) study finds.

The new study, “Privacy Front and Center: Meeting the Commercial Opportunity to Support Consumers Rights,” from CR’s Digital Lab with support from Omidyar Network, looks at the commercial benefits for companies that differentiate their products based on privacy and data security.

The report lays out a succinct timeline for the evolution of consumer attitudes toward privacy that culminates with a clear call to action for innovators. Through their comprehensive research methodologies, CR has been able to draw out the nuance around when, why, and how much privacy and security features matter for individuals looking to use and buy security cameras, connected cars, smart speakers, and other products. The report also contextualizes the role that both rule-making through regulation and the supply of better products can have to meet the growing market demand.

“This study shows that raising the standard for privacy and security is a win-win for consumers and the companies,” said Ben Moskowitz, the director of the Digital Lab at Consumer Reports. “Given the rapid proliferation of internet connected devices, the rise in data breaches and cyber attacks, and the demand from consumers for heightened privacy and security measures, there’s an undeniable business case for companies to invest in creating more private and secure products.”

Here are some of the key findings from the study:

According to CR’s February 2020 nationally representative survey, 74% of consumers are at least moderately concerned about the privacy of their personal data.

Nearly all Americans (96%) agree that more should be done to ensure that companies protect the privacy of consumers.

A majority of smart product owners (62%) worry about potential loss of privacy when buying these products for their home or family.

The privacy- and security-conscious consumer class seems to include more men and people of color.

Experiencing a data breach correlates with a higher willingness to pay for privacy, and 30% of Americans have experienced one.

Of the Android users who switched to iPhones, 32% indicated doing so because of Apple’s perceived privacy or security benefits relative to Android.

The study draws from a nationally representative CR survey of 5,085 adult U.S. residents conducted in February 2020, a meta-analysis of 25 years of public opinion studies, and a conjoint analysis that seeks to quantify how consumers weigh privacy and security in their hardware and software purchasing decisions.

Omidyar Network is honored to support this endeavor to both understand and inform the commercial case for investments in security and privacy-enhancing technology. While privacy should not be turned into an out-of-reach luxury, these findings do present an opportunity for companies to be among the first to differentiate and capture market share, or to leverage consumers’ willingness to pay in the short term. Over time, we expect many privacy and data security features to become “table stakes,” in which case first movers will innovate and raise standards further.

Supporting efforts like this is key to our vision of enabling responsible technology that promotes well-being and individual liberty. To accomplish this, we support different levers that drive competition and innovation while safeguarding against risks and unintended consequences.

CR also recently launched an updated and expanded website for the Digital Standard with Omidyar Network joining as a strategic partner. The Digital Standard is an evolving, open source framework used by independent testing organizations, researchers and product teams to evaluate how technologies respect consumers’ privacy and security interests. It serves as a resource for companies looking to implement and take action on stronger privacy and security, as well as ownership and governance.

We encourage innovators to leverage and enhance this tool to collectively raise the standard on technology, and grow a culture of responsibility within companies overall.

Tuesday, 20. October 2020

Digital Identity NZ

Aotearoa Digital Identity Hui Taumata

A big mihi to Janelle for her gift of words in our last newsletter, celebrating te wiki o Te Reo Māori. And thank you all for your kind words and encouragement in support of the message. The post Aotearoa Digital Identity Hui Taumata appeared first on Digital Identity New Zealand.

A big mihi to Janelle for her gift of words in our last newsletter, celebrating te wiki o Te Reo Māori. And thank you all for your kind words and encouragement in support of the message. There is more to come!

As we come toward the end of a year unlike any other, we’re excited to be bringing you a unique virtual event – our first ever Aotearoa Digital Identity Hui Taumata (summit). There is much on the Digital Identity horizon in 2021, including the introduction of a Digital Identity Bill and further MBIE-led engagement on a potential Consumer Data Right. In addition to this, we are also making exciting progress as the digital identity ecosystem in Aotearoa/New Zealand really starts to take shape.

The Hui will feature an all-Māori panel in a Kapa Kōrero session, sharing with us Māori perspectives on digital identity. The DIA Digital Identity Transition Programme will also bring us a substantive update on the Interim Trust Framework, and the opportunities it will bring in 2021.

We’re grateful and delighted to be partnering with Payments NZ to bring two international identity luminaries, Bianca Lopes and David Birch. Bianca will bring her unique South American style to explore the latest in privacy as it relates to digital identity. If you are not familiar with her work, check out her Sizzle Reel and the excellent B in the Know series. David is well known to us in New Zealand, and he’s excited to be connecting again now that the Trust Framework is moving ahead. David will deploy his inimitable humour and insight to talk about “Stress Testing Digital Identity”. If you’re new to David’s work, check out this TedX talk.

This half day online event is priced at $195, with a discounted $95 rate for DINZ, Tech Alliance and Payments NZ members (including Payments NZ Participants and API Centre Standards Users). A special $45 rate is available for students and community organisations. You can register here.

Other upcoming events
Thank you to everyone who has participated in our recent kōrero sessions, morning coffee calls and webinars. We have another excellent session scheduled for tomorrow – Privacy in our digital worlds, featuring Liz MacPherson (Assistant Privacy Commissioner), Karen Ngan (Partner, Simpson Grierson) and Alice Tregunna (CEO, Trust . Integrity . Compliance Co).

We’re also looking forward to our next DINZ member showcase. The Spring edition is on Thursday 5 November.  There are still a couple of places available for members who are interested in presenting. Please contact us here.

AGM
Finally, a reminder that our AGM is coming up in early December. We have four Executive Council positions up for election – two each in the Major Corporate and SME/Startup categories. Nominations are open now. We especially welcome nominations from those involved in healthcare and kaupapa Māori organisations. And as a youth perspective is incredibly important and pertinent to the work we do, we are also encouraging nominations from young people within your organisations.

Ngā Mihi,

Andrew Weaver
Executive Director

To receive our full newsletter including additional industry updates and information, subscribe now

The post Aotearoa Digital Identity Hui Taumata appeared first on Digital Identity New Zealand.


Omidyar Network

Omidyar Network’s Official Statement on United States, et al.

Omidyar Network’s Statement on US v. Google, Filed Today by the US Department of Justice Today marks a new chapter in the long history of US enforcement of antitrust law. The filing of this lawsuit is recognition that digital platforms, namely Google, have used illegal and improper means to maintain their dominance. Over many years, Google has exercised tremendous control over the marketpla
Omidyar Network’s Statement on US v. Google, Filed Today by the US Department of Justice

Today marks a new chapter in the long history of US enforcement of antitrust law. The filing of this lawsuit is recognition that digital platforms, namely Google, have used illegal and improper means to maintain their dominance.

Over many years, Google has exercised tremendous control over the marketplace of online search. This complaint makes a strong case for how Google manipulated the online search market by paying billions to secure default positions on browsers and mobile handsets to exclude rivals, by denying interoperability to challengers, and by preferencing its own products and services in search results.

The DOJ did not sue Google because it has grown large or successful. Rather, the investigation yielded sufficient evidence to show Google has violated US antitrust law — a set of rules that apply to everyone, and are designed to make sure all of us benefit from free markets and fair competition.

The ideal outcome of this lawsuit is the restoration of competition and safer, more innovative choices for consumers in the online search market. We know that Google’s profit motive influences the content they deliver. Google searches often lead people to Google’s own goods and services, to those who pay Google for our attention, and down rabbit holes of misinformation, hate, and extremism. All of these “results” bolster time spent on the platforms and Google’s revenues. Surely, other search engines, if not excluded from the market, could design algorithms that were safer and more consistent with society’s values.

Today’s filing is an important first step in enforcing existing laws governing Google’s improper past conduct. But this journey is not over; the legal process could take many years to resolve, and other concerns will need to be addressed. For example, Google also used its online search dominance to develop another illegal monopoly in the digital advertising market. Americans deserve and expect that Google will be held accountable for all of its anti-competitive and otherwise harmful conduct as well as prevented from engaging in similar abuses in the future.

Consumers, competition, our economy, and our democracy have already paid too high a price for what are seen as “free” services. Undoubtedly, Google has provided valuable, innovative services that have become essential to everyone’s lives, creating the foundation for how we live, work, and play. And like other necessities that are embedded in our lives — power, water, cars — technology companies should be held to the highest standards. However, if left unchecked, Google’s practices will continue to create serious consequences for advertisers, publishers, foreclosed technological rivals, and the public.

To that end, and alongside this case, we call for federal attention toward the ad-tech stack and support for state-level antitrust cases directed at other aspects of Google’s sprawling businesses. Because antitrust action alone will not restore competition to these markets, Congress should also continue to pursue the development of new laws and rules.


Oasis Open

Collaboration Protocol Profile and Agreement v3.0 from ebCore TC approved as a Committee Specification


Electronic, XML-based agreements between trading partners are a key part of the ebXML family of standards developed jointly by OASIS and the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT)

OASIS is pleased to announce that Collaboration Protocol Profile and Agreement Version 3.0 from the OASIS ebXML Core (ebCore) TC [1] has been approved as an OASIS Committee Specification.

Electronic Business using eXtensible Markup Language (ebXML) is a family of standards developed through a joint initiative of OASIS and the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT). Five of these ebXML standards, including Collaboration Protocol Profile and Agreement, have been approved by the International Organization for Standardization (ISO) as the ISO 15000 standard. ebXML provides an open, XML-based infrastructure that enables the global use of electronic business information in an interoperable, secure, and consistent manner by all trading partners.

Collaborative Partner Profile Agreements are XML-based documents specifying a trading agreement between trading partners. Each trading partner will have their own Collaboration Protocol Profile (CPP) document that describes their abilities in an XML format. This can include the messaging protocols they support, or the security capabilities they support. A CPA (Collaboration Protocol Agreement) document is the intersection of two CPP documents, and describes the formal relationship between two parties.
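The "intersection" idea can be illustrated with a small sketch. This is not the CPPA3 XML schema; capabilities are modeled as plain Python sets purely to show how a draft CPA keeps only what both partners support. All party names and field names here are hypothetical.

```python
# Illustrative only: real CPP/CPA documents are XML conforming to the CPPA3
# schema. Here each partner's CPP is a dict of capability sets, and a draft
# CPA is the intersection of the two profiles.

def draft_cpa(cpp_a: dict, cpp_b: dict) -> dict:
    """Intersect two partners' capability profiles into a draft agreement."""
    return {
        "parties": (cpp_a["party"], cpp_b["party"]),
        "transports": sorted(set(cpp_a["transports"]) & set(cpp_b["transports"])),
        "security": sorted(set(cpp_a["security"]) & set(cpp_b["security"])),
    }

buyer = {"party": "Buyer Inc", "transports": ["HTTP", "AMQP", "SFTP"],
         "security": ["TLS1.2", "TLS1.3"]}
seller = {"party": "Seller Ltd", "transports": ["HTTP", "SMTP"],
          "security": ["TLS1.3"]}

cpa = draft_cpa(buyer, seller)
# The draft CPA covers only mutually supported options:
# transports ["HTTP"], security ["TLS1.3"]
```

In the real specification the agreement also records roles, service bindings and certificates; the point of the sketch is only that a CPA is derivable from the two CPPs' common ground.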

CPPA Version 3.0 (CPPA3) introduces several improvements and innovations over the previous Version 2.0 (CPPA2). A CPPA3 document is much easier to read, create, and update, whether manually or with automated tooling, than a corresponding CPPA2 document. The same information can be expressed in a significantly smaller CPPA3 CPA document than in a CPPA2 CPA document.

CPPA3 adds support for the AMQP and WebSocket transports and the SFTP subsystem of SSH2, in addition to the HTTP, SMTP, and FTP transports already covered in CPPA2. The specification text is complemented by the normative CPPA3 XML schema, the agreement registration Exception XML schema, documentation embedded in those schemas, and sample documents.

This Committee Specification is an OASIS deliverable, completed and approved by the TC and fully ready for testing and implementation. The prose specifications and related files are available here:

Collaboration Protocol Profile and Agreement Version 3.0
Committee Specification 01
24 September 2020

Editable source (Authoritative):
https://docs.oasis-open.org/ebcore/cppa/v3.0/cs01/cppa-v3.0-cs01.odt

HTML:
https://docs.oasis-open.org/ebcore/cppa/v3.0/cs01/cppa-v3.0-cs01.html

PDF:
https://docs.oasis-open.org/ebcore/cppa/v3.0/cs01/cppa-v3.0-cs01.pdf

Schemas:
https://docs.oasis-open.org/ebcore/cppa/v3.0/cs01/schema/

Schema data dictionaries:
https://docs.oasis-open.org/ebcore/cppa/v3.0/cs01/documentation/

XML document samples: 
https://docs.oasis-open.org/ebcore/cppa/v3.0/cs01/samples/

Distribution ZIP file

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file here:

https://docs.oasis-open.org/ebcore/cppa/v3.0/cs01/cppa-v3.0-cs01.zip

Members of the ebCore TC [1] approved this specification by Special Majority Vote. The specification had been released for public review as required by the TC Process [2]. The vote to approve as a Committee Specification passed [3], and the document is now available online in the OASIS Library as referenced above.

Our congratulations to the TC on achieving this milestone and our thanks to the reviewers who provided feedback on the specification drafts to help improve the quality of the work.

==========

Additional references:

[1] OASIS ebXML Core (ebCore) TC
https://www.oasis-open.org/committees/ebcore/

[2] Public review:
* 30-day public review, from 23 June 2020 to 22 July 2020:
https://lists.oasis-open.org/archives/ebcore/202006/msg00004.html
– Comment resolution log: https://docs.oasis-open.org/ebcore/cppa/v3.0/csprd01/cppa-v3.0-csprd01-comment-resolution-log.txt

[3] Approval ballot: https://www.oasis-open.org/committees/ballot.php?id=3524

The post Collaboration Protocol Profile and Agreement v3.0 from ebCore TC approved as a Committee Specification appeared first on OASIS Open.


WomenInIdentity

Member interview with Jacoba Sieders – listen now

https://womeninidentity.org/wp-content/uploads/2020/10/WiD-Jacoba-Sieders-interview.mp3

In this podcast, Women in Identity Ambassador, Angelika Steinacker, interviews Jacoba Sieders on her journey from a degree in ancient languages to a career in Identity Access Management. Hear why Jacoba believes that being bold, brave and creative are far more important qualities for an IAM leader than being a technical expert!

Jacoba Sieders is an independent, digital identity expert.  She has held executive positions leading IAM and KYC functions for more than 20 years at major banks in the Netherlands and then in Luxembourg at the European Investment Bank. She also lived and worked in New Delhi, India for ING Group.

She is a member of various international expert groups and think tanks, was part of the Dutch Blockchain Coalition’s SSI initiative, and is a member of the technical working group NEN/ISO.

Jacoba is an Advisory Board member of ID-Next, the independent European think tank on identity, and an Advisory Board member for the EU ESSIF-lab on SSI. She holds a master’s degree from Leiden University in classics (Greek, Latin, Hebrew) but retrained to become an IT professional.

She recently moved on from corporate life and now focuses on strategic advisory assignments alongside speaking engagements and teaching masterclasses.

The post Member interview with Jacoba Sieders – listen now appeared first on Women in Identity.

Monday, 19. October 2020

Decentralized Identity Foundation

KERI: For every DID, a microledger


The world of digital identifiers (DIDs) and verifiable credentials (VCs) is evolving quickly, giving much cause for optimism. Standards are starting to connect and move towards functional interoperability, governed by testable protocols. Most of this work is happening on the level of VCs. However, DIDs and their infrastructure are also starting to converge and mature as an extensible-yet-interoperable technology.

Adoption by markets, standards bodies and regulators is largely contingent upon provable security and provable interoperability, so these promising developments cannot come soon enough.

The Decentralized Identity Foundation (DIF) is very proud to be hosting one particular research and development project that could prove pivotal in this process. It is currently a work item of DIF’s Identifiers and Discovery Working Group. However, a charter for an autonomous working group will be available for review at #IIW31 this week (20–22 October 2020) to facilitate broader participation. The project is called KERI and it is a project that could only be developed in the open, for the public good and for the widest, quickest adoption.

Photo by Fernando Santander

But first, what is KERI?

KERI stands for Key Event Receipt Infrastructure. A “key event” is a discrete event in time that involves public/private keypairs, often called blockchain identities or cryptographic identities. These events can be generalized as inceptions (creations), rotations, and signing events: the three kinds of events for which KERI generates and handles receipts. In other words, key events are cryptographic events in the history of an identifier.
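A rough sketch of these three event kinds as records may help. The field names below are illustrative only; real KERI events are serialized, self-addressing messages whose exact structure is defined in the KERI whitepaper, not here.

```python
from dataclasses import dataclass

# Illustrative records only, not the KERI wire format.

@dataclass
class Inception:          # creation: establishes an identifier with its first keys
    prefix: str           # the self-certifying identifier
    keys: list            # current signing public keys
    next_digest: str      # digest committing to the *next* (pre-rotated) keys

@dataclass
class Rotation:           # transference: reveals the previously committed keys
    prefix: str
    sequence: int
    keys: list            # the newly revealed signing keys
    next_digest: str      # commitment to the keys after these

@dataclass
class Interaction:        # signing event anchoring arbitrary data to the identifier
    prefix: str
    sequence: int
    payload_digest: str   # digest of whatever was signed
```

Note how a rotation never introduces brand-new keys out of thin air: it reveals keys that an earlier event already committed to via a digest, which is the essence of pre-rotation.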

Importantly, everything else a decentralized identifier says, does, or refers to is not a key event. As KERI is deliberately laser-focused on key events only, we can call these other events non-KERI events. The real world consequences of a signature or rotation are out of scope and method-specific to boot. KERI is only interested in the most universal aspect of interactions between keys and cryptographic systems, i.e. the cryptography that allows drastically different DID systems to trust each other’s security guarantees.

“DID Methods exist to solve a trust issue. This does it in a different way.”
Charles Cunningham (Jolocom GmbH, Rust development lead for KERI)

Each key event produces a receipt containing only checkable signatures of key event information. Nothing more. Receipts are threaded into logs tracking the history of each identifier, which is similar to a traceable audit trail of hashes — useful for confirming but not for deducing the underlying key material. These threads are compiled into logs that are shared and replicated according to a consensus algorithm and a logic of trust thresholds that creates a fabric of shared history between nodes.
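The "traceable audit trail of hashes" can be sketched as a minimal hash-chained log: each entry commits to the digest of its predecessor, so tampering with any entry breaks verification of everything after it. This is a toy model under stated assumptions (sorted-key JSON, SHA-256), not the KERI serialization or signature scheme.

```python
import hashlib
import json

def digest(entry: dict) -> str:
    """Canonical digest of a log entry (sorted-key JSON, SHA-256)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, event: dict) -> None:
    """Append an event, committing to the digest of the previous entry."""
    prior = digest(log[-1]) if log else ""
    log.append({"seq": len(log), "prior": prior, "event": event})

def verify(log: list) -> bool:
    """Replay the chain: every entry must reference its predecessor's digest."""
    for i, entry in enumerate(log):
        expected = digest(log[i - 1]) if i else ""
        if entry["prior"] != expected or entry["seq"] != i:
            return False
    return True

log = []
append(log, {"type": "icp"})          # inception
append(log, {"type": "rot", "n": 1})  # rotation
assert verify(log)

log[0]["event"]["type"] = "forged"    # tampering breaks the chain
assert not verify(log)
```

In KERI the entries additionally carry checkable signatures and witness receipts, which is what makes the log useful for confirming control without exposing the underlying key material.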

Is KERI a blockchain or a DLT? No. Does it replace blockchains? Also no.

The trust fabric created when KERI nodes share and propagate key material records might sound redundant to the blockchains where all of today’s DID methods store their key material chronologically. To a degree, this is true: each log containing the history of one key is a “microledger,” like a blockchain with only one participant. Inception and rotation events in all of today’s DID methods are stored in a chronological distributed ledger which can be crawled to create a log of these key events by DID. So, why the redundancy? Why replicate a subset of the blockchain’s capabilities and features in a distinct blockchain-like infrastructure just for key material?

The answer is simple and manifold: blockchains enable many features outside the scope of KERI. These features bring with them complexity, diversity, scale costs, and trust issues. Within KERI’s scope, however, only some of a blockchain or distributed ledger technology’s (DLT’s) features are necessary. Total ordering and double-spend protection, for instance, are hallmarks of distributed ledgers, but hardly justify the added complexity here.

Subset of blockchain/DLT capabilities required by KERI (Dr Sam Smith, 2019)

Working backwards from a short list of security features, KERI infrastructure can be a much more performant, minimalist distributed ledger system. It is still in the family tree of blockchain, DLTs and directed acyclic graphs (DAGs), but it is closer to a sidechain or a trans-blockchain interoperability mechanism. In use cases where all that is needed is a self-certifying, widely-portable identifier, KERI can stand alone as a lightweight DID method. In combination with a traditional DID Method, KERI can increase key management options and strengthen security guarantees by raising a red flag at the first discrepancy between the two parallel and redundant systems.

As a scaling mechanism, KERI can also take away some of the traffic and complexity from the underlying blockchain. In implementations where key management and state maintenance (record keeping about keys that rotate over time) are entrusted directly to the KERI mechanism, these functions can be operated much closer to the edge and replicate after a slight delay. This might be a totally acceptable trade-off of efficiency for latency in many use cases. For example, a roundtrip write-and-wait-for-finality transaction on a global blockchain makes no sense in a low-connectivity Internet of things (IoT) use case, where double-spend is a non-issue.

KERI is both an interoperability mechanism and a standardization incentive

More importantly for the DIF, however, is another major feature of KERI: it could become the foundation of massive interoperability and portability at the infrastructure layer. What’s more, if adopted by enough major players, it could even speed up the standardization process of DIDs themselves. By offering a minimum level of security guarantees shared across all participating methods, it would simplify the security review process for both individual DID methods and for interoperable DIDs as a whole.

By abstracting out the universal, minimal set of key functions, a KERI log that spans multiple ledgers or methods is just as verifiable as one that does not. This means that anywhere self-certifying KERI identifiers are accepted, an identifier’s history can stretch back further than the existence of KERI. Plus, that history can include so-called “portability events”, where an identifier is deactivated on one ledger and re-activated on another. Method-specific features or records might still need to be exported and imported. The core proof of control function of a DID, however, would be universalized in a way that enabled massive portability.

This same universalizing effect of sharing a security vocabulary across all participating DID methods has the added benefit of being able to guarantee certain security features in any KERI-compliant system. Since KERI also lends itself to simple compliance tests, and since KERI logs give a benchmark against which to test method-specific and blockchain-specific security, this is a small leap for each DID method and a giant leap for standardization and security engineering.

KERI’s history: from whitepaper to community incubation

So far we have been highly technical in our explanation of the project. A careful reader, however, may already have caught the community commitment implicit in phrases such as “KERI-compliant” and “participating DID methods.” KERI is only useful if the major DID methods incorporate it, or if the set of participating DID methods becomes congruous over time with the set of major DID methods.

It is, in a nutshell, a community project of alignment as much as a technological innovation: an agreement on the security model for the common core functionality shared across all DID methods, allowing much variety and extensibility to be preserved by the participating DID methods. Decentralized identifiers have been very decentralized in their design and governance from the beginning, with a high degree of extensibility and flexibility within the fiefdom of each DID method and its governance. KERI has been gathering steam for over a year as a countervailing force, potentially making all DIDs function in an end-verifiable and thus universal way.

“Investing in KERI is investing in interoperability, standardization, and cross-community security guarantees.”
Dr Sam Smith, author of the KERI whitepaper and project lead

In large part, the roots of KERI lie in debates within the World Wide Web Consortium’s (W3C’s) Decentralized Identifier Working Group. For years it has been discussing the “shalls” and “mays” that define a W3C-compliant DID method (and thus a DID system). In practical terms, this process specifies what each DID method can and must assume about other DID methods for such a decentralized and open system to make appropriate security guarantees.

KERI’s creator and the author of its whitepaper is Samuel M Smith PhD., a pioneering technologist in multiple fields, including automated reasoning, distributed systems, autonomous vehicles and blockchain protocol design. Dr. Smith has been refining and experimenting with such a cross-method mechanism since 2019, presenting at every meeting of the biannual Internet Identity Workshop. First came some core principles and requirements of a key infrastructure at IIW28, then at IIW29 a series of sessions about different aspects of a hypothetical system of witnesses that could replicate logs. For IIW30, Dr Smith brought more concrete sessions on finer points and even the roadmapping session that became the DIF working group. Along the way, he has iterated an ever-growing whitepaper describing and explaining all of this.

Now, however, Dr Smith has moved the project into the DIF under the auspices of its Identifiers and Discovery Working Group where he sits as co-chair. Asked about the decision, Dr Smith said, “DIF was a natural choice because I wanted the work to happen quickly but in the open, with participation from the greatest number of companies and innovators across various communities.”

KERI’s contributors: join us!

Foremost among DIF contributors is, of course, Dr Smith, who brings to his KERI design work more than a decade of engineering experience with scale and high-performance systems. Much of this work, focusing largely on AI and streaming/scaling projects, was done through his Python-centric consulting company Prosapien.com. He has also worked with Consensys, contributing to the Seed Quest project among others, soon to be donated to DIF.

Berlin-based Jolocom GmbH has been a major interlocutor in the early development of KERI, since before the creation of the working group at DIF. Jolocom’s Charles Cunningham is the working group’s lead Rust developer; he has written a highly interesting post for the Jolocom logbook on the mental models of how KERI tackles the problem of trust from a developer’s point of view.

Representing Spherity GmbH are the working group’s lead JavaScript developer and note-taker. Spherity’s founder, Carsten Stöcker, has written a detailed piece for his company’s blog which called KERI “a more performant ledger for trusted identities.”

The Human Colossus Foundation, a Swiss-based non-profit, has been co-developing on the Rust side as well, working in parallel and providing input on the design considerations. The Human Colossus Foundation has also put substantial energy into promoting and socializing KERI in the Trust-over-IP Foundation, the MyData community and in the Sovrin community, including featuring an hour-long KERI session prominently in a half-day mini-conference it organized.

At IIW31, the KERI developers will be demonstrating their initial work to date while there is still the opportunity to get involved and determine the course of KERI as the project moves from direct mode (two-party) to witness mode (multi-party, distributed consensus). Many sessions are planned for IIW, ranging from introductions to technical discussions to use-case and requirements gathering for KERI-based ideas. Additionally there will be a live demo of the working direct-mode prototype.

Introductory reading and video materials are collected at the main DIF repository, but even if you don’t watch them in advance (or fully understand them if you do), there are many ways to get involved and make this community project stronger and more diverse.

KERI: For every DID, a microledger was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Oasis Open

Invitation to comment on Service Metadata Publishing (SMP) v2.0 – ends 18 December


This protocol for publishing service metadata within a 4-corner network enters the 60-day public review that precedes the call for consent as an OASIS Standard.

OASIS and the OASIS Business Document Exchange (BDXR) TC [1] are pleased to announce that Service Metadata Publishing (SMP) Version 2.0 Committee Specification 02 is now available for public review and comment.

The TC members have approved [2] submitting this Committee Specification to the OASIS membership for consideration as a Candidate OASIS Standard, as described in the OASIS TC Process [6]. This is a 60-day public review, after which it may be submitted to a membership-wide call for consent to promote the specification to OASIS Standard.

This specification describes a protocol for publishing service metadata within a 4-corner network. In a 4-corner network, entities are exchanging business documents through intermediary gateway services (sometimes called Access Points). To successfully send a business document in a 4-corner network, an entity must be able to discover critical metadata about the recipient of the business document, such as types of documents the recipient is capable of receiving and methods of transport supported. The recipient makes this metadata available to other entities in the network through a Service Metadata Publisher service.

This specification describes the request/response exchanges between a Service Metadata Publisher and a client wishing to discover endpoint information. A client can either be an end-user business application or a gateway/access point in the 4-corner network. This specification also defines the request processing that must happen at the client.
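The shape of that client-side flow can be sketched as follows. The URL pattern, field names and transport identifiers below are illustrative assumptions, not the normative SMP v2.0 resource layout or XML payloads: the client derives a lookup URL for the recipient's metadata, then picks an endpoint whose transport the sender also supports.

```python
from urllib.parse import quote

# Illustrative only: the normative SMP v2.0 encoding is defined in the
# specification. This shows the discovery flow's general shape.

def lookup_url(smp_base: str, participant_id: str, doc_type_id: str) -> str:
    """Build a hypothetical service-metadata lookup URL for one document type."""
    return (f"{smp_base}/{quote(participant_id, safe='')}"
            f"/services/{quote(doc_type_id, safe='')}")

def choose_endpoint(metadata: dict, supported_transports: list):
    """Pick the first recipient endpoint whose transport the sender supports."""
    for ep in metadata.get("endpoints", []):
        if ep["transport"] in supported_transports:
            return ep
    return None

url = lookup_url("https://smp.example.org", "0088:1234567890", "urn:invoice:2.1")
# A fetched-and-parsed metadata document might reduce to something like:
meta = {"endpoints": [{"transport": "as4", "address": "https://ap.example.org/as4"}]}
ep = choose_endpoint(meta, ["as4", "as2"])  # the AS4 access point entry
```

The same selection logic covers the specification's "request processing that must happen at the client": discover the recipient's capabilities first, then send only via a mutually supported transport.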

The TC has received four Statements of Use from IBM, Philip Helger, the Federal Reserve Bank of Minneapolis, and Efact [3].

The candidate specification and related files are available here:

Service Metadata Publishing (SMP) Version 2.0
Committee Specification 02
16 January 2020

Editable source (Authoritative):
https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/bdx-smp-v2.0-cs02.docx

HTML:
https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/bdx-smp-v2.0-cs02.html

PDF:
https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/bdx-smp-v2.0-cs02.pdf

XML schemas:
https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/xsd/
https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/xsdrt/

Model documentation:
https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/mod/

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:

https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/bdx-smp-v2.0-cs02.zip

The 60-day public review starts immediately and ends 18 December 2020 at 23:59 UTC. This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility as explained in the instructions located via the button labeled “Send A Comment” at the top of the TC public home page, or directly at:
https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=bdxr

Comments submitted by TC non-members for this work and for other work of this TC are publicly archived and can be viewed at:
https://lists.oasis-open.org/archives/bdxr-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review of Service Metadata Publishing (SMP) Version 2.0, we call your attention to the OASIS IPR Policy [4] applicable especially [5] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information related to this public review can be found at https://docs.oasis-open.org/bdxr/bdx-smp/v2.0/cs02/bdx-smp-v2.0-cs02-public-review-metadata.html

============== Additional information

[1] OASIS Business Document Exchange (BDXR) TC
https://www.oasis-open.org/committees/bdxr/

[2] Approval ballot: https://www.oasis-open.org/committees/ballot.php?id=3528

[3] Statements of Use:

– IBM: https://lists.oasis-open.org/archives/bdxr/202010/msg00001.html
– Philip Helger: https://lists.oasis-open.org/archives/bdxr/202010/msg00003.html
– Federal Reserve Bank of Minneapolis: https://lists.oasis-open.org/archives/bdxr/202010/msg00004.html
– Efact: https://lists.oasis-open.org/archives/bdxr/202010/msg00000.html

[4] https://www.oasis-open.org/policies-guidelines/ipr

[5] https://www.oasis-open.org/committees/bdxr/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#Non-Assertion-Mode
Non-Assertion Mode

[6] TC Process for OASIS Standard
https://www.oasis-open.org/policies-guidelines/tc-process-2017-05-26#OASISstandard

The post Invitation to comment on Service Metadata Publishing (SMP) v2.0 – ends 18 December appeared first on OASIS Open.


Trust over IP

Trust over IP Foundation Introduces a New Tool for Interoperable Digital Trust


When the ToIP Foundation launched in May 2020, our mission was to define a new model for achieving trust online—a model that breaks away from the thousands of siloed solutions for secure, privacy-enhancing digital identity and trust that do not work with each other. This lack of interoperability costs billions of dollars per year in complicated and time-consuming integration and hinders adoption by the very customers we are trying to serve.

Our goal is to drive adoption of a new model for digital trust based on open standard digital wallets and digital credentials that are every bit as interoperable as the physical wallets and paper or plastic credentials that we use every day—to do everything from getting on a plane to entering a hospital to signing a mortgage. As these new tools emerge as the primary mechanism for contactless identity verification, payments, and other online transactions, they will become as essential to our digital lives as browsers and email clients have become to the Web today.

As that happens, it is critical to avoid recreating the vendor lock-in and fragmentation of the “browser wars” that hindered the early days of the Web. Interoperable solutions that avoid vendor lock-in are paramount for a vibrant digital marketplace where consumers are free to choose their preferred digital wallet software from a variety of vendors and use digital credentials from any issuer as they choose.

The need for market-driven interoperability

Open standards alone do not produce interoperable market solutions—there are too many ways interoperability can still go off the rails. Testing labs are another step in the right direction—but vendors need incentives to use them, and those incentives can be scarce in a new market.

The ToIP Foundation recognized that, as our economy grows increasingly digital and collaboration tools grow steadily more powerful, there is a new path to interoperability: tapping market dynamics to drive incubation and adoption of truly interoperable solutions.

With this approach, vendors and customers voluntarily work together to develop interoperability testing requirements designed to meet explicit customer needs in the market. Vendors then satisfy those requirements by passing these interoperability tests with production-ready software.

Introducing the ToIP Interoperability Profile (TIP)

To facilitate this new approach to market-driven interoperability, the Technology Stack Working Group of the ToIP Foundation developed the ToIP Interoperability Profile (TIP). A TIP represents a specific combination of technologies that span each of the four layers of the ToIP technology stack in order to meet the requirements of a set of target customers in one or more digital trust ecosystems. 

TIPs can be designed, refined and supported by multiple vendors and customers wishing to collaborate on interoperability. A TIP typically includes the following elements critical to customer success:

- Use cases capturing the specific requirements of customers in one or more digital trust ecosystems.
- Design principles that must be clearly defined when combining technology and business policies to formulate a solution architecture.
- Documentation that clearly communicates the design, architecture, features, and benefits of a TIP to the digital trust ecosystems targeted for adoption.
- Best practices and implementation guidance for adoption of a TIP, including how to incorporate policies from the ToIP governance stack.
- Interoperability tests that enable vendors supporting the TIP to be certified for verifiable interoperability.
- Adoption metrics and case study references that provide quantifiable evidence of the real market impact.

TIPs harness market forces to drive convergence on interoperability

Each TIP consists of two types of components:

- Fully-standardized components of the ToIP stack. These components, called ToIP Standard Specifications (TSS), are standards that have already gained Foundation-wide approval.
- Custom components that are specific to a TIP. Some places in the ToIP stack do not yet have agreed-upon specifications. For these gaps, a TIP must specify how it fills the gap via an open community specification that can be implemented by any vendor or open source project.

A conceptual “lego block” picture of a complete four-layer TIP—showing how it is constructed from a combination of standard TSS components and custom TIP-specific components—is shown in the figure below.

Launching the Saturn-V TIP

The first TIP published by the ToIP Technology Stack Working Group is named for the historically significant multi-stage rocket platform, the Saturn-V. This TIP emerged from work begun at a 2019 Connect-a-Thon event held by the Hyperledger Indy community. When the ToIP Foundation was launched in May 2020, ToIP members including Commerzbank (Main-Incubator), esatus AG, Evernym, IBM, Trinsic and idRamp recognized the opportunity to coalesce their collaboration into a TIP. 

Once the ToIP Technology Stack Working Group was formed, it established the criteria for managing the lifecycle of a TIP from incubation through design, demonstration, acceptance, and adoption. The Saturn-V collaborators then proposed their TIP following this process, and it was formally accepted as a Draft Deliverable by the Technology Stack Working Group at its 24 August 2020 meeting.

The next stage: mission-critical collaboration on interoperability testing

All the vendors participating in the Saturn-V TIP actively pair with developers from other participants to work through agreed-upon test plans. Having the Technology Stack Working Group oversee the TIP development life cycle on behalf of all participating vendors ensures a more transparent and robust joint testing project than a typical multi-vendor “plug-fest”. 

Since the technologies used for the Saturn-V TIP are Hyperledger Indy and Hyperledger Aries at layers 1-3 of the ToIP Technology Stack, the open source Aries Test suites will be used as the baseline for all test plans. Participating vendors are currently tackling the following stages of the Saturn-V Interop Test Plan One:

- Self-Validation against the Aries Protocol Test Suite for Aries Interop Profile v. 1.0
- Peer-Validation of Core Aries Interop Profile v. 1.0 (Aries RFC 302), which supports DID connections, issuing credentials, and fulfilling proofs
- Connectionless Proofs using the Service Decorator (Aries RFC 56) and HTTP over DIDComm (Aries RFC 348)

Future missions

Once Test Plan One is complete, TIP participants will define Saturn-V Interop Test Plan Two, which is intended to include, at minimum, peer-to-peer validation for:

- Core Aries Interop Profile v. 2.0 (content and scope not yet defined by the community)
- Out-of-Band Protocol (Aries RFC 434)
- End-to-end testing with the Aries Agent Test Harness (contributed by the Government of British Columbia)

Once these stages of interoperability testing are completed, the Technology Stack Working Group will be able to assess whether the component specifications of this TIP meet the criteria to become a TSS. If so, these will be advanced to become their own Draft Deliverables for ultimate approval by the Working Group and then the ToIP Steering Committee.

Please join us

We invite you to join in development of the Saturn-V TIP and formulation of new TIPs that provide full-stack interoperability for digital trust ecosystems. If you are not yet a member of the ToIP Foundation, membership is open to anyone—individual or organization—at both free and paid membership levels. For more information, please visit the Foundation membership page.

TIP development is coordinated through the ToIP Technology Stack Working Group

To join the Technology Stack Working Group, go to https://lists.trustoverip.org/, log in with the email address of your ToIP Foundation membership, then subscribe to the mailing list.

The post Trust over IP Foundation Introduces a New Tool for Interoperable Digital Trust appeared first on Trust Over IP.


Me2B Alliance

All Aboard! Me2B Membership effective next week


Hi Friends,

One week from today, the Me2B Alliance will be transitioning to a membership organization. What this means for you, as part of the Alliance community, is an opportunity to become an influential voice in the respectful technology movement.

Starting October 26, all Alliance work will be taking place in the membership portal.

To remain active in the Alliance community, please become a member before October 26.

 

Better yet, join today!

For your convenience, here’s the link on the new Me2BA.org website: https://me2ba.org/membership/

After you receive your membership login information, be sure to sign up for all the working groups you want to participate in, and don't forget to tick the box for All Members. (We will automate this in the future.)

Starting next week, all WG meetings will shift to a new conferencing platform, which you can view in the membership portal (members will also receive meeting invites next week and going forward).

We look forward to seeing you in the new membership portal very soon.

Lisa LeVasseur

Executive Director


Federal Blockchain News

FonBnk CEO Chris Duffus on Blockchain-based Spectrum Allocation for the FCC

In the wake of the Nobel Prize for Economics honoring work that revolutionized how wireless spectrum is allocated in the US, we talk with serial fintech, wireless and blockchain entrepreneur Chris Duffus about how the FCC could use blockchain for real-time radio frequency spectrum allocation mechanisms that could dramatically increase the useable wireless bandwidth for first responders, personal electronics, the military and more.

Thursday, 15. October 2020

Digital Identity NZ

Seeking Digital Identity NZ Executive Council nominations


In December last year members elected the first Digital Identity NZ Executive Council. The Council is the governing group for the association; providing guidance and direction as we navigate the developing world of digital identity in Aotearoa. Each Council member is elected for a two-year term, with elections held annually and results notified at the Annual Meeting in December. As we approach the end of the year it is time for us to call for nominations for the Council seats coming up for re-election.

This year we have four positions up for election, two each in the Major Corporate and SME/Startup categories.

When we formed the Executive Council last year we asked that you consider electing a diverse group of members who reflect the diversity of the community we seek to support. You did that. The power of that diversity has shone through in the work of the Council this year, especially as we consider the importance of Te Tiriti in a well-functioning digital identity ecosystem.

As the Council has reflected on its own makeup we have identified a number of areas where diversity would continue to improve our ability to serve the community. We would particularly welcome nominations from those involved in healthcare and kaupapa Māori organisations. We are also encouraging nominations from young people from within your organisations, as a youth perspective is incredibly important and pertinent to the work we do.

Once elected in December, one of the first duties of the new Council is to elect leadership roles. David Morrison has held the Chair role since our inception in 2018, and is an exceptional Kaiārahi (guide/counsellor/leader). David’s term on Council extends to 2021, and he has indicated that he is likely to step down from the Executive Council at that time. Our plan this year is for the Council to select two co-chairs to aid in a smooth transition. The opportunity to chair is open to any Council member, and we encourage any who have a desire and passion to serve in this way to make themselves available for nomination.

2021 is a pivotal year for digital identity with the introduction of the Digital Identity Bill and further development of the ecosystem. We are looking forward to some stellar nominations to help us in exploring and pursuing the best opportunities for Digital Identity NZ to make a positive difference in 2021 and beyond.

Executive Council Nominations

There is now an opportunity to put yourself forward or nominate someone else for a role on the Digital Identity NZ Executive Council. This year we have vacancies for the following positions:

- Corporate – Major (2 positions)
- SME & Start-up (2 positions)

The nominees for the above positions must be from a Digital Identity NZ member organisation, and be from the same Digital Identity NZ Group that they are to represent on the Executive Council.

All nominations must be entered into the online form here by 5pm, Thursday 5 November 2020.

Digital Identity NZ Executive Council roles and responsibilities include:

- Direct and supervise the business and affairs of Digital Identity NZ
- Attend monthly Executive Council meetings, usually two hours in duration (video conferencing available)
- Represent Digital Identity NZ at industry events and as part of delegations
- Assist with managing and securing members of Digital Identity NZ
- Participate in Digital Identity NZ working groups and projects
- Where agreed by the Executive Council, act as a spokesperson for Digital Identity NZ on particular issues relating to working groups or projects
- Be a vocal advocate of Digital Identity NZ

Online Voting

Voting will take place online in advance of the meeting. The results will be announced at the Annual Meeting. Please see the Charter for an outline of Executive Council Membership and election process. Each organisation has one vote that is allocated to the primary contact of the member organisation.

Annual Meeting Details

The Annual Meeting is scheduled for 9.30am on Thursday 10 December 2020 and is to be held via Zoom. Register for the event here.

Notices and Remits

If you wish to propose any notices or motions to be considered at the Annual Meeting, please send them to elections@digitalidentity.nz by 5pm, Thursday 19 November 2020.

Key Dates:

- Now: Call for Nominations for Executive Council representatives issued to Members
- 5 November: Deadline for nominations to be received
- 12 November: List of nominees to be issued to Digital Identity NZ voting members and electronic voting commences
- 19 November: Any proposed notices, motions or remits to be advised to Digital Identity NZ
- 10 December: Annual Meeting, results of online voting announced.

The post Seeking Digital Identity NZ Executive Council nominations appeared first on Digital Identity New Zealand.

Wednesday, 14. October 2020

Omidyar Network

New Season of Should This Exist?, Supported by Omidyar Network


By Sarah Drinkwater, Director, Responsible Technology

Today our grantee, WaitWhat, launched the second season of Should This Exist?, which earlier this year was nominated by the prestigious Webby Awards for best tech podcast in the world.

We put Should This Exist? on the “must listen” list! It stretches your understanding of radically new technology and its implications for our lives — both the promising and the perilous. The storytelling is riveting, the topics are supremely well chosen, and the show’s host — Internet serial entrepreneur and investor Caterina Fake — masterfully guides listeners on unforgettable journeys alongside the inventors of our times. In each episode, Caterina engages with a groundbreaking technology — from contact tracing, to robot caregiving, to deepfake video production — to ask the important questions. The show surrounds you with ideas and interpretations, but ultimately leaves it to you to decide if the tech should exist, and how it should.

We’re excited to support this show because the world is hungry for this kind of conversation, as we’ve seen with interest in Ethical Explorer and in Netflix’s recent The Social Dilemma. We want to imagine and guide technology to its greatest potential and help technologists navigate around unintended consequences that are so hard to anticipate.

After you subscribe to the podcast here, you might want to sign up for the weekly newsletter.

Here’s a custom link to today’s show if you’d like to share, and below, you can read more about our first three episodes.

LISTEN: http://listen.shouldthisexist.com/Omidyar

Premieres Wednesday, October 14:
The deepfake detective
Chances are, you’ve seen a “deepfake” video. But did you know it? A new breed of tech detectives are building tools to spot these hyper-realistic videos — built with AI — where people say things they didn’t say or do things they’d never do. Some of these clips are just good, fanciful fun. But a deepfake deployed at the right moment could sway an election, or wreck a life. That’s why UC Berkeley professor Hany Farid is working on a “deepfake detective” — a tool to help media outlets know what’s real and what isn’t. But the same program could also give deepfakers a blueprint for how to make their work undetectable. Deepfake technology already exists. This episode asks: What should we do now?

Premieres Thursday, October 15:
Contact tracing: So promising. So invasive.
It’s one of the best weapons we have to contain a pandemic. But can it defeat the disease without spying on people who might carry it? MIT’s Kevin Esvelt has a bold idea: Let’s try a new form of contact tracing that could more than double the program’s impact. Bi-directional tracing looks both forward and backward from a known transmission, building a chart of the “undiscovered branches of the viral family tree” and identifying potential spreaders other systems can’t see. But how much of our data are we willing to give the government, even if it’s to fight Covid-19?

Premieres Wednesday, October 21:
Grandma, here’s your robot
Is it the loneliest idea you’ve ever heard? Or an ingenious hack that helps human caregivers be more attentive and empathetic? You might have these questions when you meet the robot caregivers who roam the halls at retirement homes, doing basic tasks for residents and keeping them connected. Is elder care something we want a robot to do? Roboticist Conor McGinn from Trinity College Dublin actually moved into a retirement home in Washington, DC, to gain a deeper understanding of what residents might want from a robot. The answer surprised him, and it prompts deeper questions: As humans, what responsibility do we have toward our elders? When we fail them, should robots close the gap? And is that the future we want for ourselves?


Berkman Klein Center

Lumen — The Year in Review:

Lumen — The Year in Review September 2019-August 2020

By: Adam Holland, Andromeda Yelton, and Chris Bavitz

Introduction

September 2019 through the end of August 2020 marked the first year in which Lumen operated with a generous supporting grant from the Arcadia Fund. During that year, the project’s primary objectives fell within three themes: (1) technical improvements to the Lumen site and database; (2) expanding research opportunities, both internal and external; and (3) outreach, both to possible new notice-submitters and to the various constituencies of the Lumen user community. This post draws from Lumen’s first annual report to Arcadia and provides an overview of the project’s key activities during the past year.

To say the least, it was a complex and difficult year on a number of fronts — most notably, because of the COVID-19 pandemic that forced us into a remote work mode for much of 2020. That said, we were able to make significant progress on a number of key fronts:

- Lumen’s developers and technical support team achieved a great deal, especially on the backend of the site, and also by adding new user interface features and new notice categories.
- Lumen had success increasing and improving external research using Lumen’s database, with database access credentials granted to 49 new researchers or research teams, ranging from college undergraduates and investigative reporters to law professors and economists. Several of these researchers produced substantive written work, along with various other shorter articles and pieces.
- Regarding outreach, we had conversations with a range of organizations, online service providers, and individuals regarding working more closely with Lumen. The Lumen team participated in a multi-stakeholder virtual workshop in June of 2020, with more topic-specific workshops planned.
- The onset of the pandemic put a wrench in the works in terms of hiring, but Lumen plans to bring onboard a new research fellow in the coming months.

The remainder of this overview addresses and provides more details on these main themes in the order outlined above.

(1) Technical Improvements and Progress

In addition to more small-scale bug fixes and one-off requests than can be named here, the Lumen developers’ activity in the first year fell into several main categories:

Security/anti-obsolescence updates

- Lumen upgraded Rails (the web application framework the whole system uses) from version 4.2 to version 5.2.
- Lumen upgraded Ruby (the programming language in which Lumen’s site and administrative interface is built) from version 2.3 to version 2.5.
- Lumen upgraded various software libraries.
- Lumen is in the process of upgrading its native search function from ElasticSearch version 5 to ElasticSearch version 7, which is expected to significantly improve the ability of researchers and others to access notices in the database.

In combination, these upgrades improved system security and performance, making the database notably faster for users. The various improvements also keep the site effectively modernized, which in turn allows developers to take advantage of and implement further improvements without too much work. Finally, the ongoing ElasticSearch upgrades allow Lumen administrators to more quickly and effectively redact sensitive data in Lumen’s notices (in addition to making site search functionality more powerful for users).

Overall, these technical improvements make the Lumen site easier to use and more responsive for both its internal team and the research community. They also serve to “future-proof” the site to the extent possible, making it far more likely that Lumen will be able to continue to exist and thrive indefinitely, and making continued and sustained improvements easier to accomplish.

Improvements to the Lumen administrative interface

- Lumen updated rails_admin from version 1.4 to version 2. As an example of modernization making more modernization possible, this upgrade was only possible because of the underlying Ruby upgrade mentioned above.
- Lumen added a Content Management System (CMS) to the site and migrated all the old blog posts and pages into it. The CMS will also make it much easier for the Lumen team to share out rich multimedia content on the Lumen website and write short pieces more rapidly and effectively. Some recent blog posts made using the CMS include a write-up of Lumen’s workshop on best practices in notice and takedown transparency, “Algorithmic Copyright Management: Background Audio, False Positives and De facto Censorship” and “Pandemic Misinformation Campaign Comes to Lumen.”

Improvements to receiving and sharing notice data

- A series of improvements to the Lumen application programming interface (“API”) improved the quality of data that the database is able to accept and process, most notably with respect to URLs submitted. Some URLs are malformed when submitted, and API improvements dramatically reduced the error rate upon submission.
- Prompted by a series of distributed denial of service attacks on the Lumen site, Andromeda Yelton, the lead developer for Lumen, spent a great deal of time and effort putting into place, documenting, and continuing to improve on a series of changes for managing the requests for data that Lumen receives through its API, which represent a potential vulnerability for the site and database. These changes improved Lumen’s data security, made it easier for legitimate researchers to continue to use the site, and helped better allocate site and system resources. Andromeda later gave a presentation about her work at Code4Lib.
- Lumen added “Counterfeit” and “Placeholder” notice types. Google began accepting takedown notices referencing the presence of counterfeit goods or advertisements for them on Google sites, and in order to accommodate this new notice stream, we created a new template for such notices. Having this new type available also made it possible for the Lumen team to reach out effectively to new possible submitters, including Amazon and eBay. The “placeholder” notice is another new type that allows Lumen to accept more notice streams. Some large OSPs, like Google, receive takedown requests about which they are unable to share the details for legal reasons. Despite this, they still wish to indicate that they have received a request. In those cases, they can point their users to a “placeholder” notice that provides what details are available.

User Interface

Lumen made a series of changes regarding how visitors to the site see the URLs that are part of each notice. The changes make it possible for Lumen to present notice URLs in a truncated form to casual Lumen visitors, while still granting access to complete URLs to Lumen accredited researchers. Casual Lumen users can view one notice’s full set of URLs by providing an email address. Researchers with credentials can be granted access to notices within a limited time frame, up to a specified maximum number of notices, and with or without use of the Lumen API, and can also be given the ability to generate “permanent” versions of Lumen notice URLs that are suitable for use in published works or for citation.

(2) Research Using the Lumen Database

Lumen granted research credentials to forty-nine different researchers during the year in question. These researchers range from college undergraduates who have recently become interested in copyright law or censorship to law professors, journalists, and others in the United States. They also include international researchers from a wide range of countries (Brazil, Turkey, Ukraine, France, India, Austria, Russia, Germany, and the UK), EU-affiliated researchers, and international NGOs such as the Committee to Protect Journalists.

Many of the projects that these researchers are working on are still ongoing, such as Professor Eugene Volokh’s ongoing series of law journal articles about falsified court orders and online defamation law. Some of the completed research projects include:

- Asher-Schapiro, Avi, and Zidan, Ahmed. “India Uses Opaque Legal Process to Suppress Kashmiri Journalism, Commentary on Twitter,” Committee to Protect Journalists (blog), October 24, 2019, https://cpj.org/2019/10/india-opaque-legal-process-suppress-kashmir-twitter/.
- Fuller, Andrea, Grind, Kirsten, and Palazzolo, Joe. “Google Hides News, Tricked by Fake Claims,” Wall Street Journal, May 15, 2020, sec. Tech, https://www.wsj.com/articles/google-dmca-copyright-claims-takedown-online-reputation-11589557001.
- Akdeniz, Yaman, and Guven, Ozan (2019). “EngelliWeb 2019: An Iceberg of Unseen Internet Censorship in Turkey.” https://ifade.org.tr/reports/EngelliWeb_2019_Eng.pdf
- Hovyadinov, Sergei. “Toward a More Meaningful Transparency: Examining Twitter, Google, and Facebook’s Transparency Reporting and Removal Practices in Russia” (November 30, 2019). Available at SSRN: https://ssrn.com/abstract=3535671 or http://dx.doi.org/10.2139/ssrn.3535671
- Srivas, Anuj. “At ‘Government Request’, Twitter Blocks Tweet by BJP MP Tejasvi Surya,” The Wire, accessed October 8, 2020, https://thewire.in/tech/at-govt-request-twitter-blocks-hate-speech-including-tweet-of-bjp-mp-tejasvi-surya.
- Matias, J. N., Mou, M. E., Penney, J., and Klein, M. (2020). “Do Automated Legal Threats Reduce Freedom of Expression Online? Preliminary Results from a Natural Experiment.” https://osf.io/nc7e2/

There are also many shorter articles online referencing or relying on Lumen, such as this one, from the Sunday Guardian Live, or this one from TorrentFreak.

Over the summer of 2020, the Lumen team also worked closely with a Harvard Law School student research assistant to begin developing a taxonomy of takedown notices, their underlying data, and the various involved stakeholders. This draft taxonomy seeks to cast light on the range of interests and incentives that a given stakeholder in the notice and takedown (“N&TD”) ecosystem must balance with respect to whether a particular piece of information should come down and the degree to which there should be transparency regarding the request and any subsequent action taken. The Lumen team hopes soon to turn this working draft into a white paper, to use it as the raw material for a Lumen workshop, and to have it inform discussions on any statement of best practices regarding N&TD transparency.

(3) Outreach Events

The Lumen team’s original plan had been to hold a fairly intimate in-person workshop over the course of two days, as a way of initiating conversation between the various parts of Lumen’s user and research communities, and to plant the seed for more detailed and targeted workshops to come. Unfortunately, the COVID-19 pandemic got in the way of those plans, and as a result, the June workshop was held virtually. Although the Lumen team was of course very disappointed not to be able to hold the full in-depth workshop it had planned, especially the face-to-face network building and conversations, hosting a virtual event had some positive aspects, including lower costs and the possibility of drawing more participants. The end result was that we were able to diversify and expand the initial invitee list substantially, including a wider range of interested parties and — critically — giving the group more international representation. On that note, it meant that some foreign human rights activists who would otherwise not have been able to attend were present, including representatives of EngelliWeb, which has published a human rights report on Turkish takedowns that relies heavily on Lumen. The most recent of EngelliWeb’s reports can be found here.

Using the lessons learned from this first virtual event, and anticipating that virtual events will be the norm for the foreseeable future, Lumen has planned a series of smaller and more topically focused events for the coming fall and winter, the first few of which will be focused on learning more from current and prospective Lumen researchers.

Outreach to New Sources of Notices and Notice Data

Encouraging recipients and senders of takedown notices to share copies of those notices with Lumen has proven to be one of the biggest challenges the team has faced. Although Lumen’s name recognition has clearly improved, due in no small part to the increased publicity from outside journalism and research publications, and although those companies with whom Lumen has existing relationships are generally positive about the benefits of sharing, some institutions are still loath to share notices and notice data. Finding ways to be more effective at turning preliminary outreach into new data-sharing arrangements will be a top priority for the Lumen team in the coming year.

General Outreach and Media Participation

In addition to the June 2020 workshop mentioned above, and their ongoing work with Lumen researchers, members of the Lumen team participated in the following activities:

- Prompted by the increased attention paid to the Internet Archive after it announced its pandemic-motivated National Emergency Library, Adam Holland wrote a Medium piece examining the various points of view on the library, and urging the NEL to share any takedown notices it received with Lumen.
- Lumen provided some statistics on Google takedowns to TorrentFreak for an article.
- Lumen provided statistics on takedowns to Professor Rebecca Tushnet for her testimony to the Senate Judiciary subcommittee on the DMCA.
- Lumen Project Manager Adam Holland answered some questions from a cyberlaw professor about Google’s search index and robots.txt pertinent to the way in which the Florida Department of Law Enforcement (FDLE) operates its publicly accessible database of Florida residents who have been previously convicted of various sex offenses.
- As noted above, Andromeda Yelton gave a presentation on her work defending Lumen against a Russian botnet to Code4Lib.
- Adam spoke with a member of the data team of The Correspondent, a newsroom based in Amsterdam.
- Adam provided general commentary regarding the DMCA and United States fair use law to Daniel Laufer, a German reporter interested in how a German company, Acromax Media, may be abusing the DMCA to take down critical reporting in Albania, and was quoted in the ensuing article.
- Adam spoke to a New York Times reporter to give background information about DMCA takedowns for an article about a lawsuit between two self-published Amazon authors.
- Adam spoke with CBS News regarding a story about Professor Volokh’s research.
- Adam spoke with the San Antonio Express News for a story posted online here.

Additionally, members of the Lumen team provided background information regarding Lumen, the DMCA, and notice & takedown generally in response to inquiries from journalists, activists, legislative staffers, researchers, and other interested parties.

Of special note, on December 16, 2019, Lumen project manager Adam Holland and Lumen PI Chris Bavitz made comments to the Third Meeting of the Stakeholder Dialogue on Art. 17 of the Directive on Copyright in the Digital Single Market in Brussels. Article 17 references “Use of protected content by online content-sharing service providers.”

The presentation was well-received, and also was a boost to Lumen’s broader publicity. Lumen was invited to join a multi-stakeholder mailing list regarding ongoing Article 17 discussions, in which it continues to participate, and also made several new EU contacts, including a former member of the EU Parliament, who have kept Lumen apprised of opportunities to contribute comments or thoughts to ongoing copyright and intermediary liability-related legislative and regulatory discussions within the EU.

A copy of the remarks can be found at:

Bavitz, Chris and Holland, Adam, “Lumen Presents Comments to the Third Meeting of the Stakeholder Dialogue on Art. 17 of the Directive on Copyright in the Digital Single Market in Brussels” (December 17, 2019) https://www.lumendatabase.org/blog_entries/807

A recording of the day’s proceedings is available at:

“COPYRIGHT STAKEHOLDER DIALOGUES — Streaming Service of the European Commission,” https://webcast.ec.europa.eu/copyright-stakeholder-dialogues-16-12, (accessed October 8, 2020)

Lumen’s participation begins at approximately the 4:00:00 mark.

Other outreach efforts

The Lumen team has also had productive conversations with a variety of other activists and researchers about possible cooperative efforts, including with Carrie Goldberg, an American lawyer specializing in representing victims of so-called “revenge porn”; the “Disinfodex” project emerging from the Berkman Klein Center’s 2019–2020 Assembly Program; the Digital Public Library of America, the Reporters Committee for Freedom of the Press, Harvard’s Caselaw Access Project, and the Humboldt Institute for Internet and Society in Berlin.

Social Media Statistics

Lumen maintains a Twitter account, from which it tweets or retweets about content moderation, takedowns, censorship, academic freedoms, the “right to be forgotten” and other news related to online information. During the period from September 1, 2019 to August 31, 2020:

- The account added 986 new followers, a ~25% increase.
- The engagement rate on Lumen’s tweets and retweets went up each quarter, from 0.9% in the first to 2.0% in the fourth.
- Lumen’s tweets received 1.41 million total impressions, and an average of between 20 and 25 link clicks per day.
- Lumen’s top mention in terms of engagements (an order of magnitude greater than typical) was when a CNN reporter mentioned Lumen in a tweet referencing a takedown notice sent regarding Donald Trump’s account.

Data and Material Produced

During this year, the Lumen database added ~2.6 million more notices, referencing many millions of URLs, involving approximately fifty-eight thousand separate entities. As mentioned above in the technical improvements sections, we put into place our planned changes for displaying URLs in a truncated form to casual Lumen visitors, while granting access to full notices with complete URLs to researchers requesting access. We were and are gratified to have received relatively few complaints from users regarding the change, and none from active researchers. Current policy is to grant a single request per email address to view a notice. Lumen has consistently averaged approximately one thousand such requests per day, but may revisit and revise the bounds of that policy in the coming year.

During the time period from September 1, 2019 to September 1, 2020, Lumen received almost six hundred thousand unique visitors, who visited Lumen close to fourteen million times, viewing over nineteen million unique Lumen website pages. These traffic numbers represent an approximately 50% increase in activity from the previous year, which the Lumen team attributes to both more research activity and greater use of the site by the public at large.

The most visited Lumen URL was http://lumendatabase.org/notices/9415, which is a Google placeholder notice for search results that contain URLs reported as illegal under German youth protection laws. There is no way to be certain why this notice is visited so often, but its popularity may be a rough proxy for the number of such removals by Google in Germany and for the number of searches the internet-using German public performs for the underlying material. Or the relative novelty of the new laws could be driving interest. The second most visited Lumen page, close behind the first in total visits, was Lumen’s own search page.

Conclusion

In the year to come, the Lumen team looks forward to continued progress on all fronts, from expanding the scope, scale, and impact of research done with Lumen’s data and gathering new sources of takedown notice data, to improving the Lumen user experience and adding new members to the Lumen team. There will be more events, whether virtual or in person, more publications, and more opportunities to get involved.

Lumen — The Year in Review: was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 13. October 2020

Omidyar Network

A Decade of Digital Public Goods: Holding Up the Digital Economy, Preventing Monopolies, and…

A Decade of Digital Public Goods: Holding Up the Digital Economy, Preventing Monopolies, and Safeguarding Freedoms for All

By CV Madhukar, Managing Director, Responsible Technology

Long lines form with South Africans waiting to receive COVID relief funds.

Digital platforms, such as digital identity and interoperable payment systems, form the scaffolding that holds up our entire digital economy, improving the delivery of basic services to millions.

While not visible, this digital architecture has the power to enhance many different aspects of our lives, wherever we live on the globe. Identification enables people to open new bank accounts and access public services like education and health, while interoperable payment systems can drive down transaction costs for consumers.

Since the start of the COVID-19 pandemic, we’ve seen just how essential digital infrastructure can be. South Korea and Singapore were able to use theirs to track and trace who had contracted the virus and with whom they had been in close proximity. India used its digital infrastructure to send more than $200 million in emergency aid directly to more than 340 million citizens in a matter of days. Estonia, meanwhile, used its digital infrastructure to create digital immunity passports, which allow those who have recently tested negative for the novel coronavirus to return to work in person.

The problem is that this kind of digital infrastructure is only present in a few countries around the world — which leaves the rest of us to depend on slower, less reliable systems. Even in countries as wealthy as the US, unemployment benefits, which could be paid in seconds via digital infrastructure, can take months to process due to antiquated technology and processes. And when it comes to countries with far fewer resources, especially in Africa, the lack of digital infrastructure can create even bigger impediments to accessing even basic services. For example, the lack of formal identification is one of the biggest barriers to financial inclusion. And even if governments and philanthropists have every intention of supporting vulnerable people, these will remain mere intentions without a mechanism through which they can get people the resources they need.

That’s why Omidyar Network is working to make sure countries have the technical resources to build the digital infrastructure that is powering so much of the developed world — and to do it in a way that is cost-saving, open source, and empowering to local entrepreneurs, who will no longer have to be dependent on big technology companies.

To be clear, digital infrastructure must also be accompanied by rigorous laws and policies to protect individual privacy and prevent harms. Digital infrastructure must be built so people have control over how their data are used, and data privacy must be built-in, not opt-in. It should be seen as a public good that enables private-sector growth, prevents digital monopolies, and safeguards the rights and freedoms of all.

On September 18, New America organized a virtual event, “2030: A Decade of Digital Public Goods for Effective Institutions,” on the sidelines of the UN General Assembly. Philanthropists, heads of state, government regulators, civil society leaders, and startups gathered to launch a decade of work to develop digital goods for the public sector. Several speakers pointed to the shortage of public investment in building such infrastructure.

We urgently need philanthropists to not only bring development assistance to the countries that need it, but to also invest in the infrastructure that makes it possible to get there. We need governments to own this agenda and build tech so the management of large-scale, digital infrastructure lies within the country’s elected institutions. And in an era where data has become a key factor of economic development, we need to focus on empowering individuals with agency, thereby generating the greatest benefit to the people.

That’s not to say the private sector won’t reap any rewards from these investments; on the contrary. In the same way companies benefit from public works like roads, bridges, and tunnels, they will also be able to innovate on top of this foundational layer of digital infrastructure — and reach customers they never could have found otherwise. Likewise, this technology will be able to multiply returns on philanthropic investments by exponentially expanding the universe of people they can reach across the world.

Put simply, digital infrastructure is a first-mile necessity for an equitable digital world.

This could have a positive, transformative impact on the global economy and global development. But it’s going to take all of us to install safe and inclusive digital infrastructure all over the world. If we make this kind of investment, we can begin to build a world where gaps in technology never limit the programs governments can implement, the progress they can make, or anything people do to improve their own lives.

Watch the full event recording online to hear how others in the conversation proposed advancing the #DigitalDecade to support public institutions.


Oasis Open

Four new Project Specifications approved by the #OSLC Open Project


New specifications advance OSLC initiative to enable interoperation of change, configuration, and asset management processes across application and product lifecycles.

OASIS is pleased to announce that OSLC Core v3.0, OSLC Query v3.0, OSLC Requirements Management v2.1, and OSLC Change Management v3.0 from the Open Services for Lifecycle Collaboration Open Project [1] have been approved as OASIS Project Specifications.

These join the previously announced OSLC Quality Management v2.1 as the first approved Project Specifications from the Open Projects program. Managing change and configuration in a complex systems development lifecycle is very difficult, especially in heterogeneous environments that include homegrown tools, open source projects, and commercial tools from different vendors. The OSLC initiative applies World Wide Web and Linked Data principles to enable interoperation of change, configuration, and asset management processes across a product’s entire application and product lifecycle.

OSLC Core defines the overall approach to Open Services for Lifecycle Collaboration based specifications and capabilities that extend and complement the W3C Linked Data Platform. OSLC Query provides a mechanism for a client to search for RDF resources that match given criteria. OSLC Requirements Management supports key RESTful web service interfaces for the management of Requirements, Requirements Collections, and supporting resources defined in OSLC Core. OSLC Change Management defines a RESTful web services interface for the management of product change requests, activities, tasks, and relationships between those and related resources such as requirements, test cases, or architectural resources.
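The client-facing surface of OSLC Query is a small set of reserved query parameters appended to a provider’s query capability URL. As a rough sketch (the base URL, prefix choices, and helper function below are illustrative assumptions, not drawn from the specifications announced here), a query request might be assembled like this:

```python
# Hypothetical sketch of building an OSLC Query URL; only the parameter
# names (oslc.prefix, oslc.where, oslc.select) come from OSLC Query.
from urllib.parse import urlencode

def build_oslc_query(base_url, prefix=None, where=None, select=None):
    """Assemble an OSLC Query URL from the reserved query parameters."""
    params = {}
    if prefix:
        params["oslc.prefix"] = prefix   # namespace prefixes used below
    if where:
        params["oslc.where"] = where     # filter expression
    if select:
        params["oslc.select"] = select   # properties to return
    return base_url + "?" + urlencode(params)

# Example: find change-tracked resources by title (all values hypothetical).
url = build_oslc_query(
    "https://example.com/qm/queryBase",  # hypothetical query capability URL
    prefix='dcterms=<http://purl.org/dc/terms/>',
    where='dcterms:title="Weather Station"',
    select="dcterms:title,dcterms:created",
)
print(url)
```

In practice a client would discover the real query capability URL via OSLC Discovery (defined in OSLC Core) rather than hard-coding it as above.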

These Project Specifications are OASIS deliverables, completed and approved by the OP’s Project Governing Board and fully ready for testing and implementation. The applicable open source licenses can be found in the project’s administrative repository at https://github.com/oslc-op/oslc-admin/blob/master/LICENSE.md

The specifications and related files are available at:

OSLC Core Version 3.0
Project Specification 01
17 September 2020

– OSLC Core Version 3.0. Part 1: Overview
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/oslc-core.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/oslc-core.pdf

– OSLC Core Version 3.0. Part 2: Discovery
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/discovery.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/discovery.pdf

– OSLC Core Version 3.0. Part 3: Resource Preview
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/resource-preview.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/resource-preview.pdf

– OSLC Core Version 3.0. Part 4: Delegated Dialogs
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/dialogs.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/dialogs.pdf

– OSLC Core Version 3.0. Part 5: Attachments
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/attachments.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/attachments.pdf

– OSLC Core Version 3.0. Part 6: Resource Shape
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/resource-shape.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/resource-shape.pdf

– OSLC Core Version 3.0. Part 7: Vocabulary
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/core-vocab.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/core-vocab.pdf

– OSLC Core Version 3.0. Part 8: Constraints
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/core-shapes.html https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/core-shapes.pdf

– OSLC Core Vocabulary definitions file:
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/core-vocab.ttl

– OSLC Core Resource Shape Constraints definitions file:
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/core-shapes.ttl

OSLC Query Version 3.0
Project Specification 01
01 October 2020

https://docs.oasis-open-projects.org/oslc-op/query/v3.0/ps01/oslc-query.html https://docs.oasis-open-projects.org/oslc-op/query/v3.0/ps01/oslc-query.pdf

OSLC Requirements Management Version 2.1
Project Specification 01
03 September 2020

– OSLC Requirements Management Version 2.1. Part 1: Specification
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-spec.html
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-spec.pdf

– OSLC Requirements Management Version 2.1. Part 2: Vocabulary
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-vocab.html
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-vocab.pdf

– OSLC Requirements Management Version 2.1. Part 3: Constraints
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-shapes.html
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-shapes.pdf

– Requirements Management Vocabulary definitions file:
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-vocab.ttl

– Requirements Management Resource Shape Constraints definitions file:
https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/requirements-management-shapes.ttl

OSLC Change Management Version 3.0
Project Specification 01
17 September 2020

– OSLC Change Management Version 3.0. Part 1: Specification
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-spec.html https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-spec.pdf

– OSLC Change Management Version 3.0. Part 2: Vocabulary
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-vocab.html https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-vocab.pdf

– OSLC Change Management Version 3.0. Part 3: Constraints
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-shapes.html https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-shapes.pdf

– Change Management Vocabulary definitions file:
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-vocab.ttl

– Change Management Resource Shape Constraints definitions file:
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-shapes.ttl

Distribution ZIP file

For your convenience, OASIS provides complete packages of the specifications and related files in ZIP distribution files. You can download the ZIP files at:

– OSLC Core: https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps01/core-v3.0-ps01.zip

– OSLC Query: https://docs.oasis-open-projects.org/oslc-op/query/v3.0/ps01/query-v3.0-ps01.zip

– OSLC Requirements Management: https://docs.oasis-open-projects.org/oslc-op/rm/v2.1/ps01/rm-v2.1-ps01.zip

– OSLC Change Management: https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/cm-v3.0-ps01.zip

Members of the OSLC OP Project Governing Board approved these specifications by Special Majority Votes [2] as required by the Open Project rules [3]. Our congratulations to the participants and contributors in the Open Services for Lifecycle Collaboration Open Project on their achieving this milestone.

Additional references:

[1] Open Services for Lifecycle Collaboration Open Project
https://open-services.net/

[2] Approval ballots:

– Core Management: https://lists.oasis-open-projects.org/g/oslc-op-pgb/topic/vote_on_oslc_core_v3_0_1/76734933?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,76734933
– Query Management: https://lists.oasis-open-projects.org/g/oslc-op-pgb/topic/vote_on_oslc_query_version/76871816?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,76871816
– Requirements Management: https://lists.oasis-open-projects.org/g/oslc-op-pgb/topic/vote_on_oslc_requirements/76430414?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,76430414
– Change Management: https://lists.oasis-open-projects.org/g/oslc-op-pgb/topic/vote_on_oslc_core_v3_0_1/76734933?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,76734933

The post Four new Project Specifications approved by the #OSLC Open Project appeared first on OASIS Open.


eSSIF-Lab

Meet the eSSIF-Lab’s ecosystem: the Infrastructure Development Instrument first winners

eSSIF-Lab has already kicked off the programme for the 7 proposals selected, out of the 36 that were submitted before the first deadline of the Infrastructure-oriented Open Call, to contribute open source technical enhancements and extensions of the SSI Framework of the project. These are:

- BRIDGE by SICPA Spain S.L.U.: BRIDGE for ledger-agnostic interoperable issuance and verification of W3C verifiable credentials.
- Capability Based Authorization System by Jolocom: a capabilities-based authorization system, utilizing DIDs, Verifiable Credentials, Verifiable Presentations, etc.
- eSSIF-TRAIN by Fraunhofer-Gesellschaft: trust management infrastructure component.
- Evernym Open Sourcing Project by Evernym UK: open sourcing Evernym’s credential exchange platform.
- Self-Sovereign IDentity Online by UBICUA: online passwordless authentication based on SSI and FIDO2.
- SSI eIDAS Bridge by Validated ID, S.L.: an eIDAS bridge, a component that proposes to enhance the legal certainty of any class of verifiable credentials.
- Verifiable Credential Authority by NYM Srl: a DLT/blockchain-independent platform to issue and verify certified attributes and claims, under different formats, and for any SSI system.

The Infrastructure Development Instrument will support these innovators to provide scalable and interoperable open source SSI components for eSSIF-Lab Framework with up to € 155,000 funding.

Selected companies under this instrument will have the opportunity to take part in a very active and collaborative ecosystem with other eSSIF-Lab participants to:

- improve the framework’s vision, architecture, specifications, etc.
- ensure interoperability (at the technical and process levels) and address each other’s issues jointly.

Would you like to join them?

The Infrastructure-oriented Open Call is still admitting applications (the next deadline is 4 January 2021).

Apply NOW!

Follow the updates on this initial batch of winners, on the current open call, and on the next deadline (in June 2021) in the eSSIF-Lab space of the NGI Online Community!


Meet the eSSIF-Lab’s ecosystem: the 1st Business-oriented Programme participants

After a tough competition among interesting proposals and 2 days of intense online Hackathon, eSSIF-Lab has selected the 15 best projects out of the 19 that succeeded in the open call to join the 2nd stage of its First Business-oriented Programme:

- CommercioKYC by Commerc.io: easy KYC with Self-Sovereign Identity.
- Universal DID SaaS by Danube Tech: building a hosted service that allows developers to easily work with Decentralized Identifiers (DIDs), without having to set up their own infrastructure.
- SSI-enabled “Contractual Event” Passport by Domi Labs: enabling businesses to integrate SSI into their contractual record management processes.
- e-Origin Wallet by e-Origin: digital wallet of verifiable credentials for the products’ origin.
- Gataca Connect by Gataca España: trusted Single Sign On for a human-centric Internet.
- SSI4DTM by JoinYourBit: Self-Sovereign Identity for Digital Transaction Management, a Digital Transaction Management platform to execute any cross-border transactions: NDAs, contracts, bids, etc.
- Universal Backup Service (UBS) for SSI Agents by Jolocom: a vendor-neutral, plug-and-play component for equipping SSI Agents with a service to generate interoperable backups of end user data.
- SSI-as-a-Service by Netis: simplifying SSI integration and adoption.
- Gaya by NYM srl: supports public notaries in remotely incorporating Limited Liability Companies, providing all the tools they need to apply digital transformation to their business.
- NYM Credentials for Self-Sovereign Identity by Nym Technologies: a bulletin-board and search system for privacy-enhanced services.
- Digital ID and signatures by Off-Blocks: onboarding businesses and organizations in a self-sovereign world through user-friendly and low-cost control over trusted digital identities, verified credentials and digital signatures.
- IRIS – Discourse Community Credentials by Resonate Beyond Streaming: a Discourse plugin that allows SIOP OIDC login and community-friendly transparent recognition, award and governance of verifiable credentials as user-friendly ‘badges’.
- Dynamic Data Sharing Hub with Consent Flow by The Human Colossus Foundation: brings SSI benefits to the wider economy by enabling a privacy-preserving full data life cycle including consent.
- Trusted Digital Assistant – a data operator solution by unikk.me: bringing the fundamental right to an autonomous identity to every person by enabling trusted parties to act as a ‘trusted digital assistant’ in catering.
- User-friendly Management Interface for Verifier Policies by Verifiable Credentials: verifiable credentials for reviewers of scientific publishing and research funding proposals.

The 8-month Business-oriented Programme offers business and technical support to integrate SSI technology with market propositions, and it is structured in 3 competitive stages (only the best performing projects in each phase advance to the next).

The initial stage, which spanned the first month, was intended for the teams to work on the proof of concept of their projects and to start building their business case. They presented the outcomes of their work during business pitches and technology demonstrations at the online Hackathon, which took place on September 16th and 17th. All 19 projects that took part in this first stage, including those by Filancore, Wellbeing cart, MyData Global and Spherity, will receive € 15,000 in funding to reward their efforts.

During the second stage, which already is ongoing, the 15 selected teams will work during the next 5 months on their developments to create a mock-up and a Prototype and on their business models and will cooperate among them to create a real ecosystem ensuring interoperability and scalability. Funding linked to the participation in this stage is € 50,000.

Only the best-in-class teams from those will reach the third stage and focus on testing their MVPs and defining business models, with a final € 41,000 in funding.

Are you curious about which ones will be those completing the programme?

Follow up the course of the programme at eSSIF-Lab space in NGI Community and join us in congratulating all the participating teams on their development efforts so far!

Join NOW!

Last but not least, the 2nd Business-oriented Call is expected to launch in late spring 2021, and it will be open to SMEs and startups developing commercial SSI-based applications and services with a focus on the verticals of HealthTech, eGovernment, and Education, or competing in the generic track of Open Disruptive Innovation. Follow the updates in the NGI Online Community!


Me2B Alliance

Global Privacy Control -> W3C work

Hi friends,

If you're interested in getting involved in the Global Privacy Control Spec, you can join the W3C Privacy CG https://www.w3.org/community/privacycg/, and participate in the GPC discussion here:  https://github.com/privacycg/proposals/issues/10 

I sent my concerns to the Certification WG last week so won't repeat them all here. 

My main problem with the spec is that it is functionally an opt-out, meaning that the individual must take a deliberate action to opt out of the sale of their data. Once again, the burden is put on the individual.
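For context, the GPC proposal conveys the preference as a `Sec-GPC: 1` request header (and as `navigator.globalPrivacyControl` in browser script). A minimal server-side sketch, with a hypothetical function name not taken from the spec, makes the opt-out default concrete: when the header is absent, no preference has been expressed, so sale proceeds.

```python
# Hypothetical sketch of GPC's opt-out semantics; only the Sec-GPC header
# and its value "1" come from the GPC draft, the rest is illustrative.
def sale_of_data_permitted(request_headers: dict) -> bool:
    """'Sec-GPC: 1' expresses a do-not-sell/share preference. Any other
    value, or no header at all, expresses nothing, so by default the
    sale is permitted -- the burden falls on the individual to object."""
    return request_headers.get("Sec-GPC") != "1"

print(sale_of_data_permitted({"Sec-GPC": "1"}))  # user set the signal
print(sale_of_data_permitted({}))                # default: no signal sent
```

A privacy-by-default design would invert this: the server would require an affirmative signal before any sale, rather than the absence of an objection.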

A central thesis of the CCPA, and of the folks drafting the spec, seems to be that "Privacy by default is great but has even more legal teeth with this preference chosen explicitly." [quote from a GitHub issue by Henry Lou]

Richard and I discussed this a bit earlier in the year, and I'm still confused about the legal foundation for this assertion, and why it's being framed like this.  Wouldn't Privacy by default be kinder, and more respectful? Better? 

In any case, I highly encourage you to get involved directly in the work.  (Because there's a lot of interesting stuff happening in the Privacy CG.)

Lisa


Monday, 12. October 2020

Decentralized Identity Foundation

Drilling down: Open Standards


What standards are and what it means to make them openly

In our last post in the series, we drilled down into a granular definition of “open-source” development and the thinking that goes into a choice of license. In this post, we drill down into what “standards” are, and the characteristics of an “open process” for developing standards. Supporting these open standards is where the bulk of DIF’s efforts and resources are focused. In the next post in the series, we will turn to how open source and open standards work together to create new business models and strategies with real-world consequences for cooperation, “coöpetition,” and healthy markets.

Photo by Jim Quenzer

It is worth noting up front that the term standard has two slightly different usages. One is related to quality assurance or business process compliance — think of marketing that references “the highest standards of _____ ”. This refers to specifications and metrics used to grade outputs in a regulated industry or sector, like “Grade A Beef”. These are set and enforced by some combination of regulators, private-sector auditors, and industry associations. Outside of software, this is usually what people refer to by “standardization,” and a specialist in any industry can wax eloquent on the politics and the consequences of decisions by standards bodies fixing those specifications and metrics.

In software and other IP-driven industries like medicine or engineering, standards have more to do with control and portability of data, enforcing measurable compatibility with the products of others. A common metaphor for this kind of standardization is the width or “gauge” of railway tracks — how far apart the rails are is a somewhat arbitrary decision but if they are different between two countries or regions they will have completely distinct rail systems. Software standards work much the same way, and for this reason standardization is often a prerequisite of procurements from government or substantial investments in the private sector. No one wants to invest in locomotives if all the places they want to take it… use different rails.

In the software world, as in the world of trains, standards define a given market for products and services. Compliance tests make objective (and far less controversial) the question of whether or not a given product meets a given set of requirements. Explicitly-defined, testable protocols make products provably swappable and/or interoperable. Open standards processes try to define those tests and protocols in the open, with input from initial and future contenders in that market, speeding up the timeline to legitimacy by incorporating major players and incorporating widely-sourced input.

Standards processes, as inherited from tangible industries

One way to explain standards processes is to begin with some examples in the physical world, which we learned about as matters of fact in our education. These evolved to support the creation of precise manufacturing methods and to support more seamless commerce, giving stability and safety to commodity markets. Weights and measures are classic standards that support both: after all, there is nothing natural about our units of measure or currencies, as anyone who’s used both metric and non-metric measures knows all too well. How long is something? How heavy is something? How much liquid is in a gallon or liter?

Standardizing these kinds of measures was a quantum leap for commerce and mercantilism: it gave everyone a common reference point and enabled accounting systems (and “ledgers”) of vastly wider scope and simplicity. The fact that standards are decided at the international level means they can operate on a global scale. The metric system, for example, is defined by the International Bureau of Weights and Measures, an international standards development organization (SDO). There is little debate in 2020 about what a gram is (it is the mass of one milliliter of water), but things like tolerance and accuracy in weighing and marking systems are still an ongoing matter of debate and specification carried out there.

Another physical-world standard is the shipping container. A whole global infrastructure has been built around this standard-sized container, which can be put on truck beds, loaded onto train cars, and stacked in ships that travel around the world. It also means that if you can fit your goods inside that box, they can get almost anywhere in the world, because a standards-based infrastructure exists to handle them. Massive economies of scale (which have terraformed geopolitics by enabling high-throughput, high-efficiency global trade networks) are unlocked by this kind of standard, which lets the movement of containers (in most cases with no knowledge, or no direct knowledge, of what is inside them) become a kind of commodity whose price stabilizes and steadies far-flung trade. The analogy to the “packets” or “data points” of modern information technology has been a mainstay of thinking about software business models for decades, and W3C Verifiable Credentials are no exception.

Similar standards also govern the electricity coming out of the walls in our houses, which has become so reliable and ubiquitous over the last century that few people outside of the relevant industries think much about it, or how dangerous it would be if standards were loosened. The “amount” (voltage and frequency) of electricity, as well as the physical form factor of plugs, wiring, and circuit boards are all standardized at the national or regional level. This creates regional economies of scale in both the delivery of energy as a resource and in the manufacturing of electricity-powered products. Indeed, much of software engineering as an academic discipline and a labor market, as well as many standards around data and their governing standards bodies, evolved out of the electrical and communications infrastructure that preceded the advent of modern software.

Standards processes, for bits and bytes

Digital technology also needs explicit and testable standards, deliberated by specialists and engineers in a transparent process for the common good and for the stability of huge systems of capital and human effort. As the internet has evolved, the bulk of this effort has focused on the definition of common protocols that allow information to be exchanged by different computer systems, potentially written in very different languages and operating across very different topographies, with very different inputs and automations and governance structures.

The protocols that make up the modern internet were originally created by the group that began building the ARPANET. In 1986 the Internet Engineering Task Force (IETF) formed, and it is still the steward of many key protocols that form the basis of much of the internet, particularly around security and load-balancing at massive, infrastructure scale. The World Wide Web Consortium (W3C) was formed in 1994 and works to develop the software standards for the World Wide Web’s core technologies: browsers and servers.

One example we all use every day is email, or as it was once known, “electronic mail.” How addressing and discovery work, the limits and parameters of a universally recognizable address, and so on are all written up in authoritative specification documents. Colloquially, these documents are often referred to as “the standard,” or “the RFCs” (“Request[s] for Comment,” referring to the collective editorial process by which standards are written). Email is typical, however, in that a patchwork of multiple interlocking protocols is actually required to send and receive emails.

Although this list of protocols needs to move slowly to give the end-consumer stability and assurance, it is actually in a state of permanent minor flux, as individual protocols are iterated and upgraded, support for older versions fades away, and new protocols are added that take advantage of security or performance enhancements reaching critical mass elsewhere. For decades, the dominant protocols in email have been Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), and underlying handshake/transport protocols like Transport Layer Security (TLS). As you can see by clicking any of these links, the “standard” (current best practices and specifications) is composed of a patchwork of iterating component specifications with narrower scope, allowing a kind of modular, incremental evolution that lets markets and applications phase components and subsystems in and out over time without interruptions to service or sudden changes in user experience.
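The interoperability these documents buy is easy to see in practice: any mainstream language’s standard library can produce an RFC 5322 message that any compliant mail server and client will understand. A minimal sketch in Python (the addresses are hypothetical):

```python
from email.message import EmailMessage

# Compose a message; the library handles RFC 5322 header folding and encoding.
msg = EmailMessage()
msg["From"] = "alice@example.com"   # hypothetical sender
msg["To"] = "bob@example.org"       # hypothetical recipient
msg["Subject"] = "Interoperability"
msg.set_content("Any RFC-compliant client can read this.")

# The wire format is just "Header: value" lines, a blank line, then the body --
# exactly what SMTP servers relay and IMAP clients retrieve.
wire = msg.as_string()
print(wire)
```

Actually sending it would take a few more lines with `smtplib` over TLS; the point is that the serialized form is defined by the specification, not by any one vendor’s software.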

Another example we use every day is “web standards,” i.e., HTML, HTTP and CSS. These protocols are the standards that let any web server present information according to prescriptive formats which will be displayed in roughly the same way by any compliant web browser. (Here, as in weights and measures or electricity, there are always slippages and margins for error, as any front-end developer can tell you.) This enables a diversity of web servers and a diversity of browsers. There are of course open source examples of each (the Apache HTTP Server and the Firefox browser being two), but there are also proprietary versions as well.
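Because the request and response formats are specified, any client can talk to any server. A small sketch of that substitutability, using only Python’s standard library, with a throwaway local server standing in for “any web server”:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    """A minimal 'web server': it speaks standard HTTP, nothing vendor-specific."""
    def do_GET(self):
        body = b"<html><body><p>hello, standards</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Port 0 asks the OS for any free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any compliant client can fetch the page, because both sides
# implement the same specified request/response format.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
status, ctype, body = resp.status, resp.getheader("Content-Type"), resp.read()
conn.close()
server.shutdown()
print(status, ctype)  # 200 text/html
```

Swap either side for Apache, Firefox, or curl and the exchange still works; that substitutability is exactly what the standard guarantees.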

To many observers, the degree of openness in the codebase matters less than the diversity and complexity of organizations involved in the governance of the protocols: in this regard, control of “browser standards” might be endangered by increasing market power on the part of browser vendors owned by private entities that also have outsized and direct control over operating systems and app store platforms, upon which any competing browser would depend directly. Neither open source nor open standards are a guarantee of a healthy and open market, although both generally contribute to that end.

Standards and Proto-standards

Standards development is generally a slow process taking years, driven by a balance between (rough) consensus among stakeholders refining and iterating requirements, and running code against which those requirements can be measured. Depending on the context, stakeholders can include vendors or commercial actors, regulators, consumer advocacy groups, affected industries, and/or individuals. The terms “running” and “code” both cover a lot of territory, but it would be impossible to arrive at a standard without at least two independent, functioning pieces of code that have been tested, audited, and hardened, ideally by some deployment at scale.

In some cases, this is where an open standards process begins: two maximally independent implementations decide to cooperate for greater adoption and maturity, and seek out a venue for the relevant stakeholders to debate the merits and trade-offs of their current codebases and future variations or possibilities. Different standards bodies can be more or less public in their processes, more or less transparent in their results, and more or less complex in their rules of engagement: indeed, some operate according to a rulebook as complex as that of a parliament, and a style guide as exacting as an academic institution.

For the most open of standards, however, it is possible to work in the open, deliberatively and transparently, long before this “first” step. Working in an industry group, trade association, or other neutral venue can speed up the work toward a standard by front-loading the collaboration, peer review, market testing, and process legitimacy needed to get an idea ready for the market and standardization sooner. These “pre-standards” venues are like containers where all those participating have signed an IPR agreement up front, which means their work is unencumbered by patents or royalties and safe from being front-run or patent-trolled.

The products of these pre-standards processes are often called “pre-standard specifications” or “proto-standards” (if they are more ambitious and protocol-shaped). Groups that develop proto-standards often have explicitly-defined processes for how to publish a proto-standard, as well as when and how to hand off a sufficiently matured proto-standard to an SDO process for more formal and authoritative standardization.

We should not overstate the distinction between standards and pre-standards, however: there are many shades of grey in between. “Standards-track” and “non-standards-track” specifications alike can be more mature or legitimate depending on the parties involved, the pertinent SDOs (or lack thereof), and of course the process used to create them. For this reason, work items and working groups at DIF avail themselves of multiple procedural options and tailor their processes to the context, which is why “scoping” and “chartering” processes can take months to hammer out between organizations and their legal departments. This is also why specifications developed at DIF without being further hardened in a more formal standards body can sometimes be called “standards,” in the sense that they are adopted in the industry and function as standards. How the market and the relevant industries treat, trust, and rely on a specification is the ultimate judge of when it can be called the authoritative text for a standard process or procedure!

In our next drill-down, we’ll go into more detail on DIF’s processes, how you can get involved, and what decisions go into assuming an active role in a working group or work item.

Drilling down: Open Standards was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Federal Blockchain News

Redefining the Future of the Economy: Dawn Talbot & Ralph Benko

In their new book "Redefining the Future of the Economy: Governance Blocks & Economic Architecture," co-authors Dawn Talbot and Ralph Benko argue that the blockchain industry has failed to live up to its promise because it cannot reflect complex decision processes. They propose a mathematical framework for automated decision-making that meets fiduciary standards. Without this approach, they warn, the US could become a second-rate technology power, dominated either by coders with wild-west mentalities or by China’s investment in blockchain, which they call this generation’s Sputnik moment.

Saturday, 10. October 2020

FIDO Alliance

Deploying FIDO in Japan: An Interview with SBI Sumishin Net Bank


SBI Sumishin Net Bank is an Internet-focused bank jointly established in 2007 by SBI Holdings and Sumitomo Mitsui Trust Bank. In keeping with their aim to be recognized for innovation, the bank deployed FIDO Authentication in July 2020. We had an interview with the bank about the details of their deployment.

Q. Describe your service and how it’s using FIDO Authentication.

We have incorporated FIDO-compliant authentication into our existing “SBI Sumishin Net Bank” mobile application. Now, a single application provides both banking and authentication functions to our customers. This eliminates the need for our customers to enter passwords and verification codes for each transaction; instead, they can simply log in to the SBI Sumishin Net Bank App with biometric authentication. Even when transactions are made from a PC or other non-mobile environments, the application will confirm and approve the transaction details before they are executed, preventing unauthorized transfers. Furthermore, when using the login approval function, only the registered smartphone can grant approval, which prevents unauthorized logins.

Q. What FIDO specification(s) did you implement? 

We have deployed a solution based on FIDO UAF, which uses biometrics (fingerprint and facial recognition) and PIN as the authentication methods.

Q. What other approaches did you consider before choosing FIDO? 

We looked at continuing with the existing smartphone application “Smart Authentication,” a separate application the customer would have to use to authenticate logins and bank transactions. However, we saw it as difficult to operate two applications separately, and as a burden for our customers to have to use two separate applications just to bank with us.

Q. Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?

Although there are various types of authentication methods available, two factors made FIDO very appealing to us: FIDO Authentication is a global standard developed by the FIDO Alliance, a global consortium, and we have seen it increasingly being deployed both in Japan and globally.

Q. Why did you decide on a standards-based approach? 

There are two main reasons why we chose to take a FIDO standards-based approach.

First, FIDO Authentication provides stronger security. FIDO Authentication enables safe exchange of authentication results over the network, and the credential is stored only on the device that performs the authentication (in our case, the smartphone) and does not need to be transmitted over the network or stored on the server side. 

Second, FIDO improves convenience for our customers. By incorporating authentication into our existing banking app, we are making it possible to complete both banking and authentication functions in a single app, enabling smooth transactions without having to enter passwords or other information.

Q. What steps were involved in your roll out of FIDO Authentication? Did you work with a partner? 

We implemented the FIDO-compliant “SaAT Pokepass Authentication Service” provided by Net Move Corporation (“Net Move”), a wholly owned subsidiary of SBI Sumishin Net Bank. The new authentication function “Smart Authentication NEO” was deployed by incorporating the client SDK for this service into the bank application.

Q. What other data points can you share that show the impact FIDO authentication has had?

On July 31, 2020, we launched a new authentication feature, “Smart Authentication NEO.” On the quantitative side, the number of new registered customers has reached approximately 100,000 in just three weeks since its launch, and we expect this number to increase further in the future.

On the qualitative side, many customers have commented on the convenience of being able to use a single app for both banking and authentication functions.

Q. What advice would you give to other organizations considering rolling out FIDO authentication? 

Again, our company’s FIDO authentication uses Net Move’s “SaAT Pokepass Authentication Service.” By collaborating with Net Move, we were able to deploy the new authentication function “Smart Authentication NEO” in a short period of time.

In addition to FIDO authentication, Net Move already has an installed base at more than 100 financial institutions with services such as “SaAT Netizen,” an anti-fraudulent-remittance service, and we believe that Net Move can help other organizations solve these issues.

Q. What role do you see FIDO Authentication playing for your company in the future?

The “Smart Authentication” service will be discontinued after January 2021, and we will move exclusively to the FIDO-enabled “Smart Authentication NEO” app. We see the FIDO-enabled app as the key authentication function that will further allow us to provide secure and convenient experiences for our customers.

Q. If you are able, please provide a quote from an executive regarding this deployment and the impact FIDO has had for your organization.

Quote from the project manager of SBI Sumishin Net Bank:

“Our goal is to revolutionize financial services and make society more comfortable and convenient by utilizing the most advanced technology with a customer-centric approach. Security is an extremely important factor in achieving this goal, and we believe that the introduction of FIDO will make a significant contribution.”

The post Deploying FIDO in Japan: An Interview with SBI Sumishin Net Bank appeared first on FIDO Alliance.

Friday, 09. October 2020

Me2B Alliance

All Aboard! Me2B Alliance Membership Countdown


Two weeks from today, the Me2B Alliance will be transitioning to a membership organization. What this means for you, as part of the Alliance community, is an opportunity to become an influential voice in the respectful technology movement.

Starting October 22, all Alliance work will be taking place in the membership portal.

To remain active in the Alliance community, please become a member before October 22.

Better yet, join today!

For your convenience, here’s the link on the new Me2BA.org website: https://me2ba.org/membership/

We look forward to seeing you very soon.

Lisa LeVasseur

Executive Director

Kantara Initiative

Kantara to Assess GSA Login Service’s Compliance With NIST Digital ID Guidelines


The General Services Administration has selected Kantara Initiative to perform third-party evaluation of the federal government’s unified login service against digital identity protection standards set forth by the National Institute of Standards and Technology.

Kantara said Wednesday it will assess whether the login.gov portal operates in accordance with NIST Special Publication 800-63-3 provisions for user identity proofing and authentication.


Kantara and SAFE Identity to support each other’s digital identity Trust Marks


GSA seeks Kantara certification for NIST standard.

Trust Framework Providers SAFE Identity and Kantara Initiative have reached a reciprocal agreement to consolidate digital identity assessments, each endorsing and supporting the other’s public key infrastructure (PKI) and non-PKI domain Trust Frameworks, along with their certified identity providers.

The collaboration simplifies digital identity assessment and Trust Mark processes for companies in healthcare, financial services and other sectors to reduce organizational risk.

Thursday, 08. October 2020

Kantara Initiative

Login.gov to be third-party assessed against NIST’s digital identity guidelines


The General Services Administration wants to build trust in Login.gov‘s ability to verify users’ identities for any agency using the service, so it’s having the technology assessed by a third party.

Kantara Initiative will assess the conformity of Login.gov’s identity proofing and authentication with the National Institute of Standards and Technology‘s Special Publication (SP) 800-63-3, the government’s digital identity guidelines.

Wednesday, 07. October 2020

Kantara Initiative

SAFE Identity and Kantara Partner to Expand Trust Frameworks

SAFE Identity and Kantara Initiative, two globally recognized Trust Framework Providers, are focusing on extending digital identity trust and security. The two organizations have announced a reciprocal agreement to endorse and support each other’s Trust Frameworks, which are used for Public Key Infrastructure (PKI) and non-PKI domains, together with their certified identity providers.



Kantara Initiative Welcomes the General Services Administration


Kantara to Provide Its NIST SP 800-63-3 Conformity Assessment For Identity Proofing & Authentication

WAKEFIELD, Mass., USA – October 7, 2020 — Kantara Initiative announced today that the United States General Services Administration (GSA) is joining Kantara and plans to put its Login.gov service through the Kantara assurance and approval program based on third-party assessment against the requirements detailed in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-63-3 Digital Identity Guidelines.

“NIST SP 800-63-3 is focused on modernizing the policy in keeping with rising threat levels for identity proofing, verification and authentication whilst also improving privacy in the overall digital user experience,” said Colin Wallis, Executive Director, Kantara Initiative.  “Kantara has developed assessment criteria against each of 63-3’s normative requirements to drive consistency in assessments of applicable credential service providers (CSPs) done by Kantara 3rd party accredited assessors. Consistency in assessments drives long term integrity in Kantara’s Trust Framework and Trust Marks internationally, thereby building trust and confidence for all stakeholders in the wider digital economy.”

With the GSA’s oversight, Login.gov provides simple, secure, private access to participating US government digital services online. A single privacy-aware login.gov account can sign in to multiple government agencies, making managing federal benefits, services and applications easier and more secure for the public. Login.gov is used by over 60 applications at 17 agencies including the Department of Defense, Department of Homeland Security, Department of Energy and the Executive Office of the President. To date, over 25 million people have signed up to use login.gov.

Kantara is one of the leading global consortiums improving trustworthy use of identity and personal data through innovation, standardization and good practice. It provides third party assessment against the NIST SP 800-63-3 for identity proofing and authentication. US Government agencies are required to follow NIST guidelines and independent, external assessment is acknowledged as best practice to demonstrate NIST standards compliance.

Kantara was authorized as a US Government Trust Framework provider to the GSA’s then FICAM Trust Framework Solutions (TFS) program in 2011 and works with governments and standards-bodies internationally to align its Trust Mark program for multi-jurisdictional adoption. Kantara is one of the premier Trust Framework Providers aligned with the US National Strategy for Trusted Identities in Cyberspace (NSTIC) program as well as similar initiatives outside the US.

NIST SP 800-63-3

NIST SP 800-63-3 is the set of prevailing digital identity guidelines to which US Federal agencies implementing identity verification and authentication must comply. It also serves as a widely-recognized benchmark for any organization or business wishing to implement identity verification and authentication services, both internationally as well as within the US.

About Kantara Initiative Trust Framework

Kantara Initiative’s Trust Framework Operations and Identity Assurance program is the industry leading program that accredits Assessors and approves Credential Service Providers (CSPs).  The program offers three Classes of Approval that enable CSPs to seek approval of their Identity and Credential Management Services as meeting NIST 800-63 rev.3 requirements. By design, Kantara’s IAF can be applied to a range of standards-derived schemes and classes of approval to verify an organization’s conformance, including NIST SP 800-63-3.

About Kantara Initiative

The Kantara Initiative is one of the leading global consortiums improving trustworthy use of digital identity and personal data through innovation, standardization and good practice. Kantara provides real-world innovation through its development of specifications, applied R&D and conformity assessment programs for the digital identity and personal data ecosystems. More information is available at https://kantarainitiative.org/.

Follow Kantara Initiative on Twitter — @KantaraNews

For further information:

Bob Olson, Virtual, Inc.
+1.781.876.8839
rolson@virtualinc.com

Tuesday, 06. October 2020

Omidyar Network

Omidyar Network’s Official Statement on Today’s Report Issued by the House Judiciary Subcommittee…

Omidyar Network’s Official Statement on Today’s Report Issued by the House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law

The House Judiciary subcommittee’s recommendations released today represent the most comprehensive response from Congress to date on anti-competitive conduct in the tech sector. After a lengthy probe into industry giants Apple, Amazon, Google, and Facebook, we are grateful to see bipartisan understanding of the problems posed by unchecked concentration in this sector, the recognition of how competition policy can improve, and the need for additional resources to enable the DOJ and FTC to enforce existing antitrust laws. We are even more encouraged to see proposals for legislative changes and increased oversight.

Until now, critical areas of the internet have remained unregulated and unchecked, requiring the sort of dedicated investigation that the report reflects. The potentially illegal market dominance of the largest technology platforms, as well as their social and political power, is seriously threatening our individual freedoms, workers’ rights, economies, and democracies. From widespread disinformation and discrimination against vulnerable groups, to questionable tactics to kill competition and pervasive surveillance practices, we have ample evidence that the status quo is creating adverse conditions for individuals and society at large.

The “Investigation of Competition in Digital Markets” Majority Staff Report and Recommendations from the House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law, is a thoughtful roadmap for legislative and regulatory action to rein in the power of big tech, protect Americans, and guide Congress’s work in the coming year. Democrats and Republicans agree that the dominant technology platforms have acquired monopoly power in their respective markets; federal agencies should have more resources for enforcement; interoperability and data portability should be mandated; and the burden of proof in merger cases should be shifted. Democrats go further, proposing to mandate structural separation between the technology platforms’ many business units, and the removal of certain barriers to private antitrust enforcement.

To better steward technology and shape the rules of the road for the future, we believe that all people — including those who use technology, or are directly affected by these companies’ business practices — should have a voice in how these platforms operate. Strong competition policy matched with strong oversight is as important to every entrepreneur looking to innovate and build THE next great company as it is to every household, every worker, and every voter.


MyData

6 reasons why YOU should Run for MyData Global Leadership positions in the elections for 2021


  Get ready for the most exciting elections of November 2020! MyData Global is an award-winning international nonprofit based in Finland. MyData Global’s mission is to empower individuals to self-determination regarding their personal data.  Our association is politically non-aligned and emphatically collaborative, not antagonistic, by nature. We approach the complex set of issues around personal...

Read More

The post 6 reasons why YOU should Run for MyData Global Leadership positions in the elections for 2021 appeared first on MyData.org.

Monday, 05. October 2020

Omidyar Network

Partner Spotlight: Paradigm Initiative’s Vision for Ensuring Good ID and Data Protection in Nigeria


By Franklyn Odhiambo and Thea Anderson, Responsible Technology, Omidyar Network

Earlier this year, the Nigerian government launched a Steering Committee for the National Identity in June, and published the Data Protection Bill 2020 for public comment in August. As a social change venture with a presence across Africa, we as Omidyar Network actively use our voice and partnerships with civil society organizations to reinforce good technology design and policy practices that yield Good ID — equity, inclusion, privacy, security, and transparency.

To better understand the implications of these developments in Nigeria, we recently spoke with Adeboye (“Boye”) Adegoke, Paradigm Initiative’s senior program manager. Paradigm Initiative is an African social enterprise, focused on increasing transparency and accountability between governments, civil society organizations, grassroots organizers, and the general public, on issues related to digital rights and inclusion. Omidyar Network partnered with Paradigm Initiative in early 2020 to support their advocacy and education efforts with Nigeria’s policymakers and the Nigerian Identity Management Commission, aiming toward securing greater safeguards and transparency in the National Identification Numbers program.

The commission operates the federal government’s national identity database, and assigns and issues the unique identification numbers that will soon be required for citizens and legal residents ages 16 and older.

Below Boye discusses these recent announcements and his organization’s vision for increasing digital ID and data protection safeguards, as well as transparency in government decision-making.

What are your top recommendations to Nigerian policymakers today on the draft Data Protection Bill 2020?

Adegoke: One, the proposed Data Protection Commission must be fully funded, staffed, and operational before there is any further registration, or issuing of [National Identification Numbers] or any other digital ID programs.

Two, the commission must be truly independent. The overwhelming number of government actors currently proposed for the commission’s advisory board will seriously impact true independence.

Three, independent members — from civil society, media, and academia — must be included on the advisory board to balance government and private sector representatives equally.

Four, data protection must never be used as a disguise to deny journalists access to critical information in the interest of the public, especially within journalistic rights.

Five, vague terms such as “national interest”, “national security”, and “public morality” — which have been used in the past to derogate from citizens’ rights — must be clearly defined in the bill, and not be left open-ended and amenable to different interpretations.

“Given the work that we do, part of our advocacy objectives is to ensure that the development of digital policy receives quality inputs from us. Our focus is clear in this regard; rights, inclusion, and innovation. We will advocate against draft legislation/policy that seeks to limit rights, stifle innovation or widen access gaps and we will support those who seek to promote access, rights, and innovation.”

How did Paradigm Initiative come to advocate on digital identity issues?

Adegoke: Our entry point builds on our existing commitment to digital rights and digital inclusion, alongside our belief in the right to privacy and equity for all Nigerians. When the federal government relaunched and wanted to fast-track the National Identification Number registration process, we were alarmed and had to act. Before a new ID program is even considered, we believe there must be many strong safeguards in place, including a data protection framework. Government-issued ID programs in Nigeria have never been inclusive, starting at the point of registration. They’ve amplified gaps between the “haves” and the “have-nots” that corrupt officials can exploit, especially for Nigerians that are not socially or politically connected, and stateless populations.

What issues do you feel poorly-managed government-issued IDs create for citizens and residents?

Adegoke: There are potential risks depending on how the ID system is instituted. We are mindful of the kinds of problems that a [digital] ID for residents can help solve. We are also aware that these can create new problems and amplify old ones. A digital ID may complicate already problematic and non-transparent surveillance activities of repressive regimes. On the other hand, [digital IDs] can ensure state resources are efficiently distributed, and problems such as eliminating “ghost workers” in government services can be resolved.

What are practical challenges you see with the current National Identification Number process in Nigeria?

Adegoke: One, poor engagement by the government with the public. The government often assumes they know what is best for people, but have had no engagement with civil society voices or public forums. Two, digital ID vendors and contractors wield a huge influence in the Nigerian [National Identification Number] process, while human rights organizations and civil society voices have been largely ignored. Three, infrastructure challenges, including unstable electricity access and limited internet connectivity. Implementing a transparent and inclusive digital ID program requires not just an internet connection, but a reliable one. In October 2019, for example, it was announced that a [National Identification Number] would be required to write university matriculation exams. The government was forced to quickly walk back that decision when the [National Identity Management Commission] could not cope with the surge of admission seekers due to the lack of operational [National Identification Number] registration centers. And four, corrupt practices by some [National Identification Number] registration officers. Registration for a [National Identification Number] goes to the highest bidders, weakening accountability and trust in the government.

How does Paradigm Initiative engage with Nigerian policymakers to raise these concerns?

Adegoke: First, we have been at the forefront of the campaign for Nigeria to enact the Digital Rights and Freedom Bill in Parliament since 2016. Second, we engage and educate policymakers on the need for parliament to pass the Nigerian Data Protection Bill (originally introduced in 2011) into law. While the new draft Data Protection Bill 2020 is critical, it only addresses a fraction of our concerns. Third, in 2019 a federal High Court in Abuja recommended the [National Identity Management Commission] put safety measures and a sufficient regulatory framework in place. We led a group of actors to demand that [commission] suspend [National Identification Number] implementation until the government enacts a comprehensive data protection law, and resolves the data security lapses and privacy concerns we identified.

We [also] facilitate engagement between [the commission] and civil society groups, grassroots advocates, and social media activists. In February, we hosted top officials in two engagement sessions with stakeholders from the Niger Delta region and Uyo. Also, in August, we hosted a virtual conversation between the Director General of [the commission], and civil society, where we received feedback on the [National Identification Number] concerns of our citizens and civil society. This month, we facilitated a virtual public consultation session for feedback and recommendations on the draft Data Protection Bill 2020.

Paradigm Initiative recently released the Digital Rights and Privacy in Nigeria report, and co-hosted the Forum on Internet Freedom in Africa (FIFAfrica) to amplify the voices—and solutions—of African civil society organizations working on digital rights issues. Additionally, just last week, Paradigm Initiative represented Nigerian civil society in an ID4Africa panel on Nigeria’s identity ecosystem. The conversation resulted in a Biometric Update article on the lack of both accountability mechanisms, and transparency on the ID program, as well as the fees collected when the Nigerian government hires third-party private actors to enroll people in the program.


Me2B Alliance

Me2B Alliance Monthly Call - Mon, 10/05/2020 8:00am-9:00am

Reminder: Me2B Alliance Monthly Call

When: Monday, 5 October 2020, 8:00am to 9:00am, (GMT-07:00) America/Los Angeles

View Event

Organizer: Megan Bekolay

Description:

Lisa LeVasseur is inviting you to a scheduled Zoom meeting.

Topic: Me2B Alliance

Time: Mar 2, 2020 08:00 AM Pacific Time (US and Canada), every month on the first Monday until Aug 3, 2020, 6 occurrence(s): Mar 2, Apr 6, May 4, Jun 1, Jul 6, Aug 3, 2020, each at 08:00 AM

Please download and import the following iCalendar (.ics) file to your calendar system. Monthly: https://zoom.us/meeting/vpMoce6qqDkph3jl_ajkRgY0KikqhW7ZHQ/ics?icsToken=98tyKuqvqz0tGNKXs1_Hf6kqE9r8b9_qknkdoK9inRXuMSdqMij_PfNKBeVFOOmB

Join Zoom Meeting: https://zoom.us/j/375672623

Meeting ID: 375 672 623

One tap mobile: +16699006833,,375672623# US (San Jose); +14086380968,,375672623# US (San Jose)

Dial by your location: +1 669 900 6833 US (San Jose); +1 408 638 0968 US (San Jose); +1 646 876 9923 US (New York)

Find your local number: https://zoom.us/u/acUTI5Weo


Federal Blockchain News

Blockchain Platform Compliance with NIST & FISMA Standards

James Howard, PhD, discusses the findings of his paper "Blockchain Compliance with Federal Cryptographic Information Processing Standards", co-authored with Maria Vachino. The paper revealed that of the Ethereum, Corda, Fabric & Multichain platforms, only Corda meets FISMA standards.

Me2B Alliance

Re: Reminder: Me2B Full Alliance meeting tomorrow morning at 8am PDT

Just ran across this: https://www.datapolicytrust.com/
Might be interesting.
Doc

Sunday, 04. October 2020

Me2B Alliance

Reminder: Me2B Full Alliance meeting tomorrow morning at 8am PDT

Hi friends,

 

Please join me tomorrow for our bi-monthly full alliance call at 8am PDT where I’ll share exciting news about our transition to membership.

 

Join Zoom Meeting

https://zoom.us/j/375672623

 

Meeting ID: 375 672 623

 

One tap mobile

+16699006833,,375672623# US (San Jose)

+14086380968,,375672623# US (San Jose)

 

Dial by your location

        +1 669 900 6833 US (San Jose)

        +1 408 638 0968 US (San Jose)

        +1 646 876 9923 US (New York)

Meeting ID: 375 672 623

Find your local number: https://zoom.us/u/acUTI5Weo

 


Decentralized Identity Foundation

Drilling down: Open Source

A crash-course in the complex world of variously-open software licensing

The ostensibly binary distinction between “open” and “closed” software gets bandied about in many contexts, often in a dangerously simplified form, as though there were only two, mutually exclusive options. It can also be extended to standards in an imprecise or oversimplified way. Sometimes people refer to groups like DIF “working on open-source standards,” but speaking precisely, no such thing exists!

Only software code can be open-source, since after all, “source” is short for “source code” (pre-compiled software). Standards, whether open or not, are not code; instead, they are functional definitions and specifications that define and specify the protocols one application (or codebase) uses to talk to another. Each standard can be understood as a benchmark for testing specific implementations, dividing existing and future codebases into those fully, partly, or non-compliant with its requirements.
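To make the “benchmark” idea concrete, here is a minimal sketch in Python of a syntax-level conformance check, loosely modeled on the W3C Decentralized Identifiers (DID) grammar that DIF-adjacent groups work with. The regex is a deliberate simplification for illustration (the normative ABNF also allows percent-encoding and other details), not the actual rule from the spec:

```python
import re

# Illustrative only: a simplified approximation of the W3C DID Core
# syntax (did:<method>:<method-specific-id>). The real ABNF is richer;
# this sketch just shows how a standard acts as a pass/fail benchmark
# that any implementation, open or closed, can be tested against.
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._-]+(?::[A-Za-z0-9._-]+)*$")

def is_spec_compliant(identifier: str) -> bool:
    """Return True if the identifier passes this simplified syntax test."""
    return DID_PATTERN.match(identifier) is not None

print(is_spec_compliant("did:example:123456"))   # True
print(is_spec_compliant("https://example.com"))  # False
```

The point is that the standard itself supplies the test: compliance is a property of a codebase measured against the specification, not of the code's license.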

“Lean Startup workshop” in Amsterdam, by Daria Nepriakhina

Code, whether open-source or not, can implement or “build to” a pre-existing standard, which can be developed in a variously open or closed manner; there is no inherent link or dependency between the openness or rigor of the two processes. Similarly, a standard can be “written around” one or more existing implementations, in such a way that the existing code is definitively compliant with the resulting standard tailored to it. This latter operation is an important function in open-source communities, in that it invites future developers to make new code that will be interchangeable or interoperable with that precedent. Even if that pre-existing code is closed, the resulting standard can be of great use to open-source development, particularly if the process used to write it was also “open” and participated in by the designers and implementers of that closed-source precedent.

In this series of three brief explanatory posts, we’ll first explain open-source development as a process rather than defining it by its results. In the next post, we will explain the characteristics and optionalities of an “open process” for developing standards. Supporting these open standards is where the bulk of DIF’s efforts and resources are focused. Lastly, we will turn to how open source and open standards work together to create new business models and strategies with real-world consequences for cooperation, “coöpetition,” and healthy markets.

Source Code and Intellectual Property Law

At their lowest level, computers are just machines that execute millions of structured computations every second. They take inputs, perform computation, and produce outputs.

(public domain)

These computations can be combined into complex structures of data and decision-making called “programs” that interface with humans to make useful and meaningful outputs. At the lowest level, these computations still look like oceans of 1s and 0s to the untrained eye, but decades of refinement of operating systems, programming languages, scripting languages, and other abstractions make it easy for engineers to deal only with “source code” as a kind of human-readable abstraction at higher levels. This is “compiled” into more efficient, machine-readable “binaries” (ones and zeros) that can be deployed to standardized hardware in the real world, where they are “run” (live software is often referred to as “runtime”).

by Chris Szalwinski from The C Program is licensed under a Creative Commons Attribution 2.5 Canada License.
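The source-to-machine pipeline described above can be seen in miniature with Python's built-in `compile()`, which turns human-readable source text into a bytecode object the interpreter runs (bytecode rather than a native binary, but the same divide between a human-readable abstraction and a machine-oriented form):

```python
import dis

# Human-readable source code, as an engineer would write and review it...
source = "def add(a, b):\n    return a + b\n"

# ...compiled into a lower-level representation that the machine runs.
code_obj = compile(source, "<example>", "exec")
ns = {}
exec(code_obj, ns)
print(ns["add"](2, 3))  # 5

# The compiled form is no longer meant for human eyes:
print(type(code_obj.co_code))  # <class 'bytes'>
dis.dis(ns["add"])             # a peek back inside, via disassembly
```

Sharing `source` is what "open source" refers to; shipping only something like `code_obj` is the closed-source, binaries-only model discussed below.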

Because it is human-readable and because licenses apply to compiled binaries and functional software in specific jurisdictions, the software industry has largely applied practices analogous to academic “peer review” for sharing and critiquing core pieces of source code. This complicates the open/closed distinction further, since a closed-source project and a licensed binary might still offer key components up for review through GitHub or other channels traditionally used for open development. Furthermore, the many gradients between completely “open” or “closed” software depend on the ownership and licensing of a given piece of software, which can evolve over time or differ in enforceability across jurisdictions. As with all legal matters, mileage may vary; always consult a licensed expert!

A thumbnail history of licensing

There is a range of different licensing regimes for source code. Among open-source licenses, different functions can be enabled or facilitated independently of one another: external review, attribution, innovation, maintenance, and even revenue-sharing and other conditions on business practices.

Proprietary code bases are released under closed-source licenses, which are optimized for secrecy, exclusivity, and/or sale of the results through licensing-based revenue models. For decades, this was how Microsoft licensed not only its proprietary stand-alone software, but even the bulk of the Windows operating system. The licensing landscape is much more diverse today, but before the 1990s very little commercial software was developed in the open or had its source code published after the fact.

The culture of academia is one of sharing and publishing intellectual work for peers to see and review — open peer review is as central in academic computer science as it is in the hard sciences. The internet was originally imagined, architected, prototyped, and built by academics and tinkerers, primarily supported by military and government funding. These three cultures (the military, academia, and independent “hacker”/tinkerer types) formed the basis of internet culture. A fourth culture was added later, when the general public’s access to the internet evolved into a massive commercial industry in the early 1990s.

For our purposes here, we can limit ourselves to the fairly direct link, sometimes biographical, between the anomalous, non-commercial origins of the internet and the development of the open-source movement within the software industry. One “origin myth” that exemplifies this link is the story of a young Richard Stallman, then working in an early Artificial Intelligence lab at the Massachusetts Institute of Technology, who wanted to customize some printer software for an expensive new printer shared across all the floors of a busy research building. To do so, he needed to get the source code from the manufacturer, which he had done to customize the previous “workhorse” printer in the same building.

The manufacturer of the new model, however, surprised Stallman by refusing to hand over the code on grounds of licensing and intellectual property rights. Stallman, to whom the motto “software wants to be free” is often misattributed, often credits the incident for his deeply-held belief that the end-users of software have a right to modify and participate in the software they use, whether they paid directly for it or benefited from the procurements of governments. To this day, the details and boundaries of this right are still being debated, not only in open-source requirements for government procurements, but also in “right to repair” laws that extend these rights into the domain of hardware and the physical (and 3D-printed) world.

Varying degrees of “Software Freedom”

Stallman went on to found the Free Software Foundation and play a pivotal role in the elaboration of an open-source (and “free open source”) movement. He also created the first GNU General Public License (GPL), still a major and influential family of licenses. All versions of the GPL have within them some version of these 4 essential “freedoms”:

The freedom to run the program as you wish, for any purpose (“freedom 0”).
The freedom to study how the program works, and change it so it does your computing as you wish (“freedom 1”). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help others (“freedom 2”).
The freedom to distribute copies of your modified versions to others (“freedom 3”). By doing this, you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
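In practice, projects signal which license terms govern a file with a short machine-readable header; the SPDX identifier convention is one widely used way to do this. A hypothetical file released under GPLv3-or-later might begin like this (the function is just placeholder content for illustration):

```python
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This one-line header lets both humans and automated license scanners
# determine the terms that apply to this file (here, a GPL version
# carrying the four freedoms listed above).

def greet(name: str) -> str:
    """Placeholder content; the header above is the point of the example."""
    return f"Hello, {name}!"

print(greet("world"))  # Hello, world!
```

Because the header is a plain comment with a standardized vocabulary, tooling can audit an entire codebase's licensing without parsing legal prose.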

Honoring these four freedoms textually and fully can make it very hard to sell software commercially for many reasons, salient among them that freely-available source code in a major language is trivial to compile into functioning binaries; commercial or “closed” software is generally sold as compiled binaries, keeping the source code secret.

Furthermore, many early open-source licenses like the GPL explicitly forbid so-called “enclosure” of the code, i.e., incorporating pieces or subsets of the licensed code into a new, closed-source product. Anti-enclosure licensing is often referred to as “copyleft” (because it prohibits future copyrighting!) or “viral licensing,” since it cannot be mixed and matched with other licensing without the more restrictive license taking precedence. This severely limits the commercial potential of “forks” or derived variants. The entire Linux family of operating systems is licensed this way, which keeps Linux development squarely in the camp of maximally-open development throughout.

Over the course of the 1980s and 1990s, the open-source movement grew, and at some point the software industry could not afford to ignore it, even if directly participating with licenses like the GPL was not feasible for most companies of any size. This period saw the first “compromise licenses” and hybrid open/closed business models evolve around them. Many of the licenses still popular today date to these early experiments. For instance, the license originally created for the University of California, Berkeley’s fork of Unix, and still known today as the Berkeley Software Distribution (BSD) license, is one such “commercially lenient” open-source license. These licenses allow developers to take code that was originally open, make significant changes to it, and license or even sell the modified versions under more classically commercial licenses.

Similarly, the more lenient variants of the Apache and MIT licenses date to this period as well. It is worth noting that the Apache web server was one of the first open-source pieces of software to replace a dominant commercial product. Apache web server software reaching a dominant position in a previously closed-license, commercial niche was a watershed moment for open-source software.

Apache is often held up, alongside Linux, as an example of where open-source software optimizes for standardization and safety of mission-critical infrastructure, largely due to the thorough and ongoing auditing and maintenance enabled by its core components being completely open to review, testing, and improvement proposals. Conversely, however, some historians of software development point out that allowing enclosure to the degree that Apache licenses do can lead to simultaneous or “parallel” development of various (partially closed-source) forks to evolve in tandem according to various divergent business agendas. Over time, this can splinter development, wicking off talent, attention, and manpower into closed development and creating a major coordination problem for the open-source “parent” of the family tree. In the most severe cases, this can greatly diminish and forfeit the standardization and security gains that made sharing an open-source parent so desirable in the first place.

Today and Tomorrow

Today, cloud-based business models and Software-as-a-Service have revived and expanded the toolkit for closed-source development, and in turn breathed new life into the debate about how hybrid models could make open-source infrastructure sustainable through revenue shares from the closed-source products that depend on it. Similarly, the so-called “cloud loophole” in older GPL licenses, whereby software could be run but not “distributed,” was closed by the newer Affero GPL, which applies to cloud environments.

Another interesting frontier in the evolving licensing landscape has been opened up in recent years by distributed computing and distributed ledgers. One key assumption of traditional licensing is that software runs on a finite set of distinct pieces of hardware, with ownership and liability that follow straightforwardly from there. Distributed computing, however, where computation work is spread out over a more diffuse and indeterminate number of computers, which might have limited or no insight into the “big picture” of the software they are running, muddies the waters even further than traditional cloud environments. On the radically open side of the spectrum, the distributed-computing ecosystem project HoloChain has innovated the Cryptographic Autonomy License to empower users (and the software running in their name) by protecting them with encryption.

New forms of “confidential computing” and encrypted, self-sovereign networking take this even further, making the boundaries of software and runtime environments similarly porous. Attempts at licensing for these new topographies have been controversial at times, but they are also an important precedent and context for work happening in the DIF, such as that of the Secure Data Storage working group.

There are even more radical experiments and movements happening in the open-licensing problem space, which would restrict usage not [only] according to commercial terms or enclosure, but according to non-monetary and non-licensing rubrics as well. For instance, the “ICE breaker” project has brought more attention to the movement to license software under so-called “ethical licenses.” These restrict derivative use anywhere it can be proven to support human rights violations, such as in military applications or for “predictive policing” and other use cases that run afoul of international authorities on human rights and discrimination. The provisions or triggers for such licensing might be as hard to enforce as international human rights law (i.e., very hard), but they send a signal that specifications and standards from other disciplines, such as human rights law and ethical philosophy, might someday find their place among the commercial and attribution clauses in open-source licenses.

Further reading

All code developed under the DIF umbrella is strictly open source, and while DIF accepts donations of previously-closed source code, it is expected that all ongoing iterations of them will take place in the open. The original vision for DIF as a project within the Joint Development Foundation was to host code development, and it has since expanded to include co-developed pre-standards specifications and other educational and non-technical materials as well. Take a spin through our github repositories if you’d like to see an overview of historical and ongoing projects.

In this quick overview we have only scratched the surface, though, and we encourage anyone working in the open-software space to educate themselves further. The Open Source Initiative holds the trademark on the phrase “open source” and maintains a very helpful list of licenses that it has determined fully conform with its principles. That list of approved licenses is a great place to start if you are researching the licensing landscape, and the initiative also holds events and offers a substantial library of educational materials.

In our next installment, we’ll turn to Open Standards and the work of the Decentralized Identity Foundation.

Drilling down: Open Source was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 02. October 2020

Omidyar Network

Exploring the Tech Risk Zones: Outsized Power

By Kacie Harold, Omidyar Network

Earlier this year, we released the Ethical Explorer Pack — a toolkit designed to help tech workers spark dialogue about the future impacts of their designs, identify early warning signs, and brainstorm positive solutions. This week, we are sharing some additional, practical advice from leading experts on how to mitigate against several of the tech risk zones (or potential downsides) and ensure technology is safe, fair, and compassionate.

Chris Hughes is the co-founder and co-chair of the Economic Security Project, a group that is working to restructure the economy so that it works for everyone. Prior to this, he co-founded Facebook in 2004, and later worked as a publisher at New Republic for several years. Chris has worked exclusively on economic issues since 2016, focusing on anti-monopoly and antitrust issues, and calling for a guaranteed income and tax policy.

What motivates you on the issues related to ethics and responsibility within technology?

I think any systems builder, whether in technology, finance, or the arts, needs to think about how their work impacts other people and the world. This is an obligation that we all have as humans first, whether we end up as business people or anything else. Tech in particular has a very important responsibility, since so many of the companies out there are pioneering and charting new paths for products and services. And each one of those comes with a different set of ethical questions that tech companies need to develop a practice of asking and answering on a regular basis.

I tend to be an optimist and think that folks in tech now are thinking much more comprehensively and ethically. That said, I don’t think that we should overcomplicate thinking about ethics. Just as parents teach their kids at the playground to think about how their play affects other kids, we do the same thing in schools. We have to do the same thing in business and technology as well. And I don’t think that creating the habit of thinking about how our work impacts others is a particularly tall ask. Part of being human is thinking about how we live in community with other people, what other people provide us and what we’re providing them. If there’s any moment in the past several decades that illustrates that more than ever, it’s COVID-19. We can see that we all rely on each other to stay healthy and create the kind of communities that we have appreciated and want to return to.

I think that the long arc of history teaches us that we’re all relying on each other and the decisions that we make as individuals affect the greater community. This is true in business. It’s true in politics and in organizing, too.

How do monopolies hurt entrepreneurs?

For small-business entrepreneurs, the most worrying thing about monopolies is their ability to move into a market and effectively shut that market down by price gouging, copying features or tools, or making hostile acquisitions of companies in that marketplace. From a talent perspective, it is often difficult to compete against monopolies because of their ability to attract and retain talent.

Increases in market concentration lead to decreases in entrepreneurship and innovation. It becomes harder to enter these markets, and this slows down the pace of innovation. Even before the recession, small business startups were at a near historic low, and one of the chief causes of that is monopoly power.

My sense is that we’ve lost a vocabulary and a framework to talk about outsized corporate power in the United States over the past 30 or 40 years. Most folks in tech are wary of these conversations, but they are also concerned about the consolidation of power. And so we’re at a transitional moment where folks in tech, like a lot of people elsewhere in the country and even in the world, are rethinking what the role of public policy should be in creating markets that are fair and create good outcomes for everyone.

I think another issue is that people have begun to see the large tech monopolies as inevitable and unchangeable. And so they may not think as much about how those monopolies impact their lives or impede their work.

That’s what the leaders of the large tech companies want you to believe. So if that’s what you’re thinking, they’ve got you right where they want you. The more that they can convince folks that there is no other way, and that this is the best of all possible worlds, then they’ve won.

I think there are a lot of entrepreneurs out there who are thinking creatively and are skeptical about the role that those large companies are playing. The challenge is less about not buying into what the tech companies are saying (that monopolies are inevitable) and more about believing that government can be a positive force for good in the world. Specifically, that the Federal Trade Commission and the US Department of Justice can create markets that are more dynamic and fair. We live in a time where cynicism about the government runs deep. I think for entrepreneurs, that cynicism is a bigger barrier than the tech companies’ talking points.

If you don’t mind shifting for a moment, I’d like to ask you about something you wrote in an op-ed for The New York Times in 2019 about your experience working at Facebook.

“I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders.”

Given your experience of being in the room as decisions like this get made, is there any advice you would give tech developers and teams to identify the key moments where they need to stop and think about what they are creating?

I can only speak for myself. In the early days of Facebook, it was very hard for me to imagine what success at scale might look like. These were products for college kids in the United States that were largely meant to be fun, about creating connection. We knew that it was more than just fun, but the backdrop was that it was a lighthearted, recreational project that we hoped would bring people closer together. That would have been the way we would have spoken about it at the time. And so for me it was very hard to imagine what this thing would look like when over a billion people were using it, for who knows how many hours a day, and anyone could have access. That difficulty was real, but it isn’t an excuse, because we knew that Facebook was a very sticky, very popular product very early on. And that’s why I wrote what I wrote. Because we should have thought much more seriously about what it could turn into, even at the outset.

I’m not sure if it would have changed some of those initial decisions I made at the time, but it would have created a framework of accountability that we could refer back to as a team and individually. And I think it’s only in the past year or two that Facebook has really come to even understand its responsibility, if it really has. My advice to (tech) teams is, even if you’re working small, think big, and think about what problems could be introduced at scale.

I think when you are in a company that is growing and doing really well, it’s natural to be excited and want to move quickly, but that speed can make it difficult to predict ways that things could go wrong. Do you have any advice for how tech makers can recognize those pivotal moments where they should slow down and consider the impact of what they are creating?

You’re always in the moment, and you don’t have to worry about figuring out if you’re in the moment or not. My advice is that you should always be asking that question. Often it will feel theoretical, but it isn’t. I guess that’s my point with the playground analogy at the beginning. Thinking about how your actions impact other people is a basic part of living in a community with other people.

I realize that interviewing somebody (formerly) from Facebook may be a little counterproductive because people could say, well, my company is not going to become a Facebook, so I don’t need to worry about this. But I think everybody should be thinking about it much of the time, whether you’re in the CEO suite or the most junior customer service agent.

You can find more of Chris’ thinking on twitter @chrishughes. The Economic Security Project is a grantee of Omidyar Network.

Thursday, 01. October 2020

Omidyar Network

Exploring the Tech Risk Zones: Bad Actors

By Kacie Harold, Omidyar Network

Earlier this year, we released the Ethical Explorer Pack — a toolkit designed to help tech workers spark dialogue about the future impacts of their designs, identify early warning signs, and brainstorm positive solutions. This week, we are sharing some additional, practical advice from leading experts on how to mitigate against several of the tech risk zones (or potential downsides) and ensure technology is safe, fair, and compassionate.

Caroline Sinders is a designer and artist focusing on the intersections of artificial intelligence, abuse, and politics in digital conversational spaces. She has worked with the United Nations, Amnesty International, IBM Watson, the Wikimedia Foundation and recently published a piece with the support of Omidyar Network and Mozilla Foundation. Sinders has held fellowships with the Harvard Kennedy School, Google’s PAIR (People and Artificial Intelligence Research group), and the Mozilla Foundation. Her work has been featured in the Tate Exchange in Tate Modern, the Victoria and Albert Museum, MoMA PS1, LABoral, Wired, Slate, Quartz, the Channels Festival and others. Caroline also has a passion for addressing harassment online, which represents one of the harmful behaviors within the Bad Actors Tech Risk Zone.

Caroline, can you tell us about how design plays an important role in creating safe and inclusive environments online?

I’ve been studying online harassment for nearly seven years. I look at it from the perspective of how technology products and social networks are designed, and how that design can mitigate or amplify harassment. I focus on how the design of a space allows for harassment to occur, including both the actions that a harasser could engage in and the affordances that a victim has to mitigate the harassment they are receiving.

How can tech companies benefit from protecting their users from harassment?

I always like to remind people that bad business costs money. Additionally, when people face harassment, they tend to engage in self-censorship. The chilling effect of harassment is that people post less content and they engage less often. I believe that becomes a human rights issue when, for safety reasons, some people cannot engage freely in a platform, but others can. Being able to participate safely in a space is crucial for engaging in free speech. Ultimately, a company will lose money if people stop using or altogether leave their platform; one way to get users to stay is to protect them.

In the last few years, you’ve worked with Band Camp, Facebook, and Wikipedia on anti-harassment policies and tools to support victims. Are there any common challenges that you’ve seen tech teams struggle with as they address harassment on their platforms?

Platforms, across the board, struggle to identify growing forms of harm. Harassers are always changing their methods and finding new and interesting ways to hurt other people. It’s important to regularly talk to a variety of people from underrepresented groups who are using your product or technology in order to understand how forms of harassment are evolving.

When you listen to users, you need to be aware of their relationship to the tool. Often in open source communities or volunteer-led projects, you see a lot of users who feel very committed to a project because they have contributed to it and they are deeply invested in the community. For instance, at Wikimedia, I saw victims who were more willing to forgive or try to empathize or work through the harassment they had faced out of concern that asking the Wikimedia Foundation or community leadership to make changes might rupture the community or hurt the encyclopedia. In these cases, you need to find other marginalized members who have experienced toxicity, have a conversation with them, and make sure you aren’t perpetuating toxicity in order to protect a project.

Another challenge is that some forms of harassment look innocuous at first. For example, imagine you receive the same message from 10 different people over the course of a year, and although you block the users, the messages keep coming. When you file a report, there’s no way to show the messages are related, and the platform has no way to investigate it. In another scenario where you receive a comment from someone that says, “I love your green top with the polka dots,” you might be scared, wondering why or how that person has seen your shirt. But the content moderator isn’t going to see that, all they see is a comment on the victim’s appearance. Even with harassment policy and procedures in place, reporting flows may prevent victims from sharing context or evidence necessary for a content moderator to verify it.

How can tech companies be proactive about preventing harm on their platforms?

Unfortunately, when big tech thinks of preventative care in terms of harassment, they think of technology solutions to it. This can be really problematic because those technology solutions end up being things like AI and AI filters, which aren’t very accurate.

Preventing harassment would entail much more user friendly privacy settings. The challenge is, most people aren’t necessarily thinking of their safety until it has been compromised. One way to increase safety for users is to make data privacy settings really legible, and easy to find and use. This could also look like sending users a push notification suggesting changes to their privacy settings, keeping location sharing off by default, or even notifying users of ways that harassment can occur on that platform.

In addition to giving people tools to protect themselves, victims may also need proof that they have reported abuse in case things get worse. So right now, if you file a harassment report on Facebook or Twitter they send you an email, but it would help victims to be able to find all of those reports in one place and in a downloadable format in case they need those reports to build a legal case at some point.

What advice do you have for tech makers, builders, or companies that are just starting to think about or discuss harassment?

Hire Black women and other marginalized people who use your tool. If you are a privileged person, you may not quite understand that someone could experience harassment in a place that you feel is very safe. I think of Zoom, which really could not have anticipated this moment or the popularity of its tool. The CEO said that they had never thought of harassment because Zoom was created as a workplace tool. But we know that harassment happens at work.

When you design a technology, always ask yourself what could possibly go wrong and really map things out, even if they feel absurd to you. Don’t just design for the middle area of how you hope people will use your technology; design for the real world.

Finally, remember that every data point about harassment is a real person’s traumatic story. So even if you have what seems like really low numbers of harassment, it’s always important to remember that these are people experiencing trauma, not numbers.

You can find more of Caroline’s work on her website, and can follow her journey on twitter @CarolineSinders.


r@w blog

#FollowTheMedium

Zeenab Aneez & Neha Mujumdar Session

It was media theorist Marshall McLuhan who popularised the phrase ‘the medium is the message’; to him, different kinds of media engage the senses in different ways, affecting how we process them and engage with their contents. Before situating research in the digital space, it is important to ask ourselves: what is the nature of the medium we are dealing with here? How do people interact with it? What are the opportunities it provides and the risks it poses? How can we study new digital objects, such as online-first news outlets, podcasts, etc., in a way that recognises the medium’s newness?

The proposed session is an exploration of a methodology that is informed and defined by specific characteristics of the medium, with a special focus on digital news and journalism in India. Through this, it seeks to tackle the first of the four key focus areas of the conference: How do we conceptualise, as an intellectual and political task, the mediation and transformation of social, cultural, political, and economic processes, forces, and sites through internet and digital media technologies in contemporary India?

Keeping this key question in mind, we ask: how can digital methods research contribute to the study of news and journalism in the digital space? How can we use digital objects such as tags, Likes, and Comments to understand how user feedback works in the new information economy? What can the interface of a news creation platform tell us about the changing roles of Indian journalists in today’s media environment? How can we formulate a methodology for studying the metamorphosis of a news story by using Twitter and what skills are required to gather and process information for research of this nature?

In order to inform our responses to such questions, we borrow from Richard Rogers’ adage ‘Follow the medium’ (Rogers 2013), which argues that “natively digital” (Ibid. 19) objects like tags, links, Likes or Comments, which originate in digital networks, cannot be fully understood with methods such as, say, content analysis: an example of a non-digital method that does not recognise their digital nature. The proposed session will make use of the general philosophy embodied by Rogers’ approach and urge participants to acknowledge the specific properties of the Internet as a medium and to look at news and journalism as part of the larger media ecology of the web. This calls for the use of new methods that are digital in nature; the discussion on contemporary news should expand from how the news industry is coping with the digital transition to how we can better understand the specific elements of this transition and use this understanding to reflect upon the changing nature of journalism and news itself.

In order to channel the discussion, the session proposes using the framework from one particular field of digital research: platform studies. With the advent of Web 2.0, the emergence of the ‘web as platform’ (O’Reilly 2007), and the strengthening relationship between the news industry and social media platforms (‘Reuters Institute Digital News Report’ 2015), traditional as well as digital-born news sites are increasingly adopting a platform model. Therefore, platform studies makes for a fitting framework within which to understand the workings of these platforms, their technological and formal structures, and the specific ways in which they allow users to interact with news content.

Plan

The session will begin with a brief introduction to digital methods (Rogers 2013) and the field of ‘platform studies’ (Bogost and Montfort 2009; Gillespie 2010; Dijck 2013), which will serve as a loose framework through which to study existing news platforms as well as perform analyses on social media platforms as sites for news and journalism. This will be supplemented by the works of Anne Helmond (2015) and Tarleton Gillespie (2010).

Following this, participants will be divided into groups of four to six, with each group anchored by a volunteer, with added support from the two co-leaders. They will then be given the task of formulating a research question that makes use of one or more of the digital methods presented, and are also required to frame a methodology that makes allowances for the particularities of the Indian news environment. The session will conclude with a brief discussion based on their findings.

The goal of the workshop will be to explore how digital methods can be aligned with current concerns about news and journalism in India, and to open up avenues for research that acknowledge that online news occupies a space that includes natively digital objects and information architectures, and hence demands research methods specific to this environment. The workshop also aims at reflecting on potential collaborations between researchers in media studies, data scientists and technologists in developing a comprehensive methodology for studying digital media in India.

Readings

Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12, no. 3 (2010): 347–364.

Rogers, Richard. “The End of the Virtual: Digital Methods,” Digital Methods. MIT press, 2013: 19–38.

Van Dijck, José. “Disassembling Platforms, Reassembling Sociality,” The Culture of Connectivity: A Critical History of Social Media. Oxford University Press, 2013: 24–44

References

Anderson, Christopher W. “Towards a Sociology of Computational and Algorithmic Journalism.” New Media & Society, 15, no. 7 (2013): 1005–1021.

Bogost, Ian, and Nick Montfort. 2009. “Platform Studies: Frequently Questioned Answers.” Digital Arts and Culture 2009 https://escholarship.org/uc/item/01r0k9br.pdf.

Helmond, Anne. 2015. Presentation by Anne Helmond — Becoming Data Point. Panel. Transmediale. https://www.youtube.com/watch?v=smXLCAGafqs

Lovink, Geert. 2008. Zero Comments: Blogging and Critical Internet Culture. New York: Routledge.

O’Reilly, Tim. 2007. ‘What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software’. SSRN Scholarly Paper ID 1008839. Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=1008839.

Procter, Rob, Farida Vis, and Alex Voss. “Reading the Riots on Twitter: Methodological Innovation for the Analysis of Big Data.” International Journal of Social Research Methodology, 16, no. 3 (2013): 197–214.

Reuters Institute Digital News Report. 2015. Oxford, England: Reuters Institute for the study of Journalism, Oxford University. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/Reuters%20Institute%20Digital%20News%20Report%202015_Full%20Report.pdf

Rogers, Richard. Digital Methods. MIT press, 2013.

Audio Recording of the Session

IRC 2016: Day 3 #Follow The Medium : Researchers at Work (RAW) : Free Download, Borrow, and Streaming : Internet Archive

Session Team

Zeenab Aneez is an independent journalist and researcher in the field of digital media and culture. Her interests include digital publishing practices, new media journalism, media ecologies, digital labour and social media alternatives. She was previously a reporter at The Hindu, Hyderabad.

Neha Mujumdar is an independent editorial consultant based in Bangalore. Her writing has appeared in The Hindu and Time Out.

Note: This session was part of the first Internet Researchers’ Conference 2016 (IRC16) , organised in collaboration with the Centre for Political Studies (CPS), at the Jawaharlal Nehru University, Delhi, on February 26–28, 2016. The event was supported by the CSCS Digital Innovation Fund (CDIF).

#FollowTheMedium was originally published in r@w blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 30. September 2020

WomenInIdentity

Member interview with CYNAM Director, Madeline Howard

We sat down with CYNAM Director, Madeline Howard, to find out what makes this ‘woman in identity’ tick….

Madeline Howard What do you do and what gets you out of bed in the morning?

I’m delighted to wear a few different hats! My day job is the Socio-Technical Engagement Manager at Cygenta, where I work on the human side of cyber security. I particularly focus on awareness, behaviour and cultural change. I love my field of work as I constantly feel as though I’m making a positive difference to help people and organisations to live more secure lives.

Cygenta also supports me as an i100 for the National Cyber Security Centre and their CyberFirst Schools programme. I dedicate one day a week to this – and I absolutely LOVE this part of my job.

Here, I get to work with amazing international, national and local companies to inspire, excite and enthuse the next generation about computer science and cyber security. It’s so rewarding to make a difference to the futures of young people while also supporting the industry talent pipeline.

Finally in my spare time, I’m a Director of Cyber Cheltenham, CyNam. It is a real privilege to be a Director of the UK’s largest cyber cluster and support the development of the local ecosystem.

So what gets me out of bed in the morning? Well, I’m passionate about making a positive difference. Whether that is improving organisations’ and individuals’ cyber security awareness, exciting the next generation about their future opportunities or developing the incredible cyber security ecosystem we have, I always want to make a positive impact.

How did you get to where you are today?

It was not exactly a ‘traditional’ route! I studied Geography at Newcastle University and really didn’t know what I wanted to do. Well, that’s a lie, I knew my dream job was to be a weather girl… still true! When I finished university, I started in a business development role within cyber security. Within months I knew I’d found my niche! I loved the pace at which this sector moves and it’s great to be constantly learning about how we engage with technology.

I was keen to pursue a career raising awareness about cyber security. At Cygenta, I am lucky enough to work for – and be mentored by – Dr Jessica Barker. One thing I recognise is that I’ve always been really lucky to have lots of fantastic people to turn to for advice, guidance and support when I’ve had to make tough decisions during my career.

What is the most important lesson you have learned along the way?

The biggest lesson I’ve learned is to go with your gut, ALWAYS.

I’ve had lots of great opportunities but I nearly turned down some of them simply because I over-thought things. Sometimes things can feel like a risk, but if it feels right in your gut it normally is. And, if it doesn’t work out, you’ll still have learned so much simply by learning to take a risk – it’ll all help to shape the future you.

What’s your advice to CEOs in the identity space?

STOP: Thinking that your employees are your ‘human firewall’ against cyber attacks and incidents. The human firewall suggests your employees should prevent attacks and incidents. When it comes to cyber attacks, you can’t prevent every incident, and so this is a totally unrealistic ask on individuals. It also doesn’t prepare people for what to do when something does go wrong.

START: Developing individuals as ‘Human Sensors’. The concept of human sensors suggests that individuals know how to detect and respond to incidents. They understand the indicators of compromise and feel empowered to approach the correct individuals or platforms to report incidents. This ultimately minimises the fear, uncertainty and doubt (FUD) they may have when they think an incident has occurred.

CONTINUE: Investing in cyber security – not just the technology but talent development and people skills. Ensure your employees feel confident and empowered to positively engage with cyber security to improve your response and resilience capabilities.

In one sentence, why does diversity matter to you?

In all aspects of life, diversity is the enabler for growth, innovation, collaboration and positive change.

What book/film/piece of art would you recommend to your fellow members? Why?

“The Go-Giver”.

It really resonated with me. In short, it suggests we shouldn’t do things because we expect something in return but because we just want to help. It is a fantastic ethos to live by – it’ll empower you to become the best version of yourself!

What advice would you give to the teenage ‘you’?

Care less about what people think of you.

It’s not ‘uncool’ to want to achieve.

Go with your gut and everything will work out.

And, finally, accept that you can’t be in control of everything.

I tell myself every day, ‘I wonder if the stars will align?’ If something works out, the stars have aligned. I don’t know anything about the stars, but sometimes when things work out, it’s nice to visualise the stars aligning and shining for you.

Where can we find you on social media / the Web?

Twitter: @Madzzhoward

Linkedin: Madeline Howard

The post Member interview with CYNAM Director, Madeline Howard appeared first on Women in Identity.


Omidyar Network

Exploring the Tech Risk Zones: Surveillance

By Kacie Harold, Omidyar Network

Earlier this year, we released the Ethical Explorer Pack — a toolkit designed to help tech workers spark dialogue about the future impacts of their designs, identify early warning signs, and brainstorm positive solutions. This week, we are sharing some additional, practical advice from leading experts on how to mitigate against several of the tech risk zones (or potential downsides) and ensure technology is safe, fair, and compassionate.

Matt Mitchell is a hacker and Tech Fellow at The Ford Foundation, working with the BUILD and Technology and Society teams to develop digital security strategy, technical assistance offerings, and safety and security measures for the foundation’s grantee partners. Matt has also worked as the Director of Digital Safety & Privacy at Tactical Tech, and he founded CryptoHarlem, which teaches basic cryptography tools to the predominately African American community in upper Manhattan.

Matt, why should small and midsize tech companies want to address issues of surveillance and think about data privacy and security for their users?

I recently spoke with the founders of a blockchain, cryptocurrency social media startup that values “humans first”. Privacy came up briefly in the conversation. As a small team going through their first round of funding, they are motivated to build quickly, get people to use the product, and then find a way to monetize it. I suggested they create a transparency report and a plain-language privacy policy because this would give them a competitive advantage, and it speaks to the motivations of that team. When you are building a product that’s new, existing companies and competitors might not have these things, so focusing on privacy is really easy, low-hanging fruit when it comes to feature development. You can go a long way toward earning the trust of your users and building engagement when people know that using your product isn’t going to compromise their security in the future.

Are there any common surveillance related problems companies run into when they build a new products or features?

When you’re making a product, there’s a temptation to gather as much data as possible because, in the worst case scenario, maybe you’re VC-funded and you’re losing your seed funding. The money you have to play with every month is going down and you’re not really meeting your KPIs, but you do know your users. If you reach a place where you may have to lose some staff, it can be tempting to sell user information or what you know about user behavior.

Monetizing user data usually seems like a good idea at the time. But it always turns out to be something that hurts you, because it hurts your relationship with the users. When your users can’t trust you anymore, they begin seeing you as the lowest part of what you provide. You are no longer delighting the users, and then they lose the reason why they’re there, and it becomes so easy for someone to replace you.

You may be approached by a company who is interested in just a small part of what you do, for instance, something related to user behavior. This is where you should say “no”. You are still empowered to say “no” at that moment. But as soon as you say “yes”, even if it’s just to sell a little bit of information, and only to trusted partners in certain conditions, those criteria start sliding really quickly, especially if you are not the only one making decisions or you have funders or VCs you report to. Once you make it, you can’t undo it. You can’t unbuild a surveillance apparatus.

Another common problem is that teams are working on tight timelines, and it can be hard to find the time to make sure they are doing things right, and without guidance, they don’t know when they are doing something wrong. When it comes particularly to surveillance, people don’t have a good mapping of things that equal surveillance in their industry and in their products. Engineers aren’t thinking they want to add surveillance to something, they just want to build a tool. They don’t realize when the different elements of what they built and the data they are collecting can be used to monitor and harm users.

What can teams do to prevent surveillance issues from creeping up on them?

I think harm reduction on a micro-intervention level is a helpful practice because it’s just adding a few minutes into a workday that is full of loose minutes. When you’re trying to fix a broken app or a broken world it can take years, and you won’t necessarily have any wins. This is why it is important to invest those minutes and prevent these harms.

Everyone on the team needs to be equipped with tools and information to identify and prevent surveillance-related harms. For engineers, QA (quality assurance), and the debugging team, using basic checklists on a regular basis can help prevent problems and identify moments where the team should slow down and evaluate whether there may be a surveillance issue developing. Product managers and UX should create user personas that include information about how that user could be harmed if your tool were used for surveillance.

Finally, give your team an “emergency brake” that anyone can pull anonymously, if they see an emerging harm, or something that violates the values your team or company has agreed upon. Make it clear ahead of time that if the emergency brake is pulled, the team will dedicate a sprint to fixing the issue.

What advice would you give tech builders who are just starting to think about surveillance?

Reading doesn’t seem like the first thing you want to do when starting a company and focused on finding funding, hiring engineers, and building a prototype. But reading doesn’t take long, and the value it provides in protecting you from liability, enhancing your ability to compete, and building trust with your users pays itself back in dividends.

I recommend reading about Black [people] using technology, because those use cases open up a set of harms that you can apply to almost everything. Two books I like are Dark Matters by Simone Browne, an amazing book on the surveillance of Black folks, and Algorithms of Oppression by Safiya Umoja Noble. When you know better, you can do better.

You can learn more about Matt’s work and watch his talks and security training videos on Medium, or follow him on Twitter @geminiimatt.


Kantara Initiative

SAFE Identity and Kantara Initiative Announce Collaboration to Expand Trust Frameworks for PKI and Non-PKI Identity Providers

SAFE Identity and Kantara Initiative, two globally acknowledged Trust Framework Providers focused on expanding digital identity trust and security, announced a reciprocal agreement to endorse and support each other’s Trust Frameworks for Public Key Infrastructure (PKI) and non-PKI domains together with their certified identity providers.


Tuesday, 29. September 2020

OpenID

The OpenID Foundation and the UK Open Banking Implementation Entity Hosting a Workshop Focused on Financial-grade API (FAPI) and Third Party Providers (TPPs)

View the workshop recording in its entirety here.

The OpenID Foundation (OIDF) and our development partner, the UK Open Banking Implementation Entity (OBIE), continue our collaboration in outreach to the fintech community via workshops in hosting another workshop on Tuesday, October 13, 2020 at 1pm UTC. This next event builds on a prior joint workshop in focusing on helping third party providers (TPPs) develop a detailed understanding of the OpenID Foundation’s Financial-grade API (FAPI) profile.

Workshop goals:

To demonstrate tools which help TPPs build and test their own FAPI-compliant apps
To show TPPs how to identify and raise issues with banks
To thereby speed up the overall resolution of issues and the growth of a healthy open banking ecosystem

Workshop target audience:

OIDF members and community participants
OBIE members
Fintech architects, developers and testers
All TPPs enrolled with OBIE (including TSPs, Vendors and ASPSPs acting as TPPs)

Workshop agenda:

Introduction: Chris Michael (OBIE) & Dave Tonge (OIDF & Moneyhub) – 5 min.
Overview of FAPI Profile (including key differences between OB profile and FAPI): Freddi Gyara (OBIE) & Joseph Heenan (OIDF & FinTechLabs) – 15 min.
What to Expect as a TPP Connecting to a FAPI Compliant Bank API (demo using Ozone bank): Freddi Gyara (OBIE) – 15 min.
How to Check if a Bank is FAPI Compliant (demo using the FAPI conformance suite): Joseph Heenan (OIDF & FinTechLabs) – 15 min.
How to Identify and Raise Issues with a Bank (best way to raise tickets with OB service desk): Gary Sharples (OBIE) – 10 min.
Other Tests a TPP Can Perform (overview/demo of RP tests): Joseph Heenan (OIDF & FinTechLabs) – 15 min.
Recap & Q&A Session: All participants – 15 min.


The post The OpenID Foundation and the UK Open Banking Implementation Entity Hosting a Workshop Focused on Financial-grade API (FAPI) and Third Party Providers (TPPs) first appeared on OpenID.


Kantara Initiative

SAFE Identity and Kantara Initiative Announce Collaboration


SAFE Identity and Kantara Initiative announced a reciprocal agreement to endorse and support each other’s Trust Frameworks for Public Key Infrastructure (PKI) and non-PKI domains together with their certified identity providers.

Kantara and SAFE Identity are dedicated to trusted digital identity management services and solutions, but they focus on different yet complementary assurance aspects and technologies. Kantara’s efforts are typically directed towards de jure standards conformity assessment, standardization, and non-PKI innovation that apply across multiple technologies. SAFE’s focus is expanding and standardizing the use of PKI-based credentials employed for identity, confidentiality and data integrity.


SAFE Identity and Kantara Initiative Announce Collaboration to Expand Trust Frameworks for PKI and Non-PKI Identity Providers


RESTON, Va. and WAKEFIELD, Mass., Sept. 29, 2020 (GLOBE NEWSWIRE) — SAFE Identity and Kantara Initiative, two globally acknowledged Trust Framework Providers focused on expanding digital identity trust and security, announced today a reciprocal agreement to endorse and support each other’s Trust Frameworks for Public Key Infrastructure (PKI) and non-PKI domains together with their certified identity providers. This collaboration is significant because it will consolidate the digital identity assessment and Trust Mark process for companies in healthcare, financial services and other sectors, helping to reduce risk for organizations who rely on the SAFE and Kantara Trust Frameworks.




Decentralized Identity Foundation

Understanding DIDComm


A cross-community effort to standardize on common, DID-anchored capabilities

If you are reading this, you probably know already what Decentralized Identifiers (DID) are: they are “identifiers” or addresses which can be queried to return some information about the subject represented. The addresses take the form of a long, opaque “string” (a long block of letters and numbers, in this case of a fixed length), and the DID “documents” that get returned when they are queried contain some cryptographic key material and, depending on the particularities of the returning system, maybe a few other pieces of information or addresses.
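The shape described above can be sketched in a few lines. This is an illustrative sketch only: the DID method name, identifier, and key material below are invented examples, and real DID documents follow the W3C DID specification in full detail.

```python
# Hypothetical sketch of a DID and the kind of document it resolves to.
# All identifiers and key material here are made up for illustration.

def parse_did(did: str):
    """Split a DID string into its method and method-specific identifier."""
    scheme, method, method_specific_id = did.split(":", 2)
    assert scheme == "did", "a DID always begins with the 'did' scheme"
    return method, method_specific_id

# A resolver would return a document roughly of this shape for the DID:
example_document = {
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#keys-1",
        "type": "Ed25519VerificationKey2018",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyBase58": "H3C2AVvL...",  # truncated, illustrative key material
    }],
}

method, msid = parse_did(example_document["id"])
print(method, msid)  # example 123456789abcdefghi
```

Querying the DID yields the document; the cryptographic key material inside it is what later sections rely on for authentication.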

Identity systems have traditionally been largely hierarchical, focusing on asymmetrical or vertical relationships. The most open system allowing horizontal communication between users in different systems is email, and even there, user-to-user communications are mediated by servers, which ban-list each other based on a federated reputation system. DID Communications (DIDComm), on the other hand, is a set of tools to allow horizontal (or at least power-neutral) and bidirectional channels of communication between two entities that know each other’s DIDs and nothing else. It resembles today’s end-to-end encrypted messaging systems like Signal, Telegram, and WhatsApp more than it resembles traditional email.

How DIDComm works

DIDComm is a cross-community standard that creates libraries and design patterns for two or more DID-controlling entities from diverse DID-based systems to communicate directly with one another. It creates a secure communication channel between software controlled by each of these entities, which can be people, organizations or things. This constitutes an “authenticated channel,” in that control of a given DID’s private keys is, barring a failure of design or operational security, proof of authenticity of the party represented by that DID.

This architecture is powerful because it provides a way to do mutual authentication between any two parties. Right now many systems of messaging and communication on the open web don’t provide for mutual authentication with cryptography; they ask individuals to authenticate to sites or businesses (nowadays mostly with single-sign on integrations, i.e. “log in with your XXX account” buttons), and these businesses vouch for and secure the end-user. In exchange, businesses get valuable insight into the communications or commercial activity they facilitate — insights they often sell to third parties.

Furthermore, while the onus is on individuals to authenticate themselves to these enabling institutions and middlemen, the businesses themselves do little to authenticate themselves reciprocally. Over time, the “lock icon” next to URLs in modern browsers has become a useful norm (and users have been habituated to react suspiciously to any website that cannot provide one). However, phishing attacks, where intercepted or falsified communications lead users to malicious websites impersonating ones they trust, continue to be a major vector for fraud and identity theft. By supporting mutual authentication, a more uniform and democratic protocol is established for secure communications, which raises the bar for user expectations of security assurances from institutions and websites.
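The mutual challenge-response idea behind this can be sketched with standard-library primitives. Note the heavy caveat: real DIDComm uses asymmetric keys published in DID documents, whereas this toy stand-in uses shared-secret HMACs purely to show the two-way shape of the handshake; the `Party`, `respond`, and `verify` names are invented for this sketch.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for illustration only. Real DIDComm authenticates with
# asymmetric signatures verified against keys in each party's DID document.

class Party:
    def __init__(self, did: str):
        self.did = did
        self._key = secrets.token_bytes(32)  # stands in for a private key

    def public_material(self) -> bytes:
        # With real asymmetric crypto only the *public* key would be shared
        # (via the DID document); this HMAC toy has to share the secret.
        return self._key

    def respond(self, nonce: bytes) -> bytes:
        """Prove control of the key by answering a challenge nonce."""
        return hmac.new(self._key, nonce, hashlib.sha256).digest()

def verify(material: bytes, nonce: bytes, proof: bytes) -> bool:
    expected = hmac.new(material, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

alice, bob = Party("did:example:alice"), Party("did:example:bob")

# Each side challenges the other: that bidirectionality is what makes
# the authentication *mutual*.
n1, n2 = secrets.token_bytes(16), secrets.token_bytes(16)
assert verify(bob.public_material(), n1, bob.respond(n1))
assert verify(alice.public_material(), n2, alice.respond(n2))
print("mutually authenticated")
```

The point of the sketch is the symmetry of the exchange: neither side is a privileged verifier, unlike today’s single-sign-on flows.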

A Quick History of DIDComm and the Aries Protocols

DIDComm was first developed within an international and collaborative open source co-development project called Aries (hosted at Hyperledger) and under an IPR regime designed to cover software but not specifications. Aries was spun out of the earlier Hyperledger Indy project to iterate on, expand, and make more blockchain-agnostic the codebase and tooling created earlier for the identity blockchain Indy. In the same process, Hyperledger Ursa was also created to advance the underlying cryptographic elements independently of the blockchain and the identity systems relying on it. The Indy project continues evolving as well, from a single blockchain to a family of interoperable ones.

The name “DIDComm” and version 1 of its libraries and designs evolved in this context: it adapted the Indy-specific horizontal communications libraries to a more agnostic Aries context and iterated them to be more configurable for new contexts and implementations. At some point, however, it became clear that further interoperability would be best served not by writing specifications based on existing Aries implementations, but by a more “green-field”, specification-first design process with interlocutors from further afield. This work came to be co-sponsored by DIF and Hyperledger, partly to engage these outside interlocutors and partly because the IPR protections of DIF were more appropriate to a specification-first open-standards process, as reflected in the charter of the DIDComm working group at DIF.

Source: CHAPI101/DIDComm101 joint session at #IIW30

The Future of DIDComm

Since the chartering process began in September of 2019 at the post-IIW DIF Face to Face, the work of designing a new core for DIDComm and thinking through new features and structure has proceeded at a steady clip. The implementations and details are still being hammered out in some places, but the feature set is stable. The benefits of the new protocol will include:

Mutual authentication
Robust, email-like threading support for relying messaging systems (including error-handling and other machine-readable messaging systems)
Support for new security and transport primitives like the JWM envelope
Asynchronous by default, but with synchronous communication modes also supported
Offline support, so that two agents who are not online can exchange information via Bluetooth or QR code
Easy to support with a web server (among many other topographies)
Many of the positive qualities of the earlier SOAP protocol, including message format standards, routing, transport-agnostic support, and subprotocols

This effort has the direct support and participation of many core members of the Aries community, which plans to migrate to DIDComm v2 as it becomes ready for use and tested. In existing DIDComm deployments (many of which are already in production), protocols relying on a DIDComm-encrypted channel for authentication or communications functions will be moved over. The relationship between DIDComm and these relying “subprotocols” is quite similar to the relationship between HTTP and the APIs created on top of it: an upgrade to the underlying security or feature set will not affect the relying applications that just need a secure channel.

There is some consternation about the use of the term “DIDComm Protocols” to describe the various different types of exchange or transactions that could be built on a DIDComm foundation. Regardless of whether one chooses to call them “protocols” or something else, a cross-community standard will be crucial to co-developing broadly interoperable capabilities and common libraries on the basis of secure, authenticated DIDComm channels. Some examples of what these would include:

Secure user messaging, or even chat-like instant messaging
Issuing a credential
Presenting a credential or an identity proof
Interactions with IoT systems, or even directly with specific, authenticated devices
Payment coordination

Getting involved

A good place to jump in would be this video recorded by DIDComm WG Chair Sam Curren at the June DIF Face-to-face meeting:

With this background, a read through the charter, the mailing list archives, the github repository, or the Slack channel of the DIDComm working group at DIF will make a lot more sense; as will DIDComm sessions at the new IIW and F2F meetings.

Understanding DIDComm was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.


DIF at #IIW31


Whether, why, and how to attend the biannual conference along with DIF

The Internet Identity Workshop is an interactive “un-conference” that has convened various kinds of researchers, identity technology companies, and thinkers since 2005. It brings together a wide swath of people working towards greater control of the digital representations of themselves, incorporating theoretical, market-building, research, and policy-oriented projects. This coming month, the 31st edition will be held entirely online on the Qiqochat platform, like the 30th edition before it.

The Storied History of IIW

Kaliya Young, Doc Searls and Phil Windley founded the conference in 2005. At the time, all three were active in a group known as “the Identity Gang”. They had been discussing identity technology concepts in detail on a mailing list since meeting each other at Digital Identity World in the fall of 2004; after a year of passionate discussions, there came a point where an in-person meeting felt necessary. The first meeting was in Berkeley, California, and ever since it has been hosted at the Computer History Museum in Mountain View, California. It has moved online-only since the Covid-19 pandemic began in early 2020.

In 2005, many in the community were blogging and writing about identity and related themes. Some had made broad and widely acclaimed pronouncements like Kim Cameron, most known in this regard for his essay, The Seven Laws of Identity. One early collaboration within the community was a Lexicon that has aged well. Many were also writing code and founding projects to realize their various visions. At the first IIW, eight different projects gave structured presentations on the first day, while the second day was devoted to a more open-ended and collaborative synthesis, facilitated by Kaliya Young using Open Space Technology.

At that first event, several of the projects focused on using URLs as identifiers for individuals, who could authenticate against these as they moved around the early Web 2.0, commenting on different sites. Conversations at the event were formative on the OpenID protocols and the OpenID Foundation. Since then, many different protocols have been conceived and/or nested at the conference, including OAuth, OpenID Connect, SCIM, Information Cards, XRI, XDI, Web Finger, Salmon, and PubSubHubbub, among others.

The event attracts people from all over the world and continues to be the “biggest tent” for collaboration and ideation across the various user-centric identity spheres and communities, particularly for what is now called “decentralized identity” or “self-sovereign identity”. You can see the book of proceedings from most of the IIWs for the last 15 years, refined and edited in the month after each organic and self-organizing event.

The Decentralized Identity Foundation at IIW

IIW has always been a neutral venue where diverse groups across the decentralized identity community, which often work in parallel between IIWs, come together to report out and understand each other’s projects. Furthermore, many attendees come from established sectors of the software industry, such as Enterprise Identity and Access Management (EIAM), Cybersecurity, or Customer Identity and Access Management (CIAM). This creates a healthy mix of specialists, professionals, activists, and novices, balancing the “bikeshedding” of technology specifics with societal conversations and industry-wide trends and roadmaps. The open culture of DIF and its commitment to integrations with today’s identity technologies thrive in this environment, and many key DIF collaborations have begun or successfully recruited new participants at IIW.

Anyone who attends from any of the myriad organizations in the space can put agenda/discussion topics forward, which also makes it a very generative place for researchers, innovators, and market-watchers. Many of the DIF working groups present their latest work at the event for technical or business review, and it is a key venue for soliciting insightful feedback from related organizations and industry stakeholders. Sometimes projects in early stages of development, specification, or scoping even do requirements gathering or technical sanity-checks at the event.

Historically, DIF has hosted a one-day face-to-face meeting immediately before or after the IIW conference proper to take advantage of the geographic co-location of so many key players. This time around, however, DIF is experimenting with a more spaced-out approach, holding its face-to-face 5 or 6 weeks after the fact so that the IIW sessions can be digested and the two events can play complementary roles. Stay tuned for more details about design or development sprints bookended by the two events.

Hot Topics across #IIW30 and #IIW31

At the last IIW, DIF working groups put forward these sessions:

The internal governance group shared its new Code of Conduct, and the Glossary Group also gave a presentation on its methodology and results
There were four different sessions presenting and gathering feedback on four different aspects of the KERI project
Similarly, there were multiple presentations on the BBS+ signature suite at the heart of both the Aries AnonCreds2 system for verifiable presentations and a novel JSON-LD-based system, development of which continues to be led by Mattr Global
The DIDComm WG gave a progress report, which was particularly important to many stakeholders in the Aries community not involved in the autonomous research & development project. A separate session was held on the JSON Web Messaging (JWM) proposal that the DIDComm WG submitted to the IETF
Similarly, there was a well-attended joint progress report on the SideTree Protocol / Element DID and Friends, as well as an update from the XYZ project to iterate the OAuth protocol
A breakthrough panel (which was recorded; see previous link) explored the potential interaction or combination of the complementary browser-based CHAPI communications/transport protocol and the more browser-independent work of the DIDComm WG
Frameworks and ecosystem maps were presented by the new Trust-over-IP Foundation and the Operators project of MyData Global; certification programs were announced and outlined by the Me2B Alliance and the ID2020 project
For a more detailed overview of highlights, see Juan Caballero’s detailed recap for the company blog of DIF member organization Spherity GmbH
For the complete, edited notes from all sessions, see the Proceedings Book (sponsored by DIF member organization Jolocom GmbH)

Heading into the 31st IIW, we can expect significant interest and attendance at sessions on these topics:

A progress report on the alignment of the browser-native Presentation Exchange DIF specification and the relevant Aries RFCs and libraries
Moving KERI from prototype to actual implementations (and integrations!), as well as Drummond Reed’s less-technical introduction entitled “KERI for Mere Mortals”
Cross-method interoperability and portability, within and beyond Indy networks (which are proliferating quickly in Europe!)
Test suites and a process to incentivize (or even fund?) revisions of major specifications to include more explicit test vectors
Lessons from the mid-2020 DID-Core sprint and the PING W3C security review
Updates on OIDC-DID bridging work, OAuth & GNAP, and fast-moving regulatory changes in Europe and North America

Making your IIW plan

DIF has had great success presenting work at IIW, and increasingly, posting recordings of these presentations on our YouTube channel is becoming an important aspect of our educational and outreach efforts. We’re hoping this tradition will build more momentum at the upcoming IIW!

The event is a great opportunity for those who are already very active in the community working on standards and code related to decentralized identity. It is also a great opportunity for those who are very new to get up to speed quickly via what language professors call “deep immersion.” It is very welcoming and friendly, and since each session includes experts from different fields, there are few assumptions and much level-setting. DIF leadership attends the event and encourages its membership to actively participate.

For those already planning to attend, please note that a skeletal structure is already posted. The most important sessions to attend are each day’s “agenda creation” sessions; the “demo hour” sessions for live demos of new products and prototypes, and the “closing circle” readouts from each day’s events, are close seconds. Blocking out those timeslots in your calendar weeks in advance is highly recommended, as a way of keeping them free of conflicts in your home time zone!

DIF at #IIW31 was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Saturday, 26. September 2020

Me2B Alliance

Re: Welcome New Board Member Sheryl Wilkerson

Welcome, Sheryl.

Re: Welcome New Board Member Sheryl Wilkerson

Thank you. I appreciate the welcome and look forward to working with you.

Regards, Sheryl

Re: Welcome New Board Member Sheryl Wilkerson

Welcome onboard Sheryl.
Iain

Friday, 25. September 2020

Me2B Alliance

Welcome New Board Member Sheryl Wilkerson


Dear Community,

I’m delighted to announce the addition of a new Board Member to the Me2B Alliance, Sheryl Wilkerson. We thought we’d let Sheryl introduce herself in her own words (below).


Welcome aboard, Sheryl!


"Personal data is one of our most valuable assets and plays an increasingly important role in our economy and connected lives. I've spent most of my career advocating for companies that develop and deploy innovative technologies that improve quality of life for people. I believe in access to technology for all, but also the right to privacy, consent, transparency, good stewardship practices, and accountability in how my personal data and information is managed. I know first-hand the devastating impact that privacy failures and intentional breaches can have on your life.

As a lawyer, I am excited about public policy and efforts underway to establish more fulsome data protections.  As an entrepreneur, I've seen how technology can harvest data to democratize opportunity and level the playing field for underserved segments of society.  As a public servant, I understand the need to protect the public interest while enabling business growth and prudent use of data. As a private employee, I understand the immense responsibility companies have to ensure the fair provisioning of e-commerce services. 

I'm pleased to serve on the Board of the Me2B Alliance which is working with some of the most committed experts and knowledgeable thought leaders to develop standards for respectful technology that will create a fair and balanced future for those who use it."



Federal Blockchain News

China's Blockchain Services Network

Victoria Adams, PhD & Jason Brett of the Value Technology Foundation preview their upcoming paper on the threat posed by China's Blockchain Services Network and what the United States needs to do to prevent China's technical hegemony.

MyData

Putting MyData Principles into action: An introduction to the MyData Design Toolkit


Over the last years, the global community of MyData has been developing an approach aimed at strengthening digital human rights while opening new opportunities for businesses to develop innovative new services based on personal data and mutual trust. MyData Design was established in September 2019 to advance the design culture and practices within the MyData...

Read More

The post Putting MyData Principles into action: An introduction to the MyData Design Toolkit appeared first on MyData.org.

Tuesday, 22. September 2020

OpenID

OpenID Foundation Deepens Partnership with Financial Data Exchange on Adoption of New Security Standards


OpenID Foundation Chairman, Nat Sakimura, presented a keynote at the Financial Data Exchange’s (FDX) Global Summit Fall 2020. Nat’s keynote, “Global Adoption of FAPI Among Open Banking Standards… And Beyond”, highlighted the growing adoption momentum of the Foundation’s Financial-grade API (FAPI) security profile and the high quality of self-certified implementations via the Open Certification Program.

The FDX Global Summits are the organization’s signature technical conferences for its members, bringing together the best of the financial industry’s engineers, business leaders and stakeholders in a technical working group environment to advance the development, implementation and adoption of open standards and associated data use cases and certification protocols. The Financial Data Exchange is a standards development organization that has rapidly grown to over 150 banks, fintechs and key players in the US and Canadian financial services market.

The OpenID Foundation and the Financial Data Exchange continue to partner in working groups and workshops to drive the adoption of these important new standards. Don Cardinal, Managing Director of the FDX, Anoop Saxena, of Intuit and Co-Chair of the FAPI Work Group, and Nat Sakimura will “keynote” a panel on the intersection of identity and open banking standards at Summit Fall 2020.

Please note that registration is open for the upcoming OpenID Foundation Virtual Workshop on Wednesday, October 28, 2020.

The post OpenID Foundation Deepens Partnership with Financial Data Exchange on Adoption of New Security Standards first appeared on OpenID.

Monday, 21. September 2020

Oasis Open

Invitation to comment on AMQP Request-Response Messaging with Link Pairing v1.0 – ends October 21

Defining a common pattern for pairing two unidirectional links to create a bidirectional message transport between two endpoints. The post Invitation to comment on AMQP Request-Response Messaging with Link Pairing v1.0 – ends October 21 appeared first on OASIS Open.

OASIS and the OASIS Advanced Message Queuing Protocol (AMQP) TC are pleased to announce that AMQP Request-Response Messaging with Link Pairing Version 1.0 is now available for public review and comment. This is the first public review for this specification.

AMQP defines links as unidirectional transport for messages between a source and a target. A common messaging pattern is that of “request-response”, that is, two parties partaking in a bidirectional conversation using messages. This document defines a common pattern for pairing two unidirectional links to create a bidirectional message transport between two endpoints.
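The pairing pattern can be illustrated with a small sketch. This models only the shape of the idea (two unidirectional links combined into one bidirectional request-response transport) using in-process queues, not the AMQP 1.0 wire protocol; the `Link` and `PairedEndpoint` names are invented for this sketch.

```python
import threading
from queue import Queue

class Link:
    """A unidirectional message transport from a source to a target."""
    def __init__(self):
        self._q = Queue()

    def send(self, msg):      # called at the source end
        self._q.put(msg)

    def receive(self):        # called at the target end
        return self._q.get()

class PairedEndpoint:
    """One end of a request-response conversation built from two links."""
    def __init__(self, outgoing: Link, incoming: Link):
        self._out, self._in = outgoing, incoming

    def request(self, msg):
        self._out.send(msg)        # the request travels over one link...
        return self._in.receive()  # ...the response over its pair

    def serve_one(self, handler):
        self._out.send(handler(self._in.receive()))

# Pair two links in opposite directions to form the bidirectional channel.
a_to_b, b_to_a = Link(), Link()
client = PairedEndpoint(outgoing=a_to_b, incoming=b_to_a)
server = PairedEndpoint(outgoing=b_to_a, incoming=a_to_b)

t = threading.Thread(target=server.serve_one, args=(str.upper,))
t.start()
reply = client.request("ping")
t.join()
print(reply)  # PING
```

Each link on its own is one-way; it is only the agreed pairing convention that lets the two parties treat them as a single conversation, which is what the specification standardizes.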

The documents and related files are available here:

AMQP Request-Response Messaging with Link Pairing Version 1.0
Committee Specification Draft 01
20 August 2020

Editable source (Authoritative):
https://docs.oasis-open.org/amqp/linkpair/v1.0/csd01/linkpair-v1.0-csd01.docx

HTML:
https://docs.oasis-open.org/amqp/linkpair/v1.0/csd01/linkpair-v1.0-csd01.html

PDF:
https://docs.oasis-open.org/amqp/linkpair/v1.0/csd01/linkpair-v1.0-csd01.pdf

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:

https://docs.oasis-open.org/amqp/linkpair/v1.0/csd01/linkpair-v1.0-csd01.zip

How to Provide Feedback

OASIS and the AMQP TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

The public review starts 22 September 2020 at 00:00 UTC and ends 21 October 2020 at 23:59 UTC. Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility, which can be used by following the instructions on the TC’s “Send A Comment” page (https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=amqp).

Comments submitted by TC non-members for this work and for other work of this TC are publicly archived and can be viewed at: https://lists.oasis-open.org/archives/amqp-comment/ All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members.

In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the AMQP TC can be found at the TC’s public home page: https://www.oasis-open.org/committees/amqp/ Additional information related to this public review can be found in the public review metadata document [3].

========== Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr

[2] https://www.oasis-open.org/committees/amqp/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#RF-on-RAND-Mode
RF on RAND Mode

[3] Public review metadata document:
https://docs.oasis-open.org/amqp/linkpair/v1.0/csd01/linkpair-v1.0-csd01-public-review-metadata.html

The post Invitation to comment on AMQP Request-Response Messaging with Link Pairing v1.0 – ends October 21 appeared first on OASIS Open.


OpenID

Important Announcement about OpenID Foundation Transitions

Dear OpenID Foundation Members: It is with a heavy heart to share this news with you. Don Thibeau, after many years of his excellent service, informed the OpenID Board of his intention to move on from his position as the Executive Director at the end of 2020 to pursue other opportunities and challenges in the […] The post Important Announcement about OpenID Foundation Transitions first appeared

Dear OpenID Foundation Members:

It is with a heavy heart to share this news with you. Don Thibeau, after many years of his excellent service, informed the OpenID Board of his intention to move on from his position as the Executive Director at the end of 2020 to pursue other opportunities and challenges in the identity space. This is such a loss for the Foundation, as we would never have grown to what we are without Don’s dedicated service as a colleague and friend to many of us.

Don has committed to the Board of Directors to provide the time needed to ensure a successful transition to new leadership in 2021. Don will continue to be instrumental through his tenure in the many liaison relationships that the Foundation has under way.

I have asked Vice Chairman Bjorn Hjelm to lead the Foundation’s transition to new leadership in 2021. The Executive Committee and the Board of Directors have started this work under Bjorn’s direction and will continue to communicate with membership and the community at large on our progress. If you have any questions or input into this process, please direct your inquiries to Vice Chairman Bjorn Hjelm.

This is an important transition in the evolution of the OpenID Foundation. The Board of Directors is committed to taking the time required to successfully execute a transition plan while being transparent in our efforts, and continuing the work of the Foundation with its members and working groups. Thank you for your continued support and contributions to the OpenID Foundation.

 

Best regards, 

Nat Sakimura
Chairman, OpenID Foundation Board of Directors

The post Important Announcement about OpenID Foundation Transitions first appeared on OpenID.


OpenID Foundation Continues to Evolve in 2021

Dear OpenID Foundation Members:  After 10 years I’ve decided to move on from my position as Executive Director of the OpenID Foundation at the end of 2020. It’s been an honor to serve the Board, members of the Foundation and the community at large.  It has been a privilege and a lot of fun to […] The post OpenID Foundation Continues to Evolve in 2021 first appeared on OpenID.

Dear OpenID Foundation Members: 

After 10 years I’ve decided to move on from my position as Executive Director of the OpenID Foundation at the end of 2020. It’s been an honor to serve the Board, members of the Foundation and the community at large. 

It has been a privilege and a lot of fun to help lead the Foundation. I’ve been proud to add my efforts to yours in ensuring the OpenID Foundation’s unique and important contributions to a more secure, interoperable identity ecosystem.

For the Foundation, it’s an opportunity to take a fresh look at the future. For me, it’s a chance to do new, big and bold things in the identity space. I look forward to working with you in the months remaining and years to come.

 

Best regards, 

Don Thibeau
Executive Director

The post OpenID Foundation Continues to Evolve in 2021 first appeared on OpenID.

Friday, 18. September 2020

Oasis Open

OpenTeams Podcast features OASIS

Guy Martin on measuring ROI of contributions and similarities between open standards and OSS. The post OpenTeams Podcast features OASIS appeared first on OASIS Open.

Guy Martin on measuring ROI of contributions and similarities between open standards and OSS.

The post OpenTeams Podcast features OASIS appeared first on OASIS Open.

Thursday, 17. September 2020

Berkman Klein Center

Urs Gasser on two new books — and what’s ahead

Urs Gasser on two new books — and what’s ahead The precariousness of the early days of the pandemic turned parents into educators and scholars scrambling to make sense of the historic challenges faced by our societies and the institutions governing them. Urs Gasser, the Executive Director of the Berkman Klein Center (BKC) and Professor of Practice at Harvard Law School, has co-authored two
Urs Gasser on two new books — and what’s ahead

The precariousness of the early days of the pandemic turned parents into educators and scholars scrambling to make sense of the historic challenges faced by our societies and the institutions governing them.

Urs Gasser, the Executive Director of the Berkman Klein Center (BKC) and Professor of Practice at Harvard Law School, has co-authored two timely books that inform both of those struggles.

Gasser co-authored The Connected Parent with John Palfrey, president of the John D. and Catherine T. MacArthur Foundation, which turns a decade of academic research into practical guidance for parents raising children in a “digitally connected” world. He also wrote an essay series published in German (Pandemie als Verbundkrise und digitales Phaenomen) that focuses on the COVID-19 pandemic, risk, digitization, and the law. This book is co-authored with Jens Drolshammer, professor emeritus at the University of St. Gallen and former BKC faculty associate.

We spoke with Gasser about the books, threads between them, and what he is working on next.

You’re the co-author of two new books to be published within a few weeks’ time. Let’s start with “The Connected Parent”. In a nutshell, what is this book about?

“The Connected Parent”, written with my friend and long-time collaborator John Palfrey, offers advice to parents and other caregivers on how to support their children as they grow up in an increasingly digitally connected environment. The book includes very practical advice and suggestions. John and I summarized the best available research, including 15 years of youth and media work at the Berkman Klein Center (BKC), to help parents figure out how to think about issues like screentime, social media usage, privacy and well-being, digital activism and citizenship skills, to name just a few — and what to do about them to minimize risk and embrace opportunities.

Your second book focuses on the COVID-19 pandemic, risk, digitization, and the law. Can you tell us more about it?

This book is published in German and in the form of an essay collection. It offers initial reflections on COVID-19 as a specific type of risk that Judge Posner a decade ago in a seminal book described as a “catastrophic risk” — something most recently also taken up in Tony Ord’s new book.

The texts were written during the “lockdown” and in collaboration with my former teacher and previous BKC associate Jens Drolshammer. Switzerland serves as a country case study to better understand what role the law plays during such a massive crisis. We have heard and learned a lot from public health experts during the crisis. Understandably, the legal system has been less front-and-center. Yet, it turns out that law is almost everywhere and hugely relevant especially during a crisis of this magnitude. Some of the essays also explore the historic role digital technologies play in dealing with COVID-19 — I call it metaphorically the first “digital pandemic.” We talk about the intersectionality of these things.

Both of these books are timely to the ongoing pandemic, in different ways. Are there any connecting points between these two books?

We submitted “The Connected Parent” manuscript before COVID-19 arrived in the US and were only able to add small references to it as our publisher was initially skeptical about how much of a “big deal” COVID-19 would be. Well, of course, now we know better. Parents and educators are trying to figure out how digital tech can be used to support kids around the world as their education is so deeply disrupted by COVID-19. The book will not give a full answer to the current crisis, but might be still helpful in the present based on the connected parent philosophy we sketch in the book. And we believe it offers sound longer-term guidance that helps parents to engage with their kids — and be connected with the digital world they live in. The book on the pandemic reflects on some of the educational experiences in real-time, albeit from a policy and not necessarily a parental perspective. There are also other connection points between the two books. Both are written with a broader audience in mind, and both are the fruit of collaboration.

What motivated you and John Palfrey to write “The Connected Parent”? Isn’t it unusual for scholars to write a parents’ guide?

Yes, it is! John and I have already written several books together, including our 2008 and 2016 book “Born Digital,” and in some ways, the new book is its cousin. Since our last book, we’ve been asked many times for more practical guidance than we were willing to offer in our previous work. And at some point, we decided to take on this challenge and translate what we can learn from research in ways that it translates into recommendations for parents. For me, it’s been a great learning experience to write this book. As academics, we are used to making all sorts of caveats — “can’t say, more research is needed” — where the data is shaky. But as a parent, you have to make decisions, whether you have scientific evidence or not. So we’ve tried to take this seriously and be helpful even where we don’t have all the answers. We did so by being very honest about what we know and what we don’t know — and then still give advice based on what we think makes sense based on our own experiences as researchers, educators, and parents. The Youth and Media team at BKC has been very helpful to keep us honest and grounded.

In the introduction to the essay series on the pandemic, risk, digitization, and the law, you and Jens Drolshammer state that the book is an experiment. What do you mean by that?

It’s been an experiment in multiple ways. It’s experimental in the sense that it was written during the early moments of the pandemic, with lots of uncertainty and even unknown unknowns. The texts offer real-time observations and reflections, without a rigorous methodology, which of course means that our observations are not scientific and offered more in the spirit of early hypotheses. We looked at it like writing initial observations and questions into a personal journal and making these entries publicly available because it might be of interest to others as well. We did so based on stories, reports, news coverage, etc. we’ve collected from the first day of the pandemic based on a set of 10 criteria.

So both the format and working style is experimental, at least for me. We hope that these early and only tentative entries into what we termed our “logbook” might invite more rigorous work over the years to come. It’s also an experiment to write and publish such a book across the Atlantic during such an extraordinary time.

Will you publish an English translation of the book?

I don’t think so. While I enjoy the challenge of writing and publishing in two different languages, I generally don’t feel excited about translating what I’ve written in one language to the other. To me, it’s about more than translating the text: When writing in German or English, I think differently about what I am going to say and how — it’s like flipping a switch.

That said, a brief summary with some of the key observations, as well as a transatlantic conversation with Professor Martha Minow and John Palfrey that is included in the essay collection, is also available on Medium.

Do you plan to continue writing books, and if so, what’s next?

I try to keep a balance as far as types of contributions are concerned. I’m working on a number of shorter articles right now, but have another book project lined up — this time a book on the turn to information and information law, which brings me back to the roots of my academic life.

On a personal level, I enjoy working on books because it forces me to learn and engage in different ways than what has become the dominant mode in today’s professional lives. I’m acutely aware that both reading and writing books are an enormous privilege, and I couldn’t be more grateful for it.

Urs Gasser on two new books — and what’s ahead was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


Me2B Alliance

Re: Introducing new co-chair for the Me-s WG

Thanks for the introduction, Lisa. It's great to meet you, Muriel! We've got lots to learn and many discussions ahead about the impacts of technology on Me's, and thank you for all your work and leadership to get us there! Looking forward to speaking soon, Sincerely, Zach
Thanks for the introduction, Lisa.
It's great to meet you, Muriel!
We've got lots to learn and many discussions ahead about the impacts of technology on Me's, and thank you for all your work and leadership to get us there!
Looking forward to speaking soon,
Sincerely, Zach



Wednesday, 16. September 2020

ID2020

ID2020 Celebrates International Identity Day

Please join us today in celebrating International Identity Day. Identity is a fundamental and universal human right. Unfortunately, one in seven people globally — more than one billion individuals — are unable to prove who they are by any widely recognized means. This most fundamental function — the ability to prove who we are — is something that many of us take for granted. Yet it is an in

Please join us today in celebrating International Identity Day.

Identity is a fundamental and universal human right. Unfortunately, one in seven people globally — more than one billion individuals — are unable to prove who they are by any widely recognized means.

This most fundamental function — the ability to prove who we are — is something that many of us take for granted. Yet it is an incredibly important one, allowing us to enjoy rights and protections under the law, access a variety of services, participate as citizens and voters, transact in an increasingly digital economy, and much more.

ID2020 was launched in 2016 to address this global issue by fostering collaboration between the private sector, government, and nonprofit organizations toward a common vision: good ID for all.

Our work is premised on the notion that we all deserve better ways to prove who we are, both in the physical world and online. But achieving our vision will require an intentional focus and sustained commitment to ensuring that the needs of the most vulnerable in society are met. If past experience has taught us anything it is that, absent this focus, hundreds of millions of people will be left behind.

In 2018, in collaboration with the UN High Commissioner for Refugees (UNHCR), we published the ID2020 Manifesto. The Manifesto outlines our values and serves as the basis for the ID2020 Certification for digital ID solutions, our advocacy activities, and the programmatic work we are undertaking in partnership with governments and development and humanitarian organizations.

ID2020 Alliance partners recognize that achieving our shared vision will also require businesses, government, and civil society to collaborate and make simultaneous progress along four tracks:

Continuing to advance digital ID technologies and test them in the field Promoting public and private sector implementations Establishing the technical standards and legal and regulatory frameworks necessary to ensure that these systems are privacy-protecting, user-controlled, portable, interoperable, and persistent across an individual’s lifespan Working across sectors — and with a wide array of stakeholders — to build trust in these systems and support their adoption

ID2020 exists to support this type of collaboration.

Each year, the United Nations recognizes official days of observance for a variety of issues on the international development and human rights agenda. These days promote awareness of, and global action on, important political, social, cultural, humanitarian, or human rights issues.

This is a marathon, not a sprint, and a mission we cannot accomplish alone. We are proud to support efforts, such as the International Identity Day Coalition, that expand awareness of this critical development goal.

We hope that you will consider joining ID2020 as a member of the International Identity Day Coalition. The Coalition brings together development agencies, governments, and public interest organizations to advocate for the formal recognition of International Identity Day by the United Nations and its member nations.

For more information about the Coalition, please visit their website: https://www.id-day.org

ID2020 Celebrates International Identity Day was originally published in ID2020 on Medium, where people are continuing the conversation by highlighting and responding to this story.


ID2020 Celebrates International Identity Day

Please join us today in celebrating International Identity Day. Identity is a fundamental and universal human right. Unfortunately, one in seven people globally — more than one billion individuals — are unable to prove who they are by any widely recognized means. This most fundamental function — the ability to prove who we are — is something that many of us take for granted. Yet it is an in

Please join us today in celebrating International Identity Day.

Identity is a fundamental and universal human right. Unfortunately, one in seven people globally — more than one billion individuals — are unable to prove who they are by any widely recognized means.

This most fundamental function — the ability to prove who we are — is something that many of us take for granted. Yet it is an incredibly important one, allowing us to enjoy rights and protections under the law, access a variety of services, participate as citizens and voters, transact in an increasingly digital economy, and much more.

ID2020 was launched in 2016 to address this global issue by fostering collaboration between the private sector, government, and nonprofit organizations toward a common vision: good ID for all.

Our work is premised on the notion that we all deserve better ways to prove who we are, both in the physical world and online. But achieving our vision will require an intentional focus and sustained commitment to ensuring that the needs of the most vulnerable in society are met. If past experience has taught us anything it is that, absent this focus, hundreds of millions of people will be left behind.

In 2018, in collaboration with the UN High Commissioner for Refugees (UNHCR), we published the ID2020 Manifesto. The Manifesto outlines our values and serves as the basis for the ID2020 Certification for digital ID solutions, our advocacy activities, and the programmatic work we are undertaking in partnership with governments and development and humanitarian organizations.

ID2020 Alliance partners recognize that achieving our shared vision will also require businesses, government, and civil society to collaborate and make simultaneous progress along four tracks:

Continuing to advance digital ID technologies and test them in the field Promoting public and private sector implementations Establishing the technical standards and legal and regulatory frameworks necessary to ensure that these systems are privacy-protecting, user-controlled, portable, interoperable, and persistent across an individual’s lifespan Working across sectors — and with a wide array of stakeholders — to build trust in these systems and support their adoption

ID2020 exists to support this type of collaboration.

Each year, the United Nations recognizes official days of observance for a variety of issues on the international development and human rights agenda. These days promote awareness of, and global action on, important political, social, cultural, humanitarian, or human rights issues.

This is a marathon, not a sprint, and a mission we cannot accomplish alone. We are proud to support efforts, such as the International Identity Day Coalition, that expand awareness of this critical development goal.

We hope that you will consider joining ID2020 as a member of the International Identity Day Coalition. The Coalition brings together development agencies, governments, and public interest organizations to advocate for the formal recognition of International Identity Day by the United Nations and its member nations.

For more information about the Coalition, please visit their website: https://www.id-day.org


Me2B Alliance

Introducing new co-chair for the Me-s WG

Hi everyone,   I’m delighted to introduce you all to Muriel Shockley who has volunteered to co-chair the Me-s WG with Jeff Orgel.   Muriel is the program director for undergraduate studies at Goddard College in Vermont, and has a rich background of skills and expertise, with a BS in Econ from Smith College, a masters in Clinical Psychology from Antioch University and a PhD in Leade

Hi everyone,

 

I’m delighted to introduce you all to Muriel Shockley who has volunteered to co-chair the Me-s WG with Jeff Orgel.

 

Muriel is the program director for undergraduate studies at Goddard College in Vermont, and has a rich background of skills and expertise, with a BS in Econ from Smith College, a masters in Clinical Psychology from Antioch University and a PhD in Leadership and Change from Antioch University.  We are fortunate indeed to have Muriel engaged as a co-chair of the Me-s working group.

 

Please join me in welcoming Muriel to the Me2B Alliance family.

 

Lisa


WomenInIdentity

Member Interview – Shilpa Maher

What do you do and what is it about your job that gets you out of bed in the morning? I’m working for HSBC and have been here since May… The post Member Interview – Shilpa Maher appeared first on Women in Identity.
What do you do and what is it about your job that gets you out of bed in the morning?

I’m working for HSBC and have been here since May 2016 doing a variety of roles, but recently (well since Jan 2018), I have been getting my fingers dirty in something very exciting –  defining and creating a brand new strategic capability for the Bank called – Events Based Data Assurance – which will underpin our Digital Identity efforts.

Essentially, all Digital Identity is customer data that we trust. We have a lot of customer data in the Bank but we don’t record why we trust this data and so we ask the customer to prove who they are over and over again. i.e. when someone wants to open a new account, we ask them to prove their ID by showing us their passport. Someone in the Bank will look at the passport, match the photo on the passport to the person standing in front of them, and confirm that they match. However, what is recorded is the person’s name, DOB, Nationality BUT not the event of ‘passport check in branch’.

If we recorded ‘why we trust the data’ using a new type of data called ‘EVENTS’ i.e. the event which led to us knowing why we trust a piece of data (in the above example, Suzie, a Branch advisor in our Canary Wharf branch checked that the photo on the passport matched the person in front of them and it matched, it unlocks a huge opportunity inside the bank to ‘RE-USE’ data across markets and functions – resulting in a fantastic customer experience and reduction in costly and often duplicative processes.

How did you get to where you are today?

I started on the Barclays Bank Graduate Programme and tended to move around every 3 years or so to new places.

I had a curious mind and wanted to do ‘new’ things rather than be in a BAU type role.

I was hungry to learn and develop and so sought opportunities every time I felt like I had learnt all I could from an organisation. I’ve tried my hand at all sorts of things from Banking, to Management Consulting, to starting up a brand new dotcom business, to digitising an insurance company, studying for an MBA, moving to the online travel industry and then back to Digital Banking and now most recently, qualifying as a Personal Performance Coach and setting up a Coaching Business as a side hustle!  I absolutely love helping people understand what their potential is and to guide them to go and realise this!

What is the most important lesson you have learned along the way?

The most important lesson I have learnt along the way is to be your authentic self, don’t try and be someone who you’re not. I know this is difficult because we all suffer from this, especially us women. We’re always comparing ourselves with the person who is always getting the promotions, the attention, the recognition etc – but by trying to be like her, you slowly start to move away from who you are at your core – your authentic self.

This will only result in a lot of frustration and unhappiness which can become a huge problem.

Remember, you’re an individual with skills, experience and a personality which is unique to you. Embrace this and become comfortable in your own skin and let people see the real you! You’ll be surprised where this can lead!

What’s your pitch to CEOs in the identity space? What do you suggest they START / STOP / CONTINUE doing and why? Writing down your internal business activities in a granular way – so that others can understand why they should trust this piece of data Agree on a common data model which can be used to describe events Define the key trusted providers in your country (utility companies, passport office, mobile phone companies, travel companies) who are willing to start exchanging data/ events with you; get them to write down what they do to assure data in a machine readable way (using the same data model) Stop: Trying to solve for the World – start small, and then scale focusing on real life use cases Continue: to have dialogue with organisations, governments and customers/ citizens to keep the conversation going so that potential solutions/ way forward are relevant to define key customer/ citizen/ employee pain points and look to solve for these In one sentence, why does diversity matter to you?

We are all unique and with this uniqueness, we each bring a different viewpoint to the same problem so be tolerant, embrace this difference and experience the powerful results that follow.

What book/film/piece of art would you recommend to your fellow members? Why?

I’d highly recommend ‘Man’s search for meaning’ by Viktor Frankl because it teaches us/demonstrates the power of the mind and resilience.

I absolutely love the quote in the book: “he who has a WHY to live can bear with almost any HOW……and what alone remains is the last of human freedoms is the ability to choose one’s attitude in a given set of circumstances.”

What advice would you give to the teenage ‘you’?

Don’t be afraid, speak your mind, be yourself and think positive thoughts. As a kid growing up in the UK, I suffered from both racial discrimination and bullying – which impacted hugely on my self -confidence. I hid in my shell and didn’t feel as valued, important or equal to my peers and therefore didn’t take the opportunities that presented themselves to me because of this.

Don’t worry about what people will think; ask the ‘stupid’ questions and never feel that you’re not good enough!

Where can we find you on social media / the Web?

LinkedIn: linkedin.com/in/shilpamaher

Twitter: @shilpamaher

Instagram: shilpamahercoaching

Facebook: shilpamahercoaching

Website: www.shilpamahercoaching.com

The post Member Interview – Shilpa Maher appeared first on Women in Identity.


Berkman Klein Center

International Human Rights Law Is Not Enough to Fix Content Moderation’s Legitimacy Crisis

Photo: Pixabay Should tech companies follow human rights law to govern online speech? This proposal has tremendous appeal. International human rights law can offer a set of rules designed in the public interest with the broad support of a global community. This certainly appears superior to the status quo wherein a handful of CEOs set rules for the speech of billions of social media users. Unsurpri
Photo: Pixabay

Should tech companies follow human rights law to govern online speech? This proposal has tremendous appeal. International human rights law can offer a set of rules designed in the public interest with the broad support of a global community. This certainly appears superior to the status quo wherein a handful of CEOs set rules for the speech of billions of social media users. Unsurprisingly, scholars (here and here) and civil society organizations (here and here) have expressed their support and the project has gained a lot of traction since David Kaye — then UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression — promoted it in 2018.

However, adopting international human rights law might not lead to more legitimate content moderation rules. First, international human rights law is not a set of universally accepted rules. The framework favors some speech standards over other reasonable alternatives. The choice for those standards should itself be subject to a legitimate rule-making process. Second, international human rights law is in many areas highly indeterminate. It offers guidance but no precise answers to many challenging questions. In those cases, human rights law might not constrain the power of tech firms but instead only create the appearance of legitimacy. In other words, the proposal could mean business as usual with the added ‘legitimacy-aura’ of human rights law.

International Human Rights Law Is Not Neutral

The International Covenant on Civil and Political Rights (the Covenant) offers the primary international guidance for free expression standards. The Covenant puts a priority on some normative options over others in areas in which reasonable disagreement between legal systems, experts, and communities exists. Two current controversies illustrate this point.

In July, the Stop Hate for Profit campaign brought together hundreds of companies. For one month, they withheld advertisement from Facebook demanding that the tech giant curb the spread of hate on the platform. Among other demands, the campaign requests the removal of groups focused on Holocaust denial. Although sensible and understandable, the request is in tension with Articles 19 and 20 of the Covenant. The Human Rights Committee, the body that authoritatively interprets this international treaty, has said, “Laws that penalize the expression of opinions about historical facts are incompatible with the obligations that the Covenant imposes.” Indeed, in 2019, Kaye explicitly used the bans on Holocaust denial as an example of a law that breaches states’ international obligation to protect freedom of expression.

In other areas, human rights law sides with the campaign’s demands. Stop Hate for Profit also asks that platforms apply their rules equally to politicians and other users. Twitter and Facebook, however, see things differently. During the COVID-19 pandemic, they decided not to delete President Trump’s posts violating their rules on glorification of violence, election integrity, and COVID misinformation. They reasoned that the public interest of citizens in learning what their representatives think in these cases outweighed the harmful effects of such speech. In other instances, Facebook did remove a video posted by Trump’s campaign for spreading misleading COVID-19 information, and both firms took down content that Brazil’s President Jair Bolsonaro had posted in violation of their rules.

In this contentious debate, international law comes down much closer to the side of the Stop Hate for Profit campaign. Kaye’s 2019 report (see para. 47) explains that although exceptions to protect political speech could be acceptable in rare cases, in principle politicians and the public ought to be subject to the same rules. According to the report, harmful speech can be even more dangerous when uttered by political leaders. Therefore, there are even stronger reasons to apply speech rules to these figures.

The main point is that reasonable disagreement exists about how to balance these different considerations when governing politicians’ speech. And the choices that international law (or the Human Rights Committee) makes in these debates are not obvious and not universally accepted. They should themselves be subject to the control of the people. Rather than shifting decision-making power from tech companies to the UN (although certainly a step forward), it is urgent to focus on building processes that can actually involve the public in the deliberation over speech rules.

Lending Legitimacy to Unconstrained Power

At the same time, international human rights law leaves many speech questions unanswered. I have written about the contradictions between regional human rights systems that the UN framework does not solve. A more fundamental open question is how to apply the legitimacy requirement of Article 19 of the International Covenant on Civil and Political Rights to content moderation.

According to Article 19, all restrictions to freedom of expression must have a legitimate end. Legitimate ends for governmental restrictions on speech are the protection of national security or of public order or of public health or morals. Evelyn Aswad asks the right questions: Which ends would be legitimate for content moderation rules set by private companies? Could tech companies claim a business interest as a legitimate purpose? And even if they were not entitled to rely on the most explicit commercial interests such as advertisers’ preferences, could these companies claim that a specific content moderation rule helps them shape the type of community they want to foster?

Most supporters of the proposal would acknowledge that it is necessary to let companies disallow content for the purpose of meeting the preferences and expectations of different users. This appears sensible. Otherwise, all the speech that international human rights law protects — including adult nudity, pornography, and many graphic depictions of violence — would likely have to be allowed on platforms such as Facebook. This would render platforms nearly useless to a large set of users that does not want to navigate through all forms of legal, but perhaps undesirable, speech. But the line between permissible and impermissible ends becomes blurry, and the Covenant, designed to be applied to states, definitely does not draw such a line.

As long as no line exists, international human rights law poses few constraints on what companies can do. For any rule a company might wish to set, it could articulate a public interest end that the rule advances. For instance, for nudity rules, tech firms could claim they are trying to avoid all possible non-consensual distribution of intimate images. For hate speech that does not incite violence, they could posit that they are creating a “safe” environment for communities that are disproportionately the target of such speech. And the list goes on. Susan Benesch has proposed helpful guidance to translate the requirements of Article 19 to content moderation. But unless broad consensus can be built around the meaning of terms such as “the protection of morals,” human rights law will lend its legitimate framework and vocabulary without meaningfully constraining private regulatory power.

International Human Rights Law as a Framework

Adopting human rights law as default content moderation rules can be a project of translation, meaning: take international law standards that already exist and translate them into implementable content moderation rules. For the reasons I discussed earlier, I have little faith in that project.

However, another proclaimed virtue of international human rights law is that it offers a common framework and vocabulary to guide the discussion between multiple actors on how to come up with a new language, a new rulebook specifically designed for online speech. Indeed, it may still be valuable to rely on the human rights framework not to answer all questions but to agree on what questions need to be asked (does the rule have a legitimate end? is the rule necessary to meet that end? are less intrusive measures available?). Tech companies (or anyone making the rules) can contribute to public reasoning and deliberation by being transparent about the lack of unequivocal answers. They should explain, instead, why they prefer certain rules and how they think about them through the lens of the standards set forth in Article 19 of the Covenant. That type of transparent reasoning could be the start of a dialogue with other actors in a shared language.

Such an approach resembles what Larry Lessig refers to as “latent ambiguities.” Lessig tried to imagine how judges would react to novel legal questions posed by the development of technology. In some cases, translation of already existing rules would be easy: for example, extending the protection of mail to electronic communications. In other cases, however, there is no unequivocal answer, and there is a need to decide anew how to regulate. For those situations, Lessig imagined that judges could promote democratic deliberation by identifying those “ambiguous” areas, proposing possible paths forward, and explaining how their own decisions would advance constitutional values.

There is one fundamental difference between Lessig’s work on judicial adjudication and content governance. In the case of judicial decision-making, legislatures could later contest the judges’ decisions. Lawmakers can debate and vote for a different rule. In the governance of online content, although civil society may well play a role in contesting the reasoning and choices tech firms offer, no institution has the authority equivalent to that of a legislature to move the dialogue forward. In that sense, the transparent reasoning of companies can be the beginning of a conversation, but it remains unclear who can “speak” next.

As Jonathan Zittrain argues, the current era of content moderation requires experimentation with processes and institutions that can reconstruct legitimacy and open opportunities for people’s participation in online governance. Looking at international human rights law, to the extent that it offers a common framework to enable conversations, might be a step in that direction. I have tried to begin exploring which positions that framework gives priority to and to emphasize the need for finding other actors that have the capacity to contest the public reasoning of tech companies. Only then will international law be able to foster an actual conversation rather than a monologue uttered by tech firms in the guise of human rights language.

International Human Rights Law Is Not Enough to Fix Content Moderation’s Legitimacy Crisis was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


Digital Identity NZ

Te Reo and Identity


Kia ora,

Anei mātou te hāpori o Digital Identity NZ e mihi ana ki a koutou i tēnei wiki hirahira, ko te wiki o Te Reo Māori. Arohatia te reo, kia kaha te reo Māori! Warmest greetings to you all during te wiki o Te Reo Māori. Toku ingoa ko, my name is Janelle Riki-Waaka, Pouherenga Waka, Relationships Manager and Kaihuawaere, Consultant Core Education and Digital Identity member.

Ko te kai a te rangatira, he kōrero – the substance of chiefs is to engage in kōrero. We, here at DINZ, have been engaging in lots of kōrero recently about how we can give mana to Te Tiriti o Waitangi and be tiriti-honouring in our intent and actions. We’ve been on a learning journey together and we are in the process of formalising our thinking into a Te Tiriti o Waitangi Action Plan that will guide our DINZ mahi, ensuring we strive for equity in Aotearoa. Our statement of intent is: “Digital Identity New Zealand is committed to being tiriti honouring by giving mana to Te Tiriti o Waitangi and upholding our responsibilities as effective treaty partners with Māori (Tāngata Tiriti).”

In thinking about how the articles of Te Tiriti o Waitangi relate to digital identity, here are just a few examples of our recent whakaaro:

Article 1 – Kāwanatanga (Honourable Governance): DINZ understands our position as tangata tiriti and we affirm Māori as tangata whenua.
Article 2 – Rangatiratanga (Māori Self-determination): We will seek to work alongside tangata whenua to support their aspirations and ensure their needs are addressed in the digital identity space.
Article 3 – Ōritetanga (Equity): DINZ will work alongside iwi/hapū to ensure their voices are represented in kōrero about digital identity and their aspirations are realised in terms of digital identity solutions and developments.
Article 4 – The Spoken Promise: Te Reo Māori will be celebrated in the digital identity ecosystem in Aotearoa and participating organisations will demonstrate and act on an understanding of their role as Tangata Tiriti.

Digital Identity New Zealand is on a journey and we are firmly in the waka with hoe (paddle) in hand! We understand the need for us to ensure Māori perspectives relating to identity are leading our hearts, minds and actions. Identity is a taonga (precious possession) and as such must be protected. The connection our tamariki (children) have with their language, culture and identity will shape our future in Aotearoa.

He tina ki runga, he tāmore ki raro. ‘Contentment above, firmly rooted below.’ With a good connection to your identity, culture and heritage, you will find contentment in life.

Janelle Riki-Waaka
DINZ Executive Council

Kia ora, thank you, Janelle. It is wonderful to have such a skilled and passionate kaiārahi (guide) in our DINZ Executive Council whānau, especially during te wiki o Te Reo Māori.

We’re pleased to announce our first two DINZ Discussion Forum events:

A legal analysis of AML Reliance (members only), 7 October – join us as we share a report commissioned by DINZ members on the challenges and opportunities in the sharing and re-use of identity information used for customer due diligence in AML. Register here.
Deep dive into our 2020 Trust and Identity research (members only), 30 September – we will share the full research report and explore its implications and the collaborative action we can take to address some of the issues highlighted. Register here.

And based on the success of our member spotlight sessions during Techweek, we are launching a quarterly member showcase series. The first of these will be a Spring Showcase on Thursday, 5th November. You can register your interest in presenting here.

For more information on joining DINZ head to the Get Involved section of our website. Next month we will be communicating with members on our Executive Council elections, which take place early December. We have four positions open for election/re-election.

Lastly, Happy International Identity Day! September 16 is International Identity Day (otherwise called “ID-Day”). It is an initiative led by ID4Africa to raise awareness about the important role identity plays in empowering individuals to exercise their rights and responsibilities fairly and equitably in a modern society. The date 16 September was chosen in commemoration of Sustainable Development Goal (SDG) 16.9, which calls for legal identity for all, including birth registration, by 2030.

Ngā Mihi,

Andrew Weaver
Executive Director

To receive our full newsletter including additional industry updates and information, subscribe now

The post Te Reo and Identity appeared first on Digital Identity New Zealand.

Tuesday, 15. September 2020

WomenInIdentity

Supporting International Identity Day on 16th September

Supporting the campaign to make every 16th September International Identity Day.

Why September 16th?

The choice of the date is in recognition of the UN’s Sustainable Development Goal (SDG) 16.9 which calls for a legal identity for all including birth registration by 2030.

In support of this, we asked some of the WiD team to tell us about themselves and what identity means to them…

Kay Chopard Cohen
Teresa Wu
Esther Hoeksema-Westra
Dia Banerji
Diane Joyce
Tamara Al-Salim

Join the conversation! Share your video messages with us!!

Email communications@womeninidentity.org.

The post Supporting International Identity Day on 16th September appeared first on Women in Identity.

Monday, 14. September 2020

Oasis Open

Invitation to comment on Open Document Format for Office Applications (OpenDocument) v1.3 – ends Sept. 29th


We are pleased to announce that Open Document Format for Office Applications (OpenDocument) v1.3 from the OpenDocument TC [1] is now available for public review and comment. This is the third public review for OpenDocument v1.3.

The OpenDocument Format is an open XML-based document file format for office applications, to be used for documents containing text, spreadsheets, charts, and graphical elements. OpenDocument Format v1.3 is an update to the international standard Version 1.2, which was approved by the International Organization for Standardization (ISO) as ISO/IEC 26300 (2015).

OpenDocument Format v1.3 includes improvements for document security, clarifies under-specifications and makes other timely improvements.

The OpenDocument Format specifies the characteristics of an open XML-based application-independent and platform-independent digital document file format, as well as the characteristics of software applications which read, write and process such documents. It is applicable to document authoring, editing, viewing, exchange and archiving, including text documents, spreadsheets, presentation graphics, drawings, charts and similar documents commonly used by personal productivity software applications.
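Because the format described above is a ZIP package of XML parts, an OpenDocument file can be inspected with standard tooling. The sketch below is illustrative only (Python standard library; the helper names and file name are not part of the specification) — it lists a package's parts, reads the declared MIME type, and dumps the character data of content.xml:

```python
# Minimal sketch: an ODF document is a ZIP package whose parts
# (content.xml, styles.xml, META-INF/manifest.xml, plus a "mimetype"
# entry) are plain XML. Helper names here are illustrative.
import zipfile
import xml.etree.ElementTree as ET

def list_odf_parts(path):
    """Return the member names of an ODF package and its declared MIME type."""
    with zipfile.ZipFile(path) as pkg:
        names = pkg.namelist()
        # The "mimetype" entry identifies the document type
        # (text, spreadsheet, presentation, ...).
        mimetype = (pkg.read("mimetype").decode("ascii")
                    if "mimetype" in names else None)
        return names, mimetype

def extract_text(path):
    """Concatenate the character data of content.xml (a rough plain-text dump)."""
    with zipfile.ZipFile(path) as pkg:
        root = ET.fromstring(pkg.read("content.xml"))
        return "".join(root.itertext())
```

A production reader should consult META-INF/manifest.xml for the package's part list (and any encryption data) rather than hard-coding part names as this sketch does.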

The documents and related files are available here:

Open Document Format for Office Applications (OpenDocument) Version 1.3
Committee Specification Draft 03
31 August 2020

Part 1: Introduction
Editable source (Authoritative):
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part1-introduction/OpenDocument-v1.3-csd03-part1-introduction.odt
HTML:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part1-introduction/OpenDocument-v1.3-csd03-part1-introduction.html
PDF:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part1-introduction/OpenDocument-v1.3-csd03-part1-introduction.pdf

Part 2: Packages
Editable source (Authoritative):
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part2-packages/OpenDocument-v1.3-csd03-part2-packages.odt
HTML:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part2-packages/OpenDocument-v1.3-csd03-part2-packages.html
PDF:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part2-packages/OpenDocument-v1.3-csd03-part2-packages.pdf

Part 3: OpenDocument Schema
Editable source (Authoritative):
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part3-schema/OpenDocument-v1.3-csd03-part3-schema.odt
HTML:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part3-schema/OpenDocument-v1.3-csd03-part3-schema.html
PDF:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part3-schema/OpenDocument-v1.3-csd03-part3-schema.pdf

Part 4: Recalculated Formula (OpenFormula) Format
Editable source (Authoritative):
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part4-formula/OpenDocument-v1.3-csd03-part4-formula.odt
HTML:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part4-formula/OpenDocument-v1.3-csd03-part4-formula.html
PDF:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part4-formula/OpenDocument-v1.3-csd03-part4-formula.pdf

XML/RNG schemas and OWL ontologies:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/schemas/

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/OpenDocument-v1.3-csd03.zip

How to Provide Feedback

OASIS and the OpenDocument TC value your feedback. We solicit feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

This public review starts 15 September 2020 at 00:00 UTC and ends 29 September 2020 at 11:59 UTC. This specification was previously submitted for public review [2]. This 15-day review is limited in scope to changes made from the previous review. Changes are highlighted in change-marked files included in the package [3].

Comments on the work may be submitted to the TC by following the instructions located at: https://www.oasis-open.org/committees/comments/form.php?wg_abbrev=office Feedback submitted by TC non-members for this work and for other work of this TC is publicly archived and can be viewed at: https://lists.oasis-open.org/archives/office-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members.

In connection with the public review of these works, we call your attention to the OASIS IPR Policy [4] applicable especially [5] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about this specification and the OpenDocument TC may be found on the TC’s public home page [1]. Additional information related to this public review can be found in the public review metadata document [2].

========== Additional references:

[1] OASIS Open Document Format for Office Applications (OpenDocument) TC
https://www.oasis-open.org/committees/office/

[2] Public review metadata document:
https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/OpenDocument-v1.3-csd03-public-review-metadata.html

[3] Change-marked versions (PDF):
Part 1: Introduction https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part1-introduction/OpenDocument-v1.3-csd03-part1-introduction-DIFF.pdf
Part 2: Packages https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part2-packages/OpenDocument-v1.3-csd03-part2-packages-DIFF.pdf
Part 3: OpenDocument Schema https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part3-schema/OpenDocument-v1.3-csd03-part3-schema-DIFF.pdf
Part 4: Recalculated Formula (OpenFormula) Format https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part4-formula/OpenDocument-v1.3-csd03-part4-formula-DIFF.pdf

[4] https://www.oasis-open.org/policies-guidelines/ipr

[5] https://www.oasis-open.org/committees/office/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#RF-on-Limited-Mode
RF on Limited Terms Mode

The post Invitation to comment on Open Document Format for Office Applications (OpenDocument) v1.3 – ends Sept. 29th appeared first on OASIS Open.


MyData

Guest Blog: Interesting outcome of the MyData Accelerator project


– What a datafied future of work & skills might be like

As of today, ten months have passed since the launch of the MyData Accelerator in November 2019. We coordinated the Accelerator at Vake, the Finnish State Development Company, supported by the Technology Industries of Finland and Sitra. It has been a learning experience...

Read More

The post Guest Blog: Interesting outcome of the MyData Accelerator project appeared first on MyData.org.

Friday, 11. September 2020

FIDO Alliance

CISA Cites FIDO Authentication to Protect Political Campaigns


Andrew Shikiar, FIDO Alliance Executive Director & CMO 

The US Cybersecurity and Infrastructure Security Agency (CISA) issued an advisory Thursday recommending cyber attack remedies for election-related activities, including the use of FIDO authentication to thwart phishing attempts and account takeover.

The advisory, entitled “ACTIONS TO COUNTER EMAIL-BASED ATTACKS ON ELECTION RELATED ENTITIES,” noted that 78 percent of cyber-espionage incidents are enabled by phishing. CISA makes specific recommendations on protecting against cyber attacks to aid organizations involved in election-related activities.

Among other recommendations, FIDO Authentication was highlighted to thwart phishing attempts and protect against account takeover for cloud email and other high-value services. Specifically, CISA cites FIDO2 Security Keys as a tool that campaigns and organizations can, and should, use to protect themselves. The advisory also recommends that, when available, campaigns and organizations should enroll users in advanced protection services such as Google Advanced Protection, which leverages FIDO Security Keys as a best practice over other 2FA methodologies to protect workforces from account takeovers related to malicious attacks.

FIDO security keys offer protection against phishing attacks by working as a second, physical factor of authentication and only authenticating when a user is on the correct website. Thus, even if a user is tricked into supplying their password to a phishing website, the physical security key will still block attackers from accessing their account. 
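The origin check described above is the heart of that phishing resistance. The toy sketch below is not the FIDO2/WebAuthn protocol itself, only a simplified model of the idea: the authenticator signs the origin the client is actually connected to along with the server's challenge, and the relying party rejects anything signed for a different origin. All names are hypothetical, and an HMAC stands in for the asymmetric signature a real security key would produce, purely to keep the sketch dependency-free.

```python
# Simplified model of FIDO/WebAuthn origin binding (not the real protocol):
# the signed client data includes the origin, so assertions produced on a
# look-alike phishing domain fail verification at the genuine service.
import hashlib
import hmac
import json

EXPECTED_ORIGIN = "https://accounts.example.com"  # the relying party's real origin

def authenticator_sign(key, challenge, origin):
    """Stand-in for a security key: sign the server's challenge together
    with the origin the client is connected to. (A real authenticator uses
    an asymmetric key pair; HMAC keeps this sketch dependency-free.)"""
    client_data = json.dumps({"challenge": challenge, "origin": origin},
                             sort_keys=True)
    sig = hmac.new(key, client_data.encode(), hashlib.sha256).hexdigest()
    return client_data, sig

def verify_assertion(key, challenge, client_data, sig):
    """Server side: the signature must verify AND the signed origin must
    match the relying party's own origin."""
    expected = hmac.new(key, client_data.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    data = json.loads(client_data)
    return data["challenge"] == challenge and data["origin"] == EXPECTED_ORIGIN
```

Because the look-alike domain ends up inside the signed client data, an assertion captured on a phishing site cannot be replayed against the real service — which is the property the advisory relies on.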

Phishing continues to be a problem and remains one of the most popular means by which cybercriminals obtain data. Embracing FIDO technology is smart politics, and smart policy for those who understand the gravity of the cyber threat. As the election draws near, we’re increasingly seeing foreign agents attempting to infiltrate, influence and disrupt our elections.

As the CISA advisory implies, phishing and other cyber attacks are a critical issue with widespread and damaging implications to U.S. national security. The CISA advisory highlights the importance of locking down email systems, which have become a preferred vector for malicious activity. The CISA recommendations are intended as a preferred method for protecting the 2020 and future political campaigns. 

The post CISA Cites FIDO Authentication to Protect Political Campaigns appeared first on FIDO Alliance.


FIDO Alliance Submits Comments to NIST on Digital Identity Guidelines, Asks for Stronger Differentiation for Phishing-resistant Authentication Tools


In June, NIST put out a call for comments on the next iteration of its Digital Identity Guidelines, SP 800-63-4. We welcomed the opportunity to comment; read our full comments in the Government & Public Policy area of the website.

Up front, we note that SP 800-63-3 represented a significant improvement in NIST’s Digital Identity Guidelines, taking a more modern approach to identity proofing, authentication, and federation. That said, technology and threat are both never static, and we are encouraged to see that NIST is embarking on another revision of the document.

In our comments, we make three recommendations for SP 800-63-4:

1. NIST should adjust its approach to AALs to help implementers clearly differentiate between tools that are phishing resistant and those that are not. 

Today, a variety of authenticators based on shared secrets – including Look-Up Secrets, Out-of-Band Devices (i.e., Push), and OTP apps and tokens – are given the same weight in AAL2 as authenticators based on asymmetric public key cryptography, such as FIDO. Given how attackers have caught up with the former, it no longer makes sense to combine these two types of authenticators under a single designation. Doing so misleads implementers into thinking these two categories of authenticators are equivalent in strength or resiliency. In our comments, we provide NIST with several ideas for how it can adjust the AALs to provide more differentiation between tools that are phishing resistant and those that are not.

2. NIST should engage with FIDO Alliance to explore other alternatives to enable FIDO authenticators to meet AAL3 requirements

When SP 800-63-3 was first published, it created a path for some FIPS 140 validated FIDO authenticators to meet AAL3 – if those authenticators were deployed in concert with Token Binding to deliver Verifier Impersonation Resistance. Since that time, most major browser vendors have withdrawn support for token binding. Per discussions with NIST, we understand that this means that FIDO authenticators can no longer meet AAL3 without implementing other approaches to mitigate the loss of token binding. As NIST embarks on the next revision of SP 800-63, we urge NIST to engage with FIDO Alliance to explore other alternatives to enable FIDO authenticators to meet AAL3 requirements.

3. Provide more direct references to FIDO

SP 800-63B describes Requirements by Authenticator Type but is inconsistent in how it points to standards that support that type. This has created some confusion in the marketplace when implementers consult SP 800-63B and see reference to standards like OTP and PKI but do not see any specific reference to FIDO. In our comments, we offer three suggestions for how the guidance can directly reference FIDO so that implementers have a clearer understanding of where FIDO fits in and supports the requirements. 

We greatly appreciate NIST’s consideration of our comments and look forward to ongoing dialogue and collaboration as they seek to update the Digital Identity Guidance.

The post FIDO Alliance Submits Comments to NIST on Digital Identity Guidelines, Asks for Stronger Differentiation for Phishing-resistant Authentication Tools appeared first on FIDO Alliance.

Thursday, 10. September 2020

Credentials Community Group

What’s in a Wallet? The recap

On July 7 and 14, 2020, The CCG hosted two sessions where we asked people from inside and outside the community to answer the question, “What’s in a Wallet?” You can review the meeting notes and listen to the audio … Continue reading →

On July 7 and 14, 2020, the CCG hosted two sessions where we asked people from inside and outside the community to answer the question, “What’s in a Wallet?” You can review the meeting notes and listen to the audio below:

Tuesday, July 7, 2020
Tuesday, July 14, 2020

“What’s in a Wallet?” as answered by…

Manu Sporny, Digital Bazaar: Wallet Architecture Diagram
Christopher Allen: Decentralized Identity Network Components
Dan Buchner, Microsoft (check the minutes)
Kyle Kemper, SwissKey
Kaliya Identity Woman Young: CCG Glossary Group with DIF Presentation
Orie Steele, Transmute: Universal Wallet
Daniel Hardman: What goes in a Wallet?
Darrell O’Donnell: The State of Digital Wallets
Charles Cunningham, Jolocom: What’s in a Wallet?
Katryna Dow, MeeCo (check the minutes)
Nathan Tonani, Learning Economy.io: What is a Wallet?

As you can see, there are a lot of interpretations for what is in a wallet. We hope you find these presentations and perspectives mind-opening and keep in mind the desires of your end users when you develop digital identity wallets.

Materials for this blog post were taken from this GitHub thread (shout out to Juan Caballero) and Heather Vescent’s Twitter week 1 & week 2 threads.

Tuesday, 08. September 2020

Me2B Alliance

Re: Selling personal data -- an experiment

Hi James, I think that particular one is around complexity. They know that they cannot serve the needs of those looking for complex products very well with only a single white box to fill in. So they make more money by doing that hand-off to someone who is more geared up to do so.
Google have dabbled in the space over the years with various ‘offers’ services or comparison shopping but they are not as yet that big in those areas.
Iain


Re: Selling personal data -- an experiment

@Iain and others – why do you think that the big players (Google, Facebook etc.) are not (yet) doing too much in those spaces and leave quite some money on the table for intermediaries? Reputational concerns, regulatory concerns, complexity…?

From: <main@Me2BAlliance.groups.io> on behalf of "Iain Henderson via groups.io" <iain.henderson@...>
Reply to: "main@Me2BAlliance.groups.io" <main@Me2BAlliance.groups.io>
Date: Saturday, 25 July 2020 at 20:45
To: "main@Me2BAlliance.groups.io" <main@Me2BAlliance.groups.io>
Subject: Re: [Me2BAlliance] Selling personal data -- an experiment

Agreed James, although I suspect the steady state for ‘considered purchases’ won’t need/ benefit from intermediaries. Individuals (demand) will have standardised API’s as will the manufacturers, distributors and retailers (supply).

If you look at considered purchases in more detail, Google, Facebook and Amazon don’t actually try to do too much in those spaces other than hand off to sector level experts after the search phase. For example, if you do a google search for ’new car’ you get intermediaries at the top of the list as below. Behind the scenes the manufacturers are happy to let the intermediaries separate the wheat from the chaff (unless the search mentions them specifically).

Iain



Monday, 07. September 2020

Me2B Alliance

No Alliance call today

Hi Friends,

A friendly reminder that this month is a webinar month, which will be next Monday in light of the holiday in the US.   

So no monthly meeting or webinar today.  Enjoy your week!

Lisa


WomenInIdentity

Launching our Singapore chapter

Local Ambassador, Helen Chua, gives a round-up of the latest WiD launch on August 28th 2020

There has been much discussion of the role of digital identity since the pandemic. Most countries have experienced lockdowns and are now looking to digitise many traditional services.

In Singapore, the pandemic clearly highlighted the digital divide. While much has already been done to fill the gap, there is still more to do. This was a key point made by Minister Indranee, the keynote speaker at our recent webinar.

She explained that the Singapore government’s enduring vision is to have a fair and just society where everyone has equal opportunity to fulfill their dreams. But, like many countries, social divisions have widened during COVID-19.

The launch of the WiD Singapore chapter is a timely one as it emphasizes the importance of digitalisation and digital identity. The Minister believes that, in order to encourage women into careers in these areas, we have to start educating young girls about the benefits and opportunities of STEM subjects.

Ms Rama Sridhar, EVP at Mastercard, highlighted that the establishment of WiD will further encourage women to participate as subject matter experts around various aspects of technology. It is one of WiD’s core objectives to encourage and support women into leadership roles.

The establishment of WiD will further encourage women to participate in speaking as subject matter experts around various domains of technology.

Rama Sridhar, EVP at Mastercard

Personally, I am very excited about the role that WiD can play in Singapore, promoting digital identity and digital inclusion. Digital identity empowers every business and individual to reach their full potential.  This is just the beginning of our journey as the Singapore chapter and we really want to reach out to more Singaporeans to elevate, inspire and support women in their STEM careers.

Join us now at https://womeninidentity.org/become-a-member/ #ForAllByAll

The post Launching our Singapore chapter appeared first on Women in Identity.

Sunday, 06. September 2020

Me2B Alliance

Re: Local First

(Maybe safe to say that all 7 principles are a matter of opinion.)

The preference of keeping stuff local—I gotta think that’s allowed.  Maybe safe to say the principles need more contextual nuance.

And yeah, “ownership” is problematic.  BTW, I’ve been reading Sandra Petronio’s Communication Privacy Management  Theory and she makes clear that as soon as you disclose something, it becomes co-owned by both you and the confidant—as are the boundary management rules.  I’m finding CPM to be incredibly rich and relevant to thinking about information sharing and management (h/t to John Wunderlich for the suggestion months ago).

Lisa

Wednesday, 02. September 2020

Me2B Alliance

Re: Ethisphere

From the FAQ: The self-reported scores are combined with the qualitative assessment of an applicant company’s supplemental documentation and independent research to produce a final EQ score.

It looks like it’s primarily qualitative and based on a risk assessment process that is similar to one we used to do when I was at Price Waterhouse years ago.
Noreen


Decentralized Identity Foundation

Presentation Exchange: A Leap Toward Interoperability for the Decentralized Identity Ecosystem

Photo by 🇨🇭 Claudio Schwarz | @purzlbaum on Unsplash

The decentralized identity ecosystem is on its inevitable path toward widespread adoption and meaningful use. There are a growing number of interpretations and implementations of various decentralized standards. These standards are at different stages in their development lifecycles, harmonizing gradually. As a result, there are lots of data, from lots of sources, that, without a little planning, won’t play nicely with other data. It is exciting to see the enthusiasm around the standards we know and love and to watch the proliferation of verifiable and portable data. But as long as implementation boundaries remain a hindrance, data interoperability will continue to be difficult, and we could lose much of the data’s utility and portability. Luckily, in the midst of this excitement, a great opportunity presents itself: to forge bridges between data islands and promote the cross-pollination of methodologies for generating and exchanging verifiable data. Enter: the Presentation Exchange specification.

The Claims & Credentials working group has been hard at work on the Presentation Exchange specification for months. The effort, started in January 2020, set out to answer the question, “How should we request and exchange credentials across implementations and systems?” Though a few solutions have been proposed before, none have been as inclusive in data-model and transport agnosticism, or able to attract broad adoption, as Presentation Exchange. The working group is working toward the first release; at the time of writing, we have made over 100 commits and closed 37 GitHub issues.

The specification prides itself on its adaptability, aiming to be “both credential format and transport envelope agnostic, meaning an implementer can use JWTs, VCs, JWT-VCs, or any other credential format, and convey them via OIDC, DIDComm, CHAPI, or any other transport envelope.” At the same time, Presentation Exchange has garnered the interest and support of many key-players throughout the ecosystem, a step crucial to fulfilling the ambitions of the specification.

You may be wondering, what’s up with the title? Isn’t interoperability much broader than sharing credentials? Yes! Absolutely. Sharing credentials is only part of interoperability. Universal wallets, secure data storage, transports, identifiers, DIDComm, and so many more pieces are paramount to thriving in an interoperable, decentralized ecosystem. At the heart of many of these interactions, we find ourselves exchanging verifiable data. By driving a standardized method for requesting and returning verifiable data, which adapts to many data models and functional use cases, we cross an important threshold in making interoperability possible. Without working software to back up the specification, it is merely an interesting idea. We need to go further — to make it an interesting reality.
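
To make the idea concrete, here is a rough, hypothetical sketch (in Python, purely for illustration) of the kind of request object the specification describes. The field names used here (`input_descriptors`, `constraints`, `fields`, JSONPath `path` entries) are assumptions drawn from working drafts of Presentation Exchange and may not match the final specification:

```python
import json

# Illustrative sketch only: a verifier publishes a "presentation definition"
# describing the verifiable data it needs, independent of credential format
# (JWT, VC, JWT-VC, ...) or transport (OIDC, DIDComm, CHAPI, ...).
presentation_definition = {
    "id": "employment-check",  # hypothetical identifier
    "input_descriptors": [
        {
            "id": "proof_of_employment",
            "constraints": {
                "fields": [
                    # JSONPath into whatever credential the holder presents
                    {"path": ["$.credentialSubject.employer"]}
                ]
            }
        }
    ],
}

def required_paths(definition):
    """Collect every JSONPath a holder's submission must satisfy."""
    paths = []
    for descriptor in definition["input_descriptors"]:
        for field in descriptor.get("constraints", {}).get("fields", []):
            paths.extend(field["path"])
    return paths

print(json.dumps(required_paths(presentation_definition)))
```

The point of the design is visible even in this toy: the verifier's requirements are expressed against the shape of the data, not against any one credential format or transport, which is what lets implementations on different stacks satisfy the same definition.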

Recognizing how important the Presentation Exchange spec and reference implementations could be for accelerating real interoperability and healthy competition, we, on the Workday Credentials team, are involved in the specification’s development and are investing significantly in one of its first demonstrable implementations. Workday has been a very active member of DIF since 2019. Our interests lie across the DIF stack, with a focus on standards compliance, affording us opportunities to interoperate with community members and standards adopters alike. Towards that end, we have open-sourced much of our code and specifications relating to decentralized identity. For Presentation Exchange, we have recently pushed a Golang representation of the object model that can be useful in building and validating presentation definitions, submissions, and verifiable presentations (as defined by the W3C standards). We are building out more Presentation Exchange logic that we plan to open source in the coming months. We intend to adopt Presentation Exchange at Workday as we gain confidence in the specification and its other implementations. We are considering different ways Presentation Exchange can be useful in interoperating with verifiable data that does not originate on our platform.

We are looking forward to seeing others contribute reference implementations in other languages and we encourage you to consider contributing to Presentation Exchange and DIF! For more information, please reach out to DIF, or jump in on the public GitHub. To join DIF and contribute directly to the Presentation Exchange specification or implementations, visit here. And for an overview of the design process and goals of the project in video form, see this recording from DIF Virtual Face to Face Meeting in June 2020 (P.E. segment starts at 3.00):

Presentation Exchange: A Leap Toward Interoperability for the Decentralized Identity Ecosystem was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 01. September 2020

Me2B Alliance

Re: Ethisphere

Agreed.  That’s why I think it will be interesting to see how their companies do on our certification—i.e. several will fail.

Lisa


Credentials Community Group

September 2020 Schedule

Welcome to the September Schedule for the Credentials Community Group. We have a great line-up of speakers this month. Got something you are interested in presenting on identity, credentials, security or diversity identity/credential adjacent? Get in touch.

September 8, 2020: Dakota Gruener will speak on ID2020.
September 15, 2020: Anil John will speak on Government Perspectives on DIDs and VCs, lessons learned from funding SSI development.
September 22, 2020: Andi Hindle will give us a report out on the 6-week IDENTIVERSE.
September 29, 2020: Dave Birch will share his latest thoughts on identity, data privacy, and maybe a splash of IoT identity.

All meetings are on Tuesday, at 9am PT, Noon ET, 5pm BST, 6pm CET. Anyone can join. We hope you will join us.


r@w blog

#DigitalDesires

Silpa Mukherjee, Ankita Deb & Rahul Kumar Session

We propose to design the panel as a workshop with three paper presentations followed by an open discussion with the house exploring the key question of media objects’ (in the form of film/film music/memes/gifs/trolls) changing relations with law; copyright and piracy having attained newer connotations in the age of media convergence. While we deal with the materiality of cinema in the new media moment, the session will open out debates on the mutability of media objects in a networked digital terrain ushered in by fast growing and cost-effective internet culture in urban India.

In terms of methodology the panel deploys media archaeology to trace the mutations that film culture has undergone in the digital age. The coexistence of the obsolete media copyright with its meme and its digitally re-mastered copy on torrent informs the research that the three papers involve. A certain engagement with the logic of informed/fan-cinephilic digital labour that unwittingly maintains and updates the algorithmic database of Web 2.0 services will run through the presentations. Along with archival research and interviews with professionals involved with online media companies and “users” who are now the “pirate/prosumer-cinephiles” of media objects, we will carry out extensive digital ethnography to map the chimera of digital territory that user traffic based internet culture in India helped produce.

The digital is a space of intervention: a space for the users to intervene and play with the material online. It is a constant form of participation underscoring a potential for democratic authorship. The definitive notion of authorship voices the overarching body of the state through its legal status. Thus copyright as a legal entity produces a discourse of power through this form of authorship. The contemporary medium or rather the multi-media constellation driven by internet culture in India produces an alternative discourse on authorship, complicating the notion of copyright and piracy at the same time. This charged terrain of (il)legality is also due to the nature of piracy in the digital domain, which does not exist in isolation but have now created bodies or spheres where it has been appropriated as a sub-cultural practice. The figure of the “pirate”/ the “troll”/ the “fan” and the “cinephile” now merges with the technologically enabled body of the user of new media who negotiates with the medium in multiple ways (and morphs it) and thereby touches all kinds of spaces within and outside the webspace. It has changed the physical scope of cinephilia as addressed in the paper “A Laptop and a Pen-drive: Cinephiles of Mukherjee Nagar,” where the culture of networked sharing evolves from and further complicates physical stations. It has permeated into the body of film music in the paper “Licensed, Remixed and Pirated: Item numbers and the web”, which interrogates the layers of user-based morphs that the text of a dance number in Bollywood undergoes in the culture of web based remixing and hacking. 
It changes the way protected materials like films circulate in the space designated as YouTube, marked by its ability to reproduce copyright materials without violating the law as the third paper titled “Online Streaming in the Era of Digital Cinephilia” points out; the logic of the obsolete license of old Hindi films which gains a new viral life on YouTube with its official upload vying with the multiple hacker-user uploads.

Thus the panel intends to explore the dizzying overlaps that produce this internet-induced distinct zone of ambiguity that neither the law nor the state nor the author can claim ownership over. The very embodiment of the material in the digital is in transition, i.e. in a state of being morphed by the blurring of the identities of the multiple bodies at work at each moment. Through the three papers we intend to chart this transitional aesthetic, sometimes contained and sometimes flowing out of the body of the media text onto the physical, technological and extra-textual objects as well. The panel seeks to position this new world of media objects that overlap and form an uncontainable entity, seeking newer forms of negotiations with the older existing order. We seek to explore then what happens to the very essence of author(ity)ship when the digital enters its domain.

Plan

A Laptop and a Pen-drive: Cinephiles of Mukherjee Nagar

With the changes technology has brought to contemporary life, films and how they are experienced have undergone major changes for cinephiles, those for whom movies are a way of life. The classic cinephile, as the term was adopted in the 1960s, has undergone a major change in the era of internet piracy. I will look at the way pirated films via torrent downloads are consumed by students in certain pockets of New Delhi, especially around the Mukherjee Nagar area. These students, who come from upwardly mobile Indian middle class families, are engaged in preparation for competitive exams to land a lucrative government job. Circumstances dictate that these students own a laptop to watch films but not a high speed internet connection. To fuel their cinephilic urge, they are dependent upon soft-copy vendors of pirated films. These vendors are like a video library, the repository here being a laptop and a storage drive. These professional film pirates depend upon p2p file sharing, commonly referred to as “torrent.” DVDs and Blu-rays released by official sources are ripped at a larger size by certain uploaders, then downloaded by others who rip them to an even smaller size, small enough to be downloaded by pirates with slower broadband, until they reach places like Mukherjee Nagar. Using this particular case study, where the world of online film piracy merges with a third-world piracy domain, I plan to interrogate the logistics of a new kind of cinephilia and try to frame this particular form of informal circuit of media production and consumption into a coherent perspective.

Relevant websites: https://kat.cr, https://yts.la/, https://torrentfreak.com.

Relevant software: Handbrake, uTorrent / Deluge / Vuze.

Relevant reading: Treske, Andreas. The Inner Life of Video Spheres: Theory for the YouTube Generation. Institute of Network Cultures, Amsterdam, 2013

Licensed, Remixed and Pirated: Item Numbers and the Web

The coming of new digital technologies has rendered the relationship of media objects’ with law extremely malleable and volatile. It urges us to rethink certain categories we have been working with, viz. piracy and copyright. The specific focus of the paper will be on item numbers’ relationship with changing technology and the law. The proprioceptive body being the central node of enquiry here: the law that affects the body that moves on screen and the body that is moved by the screen is made flexible in the digital age with Web 2.0’s unique design that spawns hackability and remixability. Through the registers of music licensing to YouTube, circulation of content offline as MP3 downloads in cheap mass storage devices, user generated morphed content related to item numbers (in the form of memes, GIFs, trolls, posters, tumblr blogs and listicles) spawned by amateur digital culture and remixing videos of film content the paper traces the gray zone between web based music piracy and its copyright rules. It will interrogate the moment when the entertainment industry has recognized the clear shift of its spectatorship from the older media to the more digital platforms and appropriates the contingency brought in by the algorithmic anxiety of Web 2.0 and its unique relationship with law and hence censorship regulations to innovate newer means of mass circulation and bypassing censorship.

Relevant content: https://www.youtube.com/watch?v=i2O2dBonBok.

Relevant user-traffic-oriented platforms: http://www.memegenerator.com, http://www.trolldekho.com, http://www.imgur.com, https://www.tumblr.com/.

Relevant curated online media platforms: ScoopWhoop, Buzzfeed India, blog.erosnow.com.

Online Streaming in the era of Digital Cinephilia

Digital piracy has allowed for a certain democratization of film distribution and consumption through a parallel economy of piracy. The lack of control over these channels of distribution produces a blatant threat to the copyright and intellectual property rights that are quintessential to the mainstream culture of commercial film distribution. This paper will focus on the intersection of these two dichotomous cultures through the experience of watching old films via online streaming. Old films resurface hosted by big corporations like Shemaroo, Venus and Ultra, who began as film-rights and video-rights owners and now host their old video content in a user-generated space called YouTube. The video content takes a very specific form here. It is an obsolete entity, defined by an ambiguity with copyright that allows it to make a legal transgression in order to circulate.

The circulation of the feature films in a web space that is primarily known for its clip culture also provides an interesting paradigm for the copyright material. The big corporate copyright floats in a culture of pirated experiences where the legal domain becomes a dizzying site of contradictions. Through this paper I will draw parallels between the history of these companies and their work in the field of film circulation and to the creation of a new form of cinephilia and its complicated relationship to the law. I will use a variety of archival sources, legal documents and discourses on online streaming to contextualize my argument.

Relevant websites: https://www.youtube.com/user/ShemarooEnt, https://www.youtube.com/user/VenusMovies, https://www.youtube.com/user/UltraMovieParlour

Readings

None.

Audio Recording of the Session

IRC 2016: Day 1 #Digital Desires : Researchers at Work (RAW) : Free Download, Borrow, and Streaming : Internet Archive

Session Team

Silpa Mukherjee is a Delhi based research scholar. She is currently enrolled in an MPhil programme in Cinema Studies at the School of Arts and Aesthetics, Jawaharlal Nehru University. She is a recipient of the social media research fellowship awarded by Sarai, CSDS. Her research interests include media archaeology with a focus on the body’s warped interconnections with changing media technology, studies of the medium, its resolution, aesthetics and erotics and a keen engagement with the practices of the Bombay Hindi film industry.

Ankita Deb is an M.Phil candidate at the Dept of Cinema Studies, School of Arts and Aesthetics, Jawaharlal Nehru University, India. Her dissertation is on 1970s and the Romantic Couple in Bombay Cinema. Her other research interests are in melodrama, romance, media archaeology, Iranian cinema, early cinema and Bombay Cinema.

Rahul Kumar is an M.Phil. candidate at the Department of Cinema Studies, School of Arts and Aesthetics, JNU. His dissertation deals with film journalism during the 1970s in Bombay cinema. A post-graduate from CHS JNU, he’s an active media pirate. His other research interests include film piracy, classical Hollywood cinema, cinephilia, film history and film genre.

Note: This session was part of the first Internet Researchers’ Conference 2016 (IRC16) , organised in collaboration with the Centre for Political Studies (CPS), at the Jawaharlal Nehru University, Delhi, on February 26–28, 2016. The event was supported by the CSCS Digital Innovation Fund (CDIF).

#DigitalDesires was originally published in r@w blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 30. August 2020

WomenInIdentity

Interview with Andrew Weaver, Digital Identity NZ

Esme Wardhaugh sat down with Andrew to talk about identity, and the digital community.

What is it about your job that gets you out of bed in the morning? Andrew Weaver

I wear a number of hats – from Digital Identity Aotearoa/New Zealand, to independent consulting, helping equip organisations with payments technology through to supporting mahi aroha (literally ‘love work’) with charities and social enterprises. I love supporting people to reach their full potential, especially if it involves challenging the status quo.

How did you get to where you are today?

A common thread of applying knowledge of technology and systems (including human systems) to improve and add value. I’m most definitely not a fan of shiny tech for shiny tech’s sake!

What is the most important lesson you have learned along the way?

Knowledge and wisdom is everywhere around us – in nature, in history, in different perspectives and worldviews. My most effective and rewarding role is in exploring and drawing that wisdom out, rather than assuming my own narrow perspective is shared by everyone.

What’s your message to CEOs in the identity space? What do you suggest they start, stop or continue doing – and why?

Identity is Taonga, a Māori word meaning something that is treasured and cherished. When we change our view of identity information and recognise it as something very personal and very precious we will start to treat it with a lot more dignity, care and respect. That means stopping treating identity as a means to a monetised marketing end and stopping the wholesale harvesting and sale of personal information.

In one sentence, why does diversity matter to you?

I have to quote your own motto (which I do all the time, by the way) – Identity systems built for everyone are built by everyone. #ForAllByAll

What book/film/piece of art would you recommend to your fellow members?

Technically Wrong is a great book on inclusive design.

What advice would you give to the teenage ‘you’?

Explore More

Where can we find you on social media / the Web?

On LinkedIn and at digitalidentity.nz.

The post Interview with Andrew Weaver, Digital Identity NZ appeared first on Women in Identity.

Friday, 28. August 2020

WomenInIdentity

WiD Diversity & Inclusion Policy

Guiding Principles

At Women in Identity, we strive for a diverse, inclusive, and equitable workplace where all volunteers and members feel valued and respected, regardless of gender, race, ethnicity, national origin, age, sexual orientation or identity, education or (dis)ability, experiences and heritages, and will ensure that all voices are valued and heard equally.

We aim to create a model for diversity and inclusion that the identity industry can adopt.  

To provide informed, authentic leadership for cultural equality, Women in Identity strives to: 

See diversity, inclusion and equality as critical to the well-being of our staff and the identity communities we serve.
Acknowledge and dismantle any inequalities within our policies, systems, programs and services and continually update and report our progress.
Explore potential underlying, unquestioned assumptions that interfere with inclusiveness.
Advocate for and support board-level thinking about how systemic inequalities impact our organisation, and how best to address that in a way that is consistent with our mission.
Help to challenge assumptions about what it takes to be a strong leader and champion diversity of leadership at all levels of our organisation.
Practise and encourage transparent communication in all interactions.
Lead with respect and tolerance. We expect all members to embrace this notion and to express it in organisation interactions and through everyday practices.

Women in Identity commits to promoting diversity and inclusion in our workplaces: 

Pursue cultural competency throughout our organisation by creating substantive learning opportunities and formal, transparent policies.
Generate and aggregate quantitative and qualitative research related to equity to make incremental, measurable progress toward the visibility of our diversity, inclusion, and equity efforts. Once the content is curated it will be added to our website so others can access it.
Improve our cultural leadership pipeline by creating and supporting programs and policies that foster leadership that reflects the diversity of communities that the identity industry serves.
Pool resources and expand offerings for underrepresented constituents by connecting with other identity organisations committed to diversity and inclusion efforts.
Develop and present sessions on diversity, inclusion, and equality to provide information and resources internally, and to members, the community, and the identity industry.
Develop a system for being more intentional and conscious of bias during the hiring, promoting, or evaluating process.
Include a salary range with all public job descriptions.
Advocate for public and private-sector policy that promotes diversity, inclusion, and equity.
Challenge systems and policies that create inequity, oppression and disparity.

Our 2020 Goals

Opening a US entity
Restructuring Board of Directors
Renewed focus on supporting non-western markets
Ambassador in India
Seeking out multiple ambassadors for Africa
External Diversity Review
Kick off our inaugural research project, which focuses on the impact of diversity and exclusion in the identity industry. The outputs of this project will include a code of conduct and implementation framework that will give identity organisations a practical and pragmatic guide to adopting a more diverse and inclusive approach to product development.

The post WiD Diversity & Inclusion Policy appeared first on Women in Identity.

Thursday, 20. August 2020

ID2020

New Report Highlights Public Concerns About Data Privacy

Members of the ID2020 Alliance are united in the belief that identity is a fundamental and universal human right and that we all deserve better ways to prove who we are — both in the physical world and online.

As a community of technologists, advocates, implementers, and funders, we also believe that ethically implemented, privacy-protecting, user-managed, and portable digital ID solutions offer a better alternative to the current “data as a commodity” paradigm.

A new report from KPMG entitled, The New Imperative for Corporate Data Responsibility, suggests that the American public increasingly agrees with this perspective. Based on a survey of 1,000 respondents, the report concludes that consumers expect corporations to “take significant steps to better protect, manage, and ethically use their data.”

“The findings are unmistakable,” says Orson Lucas, Principal, KPMG Cyber Security Services. “Data privacy and protection are clear priorities for consumers. Close attention to customer data handling, management, and protection practices are key, foundational elements of establishing and maintaining digital trust.”

The report outlines valuable findings regarding how the public feels about the security of their data, the issues that concern them, and who they believe bears responsibility for creating a more secure and trustworthy digital ecosystem.

Key Findings: Beliefs

97 percent of respondents say that data privacy is important to them
87 percent believe that data privacy is a human right
86 percent believe that data privacy is a growing concern
68 percent don’t trust companies to ethically sell their private data
54 percent don’t trust companies to use their personal data in an ethical way
53 percent don’t trust companies to collect data in an ethical way
50 percent don’t trust companies to protect their personal data

These insights should not surprise us. After all, a majority of Americans have, at one time or another, been personally affected by a data breach. On the one hand, it is encouraging to see that public opinion is shifting; our experiences have made us all more cognizant of how our identity and personal data are being misused and mismanaged. On the other hand, these experiences are also contributing to a growing mistrust of all forms of digital ID. For those who work in this space, this is a timely reminder that, as we develop and deploy new forms of digital ID, we must do so with an intentional focus and abiding commitment to rebuild and maintain public trust.

Key Findings: Concerns

83 percent of respondents worry most about the theft of their Social Security Number, followed by their credit card number (69 percent) and their passwords (49 percent)
Only 16 percent are worried about their medical records being stolen; medical records are the most commonly cited example of data that consumers trust companies to protect (57 percent)

Given these rapidly changing consumer opinions, and with our vision of good digital ID for all as our guiding star, we at ID2020 regularly revisit the question: how do we get there?

The data suggest what we have long believed: that, in the coming years, market forces (i.e. consumer demands) will drive tectonic shifts in the data economy. But will market forces be enough?

Most telling in the data is that the public is most likely to trust healthcare companies to protect the privacy of their medical records. Laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the California Consumer Privacy Act (CCPA) establish robust data protections and stiff penalties for companies that mishandle data. These offer the public a degree of confidence that their health information will be held sacrosanct.

The house is on fire, and the public is finally smelling the smoke when it comes to data security. Market forces can be extremely powerful, but we expect that governments will play an equally important, catalytic role by establishing the regulatory frameworks and consumer protections necessary to rebuild trust and encourage the broad adoption of safe and secure digital ID applications.

Here as well, the KPMG data provides some valuable insights for policymakers.

Key Findings: Who is Responsible

91 percent of respondents say that corporations should take the lead in establishing corporate data responsibility
56 percent say that companies should prioritize giving consumers more control over their data in 2020
84 percent are open to state legislatures giving consumers more control over their data

“Data privacy issues are not going to go away,” says KPMG Cyber Security Services Principal, Steve Stein. “In fact, consumer protections around data privacy, like the ones provided by the CCPA, are very likely to be codified in other states and eventually at the federal level. Simply put, privacy laws are only going to increase in volume and rigor. That’s why visibility, protection, and trust is gaining such momentum in the marketplace and also why leading-edge companies are not looking at data privacy as just another compliance or check-the-box exercise. They see privacy as one of the pathways to growing their business by improving trust with their customers.”

So…How DO We Get There?

Identity systems rely on trust to function: trust between issuers of identity and relying parties and, critically, the trust of those who use the system to prove their identity and access various goods, services, and privileges.

ID2020 was established in 2016 to promote the adoption and implementation of user-managed, privacy-protecting, and portable digital ID solutions. To achieve this vision, we are working simultaneously along three tracks.

We are helping shape the market through the ID2020 Certification, which applies 41 rigorous, outcome-based Technical Requirements to certify best-in-class digital ID solutions. We are working with policymakers in the United States and internationally to advocate for the ethical implementation of better forms of digital ID. And, as the technologies continue to evolve, we are implementing programs in the field to test and apply what we learn as these systems are replicated and brought to scale.

Fully realizing the potential of digital ID will require businesses, technology providers, policymakers, and civil society to collaborate — and quickly — to build and implement functional, privacy-preserving, user-managed ID systems, and work to overcome the mistrust which could impede their broad adoption. We developed the ID2020 Alliance model to foster this collaboration.

The road to good ID for all is riddled with potholes…and we have one chance to get this right.

About ID2020

ID2020 is a global public-private partnership that harnesses the collective power of nonprofits, corporations, and governments to promote the adoption and implementation of user-managed, privacy-protecting, and portable digital ID solutions.

By developing and applying rigorous technical standards to certify identity solutions, providing advisory services and implementing pilot programs, and advocating for the ethical implementation of digital ID, ID2020 is strengthening social and economic development globally. Alliance partners are committed to a future in which all of the world’s seven billion people can fully exercise their basic human rights, while ensuring data remains private and in the hands of the individual.

New Report Highlights Public Concerns About Data Privacy was originally published in ID2020 on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 19. August 2020

Berkman Klein Center

“No simple answer”

“Covid State of Play” examines sick buildings, planning lags, and reopening

By Carolyn Schmitt

A tent hospital in Central Park. Photo: Wikimedia Commons

In the absence of leadership guidelines for mitigating COVID-19, creative solutions — in concert with established public health recommendations — are key.

Low-cost air quality sensors, rethinking what schools look like, and identifying new modes of collaboration are a few of the creative approaches discussed during a recent discussion hosted by Jonathan Zittrain and Margaret Bourdeaux of the Berkman Klein Center’s Digital Pandemic Response program.

The talk examined the current “Covid State of Play,” and covered COVID-19 testing, school reopenings, and ventilation. Their guest, Joseph Allen, is an Assistant Professor of Exposure Assessment Science at Harvard’s T.H. Chan School of Public Health.

Bourdeaux, also of the Harvard Medical School’s Global Public Policy and Social Change program and the Harvard Kennedy School’s Security and Global Health Project, outlined the current testing situation in the United States, urging for more action and implementation.

“I think this is all about hard work. It is about systematic planning. It is about leadership and not counting [on] that there’s going to be some miracle cure, some miracle intervention that is going to save us. We are going to have to actually do the work that is required to control this outbreak and we just have to grow up and do it,” she said.

Allen added that there is ongoing work on a rapid saliva-based at-home test, which needs to be reviewed by the FDA but has the potential to be a quick, accessible, low-cost test. “There is the technology available for at-home rapid tests. To Margaret’s point, we need it now,” he said, noting that the test isn’t the same as the PCR tests currently in use. “It takes a different mentality. It’s not a diagnostic test like we expect at the doctor’s office. It’s a tool to control the pandemic or help control the pandemic, so we need a mindset shift here and think about how we think about testing.”

School reopenings without testing infrastructure and public health implementations are also a pressing concern. Atop the testing challenges, many schools and universities have buildings with poor ventilation, Zittrain pointed out. Bourdeaux and Allen emphasized how being outside is safer than being indoors, but buildings with good ventilation are important for mitigating the spread of COVID-19.

“We are in the sick building era,” Allen said, meaning many buildings meet only minimum air quality standards to save energy. “So we’re paying the consequence right now for our choices that we’ve stopped designing buildings for people.”

Zittrain inquired about the use of carbon dioxide sensors in indoor spaces to monitor the air quality, a tool Allen said is already in use, and that his lab has also built. “Some of these can connect to the building information system, so in real-time it can, to your point, ‘hey, your CO2 hit a certain level. Let’s open up the dampers in here.’ In fact, it’s called demand control ventilation,” Allen explained.

But the availability of low-cost carbon dioxide sensors means employees can also raise red flags about air quality. “It’s democratized this healthy buildings idea and people are sharing that data. They are sharing that. Buildings are getting labeled sick buildings,” he said. “People can finally make the invisible visible with these cheap sensors.”

With “sick buildings” as a backdrop, the conversation shifted to whether schools should reopen. Allen, a proponent for reopening schools, argued there should be prerequisites to opening: “One, you have to control the spread and two, you have to make enhancements to your risk reduction strategies within the school. So it’s the when and the what. When to open and what has to be done, and so that’s where I’ve been bullish to say hey, if you do those things, sure, schools should open.”

Allen cited recent reopening failures in states like Georgia as examples of when these conditions were not met and schools should not have opened. “I am confident if we meet those metrics you’ll have low community spread and the probability of entering into the school and your new cases lower, that’s obvious as a numbers game. And then if you put these other strategies in place which we know work in hospitals and elsewhere, including and beyond airborne transmission, it’s mask-wearing, it’s de-densification, it’s managing flows of people and queues of people,” he said.

Bourdeaux echoed Allen’s concerns, emphasizing the importance of controlling community transmission, including case counts, and “understanding how robust your public health measures to end community transmission are.”

She compared the current response to the virus to the way people experience a hurricane, where the storm blows over and the perceived danger subsides. Instead, she said, there is more work to do from a public health perspective before having the reopening conversation. Bourdeaux said that a national plan, and the important conversations needed to help stop the spread of the virus, should be part of this action.

“We’re not having a very intelligent conversation about really what we’re dealing with to date, and so that’s not related to schools and whether schools could be made safe,” she said. “They absolutely can be made safe. We’ve seen buildings like hospitals, as Dr. Allen has pointed out. We can make places safe but I think that it’s asking a lot to say okay, let’s reopen schools when we’re not having a smart conversation about where we stand with community transmission in general.”

While children have lower infection and mortality rates, Allen countered that schools play an important function for many students, and other risks — such as access to food and virtual dropout — should be factored in as well. “If we don’t think there are consequences to keeping tens of millions of kids outside of school, they’re at higher risk of abuse and neglect, exploitation. The loss of learning. The loss of socialization. Over 30 million kids rely on schools for meals. These are massive costs and it’s horrifying to recognize that our country hasn’t prioritized this,” he said.

Along similar lines, Zittrain asked whether any official guidelines for reopening might further intensify the inequalities between wealthier communities — who have access to more resources — and marginalized communities, who are disproportionately impacted by COVID-19.

These inequalities will still exist with virtual schooling, Allen said. “This virus is exposing deep fissures within our society, the structural racism that’s in our society that exists within these schools. If we keep kids all at home that’s going to exist for the exact same reason and if you bring back some, well that inequality, inequity is going to exist and be exacerbated as well. There’s no simple answer here other than honestly it’s a systemic issue that needs to be fixed and fixed fast.”

To address these myriad challenges presented by COVID-19, Zittrain asked about best practices for sharing information and working together. Both Allen and Bourdeaux underscored the great opportunities and responses they have seen. Allen described the past few months as a period of great collaboration and camaraderie with “the whole world, every scientist and medical professional is focused on the same problem.” As an illustration of such new collaboration, he cited a report he worked on to advise school superintendents. Bourdeaux similarly emphasized how so many people are trying to take action to help and support during the pandemic, and referred to a recent poll that says most Americans support a mask mandate.

The trio also explored creative ways to host schools, such as makeshift schools outside, similar to how hospitals made tented spaces in parks. Allen pointed to an op-ed he wrote outlining steps to reopen, which includes temporary school spaces.

“Let’s put some tents. Let’s use the ball field. Let’s get creative. Look at what the medical community did…There were tents in Central Park,” he said. “We should turn convention centers into schools. Let’s put tents in every park. We can get real creative here instead of saying well, we have this old crumbling infrastructure, what are we going to do? Let’s just jam a thousand kids back into it and do everything the same way. Instead, I think there are some creative solutions out there.”

“No simple answer” was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 18. August 2020

Digital Identity NZ

Ready for action


Kia ora,

In the last month we have shared with you our latest research findings on Digital Identity and Trust, and just last week the DIA’s DITP team shared with us their exciting news on the formal development of a Digital Identity Trust Framework, see the webinar recording here for Aotearoa/New Zealand.

Both are important foundations that collectively give us the confidence to take definitive action.

The overriding theme of the Trust and Identity research was one of the organisations and their technology partners taking on a greater share of responsibility when it comes to trustworthy management of personal information and data. This involved recognising that Identity is a Taonga, and understanding (and acting upon) their role as kaitiaki (custodians) of that precious information. Digital Identity NZ has an important role to play in helping organisations on that journey.

The news from the DIA that the Government will introduce a Digital Identity Bill in 2021 is a huge boost to those of us committed to developing a trustworthy digital identity ecosystem in Aotearoa. A definitive trajectory embracing collaboration allows organisations to develop new and innovative technologies and business models that enhance privacy, security and trust while reducing complexity and cost. We welcome and applaud the announcement, and acknowledge the amazing efforts of the DITP team in making it happen.

The research and Trust Framework announcements happily coincide with a new phase of community activity in Digital Identity NZ. From this week we will be forming member-led project teams around a number of action-oriented initiatives:

1: Development of an Aotearoa Digital Identity Ecosystem Map (this kicked off on Monday with a pilot survey to member organisations)

2: Further analysis and follow-up on the high-level findings from our 2020 research, including a deeper analysis of Māori, Pasifika, and senior communities, as well as those who may experience accessibility challenges

3: Release of legal and technical analysis of digital identity’s role in enabling customer onboarding for organisations with AML/CFT responsibilities, and formation of one or more project teams to discuss and explore options for furthering reuse of and/or reliance on identity credentials

4: Development of a Digital Identity education roadmap incorporating the principles of Kāwanatanga (Honourable Governance), Rangatiratanga (Self Determination & Agency), Ōritetanga (Equity & Partnership) and freedom of choice.

We will be working closely with the DITP team as they develop the Interim Trust Framework for the remainder of 2020. Further initiative-based project teams will be formed as we move through that process. Similarly, we will be gauging member interest in developing a community submission on the Consumer Data Right, in response to MBIE’s request for feedback on their discussion document ‘Options for establishing a consumer data right in New Zealand’.

We will be communicating with you to gather expressions of interest and to invite you to initial exploratory Zoom sessions as each of the initiatives kicks off. In the meantime please feel free to contact me directly.

Now more than ever is a great time for you to join DINZ if you are not yet a member. Participating in the project teams and some of our research material is limited to member organisations only, and joining now will also allow you to consider nominating for a role on our Executive Council elections towards the end of the year. Join us as we shift into this exciting phase of action.

Ngā Mihi,

Andrew Weaver
Executive Director

To receive our full newsletter including additional industry updates and information, subscribe now

The post Ready for action appeared first on Digital Identity New Zealand.


Decentralized Identity Foundation

Where to begin?

An Overview of Introductory Resources (August 2020)

[Note: this article is also available as a downloadable PDF. It was co-written with Kaliya-IdentityWoman.]

There is no one single (or even central) place where decentralized identity technologies are being created. It is decentralized in its processes as well as its topography. Learning the “lay of the land” entails bouncing around a network more than surveying a city from a high vantage point. But that’s ok! We recommend starting from the least familiar of these links and bouncing around, rather than reading start to finish.

Photo by Benjamin Elliot

— Video Introductions to SSI

An Introduction to Self-Sovereign Identity (9 min)
This presentation by the “SSI Ambassador” (Adrian Doerk of the LISSI project, headquartered in Frankfurt, Germany) touches on the psychology and sociology of identity definitions. It highlights why digital identity offers many different ways to present ourselves and walks through the basics of how ID today is dominated by mega-IdPs (identity providers). It gets to the conceptual heart of what SSI is about. It also shows a live demonstration of a scooter-authorization project spearheaded by T-Mobile Germany.

Self-Sovereign Identity (SSI) Foam Figure Explainer v2 (9 min)
John Phillips of 460 Degrees, a consulting firm from Sydney, Australia, updated his earlier video in January 2020. This level-setting first introduction was designed to help Phillips’ clients understand what SSI looks like from a business-process perspective, the key perspective from which large enterprises are most likely to decide how much to invest in researching and considering SSI solutions. For more guidance on B2B sales and education, see his longer February presentation at SSI Meetup on how to explain SSI to C-level executives.

The True Meaning of Identity (38 min)
In September 2019, Kaliya Young presented to SIBOS (the conference of the SWIFT settlement network central to global banking) in their Innovation Track. The video introduces the concept of identity and how it has evolved over time, before sharing at a high level how self-sovereign identity works and why it solves widespread business problems.

Domains of Identity (33 min)
In July 2018, Kaliya Young presented to the MyData Conference and tied together her research developing the Domains of Identity and how the various domains connect to the usage of decentralized identity technology: decentralized identifiers and verifiable credentials.

The SSIMeetup series, run out of Madrid, Spain by the indefatigable Alex Preukschat, has been assembling a video archive of its live webinar series for years. For these, they invite leaders of influential SSI projects from around the world. These events have a loyal following of regular attendees, making for a spirited Q&A session at the end of each video. While most of the videos tend towards deep dives in specific technical, governance, business, or regional topics, some of them can be useful, accessible, and inspiring for novices. Here are a few we recommend in particular as “novice-friendlier deep dives”:

At the Core of SSI is the Decentralized Identifier (DID) (50 min) by Drummond Reed
CIO Brief: Why SSI is Important (65 min) by Steve Magennis
Introduction to Hyperledger Aries (71 min) by Nathan George
Sovereignty in Historical Context (55 min) by Natalie Smolenski
SSI in Healthcare (63 min) by Manreet Nijjar

— Organizational Centers of Gravity in SSI

Decentralized Identity Foundation (you are here)

This organization was formed as a Joint Development Foundation project in 2017 and has grown to be a major venue for IPR-protected co-development among large and small industry players. It has historically focused on the development of both working open-source code and pre-standard specifications for decentralized identity, but it is starting to branch out into non-technical forms of cooperation for the purposes of market-building and to promote all open decentralized identity technology, whether created in DIF or elsewhere.

The W3C Credentials Community Group

This public discussion group is affiliated with the World Wide Web Consortium (W3C), a standards organization for web technologies supported by membership dues and responsible for the management of core standards like HTML and CSS. The Credentials Community Group (CCG) is not an official working group of the W3C, but its work is still protected under a version of W3C’s IPR regime. Work items can be proposed by W3C non-members, and these often include specifications that go on to be standardized. The CCG meets every week and is a hub for coordinating activity. Its meetings are recorded and minuted, often including presentations. The group also coordinates other discussion groups open to non-members, such as the Education Credentials Task Force (“vc-ed”).

The Internet Identity Workshop

This biannual event has been the convening at the heart of the decentralized identity community for 15 years, held at the Computer History Museum in the heart of Silicon Valley. It is uniquely co-created by its participants in a mostly-organic and community-driven way, with no pre-picked speakers, keynotes, or commercial presentations aside from a demo hour. True to its name, workshopping, whiteboarding, and open (but detailed and concrete!) discussion on a massive scale are the core of the event. The books of proceedings organize and refine all of the notes taken live during all the sessions convened at each event.

MyData Global

This influential and egalitarian global organization grew out of a series of conferences first held in 2016. They advocate for a human-centric and rights-driven vision in which individuals can get digital services that support them in collecting their data, services that not only safeguard and serve the data subject but even empower them. Organizational and individual members come together to create an ongoing dialogue between “consumers” and ethical businesses to shape new types of markets for digital services. Rooted in the values of the MyData Declaration, they focus primarily on elaborating refined, nuanced, and bottom-up models for data governance; only from a solid governance foundation do they begin to make policy, business, and technology decisions. They recently published the MyData Operators paper, bringing their policy, business, and technological goals more squarely into the realm of decentralized identity.

The Sovrin Foundation

Since the early days of decentralized identity, Sovrin has been one of the major hubs of innovation and entrepreneurship, serving as a kind of all-in-one community, codebase, blockchain, and better business bureau. They have also published many canonical educational and marketing texts that have been foundational to the development of “self-sovereign identity” as a sector of the software industry. The Aries project (a Hyperledger project) and the Trust over IP Foundation are, in a sense, spin-outs of the Sovrin Foundation, and both remain loosely based (sometimes by design, sometimes by momentum) on the Sovrin community’s codebase, ledger, and design principles. The fastest way to familiarize yourself with these is the whitepaper library on the Foundation’s website.

Standards organizations

These centers of gravity are where ideas and business models evolve in broad conversation. Inevitably, ground rules and governance models have to be negotiated among technical experts, and standards adopted, before major investments (of capital, but also of legislation and public goodwill) can be approved. See Nader Helmy’s great tour of the standards organizations relevant to these centers of gravity.

— Government Initiatives:

United States Federal Government

Anil John, head of the Silicon Valley Innovation Program run under the auspices of the Department of Homeland Security’s Science and Technology Directorate, has been funding the development of standards and business models in this nascent field for years. Their website hosts an overview of related projects and articles, as well as guidance documents such as the Taxonomic Approach to Understanding Emerging Blockchain Identity Management Systems. Anil’s blog, hosted by CyberForge, includes many thoughtful pieces, such as his recent thoughts on interoperability or his article on the role of government in free-market technology, Can LESS be more?

The British Columbia Provincial Government

“BCGov” has invested significantly in the SSI IT management model and has pioneered an approach to verifiable public data represented by the Verifiable Organizations Network (“VON”). This network provides public-record credentials about registered businesses in a publicly accessible repository of Verifiable Credentials, bringing the traditional “OrgBook” into the decentralized era. By directing much of its IT budget towards open-source development and participating heavily in the emerging open-source community around this project, BCGov has provided invaluable leadership, particularly within the Aries community, including such major contributions as the core of the ACA-Py codebase.

The European Union

ESSIF (European Self-Sovereign Identity Framework) is an ongoing open-source initiative to seed and accelerate the role of SSI in the EU digital “single market” strategy. It is housed jointly between the directorates-general responsible for IT planning and policy in Brussels and the European Blockchain Services Infrastructure (EBSI), a collaboration of 30 EU and EU-affiliated countries working together to build a shared infrastructure for government blockchain projects. SSI Meetup webinars are the best way to learn more about ESSIF and its role in the EU’s broader digital initiatives:

An overview by ESSIF convenors Daniel du Seuil and Carlos Pastor (July 2019)
ESSIF Chief Legal Counsel Nacho Alamillo gave an introduction to ESSIF’s approach to eIDAS in February 2020, and a more detailed overview of the official report of the legal review based on that approach in May 2020
ESSIF-LAB program coordinators Oskar van Deventer and Rieks Joosten (TNO, the Hague) gave an overview of how their program incentivizes interoperability and European contributions to the broader SSI landscape (March 2020)

Interested parties are encouraged to check the ec.europa.eu website for current public consultations and calls for comment.

Other Notable International Consortia:

The Known Traveler Digital Identity project is led by the World Economic Forum and brings together the governments of Canada and the Netherlands, two airlines, and three airports to create a proof of concept that was scheduled to start real-world trials in the ill-fated Spring of 2020. Interrupted roadmaps aside, it has produced a significant corpus of documentation, policy recommendations, debate, and interest among governments and technical industries.

— Whitepapers and Publications:
- The Concept of Self-Sovereign Identity including its Potential, by eGovernment Innovationszentrum, Graz University of Technology
- Self-sovereign Identity: A position paper on blockchain enabled identity and the road ahead, by the Identity Working Group of the German Blockchain Association
- Decentralized Identity: Own and control your identity, by Microsoft

Decentralized-ID.com is a sprawling and truly bottomless collection of resources culled from conference annals, github, and technical publications. It is almost exclusively gathered and curated by anonymous independent researcher “Infominer,” who has been working in the virtual salt mines of decentralized identity, cryptocurrency, and the #indieweb movement for almost a decade.

Rebooting the Web of Trust hosts a “github journal” of whitepapers, most of them not just peer-reviewed but collaboratively written in person at 3-day “working conferences”; these range from highly technical and even cryptographic topics to business and UX-oriented contributions to the knowledge base of the broader decentralized-identity community. Some recent and still-topical highlights include:

- Co-organizer Joe Andrieu’s Primer on Functional Identity
- Eric Welton’s Bearing Witness and Ecosystem bootstrapping via Notary VCs
- Pamela Dingle, Daniel Hardman, et al’s Alice attempts abuse, on attack modeling the SSI credential exchange model
- Michael Shea, Sam Smith, and Carsten Stöcker, Cooperation beats Aggregation
- Kaliya Young et al., Reputation Interpretation

— Monographs:
- Comprehensive Guide to Self Sovereign Identity (2019) — Kaliya Young / Heather Vescent
- Spherity’s SSI 101 Series on Medium (2020) — Juan Caballero
- Self Sovereign Identity (2021) — Alex Preukschat / Drummond Reed

— Podcasts:
- Definitely Identity, by Tim Bouma, who leads trust framework and identity projects for the IT authority of the federal government of Canada. On the show, he interviews leaders in the field.
- State of Identity. The One World Identity conference and network has a podcast that covers the identity technology sector broadly, including cybersecurity, federated and centralized identity vendors, and even identity-related machine learning projects.
- PSA Today, by Kaliya Young and Seth Goldstein. PSA stands for Privacy, Surveillance and Anonymity, and the show covers a wide range of data rights topics.

There is, of course, much more to be recommended for deeper dives into specific technologies, business problem spaces, policy histories, and governance thinking. But we have to draw the line somewhere, and by the time you get to those advanced topics, are you really reading “introductory” texts anymore? Stay tuned for more curation, more knowledge bases, and more network exploration.

Where to begin? was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Berkman Klein Center

Beware of Apps Bearing Gifts in a Pandemic

Companies are using tricky user interfaces to get more of our data in ways that take advantage of our isolation and need.

Continue reading on Berkman Klein Center Collection »


ID2020

ID2020 Announces Certification of BLOK Bioscience Immunity Passport

ID2020 Announces Certification of BLOK Pass from BLOK Bioscience

ID2020 is proud to announce the certification of BLOK Pass from BLOK Bioscience, a digital ID-based solution for COVID health status certificates. BLOK Pass is the first COVID-focused solution to be certified by ID2020.

BLOK Pass offers individuals a self-sovereign record of their testing, antibody and, ultimately, vaccination status. Initially envisioned as a means to help businesses and governments manage the safe and incremental return to public life in the midst of the COVID-19 pandemic, the solution is expected to be applied more broadly in the future to support immunization certificates and the transmission of other medical test results.

“The COVID-19 pandemic has thrust digital credentials into the global spotlight,” said ID2020 Executive Director, Dakota Gruener. “As we consider digital ID-based solutions for public health applications, getting the technology right is not negotiable. We intentionally set a high bar for certification and are delighted to recognize the BLOK Pass solution for meeting our high standards for privacy protection, user-management, portability, and more.”

To be eligible for certification, solutions must adhere to 41 functional, outcomes-based Technical Requirements. In addition to providing a roadmap to help developers create better products, the ID2020 Certification also provides a “third-party seal of approval” so that implementers — and ultimately, end-users — can trust that the technology was developed in accordance with the highest ethical and technical standards.

“Pandemic management is essentially an entirely new solution domain,” said BLOK Solutions Chief Technology Officer, Areiel Wolanow. “The precedents we set now will set the standard for how the future unfolds, so we have a duty to get things right. By starting with the principle that individuals should always be the sole owner of their data, it is our hope at BLOK that this is a standard that others will find exceedingly difficult to deviate from.”

The ID2020 Certification is already impacting the technical landscape for digital ID, and technology providers of all sizes are increasingly aligning their technical approaches to comport with ID2020’s requirements. To date, more than 30 technology providers from every corner of the globe have submitted applications and worked with the ID2020 staff and advisory committees to complete the application process.

Today, BLOK Pass joins Kiva Protocol, Gravity.earth, and ZAKA as part of a small, but rapidly growing, cadre of ID2020 certified digital ID solutions.

About ID2020

ID2020 is a global public-private partnership that harnesses the collective power of nonprofits, corporations, and governments to promote the adoption and ethical implementation of user-managed, privacy-protecting, and portable digital identity solutions.

By developing and applying rigorous technical standards to certify identity solutions, providing advisory services and implementing pilot programs, and advocating for the ethical implementation of digital ID, ID2020 is strengthening social and economic development globally. Alliance partners are committed to a future in which all of the world’s seven billion people can fully exercise their basic human rights and reap the benefits of economic empowerment and to protecting user privacy and ensuring that data is not commoditized.

www.ID2020.org

About BLOK BioScience, Ltd.

BLOK BioScience is part of the BLOK Group. With a network of global experts and a robust and far-reaching supply chain, we provide trusted and authentic solutions to help respond to the rapidly changing population wellness landscape.

Our unique combination of medical and strategic expertise and technical knowledge results in secure and compliant solutions for tracking and verifying immunity, and we use rapid antibody testing and recording to enable governments and industry to manage viral outbreaks and mitigate the economic and social effects of pandemics.

www.blokbioscience.com

ID2020 Announces Certification of BLOK Bioscience Immunity Passport was originally published in ID2020 on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 17. August 2020

FIDO Alliance

New White Paper Series Provides How-tos and Best Practices for Going Passwordless in the Enterprise

Support for FIDO in browsers and operating systems is widespread and growing fast. Enterprises now have better tools to replace easily compromised passwords with simpler, stronger FIDO Authentication and eliminate phishing, man-in-the-middle and other security attacks. But, if you want to deploy FIDO in your enterprise, what are the first steps? Do you need to explain “why FIDO?” to your CISO? What do the timelines look like? Should you build your own server or work with a vendor? What FIDO authenticators should you accept? How do you manage  them? 

The FIDO Alliance Enterprise Deployment Working Group (EDWG) will answer these questions, and more, in its new white paper series. The series aims to educate corporate management and IT security on the improvements available for authentication today and how to leverage them within their own organizations. This work is dedicated to eliminating passwords and securing the simple act of logging into company systems and applications. 
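The security property that makes FIDO logins phishing-resistant is that the authenticator proves possession of a credential key against a fresh, origin-bound challenge, rather than transmitting a reusable password. The following toy Python sketch illustrates that flow only in outline; it uses HMAC over a shared demo key as a stand-in for the asymmetric signature (e.g. ES256) a real FIDO2 authenticator would produce, and all class and function names here are illustrative, not part of any FIDO API.

```python
import hmac
import hashlib
import secrets

RP_ORIGIN = "https://example.com"  # the relying party's web origin

class Server:
    """Toy relying party: stores credential keys and verifies assertions."""

    def __init__(self):
        self.registered_keys = {}  # username -> credential key material

    def register(self, username, credential_key):
        # At registration the server stores verification material.
        # In real FIDO this is a public key; the private key never
        # leaves the authenticator.
        self.registered_keys[username] = credential_key

    def new_challenge(self):
        # A fresh random challenge defeats replay attacks.
        return secrets.token_bytes(32)

    def verify(self, username, challenge, origin, assertion):
        # The signed data binds both the challenge and the origin: an
        # assertion produced for a look-alike site has the wrong origin
        # baked in, which is the heart of FIDO's phishing resistance.
        key = self.registered_keys[username]
        expected = hmac.new(key, challenge + origin.encode(),
                            hashlib.sha256).digest()
        return origin == RP_ORIGIN and hmac.compare_digest(expected, assertion)

def authenticator_sign(credential_key, challenge, origin):
    # The authenticator "signs" (challenge, origin) after local user
    # verification (touch, PIN, biometric); no password is transmitted.
    return hmac.new(credential_key, challenge + origin.encode(),
                    hashlib.sha256).digest()

server = Server()
cred_key = secrets.token_bytes(32)
server.register("alice", cred_key)

challenge = server.new_challenge()
ok = server.verify("alice", challenge, RP_ORIGIN,
                   authenticator_sign(cred_key, challenge, RP_ORIGIN))
phished = server.verify("alice", challenge, "https://evil.example",
                        authenticator_sign(cred_key, challenge,
                                           "https://evil.example"))
print(ok, phished)  # True False
```

Because the origin is part of the signed data, the assertion minted at the phishing origin fails verification at the real relying party, and the one-time challenge means a captured assertion cannot be replayed later.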

First up in the series is the primer “CXO Explanation: Why Use FIDO for Passwordless Employee Logins?” This document is the guide for you and/or the executive leaders in your organization as to why you should invest in FIDO2 deployment for your employees.

It addresses all of the common questions from CXOs on the value proposition of FIDO Authentication and how the FIDO2 passwordless framework addresses the authentication needs and challenges of companies for the modern workforce. Read it now at https://fidoalliance.org/white-paper-cxo-explanation-why-use-fido-for-passwordless-employee-logins/ and pass it along to colleagues.

Subsequent entries in this educational series will focus on server deployment, authenticator choices, authenticator life-cycle management, and credential acceptance in the enterprise. This series is part of the Alliance’s strategy to provide expert deployment guidance to our community in order to support the rapidly growing number of FIDO implementations across a variety of use cases. Please watch this space as we publish more in this Enterprise Series over the coming months. 

The post New White Paper Series Provides How-tos and Best Practices for Going Passwordless in the Enterprise appeared first on FIDO Alliance.

Thursday, 13. August 2020

Berkman Klein Center

The Breakdown: Daphne Keller explains the Communications Decency Act

Daphne Keller discusses CDA 230, the executive order, and content moderation.

Daphne Keller (left) joined Oumou Ly (right) for the latest episode of The Breakdown.

In this episode of The Breakdown, Oumou Ly is joined by Daphne Keller of the Stanford Cyber Policy Center to discuss Section 230 of the Communications Decency Act, content moderation and Big Tech platforms, and the events that propelled them into the spotlight in recent months.

Section 230 of the Communications Decency Act, or “The Twenty-Six Words That Created The Internet,” provides platforms legal immunity for third-party speech, including content posted by their users. It came under fire recently when President Donald Trump signed an executive order to limit protections for social media companies.

Read the transcript, which has been lightly edited for clarity.

Oumou Ly (OL): Welcome to the Breakdown. My name is Oumou; I’m a staff fellow on the Berkman Klein Center’s Assembly: Disinformation program. Our topic of discussion today is CDA 230, Section 230 of the Communications Decency Act, otherwise known as “The Twenty-Six Words That Created The Internet.” Today I’m joined by Daphne Keller from the Stanford Cyber Policy Center.

Thank you for being with us today, Daphne. I appreciate it, especially since this conversation will help to unpack what has turned out to be such a huge and maybe consequential issue for the November election, and certainly for technology platforms and all of us who care and think about disinformation really critically.

One of the first questions I have for you is a basic one: can you tell us a little bit about CDA 230 and why it’s referred to as The Twenty-Six Words That Created The Internet?

Daphne Keller (DK): Sure. So first, I strongly recommend Jeff Kosseff’s book, which coined that twenty-six words phrase; it is a great history of CDA 230, and it’s very narrative.

So intermediary liability law is the law that tells platforms what legal responsibilities they have for the speech and content posted by their users. And US law falls into three buckets. There’s a big bucket, which is about copyright, and there the law on point is the Digital Millennium Copyright Act, the DMCA, which has this very choreographed notice-and-takedown process.

The other big bucket that doesn’t get a lot of attention is federal criminal law. There’s no special immunity for platforms for federal criminal law crimes. So if what you’re talking about is things like child sexual abuse material, material of support of terrorism, those things, the regular law applies. There is no immunity under CDA 230 or anything else.

And then the last big bucket, the one we’re here to talk about today, is CDA 230, which was enacted in 1996 as part of a big package of legislation, some of which was subsequently struck down by the Supreme Court, leaving CDA 230 standing as the law of the land. And it’s actually a really simple law, even though it’s so widely misunderstood that there’s now a Twitter account, Bad Section 230 Takes, just to retweet all the misrepresentations of it that come along.

“Broadly speaking, the Internet could not exist the way we know it without something like CDA 230”

But what it says is, first, platforms are not liable for their users’ speech. Again, for the category of claims that are covered, so this isn’t about terrorism, child sexual abuse material, et cetera. But for things like state law defamation claims, platforms are not liable for their users’ speech. And the second thing it says is, also, platforms are not liable for acting in good faith to moderate content, to enforce their own policies against content they consider objectionable.

And this, that second prong, was very much part of what Congress was trying to accomplish with this law. They wanted to make sure that platforms could adopt what we now think of as terms of service or community guidelines and could enforce rules against hateful speech or bullying or pornography, or just the broad range of bad human behavior that most people don’t want to see on platforms. And the key thing that Congress realized, because they had experience with a couple of cases that had just happened at the time, was that if you want platforms to moderate, you need to give them both of those immunities. You can’t just say you’re free to moderate, go do it. You have to also say, and if you undertake to moderate, but you miss something and there’s defamation… still on the platform or whatever, the fact that you tried to moderate won’t be held against you.

And this was really important to Congress because there had just been a case where a platform that tried to moderate was tagged as acting like an editor or a publisher and therefore facing potential liability. That’s the core of CDA 230. And I can talk more if it’s helpful about the things people get confused about, like the widespread belief that platforms are somehow supposed to be neutral, which is —

OL: Well, would you please say something about that.

DK: Yeah. Congress had this intention to get platforms to moderate. They did not want them to be neutral; they wanted the opposite. But I think a lot of people find it intuitive to say, well, it must be that platforms have to be neutral. And I think that intuition comes from a pre-Internet media environment where everything was either a common carrier, like a telephone, just interconnecting everything and letting everything flow freely. Or it was like NBC News or The New York Times — it was heavily edited, and the editor clearly was responsible for everything that the reporters put in there. And those two models don’t work for the Internet. If we still had just those two models today, we would still have only a very tiny number of elites with access to the microphone.

And everybody else would still not have the ability to broadcast our voices on things like Twitter or YouTube or whatever that we have today. And I think that’s not what anybody wants. What people generally want is they do want to be able to speak on the Internet without platform lawyers checking everything they say before it goes live. We want that. And we also — generally — want platforms to moderate. We want them to take down offensive or obnoxious or hateful or dangerous but legal speech. And so 230 is the law that allows both of those things to happen at once.

OL: Okay. Daphne, can you talk a little bit about the two different types of immunity that are outlined under CDA 230, which we call in shorthand (c)(1) and (c)(2)?

DK: Sure. So in the super shorthand, (c)(1) is immunity for leaving content up, and (c)(2) is immunity for taking content down.

OL: Yeah.

DK: So most of the litigation that we’ve seen historically under the CDA is about (c)(1). It’s often really disturbing cases where something terrible happened to someone on the Internet, and speech defaming them was left up, or speech threatening them was left up, or they continued to face things that were illegal. So those are cases about (c)(1): if the platform leaves that stuff up, are they liable? The second prong, (c)(2), just hasn’t had nearly as much attention over the years until now. But that’s the one that says platforms can choose their own content moderation policy, that they’re not liable for choosing to take down content they deem objectionable as long as they are acting in good faith.

And that’s the prong that does have this good faith requirement. And part of what the executive order attempts is to require companies to meet the good faith requirement in order to qualify for immunities. If someone can show that you are not acting in good faith, then you lose this much more economically consequential immunity under (c)(1) for content that’s on your platform that’s illegal.

And the biggest concern I think for many people there is if this economically essential immunity is dependent on some government agency determining whether you acted in good faith. That introduces just a ton of room for politics because my idea of what’s good faith won’t be your idea of what’s good faith, won’t be Attorney General Barr’s idea of what’s good faith. And so having something where political appointees, in particular, get to decide what constitutes good faith and then all of your immunities hanging in the balance is really frightening for companies.

And, interestingly, today we see Republicans calling for a fairness doctrine for the Internet, calling for a requirement of good faith or fairness in content moderation. But for a generation, it was literally part of the GOP platform every year to oppose the fairness doctrine that was enforced for broadcast by the FCC. President Reagan said it was unconstitutional. This was just like a core conservative critique of big government suppressing speech for decades, and now it has become their critique, and they’re asking for state regulation of platforms.

OL: That is so interesting to me, both that and the fact that CDA 230 in so many ways is what allows Donald Trump’s Twitter account to stay up. It’s really, really interesting that the GOP has decided to rail against it.

DK: It’s fascinating.

OL: So just recently, the president signed an executive order concerning CDA 230 pretty directly. Can you talk a little bit about what the executive order does?

DK: Sure. So I think I wanted to just start at a super high level with the executive order. In the day or so after it came out, I had multiple people from around the world reach out to me and be like, this is like what happened in Venezuela when Chavez started shutting down the radio stations.

It has this resonance of: there is a political leader trying to punish speech platforms for their editorial policies. And that — before you even get into the weeds — that high-level impact of it is really important to pay attention to. And that is the reason why the CDT (the Center for Democracy and Technology) in DC has filed a First Amendment case saying this whole thing just can’t stand; we’ll see what happens with that case.

But, and there again, like, that’s not a bad idea, but then it leads to things in the executive order that I think don’t work. So then there are also four other things in the executive order that might be big deals. One is that the DOJ is instructed to draft legislation to change 230. So eventually, that will come along, and presumably, it will track the very long list of ideas that are in the DOJ report that came out this week. [Editor’s note: this interview was recorded on June 18, 2020] A second is it instructs federal agencies to interpret 230 in the way that the executive order does.

This is a way that I think is not supported by the statute: it takes the good faith requirement and applies it in places it’s not written in the statute. Nobody’s quite sure what that means, because there just aren’t that many situations where federal agencies care about 230, but we’ll see what comes out of that. A third is that Attorney General Barr of the DOJ is supposed to convene state attorneys general to look at a long list of complaints. And if you look at it, if you’re an Internet policy nerd, it’s just all the hot button issues… are fact-checkers biased? Can algorithmic moderation be biased? And, well, it can. How can you regulate that? You will recognize these things if you look at the list.

And then the fourth one, and this is one that I think deserves a lot of attention, is that the DOJ is supposed to review whether particular platforms are, quote, problematic vehicles for government speech due to viewpoint discrimination, unquote. And then, based on that, look into whether they can carry federally funded ads. For most platforms, I think, the ad dollars part is not that big a deal, but being on a federal government blocklist of platforms with disapproved editorial policies just has this McCarthyist feeling.

OL: Can you talk a little bit about the role of CDA in relation to the business models that the platforms run?

DK: Sure. So broadly speaking, the Internet could not exist the way we know it without something like CDA 230. And that’s not just about the Facebooks of the world; that’s about everything up and down the technical stack: DNS providers, CloudFlare, Amazon Web Services and other backend web hosting. And also tons of little companies, the knitting blog that permits comments or the farm equipment seller that has user feedback. All of those are possible because of CDA 230. And if you pull CDA 230 out of the picture, it’s just very hard to imagine the counterfactual of how American Internet technology and companies would have evolved.

They would have evolved somehow, and presumably the counterfactual is we would have something like what the EU has, which boils down to a notice-and-takedown model for every kind of legal claim. But they barely have an Internet economy of these kinds of companies. There’s a reason that things developed the way that they did.

OL: Yeah. Do you think, and maybe not just what you think, but I’m sure we can all agree this is likely to be the case: if the liability shield that 230 offers platforms is removed, how would that change the way that platforms approach content moderation?

DK: Well, I think a lot of little companies would just get out of the business entirely. And so there’s an advocacy group in DC called Engine, which represents startups and small companies, and they put together a really interesting two-pager on the actual cost of defending even frivolous claims in a world with CDA 230 and in a world without CDA 230. And basically, you’re looking at 10 to 30 thousand dollars in the best-case scenario for a case that goes away very, very quickly even now. And that’s not a cost that small companies want to incur. And there are all these surveys of investors saying, I don’t want to invest in new platforms to challenge today’s incumbents if they’re in a state of legal uncertainty where they could be liable for something at any time. So I think you just eliminate a big swath of the existing parts of the Internet that policymakers don’t pay any attention to.

You make them very, very vulnerable, and some of them go away, and that’s troubling, and you create a lot of problems for any newcomers who would actually challenge today’s incumbents and try to rival them in serious user-generated content hosting services.

For the big platforms, for Facebook, for YouTube, they’ll survive somehow; they’d change their business model. They probably… the easiest thing to do is use their terms of service to prohibit a whole lot more and then just take down a huge swath, so you’re not facing much legal risk.

OL: Yeah. It’s hard to imagine living in that kind of a world.

DK: It is, it is.

OL: Yeah. Thank you so much for joining me today, Daphne. This was a great and enlightening conversation, and I’m sure our viewers will enjoy it.

DK: Thank you for having me.

OL: Thanks.

The Breakdown: Daphne Keller explains the Communications Decency Act was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 06. August 2020

FIDO Alliance

2020 FIDO Hackathon in Korea Update: Mid-term Meetup Event

Joon Hyuk Lee, APAC Market Development Director, FIDO Alliance

Editor’s note: For the background information on the 2020 Hackathon in Korea, see the April blog post: “2020 FIDO Hackathon: Goodbye Password Challenge in Korea.”  To learn more about examples of proposed development ideas, please read the June blog post: “2020 FIDO Hackathon in Korea: Learn & Implement Phase.”

In the afternoon of July 1st, 2020, a Mid-Term Meetup Event for FIDO Hackathon – Goodbye Password Challenge was held at the Telecommunication Technology Association (TTA). Originally, the Hackathon Steering Committee had planned a full-day onsite final implementation and evaluation day following a month-and-a-half online training phase. Due to the global pandemic, we had to change our schedules in accordance with school calendar disruptions and summer holidays. We decided to have a half-day mid-term meetup event for participants. This allowed us to help the teams stay on course while providing a safe environment for people to learn from each other face-to-face.

Nineteen different teams participated in the event, half of them face-to-face with strict public health guidance applied, and the other half virtually. The meeting gave opportunities for teams to share their FIDO protocol-based online service development ideas and current development status, learn from each other and receive valuable feedback from FIDO Alliance Korea Working Group members.

In addition to sharing their projects’ current development status, the teams had the opportunity to present the “homework” they had completed after online training. The homework was writing a simple article on the web, with answers to the following questions:

- What is the FIDO Alliance?
- What are the FIDO protocols?
- What are the benefits of implementing FIDO protocols?
- (Optional) What services/products are you developing for the 2020 FIDO Hackathon, and what would be the value of adopting FIDO protocols for online authentication?

We were very pleased with the articles we received. You can read examples (mostly in Korean) by visiting these following links:

- No matter how passwords are complicated, they are not safe! Learn FIDO concepts
- FIDO Authentication: Good security knowledge to learn while you are quarantined at home
- 2020 FIDO Hackathon Homework – My Plan to Develop FIDO Based IoT Solution for Home
- FIDO – New technology standard to solve problems with IDs and Passwords
- Let’s learn about FIDO Alliance and FIDO Specs
- Summary on FIDO Alliance and FIDO Specs

We hope this short blog gives you a better understanding of the current status of the 2020 FIDO Hackathon in Korea.  We will be back soon with more updates after the final evaluation — scheduled for this week. 

The post 2020 FIDO Hackathon in Korea Update: Mid-term Meetup Event appeared first on FIDO Alliance.

Wednesday, 05. August 2020

WomenInIdentity

Interview with Dia Banerji, UK Ambassador at WiD

We interviewed Dia Banerji, one of our UK Ambassadors, to find out more about her passion for innovation and technology.

What gets you out of bed in the morning?

I am passionate about the power of innovative technology & its potential in creating a better world. As an individual, I enjoy solving problems with technology solutions. I consider myself to be fortunate to be able to pursue my passion in the work I do.

I am the founder of ID4V. It is an early stage start-up focused on building a platform to enable Self Sovereign Digital Identity for Cross-Border Travel Visa Applications using Blockchain Technology. ID4V aims to address the inefficiency in our travel visa application system which affects millions of people around the world. Very few experiences in life can be compared to the joy of creation. I am thrilled to be on this journey.

I also work as a Consultant to the Blockpass Identity Lab at Edinburgh Napier University, where I advise on commercialization of emerging technology. I help identify industry applications for Blockchain & Privacy Preserving Machine Learning research. Every day I spend at the lab I learn something new. I am surrounded by brilliance and get to work with innovative cutting-edge research. It is truly inspiring.

This is a very exciting time to be in Identity. We are creating a gateway to access the digital economy. What we build today & how we build it, will shape our future. My work in Identity allows me to be part of this incredible evolution.

Roosevelt once said “Far and away the best prize that life has to offer is the chance to work hard at work worth doing.” To me, working in Identity as an Entrepreneur and a Professional is work worth doing and I love it!

How did you get to where you are today?

The short answer would be ‘CURIOSITY’!

Prior to embarking on a career in the technology sector, I spent over fifteen years in the financial services industry and worked for some of the top global banks in the world. Around 2016, within my fraternity there was a lot of noise about Bitcoin. Most of my peers and friends who were in Financial Services were naysayers and opposed Bitcoin with the utmost passion. Wall Street greats like Warren Buffett famously called Bitcoin ‘Rat Poison Squared’! This got me curious. I wanted to learn more about Bitcoin to be able to figure out where I stood on the debate. This pursuit led me to studying Blockchain and Distributed Ledger Technologies. I was amazed by the potential of this technology in solving real-world problems. I wanted to be part of it. So, I pivoted my career to outlining use cases for Blockchain, especially within the Financial Services sector. And in that journey, I came to narrow my focus on Digital Identity and got to where I am today.

What is the most important lesson you have learned along the way?

The most important lesson that I learned was not to be intimidated by the things I did not know, and to appreciate the value of stepping outside my comfort zone. I was a financial services professional before I entered the world of Technology and Digital Identity. I have an MBA and not a computer science degree. There will always be some things that we do not know, and that is absolutely fine! Especially when one is working within the Identity Industry, which is constantly changing, with new standards being incubated as we speak. Knowledge has a way of demystifying complex concepts. And once you understand something in depth you are no longer intimidated by it. The key thing is to never stop learning. And one of the most enjoyable aspects of my work is that I get to learn something new all the time. To me this is both inspiring and empowering.

What should leaders in identity start, stop and continue – and why?

The concept of identity has been broadly accepted as a fundamental human right. A legal identity enables individuals to participate in society and have access to rights and services. It is a prerequisite to financial & social inclusion.

Identity management is a global problem. The current system does not scale and is in much need of disruption. Those of us in the developed world are plagued with the inefficiencies of legacy systems and data privacy breaches & leaks, leading to trust issues with central authorities. Many citizens of developing nations are denied access to financial & social services for lack of a verifiable identity. According to the World Bank, approximately one billion people across the world do not have access to an officially recognizable identity!

As the world transitions to a digital economy, we need more secure, scalable, interoperable and citizen-focused digital identity management solutions to ensure inclusion.

So, my advice to the CEOs in the identity space would be the following:

START: Collaborating with one another and building solutions and platforms which are interoperable. Have diverse teams within the organisation so that the solutions reflect the wider needs of society. Both are imperative for mass adoption!

STOP: Building centralized identity management solutions! Be citizen-focused and build applications on the principles of self-sovereign digital identity. Allow individuals to own, control and manage the distribution of their personal identity. This is essential to be able to build scalable and secure systems with no central point of failure and to reduce the risk of data theft.

CONTINUE: Identifying new use cases for identity management solutions and applying innovative technologies to enhance the efficiency and security of applications. Digital Identity is a relatively new industry and is constantly changing. We need to continue to fund new research and explore new use cases. This is crucial for the creation of robust digital identity management platforms for the future.

In one sentence, why does diversity matter to you?

I have always looked to nature to find order and to me diversity is the purest form of natural existence and the only way to live and thrive in this world.

What book/film/piece of art would you recommend to your fellow members? Why?

I would recommend the film, The Matrix. It is a beautiful film and the script is almost Shakespearean. If one sees beyond the action-packed computer graphics there is a deep spiritual message and a tale about the constant strife between ignorance and enlightenment faced by mankind. Like most great works of art, it leaves you questioning. Definitely worth a watch!

What advice would you give to the teenage ‘you’?

I would advise my teenage self to be ‘fearless’ and to ‘take chances’. Walking the path that no one has ever walked before requires one to be brave and follow one’s gut. Greatness is seldom achieved by being careful. So, go ‘ALL IN’, follow your passion and don’t seek external validation. Believe in yourself!

Where can we find you on social media / the Web?

You can find me on LinkedIn.

The post Interview with Dia Banerji, UK Ambassador at WiD appeared first on Women in Identity.


Digital Identity NZ

Development of Digital Identity Trust Framework confirmed

Cabinet has confirmed that a Digital Identity Trust Framework based in legislation will be developed. The Trust Framework will be a regulatory regime that ensures that identity service providers meet the required rules.

The Trust Framework will also ensure that citizens and businesses can have trust and confidence that their identity information is being handled appropriately.

Developing an Interim Trust Framework

The Department of Internal Affairs (DIA) will develop an Interim Trust Framework, which will enable the Trust Framework rules to be developed and tested with digital identity providers, while the legislation is being drafted.

The legislation is scheduled to be introduced to the House in 2021. The legislation will enable providers to be legally accredited against the Trust Framework rules, which will be based on existing and developing standards.

The Trust Framework will support the development of security, privacy-enhancing and interoperable approaches to digital identity services, to maximise benefits for citizens, the economy and society. To inform this process, DIA will be testing a range of approaches through pilot programmes with public and private sector organisations.

Find out more

Email the Digital Identity team: digital.identity@dia.govt.nz

Website: Digital Identity Programme

The post Development of Digital Identity Trust Framework confirmed appeared first on Digital Identity New Zealand.

Saturday, 01. August 2020

r@w blog

#DigitalLiteraciesAtTheMargins

Aakash Solanki, Sandeep Mertia & Rashmi M

Session

The session intends to initiate a discussion on digital literacies in the wake of the ‘Digital India’ programme, drawing on empirical insights from three different field situations. The discussion will be anchored in the social and material context of Digital India but will not be limited to it. The questions we raise in this specific context may be extended to understand the current conceptual as well as practical deployment of many ICT4D programmes as envisioned by both government and non-government actors. The idea of digital literacy is central to both the conceptualization and the execution of such programmes, and the actors in charge work with their own understanding of the context and needs of the people they aim to empower. There have been very few attempts to systematically understand the concept of digital literacy, which leaves much scope for either lenient contextual interpretations or a context-insensitive one-size-fits-all approach towards technological interventions. This session is an effort to begin one such discussion, which we hope will refine the prevalent understanding of digital literacy/literacies in India.

From a glance at the structure of Digital India programme, it is apparent that the programme is designed to achieve digital inclusion and is primarily directed towards the digitally marginalized in spite of having a more comprehensive agenda. The schemes such as National Digital Literacy Mission (NDLM) and the way they are conceived are indexical of the kind of target groups which the programme plans to address. A key concern for us is to think through the mismatches between the frameworks of the digital literacy initiatives and the local socio-technical contexts which we observed in our field sites. The objective of the session is not as much to arrive at the definitional fixity of the concept of digital literacy as it is to complicate and problematize the prevalent definitions of digital literacy implicit in both visualization and execution of such initiatives. We plan to meet this objective through empirical insights we have on three different field sites.

The session will also focus on certain methodological questions that might help us better understand digital literacy. This part of the session addresses questions such as: how can we conceptually define digital literacy/literacies? What parameters should go into the measuring of digital literacy? How should we theoretically understand it — as technical skills or knowledge or some higher cognitive ability? How can we best pedagogically achieve it given the complexity of ground reality? The questions will be directed towards encouraging thought in this area rather than providing answers. The session will also try and discuss various kinds of policy and pedagogical documentation available on digital literacy and critically debate their conceptualization and execution by juxtaposing them against various uses of ICTs on the ground by specific groups of users. This part of the discussion will draw upon scholarly and other kinds of documentation available on the topic and use them to evaluate various government and corporate initiatives to achieve digital literacy in India.

Plan

In keeping with the spirit of the conference, the three discussants will try to put forth empirical insights from their respective field situations and frame nuanced research and discussion questions on digital literacies at the margins of techno-cultural capital and/or access. Further, the discussion will be aided by specific readings and the insights drawn from them. The idea is to have a symmetrical, reciprocatory and anthropologically comparative conversation on questions of technology, materiality, access, meaning making, development and literacy, by moving back and forth between different field sites and interpretive frameworks.

Field Note I

The first discussant’s work on social media use in rural Rajasthan discusses socio-technical changes instituted by the introduction of ICTs despite their developmental failures. He claims that these changes have often been viewed from technologically or socially deterministic positions and that there are significant empirical gaps between such technocratic discourses and the grassroots experiences of technology. There is a growing usage of social and digital media in rural areas where ICT4D and e-Governance pilot projects have failed to meet their goals. Based on an ethnographic study of ICTs in two villages of Rajasthan, his work aims to situate social and digital media in a complex rural society and media ecology using a co-constructivist approach. Focusing on context-sensitive meaning making of ICTs, it will seek to contribute to an empirically sound discourse on media, technology and rural society in India.

Field Note II

The second discussant’s work on mobile phones and multimedia consumption among digitally marginalized users in Bangalore brings into focus the popular usage of ICTs, specifically mobile phones, among subaltern users. While such popular usage indicates a certain level of literacy already achieved by digitally marginal groups through mere exposure and peer learning, it is not sufficient to do away with all kinds of guided training required to make such users participate in informationalized environments. Her observations on mobile phone usage among subaltern users in Bangalore problematize the notion of digital literacy and invite us to think about it as a more layered and stratified concept. They raise questions such as: what constitutes digital literacy? Is it some complex use of gadgets learnt by mere exposure and peer knowledge, or an awareness of the social relevance of the technologies and knowledge about their appropriate deployment in different social contexts? While mere access and some nominal training might be helpful in equipping people with some knowledge about gadget use, her study points out that such initiatives are far from achieving the degree of digital literacy needed to make these people participate in new media ecologies. Thus it contests the claims of (1) organic literacy attained by mere exposure and peer sharing of technological knowledge and (2) literacy attained by current training programmes, which might equip the digitally marginalized with knowledge of technological use but not necessarily inform them about the context-relevant knowledge needed for its appropriate deployment.

Field Note III

The third discussant’s work on e-governance initiatives in an Indian state plans to return the gaze onto the bureaucracy itself and takes the conversation from the margins back to the centre. His work moves away from the target groups generally alluded to in programs such as the NDLM. It takes into account the struggles, anxieties, hopes and promises of/for a bureaucracy in coming to terms with a gradual but seemingly eventual shift from paper work to digital paper work. The users in this case are staff members tasked by the higher-level bureaucracy (who have little or no clue about it themselves) to learn a new tool and migrate all paper work to the digital domain. Many e-governance projects are spearheaded by corporate organizations, which in turn dictate the terms of the conversation on Digital Literacy even within the government. What impact does this have on how Digital Literacy is understood, articulated and executed in ICT4D programs within and without the government?

Readings

Terranova, Tiziana. 2004. Chapter 5: Communication Biopower, 131–157. Network Culture: Politics for the Information Age. London: Pluto Press.

Mazzarella, William. 2010. Beautiful Balloon: the Digital Divide and the Charisma of New Media in India. American Ethnologist, 37(4), 783–804.

Smith, Richard Saumarez. 1985. Rule-by-Records and Rule-by-Reports: Complementary Aspects of the British Imperial Rule of Law. Contributions to Indian Sociology 19(1): 153–176.

Audio Recording of the Session

IRC 2016: Day 2 #Digital Literacies at Margins : Researchers at Work (RAW) : Free Download, Borrow, and Streaming : Internet Archive

Session Team

Aakash Solanki is a PhD candidate in Anthropology and South Asian Studies at the University of Toronto. He is broadly interested in the genealogical study of states, statistics (stats), and computing. In the past, he has worked on the collection, classification, and management of information and its politics in colonial India. In addition to prior training in computer science, he has worked in government agencies in both the US and India, on data science projects in education, health, and skill development at the city, state, and federal levels. He has previously published in the journal South Asia and is a Contributing Editor to the journal Cultural Anthropology. He runs an interdisciplinary seminar series on Development at the University of Toronto.

Sandeep Mertia is a PhD Candidate at the Department of Media, Culture, and Communication, and Urban Doctoral Fellow at New York University.

Rashmi M is a doctoral student in the school of social sciences at National Institute of Advanced Studies (NIAS), Bangalore. Her research interest is in the area of media studies. Her M.Phil work at English and Foreign Languages University, Hyderabad was on Kannada websites, in which she looked at the cultural politics of regional vernacular languages especially Kannada in the English dominated world of the Internet. Her doctoral work at NIAS focuses on changing media consumption practices via mobile phones and other peripheral technologies among users with limited technological access and economic means in the city of Bangalore and surrounding areas.

#DigitalLiteraciesAtTheMargins was originally published in r@w blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 31. July 2020

MyData

MyData Online 2020 Conference Call for Proposals open from 30 July to 6 September

The main programme of the conference is content made by the MyData community through the Call for Proposals. The call started on 30 July. After the call ends on 6 September, all submissions will be reviewed twice by our community of reviewers. Notification of acceptance will be sent out on 18 September. The full conference programme will be published...

Read More

The post MyData Online 2020 Conference Call for Proposals open from 30 July to 6 September appeared first on MyData.org.

Thursday, 30. July 2020

Decentralized Identity Foundation

The Universal Resolver Infrastructure

A DIF-hosted resource for community development

Introduction

It has been almost three years since DIF began working on the Universal Resolver (GitHub: Universal Resolver) — a foundational piece of infrastructure for the Decentralized Identity ecosystem (see the original announcement). Since then, our vision of being interoperable across ledgers and DID methods has seen a lot of support. Thanks to community contributions, the Universal Resolver now supports around 30 different DID methods.

Today, we are happy to announce an updated set of instances where the Universal Resolver is deployed. One stable and one experimental version will be iterated, maintained, and hosted by DIF as a service to the community!🎉

While this is undoubtedly a useful resource for research, experimentation, testing, and development, it is important that it not be mistaken for a production-grade universal resolver. It should be pointed out that:

* This infrastructure is neither intended nor approved for production use cases, and nobody should rely on it for anything other than development and testing purposes.
* These two specific deployments are not production-ready.
* The preferred scenario continues to be that all DID-based information systems, run by a method operator or otherwise, production or otherwise, host their own instance of the Universal Resolver (or other DID Resolution tools).
* DIF reserves the right to limit or modify the performance of this free service in case usage for production, commercial, and/or malicious purposes is detected.

Two Deployments

The following two deployments are now available as a community service:

* https://resolver.identity.foundation/ — Hosted on IBM Cloud by DIF (thanks IBM!). While not considered production-ready, this instance is expected to be relatively stable. It will be tested before and after manual updates from time to time, with versioned releases.
* https://dev.uniresolver.io/ — Hosted on AWS by DIF. This instance is more experimental, will be updated frequently, and is connected to CI/CD processes. It may be down from time to time or have unexpected functionality changes.

Note: For backward compatibility, the original URL https://uniresolver.io/ will now redirect to https://dev.uniresolver.io/.
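As a sketch of how a client might talk to these deployments, the following assumes the Universal Resolver's documented HTTP interface (`GET {base}/1.0/identifiers/{did}`); the DID value is only an illustrative placeholder, and the non-production caveats above apply to any real request.

```python
import json
import urllib.request

# Base URLs of the two DIF-hosted deployments described above.
STABLE = "https://resolver.identity.foundation"
DEV = "https://dev.uniresolver.io"

def resolution_url(did: str, base: str = DEV) -> str:
    """Build the resolution endpoint URL for a DID.

    Assumes the Universal Resolver's HTTP interface:
    GET {base}/1.0/identifiers/{did}
    """
    return f"{base}/1.0/identifiers/{did}"

def resolve(did: str, base: str = DEV) -> dict:
    """Fetch and parse a DID resolution result (network access required;
    for development and testing only, per the notes above)."""
    with urllib.request.urlopen(resolution_url(did, base)) as resp:
        return json.load(resp)

# Example with a placeholder DID: show the URL that would be queried.
print(resolution_url("did:example:123"))
```

The same function works against either instance by passing `base=STABLE`, which is one reason to keep the base URL a parameter rather than hard-coding it.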

Documentation

See the following links for more information about testing, release, and deployment processes of the Universal Resolver:

* AWS Architecture: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/dev-system.md
* CI/CD Process: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/continuous-integration-and-delivery.md
* Branching Strategy: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/branching-strategy.md
* Release process: https://github.com/decentralized-identity/universal-resolver/blob/master/docs/creating-releases.md

Periodically, this standing work item is discussed in the Identifiers and Discovery Working Group, so that group’s recorded meetings and discussions on Slack and mailing list may contain further insight on the above topics.

The Universal Resolver Infrastructure was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 29. July 2020

MyData

Press release: MyData Operator 2020 status awarded to 16 organisations from around the world

MyData Global Press release Helsinki, July 29th 2020 Embargo 16:30 CEST    A bold new initiative to shape the new normal of data has arrived – MyData Operator 2020 status awarded to 16 organisations from around the world   As the CEOs of US data giants face inquiry in the country’s Congress, an alternative to...

Read More

The post Press release: MyData Operator 2020 status awarded to 16 organisations from around the world appeared first on MyData.org.

Tuesday, 28. July 2020

Decentralized Identity Foundation

Where to begin with OIDC and SIOP

and how today’s most powerful authentication mechanisms can be decentralized

It is a mouthful of an acronym: it stands for OpenID Connect Self-Issued OpenID Provider. Unless you are familiar with the terminology of the OpenID community, knowing what the acronym stands for doesn’t illuminate much, but rest assured, this is one of the most exciting developments to support the widespread adoption of Verifiable Credentials across the web.

This post will explain why it is exciting and what kinds of adoption it could galvanize. First, we will cover OpenID & OpenID Connect, then touch on how decentralized identity (or “self-sovereign identity”) relates to “authentication” and “user accounts,” and finally how they can work together, with two educational videos along the way.

While there is substantial and sophisticated prior art, particularly in the Aries community, for integrating verifiable credentials with OpenID Connect, using DIDs for OIDC is still emerging and approaching its first stable specification in the coming months. This is the culmination of substantial collaborative work to develop this bridge between the DID Authentication Working Group at DIF and various interlocutors at the OpenID Foundation. This post breaks down key elements of this development and shares more resources if you want to explore it further.

What is OpenID & OpenID Connect

The first seeds of OpenID were sprouted at the very first Internet Identity Workshop in 2005. All the companies interested in URL-based protocols got together and collaborated on their various models for designing authentication for users against URLs they controlled, like their personal blogs. This protocol has evolved, and the latest iteration is based on sophisticated OAuth (Open Authorization) standards and tooling.

The basic and most typical flow used by OpenID Connect can be described as follows: an individual, who in this context can be called a “user” of the identity system, first gets a fresh proof of authenticity of their digital identity. This unique proof is minted in the course of an interaction with a service called an “Identity Provider” (IP), e.g. Google or Facebook. This proof usually takes the form of a “token,” a single-use cryptographic access code linked to the corresponding identity record at the IP. The user then takes that token to a second site that they are going to log in to, which is called the “Relying Party” or RP, which can then trust they are dealing with the same person identified (usually very strongly) by the Identity Provider.

OpenID Login Flow
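As a minimal sketch of the Relying Party's side of this flow, the claim checks below are what an RP performs on a decoded ID token after verifying its signature (omitted here). The issuer, audience, and claim values are hypothetical, and a real RP would use a vetted OIDC library rather than hand-rolled checks.

```python
import time

def check_id_token_claims(claims: dict, expected_issuer: str, client_id: str) -> bool:
    """Basic claim checks an RP performs before trusting an ID token."""
    aud = claims.get("aud")
    audiences = [aud] if isinstance(aud, str) else (aud or [])
    return (
        claims.get("iss") == expected_issuer    # minted by the expected Identity Provider
        and client_id in audiences              # intended for this Relying Party
        and claims.get("exp", 0) > time.time()  # not yet expired
    )

# Hypothetical decoded token claims:
claims = {
    "iss": "https://accounts.example.com",
    "aud": "my-relying-party",
    "sub": "user-42",
    "exp": time.time() + 300,
}
print(check_id_token_claims(claims, "https://accounts.example.com", "my-relying-party"))  # True
```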

The teams of professionals that created OpenID Connect had enough imagination to anticipate more complex use-cases that weren’t immediately needed by the commercialized web of 2005, but for which the technical foundations were still worth laying. Among these was the clear idea that users would, in some cases, prefer or need to bring their own identity with them rather than a pointer to a record on an IP’s server. This identity would thus be “self-issued,” a capability they designed into the OpenID core specification.

The vision of DID-SIOP is a way of bringing decentralized identity concepts into alignment with the ideas of “self-issued” portable identity that the original OpenID innovators had. It was good that they included and preserved this underutilized capability in their immensely popular and internet-powering framework, which is the basis of modern social login (i.e., “Sign in with Google/Facebook/etc.”). After all, it would have been simpler not to, but enough of the designers and thinkers involved anticipated much of what has developed in parallel: the decentralized identity technology that DIF serves to support.

How do decentralized identity systems work?

In this conceptual framework, the “Identity Provider” has been cut down a notch, and is instead referred to as a mere “Issuer” (of credentials and information, perhaps of identities over which it has less control). Similarly, the “user” is defined less by borrowed tools and more by owned ones, assuming the title of “Holder” of information and identity, whether issued or self-issued. The “verifier” relies less on the identity provider, choosing instead to verify information and identities presented by their holder on its own terms (with some cryptographic assurances about the issuer).

One interesting difference between traditional OIDC and decentralized systems is that in the latter, all parties have identifiers that make it possible to verify signatures; it is hard to tell from a DID whether it corresponds to an institution, an individual, or an inanimate object, because it could be any or all of those. Whatever it represents, it points to ways of verifying signatures with so-called “key material” (public keys, plus hints on how to classify and use them). Most often, this happens by looking up the material on a distributed ledger, but this is not, strictly speaking, definitive of the framework.
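The "key material" lookup described above can be sketched like this. The document shape loosely follows the W3C DID data model (a DID resolves to a DID document whose verificationMethod entries carry public keys), but the DID, key id, and JWK values here are hypothetical placeholders.

```python
# A hypothetical, minimal DID document: resolving the DID yields this
# record, whose verificationMethod entries carry the public key material
# a verifier needs in order to check signatures.
did_document = {
    "id": "did:example:123",
    "verificationMethod": [
        {
            "id": "did:example:123#key-1",
            "type": "JsonWebKey2020",
            "controller": "did:example:123",
            "publicKeyJwk": {"kty": "EC", "crv": "secp256k1", "x": "<x>", "y": "<y>"},
        }
    ],
    "authentication": ["did:example:123#key-1"],
}

def key_material(doc: dict, method_id: str) -> dict:
    """Return the public key (JWK) referenced by a verification method id."""
    for vm in doc.get("verificationMethod", []):
        if vm["id"] == method_id:
            return vm["publicKeyJwk"]
    raise KeyError(method_id)

print(key_material(did_document, "did:example:123#key-1")["kty"])  # EC
```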

The key material from an OIDC issuer proves the veracity of whatever information or identity is being presented in a way that is tied back to that issuer by similar key-material guarantees, and a self-issued OIDC token works the same way. (In a self-issued OIDC credential, the holder’s key material is used in place of an institutional issuer’s.)

How OpenID and Decentralized ID can fit together

One of the big challenges for any new technology that needs an identity system is getting adoption of the needed components so the system can actually work at a sustainable scale. This usually requires buy-in from various kinds of actors in an ecosystem: at the very least, it needs a critical mass of users/holders, IPs/issuers, and RPs/verifiers, each maintaining their end of the infrastructure and “keeping the lights on,” as it were.

This is exactly where the two systems can really help each other: achieving and maintaining a critical mass of all three, as the distribution of more and less centralized solutions changes, and self-issued credentials come to be accepted in theory and in practice. OpenID Connect has a large “install base”: there are literally millions of websites running OpenID Connect tooling as the authentication mechanism at their “front door” for users. Indeed, while a vanishingly small portion of internet users have ever heard of OIDC, it is the nuts and bolts of the most universal and familiar UX and user flow of the contemporary commercial web, including online banking and government services.

OIDC-SIOP leverages the code that OpenID Connect relying parties already have in place across all these millions of sites, and the lion’s share of the 10,000 most used websites. Think of the screen that reads, “Log in with Google / Facebook / Twitter / Github / etc.” OIDC-SIOP enables organizations to ask for Verifiable Credentials that an individual holds in their wallet instead of a token from Google / Facebook / Twitter / Github. These can be single-use access codes with cryptography built in, or more reusable credentials, or richer credentials containing various kinds of information otherwise requested from an IP. This mechanism is provided by the Self-Issued OpenID Provider flow described by the core specification:
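As a rough sketch of the self-issued token at the heart of that flow: per the OpenID Connect core spec, a self-issued ID token carries the reserved issuer value `https://self-issued.me`, with the holder's own key material standing in for an Identity Provider's. The DID, audience, and algorithm below are hypothetical placeholders, and the signature step (made with the holder's private key) is omitted.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used in JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

now = int(time.time())
header = {"alg": "ES256K", "typ": "JWT"}  # hypothetical algorithm choice
payload = {
    "iss": "https://self-issued.me",   # reserved self-issued issuer value
    "sub": "did:example:holder123",    # hypothetical holder DID
    "aud": "https://rp.example.com",   # the Relying Party
    "iat": now,
    "exp": now + 600,
}

# header.payload of the JWT; a real token appends ".<signature>" made
# with the holder's private key.
unsigned = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
print(payload["iss"])  # https://self-issued.me
```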

If successful, this will be a huge win for decentralized identity, because it addresses the perennial “Relying Party” problem of adoption: how do you get relying parties to adopt a new technology, install and trust a new “doorway,” and adapt their security and business processes to a new set of strengths and weaknesses? In the popular imagination and even in much of the technology press, scaling a business or a technology is often imagined as a quest for users, but they are often the easiest stakeholder to get on board, particularly for something convenient and powerful. The relying parties (or, in economic terms, the “demand side” of identity) are often a much harder business problem, and in this case, no “big lift” is required of the verifiers or “relying parties” consuming a new kind of authentication credentials, because they only have to make minor adjustments to the nearly universal OIDC tooling they already have.

Further Reading

The purpose of the DID Authentication Working Group at DIF is to design, recommend, and implement authentication protocols that rely on open standards and cryptographic protocols tailored to today’s and tomorrow’s systems for handling DIDs and DID documents, the primitives of decentralized identity. In the last six months, the group has been actively working on the OIDC bridge, and they presented their latest work at DIF’s June 2020 virtual “face to face” meeting:

Here is a link to their draft specification, nearing stability and ratification at the time of press: https://identity.foundation/did-siop/. For an explanation of some of the design principles and conceptual fine points, see the article “If You Build an Island You Need a Boat” from DIF member Mattr Global (NZ).

Finally, if you want to go even deeper into the technical nitty-gritty (particularly if you are unfamiliar with OIDC best practices), watch this video of a two-hour presentation about OIDC-SIOP held jointly by the DID Auth group and the OpenID Foundation on June 25, 2020.

Where to begin with OIDC and SIOP was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.


ID2020

Grameen Foundation Joins the ID2020 Alliance

ID2020 is proud to announce that Grameen Foundation, one of the world’s leading nonprofits using technology to alleviate poverty and hunger, has joined the ID2020 Alliance.

“Over the past 23 years, Grameen Foundation has demonstrated what we can accomplish when we empower women to help their families escape the cycle of poverty and hunger,” said ID2020 Executive Director, Dakota Gruener. “Given Grameen Foundation’s enthusiastic embrace of technology to enhance the delivery of financial and other development services, we are delighted to welcome them to the Alliance and look forward to collaborating with them as they continue to incorporate digital ID into their programs.”

Grameen Foundation was inspired and encouraged by Nobel Laureate Professor Muhammad Yunus, the founder of Grameen Bank and a global leader in the fight against poverty. Rooted in the microfinance movement, Grameen Foundation works across Africa, Asia, the Americas, and the Middle East to extend financial and other services to the world’s poorest people.

Today, Grameen Foundation embraces a multidimensional approach to the complex problems of poverty, using technology to strengthen