Last Update 3:34 PM May 18, 2021 (UTC)

Organizations | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!

Tuesday, 18. May 2021

aNewGovernance

Leveraging the trust of nurses to advance a digital agenda in Europe

Published in the European Commission's Open Research Europe, this is the article that Eric Pol, Chairman of aNewGovernance, had the privilege of being invited to co-author by the European Federation of Nurses. It was a pleasure to collaborate with an incredible team: Paul De Raeve and Elisabeth Adams (EFN), Patricia Davidson (Johns Hopkins University), Franklin A. Shaffer (CGFNS International) and Amit Kumar Pandey (Socients AI and Robotics).

This article brings together insights from a unique group of stakeholders to explore the interaction between AI, the co-creation of data spaces and EHRs, and the role of the frontline nursing workforce. We identify the pre-conditions needed for successful deployment of AI and offer insights regarding the importance of co-creating the future European Health Data Space.

aNewGovernance is now looking forward to making this paradigm shift a reality through the implementation of a fair and human-centric data-infrastructure.

See article

Monday, 17. May 2021

SelfKey Foundation

SelfKey Partners With Moonpay: Buy/Sell Cryptocurrencies Using SelfKey Wallet

SelfKey partners with MoonPay. The MoonPay-integrated SelfKey Wallet will allow its users to buy cryptocurrencies, including KEY, using fiat currencies.

The post SelfKey Partners With Moonpay: Buy/Sell Cryptocurrencies Using SelfKey Wallet appeared first on SelfKey.


Velocity Network

The Newest Economy: Welcome to the Credential Currency Revolution

Credential Engine (CE) and the Velocity Network Foundation (VNF) are working together on a system that links education and training providers with credential earners and employers.

The post The Newest Economy: Welcome to the Credential Currency Revolution appeared first on Velocity.

Sunday, 16. May 2021

Velocity Network

Interview with Jim Owens, President and CEO of Cisive

We sat down with Jim Owens, President and CEO of Cisive, a global provider of compliance-driven human capital management and risk management solutions, and a Founding Member of the Velocity Network. See why Jim is so passionate about working with the Velocity Network. The post Interview with Jim Owens, President and CEO of Cisive appeared first on Velocity.

Me2B Alliance

Protected: Rebuilding Respectful Relationships in the Digital Realm


Friday, 14. May 2021

Me2B Alliance

The Policymaker’s Guide to Respectful Technology in Legislation


The concept of better privacy practices is not new – we’ve been talking about it since the birth of the internet. But what exactly does “better privacy” mean, and how do we get there?

Policymakers have been working on legislation oriented around privacy for some time. While there have been some steps in the right direction, significant change won’t happen until we broaden the lens. What most people want but don’t have the terms to describe is respectful digital relationships. In the same way there is an unspoken code for respectful behavior in physical-realm relationships, this same type of behavior is just as essential when engaging with an online service or website.

Today, MIT Computational Law Report published a framework for policymakers to understand and advocate for more respectful digital relationships based on the Me2B Alliance vision.

The article, “Rebuilding Respectful Relationships in the Digital Realm,” penned by Elizabeth Renieris, an internationally recognized expert in law, policy and digital privacy, describes the Me2B Alliance’s approach to digital engagement with a focus on the legal and technological foundations necessary to rebuild digital relationships based on an ethos of mutual respect.

Renieris’ paper proceeds in five parts:

1. Digital relationships today and surveillance capitalism. Examines the Me2B Relationship in context, including in the macro-context of a phenomenon known as surveillance capitalism, which distorts and undermines the very ethos of the Me2B Relationship.

2. Today’s failed opt-out paradigm. Outlines the failures of the prevailing “notice and choice” paradigm for digital interactions, including its legal and practical defects, as well as how such defects result from a failure to account for the effects of surveillance capitalism.

3. An alternative path forward. Maps the expectations we have in the physical world onto the digital world, including through new legal foundations and innovative uses of technology to realign expectations and reality.

4. Digital interactions in this new paradigm. Defines interactions according to the relevant Me2B Relationship state.

5. Conclusions and recommendations. Identifies next steps for research and exploration by the Alliance.

Policymakers take note: this is where privacy legislation needs to go in order to turn these small steps into leaps in the right direction. The vocabulary and concepts defined in this paper will help to define what is much more complex than “privacy” alone. We urge policymakers to internalize these concepts and start infusing respect into privacy-related legislation.

Read the full article, “Rebuilding Respectful Relationships in the Digital Realm.”

Elizabeth M. Renieris is a data protection and privacy lawyer (CIPP/E, CIPP/US), the Founder & CEO of HACKYLAWYER LLC, a Technology & Human Rights Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School, and an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University. Her paper was commissioned by the Me2B Alliance and originally published in the MIT Computational Law Report on May 14, 2021.


eSSIF-Lab

2 eSSIF-Lab calls: 1 webinar

Are you applying to ongoing eSSIF-Lab Open Calls? Let us help you: tune in to our webinar to get all the information and the tips for crafting an effective proposal.

When?
On Thursday, 27th of May 2021 at 14:00 (Brussels Local Time), the technical coordinator (TNO) and the open call manager (FBA) will be answering all your questions about the application process, the programme and the value of eSSIF-Lab's Open Calls, live on air.

Where? 

The webinar is online, free and open to all interested people.

REGISTER NOW!

More info on the Open Calls:

1. Infrastructure-Oriented Open Call

This call targets open source technical enhancements and extensions to the eSSIF-Lab Framework which fall within the SSI concept (i.e. technologies which allow individuals to control their electronic identities and guard their privacy).

Deadline: 7th of July 2021 at 17:00 (Brussels Local Time)
Funding: up to 155,000 €

2. Second Business-oriented Open Call

Solutions proposed for this open call should be business solutions that make it easy for organizations to deploy and/or use SSI and must fall within the SSI concept (i.e., technologies which allow individuals to control their electronic identities and guard their privacy).

Deadline: 7th of July 2021 at 17:00 (Brussels Local Time)
Funding: up to 106,000 € (for those best in class).

 

Join the webinar and find out all you need!


2nd Business-oriented Call ongoing

Over the past few years, the question of how we can protect our identity online has become increasingly pressing. The Self-Sovereign Identity paradigm stepped in with the idea that everyone should be in charge of their own personal digital information, and that it is the data owner who should decide how this data is used and by whom.

Self-Sovereign Identity (SSI) promises to empower European citizens with new means to manage privacy, to eliminate logins, and to enjoy much faster and safer electronic transactions via the internet as well as in real life.
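To make the SSI pattern behind these calls concrete, here is a minimal sketch of the issue/hold/verify flow: an issuer signs a credential, the holder keeps it in their own wallet and presents it when they choose, and a verifier checks the issuer's proof without any central login. The identifiers, claims and the symmetric HMAC key below are invented for illustration; this is not the eSSIF-Lab framework nor a production Verifiable Credentials implementation.

    # Minimal, illustrative sketch of the SSI flow: an issuer signs a credential,
    # the holder stores it in a personal wallet, and any verifier can check the
    # issuer's proof without a central login. Real SSI stacks use asymmetric keys
    # and W3C Verifiable Credentials; the symmetric HMAC here only keeps the
    # example dependency-free.
    import hashlib, hmac, json

    ISSUER_KEY = b"demo-issuer-key"  # stand-in for an issuer's signing key

    def issue(subject: str, claims: dict) -> dict:
        payload = {"issuer": "did:example:university", "subject": subject, "claims": claims}
        body = json.dumps(payload, sort_keys=True).encode()
        return {"payload": payload,
                "proof": hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()}

    def verify(credential: dict) -> bool:
        body = json.dumps(credential["payload"], sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, credential["proof"])

    # The holder's wallet keeps the credential and presents it when needed.
    wallet = [issue("did:example:alice", {"degree": "MSc", "year": 2020})]
    print(verify(wallet[0]))  # True: the verifier trusts the issuer's key, not a portal

In a real deployment, issuer, holder and verifier each hold their own keys and exchange standardised credential formats; building and applying such components is what the eSSIF-Lab calls fund.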

Strengthening internet trustworthiness with electronic identities is also the aim of eSSIF-Lab, which has just launched its Second Business-oriented Open Call on 7 May 2021.

2nd Business-oriented Open Call

This call is open for proposals to develop and demonstrate working SSI solutions for a real-world, domain-oriented problem or opportunity, in prioritized eSSIF-Lab areas (Health Tech, e-Government and Education), or in the Open Disruptive Innovation track (which covers innovative bottom-up projects outside these verticals).

Proposed solutions must be commercial, competitive and sector-specific SSI applications for products and services. They are expected to leverage the added value of functionalities previously developed within eSSIF-Lab, to include the issuing, exchange and consumption of SSI credentials as a core element, and to have high technology and investment readiness levels.

The proposals selected in the Second Business-oriented Call will be invited to join an 8-month acceleration programme, including business and technical support to integrate eSSIF-Lab and other SSI technology with market propositions. 


Who can apply?

This call targets SMEs (including startups), not-for-profit entities (such as foundations or associations), or collaborative initiatives (as micro-consortia of these entities), registered in a Member State of the EU or in a H2020 associated country.

Deadline: 7th of July 2021, at 17:00 (Brussels Local Time).

Funding: up to 106,000 € per project (for those best in class).

APPLY NOW!

Infrastructure-oriented Open Call

Remember that eSSIF-Lab's Infrastructure-oriented Open Call is also still ongoing and that the deadline to submit applications has been extended. This call looks for open source technical enhancements and extensions to the eSSIF-Lab Framework which fall within the SSI concept (i.e. technologies which allow individuals to control their electronic identities and guard their privacy).

Open source SSI components developed as a result of this open call will be applied by other participants in eSSIF-Lab and, hence, applicants shall be willing and able to work in an agile way with lots of communication and interaction with other participants in the eSSIF-Lab ecosystem.

Who can apply?

Innovators in the SSI domain (such as outstanding academic research groups, hi-tech startups, SMEs, etc.), legally established/resident in a Member State of the EU or in a H2020 associated country.

Deadline: 7th of July 2021 at 17:00 (Brussels Local Time)

Funding: up to 155,000 € per project.

APPLY NOW!

EdgeSecure

Welcome to Our New CIOs: Putting the Spotlight on Edge Member Institutions


Experience the article in View From The Edge magazine.

Caldwell University

Anthony Yang
Assistant Vice President & Chief Information Officer

Anthony Yang has been serving as Assistant Vice President and Chief Information Officer at Caldwell University since May 2020. Prior to his role at Caldwell, he served at the University as Executive Director, IT Operations and Digital Communications, where he managed the University website, social media, network infrastructure, media services, and technology support services.

With over 20 years of teaching and training experience in the sciences and technology, Anthony has honed the ability to communicate with industry professionals and distill technical concepts into a non-technical format that anyone can understand. Anthony has 15+ years of experience managing systems administration, IP-based networks, enterprise resource planning (ERP), web content management systems (CMS), search engine optimization (SEO), search engine marketing (SEM), social media campaigns, and database-driven systems. He has a detailed understanding of WAN and LAN networking principles and protocols, cybersecurity, web development and design, customer service, AV technology, biology, general chemistry and organic chemistry.

Anthony received his Bachelor of Arts degree in Psychology from Rutgers, The State University of New Jersey-New Brunswick.

Mercer County Community College

Inder Singh
Vice President for Information Technology Services

Inder Singh serves as Vice President of Information Technology Services at Mercer County Community College (MCCC) and is responsible for all aspects of information technology services at the College, supporting students, faculty, staff and the community.

Prior to joining MCCC, he spent 30 years in information technology in CIO roles in higher education. Most recently, he served as the Assistant Vice President/Chief Information Officer for Springfield Technical Community College in Springfield, MA. Prior to his post at Springfield, he worked with both private and public institutions as a CIO and in leadership roles, providing technology solutions and support services to improve college enrollment-to-graduation rate goals.

In the upcoming academic year, he is planning and implementing the following key initiatives to support and improve teaching and learning outcomes, enrollment management, retention, and secure, flexible IT infrastructure services:

Live streaming technology implementation to support smart classrooms for teaching and learning, and to stream athletics, culinary demos, theater and college-wide events.

Implementing cloud-based CRM technology to improve retention in the advising and workforce development areas.

Upgrading network infrastructure (LAN/WAN/Internet/WiFi) for flexibility and scalability, and strengthening IT security to support academic and administrative goals.

Implementing communication and collaboration technologies to support enrollment, online learning and business process reviews to improve operating efficiencies and IT services.

Inder received an MBA in Management from Rensselaer Polytechnic Institute in Troy, NY, holds numerous technical certifications, and is a member of various professional organizations.

Middlesex College

John Mattaliano
Acting Executive Director, Chief Information Officer

John Mattaliano is a member of the Middlesex College (MC) Information Technology Department. He joined the team in 2018 and is currently serving in the capacity of Acting Executive Director/CIO.

Before joining Middlesex College, John was the Vice President of Information Technology for a large NYC financial institution. In this role, John was instrumental in planning, designing and implementing a new infrastructure platform that was scalable, reliable and secure. The result of these efforts provided employees with the flexibility and freedom to operate from remote settings, domestic and foreign.

When John joined Middlesex College, he utilized successful techniques from his previous business environment in collaboration with Middlesex College’s team building and diversity values. John was instrumental in leading the creation of a tactical and strategic infrastructure plan that is the current roadmap for Information Technology progression. He is currently revising the strategic and tactical Information Technology plan to ensure all of Middlesex College is provided with the tools and information to continue the ‘Student Learning First’ philosophy. 

John’s vision is to continue to support the Middlesex College community – students, faculty, administrators and leadership – with the future implementation of targeted projects:

Improved network and wireless connectivity, with enhanced wireless connectivity allowing seamless connectivity for all.

ERP assessment and readiness study to improve information flow between students, faculty and administrators.

Added virtualization and the buildout of a security practice for Middlesex College.

John is working towards a Master’s Degree from Southern New Hampshire University and has a Bachelor’s Degree in Computer Science from New Jersey Institute of Technology. 

New Jersey Institute of Technology

Kamalika Sandell
Vice Provost and Chief Information Officer

Kamalika joined New Jersey Institute of Technology (NJIT) in July 2020 from American University, where she served as the Associate Chief Information Officer.

Kamalika has 25+ years’ experience leading initiatives with P&L responsibilities from $20M-$200M in global companies with ambitious targets across multiple industries including financial services, higher education, manufacturing, industrial, and retail – representing companies including CapitalOne, Cummins, and PepsiCo. She has built organizations ground-up, led divisions through multiple rounds of modernization, managed distributed teams, transformed practices incorporating agile, and established a culture of learning where innovation is an operating norm. 

In her role at NJIT, Kamalika is the principal architect of enterprise technology strategy for the University, developing and delivering on critical technology services that enable the delivery of NJIT’s mission. Additionally, her role includes responsibility for developing, championing and implementing NJIT’s technology vision, strategy and supporting roadmaps aligned with the 2025 strategic plan. Kamalika also works closely with faculty, students, and technology teams to improve and innovate learning management solutions and expand online learning capabilities. She believes there is great potential at NJIT to develop a high-impact and high-functioning Information Services and Technology Division, instituting governance, incorporating innovation, and creating a culture of partnership, service and excellence. 

She is a member of several national and international committees and a frequent speaker on digital, analytics, and governance in organizations.

Kamalika received a Bachelor’s Degree in Computer Science and Engineering from Jadavpur University in India. She holds a Master’s Degree in Organization Development from American University.

Princeton Theological Seminary

Jeffrey Sieben
Chief Information Officer

Jeffrey Sieben has been serving as Chief Information Officer at Princeton Theological Seminary since August 2017. In his role at Princeton Theological, he focuses on engagement, content distribution, business analysis, data systems, enterprise systems, visualization-driven reporting and innovative, scalable web technologies.  

Prior to his arrival at Princeton Theological, Jeffrey served in a variety of functions at Columbia University for nine years, including roles as Senior Director, Information and Web Technology and Director of Information, Online and Educational Technology.

Jeffrey has a strong track record of research, design and deployment of secure innovative technology platforms serving large and varying constituencies and equally diverse needs, derived from deep business and pedagogical understanding. He also has the know-how to design, implement and manage highly diversified technology environments and platforms as well as the security experience to protect highly confidential information.

He received his Master of Science in Information Systems from Fachhochschule Pforzheim – Hochschule für Gestaltung, Technik und Wirtschaft and a Bachelor of Arts in Music Production and Engineering from Berklee College of Music.

Rider University

Douglas McCrea
Associate Vice President for Information Technology and Chief Information Officer

Douglas McCrea was named Rider University’s Associate Vice President for Information Technology and Chief Information Officer in 2019. Prior to joining Rider, he spent 19 years at Rutgers University, where he served in various IT roles with increasing responsibility. He most recently served as the senior director of information technology on the New Brunswick campus where he was responsible for IT systems, end-user technology, software development, content management, service framework, risk management and regulatory compliance. In addition, he orchestrated IT governance and cybersecurity implementation, as well as project management, strategic planning, budget management and allocation for all IT initiatives.

At Rider, the CIO role has been elevated to a Cabinet position, and his responsibilities include aligning technology to academic and administrative strategic objectives. Additionally, Douglas oversees a complex portfolio of information technology services on both campuses. These include leading, administering, planning, and budgeting areas of administrative and academic computing, enterprise systems, classroom and conference technologies, and many others.

McCrea received a Bachelor of Science in Advanced Plant Biology from Ohio University. He has also completed the yearlong Big Ten Academic Alliance IT Leadership Program.

Stevens Institute of Technology

Tej Patel
Vice President for Information Technology and Chief Information Officer

Tej Patel joined Stevens Institute of Technology in August 2020 as Vice President for Information Technology and Chief Information Officer.  A forward-looking leader with more than 15 years of higher education and corporate information technology experience, he is responsible for formulating a unifying IT vision and strategy aligned with Stevens’ overarching mission.

Before embarking on his role at Stevens, Tej held several leadership positions at the University of Pennsylvania, including Penn Nursing Chief Information Officer and IT Director of Systems and Infrastructure Service at the Annenberg School for Communications. At the University of Pennsylvania, Tej advanced the goals of the University’s strategic plan by aligning it with the IT strategic plan, which included developing and implementing a digital exostructure strategy and culture for IT, including a four-year roadmap addressing governance, judiciously managing multiple multimillion-dollar IT budgets, and developing a successful team to digitally transform the organization. Tej implemented Online Learning and Teaching, Research-as-a-Service, Cloud First, Clinical Education Platform, and EPIC for Education programs for Penn Nursing. He led programs to generate an external revenue stream while maintaining the education and research mission via a broad community in the healthcare ecosystem. He provided IT leadership to a $50+ million hospital and led IT merger and acquisition efforts for the school. Tej also co-chaired the IT Roundtable for the University of Pennsylvania.

Tej has experience providing value with state-of-the-art IT services, enhanced customer-centric services, and complex problem-solving; and advancing cultures of innovation, digital product management, and change management skills for all enterprise IT services. Tej is a serial technologist deeply interested in building and leading IT organizations focused on advanced technology, technology strategy, and innovation; and delivering connected services and customer experiences.

Tej earned a Bachelor of Science in Business Administration with a concentration in Management Information Systems from Montclair State University and is a candidate for a Master of Science in Organizational Dynamics at the University of Pennsylvania.

Experience the article in View From The Edge magazine.

The post Welcome to Our New CIOs: Putting the Spotlight on Edge Member Institutions appeared first on NJEdge Inc.


Edge’s Role in the Rising Tide of Technology Innovation


Experience the article in View From The Edge magazine.

A little over twenty years ago, Edge was formed to provide efficiencies in network services in an emerging state of connectedness that formed a foundation for how every institution and organization now operates. Our digital connectedness has since advanced into both a necessity and a competitive advantage in a global society. In concert with its original mission, vision, and charter as a technology organization, Edge has grown into a northeastern United States leader in networking and digital transformation for higher education, government, and healthcare. Edge’s membership spans New York, New Jersey, Pennsylvania, Delaware and Virginia. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable, thought-leading, purpose-built advanced connectivity, technologies and services.

As it always has been, Edge is dedicated to providing the individuals and organizations we serve with the access, expertise and organizational capacity necessary to achieve their strategic goals and objectives through our technology products and services. Knowing that in today’s world we still face challenges with digital fluency, digital inclusion, and the creation of a level playing field for e-commerce, eSports, e-education, telemedicine, digital government, and numerous other societal domains, Edge continues to expand for the common good into areas where underserved populations still reside. Leveraging our world-class network infrastructure, as well as our company of seasoned, highly experienced technology thought leaders and professionals, Edge is expanding organically into Delaware, Pennsylvania, New York, and Virginia.

Edge is unique in its technology investment, portfolio of service offerings, and associated capabilities that best allow institutions and organizations to grow and thrive. Applying the theory of comparative advantage to Edge’s high-tech ecology benefits the common good at the state, regional, and potentially national level: viewed in the context of an economic model, institutions and organizations have a comparative advantage over others in how they operate if they can operate at a lower relative opportunity cost, i.e., at a lower relative marginal cost. Applied to technology as a service industry, the theory of comparative advantage maintains that institutions and organizations benefit from the efficiency gains that arise from differences in how they interoperate with technological transformation and the particular insourcing of related services.

As the world continues to change, a level playing field with respect to institutional and organizational tech-ecologies can only be accommodated by nonprofit technology providers such as Edge. In the coming years, Edge’s network, professional, and managed services, together with consortium approaches to procurement, will benefit institutions and organizations in the region while creating sustainable cost structures for Edge members to advance the quality and quantity of digital transformation and cyberinfrastructure.

Experience the article in View From The Edge magazine.

The post Edge’s Role in the Rising Tide of Technology Innovation appeared first on NJEdge Inc.


Three-Year Strategic Plan


Experience the article in View From The Edge magazine.

Edge’s 2021-2024 Strategic Plan includes four major pillars of progress that feature financial performance, membership value, internal processes, and organizational capacity. Working to mature the Edge organization, we understand that the future of research and discovery is a connected and networked business. Edge will consistently strive to maintain and advance its prominence as a committed and capable research partner. Moreover, we understand that the post-pandemic world is evolving into the “next normal” and also changing in response to the fourth industrial revolution’s impact on education, government, and healthcare. Digital Transformation (Dx) is now a success imperative for our member institutions and organizations; thus Edge will strive to serve as a catalyst and collaborator for Dx.

Edge’s pursuit of sustainable and affordable technology products, services, and solutions as a center of excellence will continue to evolve in support of the common good. Leveraging Edge’s last three years of success in building a supply chain model for procurement and making available solutions that support the entirety of the business lifecycle, these programs remain continuing strategies with new objectives in this Plan. Evolution of shared-services, subscription and team-based services, and e-procurement are represented as Plan outcomes and will work in concert with Edge’s efforts to broaden its reach and impact among a growing list of member institutions and organizations.

Finally, Edge will continue in its pursuit of providing insight and business intelligence by ensuring that its members are informed and aware of the national conversations regarding repositioning and reinforcing the role of information technology leadership as an integrated strategic partner of institutional leadership in supporting institutional mission and vision. Considering the future of education, government, and healthcare and the need for transformation and business continuity, remote working, teaching, and learning, and the potential for disruption from catastrophic events, Edge’s 2021-2024 Strategic Plan subscribes to this future vision in continuing to build in concert with the needs of its constituents. Please give it a read at njedge.net/publications.

Experience the article in View From The Edge magazine.

The post Three-Year Strategic Plan appeared first on NJEdge Inc.


Edge’s New Web Presence


Experience the article in View From The Edge magazine.

Serving as Edge’s primary brand statement and central repository of information for current and future members, our website remains a critical component in how we communicate our value proposition. Moreover, Edge’s website is a statement of culture, mission, vision, and attitude of service to our membership. As we use our website to tell our story, improved navigation, messaging, visuals, and taxonomy all combine to support our evolving story. Over the past four years, since the launch of Edge’s second generation website, much has changed. The Edge organization has built on success and has expanded service offerings through a technology solutions center approach. We’ve strengthened our abilities to effect economies of scale in procurement, serve as fiduciary for affiliated organizations, drive economic advancement, and serve as a national model of excellence for research and education networking and computing.

This year Edge unveiled its third generation website to represent the current evolutions, maturities, and successes in achieving thought-leading solutions for digital transformation in higher education, government, and healthcare. The soft launch of Edge’s new website has organically accelerated as positive feedback and affirmation have continued across the state and region. As the region’s nonprofit technology partner, Edge is excited to tell its story and represent an evolving brand that validates a twenty-year journey from the last century to today. Our new website captures our heritage and also expresses our future. As we look forward to the next four years and the accomplishment of our next Strategic Plan, our website will serve as a critically important asset in how we communicate with our members and the world.

We invite you to explore Edge’s new web presence.

Experience the article in View From The Edge magazine.

The post Edge’s New Web Presence appeared first on NJEdge Inc.


EdgeMarket: The “Easy Button” for Technology Procurement in Higher Education


Experience the article in View From The Edge magazine.

As the region’s leading nonprofit technology solution center, Edge is committed to connecting the higher education community with an ecosystem of solutions that support digital transformation. This commitment includes not only Edge’s own services delivered at nonprofit rates, but also third party vendor solutions that meet the needs of students, staff, and faculty. With this in mind, EdgeMarket was designed to be an organization’s “easy button” for technology procurement. Operating under a supply chain model informed by member needs, EdgeMarket issues requests for proposals (RFPs) on behalf of the entire higher education community in order to reduce the procurement burden on each institution and accelerate the onboarding and impact of essential technologies.

The RFP process can be onerous, and with already stretched timelines and budgets, many organizations lack the time and resources to devote to researching and selecting new technology solutions. EdgeMarket minimizes the amount of effort required to research, analyze, and procure essential tools through three purchase methods: Lead Agency status, a cooperative pricing system, and shared services agreements. As a state-authorized Education Services Corporation, Edge holds Lead Agency status. Under this designation, Edge can issue RFPs and negotiate pricing contracts for strategic partnerships on behalf of its membership, enabling members to obtain products that are in high demand at the most cost-competitive pricing available. Edge also holds designation as a Cooperative Pricing System and can issue RFPs on behalf of the entire public sector community in New Jersey and beyond.

Recently Awarded RFPs
Edge provides a broad array of technology related products and services, including solutions for software applications, cybersecurity, data analytics, cloud and data services, and video conferencing. EdgeMarket continually has a number of active procurements underway and upcoming procurements that are being prepared, researched, or considered. All services and solutions available through Edge can be viewed on the EdgeMarket eProcurement portal. EdgeMarket’s recent procurement contracts include recognized leaders in AI-enabled ChatBot solutions. One of these solutions, AdmitHub, has chatbots that combine behavioral science and AI to support students by individually guiding them through complex tasks, gathering data, conducting surveys, involving advisors, and providing 24/7 responses to their questions.

Using natural language processing and machine learning, the Ivy chatbot interacts with stakeholders by understanding the semantics of their inquiries and provides an immediate, meaningful answer. Ivy can be trained to disseminate information such as deadlines or school codes, but also can connect students to university resources such as help videos or other 3rd party resources. The third solution, Ocelot, is higher education’s leading AI Student Engagement Platform and includes bidirectional, smart communications capabilities for student outreach and support, empowering institutions to reach every student and answer every question from a single platform. Institutions use Ocelot to increase enrollment and retention, reduce summer melt, and maximize the impact of department staff.

To help organizations effectively communicate to their users when a crisis strikes, EdgeMarket’s solution suite includes OnSolve, the leading global provider of SaaS-based mass notification and critical communication solutions. The company’s cloud-based software communications platform provides seamless and easy-to-deploy solutions for the exchange of critical information among organizations, their people, devices, and external entities with use cases designed to save lives, enhance revenue, and reduce costs.

For institutions looking for steeply discounted rates for department and enterprise document imaging, management, workflow, and storage, EdgeMarket now offers the enterprise content management solution, GRM. A recognized leader in digital transformation, GRM’s VisualVault document management system enables organizations to efficiently scan or upload documents, extract metadata with intelligent character recognition (ICR) and optical character recognition (OCR) technology, capture additional data with i-forms, and automate verification workflows—simplifying document storage, retrieval, editing, sharing, and approval.
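As a rough illustration of the OCR-driven indexing workflow described above (and emphatically not GRM VisualVault's actual API), the sketch below uses the open-source Pillow and pytesseract libraries to pull a couple of hypothetical fields, an invoice number and a date, out of a scanned page so the document could be indexed for search and routing.

    # Generic sketch of OCR-based metadata extraction, assuming scanned pages
    # that contain an invoice number and a date; the field names are hypothetical.
    import re
    from PIL import Image
    import pytesseract

    def extract_metadata(image_path: str) -> dict:
        """Run OCR on a scanned page and pull out a few indexable fields."""
        text = pytesseract.image_to_string(Image.open(image_path))
        invoice = re.search(r"Invoice\s*#?\s*(\w+)", text, re.IGNORECASE)
        date = re.search(r"\b(\d{1,2}/\d{1,2}/\d{4})\b", text)
        return {
            "invoice_number": invoice.group(1) if invoice else None,
            "date": date.group(1) if date else None,
            "full_text": text,
        }

    # Example: extract_metadata("scanned_invoice.png") returns fields that could
    # then index the document for storage, retrieval and workflow routing.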

Edge has also recently awarded contracts to two recognized leaders in online exam proctoring, Examity and Proctorio. Both platforms provide a scalable, cost-effective solution for protecting exam integrity by validating identities and monitoring test takers during exams. In addition to these newly-acquired EdgeMarket solutions, institutions can also find products and services for enterprise resource planning (ERP) and student information systems (SIS), learning management systems (LMS), and cloud-based voice and unified communications. Edge has additional projects on the horizon and is currently preparing the release of several new RFPs, including Classroom Virtual Desktop, IT and Professional Consulting Services, Digital Marketing Services, Technology Hardware & Software Catalog, and Managed Campus and Residential Network Services.

Procurement Participation
Edge continuously evaluates member needs and market opportunities in order to identify, prioritize, and schedule future procurements. Member institutions and co-op participants are instrumental in further enhancing EdgeMarket’s solution suite by sharing insight into what solutions are needed and can provide the most value. Edge encourages organizations in the Edge community to get involved in the procurement process and shape future RFPs by participating in bid requirement feedback, vendor reviews, and vendor demos. Looking forward, Edge will continue to leverage its current success in eProcurement to advance the scale and capability of the EdgeMarket portal, and as new members join the Edge community, the robust catalog of products, providers, and services will continue to further diversify and grow.

Ready for easier, smarter, faster service procurement? Join EdgeMarket today! Learn more at njedge.net/solutions-overview/consortium-buying-eprocurement.

RFPs issued on behalf of the entire higher education community. EdgeMarket aggregates demand and issues RFPs on behalf of members, so that your purchasing team doesn’t need to take on the onerous process alone.

An EdgeMarket procurement portal where you can review all of the services and solutions available through Edge.

Currently available cooperative procurement contracts for:

Cloud-based Voice & Unified Communications: CBTS
Enterprise Resource Planning (ERP) & Student Information Systems (SIS): Anthology, Jenzabar
AI-Enabled Chatbots: AdmitHub, Ivy.ai, Ocelot
Learning Management System: D2L – Brightspace
Remote Exam Proctoring: Examity, Proctorio
Microsoft Solutions & Services: SHI/Microsoft
Enterprise Content Management: GRM
Emergency Notification Software: OnSolve
…and more to come.

Upcoming procurements for:

Technology Catalog for Hardware & Software (TeCHS)
Data Warehouse/Business Intelligence for Institutional Effectiveness
IT Help Desk Services
Pathogen Control Technologies
Managed Campus and Residential Network Services
Marketing Services
Classroom Virtual Desktop

Experience the article in View From The Edge magazine.

The post EdgeMarket: The “Easy Button” for Technology Procurement in Higher Education appeared first on NJEdge Inc.


NJTransfer – Illuminating the Path to Higher Education


Experience the article in View From The Edge magazine.

For many students who begin their higher education experience at a community college, trying to determine which classes to take and how these courses will transfer to a four-year degree program can feel like a daunting task. With a mission of addressing this pain point, the New Jersey Commission on Higher Education partnered with the New Jersey President’s Council to launch the New Jersey Statewide Transfer Initiative. This initiative is designed to promote enrollment at local community colleges and to support a streamlined transfer process to a New Jersey college or university. Launched in 2001, the NJ Transfer website provides an online tool to help students determine which course credits can be transferred from a New Jersey community college to a participating four-year institution in the State. The web resource also provides contact information for admissions at a New Jersey college or university, information about recruitment events, and shares which majors and programs students can apply to at each institution.

One of the biggest drivers of the Initiative was to encourage high school students to pursue their higher education in New Jersey and later begin their careers locally. “One of our number one exports is 18-year-olds, where high school graduates leave the state for college, and often do not return,” says Joe Rearden, Vice President Administration & Finance, General Counsel, and Chief Financial Officer. “The New Jersey Statewide Transfer Initiative brought people together to put a system in place that would give students the resources and information that they need to plan out the college process much more easily.”

Improving the Transfer Student Experience
The confusing experience of trying to map out your own college trajectory was one that Thea Olsen, Program Director of the NJ Transfer Initiative at Edge, remembers well. “I am from South Jersey and attended Camden County College. I had a similar experience that many transfer students can attest to, where you are flying blind and making uneducated decisions. I ended up transferring to Drexel University and after receiving my bachelor’s and master’s degrees, I began working at the University in the Office of Transfer and Commuter Student Engagement. I’ve not only had the transfer experience as a student myself, but I have also worked in all the roles that are involved in the transfer student life cycle, from the enrollment funnel to the academic student support services. There is definitely a void in support services for transfer students; they often consider themselves to be a marginalized student population. As I came into my current role over a year ago, finding out about NJ Transfer was just a huge ah-ha moment.”

NJ Transfer is one of only a few statewide online tools that provide course and academic program information for students transferring to a four-year institution. “From an advisor’s standpoint, having NJ Transfer at your fingertips is a hugely valuable resource,” says Olsen. “To discover an up-to-date database of multiple equivalencies that an advisor or transfer admissions counselor could refer to when needed was just a mind-blowing experience. If I had known about NJ Transfer as a student, this resource would likely have changed the trajectory of most of my community college experience, which in my case was Camden County College. I would’ve been able to go on the site, identify how my credits transferred to the partner institutions, and once I changed my major, I could have potentially made more educated decisions about the courses that I was taking in relation to my chosen four-year institutions. From a student perspective, NJ Transfer is a very valuable navigational and academic planning tool. Not all transfer students seek out their academic advisor, so having a website with all the necessary information is extremely important.”

Expanding Awareness and Education
NJ Transfer works closely with institutions of higher education to ensure the optimal transferability of academic credit and that faculty and staff understand how to utilize the site and take full advantage of the information provided. “In working with institutions, I share best practices for using NJ Transfer and help them relay course information and equivalencies to students in a clear and digestible way,” shares Olsen. “Four-year institutions also use the site to re-review an entire content area. For example, a school can type anthropology into the keyword search and pull up all the community colleges that have anthropology in a course title. They can then take this information back to their faculty, review the courses in detail, and discuss whether more direct equivalencies are possible. For community colleges, this process can be reversed to ensure their courses are current on the site, to identify whether any courses have undergone changes in content that may positively impact students’ transfer credit outcomes, and to resubmit them as revised.”

Recently, NJ Transfer joined Edge as an affiliated organization, and with this move, Edge will be focusing on improving the user experience and helping to drive the initiative forward. “We will be rebranding the website and print materials to bring the NJ Transfer interface, the visual user experience of the site, and the marketing materials into the year 2021,” says Olsen. “I will also be offering training to those who work closely with our web system to ensure there is a consistency of knowledge across the board. Specifically, I would like to start educating a wider array of staff members in different departments at 4- and 2-year institutions so they can confidently share their knowledge with students and train others on the site’s capabilities.”

Olsen says that building the knowledge and awareness of NJ Transfer is a grassroots effort, since information often travels across campuses by word of mouth. “As more students use NJ Transfer, there is a greater probability that they’re mentioning the web resource to other students.  If a recruiter sits down with a high school student who wishes to attend community college, the recruiter can showcase the site and explain how to find the information that they need. Ultimately, I want to expand the level of knowledge regarding NJ Transfer and the number of people who have that knowledge, and empower recruiters, transfer counselors, and college success professors to use the planning tool in the classroom and relay the value of the resource to students.”

Fostering a Holistic Approach
A long-term goal included in transforming NJ Transfer is to offer informative workshops to high school guidance counselors so more students can learn about the planning tool early on. “We want to capture the whole transfer student experience from that early moment they decide to attend community college, through earning their associate’s degree, then on to their first day at their chosen 4-year college or university,” says Olsen. “NJ Transfer is much more than an academic planning tool. The site offers a convenient one-stop resource for transfer-specific points of contact where students can begin to cultivate their support network of staff at both the 2-year and 4-year level; helping them to make more informed decisions about their future.”

Olsen will also continue to connect with partner institutions to better understand their individual needs and how to improve the solutions and services offered. “I look forward to strengthening the current working relationships with partner institutions and also building bridges to new groups of people. By gaining a deeper understanding of the community’s needs and challenges, we can enhance the features and functionality of NJ Transfer and collaboratively improve the transfer student experience in New Jersey.”

To learn more about NJ Transfer and how students, parents, faculty, and staff can use the resource to create a comprehensive college plan, visit www.njtransfer.org.

Experience the article in View From The Edge magazine.

The post NJTransfer – Illuminating the Path to Higher Education appeared first on NJEdge Inc.


EdgeNet: A New Standard of Excellence


Experience the article in View From The Edge magazine.

In a world where lightning-fast, reliable network connections are becoming not only desired but necessary to conduct business, drive research, and propel education, Edge is dedicated to providing advanced networking technology solutions that are robust, resilient, and meet the unique needs of member institutions. The Edge optical fiber network, EdgeNet, is a high-performance network designed to provide superior networking services to institutions and their respective user communities. Designed to meet specific business requirements defined by its extensive member communities, EdgeNet, like every top-tier network, aligns with industry standards for each of the primary value components to ensure members receive a network connectivity experience above and beyond that of a traditional Internet service provider (ISP).

Employing Industry Best Practices
With security and performance going hand in hand at Edge, EdgeNet is Mutually Agreed Norms for Routing Security (MANRS) Compliant and abides by all MANRS policies and practices. This global initiative has established a security baseline of action for network operators, aims to secure Internet routing, and provides crucial fixes that help reduce common routing threats. EdgeNet is also Internet Routing Registry (IRR) Compliant and regularly assists connected members with compliance to avoid interruption of supporting services from large entities, like Google. The IRR is a database of Internet route objects for determining and sharing routes and related information used for configuring routers, with a view to avoiding problematic issues among Internet service providers.
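To illustrate how IRR route objects feed into router configuration (a simplified sketch, not Edge's filter-generation tooling), the example below builds a prefix filter from a handful of invented (prefix, origin AS) pairs and rejects announcements that have no matching route object. In production, operators typically generate such filters from live IRR data with tools like bgpq4.

    # Simplified sketch of IRR-based route filtering using invented data:
    # accept a BGP announcement only if a registered route object covers it.
    import ipaddress

    # (prefix, origin ASN) pairs as they might appear in registered route objects
    irr_route_objects = {
        ("192.0.2.0/24", 64500),
        ("198.51.100.0/24", 64501),
    }

    def announcement_permitted(prefix: str, origin_asn: int) -> bool:
        """Permit the announcement only if an exact or covering route object exists."""
        announced = ipaddress.ip_network(prefix)
        for registered_prefix, registered_asn in irr_route_objects:
            registered = ipaddress.ip_network(registered_prefix)
            if origin_asn == registered_asn and announced.subnet_of(registered):
                return True
        return False

    print(announcement_permitted("192.0.2.0/24", 64500))    # True: registered route
    print(announcement_permitted("203.0.113.0/24", 64500))  # False: filtered out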

Member institutions connected to EdgeNet can count on comprehensive 24/7 monitoring, support, and incident response through the Network Operations Center (NOC). The Edge network team proactively supports members’ efforts to remain current with route registry information requirements established by the American Registry for Internet Numbers (ARIN). This nonprofit corporation is responsible for managing Internet number resources, including Internet Protocol version 4 (IPv4) and IPv6 addresses and Autonomous System Numbers, for Canada, many Caribbean and North Atlantic islands, and the United States. ARIN-managed address space is part of the filtering that guards against the propagation of incorrect routes.

Mitigating Network Security Risks
To improve the reliability and availability of the network and optimize network performance, peering connections take the majority of EdgeNet member traffic off the commodity Internet. Edge’s peering fabric continues to be one of the largest areas of growth, with recent upgrades to both of the commercial peering fabric connections. Edge employs fully redundant connections to peering exchanges, as well as Private Network Interconnect (PNI) with Google, Amazon, Netflix, Akamai, and others. Edge’s Internet exchange providers include Telehouse/New York Internet Exchange (NYIIX) and DE-CIX. Telehouse/NYIIX is one of the largest regional providers in the area, with more than three hundred members and a peering fabric that is the largest in the world. DE-CIX has a global footprint, providing premium network interconnection services and Internet Exchanges internationally. Edge has upgraded the Telehouse/NYIIX IXP peering fabric to 100G and will be upgrading the DE-CIX IXP connection to 100G. Combined, these Exchanges, plus Edge’s direct peering with content providers and content delivery networks (CDNs), account for 70-80% of Edge Network’s traffic.

Ranked among the top Global Transit Providers, Nippon Telegraph and Telephone Corporation (NTT) Communications and Telia Company provide the stable and reliable connectivity of the Edge network, with routers in Philadelphia and Newark, New Jersey. The Edge optical backbone design incorporates multiple paths among nodes throughout the network which, in turn, provides for protected circuits and packet network redundancy. To protect the EdgeNet core and all connected members, an automatic DDoS defense mechanism is in place, where all traffic entering the Edge network is actively monitored to minimize service disruptions. The DDoS mitigation services are intrinsic to the services provided to Edge by both Telia and NTT.

In the research and education community, Internet2 is vital to helping organizations collaborate and access key technology resources. EdgeNet’s peering connection to Internet2 enables Edge members to connect directly to the nation’s premier coast-to-coast research and education network and gain access to Advanced Layer 2 Services (AL2S) and Advanced Layer 3 Services (AL3S). Edge’s Internet2 access network and the transactional network are physically and logically separated by design, and in an effort to further bolster network collaboration, New Jersey has added a “gigapop” presence at 32 Avenue of the Americas (AOA) and at 401 North Broad Street, Philadelphia. The 32AOA location interconnects with Internet2 at 100G and lays the groundwork for a future connection to NREN at 10 or 100G. At the Philadelphia gigapop location, Edge connects to Internet2 at 100G, as well as to its sister regional network, KINBER. Edge can support many 100G connections into the infrastructure and between these two facilities via Edge’s optical backbone.

Under the guidance and direction of respected industry veterans Jim Stankiewicz, Associate Vice President and Principal Network Architect, and Bruce Tyrrell, Associate Vice President Programs & Services, Edge’s primary focus is delivering the highest speed possible for flow/packet delivery from point to point and ensuring the full capacity of contracted data rates is always available.

Implementing High Availability
EdgeNet is at the forefront of utilizing SDN-based LISP protocols to deliver a consolidated, robust architecture that enables members to reduce capital expenditures and operating expenses, simplify network design, reduce system load, and improve network scalability. Working and learning from home increased significantly in the past year, prompting Edge to add peering with Comcast and Altice (Optimum) to help meet the drastic rise in usage of the Edge network. In addition, Edge added a 100G direct peering connection with Akamai to coincide with recent video game releases and bandwidth spikes.
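For readers unfamiliar with LISP (the Locator/ID Separation Protocol referenced above), the toy sketch below shows the core idea: endpoint identifiers (EIDs) are decoupled from routing locators (RLOCs), and a mapping lookup decides where traffic is encapsulated. The prefixes and addresses are invented for illustration and do not reflect Edge's configuration.

    # Toy model of LISP's EID-to-RLOC mapping: look up the destination EID
    # prefix, then encapsulate traffic toward one of the site's locators.
    eid_to_rloc = {
        "10.10.0.0/16": ["203.0.113.1", "203.0.113.2"],  # example site A, two border routers
        "10.20.0.0/16": ["198.51.100.7"],                # example site B
    }

    def forward(dest_eid_prefix: str) -> str:
        rlocs = eid_to_rloc.get(dest_eid_prefix, [])
        if not rlocs:
            return "drop: no mapping for this EID prefix"
        # An ingress tunnel router would encapsulate toward a reachable RLOC.
        return f"encapsulate toward {rlocs[0]}"

    print(forward("10.10.0.0/16"))  # encapsulate toward 203.0.113.1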

Another growing trend includes dark fiber connections for organizations looking to meet the evolving needs of their end-users with bolstered bandwidth and reduced latency. With impressive flexibility and scalability, many institutions are using dark fiber to parcel off bandwidth, eliminate speed issues, and provide the superior connectivity colleges and universities demand. Over a single pair of fiber, Edge can run eight separate and distinct circuits for each connected member, with built-in scalability to meet future demands.

A diagnostic feature of the Edge optical network is the Fiber Assurance Advanced Link Monitoring service, which monitors the dark fiber infrastructure and can quickly isolate where a problem is located. Edge employs an Optical Time Domain Reflectometer (OTDR) feature that runs an algorithm on network spans. This service provides agnostic, 24/7, in-service fiber link monitoring, where real-time fiber fault detection, localization, and notification occur automatically. With this information, Edge can notify the provider with an approximate location of the break, helping to resolve the issue quickly. Oftentimes, Fiber Assurance enables the "fault fix" process to begin before the end user even notices a service disruption.
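As a rough illustration of how an OTDR turns a reflection time into a location: the one-way distance is (speed of light x round-trip time) / (2 x fiber group index), where roughly 1.468 is a typical group index for standard single-mode fiber. The numbers below are illustrative only, not Edge measurements.

```python
# Back-of-the-envelope illustration of how an OTDR localizes a fiber fault:
# a light pulse is launched, the reflection from the break returns after a
# round trip, and distance = (c * t) / (2 * n_group). Values are illustrative.

C_VACUUM_M_PER_S = 299_792_458      # speed of light in vacuum
GROUP_INDEX_SMF = 1.468             # typical group index for single-mode fiber

def fault_distance_km(round_trip_seconds: float,
                      group_index: float = GROUP_INDEX_SMF) -> float:
    """Distance from the OTDR to the reflective event, in kilometres."""
    one_way_m = (C_VACUUM_M_PER_S * round_trip_seconds) / (2 * group_index)
    return one_way_m / 1000

# A reflection observed 245 microseconds after launch sits roughly 25 km out.
print(f"{fault_distance_km(245e-6):.1f} km")
```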

In recent years, hundreds of institutions have begun offering dynamic eSports programs and competition opportunities on their campuses. To help organizations meet this growing demand and remain competitive in an emerging field, EdgeNet has a dedicated eSports connection with bandwidth reserved exclusively for gaming traffic, allowing lightning-fast on-net competition. Edge has a router with 880G of capacity dedicated to eSports and video game peering connections to all major gaming platforms, including Riot Games, Roblox, Take-Two Interactive, and Sony. Direct peering with Twitch and Valve Corporation has also been added to the EdgeNet network services. These exclusive connections take traffic off the commodity Internet and minimize service disruptions. Plus, the built-in DDoS mitigation of the EdgeNet network prevents attacks from outside the network.

Delivering Multi-Layered Network Security
Unlike traditional ISPs, EdgeNet is highly invested in the success of connected members and ranks security among the most crucial objectives of its high-performance network. The EdgeNet network supports 100% of member demand, and the traffic of one organization is never impacted by another member's network activity. Edge maintains a physically and logically segmented, out-of-band network management platform to ensure the security and availability of the management function, even if issues arise. Transit routing is intentionally segmented from core routing, which carries EdgeNet's peering and member-to-member traffic. This design provides an effective method to isolate the transit edge, where the Internet service providers connect, allowing Edge to mitigate nefarious transit traffic while assuring the operational integrity of the network core.

Following the principles of Net Neutrality, Edge never interferes with, throttles, or oversubscribes network services for any connected member. In addition, Edge does not inspect traffic or packets for content and leaves the determination of what is classified as acceptable or unacceptable traffic to the member's enterprise networking team.

Looking toward the future, Edge is dedicated to further growing in connectivity and services, while scaling the network in a way that keeps costs low. Drivers of this initiative include Cloud Services and Internet2 connectivity, and Edge continues to look for ways to support this growing scale and maintain network resiliency.

Ready to discover the powerful connectivity of EdgeNet? Check out njedge.net/solutions-overview/network-connectivity-and-internet2 to learn more.

Experience the article in View From The Edge magazine.

The post EdgeNet: A New Standard of Excellence appeared first on NJEdge Inc.


Networking with Our Board of Trustees


As the Northeast Region's premier nonprofit technology partner to higher education, healthcare, and government, Edge serves as a member-informed organization providing secure, advanced networking, technology solutions, and consortium buying programs for its constituents. Based on the By-Laws of the corporation, representation on Edge's Board of Trustees from various sectors of education, healthcare, and government provides an intersectional view into what best serves the Common Good. The diversity of experience and thought among Edge's Board of Trustees members offers talents not easily found and benefits the whole of the organization and its affiliates, sponsors, and members. The time and effort involved in serving on the Edge Board of Trustees is greatly appreciated, as it is through this governance framework that the Edge organization continues to grow and thrive, even in today's challenging times.

We would like you to get to know the individuals elected to our Board of Trustees and why their backgrounds are timely, relevant, and impactful to the current and future trajectory of our organization. Meeting quarterly to discuss a full agenda of items related to how research and education networks (RENs) across the nation bring increasing value to the institutions they serve, our Board of Trustees members all speak with authority and experience cultivated through real-world achievements. In each issue going forward, we'll present a few of Edge's Board of Trustees members to better acquaint you with their backgrounds and experiences. As you will see, Edge is proud of the esteemed individuals who give freely of their time and resources to advance Edge's value proposition across the Region.

Dr. Steve Rose

President of Passaic County Community College (PCCC) and
Chair, Edge Board of Trustees

Dr. Steven M. Rose has served as president of PCCC since 1996, overseeing unprecedented growth and expansion. Considered one of the fastest growing colleges in New Jersey, PCCC operates four campus locations throughout Passaic County and enrolls 13,000 students in both traditional and online programs. In 2014, PCCC was included in the prestigious Achieving the Dream national initiative to promote student success in community college education.

Prior to becoming President of PCCC, Dr. Rose served as the College's Vice President for Academic Affairs, Dean of Faculty, and Dean of Admissions and Enrollment Management. He holds a doctoral degree in education from Rutgers University, a master's degree in higher education from the University of Vermont, and a bachelor's degree in political science from Muhlenberg College. He is also an officer of the New Jersey President's Council and serves on the Workforce Development Board of Passaic County. In 2012, Dr. Rose was awarded the Faith in Paterson Award from the Greater Paterson Chamber of Commerce and its member businesses.

Currently serving as Chair of the Edge Board of Trustees, Dr. Rose values the relationship between the two entities and his ability to collaborate with other like-minded administrators.

“With Edge doing a lot of the work for us, we know we’re going to get good value. We are going to trust the vendors, and everything becomes significantly easier. This is a valuable service when we can onboard technology and keep up with the times,” Dr. Rose shared.

In January, he was named the 2021 Vice Chair of the Middle States Commission on Higher Education's Committee on Substantive Change. This Committee reviews requests from colleges and universities that want to make significant changes to the scope of their accreditation.

“Accreditation assures students and the public of the educational quality of higher education,” Dr. Rose stated. “I am honored to be part of the accreditation process that ensures institutional accountability, self-appraisal, improvement, and innovation through peer review and the rigorous application of standards within the context of institutional mission.”

Dr. Rose is serving in his first term as a Commissioner, while continuing as president of PCCC.

Dr. Joseph Marbach

President of Georgian Court University,
Edge’s Board of Trustees, Independent Sector

On July 1, 2015, Dr. Joseph R. Marbach became Georgian Court University’s ninth president and was charged with leading the strategic vision and growth of the university. His hire made him the first man and first lay president in Georgian Court’s 107-year history. 

Dr. Marbach’s distinguished background in the academic arena as both an educator and thought leader provided him with unique skills and knowledge for leading the University’s 2,500 students and 30+ undergraduate and nine graduate programs.

“More than ever before, higher education demands that we bring new ideas and a new way of thinking to the difficulties we face,” says Dr. Marbach. “It is only by working together, and by reviewing, questioning, and refining our strategies that we will meet and overcome challenges.”

Prior to coming to Georgian Court, Dr. Marbach served as provost and vice president for academic affairs at La Salle University in Philadelphia, where he also held a post as a professor of political science and established the English Language Institute, the Office of Professional and Corporate Education, and the Institute for Lasallian Education and Engage Pedagogy. He is the former dean of the College of Arts and Sciences at Seton Hall University in South Orange, New Jersey, where he served as a professor and chair of the Department of Political Science.

Dr. Marbach is a past president of the New Jersey Political Science Association and has served on the council of the American Political Science Association’s Section on Federalism and Intergovernmental Relations. He is a fellow with the Pennsylvania Policy Forum and has been an active participant in the Global Dialogue on Federalism, sponsored by the Forum of Federations and International Association of Centers for Federal Studies.

Graduating magna cum laude from La Salle University in 1983, Dr. Marbach then earned his M.A. and Ph.D. in Political Science from Temple University. 

An award-winning radio analyst, Dr. Marbach is often called upon for his expertise on state and local government. He is also the editor-in-chief of Federalism in America: An Encyclopedia and has contributed to and edited Opening Cybernetic Frontiers. 

Michele Norin

Senior Vice President and Chief Information Officer at Rutgers University,
Edge’s Board of Trustees, Research Sector

Michele L. Norin is the institutional leader for technology at Rutgers, The State University of New Jersey. Hired in December 2015 as Senior Vice President and Chief Information Officer, Ms. Norin's principal responsibility is to provide leadership in the strategic adoption and use of information technology in support of the University's vision for excellence in research, teaching, outreach, and lifelong learning.

“One of the most fundamental challenges we struggle with in higher education is simply trying to keep up with all of the new capabilities that come with technology,” said Ms. Norin. “We’re kind of in two spaces at the same time. One space is learning to leverage the tools we already have to the fullest extent, while the other involves keeping our eye on emerging tools and capabilities.”

As Rutgers' primary advocate and spokesperson for IT strategies and policies, Ms. Norin defines and communicates a university-wide vision for technology, while providing oversight for IT-related issues and strategic planning. She also works closely with peers at other Big Ten Academic Alliance institutions and leading research universities to achieve synergies and cohesiveness in IT.

Prior to coming to New Brunswick, Ms. Norin was at the University of Arizona for over 26 years, including roles as Vice President for IT and Chief Information Officer, Director of Network Technology Solutions, and Coordinator for IT Outreach and Information Delivery.

Ms. Norin has a bachelor's degree in Management Information Systems from the University of Arizona and a master's degree in Educational Leadership from Northern Arizona University. Her career spans 30 years of experience in the field of technology, from administrative systems programming to campus IT leadership and strategic planning.

In October 2018, Norin was named the New Jersey Tech Council 2018 CIO of the Year (Nonprofit category). She serves as a member of EDUCAUSE, and in 2019, she was named one of the executive sponsors of Edge’s professional network, Women Leaders in Technology.

“It’s exciting to be a part of higher education because of its bigger mission to educate students and help people from any age group learn, grow, or rebuild their careers,” she said. “We have the opportunity to influence others with new cool tools and devices.”

Candace Fleming

Chief Information Officer, Montclair State University,
Edge’s Board of Trustees, Research Sector

Candace C. Fleming began her role as Vice President and Chief Information Officer at Montclair State University in 2015, after serving nine years as Chief Information Officer and Vice President for Information Technology at Columbia University. 

While at Montclair State, Ms. Fleming has been influential in guiding the University's IT department to work more closely with researchers and increase their exposure to the overall infrastructure available in the ecosystem, as well as offering better hosting, procurement, and support options from central IT.

“Partnering with similar IT-oriented experts like Edge also reveals new opportunities and allows researchers to think about the different ways where systems can assist them in their mission,” she said.

As the first CIO at Columbia University, she led the institution toward stronger delivery of campus-wide information technology infrastructure, applications, and services.

Her prior positions, which carried gradually increasing levels of responsibility, were at several New Jersey corporations, including Schering-Plough Corporation, Warner-Lambert Company, Pfizer, Inc., and Cadbury-Schweppes.

Ms. Fleming received her BSE degree in Electrical Engineering from Princeton University and her MBA in Finance from New York University. She is a coauthor of Handbook of Relational Database Design, used as a text in numerous graduate courses and professional projects across the United States, Canada, and Japan. She has served as a member of the Board of Directors of NYSERNet, the regional network service for New York State higher education and research institutions.

In January 2015, Ms. Fleming was named to Edge’s Board of Trustees, and in January 2019, she was named one of the executive sponsors of Edge’s professional network, Women Leaders in Technology.

“Edge is continually expanding the solutions they offer as a lead procurement agency in New Jersey, so I encourage institutions to be frequent visitors of the website to remain familiar with this expanding portfolio of solutions,” Fleming shared. “In addition, I would recommend that institutions reach out to Edge to share the types of solutions they may need in the future to allow Edge to proactively find ways to address their needs.”

Dr. Fadi P. Deek

Provost and Senior Executive Vice President, New Jersey Institute of Technology (NJIT),
Edge’s Board of Trustees, Research Sector

While Dr. Fadi P. Deek is New Jersey Institute of Technology’s current Provost and Senior Executive Vice President, he initially began his academic path on the same campus in the early 1980s. 

He received his B.S. in Computer Science in 1985, M.S. in Computer Science in 1986, and later his Ph.D. in Computer and Information Science in 1997, all from NJIT. From there, he became a Distinguished Professor with appointments in two departments: Informatics and Mathematical Sciences.

After almost four decades of professional affiliation with NJIT, Dr. Deek has gradually progressed through the faculty ranks and has taught at all university levels, from first-year to advanced graduate courses, from in-person to online, and to students with diverse abilities, from special needs to honors. With increased responsibility, he was privileged to advance from Coordinator and Director to Assistant Vice Chair to Associate Dean, and now to Provost and Senior Executive Vice President. He also serves as a member of the Graduate Faculty of the Rutgers University Business School and is a member of Edge's Board of Trustees.

“NJIT is pleased to have a relationship with Edge and we look forward to continuing to advance our research through data-informed decisions. We have a number of joint collaborations with Edge, up through the Chief Information Officer level. As we continue to develop our education and research support infrastructure, particularly in the domain of digital and data resources, we will continue to call on the expertise of Edge.”

Dr. Deek's research interests include software engineering and open-source software development, with applications to learning, collaborative, and decision-support technologies, as well as computer science education. Drawing on this expertise, he has published over 150 articles in journals and conference proceedings, ten book chapters, and three books. Dr. Deek has given 40 professional presentations and has served as a Principal Investigator on several large projects.

With a wide wealth of expertise and knowledge, Dr. Deek has received numerous teaching, research, and service awards.

Experience the article in View From The Edge magazine.

The post Networking with Our Board of Trustees appeared first on NJEdge Inc.


Cybersecurity Solutions and Services


Ensuring Comprehensive Enterprise Security
As digital transformation, also referred to as Dx, continues to gain momentum across many industries, so does the rise of data breaches and cyber attacks. Add in a swift, pandemic-induced move to widespread remote learning and work, and many organizations were left highly vulnerable to cyber threats. Since COVID-19, the US Federal Bureau of Investigation (FBI) reported a 300-percent increase in reported cybercrimes.1 Research has also revealed that 43 percent of breaches are attacks on web applications, double the 2019 figure, and 27 percent of malware incidents can be attributed to ransomware.2 Unfortunately, without proper cybersecurity practices in place, many organizations can become susceptible to attacks, often unknowingly.

In 2020, SolarWinds, a major US information technology firm, was the subject of a massive breach, compromising approximately 18,000 SolarWinds customers. SolarWinds works with Fortune 500 companies, top US telecoms and accounting firms, hundreds of universities and colleges globally, and all US military branches. Investigation into the breach discovered that cybercriminals compromised SolarWinds’s Orion solution that helps organizations manage their networks, servers, and networked endpoints. Cybersecurity experts believe that the cyber actor concealed malware inside Orion’s software update. When installed, the malicious code enabled the hacker to perform reconnaissance, elevate user privileges, move to other environments, and compromise sensitive data.

Among cybercriminals' top targets are institutes of higher education, due to the large amount of personally identifiable information (PII) and research data available, as well as the opportunity to hold data or websites for ransom. Just recently, the University of California (UC) fell victim to a nationwide cyber attack in which a ransomware group stole personal data from the University and from hundreds of other schools, companies, and government agencies. The attack targeted a third-party vendor service, Accellion, which the University uses to securely transfer files. The hackers responsible for the breach have been threatening to publish private information of staff members and students if payment is not received. The SolarWinds and UC incidents are grave reminders of the importance of data protection and risk assessment, especially when partnering with vendors through outsourcing for software services.

Designing Proactive and Preventative Strategies
With the compilation of a massive amount of sensitive information from a large population of people, institutes of higher education often land on a cybercriminal's radar. Colleges and universities typically have open networks, providing their students with easy access to needed apps and services. Plus, students and faculty are connected through multiple devices, creating increased opportunity for cyber attacks. Educational institutions often become low-hanging fruit for cyber actors because, due to stretched budgets, investment in security is not always given top priority.

Nearly every institution outsources some activities, such as IT functionality, security, and software development, to an external service provider. Moving forward, educational institutions should focus on implementing procedures and policies to minimize the information accessed by or disclosed to third parties. Most importantly, every organization should create a comprehensive vendor security management process that pinpoints and closes gaps in its cybersecurity strategy. With this in mind, Edge works with its members to put cost-effective preventative measures into play and help assess, identify, remediate, prepare for, and recover from institutional cyber attacks.

EdgeDx Cybersecurity solutions are designed to help the member community improve their cyber defenses as quickly and affordably as possible. Edge can assist your organization by addressing vulnerabilities and mitigating risk through:

- Programmatic Security Assessment and Vulnerability Management
- Endpoint Security Monitoring, Alerting and Response
- DNS-Layer Security
- Phishing and Security Awareness Training
- Multi-factor Authentication (MFA) & Access Management

Improving an Organization's Security Posture
Previously known as EdgeSecure, the expanded EdgeDx Cybersecurity solution offers a full suite of enterprise security services and helps organizations accomplish their enterprise security, risk management, and compliance initiatives. Edge's Cybersecurity practice spans the enterprise information technology domain, from wide area networking to physical security practices, as well as social engineering and staff development, to harden an organization against all manner of security threats. Edge's holistic security approach begins with a Cybersecurity Health Check to identify an organization's vulnerabilities and establish a measurement baseline. This assessment evaluates the maturity of an organization's security program based on industry standards and controls, identifies any gaps, and designs an incremental roadmap to improve the overall security posture of the organization. The assessment program involves interviewing staff and stakeholders, reviewing policies and procedures, determining risk appetite, identifying weaknesses, and providing detailed, actionable recommendations to improve the security program.

An essential component of identifying potentially dangerous interactions between member networks and known or unknown actors is observing and identifying verified patterns of malicious activity. As part of the Cybersecurity Health Check, Edge conducts an analysis using proprietary, specially negotiated data feeds gathered at the Tier 1 provider level, delivering a scope of visibility far beyond that obtained from a traditional NetFlow analysis. This service is available on a one-time basis as part of the assessment, or can be provided monthly or bi-monthly on a subscription basis. The analysis details any observed patterns or causes for concern within the member's selected IP range.

Depending on the specific activity an institution would like to explore, the report shows activity involving IP addresses known by the cyber intelligence community to be malicious or controlled by harmful actors; unknown beaconing or malware activity; traffic to or from dark-web hosts or relays; large data communications; suspicious patterns of communications or events in data flows; traffic to or from inappropriate foreign network locations; or peculiar types of communications that would not be expected, such as those disguising external data breaches.
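Edge's analysis relies on proprietary Tier 1 data feeds, so its internals are not public; as a minimal sketch of the general idea, flow records can be checked against a threat-intelligence list and a few policy heuristics. The field names, addresses, and thresholds below are hypothetical.

```python
# Minimal sketch of flagging flow records against a threat-intelligence list
# and a couple of simple heuristics. Field names and thresholds are made up;
# the actual Edge analysis uses proprietary Tier 1 data feeds.

from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    dst_country: str
    bytes_out: int

KNOWN_BAD_IPS = {"203.0.113.66", "198.51.100.99"}   # from a threat-intel feed
DISALLOWED_COUNTRIES = {"XX"}                        # policy-defined locations
EXFIL_BYTES_THRESHOLD = 5 * 1024**3                  # flag unusually large transfers

def findings_for(flow: Flow) -> list[str]:
    findings = []
    if flow.dst_ip in KNOWN_BAD_IPS or flow.src_ip in KNOWN_BAD_IPS:
        findings.append("communication with known-malicious address")
    if flow.dst_country in DISALLOWED_COUNTRIES:
        findings.append("traffic to a disallowed network location")
    if flow.bytes_out > EXFIL_BYTES_THRESHOLD:
        findings.append("unusually large outbound transfer")
    return findings

sample = Flow("10.8.1.20", "203.0.113.66", "XX", 7 * 1024**3)
print(findings_for(sample))
```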

In conjunction with the security assessment or by way of subscription services, an institution can also have a once-monthly scan conducted on the dark web. This scan provides a snapshot of the categorized risk associated with data being sold or shared. The active monitoring also includes GitHub code repositories. GitHub hosts the source code for thousands of different products, including software that’s being developed within a research lab or other university resource. The dark web analysis also monitors those source code repositories to ensure they don’t contain information about the institution that can be exploited, such as plain text passwords.
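A minimal sketch of the kind of check such repository monitoring performs, assuming a locally checked-out copy of the code: walk the files and flag lines that match a few credential patterns. Real services use much richer rule sets; the patterns and the `./my-checked-out-repo` path are only examples.

```python
# Rough sketch of a source-repository secret scan: walk the files and flag
# lines that look like hard-coded credentials. The patterns below are only
# examples of the idea, not a production rule set.

import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def scan_repository(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matched text) for every suspicious line."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for finding in scan_repository("./my-checked-out-repo"):
        print(finding)
```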

Identifying and Overcoming Vulnerabilities
Edge will also conduct an assessment of vulnerabilities in current systems, including public- and private-facing servers, network devices, and workstations. Edge's Cybersecurity team uses Qualys, one of the leading providers of vulnerability scanning, to perform these scans and provides a detailed report with a prioritized list of vulnerabilities, the affected devices, and recommendations to mitigate these vulnerabilities. Another important component of the Cybersecurity assessment includes searching the Internet for domains that are registered using an organization's name or intellectual property in bad faith or that infringe on its copyrights. In regard to physical security, Edge will review an institution's physical and environmental protection policies to ensure that they address the purpose, scope, roles and responsibilities, executive commitment, and departmental coordination needed to create a physically secure environment for systems to operate. Edge will also annually inspect access to locations where sensitive systems operate in order to ensure it is restricted to legitimate users. Additionally, an organization will be able to confirm that proper environmental controls are in place, including fire suppression, temperature, and humidity.

Most institutions lack the resources to conduct 24/7 monitoring to discover and remediate network threats. Edge can act as an extension of an organization’s team to help monitor and respond to emergent threats around the clock, or provide subscription-based services based on an organization’s needs and current security profile. Since the least costly breach is the one that never happens, this robust suite of  services provides  an institution’s IT leaders with solutions for setting programmatic goals and improving security outcomes. With a comprehensive security plan in place, institutions can greatly reduce their vulnerability to cyber attacks, substantially mitigate the economic impact if a breach ever takes place, and make proactive, responsible decisions that continually improve cybersecurity within their organization.

Looking to improve your organization’s security posture and improve breach preparedness? Explore EdgeDx Cybersecurity services at njedge.net/solutions-overview/cybersecurity.

1FBI Urges Vigilance During COVID-19 Pandemic. April 2020.
2Verizon 2020 Data Breach Investigations Report. May 2020.

Experience the article in View From The Edge magazine.

The post Cybersecurity Solutions and Services appeared first on NJEdge Inc.


Commercio

The 4 benefits of exchanging digital documents via the Commercio.network blockchain


Companies, customers and suppliers need a shared view of the real supply chain situation on the blockchain, enabling them to solve communication problems while avoiding costly bottlenecks.

Blockchain, with its crypto-economic incentives, can become the exchange network that lets Commercio.network companies interact with each other easily, cheaply, and securely.

The key to successful global collaboration between companies is trust. Trust on the blockchain is achieved through transparent processes that enable the certified exchange of a document to be facilitated, controlled, or enforced, making any further verification unnecessary. The exchange of a document on the blockchain is performed through an electronic signature system that can be verified by all users (shareDoc).
Blockchain creates a decentralized system of trust, bringing the following benefits:

Transparency and collaboration: on Commercio.network, documentation is tamper-proof. Our blockchain certifies the product's journey throughout the supply chain and shares it with stakeholders. The system works without a central data repository or a single administrator, as it is decentralized.

Scalability and Availability: Commercio.network solves scalability issues for write transactions. Because the datasets are distributed and redundant, anyone in the world can access them via the Internet.

Security and Privacy: a Commercio.network node does not reveal the identity of the person or organization, and the commercial document is signed with an electronic signature based on private and public cryptographic keys, so the signed record cannot be altered.

Commercio.network has created a "trusted" network where businesses can exchange business documents securely through the blockchain. The sum of each individual document exchange creates greater value, because, through a transparent process of co-certification, it validates the certifiable processes of all parties involved.
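The post does not spell out the signature mechanics, but the general hash-then-sign pattern it describes can be sketched as follows, here with SHA-256 and Ed25519 via the Python `cryptography` package. This is a generic illustration under those assumptions, not Commercio.network's actual on-chain document or shareDoc format.

```python
# Illustrative hash-then-sign exchange of a business document, using Ed25519
# from the "cryptography" package. Generic sketch of the pattern described
# above, not Commercio.network's actual message format.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

document = b"Invoice 2021-0042: 120 units, EUR 3,400, delivery 2021-06-01"

# Sender: fingerprint the document and sign the fingerprint with a private key.
digest = hashlib.sha256(document).digest()
sender_key = Ed25519PrivateKey.generate()
signature = sender_key.sign(digest)

# Receiver: recompute the fingerprint and verify it against the sender's
# public key, making further manual verification of the copy unnecessary.
public_key = sender_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(document).digest())
    print("document fingerprint and signature verified")
except InvalidSignature:
    print("document was altered or signature is invalid")
```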

 

The article The 4 benefits of exchanging digital documents via the Commercio.network blockchain appeared first on commercio.network.


Velocity Network

From pipe dream to reality: what Blockchainmania can mean for the world of HR and recruiting

Totalent explores what blockchainmania can mean for the world of HR and recruiting, recognizing Cornerstone as the first to officially integrate its Applicant Tracking Software (ATS) under the Velocity Network. The post From pipe dream to reality: what Blockchainmania can mean for the world of HR and recruiting appeared first on Velocity.

Own Your Data Weekly Digest

MyData Weekly Digest for May 14th, 2021

Read in this week's digest about: 8 posts, 2 Tools

Thursday, 13. May 2021

EdgeSecure

Embracing EdTech to Futurize Learning


Designed to create a personalized and powerful learning experience for students, Edge’s learner-centric technology solutions provide faculty, instructional designers, educational technologists, and technology support personnel with the platforms and tools needed to enhance teaching and learning. Over the past year, digital learning and remote education became an even larger part of the norm, driving educational technology to keep pace with a rising need for innovative and versatile solutions. As unique challenges and opportunities unfolded, member institutions tapped into Edge’s solution suite in new and powerful ways. Edge’s existing partnership with Zoom allowed many institutions to take advantage of this solution with Edge expertise to support them. Moving forward, institutions can leverage Edge’s knowledge to transform from users of educational technologies like Zoom to innovators in the EdTech arena.

Edge’s Educational and Emerging Technology Practice Group convenes around the multiple facets of technology and pedagogy and provides a platform for the member community to share their insight, pain points, and suggestions. In addition, Edge shares expertise and information on different topics during each session and offers a place for members to collaborate. Setting competition aside, institutions across the community are sharing ideas, best practices, and personal experiences, all with a common goal of improving student success. Edge is dedicated to bringing all these pieces together and ensuring organizations have the tools they need to achieve eLearning success.

The eLearning Big Picture
Edge's Digital Learning solutions empower member institutions to provide anytime, anywhere teaching and learning and to create flexible digital learning environments that can continue to evolve in the years ahead. Schools that didn't have an expansive infrastructure for asynchronous instruction during the pandemic were forced to quickly move to synchronous online instruction, with many instructors teaching through Zoom, WebEx, or similar tools for the first time. As schools return to normal, the question arises: will these delivery modes continue to be effective choices?

While COVID forced many institutions to quickly create a digital space for remote education, the important focus now lies in how to further develop and enhance these digital environments to help an organization accomplish future educational and business goals. Since face-to-face instruction going forward will no longer exist without a digital component, institutions must take a closer look at how to create long-term strategies for instructional design, Learning Management Systems (LMS), accessibility, collaboration and communication capabilities, and seamlessly integrating EdTech solutions. With these goals in mind, Edge provides solutions for digital content needs all in one place and can augment an institution’s technology team to help streamline the digital transformation process.

“Setting competition aside, institutions across the community are sharing ideas, best practices, and personal experiences, all with a common goal of improving student success. Edge is dedicated to bringing all these pieces together and ensuring organizations have the tools they need to achieve eLearning success.” – Josh Gaul

Sharing EdTech Best Practices
During a time when community engagement and collaboration were essential, Edge created EdTech Table sessions to allow members a way to connect with others in the community and discuss the hot topics in education technology from their organizations. This opportunity also allows Edge to ensure the solutions and services they provide to member organizations are meeting their specific needs and providing the most benefits to every institution. Each discussion features subjects that are relevant to instructional designers, instructional technologists, faculty/instructors, and directors of online programming. For instance, a recent session explored creating engaging course content and how to better balance day-to-day EdTech needs with long term strategic planning. Another recent session discussed the steps educational technology professionals are taking to ensure leadership recognizes the importance of continuing investment in their services, even as the pandemic ends.

EdTech tools can help institutions create more inclusive and engaging educational experiences and allow today’s schools to advance into new areas of learning and instruction. Technology, both inside and outside the classroom, is here to stay and understanding the benefits of EdTech will be important to helping institutions boost the quality of education and meet the evolving needs of both teachers and students. Going forward, EdTech Table sessions will continue to help institutions explore industry trends and the best practices of their peers, while providing a springboard for innovative ideas that can help create a more vibrant and advanced digital learning community.

Explore and discover Edge’s Digital Learning solutions via njedge.net/solutions-overview/educational-technologies/ and stay tuned to the EdgeEvents page for upcoming events dedicated to supporting this thriving community.

Experience the article in View From The Edge magazine.

The post Embracing EdTech to Futurize Learning appeared first on NJEdge Inc.


Kantara Initiative

1Kosmos BlockID Receives NIST Certification


1Kosmos's BlockID platform has completed a third-party testing process and received NIST SP 800-63 certification. The test was carried out by the Kantara Initiative, which evaluates identity solutions to determine whether or not they meet NIST and ISO standards, and whether they conform to Kantara's own Identity Assurance scheme and Trust Framework Program.

The post 1Kosmos BlockID Receives NIST Certification appeared first on Kantara Initiative.


Ceramic Network

Ceramic and IDX now available on Avalanche

Ceramic brings streaming data and cross-chain identity protocols to Avalanche developers.

Ceramic, the decentralized network for data stream processing, and IDX, Web3's first cross-chain identity model, are now integrated with Avalanche. Starting today, developers and users have a new, powerful way to manage identities and dynamic off-chain data in their applications.

Ceramic provides developers with database-like functionality for storing all kinds of dynamic, mutable content. This finally gives developers a Web3 native way to add critical features like rich identities (profiles, reputation, social graphs), user-generated content (posts, interactions), dynamic application-data, and much more.

Developers building applications on Avalanche can easily add support for Ceramic and IDX in their application with a seamless user experience. Avalanche key pairs have been added as a supported signing and authentication method for Ceramic's data streams, so users can now perform transactions on Ceramic with their existing Avalanche wallets.

Blockchain, Data & Identity: a full stack for Web3 developers

Avalanche offers developers the most scalable infrastructure for building decentralized apps and services. It is the first decentralized smart contracts platform built for the scale of global finance, with near-instant transaction finality. Ethereum developers can quickly build on Avalanche as Solidity works out-of-the-box.

Great Web3 and DeFi apps require more than a smart contract platform, however. They also need sophisticated, scalable and dependable data management infrastructure.

Ceramic provides advanced database-like features such as mutability, version control, access control, and programmable logic. Now that Avalanche and Ceramic can easily be used together, developers can:

- Build data-rich user experiences and social features on fully decentralized tech
- Give users cloud-like backup, sync and recovery without running a centralized server
- Publish content on the open web without the need to anchor IPFS hashes on-chain
- Leverage interoperable profiles, social graphs and reputations across the Web3 ecosystem

Ceramic's unique stream-based architecture is designed for web-scale volume and latency and to handle any type of data model. Built on top of open standards including IPFS, libp2p, and DIDs and compatible with any raw storage protocol like Filecoin or Arweave, all information stored on Ceramic exists within a permissionless cross-chain network that lets developers tap into an ever growing library of identities and data while using their preferred stack.

IDX: Cross-chain identity and user-centric data

Identity is the first use case enabled by Ceramic's building blocks for open source information. IDX (identity index) is a cross-chain identity protocol that inherits Ceramic's properties to provide developers with a user-centric replacement for server-siloed user tables. By making it easy to structure and associate data to a user's personal index, IDX lets applications save, discover, and route to users' data.

The IDX SDK makes it simple to manage users and deliver great data-driven experiences using the same keys and wallets that developers and users already rely on. Users can also link multiple keys, from any wallet and multiple blockchains, to the same identity. This is essential for developers who want to serve users over time, as it enables key rotation, data interoperability across accounts, and rich cross-chain profiles, reputations, and experiences (including 50,000+ profiles on Ethereum today).

Getting started with Ceramic on Avalanche

- To install Avalanche, follow this quickstart guide for running an Avalanche node
- To add IDX to your project, follow this installation guide
- To use Ceramic for streams without IDX, follow this installation guide
- Regardless of which option you choose, you should also select 3ID Connect as your DID wallet during the authentication process, which handles the integration with Avalanche wallets
- For questions or support, join the Ceramic Discord and the Avalanche Discord

About Avalanche

Avalanche is an open-source platform for launching decentralized applications and enterprise blockchain deployments in one interoperable, highly scalable ecosystem. Avalanche is able to process 4,500+ transactions/second and instantly confirm transactions. Ethereum developers can quickly build on Avalanche as Solidity works out-of-the-box.

Website | Whitepapers | Twitter | Discord | GitHub | Documentation | Forum | Avalanche-X | Telegram | Facebook | LinkedIn | Reddit | YouTube

About Ceramic

Ceramic is a public, permissionless, open source protocol that provides computation, state transformations, and consensus for all types of data structures stored on the decentralized web. Ceramic's stream processing enables developers to build secure, trustless, censorship-resistant applications on top of dynamic information without trusted database servers.

Website | Twitter | Discord | GitHub | Documentation | Blog | IDX Identity


GS1

8.5 Packaging Attributes – Business Process Notes


Each entry below lists the BMS ID and ADB Name, followed by the ADB Business Definition and any Business Process Notes.

2186 Packaging Type Code: The code for the type of package or container of the product.
Notes: The Packaging Type Code and Pallet Type Code may be represented individually or as a unique combination of the two. The detailed packaging information (e.g. BMS IDs 2166, 2180, 2206, 2261, 2263), if used, is related to each unique instance or combination of Packaging Type Code and Pallet Type Code. If there are multiple packaging types the order in which they are communicated may make a difference, dependent on local requirements.

2166 Package Feature Code: The code that describes features about the packaging of the item.
Notes: The Package Feature Code may be repeated for each instance of Packaging Type Code / Pallet Type Code.

2206 Packaging Material Type Code: The code for the type of packaging material of the product.
Notes: The Packaging Material Type Code may be repeated for each instance of Packaging Type Code / Pallet Type Code. In some markets this attribute may be related to other sustainability-related material attributes and may be specified by local regulation.

2261 Package Deposit Amount: The amount of deposit associated with a returnable package.
Notes: This amount must be accompanied with a currency type in this or another field, depending upon your master data exchange solution.

2263 Package Deposit Identifier: The identifier for the package deposit.
Notes: This attribute must be a GTIN and is associated with the package that is used in the return processing. A specific list of GTINs is supplied by the manufacturer of the package and in some areas is maintained by a central organisation. (A check-digit sketch for validating a GTIN follows this table.)

2181 Pallet Type Code: The code that indicates the type of pallet that the unit load is delivered on.
Notes: The Packaging Type Code and Pallet Type Code may be represented individually or as a unique combination of the two. The detailed packaging information (e.g. BMS IDs 2166, 2180, 2206, 2261, 2263), if used, is related to each unique instance or combination of Packaging Type Code and Pallet Type Code. If there are multiple packaging types the order in which they are communicated may make a difference, dependent on local requirements.

2180 Pallet Disposition Code: The code that describes the expected action to be taken with the pallet.
Notes: [no additional notes]

2306 Batch Number Indicator: The indicator specifying whether the item has a batch or lot number.
Notes: This attribute does not contain the actual batch or lot number. This number is typically found on the packaging itself. However, the value may be “True” even when the number is not printed on the package. In some cases, batch or lot number might be found on the invoice or other transactional documents.

2308 Packaging Marked Returnable Indicator: The indicator that specifies whether the product packaging is marked as returnable (with or without a deposit).
Notes: [no additional notes]

2334 Packaging Date Type Code: The code indicating the type of date on the package to the buyer and consumer.
Notes: Specify a code for each type of date that appears on the packaging.
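Since the Package Deposit Identifier (BMS ID 2263) must be a GTIN, a receiving system can at least sanity-check the value with the standard GS1 mod-10 check digit. The sketch below implements that published algorithm; the sample numbers are arbitrary test values, not real deposit identifiers.

```python
# The Package Deposit Identifier must be a GTIN, so a receiving system can
# sanity-check it with the standard GS1 mod-10 check digit. This is the
# published GS1 algorithm; the sample numbers below are arbitrary.

def gs1_check_digit(body: str) -> int:
    """Check digit for a GTIN body (all digits except the last one)."""
    total = 0
    for position, digit in enumerate(reversed(body)):
        weight = 3 if position % 2 == 0 else 1
        total += int(digit) * weight
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    return gs1_check_digit(gtin[:-1]) == int(gtin[-1])

print(is_valid_gtin("04012345123456"))  # True: check digit matches
print(is_valid_gtin("04012345123457"))  # False: check digit does not match
```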


8.4 Packaging Example – Pallet Level


“Pallet” is populated at the pallet level. The columns are Packaging Type Code, Packaging Material Type Code and Packaging Feature Code; no Packaging Feature Code is populated in this example.

- Packaging Type Code: Pallet; Packaging Material Type Code: Hardwood
- Packaging Type Code: Banded package; Packaging Material Type Code: Polypropylene (PP)
- Packaging Type Code: Stretchwrapped; Packaging Material Type Code: Linear Low Density Polyethylene

Note: Pallet Type Code (BMS ID 2181) is populated at this level, e.g. “Pallet 1200 X 1000 mm”.


8.3 Packaging Example – Case Level


“Box” is populated at the case level.

- Packaging Type Code: Box; Packaging Material Type Code: Double Wall Corrugated Board; Packaging Feature Code: Internal Dividers


8.2 Packaging Example – Inner Pack Level


“Multipack” is populated at the inner pack level.

- Packaging Type Code: Multipack; Packaging Material Type Code: Paperboard; Packaging Feature Code: Handles
- Packaging Type Code: *Bottle; Packaging Material Type Code: Coloured Glass
- Packaging Type Code: *Packed, unspecified; Packaging Material Type Code: Metal; Packaging Feature Code: Twist Cap

* “Bottle” and “Twist Cap” are populated at the each level.


8.1 Packaging Examples – Each Level

8.1.1 Net Bag

“Net” is populated at the each level.

- Packaging Type Code: Net; Packaging Material Type Code: Plastic Other; Packaging Feature Code: Handles

8.1.2 Cereal Box

“Box” and “Bag” are populated at the each level.

- Packaging Type Code: Box; Packaging Material Type Code: Paperboard
- Packaging Type Code: Bag; Packaging Material Type Code: High Density Polyethylene (HDPE)


8 Packaging


This section provides guidance on the set of attributes used to convey information about the make-up of product packaging, such as the packaging form, material and features. This information is specified for all levels of the product packaging hierarchy, e.g. each, inner pack, case and pallet. The goal is to provide an understanding of how these attributes may be populated at various levels, and how the attributes are related and used together.
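As a toy illustration of how these attributes can hang together across hierarchy levels, the sketch below models each level with a list of packaging instances, each carrying its own type, material and feature codes, with the Pallet Type Code carried only at the pallet level. It is a simplified illustration echoing the examples in this section, not a GS1 data model or schema.

```python
# Toy illustration (not a GS1 schema) of how packaging attributes can be
# carried at each level of the hierarchy, with material and feature codes
# repeating per packaging instance. Values echo the examples in this section.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PackagingInstance:
    packaging_type_code: str
    packaging_material_type_codes: list[str] = field(default_factory=list)
    packaging_feature_codes: list[str] = field(default_factory=list)

@dataclass
class TradeItemLevel:
    level: str                                 # "each", "inner pack", "case" or "pallet"
    packaging: list[PackagingInstance] = field(default_factory=list)
    pallet_type_code: Optional[str] = None     # only populated at the pallet level

hierarchy = [
    TradeItemLevel("each", [PackagingInstance("Net", ["Plastic Other"], ["Handles"])]),
    TradeItemLevel("case", [PackagingInstance("Box", ["Double Wall Corrugated Board"],
                                              ["Internal Dividers"])]),
    TradeItemLevel("pallet",
                   [PackagingInstance("Pallet", ["Hardwood"]),
                    PackagingInstance("Banded package", ["Polypropylene (PP)"]),
                    PackagingInstance("Stretchwrapped", ["Linear Low Density Polyethylene"])],
                   pallet_type_code="Pallet 1200 X 1000 mm"),
]

for lvl in hierarchy:
    print(lvl.level, [p.packaging_type_code for p in lvl.packaging])
```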


7.7 Marketing and Consumer Facing Attributes – Business Process Notes


Each entry below lists the BMS ID and ADB Name, followed by the ADB Business Definition and any Business Process Notes.

600 Batteries Included Indicator: The indicator specifying whether batteries are included with the product.
Notes: If this attribute is True, related battery attributes are required.

601 Batteries Required Indicator: The indicator specifying whether batteries are required to operate the product, including built in batteries and removable batteries.
Notes: If this attribute is True, related battery attributes are required.

612 Batteries Built In Indicator: The indicator specifying whether batteries are built into the product.
Notes: If this attribute is True, related battery attributes are required.

613 Battery Material Type Code: The code which indicates the material of the battery.
Notes: This attribute describes the active material in the battery (e.g. lithium ion, nickel cadmium, alkaline). This attribute is required if the Batteries Required Indicator, Batteries Included Indicator or Batteries Built In Indicator is True. (A validation sketch for these battery rules follows this table.)

614 Battery Size Type Code: The code which indicates the physical size/shape of the battery used to operate the product.
Notes: This attribute is required if the Batteries Required Indicator, Batteries Included Indicator or Batteries Built In Indicator is True.

615 Battery Weight: The weight of one battery included with or built into the product.
Notes: This attribute is required if the Batteries Built In Indicator or the Batteries Included Indicator is True. This is typically used to determine disposal requirements.

617 Number of Batteries Built In: The number of batteries built into the product.
Notes: This attribute is required if the Batteries Built In Indicator is True.

618 Number of Batteries Required: The number of batteries required to operate the product.
Notes: This attribute is required if the Batteries Required Indicator is True.

789 Consumer Storage Instructions: The instructions and information provided to the consumer about proper storage for the product.
Notes: [no additional notes]

791 Consumer Usage Instructions: The instructions and information provided to the consumer on the usage of the product.
Notes: [no additional notes]

1066 Dietary Regime Code: The code indicating the diet the product is suitable for.
Notes: Some examples of Dietary Regime Code include: Halal, Keto, Low Carb and Vegan. The full range of dietary codes may be found in the GS1 Global Data Dictionary.

1377 Preparation Instructions: The instructions on how to prepare the product for consumption.
Notes: This attribute is required if the packaging includes instruction on how to prepare the product. It may also be an instruction associated with the Preparation Type Code and may be repeated as a group for each preparation type (e.g. bake, boil, microwave). Preparation type should be included in the text of the instructions if it is on the package.

1379 Preparation Type Code: The code specifying the method used to make the product ready for consumption.
Notes: This attribute is required if the product needs to be prepared by the consumer before consumption. It may also be associated with Preparation Instructions and Serving Suggestion.

1380 Serving Suggestion: A suggestion about the way the product may be served to enhance the consumer experience.
Notes: This value is typically a marketing statement describing when or how the product may be enjoyed, often represented with an image on the packaging. (Examples: “Serve with fruits and vegetables for a well-balanced meal!”, “Great for Breakfast, Lunch or Dinner!”)

1494 Features and Benefits: The description of features and benefits of the individual product, service, brand or seller.
Notes: This short list of key features or benefits of the product is intended to be displayed as a bullet list. The attribute is repeated for each feature. Bullets are not included in the attribute content, as it will be formatted into a list for presentation to the consumer.

1498 Product Marketing Message: The description of the product experience for the consumer.
Notes: One or more understandable, usable paragraphs that describe the product, designed to entice the consumer to purchase. In some regions this may be referred to as “romance language” or “romance copy”. The attribute may be repeated if more space is needed to continue the message.

1506 Product Grade: The description of the product's evaluation or ranking or class, such as quality, size, weight.
Notes: [no additional notes]

1530 Search Key Words for Product: The key words provided by the seller intended to help make the product discoverable by consumers using digital search engines.
Notes: These are the words, phrases or tags that consumers will use in search engines to find the product.

1550 Seasonal Product Indicator: The indicator that specifies whether the product is seasonal or offered during specific times of the year.
Notes: [no additional notes]

1558 Target Consumer Age: The description of the intended age or age range of the consumer.
Notes: This is generated by the supplier and generally matches what is on the packaging artwork. Retailers may transform this description to match the terminology they wish to use to communicate with their consumers. For example, a supplier may designate a toy to be targeted for ages 1 to 3 years, while a retailer may want to state the age range as 12 to 36 months.

3531 Product Shape Code: The code representing the shape of the product, excluding the packaging.
Notes: [no additional notes]

3552 Alternative Colour Description: The description of the colour of the product.
Notes: This is the name provided by the supplier to describe the colour and could be enhanced with marketing language (for example “Flamingo Pink” vs “Pink”).

3587 Product Handling Code: The code that defines the information and processes needed to safely handle the product.
Notes: [no additional notes]

3703 Minimum Days of Shelf Life at Arrival: The seller's determination of the minimum number of calendar days of shelf life of the product, based upon the expiration date on the product, upon receipt by the buyer.
Notes: This value is provided by the seller. It is allowed to vary by buyer and “arrival” should be based on the agreed-upon point in the distribution chain (e.g. dock door or warehouse gate).

3704 Minimum Days of Shelf Life from Production: The seller's determination of the minimum number of calendar days from the production date to the expiration date.
Notes: This value is provided by the seller. It is allowed to vary by buyer.

3709 Usage Period After Opening: The period after opening where the product is still safe to be used by the consumer.
Notes: [no additional notes]

3800 Size Description: A description of the size of the product.
Notes: This is descriptive terminology for the size of the product rather than a numeric size, for example “small”, “medium” and “large”, or “.5 L 12-count”. It should not be confused with Net Content. This attribute is a description, not a measurement.

5891 Brand Marketing Message: The description of the consumer experience with the product brand.
Notes: One or more understandable, usable paragraphs that describe the brand experience, designed to entice the consumer to purchase. This may be used to influence the feeling the consumer has about a brand.


7.6 Target Consumer Age Example


7.5 Serving Suggestion Example


7.4 Preparation Type Code / Preparation Instructions Example (Vegetable Fried Rice)

Preparation Type Code

Preparation Instructions

Saute

Stove Top: 1) Heat approximately 1 tablespoon of vegetable oil in a non-stick frying pan or wok. 2) Pour 1-1/2 cups of contents into pan. 3) Cook on MEDIUM, stirring continuously for 5 minutes or until cooked thoroughly to 165°F.

Microwave

Microwave: Add 1-1/2 cups of frozen Vegetable Fried Rice in microwaveable container, cover, and cook on HIGH for 2 minutes or until cooked thoroughly to 165°F.
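
To make the repeated preparation group concrete, here is a small sketch of how the two rows in the example above might be carried as structured data, shown as a Python literal. The JSON-style field names and code values are hypothetical illustrations, not a GS1-mandated syntax.

preparation_information = [
    {
        "preparationTypeCode": "SAUTE",  # hypothetical code value
        "preparationInstructions": (
            "Stove Top: 1) Heat approximately 1 tablespoon of vegetable oil "
            "in a non-stick frying pan or wok. 2) Pour 1-1/2 cups of contents "
            "into pan. 3) Cook on MEDIUM, stirring continuously for 5 minutes "
            "or until cooked thoroughly to 165°F."
        ),
    },
    {
        "preparationTypeCode": "MICROWAVE",  # hypothetical code value
        "preparationInstructions": (
            "Microwave: Add 1-1/2 cups of frozen Vegetable Fried Rice in "
            "microwaveable container, cover, and cook on HIGH for 2 minutes "
            "or until cooked thoroughly to 165°F."
        ),
    },
]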


Blockchain Commons

Everything You Wanted to Know About Blockchain Commons (But Were Afraid to Ask)

In the last year, Blockchain Commons has produced a large collection of specifications, reference libraries, architectures, and reference utilities meant to improve blockchain infrastructure by enabling interoperability among Bitcoin wallets — and in the future, other cryptocurrency applications, such as Ethereum wallets, and other cryptography programs, such as chat systems, key management program

In the last year, Blockchain Commons has produced a large collection of specifications, reference libraries, architectures, and reference utilities meant to improve blockchain infrastructure by enabling interoperability among Bitcoin wallets — and in the future, other cryptocurrency applications, such as Ethereum wallets, and other cryptography programs, such as chat systems, key management programs, and more.

So, what do Blockchain Commons’ technologies do? We’ve just released a Technology Overview video that discusses the concepts and foundations underlying our work and also highlights many of our most important technological releases.

Read More

As the video notes, our work is built on a considerable volume of existing literature. We work with entropy, seeds, and keys and use airgaps, multisigs, and secure storage to ensure robust key creation, safe key usage, and responsible key management. We also build on crucial technological foundations such as CBOR, CRC-32, Fountain Codes, QRs, SHA-256, and Shamir’s Secret Sharing.

This results in a number of core Blockchain Commons technologies:

– Bytewords are a text-encoding method for binary data.
– Uniform Resources support self-describing data that can be used interoperably.
– LifeHash allows for graphical recognition of seeds, keys, and other data.
– SSKR provides new libraries and methodologies for sharding secrets.
– The Torgap architecture creates security by partitioning blockchain services.
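
As a rough illustration of the Bytewords idea in the list above (mapping each byte of a binary payload to a short word and protecting the whole with a CRC-32 checksum), here is a minimal, self-contained Python sketch. The word list and function names are stand-ins invented for this example; they are not the official Blockchain Commons word list or API.

import binascii

# Hypothetical stand-in word list: 256 unique short tokens. The real
# Bytewords specification uses a fixed list of 256 four-letter English words.
WORDS = [f"w{idx:03d}" for idx in range(256)]

def encode_bytewords_style(data: bytes) -> str:
    """Append a CRC-32 checksum, then map each byte to a word."""
    checksum = binascii.crc32(data).to_bytes(4, "big")
    return " ".join(WORDS[b] for b in data + checksum)

def decode_bytewords_style(text: str) -> bytes:
    """Reverse the word mapping and verify the trailing CRC-32 checksum."""
    lookup = {word: value for value, word in enumerate(WORDS)}
    raw = bytes(lookup[token] for token in text.split())
    payload, checksum = raw[:-4], raw[-4:]
    if binascii.crc32(payload).to_bytes(4, "big") != checksum:
        raise ValueError("checksum mismatch")
    return payload

if __name__ == "__main__":
    seed = bytes.fromhex("c0ffee00")
    encoded = encode_bytewords_style(seed)
    assert decode_bytewords_style(encoded) == seed
    print(encoded)

The real specifications add details this sketch omits, such as the standardized word list and more compact encodings.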

The Technology Overview video discusses all of this and more.

We are also continuing to produce documents of use for the entire blockchain community, with our newest tutorials offering more depth on how to use the technologies highlighted in this video. That begins with some just-published developer-focused documents that expand on the overview of Uniform Resources (URs) found in the video. Uniform Resources: An Overview explains how URs are put together, and then A Guide to Using URs for Key Material looks at ur:crypto-seed, ur:crypto-bip39, and ur:crypto-hdkey as three different ways to transmit key material in a standardized, typed format. (We have more UR docs planned on SSKRs, PSBTs, and our new request and response system.)

The specifications, architectures, references, and documents that we’re creating at Blockchain Commons are all meant to improve interoperability among many different vendors, to support the creation of wallets (and future cryptography applications) that enable independence, security, and resilience for their users. We’ve been working with an Airgapped Wallet Community to create these various technologies. If you’re designing blockchain applications of your own, we hope you’ll view the video to see how some of our technologies can help you.

You can also support the continued development of technologies like these, meant to support the entire community, by becoming a sponsor at GitHub or by making a one-time donation at our BTCPay.

Wednesday, 12. May 2021

Good ID

FIDO Alliance Supports Biden Administration EO on Cybersecurity

Federal agencies should choose FIDO as they seek to comply with the new Executive Order that requires the implementation of multi-factor authentication within the next 180 days. By: Andrew Shikiar, […] The post FIDO Alliance Supports Biden Administration EO on Cybersecurity appeared first on FIDO Alliance.

Federal agencies should choose FIDO as they seek to comply with the new Executive Order that requires the implementation of multi-factor authentication within the next 180 days.

By: Andrew Shikiar, Executive Director and Chief Marketing Officer, FIDO Alliance

In the face of recent attacks that have exposed areas of weakness in critical U.S. infrastructure assets, President Biden signed a new Executive Order Wednesday to help bolster the nation’s cybersecurity.

There have been a number of high-profile attacks against critical American infrastructure in recent months, including the SolarWinds supply chain attack that exposed much of the government to potential risk. Top of mind in recent days is the ransomware attack against Colonial Pipeline, which significantly impacted the flow of refined oil across America. These attacks expose the vulnerability of critical infrastructure in the United States, and the Biden Administration is issuing federal directives that will minimize or eliminate risk.

A key part of the Executive Order is a requirement that agencies adopt multi-factor authentication (MFA) and encryption for data at rest and in transit to the maximum extent possible. Federal Civilian Branch Agencies will have 180 days to comply with the Executive Order and will need to report on progress every 60 days until adoption is complete. If for some reason agencies cannot fully adopt MFA and encryption within 180 days, they must report to the Secretary of Homeland Security through the Director of CISA, the Director of OMB, and the APNSA with a rationale for not meeting the deadline.

At the FIDO Alliance, we welcome today’s directive from the Biden Administration and applaud its focus on the importance of multi-factor authentication. What’s notable about this Executive Order is that the White House is prioritizing MFA everywhere, rather than limiting MFA to the PIV/PKI platform that agencies have depended on for more than 15 years. Today’s Executive Order marks an important step forward, in that it makes clear the priority is protecting every account with MFA — without mandating any specific technology. This is a notable shift, because we know that the weakest forms of MFA can still stop some attacks where passwords are the attack vector. We also know that FIDO Authentication is the only standards-based alternative to PIV for those applications that need protection against phishing attacks. This Executive Order opens the door for agencies to deploy FIDO Authentication — something we’ve heard they’ve wanted to do but have held back as use of any non-PIV authentication has not been permitted.  

This isn’t the first time the U.S. Government has advocated for the use of MFA and strong encryption. In an advisory issued by CISA in September 2020 on election security, the government agency noted that the majority of cyber-espionage incidents are enabled by phishing, and FIDO security keys are the only form of MFA that offer protection from phishing attacks 100% of the time.

In fact, the U.S. Government hasn’t just been advocating for the use of strong authentication with FIDO, it has actually already been implementing it since at least 2018 on the login.gov portal. With login.gov the U.S. Government is already offering a secure approach to help citizens and agencies to securely access Federal resources. In June 2019, the FIDO Alliance hosted a webinar detailing the deployment case study for login.gov, which is now even more timely with the need for agencies to adopt strong authentication in the next 180 days.

Since its inception, the FIDO Alliance has been bringing industry partners together, including every major operating system vendor as well as technology and consumer service providers across all industry verticals including financial services, ecommerce and government. All those diverse groups have been working together in common purpose to standardize strong authentication. Billions of devices around the world today can support FIDO Authentication and are ready to play their part in ensuring a strong authentication future. The fact that most major cloud providers, device manufacturers and browser vendors all ship with support for FIDO means that agencies can easily leverage MFA that is built in, rather than other products that need to be “bolted on.”  

If there is one thing that the recent spate of attacks has served to once again remind us, it’s that the private sector and public sector need strong security measures to protect critical infrastructure — and the FIDO Alliance believes this begins with authentication.

We urge government agencies to adopt only the strongest forms of MFA when complying with this directive. The FIDO Alliance and its members stand ready to serve and help agencies with the education, resources and tools to implement strong authentication to help reduce risk and improve the cybersecurity posture of the U.S. Government.

The post FIDO Alliance Supports Biden Administration EO on Cybersecurity appeared first on FIDO Alliance.


Oasis Open

Call for Consent opens for OSLC Change Management v3.0 as OASIS Standard

The first Project Specification from OASIS's Open Projects program goes before members as an OASIS Standard candidate The post Call for Consent opens for OSLC Change Management v3.0 as OASIS Standard appeared first on OASIS Open.

The first Project Specification from OASIS's Open Projects program goes before members as an OASIS Standard candidate

The OASIS Open Services for Lifecycle Collaboration (OSLC) Open Project members [1] have approved submitting the following Project Specification to the OASIS Membership in a call for consent for OASIS Standard:

OSLC Change Management Version 3.0.
Project Specification 01
17 September 2020

This is a call to the primary or alternate representatives of OASIS Organizational Members to consent or object to this approval. You are welcome to register your consent explicitly on the ballot; however, your consent is assumed unless you register an objection. To register an objection, you must: 

1. Indicate your objection on this ballot, and 

2. Provide a reason for your objection and/or a proposed remedy to the TC. 

You may provide the reason in the comment box or by email to the Open Project on its general purpose mailing list [2]. If you provide your reason by email, please indicate in the subject line that this is in regard to the Call for Consent. Note that failing to provide a reason and/or remedy may result in an objection being deemed invalid.

OASIS Open Services for Lifecycle Collaboration (OSLC) is an OASIS Open Project operating under the Open Project Rules [3]. Specifically, Change Management v3.0 has proceeded through the standards process defined in https://www.oasis-open.org/policies-guidelines/open-projects-process/#project-specifications and is presented for consideration as an OASIS Standard following the rules in https://www.oasis-open.org/policies-guidelines/open-projects-process/#oasis-standard-approval-external-submissions [4].

Details

The Call for Consent opens on 13 May 2021 at 00:00 UTC and closes on 26 May 2021 at 23:59 UTC. You can access the ballot at:

Internal link for voting members: https://www.oasis-open.org/apps/org/workgroup/voting/ballot.php?id=3619

Publicly visible link:  https://www.oasis-open.org/committees/ballot.php?id=3619

OASIS members should ensure that their organization’s voting representative responds according to the organization’s wishes. If you do not know who your organization’s voting representative is, go to the My Account page at

http://www.oasis-open.org/members/user_tools

then click the link for your Company (at the top of the page) and review the list of users for the name designated as “Primary”.

Information about the candidate OASIS Standard and the OSLC Open Project

This is the first work of an OASIS Open Project to be submitted as a candidate for OASIS Standard. 

The OSLC initiative applies Linked Data principles, such as those defined in the W3C Linked Data Platform (LDP), to create a cohesive set of specifications that can enable products, services, and other distributed network resources to interoperate successfully. 

Change Management v3.0 defines a RESTful web services interface for managing product change requests, activities, tasks and relationships as well as related resources such as requirements, test cases, or architectural resources. 
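
As a hedged sketch of what consuming such a RESTful interface might look like from a client, the snippet below fetches a change request resource as RDF. The resource URI is hypothetical, and the headers and media types a particular server expects may differ; real servers advertise their resources through OSLC discovery.

import urllib.request

# Hypothetical change request URI on an OSLC CM server.
url = "https://example.com/oslc/changerequests/42"
request = urllib.request.Request(
    url,
    headers={"Accept": "text/turtle"},  # OSLC resources are Linked Data (RDF)
)

# Uncomment to run against a real OSLC server:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode("utf-8"))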

The OP received 3 Statements of Use from KTH Royal Institute of Technology, SodiusWillert, and IBM [5]. 

The prose specification document and related files are available here:

* Part 1: Specification

HTML (Authoritative):

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-spec.html

PDF:

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-spec.pdf

* Part 2: Vocabulary

HTML (Authoritative):

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-vocab.html

PDF:

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-vocab.pdf

* Part 3: Constraints

HTML (Authoritative):

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-shapes.html

PDF:

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-shapes.pdf

* Part 4: Machine Readable Vocabulary Terms

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-vocab.ttl

* Part 5: Machine Readable Constraints

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/change-mgt-shapes.ttl

For your convenience, OASIS provides a complete package of the specification document and related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/ps01/cm-v3.0-ps01.zip

Additional information

[1] OASIS Open Services for Lifecycle Collaboration (OSLC) Open Project 

https://github.com/oslc-op/oslc-admin

[2] Comments may be submitted to the OP via the project mailing list at oslc-op@lists.oasis-open-projects.org. To subscribe, send an empty email to oslc-op+subscribe@lists.oasis-open-projects.org and reply to the confirmation email.

All emails to the OP are publicly archived and can be viewed at:

https://lists.oasis-open-projects.org/g/oslc-op/topics

[3] Open Project Rules

https://www.oasis-open.org/policies-guidelines/open-projects-process

[4] Timeline Summary:

– PS01 released for 60-day public review 09 March 2021: https://lists.oasis-open.org/archives/members/202103/msg00001.html

– PS01 approved as a candidate for OASIS Standard 05 March 2021: https://lists.oasis-open-projects.org/g/oslc-op-pgb/message/115

– Project Specification 01 approved 17 September 2020: https://lists.oasis-open-projects.org/g/oslc-op-pgb/message/73

– Project Specification Draft 03 approved 15 August 2019: https://lists.oasis-open-projects.org/g/oslc-op-pgb/message/16

[5] Statements of use 

– KTH Royal Institute of Technology

https://lists.oasis-open-projects.org/g/oslc-op/message/412

– SodiusWillert

https://lists.oasis-open-projects.org/g/oslc-op/message/420

– IBM

https://lists.oasis-open-projects.org/g/oslc-op/message/426

The post Call for Consent opens for OSLC Change Management v3.0 as OASIS Standard appeared first on OASIS Open.


Me2B Alliance

Did You Know That School Apps Often Share Student Data with Third Parties?

Did You Know School Apps Often Share Student Data with Third Parties?

Me2B Alliance (Me2BA) research recently found that 60% of the school apps we reviewed were sending student data to potentially high-risk third parties without knowledge or consent.

The Me2B Alliance is a nonprofit fostering the respectful treatment of people by technology.  We’re a new type of standards development organization – defining the standard for respectful technology.  

We’re working to raise awareness around the unacceptable amount of student data shared with third parties – particularly advertisers and analytics platforms – in school apps. 

We believe that people have too little information about which third parties they’re sharing data with when they use an app. We encourage Apple and Google to make this information more clear in their App stores. 

Scenarios like the ones described in our report – where user data is being abused, even inadvertently – highlight the types of issues we are driven to prevent through independent testing, as well as education, research, policy work, and advocacy. 


Commercio

Privacy on the Blockchain 

One aspect of European privacy law (GDPR) that has received a great deal of attention is the “right to be forgotten,” outlined in Article 17 entitled “right to erasure” (“right to be forgotten”). Simply put, “oblivion” means that organizations must completely erase records containing an individual’s data from all files on each of the following occasions: When the person revokes […] The article Privacy on the Blockchain appeared first on commercio.network.

One aspect of European privacy law (GDPR) that has received a great deal of attention is the “right to be forgotten,” outlined in Article 17 entitled “right to erasure” (“right to be forgotten”). Simply put, “oblivion” means that organizations must completely erase records containing an individual’s data from all files on each of the following occasions:

– When the person revokes their consent.
– When the purpose for which the data was collected is complete.
– When it is required by law (EU Regulation 2016/679).

It is worth noting that this is not an absolute requirement and that individuals do not have an unconditional right to be “forgotten.” If the organization has legitimate and legal purposes – as stated in the regulation – for storing and processing data, subjects do not have the right to be forgotten. However, the exceptions are few compared to the multitude of uses of common data in our daily lives.

So, in the spirit of GDPR, how could Commercio.network ensure that data is deleted from all places where it is stored or processed?

The first and only fundamental rule is to NEVER put personal data on a Blockchain.

When we share a document on our blockchain via the shareDocument function, we NEVER put the document itself on the chain, but rather its fingerprint (called a hash).
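
As a minimal sketch of that idea, the snippet below computes a document's SHA-256 fingerprint; only a digest like this, never the file itself, would be referenced on-chain. This is an illustration only, not the actual Commercio.network shareDocument implementation.

import hashlib
from pathlib import Path

def document_fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical usage: the digest is what a shareDocument-style call would
# reference on-chain, while the document itself stays in off-chain storage
# where it can later be erased.
# print(document_fingerprint("invoice.pdf"))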

 

The article Privacy on the Blockchain appeared first on commercio.network.

Tuesday, 11. May 2021

Kantara Initiative

1Kosmos BlockID Digital Identity Solution Approved as NIST 800-63-3 Conformant & FIDO2 Certified Powered by Advanced Biometrics & Private Blockchain

1Kosmos announced its BlockID platform has been approved by Kantara Initiative as a Full Service, conformant with NIST SP 800-63 rev.3 Class of Approval at IAL2 and AAL2. The industry leadership of 1Kosmos connects a broader vision to the Kantara Identity Assurance scheme in its Trust Framework Program.1Kosmos BlockID Digital Identity Solution Approved as NIST 800-63-3 Conformant & FIDO2 Certi

1Kosmos announced its BlockID platform has been approved by Kantara Initiative as a Full Service, conformant with NIST SP 800-63 rev.3 Class of Approval at IAL2 and AAL2. The industry leadership of 1Kosmos connects a broader vision to the Kantara Identity Assurance scheme in its Trust Framework Program.

The post 1Kosmos BlockID Digital Identity Solution Approved as NIST 800-63-3 Conformant & FIDO2 Certified Powered by Advanced Biometrics & Private Blockchain appeared first on Kantara Initiative.


1Kosmos BlockID Digital Identity Solution Approved as NIST 800-63-3 Conformant & FIDO2 Certified Powered by Advanced Biometrics & Private Blockchain

Certification highlights platform’s strong identity and authentication abilities  SOMERSET, N.J. and WAKEFIELD, MA, May 11, 2021 – 1Kosmos, the only standards-based platform that uses advanced biometrics and a private blockchain to create an indisputable, reusable digital identity for strong and continuous authentication, today announced its BlockID platform has been approved by Kantara Initi

Certification highlights platform’s strong identity and authentication abilities  SOMERSET, N.J. and WAKEFIELD, MA, May 11, 2021 – 1Kosmos, the only standards-based platform that uses advanced biometrics and a private blockchain to create an indisputable, reusable digital identity for strong and continuous authentication, today announced its BlockID platform has been approved by Kantara Initiative as a Full Service, conformant with NIST SP 800-63 rev.3 Class of Approval at IAL2 and AAL2. The industry leadership of 1Kosmos connects a broader vision to the Kantara Identity Assurance scheme in its Trust Framework Program. 1Kosmos BlockID is a distributed digital identity platform that performs…

The post 1Kosmos BlockID Digital Identity Solution Approved as NIST 800-63-3 Conformant & FIDO2 Certified Powered by Advanced Biometrics & Private Blockchain appeared first on Kantara Initiative.


Schema

Announcing Schema Markup Validator: validator.schema.org (beta)

Announcing preview availability of validator.schema.org for review and feedback. As agreed last year, Schema.org is the new home for the structured data validator previously known as the Structured Data Testing Tool (SDTT). It is now simpler to use, and available for testing. Schema.org will integrate feedback into its draft documentation and add it more explicitly to the Schema.org website f
Announcing preview availability of validator.schema.org for review and feedback.

As agreed last year, Schema.org is the new home for the structured data validator previously known as the Structured Data Testing Tool (SDTT). It is now simpler to use, and available for testing. Schema.org will integrate feedback into its draft documentation and add it more explicitly to the Schema.org website for the next official release.
SDTT is a tool from Google which began life as the Rich Snippets Testing Tool back in 2010. Last year Google announced plans to migrate from SDTT to successor tooling, the Rich Results Test, alongside plans to "deprecate the Structured Data Testing Tool". The newer Google tooling is focused on helping publishers who are targeting specific schema.org-powered search features offered by Google, and for these purposes is a huge improvement as it contextualizes many warnings and errors to a specific target application.
However, many publishers had also appreciated SDTT as a powerful and general purpose structured data validator. Headlines such as "Google Structured Data Testing Tool Going Away; SEOs Are Not Happy" captured something of the mood.
Schema.org started out written only in Microdata, before embracing RDFa 1.1 Lite and JSON-LD 1.0. There are now huge amounts of Schema.org data in all of these formats and more (see webdatacommons report). Schema.org endorsed these multiple encodings, because they can each meet different needs and constraints experienced by publishers. The new validator will check all of these formats.
Amongst all this complexity, it is important to remind ourselves of the importance of simplicity and usability of Schema.org markup for its founding purpose: machine-readable summaries of ordinary web page content. Markup that - when well-formed - helps real people find jobs, educational opportunities, images they can re-use, learn from fact checkers or find a recipe to cook for dinner.
This is the focus of the new Schema Markup Validator (SMV). It is simpler than its predecessor SDTT because it is dedicated to checking that you're using JSON-LD, RDFa and Microdata in widely understood ways, and to warning you if you are using Schema.org types and properties in unusual combinations. It does not try to check your content against the information needs of specific services, tools or products (a topic deserving its own blog post). But it will help you understand whether or not your data expresses what you hope it expresses, and to reflect the essence of your structured data back in an intuitive way that reflects its underlying meaning.
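For readers who want something concrete to paste into the new validator, here is a small, hypothetical example of schema.org JSON-LD markup, built and serialized with Python so the snippet is self-contained; the property values are invented for illustration.

import json

recipe_markup = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Vegetable Fried Rice",
    "recipeCategory": "Dinner",
    "recipeIngredient": ["rice", "mixed vegetables", "vegetable oil"],
}

# The resulting JSON-LD can be checked at validator.schema.org.
print(json.dumps(recipe_markup, indent=2))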
The validator.schema.org service is powered by Google's general infrastructure for working with structured data, and is provided to the Schema.org project as a Google-hosted tool. We are also happy to note that many other schema.org-oriented validators are available, both commercial (e.g. Yandex's) and open source. For example, the Structured Data Linter, JSON-LD Playground, SDO-Check and Schemarama tools. We hope that the new Schema Markup Validator will stimulate collaboration among tool makers to improve consistency and developer experience for all those working on systems that consume Schema.org data. 
Please share any feedback with the Schema.org community via Github, Twitter (#schemasmv), or the Schema.org W3C community group.

Monday, 10. May 2021

Trust over IP

Trust over IP and Sovrin sign agreement to strengthen collaboration

The Sovrin Foundation (“Sovrin”) Board of Trustees and Trust over IP Foundation (“ToIP”) Steering Committee are pleased to announce that they have signed a Letter Agreement (dated March 18, 2021).... The post Trust over IP and Sovrin sign agreement to strengthen collaboration appeared first on Trust Over IP.

The Sovrin Foundation (“Sovrin”) Board of Trustees and Trust over IP Foundation (“ToIP”) Steering Committee are pleased to announce that they have signed a Letter Agreement (dated March 18, 2021). This agreement signifies the commitment of both organizations to mutual cooperation and recognition for each other’s mandates. Sovrin and ToIP intend to work together toward advancing the infrastructure and governance required for digital trust and digital identity ecosystems. 

“By signing this Letter Agreement, Sovrin and ToIP are excited to take a step further to support the need and importance of our separate but interrelated mandates to benefit people and organizations across all social and economic sectors through secure digital identity ecosystems based on verifiable credentials and SSI,” said Chris Raczkowski, Chairman of Board of Trustees, Sovrin Foundation. 

Under the agreement, each organization will assign one member to act as a liaison to coordinate and maintain lines of communication, attend plenary sessions, and provide periodic updates to the Sovrin Board of Trustees and ToIP Steering Committee. They will also seek opportunities proactively to exchange information, participate in discussions of shared interest, promote the value of each other’s work through joint announcements and media products, as well as collaborate to achieve their respective mandates.

Sovrin and ToIP both operate in a manner that respects open licensing, open source code and open standards. The organizations agree that their open, public materials will be available for reference (with attribution) by the other.

“ToIP and Sovrin each offer something unique to the market. Our members already collaborate together informally on many topics. Signing this agreement makes our work together more visible and open. It will create new opportunities to collaborate on challenges that affect every layer of our trust model,” said John Jordan, Executive Director of Trust over IP Foundation. “By working together, we want to help solve interoperability problems more quickly and support the adoption of digital trust ecosystems more widely.” 

If you have any questions or suggestions, please contact info@sovrin.org or operations@trustoverip.org.

To view the text of the agreement, please find it here.

About Sovrin Foundation

The Sovrin Foundation is a non-profit social enterprise which acts as the administrator and governance authority for publicly available SSI infrastructure, as well as supporting interoperable digital identity ecosystems that adhere to the Principles of SSI. Sovrin’s activities aim to serve the common good of providing secure, privacy-respecting digital identity for all, including individuals, organizations and things.

About Trust over IP Foundation

Launched in 2020, the Trust over IP Foundation is an independent project hosted by the Linux Foundation. Its members include over 200 leading companies, organizations and individual contributors sharing expertise and collaborating to define standard specifications to advance a secure trust layer for the digital world. Through this collaborative effort, the Trust over IP Foundation aims to define a complete architecture for Internet-scale digital trust that combines cryptographic trust at the machine layer with human trust at the business, legal, and social layers. For more information, please visit us at trustoverip.org

The post Trust over IP and Sovrin sign agreement to strengthen collaboration appeared first on Trust Over IP.


MyData

The State of MyData 2021

– A snapshot of the path to human-centric approach to personal data In less than 10 years, the concept of MyData, a human-centric approach to personal data, has gone from a research project and non-profit’s side project into a cornerstone of data strategies policies in Europe and globally. It has become an essential building block... Read More The post The State of MyData 2021 appeared firs

– A snapshot of the path to a human-centric approach to personal data

In less than 10 years, the concept of MyData, a human-centric approach to personal data, has gone from a research project and non-profit’s side project into a cornerstone of data strategies and policies in Europe and globally. It has become an essential building block...

Read More

The post The State of MyData 2021 appeared first on MyData.org.


Commercio

From EDI to B2B e-commerce On Commercio.network

International trade is really “old school”; there is very little technology. There are still pieces of paper sent around the world to accompany trade exchanges and shipments, and the same system that has been in place since the mercantilist era of the 16th century is basically still active. Since the 1980s, there have been systems called EDI (Electronic Data Interchange) that […] The article From EDI to B2B e-commerce On Commercio.network appeared first on commercio.network.

International trade is really “old school”; there is very little technology. There are still pieces of paper sent around the world to accompany trade exchanges and shipments, and the same system that has been in place since the mercantilist era of the 16th century is basically still active.

Since the 1980s, there have been systems called EDI (Electronic Data Interchange) that are used to exchange documents, but these systems are geared and tailored to their proponents, i.e., large companies in specific economic sectors (Retail, Automotive, Aerospace). Although these document exchange systems represent a big step towards the digitization of commerce, they are but a first step on a longer path.
We can hope that in a few years, thanks to Blockchain, we will see the emergence of services that offer “end-to-end” supply chain solutions.
This new type of entry point of end-to-end Supply Chain documentary business exchanges will offer a substantial new level of quality of management of commercial business processes, in order to:
– Reduce errors: the entire order process is error-prone and thousands of paper and pencilled documents are often misinterpreted.
– Reduce lead times: when executing a digital order the company can deliver faster, also reducing the related Order to Cash time.
– Eliminate data entry costs: when everything is exchanged digitally, manual entry is no longer necessary and time and costs are reduced.
– Improve traceability: when everything is done digitally, transactions are accounted for and any errors can be traced back to the point of origin.

Standardization of digital documents on commercio.network
A huge challenge of the commercio.network blockchain is using a common data format to exchange information between players in a Supply Chain. For many years, the Supply Chain industry has struggled with insufficient EDI standardization resulting in its inadequate operation on a global basis. Our solution to this problem is not to create a new standard, but to be open to different standards, although we think that the emerging standard will be UBL 2.1.
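
As a very rough, hypothetical illustration of the kind of structured invoice document such standards describe, the sketch below assembles a simplified, UBL-inspired XML fragment with Python's standard library. The element names are simplified and the result is not a schema-valid UBL 2.1 invoice.

import xml.etree.ElementTree as ET

# Simplified, illustrative element names; a real UBL 2.1 invoice uses the
# official OASIS UBL namespaces and a much richer structure.
invoice = ET.Element("Invoice")
ET.SubElement(invoice, "ID").text = "INV-2021-001"
ET.SubElement(invoice, "IssueDate").text = "2021-05-10"
ET.SubElement(invoice, "DocumentCurrencyCode").text = "EUR"
line = ET.SubElement(invoice, "InvoiceLine")
ET.SubElement(line, "InvoicedQuantity").text = "10"
ET.SubElement(line, "LineExtensionAmount").text = "250.00"

print(ET.tostring(invoice, encoding="unicode"))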

 

The article From EDI to B2B e-commerce On Commercio.network appeared first on commercio.network.

Friday, 07. May 2021

Elastos Foundation

Elastos Bi-Weekly Update – 07 May 2021

...

Elastos Tokenomic and DPoS Modifications Set for Implementation

...

Oasis Open

OSLC Core v3.0 approved by the OSLC Open Project

OSLC Core defines the overall approach to Open Services for Lifecycle Collaboration based specifications and capabilities that extend and complement the W3C Linked Data Platform. The post OSLC Core v3.0 approved by the OSLC Open Project appeared first on OASIS Open.

Project Specification 02 is ready for testing and implementation

OASIS is pleased to announce that OSLC Core Version 3.0 from the Open Services for Lifecycle Collaboration Open Project [1] has been approved as an OASIS Project Specification.

Managing change and configuration in a complex systems development lifecycle is very difficult, especially in heterogeneous environments that include homegrown tools, open source projects, and commercial tools from different vendors. The OSLC initiative applies World Wide Web and Linked Data principles to enable interoperation of change, configuration, and asset management processes across a product’s entire application and product lifecycle.

OSLC Core defines the overall approach to Open Services for Lifecycle Collaboration based specifications and capabilities that extend and complement the W3C Linked Data Platform.

This Project Specification is an OASIS deliverable, completed and approved by the OP’s Project Governing Board and fully ready for testing and implementation. The applicable open source licenses can be found in the project’s administrative repository at https://github.com/oslc-op/oslc-admin/blob/master/LICENSE.md.

The specification and related files are available at:

OSLC Core Version 3.0
Project Specification 02
23 April 2021

– OSLC Core Version 3.0. Part 1: Overview
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/oslc-core.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/oslc-core.pdf

– OSLC Core Version 3.0. Part 2: Discovery
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/discovery.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/discovery.pdf

– OSLC Core Version 3.0. Part 3: Resource Preview
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/resource-preview.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/resource-preview.pdf

– OSLC Core Version 3.0. Part 4: Delegated Dialogs
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/dialogs.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/dialogs.pdf

– OSLC Core Version 3.0. Part 5: Attachments
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/attachments.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/attachments.pdf

– OSLC Core Version 3.0. Part 6: Resource Shape
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/resource-shape.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/resource-shape.pdf

– OSLC Core Version 3.0. Part 7: Vocabulary
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/core-vocab.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/core-vocab.pdf

– OSLC Core Version 3.0. Part 8: Constraints
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/core-shapes.html
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/core-shapes.pdf

– OSLC Core Vocabulary definitions file:
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/core-vocab.ttl

– OSLC Core Resource Shape Constraints definitions file:
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/core-shapes.ttl

Distribution ZIP file

For your convenience, OASIS provides a complete package of the specification and related files in a ZIP distribution file. You can download the ZIP file at:
https://docs.oasis-open-projects.org/oslc-op/core/v3.0/ps02/core-v3.0-ps02.zip

Members of the OSLC OP Project Governing Board approved this specification by Special Majority Vote [2] as required by the Open Project rules [3].

Our congratulations to the participants and contributors in the Open Services for Lifecycle Collaboration Open Project on their achieving this milestone.

Additional references

[1] Open Services for Lifecycle Collaboration Open Project
https://open-services.net/

[2] Approval ballot:
– https://lists.oasis-open-projects.org/g/oslc-op-pgb/message/133

[3] https://www.oasis-open.org/policies-guidelines/open-projects-process

The post OSLC Core v3.0 approved by the OSLC Open Project appeared first on OASIS Open.


Own Your Data Weekly Digest

MyData Weekly Digest for May 7th, 2021

Read in this week's digest about: 4 posts, 1 question, 1 Tool

Thursday, 06. May 2021

Velocity Network

Interview with Zach Daigle

We sat down with Zach Daigle, President of PreCheck to learn about PreCheck's reason to be involved in the Velocity Network. The post Interview with Zach Daigle appeared first on Velocity.

The post Interview with Zach Daigle appeared first on Velocity.


Sovrin (Medium)

Sovrin and Trust over IP Signed Mutual Agreement to Strengthen Their SSI Collaboration

The Sovrin Foundation (“Sovrin”) Board of Trustees and Trust over IP Foundation (“ToIP”) Steering Committee are pleased to announce that they have signed a Letter Agreement (dated March 18, 2021). This agreement signifies the commitment of both organizations to mutual cooperation and recognition for each other’s mandates. Sovrin and ToIP intend to work together toward advancing the infrastructure

The Sovrin Foundation (“Sovrin”) Board of Trustees and Trust over IP Foundation (“ToIP”) Steering Committee are pleased to announce that they have signed a Letter Agreement (dated March 18, 2021). This agreement signifies the commitment of both organizations to mutual cooperation and recognition for each other’s mandates. Sovrin and ToIP intend to work together toward advancing the infrastructure and governance required for digital trust and digital identity ecosystems.

“By signing this Letter Agreement, Sovrin and ToIP are excited to take a step further to support the need and importance of our separate but interrelated mandates to benefit people and organizations across all social and economic sectors through secure digital identity ecosystems based on verifiable credentials and SSI,” said Chris Raczkowski, Chairman of Board of Trustees, Sovrin Foundation.

Under the agreement, each organization will assign one member to act as a liaison to coordinate and maintain lines of communication, attend plenary sessions, and provide periodic updates to the Sovrin Board of Trustees and ToIP Steering Committee. They will also seek opportunities proactively to exchange information, participate in discussions of shared interest, promote the value of each other’s work through joint announcements and media products, as well as collaborate to achieve their respective mandates.

Sovrin and ToIP both operate in a manner that respects open licensing, open source code and open standards. The organizations agree that their open, public materials will be available for reference (with attribution) by the other.

“ToIP and Sovrin each offer something unique to the market. Our members already collaborate together informally on many topics. Signing this agreement makes our work together more visible and open. It will create new opportunities to collaborate on challenges that affect every layer of our trust model,” said John Jordan, Executive Director of Trust over IP Foundation. “By working together, we want to help solve interoperability problems more quickly and support the adoption of digital trust ecosystems more widely.”

If you have any questions or suggestions, please contact info@sovrin.org or operations@trustoverip.org .

About Sovrin Foundation

The Sovrin Foundation is a non-profit social enterprise which acts as the administrator and governance authority for publicly available SSI infrastructure, as well as supporting interoperable digital identity ecosystems that adhere to the Principles of SSI. Sovrin’s activities aim to serve the common good of providing secure, privacy-respecting digital identity for all, including individuals, organizations and things.

About Trust over IP Foundation

Launched in 2020, the Trust over IP Foundation is an independent project hosted by the Linux Foundation. Its members include over 200 leading companies, organizations and individual contributors sharing expertise and collaborating to define standard specifications to advance a secure trust layer for the digital world. Through this collaborative effort, the Trust over IP Foundation aims to define a complete architecture for Internet-scale digital trust that combines cryptographic trust at the machine layer with human trust at the business, legal, and social layers. For more information, please visit us at trustoverip.org.

Originally published at https://sovrin.org on May 5, 2021.

Sovrin and Trust over IP Signed Mutual Agreement to Strengthen Their SSI Collaboration was originally published in Sovrin Foundation Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 05. May 2021

ID2020

Okta Joins the ID2020 Alliance

The ID2020 Alliance could not be happier to welcome Okta to our rapidly growing community of public and private sector organizations that share our commitment to realizing the benefits – and mitigating the risks – of digital ID at scale. Okta provides a secure, trusted platform for managing the identity of customers and employees. Founded in 2009, Okta currently provides technology to more t

The ID2020 Alliance could not be happier to welcome Okta to our rapidly growing community of public and private sector organizations that share our commitment to realizing the benefits – and mitigating the risks – of digital ID at scale.

Okta provides a secure, trusted platform for managing the identity of customers and employees. Founded in 2009, Okta currently provides technology to more than 9,400 organizations. Their commitment to the “good ID” agenda and prominent position in the identity and access management sector will make them an invaluable partner for the ID2020 Alliance.

At Okta’s core is the belief that social impact is foundational to its business and long-term success. Okta for Good, the company’s social impact arm, works to strengthen the links between people, technology, and community. By providing technical solutions for nonprofits, expanding economic opportunities and access to the tech sector, and empowering employees as changemakers, Okta for Good works to ensure that the company lives up to its long-term commitment to maximize benefits to society, the environment, and all of its stakeholders.

“We are delighted to welcome Okta to the ID2020 Alliance,” said executive director, Dakota Gruener. “If properly designed and ethically implemented, digital ID can be the key that unlocks previously inaccessible rights and opportunities for hundreds of millions of people globally. ID2020 and Okta share a common vision; to ensure that those keys are available to everyone.”
“According to USAID, ‘There may be no single factor that affects a person’s ability to share in the gains of global development as much as having an official identity.’ That’s why Okta is proud to have recently joined the ID2020 Alliance, a cross-sector effort of global organizations committed to an ethical approach to digital ID,” said Adam Rosenzweig, Senior Manager, Product Impact. “ID2020 plays a critical role in the evolving digital ID ecosystem by convening a diverse set of stakeholders to develop standards for the design and evaluation of digital identity implementations.”

The ID2020 Alliance continues to grow, welcoming organizations from the public and private sectors that share our commitment to user-managed, privacy-protecting, and portable digital ID. As an Alliance, we are only as strong as our members and we are excited to add Okta to the partnership as we move — together — towards providing good ID for all.

About ID2020

ID2020 is a global public-private partnership that harnesses the collective power of nonprofits, corporations, and governments to promote the adoption and ethical implementation of user-managed, privacy-protecting, and portable digital identity solutions.

By developing and applying rigorous technical standards to certify identity solutions, providing advisory services and implementing programs, and advocating for the ethical implementation of digital ID, ID2020 is strengthening social and economic development globally. Alliance partners are committed to a future in which all of the world’s seven billion people can fully exercise their basic human rights and reap the benefits of economic empowerment and to protecting user privacy and ensuring that data is not commoditized.


Berkman Klein Center

​​Russia’s Broken Web of Internet Laws

A Deep Dive into Five Laws Photo by: Mike Licht. CC BY 2.0 In March 2021 Russia used the latest in a series of newly passed Internet laws for the first time. The ‘sovereign’ 2019 law was used to throttle Twitter’s speed by 50% on desktops and by 100% on mobile phones. This move was triggered by Twitter’s non-action in deleting content considered illegal by the Roskomnadzor (RKN), Russia’
A Deep Dive into Five Laws

Photo by: Mike Licht. CC BY 2.0

In March 2021 Russia used the latest in a series of newly passed Internet laws for the first time. The ‘sovereign’ 2019 law was used to throttle Twitter’s speed on 50% of desktop devices and on 100% of mobile devices. This move was triggered by Twitter’s non-action in deleting content considered illegal by the Roskomnadzor (RKN), Russia’s federal authority overseeing online and media content. This illegal content included the Twitter accounts of several government critics, such as Mikhail Khodorkovsky. This government action was legitimized through the Sovereign Internet law of 2019, which empowers the RKN to respond through appropriate measures to “security threats to the Internet’s functioning inside Russia” such as “changing the configuration/routes of communications/telecommunications.”

Over the past few years, the Russian Parliament has passed over fifty pieces of legislation in an attempt to further control and hegemonize the Russian Internet infrastructure, both internally and externally. While the official justification for these laws is to create a more ‘reliable’ internet, in actuality, some of these laws have provided nearly unfettered freedom to the government to preemptively block and filter content and independently — without relying on Internet intermediaries, or on securing prior judicial sanctions.

This article hopes to shed light on the top five most important of these laws or combination of laws and their consequential effect in crippling Russian democracy, freedom of speech, and privacy rights.

Number 1: Laws mandating identification and de-anonymization

A 2017 law and the following government regulation in 2018 have effectively eliminated any means for Russian citizens to raise their concerns with the government anonymously without the fear of facing backlash. The law has mandated that every user of an online application which can be used for “receiving, transferring, delivering or processing users’ electronic messages on the internet” has to link their account with a registered mobile number so that they can be identified and tracked if need be. Non-compliance by platforms leads to an imposition of a fine amounting to USD 82 for individuals, USD 800 for officials, and USD 16,500 for companies. Per the law, non-compliance may also lead to the blocking of the platform in Russian territory. It is a small respite that, at the moment, these laws are only applicable to online messaging platforms, relieving other OSPs from the compulsory linking of accounts for the time being.

Further, the 2018 regulation, which is yet to be implemented, requires that such online messaging applications partner with mobile operators to ensure that the latter are willing to confirm the identity of any of their users within twenty minutes of such information being sought. As with most other laws discussed in this article, the penalty for non-compliance is the blocking of the online service.

This law hits at the heart of an important way in which online communication can foster open dialogue — the ability to stay anonymous, especially when showing dissent. It would also have a major impact on citizens’ ability to express their religion, faith, and sexuality, which have long been penalized by the Russian government.

Number 2: The double-edged sword of content moderation laws

In 2013, the Lugovoy Law was passed, which empowered government officials to block content ‘calling for unsanctioned public events that disturb public order’ within 24 hours and without a court order. In 2019, another draft law, yet to be passed, would give Russian authorities the ability to block websites that censor Russian state media content. Ironically, the justification the bill gives for this is that such websites violate Russian citizens’ right to access information.

In 2019, four sets of laws were passed that have had the cumulative effect of prohibiting the dissemination of “fake news” and news that is deemed “disrespectful” to the state and government officials. The RKN has the authority to delete such information with immediate effect. The stipulated fine for non-removal amounts to USD 6,400 for individuals and up to USD 24,000 for companies. Repeat offenders “disrespecting” the state could also be imprisoned for up to fifteen days. As a brief aside, the term “disrespectful” being used as a measure for potential jail time brings to mind a 2014 judgment of the Indian Supreme Court that held the word “annoyance”, in the same context and for similar penalties, as vague and overbroad.

In 2020, the Federal law N482-FZ was passed, which was a sophisticated version of the 2019 draft law discussed above. It essentially penalized OSPs such as YouTube, Facebook, and Twitter for censoring Russian state media content. For context, there has been substantial research to show that Russian state media spreads disinformation among its citizens and globally. For instance, this paper highlights how RT (formerly, Russia Today) is an “opportunistic channel that is used as an instrument of state defense policy to meddle in the politics of other states”. However, if any of the tech giants censor content like RT’s, they can now be fined or potentially blocked.

Finally, in January 2021, the Russian parliament started working on a draft law to fine social media platforms for “illegally blocking users”. This move seems to be a consequence of the ban on former U.S. President Donald Trump’s Twitter account.

Number 3: Surveillance through laws on forced Data Retention and Data Localisation

The 2016 “Yarovaya amendments”, which have been named after their primary author and member of the State Duma (Russian Parliament’s legislative wing), Irina Yarovaya, provide for compulsory retention of all communications for six months. This includes text messages, voice messages, images, and videos. In addition, they provide that the metadata marking the location, timestamp, sender, and receiver of the messages be retained for three years. The amendments also mandate that this data be stored on Russian territory and that the government be given unfettered access to this data without any prior judicial sanction. Since 2016, LinkedIn, Twitter, and Facebook have been penalized for non-compliance. While Twitter and Facebook were fined approximately USD 50,000 each, the government blocked LinkedIn from the Russian territory entirely. Further, in 2019, the fines for non-compliance with data storage requirements were increased to USD 78,000.

Additionally, the amendments make way for ‘back door’ access to all such data by requiring companies to provide the government with ‘any information necessary’ to decode electronic messages. This drastically weakens the security of such information and weakens encryption measures.

The Yarovaya amendments’ requirement to provide a direct and unrestricted back door to access data, coupled with the compulsion to not only localize data but also to retain it for long periods of time, effectively means that every conversation/post/discussion can potentially be served on a platter to the Russian government. This is a sticky situation for all involved, but especially for human rights activists, political rights activists, and journalists, such as Alexei Navalny, who will be acutely aware that their conversations and metadata, oftentimes confidential, are exposed to the domestic intelligence agencies. This hampers their ability to work and engage in very important and democratic dialogue freely.

The full implications of such a law were on display in April 2018, when the RKN blocked Telegram, a messaging app with over 10 million Russian users, for refusing to provide encryption keys to the government. RKN also ordered the blocking of over 18 million IP addresses, used by Telegram to operate in Russia. This led to chaos and affected many legitimate online services including maps, airline booking, and online shopping, among others.

Number 4: Laws Banning IP addresses and regulating VPNs and Proxys

The various pieces of legislation discussed in this article have been further bolstered by a 2017 law that imposes fines on Virtual Private Networks (VPNs) for allowing users access to content banned by Russian authorities or for providing guidance to access such content. In a bill introduced in 2018, a fine of USD 9,000 was stipulated for non-compliance. This 2018 bill was a result of the above-mentioned ‘Telegram blocking’ incident, after which many Russians turned to VPNs to be able to continue business.

Further, a 2019 regulation now requires that VPNs and search engine platforms stay updated with a list of blocked websites maintained by the federal government and on the basis of this list, block access to those websites on their platform.

Number 5: Law creating an independent Russian Internet Infrastructure

The crown for the most restrictive law proposed by the Russian parliament arguably goes to the Sovereign Internet Law of 2019. It seems that the Russian government fully understands and fears the extent to which online foreign threats can affect domestic outcomes (wonder why!). Under the guise of ensuring cybersecurity and protecting Russian territory from such threats, the Sovereign Internet Law puts the entire Russian internet infrastructure in a bubble of sorts. The law legitimizes a 'splinternet' within Russian territory: whenever the Russian government sees the need to cut Russia off from the rest of the world, it can take control of this bubble and isolate it from the wider world wide web and the global Internet infrastructure.

While on paper such a national Internet infrastructure is a reasonable backup plan for "emergencies" involving foreign threats, it is left entirely up to officials to decide what qualifies as such a threat. The effect of this law is therefore a complete transfer of control over all aspects of the internet, both content and infrastructure, to the Russian government, including, of course, the power to trigger internet shutdowns and to remove any content deemed undesirable, without any prior judicial authorization.

This is the first attempt of its kind anywhere in the world. Even China has a firewall that filters parts of the Internet but does not create a separate, 'national' Internet infrastructure. Moreover, China's Internet has been controlled from the very beginning, unlike Russia, which has freely provided Internet access to its citizens over the past few decades. This is an additional reason why the 'internet bubble' that the Russian government seeks to create may be a problematic endeavor.

The law imposes compulsory installation of certain technical equipment and procedures on all OSPs, including deep packet inspection, to enable the "Russian bubble" infrastructure. Technologists, however, argue that this may be technically impossible, somewhat like expecting the first floor of a building to fill up with water without the ground floor receiving any water at all.

Taking stock of this cautionary tale

Russia has some of the most disproportionate laws for Internet regulation in the world, and maybe that helps explain why it continues to be rated 'not free' in Freedom House's Freedom in the World 2021 index. In the same vein, in June 2020, the European Court of Human Rights found, in four separate cases brought against Russia by local media outlets and OSPs, that blocking entire websites violates the owners' right to impart information and the public's right to receive it. Despite this, Russia yet again threatened to ban Twitter in March 2021.

For Russian citizens, Russia's web of Internet laws is broken and continues to be one of the most disproportionately restrictive in the world. Moving forward, it is imperative that laws restricting citizens' right to freedom of speech and/or their right to privacy be weighed against the important and age-old principles enshrined in human rights instruments such as the European Convention on Human Rights and the International Covenant on Civil and Political Rights. It is not hard to guess what would become of the five sets of laws discussed above if this standard were truly applied to each of them: they would, without a doubt, either cease to exist or become a lot less restrictive.

Finally, a fact and a thought: of the three national broadcasting channels in Russia, the government holds over 51% of the shares in one and directly runs the parent companies of the other two. It has also recently introduced a law reducing the permissible percentage of foreign ownership in print media from 50% to 20%. This makes the state's objective very clear: complete control over the public narrative along with unfettered surveillance abilities.

It is worth considering whether this is a cautionary tale that is paving a misguided path for other countries and other world leaders whose objective is abject control over the narrative in their country, both online and offline.

_____________________________

Shreya is an Employee Fellow at the Berkman Klein Center, where she works on the Lumen Project. She is a passionate digital rights activist and uses her research and writing to raise awareness about how digital rights are human rights.

​​Russia’s Broken Web of Internet Laws was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


Sexism in Facial Recognition Technology​

Until AI has completely eliminated human biases, can its results be considered any more trustworthy than a human’s? Photo by Electronic_Frontier_Foundation: CC BY 2.0

Facial recognition technology is becoming more powerful and more ubiquitous seemingly every day. In January 2021, a study found that a facial recognition algorithm could be better at deducing a person's political orientation than human judgment or a personality test. Similarly, earlier this week, the Dubai airport rolled out technology through which an iris scanner verifies one's identity and eliminates the need for any human interaction when entering or exiting the country.

The use of facial recognition by law enforcement agencies has become common practice, despite increasing reports of false arrests and jail time. While there are various downsides to facial recognition technology being used at all, including fears of mass surveillance and invasion of privacy, there are flaws within facial recognition technologies themselves that lead to inaccurate results. One such major challenge for this still-burgeoning technology is gender-based inaccuracies.

Research has indicated that women are 18% more likely to be misidentified by facial recognition than men. One line of research found that while Amazon's Rekognition software recognized white women's faces with 92.9% accuracy, its accuracy for darker-skinned women was only 68.9%. Similarly, a study conducted at the University of Washington revealed that a facial recognition tool was 68% more likely to predict that an image of a person cooking in the kitchen is that of a woman. These are clear patterns exhibiting sexism in AI, and the use of such technologies for law enforcement is likely to disproportionately affect marginalized groups of people, including genders other than male and female.
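The disparities described above are, at their core, differences in subgroup error rates, and a basic audit can make them visible. The sketch below is purely illustrative and is not drawn from any of the cited studies; the records and group labels are hypothetical, and a real audit would use a properly labelled benchmark dataset.

```python
# Illustrative sketch only: measuring a face-recognition model's
# misidentification rate per demographic subgroup from labelled audit records.
from collections import defaultdict

def misidentification_rates(records):
    """records: iterable of dicts with a 'group' label and a boolean 'correct'
    flag indicating whether the model identified the person correctly."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if not r["correct"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records; a real audit would use thousands of labelled trials.
sample = [
    {"group": "lighter-skinned men", "correct": True},
    {"group": "lighter-skinned men", "correct": True},
    {"group": "darker-skinned women", "correct": True},
    {"group": "darker-skinned women", "correct": False},
]

# A large gap between subgroup error rates is the kind of disparity
# reported in the studies cited above.
print(misidentification_rates(sample))
```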

It is true that once a leap in technology has been made, hoping to be able to wipe it off the face of the earth may be a little like trying to put a genie back in a bottle. A more probable step in the right direction could be to first, feed representative and better data sets to AIs, and second, deploy the technology only with substantial democratic oversight.

Data scientists at the MIT Media Lab have noted that in instances where they have trained AI with more diverse data, the results have been both less discriminatory and more accurate. Hence, presenting AI with diverse, representative datasets to learn from would be a great start in reversing the prevalent biases. Additionally, these technologies would also benefit if the providers of facial recognition software were transparent about its underlying workings.

This transparency, if accompanied by democratic oversight of its application, could help strike a better balance as to when, where, how, and to what end facial recognition may be used. For example, such oversight could take the form of regulations that set an industry standard that an AI must meet before its commercial application. However, regardless of how many leaps facial recognition takes in the coming years, a serious, deliberate discussion is necessary to determine whether facial recognition should be used by law enforcement at all, because the consequences of error are so grave. The European Commission will release its first legislative proposal later this year, and it will be interesting to see how the proposal attempts to regulate AI applications.

Former UN Special Rapporteur David Kaye’s warning that AI will enforce bias is more pertinent now than ever before. However, until the day that AI has completely eliminated human biases, can its results be considered any more trustworthy than a human’s?

About the author: Shreya is an Employee Fellow at the Berkman Klein Center, where she works on the Lumen Project. She is a passionate digital rights activist and uses her research and writing to raise awareness about how digital rights are human rights.

Sexism in Facial Recognition Technology​ was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


DIF Blog

🚀DIF Monthly #18 (May, 2021)


📯  The last month has seen a lot of activity on interoperability (test suites getting mature and documented, with profiles on the horizon) and #IIW32 had a distinctly heads-down, "let's buidl this" vibe to match. Spring is the season for groundwork, planting, and nurturing, after all.

Table of contents: Group Updates | Member Updates | Funding | DIF Media | Jobs | Metrics | Join DIF

🚀 Foundation News

Steering Committee Elections - nominate now

We made a quick explainer on our blog to clear up the procedure in anticipation of next month's elections.
In short, if you are a DIF member and have someone in mind who should help steer DIF's governing board, send their name and email to nominations@identity.foundation.

DIF Grants

The DIF has officially launched a lightweight mini-grant program so that members (or the Steering Committee itself) can directly and transparently incentivize community contributions like editorship of a spec or implementations of specs. For more information, see our new blog.

Killer Whale Jello Bowl Death Match Redux: The Reckoning

At #IIW32, a mammoth and consequential conversation spanned four intense, jam-packed sessions and set new highs for IIW session-name silliness, culminating in a massive interoperability watershed and widespread buy-in from many communities! The details of what gets built are being worked out in a new work item of the Claims & Credentials WG, building on the recent donation to DIF of the Bloom WACI extension to Presentation Exchange. The goal is both a minimum viable implementable spec for cross-stack credential exchange ("v0.1 scope") and a longer-term commitment to a more fully-featured iteration of the same spec ("v1 scope"). Stay tuned!

IIW32 - A wave of DIF donations and debuts

The notes are now available for all the sessions. Mixed in among the usual avalanche of updates and report-outs from established subcommunities like KERI (and its new offshoot, ADPL), DIDComm, Sidetree, and SDS (see below!) were a few smaller items that our readers would overlook at their own peril: Animo (NL) gave a tour of their AcaPy extension project that bridges the divide between Aries and CCG/LD, and Bloom debuted their WACI spec mentioned above, which facilitates Presentation Exchange-based exchanges and is in the process of being donated at time of press. A couple of new projects, consortia, and startups also made their debut.

Wallet Security WG is starting.

Stay tuned and register for the mailing list

2nd Round of eSSIF-lab grants for European startups

The second rounds of both the infrastructure call and the business call will soon be open and accepting applications and inquiries!

🛠️ Group Updates

☂️ InterOp WG (cross-community)
- Nick Mothershaw (OIX) from Open Identity Exchange spoke about the guide to define, enforce and compare trust frameworks (with explicit attention to and feedback on the equivalency of and upgrade path for decentralized identity), with Pamela Dingle (Microsoft) as respondent
- VC status review and discussion
- Crowd-editing session hacking away at v1 of DIF's new public-facing, entry-level FAQ for basic SSI topics
- IIW panel and workshop scope review

💡 Identifiers & Discovery
- Updates to Universal Resolver/Registrar configuration, and work on [Helm charts](https://github.com/decentralized-identity/charts)
- [Philip Feairheller, Sam Smith] KERI-based DID methods, and use of DID Resolution metadata
- IOP Chooser
- Universal Resolver Driver Policy Discussion
- DID resolution over DIDComm to a Universal Resolver
- Review .well-known and did:web
- Breaking changes in DID Core JSON-LD context
  - Strictly speaking, all UR DID method implementations are now broken.
  - Discussion around changing JSON-LD contexts, caching, versioning, hashlinks
  - UR could implement a "fixing layer" for a transitional period, and simultaneously try to motivate implementers to fix their drivers.
- Universal Resolver policy questions about driver submission and maintenance
  - UR currently supports several DID methods that are not in the DID spec registries. Probably DIF should have stricter policies going forward (e.g. only accept drivers for DID methods in the W3C method registry); the group will generate a proposal.

🛡️ Claims & Credentials
- MetaMask Credentials - Upgrading EIP712 (Spruce + Consensys Mesh)
- WACI PE-X meetings (during IIW) and first meeting to settle basics
  - meeting notes from 26th April, pre-work item meeting
  - goal to have a spec in 1-2 months, ready to use by the Health Pass project
  - see WACI draft spec for context (Bloom)
- VC Marketplace focuses on wrap up / write-down, focusing more on use-cases
- Credential Manifest is considering multi-credential issuance; map descriptors are worked on by Bloom
- Review UVI (Essiflab)

🔓 DID Auth
- DIF-OIDF joint meetings - follow along on the bitbucket issues if the timing doesn't work for you!
- On 29th April, SIOP/DIF was represented by Kristina at the OIDF workshop.
- OIDF Special call: Support request and VC in user endpoint (through OIDC)
- Review "medi connect" options
- The call time has been modified to incorporate more timezones

📻 DID Comm
- IIW32 sessions with consequences/interest for DIDCommers:
  - Jello Bowl Death Match (DIDComm-PeX/WACI work item starting up in CC WG)
  - ECDH-1PU vs everything else (chat related discussion)
  - ECDH-ES Authenticated Encryption
  - Andrew's Idea: https://hackmd.io/gC4ItH4IQKS_at8P8RyQOQ?view
  - KID/SKID Related Topics
  - ESKID Encrypted Sender
- PRs
  - 183 - OOB Accept
  - 185 - kid and skid headers
  - 180 - Align service Type with Aries
  - 179 - Issue with commentary
  - 172 - Fix inconsistencies with to/next attributes in a forward message
  - 177 - Profiles
- Negotiation Topics
  - Complexity vs re-encryption bloat
  - 182 - Discussion
  - Issue 161 - Attachments WIP (actual PR: 174 - Encrypted Attachments)
  - 173 - accept property in service block
- Discussion Topics
  - DIDComm RecipientKeys: Signing vs Encryption
  - KERI Event Logs

📦 Secure Data Storage
- Calls are alternating weekly between the two specifications: Identity Hubs and EDVs
- SDS EDV call
  - The new EDV repo needs to be cleansed of Hubs-related work, and the same applies to the Identity Hub repo
  - Add batch operation service: batch operations are required (Derek), citing the issues with HTTP signing performance when using a remote KMS; also discussed the architecture of batch APIs and how these influence implementation complexities.
  - Add option to get full documents from queries
  - Proposed: new features to be merged into the spec, if no objection, but marked with "at risk" until at least 2 implementations are confirmed

🔧 KERI
- Agreement on ADPL and KEL: restore anchor to inception (required for NFT, essential to TrustFrame).
- Use a more generic term in spec documentation for external support infrastructure: 'endorser' instead of the original 'witness'. This does not require changing abbreviated labels in existing code/prefix tables. Should we genericise the label? Compact labels already consume e, s (for endorser, supporter); the synonym backer, b, is not consumed. Otherwise leave as w.
- Restore External Content Anchoring to Inception Event #140
- Revised KSN key state notification message #130
- Roadmap: witness support code in KERIpy; ready to test build demo.
- Added repo for keri-dht-py; Conrad Rosenbrock, contributor.
- did:keri method spec draft
- did:peer is an intermediate step until KERI direct mode is live

⚙️ Product Managers
- Workday education wallet demo
- History of changes that led to the current key management profile
- Q&A on roadmap, VC issuance, formats etc.

🪙 Finance & Banking SIG
- Alex David, Global Business Development Manager @ Raon
- Adrian Doerk @ Main Incubator
- Relevant projects list Wiki

🏥 Healthcare SIG
- Still hibernating: contact the interim chair to get involved if you would like to wake the sleeping giant from its slumber!

🦄 Member Updates

DIF Associate members are encouraged to share tech-related news in this section. For more info, get in touch with operations.

Affinidi: Free webinar on the topic of Verifiable Credentials, presented by Affinidi and KILT Protocol, organized by BerChain.

eSSIF LAB (EU) - Infrastructure-Oriented call

Infrastructure-Oriented Open Call, with grants of up to 155 000 € (9-month projects). The call is open to European innovators and focuses on the development and interop testing of open-source SSI components. Some examples of SSI components include wallets, server proxies, revocation, cryptographic enforcer policies, integration, interoperability, and compatibility, just to name a few. Please note the final-round deadline: 7th July 2021, 13:00 CET (Brussels local time).

Apply here

eSSIF LAB (EU) - First Business-oriented Call

Business-Oriented Open Call, with grants of up to 106 000 €. The call is open to European innovators and focuses on extending the eSSIF-Lab basic infrastructure/architecture with business solutions that make it easy for organizations to deploy and/or use SSI, reduce business risks, facilitate alignment of business information, etc. Please note the final round opens 7th July 2021, 13:00 CET (Brussels local time).

Apply here

Other NGI Open Calls (EU)

Funding is allocated to projects using short research cycles targeting the most promising ideas. Each of the selected projects pursues its own objectives, while the NGI RIAs provide the program logic and vision, technical support, coaching and mentoring, to ensure that projects contribute towards a significant advancement of research and innovation in the NGI initiative. The focus is on advanced concepts and technologies that link to relevant use cases, and that can have an impact on the market and society overall. Applications and services that innovate without a research component are not covered by this model. Varying amounts of funding.

Learn more here.

🖋️ DIF Media

Setting Interoperability Targets

Our short-term roadmaps need testable, provable alignment goals that we can all agree on, so that our little communities and networks of technological thinking can converge gradually. In the newest DIF blog post, the Interop WG chairs give an overview of today's rapidly maturing landscape of test suites, proto-trustmarks, alignment roadmaps, and the like.

Introducing DIF Grants

DIF is kicking off a program to administer narrowly-scoped financial support for community initiatives, ranging in format from grants to more competitive implementation bounties, hackathon-style open collaborations, and security reviews. Keep reading

Steering Committee Elections are just around the corner!

We made a quick explainer on our blog to clear up the procedure in anticipation of next month's elections. Nominations are still open so take some time to consider getting more involved in the decentralized work of steering this ship of {'foo':"bar"}s!

DIF FAQ is online and accepting issues/PRs

In the spirit of complementarity with CCG's educational & onboarding efforts, DIF staff and volunteers from the Interoperability WG have been working for months on setting up a Frequently Asked Questions page built using DIF's in-house specification authoring tool, Spec-Up. If you would like to donate a question (and/or an answer!) please create an issue on the github repo for the faq.

🎈 Events

Identiverse 2021
June 21 - 23, 2021: Hybrid Experience
June 23 - July 2, 2021: Continued Experience

Check out the Agenda of Identiverse 2021!

💼 Jobs

Members of the Decentralized Identity Foundation are looking for:

Software engineer (remote)
SDK developer (Berlin, DE)

Check out the available positions here.

🔢 Metrics

Newsletter: 4.5k subscribers | 32% opening rate
Twitter: 4,456 followers | 7.99k impressions | 2,323 profile visits
Website: 22,836 unique visitors

In the last 30 days.

🆔 Join DIF!

If you would like to get involved with DIF's work, please join us and start contributing.

Can't get enough DIF?
follow us on Twitter
join us on GitHub
subscribe on YouTube
read us on our blog
or read the newsletter archives

Got any feedback regarding the newsletter?
Please let us know - we are eager to improve.


Commercio

5 things Commercio.network’s blockchain allows you to do and 8 big problems you can solve immediately 


Commercio.network is a Blockchain, open to 250 million companies, which helps create electronic identities with the Self-Sovereign Identity paradigm and exchange data and documents using this electronic identity. In the long run, the network will be composed only of "trusted" companies, and this can help form the basis for a system of trust between companies, enabling the development of new applications never thought of before.

Commercio.network has been designed to allow companies to:

- Exchange data and documents through a system that certifies both the identity (DID) of the parties involved and the transaction (SHAREDOC)
- Guarantee the integrity of the document and therefore its immutability thanks to the fingerprint that each document possesses (HASH)
- Create a secure, non-traceable connection between two companies (PAIRWISE)
- Sign documents via Blockchain to certify the origin (SIGN)
- Time stamp the transaction of documents on the Blockchain for possible future verification by a third party (SHAREDOC)

The use cases that these functions enable are countless; here are a few examples:

Digital Identity Management

Use digital identity services to meet regulatory requirements, prevent fraud and improve the overall customer experience

Loyalty card management

Enable customers to earn and redeem loyalty points both internal and external to the organization’s Loyalty ecosystem

Employee Documentation

Bypass delays associated with employee document transfers

Reporting and Regulatory Compliance

Store financial information to eliminate errors associated with manual review activities, reduce reporting costs, and support broader regulatory activities

Trade Finance Management

Simplify and shorten the trade finance process, drive efficiency gains and open up new financing product opportunities

Insurance Underwriting Management

Verify identities, ensure applications are complete, assess risk, and complete quotation and binding

Customer Onboarding

Improve the customer Onboarding experience by leveraging digital identities on Blockchain

Transaction Clearing

Decentralized settlement of transactions through a multi-signature escrow entity enabling faster settlement

 

The article "5 things Commercio.network's blockchain allows you to do and 8 big problems you can solve immediately" appeared first on commercio.network.


Velocity Network

Blockchain and the Decentralised Workforce

Andy Spence of the Workforce Futurist Newsletter attended the Velocity Network Foundation's launch event - Unleashing the Internet of Careers®. Read more about his takeaways. The post Blockchain and the Decentralised Workforce appeared first on Velocity.

Tuesday, 04. May 2021

Me2B Alliance

Me2BA Product Testing Spotlight Report Published: Data Sharing in Primary & Secondary School Mobile Apps

60% of School Apps are Sending Student Data to Potentially High-risk Third Parties Without Knowledge or Consent According to New Research from Me2B Alliance
Research uncovers disturbing findings around privacy concerns with the use of school applications

What you need to know:

- 60% of school apps were sending student data to a variety of third parties, including advertising platforms like Google and Facebook
- On average, there were more than 10 third-party data channels per app
- Public-school apps are more likely to send student data to third parties than private-school apps (67% public vs. 57% of private school apps)
- 18% of public-school apps included very high-risk third parties – i.e., third parties that further share data with possibly hundreds or thousands of networked entities
- Android apps are much more likely than iOS apps to be sending data to third parties, and are much more likely to be sending to high or very high-risk third parties

Me2B Alliance, a non-profit industry group focused on respectful technology, today published a research report to drive awareness to the data sharing practices of education apps associated with schools and school districts. According to the research findings, 60% of school apps were sending student data to a variety of third parties, including advertising platforms like Google and Facebook.

The Me2B Alliance Product Testing team audited a random sample of 73 apps from 38 schools in 14 states across the U.S., covering over half a million people (students, their families, educators, etc.). The audit methodology mainly consisted of examining data flow from the apps to external third-party vendors. The report, “School Mobile Apps Student Data Sharing Behavior,” is available for download at no charge.

Most mobile apps are built with software development kits (SDKs), which provide developers with pre-packaged functional modules of code and the potential of creating persistent data channels directly back to the third-party developer of the SDK. As part of the analysis, the magnitude of third-party data sharing in educational apps, as evidenced by the number of SDKs included in apps, was examined.

Key takeaways from the report:

- There is an unacceptable amount of student data shared with third parties – particularly advertisers and analytics platforms – in school apps.
- School apps – whether iOS or Android, public or private schools – should not include third-party data channels.
- iOS apps were found to be safer than Android apps, and with ongoing improvements, the "privacy gap" will widen unless Google makes some changes.
- People still have too little information about which third parties they're sharing data with, and the app stores (Apple and Google Play) must make this information more clear.

“The findings from our research show the pervasiveness of data sharing with high-risk entities and the amount of people whose data could be compromised due to schools’ lack of resources,” said Lisa LeVasseur, executive director of Me2B Alliance. “The study aims to bring these concerns to light to ensure the right funding support and protections are in place to safeguard our most vulnerable citizens – our children.”

Download the report, “School Mobile Apps Student Data Sharing Behavior,” at no charge. School systems looking for more information can contact admin@me2ba.org. Organizations interested in advancing standards in ethical data and mobile and internet practices can visit the website to learn more about Me2B Alliance membership.

About the Me2B Alliance
The Me2B Alliance is a nonprofit fostering the respectful treatment of people by technology.  We’re a new type of standards development organization – defining the standard for respectful technology. Scenarios like the ones described in this report – where user data is being abused, even inadvertently – highlight the types of issues we are driven to prevent through independent testing, as well as education, research, policy work, and advocacy.

Press

Friday, May 7.

It was 7:30 Tuesday morning when the Me2B Alliance published its new Spotlight Report on School Mobile Apps Data Sharing Behavior. By lunchtime the nonprofit's research was being reported in the national news. The Washington Post, Gizmodo, Apple Insider, American Online News, AdExchanger, and MSN picked up the story, as did a growing list of other carriers (links below), resulting in a boost to the visibility of the Me2B Alliance's findings about the widespread and risky sharing of students' personal information by schools' utility apps.

Publication | Title
24×7 Guru | Android apps 8 times more 'dangerous' than iOS apps for students, claims study
74million | 60% of School Apps Are Sharing Kids' Data With Third Parties
9to5Mac | Report: Android apps send student data to 'very high-risk' third parties 8x more often than iOS
AdExchanger | 60% Of School Apps Are Improperly Sharing Student Data With Third Parties
American Online News | 60% of School Apps Are Sharing Your Kids' Data With Third Parties
Apple Insider | Of the 60% of school apps sharing data, Android versions much worse than iOS
Canada Free News | Research: 60% of School Apps Sending Student Data to Third Parties Without Parental Consent
Digital Information World | In comparison with iOS, Android apps are 8 Times more responsible for sharing student data with high risk third parties
EconoTimes | Android, iOS student apps associated with public schools more likely to contain ad SDKs and share data with third parties
Future Pro Tech | Android Apps are Sharing Data to High Risk Parties 800% More Than iOS
Gizmodo | 60% of School Apps Are Sharing Your Kids' Data With Third Parties
Gizmodo AU | 60% of School Apps Are Sharing Your Kids' Data With Third Parties
iPhone Hacks | Education Apps Send Data to 'Very High Risk' Third Parties 8 Times More on Android As Compared to iOS
Laptop Magazine | School mobile apps are targeting your kids — 67% send their private data to third parties
MakeUseOf | Report: Most Educational Apps in US Schools Send Data to Third Parties
MSN | 60% of School Apps Are Sharing Your Kids' Data With Third Parties
Notebook Check | New study finds 60% of apps used by U.S. schools share student data with third parties, sometimes without the users' knowledge
Root Android | Report: Android apps send student data to 'very high-risk' third parties 8x more often than iOS
Softpedia | 60% of U.S. School Apps Disclose Collected Data Without Permission
Tech Times | Android School Apps More Likely to Share Data to Third-Parties Than iOS Version, Research Shows
TechJuice | Android Apps Share More Data To Third Parties Than iOS
The Cyberwire | Roundup: Pulse Secure VPN patched. Scripps Health's IT security incident. Android banking Trojans. Cyber threats to the Tokyo Olympics.
The Record by Recorded Future | Most K-12 apps send kids' personal info to advertisers and other third parties, study finds
The Register | American schools' phone apps send children's info to ad networks, analytics firms
The Washington Post | Cybersecurity 202 Newsletter: The Biden administration will prioritize cybersecurity in the distribution of $1 billion in federal IT funding // Sixty percent of education apps used by schools were sending data to third-party advertising platforms, researc
WccfTech | Android Apps Used in Schools are Sending Student Data to High-Risk Third Parties 8 Times More than iOS


Velocity Network

SumTotal – Future Ready Workforce for Compliance Driven Organizations using Blockchain

Together with the Velocity Network, SumTotal Systems is ensuring the healthcare workforce is future ready. Watch this video for more information. The post SumTotal – Future Ready Workforce for Compliance Driven Organizations using Blockchain appeared first on Velocity.

Monday, 03. May 2021

Commercio

Token and Cryptocurrency Exchanges 


Token and Cryptocurrency Exchanges

There are two types of token exchanges: centralized exchanges (CEX) and decentralized exchanges (DEX).

A Centralized Exchange (CEX) works like a classic brokerage: you deposit funds into an account and the exchange does the buying and selling for you. The advantage is that the exchange does all the work, and it is often insured and regulated by the authorities. Most exchanges, such as COINBASE and BINANCE, are centralized. One advantage of these exchanges is that they accept credit or debit card payments and bank transfers. They can also pay out in fiat currencies, such as dollars or euros, which many users prefer.

A Decentralized Exchange (DEX) is a marketplace for cryptocurrencies and tokens that is open to everyone. No one is in control of a DEX; people buy and sell on an individual basis through peer-to-peer trading applications. One way to think of a DEX is as a "do-it-yourself" trading solution: you make the trades, and the funds move directly out of your own wallet. The biggest advantage of this system is that your funds never have to be entrusted to a trading firm or other third party and always remain in the wallet of the person making the transaction. These exchanges operate exclusively with digital currency.

 Advantages of a DEX over a CEX 

- A DEX can be more resistant to hacking than a CEX because account information is not shared with the exchange operator: funds may be held in your account and you will be the only person with access.
- Theoretically, governments or regulators cannot shut down a DEX because it is decentralized, operating across a wide variety of nodes. A DEX operates across the cloud through a variety of nodes and there is no single server that can be blocked or hacked.
- There is a higher degree of privacy on a DEX because you are not sharing data with the operator.
- On a DEX you maintain control of your funds in your personal account.
- A DEX can be faster because you do the transactions yourself.

Disadvantages of a DEX over a CEX 

- Funds are not regulated or insured. (Regulated exchanges may be required to return your money at any time, so they keep your funds in escrow for quick withdrawals.)
- Most DEXs do not accept credit card, debit card or wire transfer payments.
- Trading volume is limited, which can keep prices low and fees high.
- Services available from DEXs are limited: margin trading, stop loss and trades involving fiat currencies are not offered.
- There may be no customer service to contact when there is a problem.
- A DEX can be much more expensive than a CEX because you may need to buy Ethereum Gas (ETH) every time you make a transaction.

Commercio.network is planning to implement its own DEX solution that will allow users to trade fungible and non-fungible tokens (NFTs), through a self-managed wallet, using a Cosmos protocol called IBC, through which you can access a worldwide pool of liquidity and users interested in trading tokens.

 

The article "Token and Cryptocurrency Exchanges" appeared first on commercio.network.


Me2B Alliance

School Mobile Apps Student Data Sharing Behavior

DOWNLOAD PDF

Me2B Alliance Product Testing Report:
School Mobile Apps Student Data Sharing Behavior

Research Performed by Zach Edwards and Lisa LeVasseur
Written by Lisa LeVasseur, Zach Edwards, Karina Alexanyan
Contributors:  Eve Maler, Shaun Spalding, Andrea Ausland

May 4, 2021

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/

1       ABSTRACT

The Me2B Alliance Product Testing team audited and analyzed a random sample of 73 mobile applications used by 38 schools in 14 states across the U.S., covering at least half a million people (students, their families, educators, etc.) who use those apps. The audit methodology mainly consisted of examining data flow from the apps to external third-party vendors, by evaluating the SDKs included in each app. This report details and summarizes the audit findings.

The analysis found that the majority (60%) of school apps were sending student data to a variety of third parties. These included advertising platforms such as Google, to which about half (49%) of the apps were sending student data, as well as Facebook (14%). On average, each app sent data to 10.6 third-party data channels.

Two thirds (67%) of the public schools in the sample were sending data from apps to third parties. This finding is particularly troubling since public schools most likely utilized public funding to develop or outsource the apps – meaning that taxpayers most likely paid to fund apps that are sending student data to online advertising platforms.  Moreover, public schools were more likely to send student data to third parties than private schools (67% vs. 57% of private school apps).

Another disturbing public-school finding: 18% of public-school apps sent data to what the Me2B Alliance deems very high-risk third parties – i.e., entities that further share data with possibly hundreds or thousands of networked entities. Zero private school apps in this study sent data to any very high-risk third parties.

The research also showed that Android apps are three times more likely than iOS apps to be sending data to third parties, and are much more likely to be sending data to high or very high-risk third parties: 91% of Android apps send data to high-risk third parties compared to only 26% of iOS apps, and 20% of Android apps sent data to very high-risk third parties, compared to 2.6% of iOS apps.

Additionally, while not examined in detail, the analysis confirmed that the data sent to third parties typically included unique identifiers (through Mobile Advertising Identifiers, or MAIDs), thus enabling profile building for students – including those under the age of 13 – by third-party advertising platforms. Apple’s new AppTrackingTransparency framework[1] and changes to its incumbent IDFA (Apple’s mobile Identifier For Advertisers) system reduce the risk of the profile building that’s described in this research.  This change increases the “respectfulness gap” between iOS and Android apps, although it may not fully remove the risk of profile building.

Also troubling is that the analysis found data being sent to third parties as soon as the app is opened by the user – even if they are not signed into the app.  In most apps, third-party data channels initiated initial data transfers and ID syncs as soon as the app is loaded.

The researchers estimate that upwards of 95% of the third-party data channels are active even when the user isn’t signed in[2].

Our research did not include a deeper look into the third parties to understand whether or not these entities were taking appropriate care of student data, in particular for children under the age of 13. This is important in light of the Children's Online Privacy Protection Act of 1998 (COPPA), which outlines requirements for the handling of personal information of children under the age of 13. 85% of the schools included in this analysis have students under the age of 13.

Further, neither the Google Play Store nor the Apple App Store include details on which third parties are receiving data, leaving users no practical way to understand to whom their data is going, which may well be the most important piece of information for people to make informed decisions about app usage.

Professional organizations appeared to have created 99% of the apps and only 1 app appeared to be “home grown” based on developer metadata, but the latter could have been developed by professional organizations or contractors who used the school’s iOS developer account to upload the apps. 77 percent of the apps were built by six educational app companies.

The analysis also examined average app ages to determine if privacy practices and notifications were current. The average age of apps in the study was 11.6 months – apps were being updated roughly annually. It should be noted that at the time of this research, 75% of the iOS apps in the study had not been updated since December 2020 –  when the Apple App Store’s new Privacy Labels began to be required – and therefore didn’t include a Privacy Label.

Finally, in the course of the research it was observed that three schools (8% of those studied) offered only iOS apps.  Given the price difference between Apple and Android devices, there is a small concern that this practice could leave some families behind, possibly exacerbating the “digital divide”.

This research is intended to illuminate the pervasiveness of data sharing with high-risk entities in order to effect change in app development practices, app notification practices, and ultimately to provide policy makers with information to ensure the right funding support and protections are in place to protect our most vulnerable citizens – our children.

The key takeaways are:

- There is an unacceptable amount of student data sharing with third parties – particularly advertisers and analytics platforms – in school apps.
- School apps – whether iOS or Android, public or private schools – should not include third-party data channels.
- iOS apps were found to be safer than Android apps, and with ongoing improvements the "privacy gap" between iOS and Android apps is expected to widen unless Google makes some changes.
- People still have too little information about which third parties they're sharing data with, and the app stores (Apple and Google Play) must make this information clearer.

[1] “Apple Launches the Post-IDFA World to the Dismay of Advertisers”, Venture Beat, April 21, 2021, Dean Takahashi, https://venturebeat.com/2021/04/21/apple-launches-the-post-idfa-world-to-the-dismay-of-advertisers/

[2] “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret”, New York Times, December 10, 2018, Jennifer Valentino-DeVries, et al.  https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html

2       INTRODUCTION

The Me2B Alliance is a Standard Development Organization (SDO) establishing the Me2B Respectful Tech Specification, to ensure that technology treats people right.

In addition to supporting the creation of the specification, the Me2B Alliance (Me2BA) performs independent product testing using the principles in the specification in order to illuminate risks and harms in the behavior of connected technology. This testing lets people (“Me-s”) make safer technology choices, and “B-s” (makers of technology) build safer, more respectful technology. From the Alliance’s product testing experience, it is often the case that makers of technology may be unaware of some of the downstream behavior of integrated technologies or partners.  In short, technology is increasingly complicated and difficult to know with specificity.

We recognize that school administrators and decision-makers for school app choices may not be technical experts and may rely on the expertise of software suppliers.  Our intention in publishing this research is to drive awareness across the spectrum of students, parents, teachers, administrators, local governments, policy makers, and makers of educational software, for a safer society. As a 501c3 non-profit in the U.S., we are available to assist any concerned stakeholder in navigating what’s happening “under the hood”, in order to keep everyone – and especially our children – safe in the digital world.

3       METHODOLOGY

The research focuses on the data sharing practices of education apps that were associated with a school or school district. This study included a random sampling of 73 apps – usually an Android and Apple pair (except for three schools which used only Apple apps) – from 38 public and private schools in 14 states across the U.S., including California, Hawaii, Kansas, Maine, Massachusetts, Minnesota, Mississippi, Nebraska, Oregon, South Dakota, Tennessee, Vermont, Virginia, and Washington.

This research reflects a small subset of the tests included in the Me2B Respectful Tech Specification. This subset is part of the Data Integrity Tests, which focus on the data flow happening “under the hood” of technology.

For our research, the Alliance utilized tools from AppFigures.com, an analytics firm which provides a database of software development kits (SDKs), permissions, and other data about mobile apps across all the major app stores. Figure 1 below provides an example of the SDK information provided by AppFigures.


Source: Appfigures.com

Figure 1: Sample SDK information used in the research showing an app with multiple SDKs and an app with no SDKs.

3.1     SDK Analysis

Most mobile apps are built with SDKs, which provide app developers with pre-packaged functional modules of code, along with the potential of creating persistent data channels directly back to the third-party developer of the SDK. SDKs almost always start running “behind the scenes” as soon as a user opens a mobile app – without the express consent of the user. These SDK providers use this data for a variety of reasons, from performing vital app functions to advertising, analytics and other monetization purposes.

In Me2B vernacular, third-party SDKs are “Hidden B2B Affiliates”, i.e., they are suppliers to the app developer, with whom the user doesn’t have a direct relationship, but the app (and the app developer) does.  The user has a Me2B relationship with the app developer, as memorialized by the acceptance of the app’s Terms of Service or Terms of Use.  The user also has a Me2P (Me-2-Product) relationship with the app itself. The Me2P relationship is the ongoing relationship between the user and the technology. Not only does the user not have a direct (Me2B) relationship with the SDK providers, app users typically have no way to even know who the Hidden B2B Affiliates are. Users are unwittingly in Me2P relationships with SDKs.

A crucial part of the research methodology was to study both the number and the type of SDKs included in the mobile apps.  In particular, SDKs were categorized based on their potential for harm (i.e., abuse or exploitation of information).  AppFigures has a list of 25 SDK categories, and the heuristic for category assignment is sometimes unclear[1].  The original 25 categories used by AppFigures were condensed into three categories: Utility, Analytics, and Advertising (or combinations thereof).  Each SDK was classified into one or more of these three categories based on the nature of the underlying business model of the SDKs in the category.  If, for example, at least one of the SDKs listed in the original AppFigures category performs a function related to advertising, the group was designated as Advertising.

Additionally, one of three risk attributions (medium, high, or very high) was assigned to each category based on the prevailing behavior of the SDKs in the category, as well as the potential harm caused by an SDK's abuse or exploitation of information.

Utility SDKs perform functions necessary to deliver expected behavior to the app user. In our analysis, these SDKs are lower risk. But given the potential for information exploitation or abuse in virtually any SDK – particularly given the age of the app users in this analysis – none of the Utility SDKs can be considered without potential for harm.  This category of SDKs is designated as medium risk.

Analytics SDKs collect behavioral analytics to be used by the app developer (first party), or by the SDK developer (third party), or passed along to other third parties. Analytics SDKs are designated as high-risk because they often either uniquely identify (fingerprint) individuals or include other potential for data exploitation. Many analytics SDKs either directly support advertising networks or have advertising network partners and their data should be assumed to be associated with online advertising.

Advertising SDKs explicitly perform digital advertising functions, and include several of the AppFigures designations, such as Attribution, Deep Linking, Engagement, Insights, Payments and Location, among others.  All Advertising SDKs are designated as high-risk – particularly in this analysis of educational apps.  Data going to advertisers can include a “unique identifier” to uniquely identify the individual (i.e., student or parent), tracking personally identifying information such as the user’s name, email address, location and device ID use across multiple apps.

Additionally, there is one higher level of risk, called very high-risk.  SDKs receive this risk attribution level if:

- AppFigures considers it an Advertising & Monetization SDK, or
- We consider it an "advertising platform". For instance, we consider Doubleclick, which AppFigures designates as a Utility, to be an advertising platform, or
- The SDK appears in either the California[2] or the Vermont[3] registries of Data Brokers. Two SDKs were found in the California Data Broker registries: AdColony and InMobi.

Very high-risk SDKs routinely sync to dozens, if not hundreds or thousands, of additional partners through a complex supply side network[4] while leveraging unique mobile advertising identifiers that allow all participants in the network to create unique profiles for people, tracking people and their information across services and devices:

The Google Play Android Advertising ID is a unique identifier assigned to every Android device. According to Google, this identifier allows “ad networks and other apps anonymously identify a user”. This unique identifier is akin to a resettable serial number assigned to a user’s device, and consequently referring to the user of the device. The Advertising ID is available to all apps on the device without requiring any special permissions or consent from the user. This is often used by adtech companies to link digital profiles in order to track consumers across services and devices.[5]

As a final assessment, the high-level functions of each SDK were examined to validate the risk attribution used in the analysis.
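To make the heuristic above concrete, here is a minimal sketch of how such a risk attribution could be expressed in code. It is illustrative only and is not the Alliance's actual tooling; the SDK names and category strings are hypothetical placeholders standing in for data pulled from an app-intelligence source such as AppFigures.

```python
# Illustrative sketch of the risk-attribution heuristic described in Section 3.1.
# Not the actual analysis code; SDK names and categories are placeholders.

ADVERTISING_PLATFORMS = {"DoubleClick"}        # SDKs we treat as advertising platforms
DATA_BROKER_REGISTRY = {"AdColony", "InMobi"}  # SDKs found in the CA/VT data broker registries

def risk_attribution(sdk_name: str, category: str) -> str:
    """category is the condensed label: 'Utility', 'Analytics', 'Advertising',
    a combination of those, or AppFigures' own 'Advertising & Monetization'."""
    if (category == "Advertising & Monetization"
            or sdk_name in ADVERTISING_PLATFORMS
            or sdk_name in DATA_BROKER_REGISTRY):
        return "very high"
    if "Advertising" in category or "Analytics" in category:
        return "high"
    return "medium"  # Utility SDKs: lower risk, but never risk-free

# Examples of how the rules compose:
print(risk_attribution("SomeCrashReporter", "Utility"))   # medium
print(risk_attribution("SomeAnalyticsKit", "Analytics"))  # high
print(risk_attribution("InMobi", "Advertising"))          # very high
```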

[1] For instance, Gigya, defined in Wikipedia (https://en.wikipedia.org/wiki/Gigya) as a “customer identity management” company, is categorized in AppFigures as a Mapping SDK.

[2] Data Broker Registry | State of California – Department of Justice – Office of the Attorney General

[3] Corporations Division (vermont.gov)

[4] “Out of Control: How consumers are exploited by the online advertising industry”, Consumer Council of Norway, January 14, 2020, pp. 35-39. https://fil.forbrukerradet.no/wp-content/uploads/2020/01/2020-01-14-out-of-control-final-version.pdf

[5] Ibid., p. 28.

4       FINDINGS

Our analysis examined the following issues in order to identify potential trends and patterns:

- The information being shared,
- Magnitude of third-party data sharing as evidenced by the number of SDKs included,
- Detailed analysis of third-party data sharing behavior, including
  - External data sharing behavior trends across operating system/platform (iOS / Android),
  - External data sharing behavior trends across school types (public / private),
- The riskiness of external data sharing as evidenced by the nature of the SDKs included – in particular, examining trends related to the risk attribution for each SDK,
- Which third parties are receiving data,
- The average age of apps,
- External data sharing disclosures/labels, and
- Schools/districts with only iOS versions of apps.

4.1     What Information Is Being Collected and Shared?

Most, if not all, of the apps seem to be uniquely identifying the user of the app through account creation, in order to send messages, upload photos, or process payments, which means the app was collecting personally identifying information (PII) such as name, age, and other information.

The Android Play Store lists all of the requested permissions by each Android app.  Most (nearly all) of the examined Android apps were designed to access the following information on the device:

- Identity – known in the adtech industry as a Mobile Advertising Identifier (MAID); Android refers to this as an "Ad ID" and iOS refers to it as an "IDFA"
- Calendar
- Contacts
- Photos/Media/Files
- Location
- USB Storage

Several apps were accessing:

- Camera
- Microphone
- Device ID & Call Information

Additionally, virtually all apps also access network data by default. An IP address is a field of data that is hard for devices to block, and while there are permissions for expanded network access (devices on the network, access to custom ports, etc.), some version of network data should always be assumed to be accessed (and potentially shared with third parties).

When the user grants permission to the requested information (listed above), it becomes accessible to both the app and all of its SDKs. Thus, it is information the app and SDKs can access, though not a guarantee that they do.

This qualitative review of the types of information collected and generated by the apps suggests that there is potential for a large amount of information that could be shared with third parties.

4.2     Magnitude of Data Sharing in Educational Apps

4.2.1   Most apps shared data, averaging over 10 SDKs per app

- Most (60.3%) of the apps (44 out of 73) had at least one SDK installed and were sharing private student data with third parties. The 44 apps that had SDKs included a total of 486 SDKs.
- Of the apps that included SDKs, the average number was 10.6 SDKs per app. (See Figure 2.) Note that multiple SDKs can be from a single developer, so this number is not equivalent to the number of third parties with whom data is being shared, which is lower.
- Private school apps that included SDKs had a higher average number of SDKs per app (13.8) as compared to public-school apps with SDKs (10.8). (See Figure 2.)
- Android apps included a significantly higher average number of SDKs per app (13.4) as compared to iOS apps (8.5), which is a typical pattern seen generally when comparing Android and iOS apps. (See Figure 2.)

Figure 2: Average Number of SDKs per App with at least one SDK, by App Type

Frequently – about 20% of the time – there were more than 15, and sometimes as many as 20 SDKs in a single app. The histogram in Figure 3 shows the distribution of apps by number of included SDKs. 29 apps didn’t include any SDKs. (Figure 3.)

Figure 3:  Number of SDKs per App Histogram
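The following is a minimal sketch of how per-app figures like those above (the share of apps with SDKs, the average SDKs per app, and the platform breakdown) could be derived from audit data of this shape; it is not the actual analysis pipeline, and the records shown are hypothetical stand-ins for the 73 audited apps.

```python
# Minimal sketch (not the actual analysis pipeline) of computing the summary
# statistics reported above from per-app audit records.
from statistics import mean

# Hypothetical records standing in for the audited apps.
apps = [
    {"platform": "Android", "school": "public",  "sdks": ["AdMob", "Firebase", "Flurry"]},
    {"platform": "iOS",     "school": "public",  "sdks": []},
    {"platform": "Android", "school": "private", "sdks": ["Facebook", "DoubleClick"]},
]

with_sdks = [app for app in apps if app["sdks"]]
share_with_sdks = len(with_sdks) / len(apps)            # e.g. 60.3% in the study
avg_sdks = mean(len(app["sdks"]) for app in with_sdks)  # e.g. 10.6 in the study

def avg_sdks_for(key, value):
    """Average SDK count among SDK-including apps matching one attribute."""
    counts = [len(app["sdks"]) for app in with_sdks if app[key] == value]
    return mean(counts) if counts else 0.0

print(f"{share_with_sdks:.0%} of apps include at least one SDK")
print(f"average SDKs per app (apps with SDKs): {avg_sdks:.1f}")
print(f"Android average: {avg_sdks_for('platform', 'Android'):.1f}")
print(f"public-school average: {avg_sdks_for('school', 'public'):.1f}")
```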

4.3     External Data Sharing Behavior

Figure 4: App Data Sharing Behavior by Type of App

Of the 44 apps that were sharing data with third parties, 73% were Android apps.

Figure 5:  All Apps with SDKs by Operating System

Of the 44 apps that were sharing data with third parties, two-thirds (66%) were public-school apps.

Figure 6:  All Apps with SDKs by School Type

4.4     Appropriate Care of Children and Their Data

Of the schools in our study, 85% have students under the age of 13.  This analysis doesn’t afford a way to know how any of these third-party SDKs are tracking and handling the information of children under the age of 13.  Additionally, this research did not explore the permission management practices between the app developer and the companies providing the SDKs, and if permission controls flow down from the first-party app to the third parties (prior to the app’s support of Apple’s ATT framework). Based on our findings, the extensive external data sharing behavior found in our analysis must be assumed to include personal data from children under the age of 13.

Another critical issue with school apps is that quite often, the school/district is the client to the app developer, and several of the app developer Privacy Policies assert that the student information and the control of it belongs to the institution – not the app developer. This is confusing, since [at least in the vernacular of the GDPR], the app developer seems to be the Data Controller.

Encouragingly, Google has a Family Ads Program[1] as a part of their actions to build a safer Google Play store for kids[2].  However, three SDKs designated very high-risk in this study (AdColony, AdMob, and InMobi) and one high-risk (Flurry) are included in the current list of self-certified “family friendly” ad SDKs, and the list hasn’t been updated in a couple of years.  Additionally, IronSource, Vungle, Unity and Chartboost were all named SDK Developer Defendants (see Section 4.7) in the recent settlement in Northern California, which should give Google sufficient incentive to overhaul its Family Ads Program.

[1] https://support.google.com/googleplay/android-developer/answer/9283445

[2] https://android-developers.googleblog.com/2019/05/building-safer-google-play-for-kids.html

4.5     External Data Sharing Behavior Trends Across Operating System/Platform

In addition to the previously mentioned fact that Android apps have a higher number of SDKs per app than iOS apps, there are additional observations between iOS and Android apps.

iOS apps are much more likely to include no SDKs – i.e., to be “clean” with respect to data sharing – than Android apps. 68% of all iOS apps had no external data sharing, whereas 91% of all Android apps had external data sharing. (See Figure 7.) This reflects the reality that Android, as a development platform, relies upon a number of external SDKs to facilitate app development.  These findings reflect the expected pattern between iOS and Android apps.

Figure 7: Data Sharing:  iOS and Android

4.6       External Data Sharing Trends Across School Types

64% of public-school apps included SDKs, whereas only 54% of private-school apps did.  (But as noted earlier, private-school apps that included SDKs had more of them, on average.)

Most likely, taxpayer funds were involved in financing public-school apps. Thus, taxpayers may unwittingly be supporting digital advertising businesses.

Figure 8: Data Sharing:  Public vs. Private Schools

4.7     App Sharing Behavior by Risk Attribution of SDK

As noted earlier, there were 486 total instances of SDKs present in the 73 apps studied (with 44 apps including one or more SDK). Figure 9 shows the distribution of those 486 SDKs by risk attribution. The majority (58%) of SDKs included in apps studied were designated as high-risk.  

Figure 9:  SDKs by Risk Attribution

Nearly 20% of apps that included SDKs included very high-risk mobile advertising platforms. (See Figure 10.) 95.5% of apps that included SDKs included high-risk SDKs, sharing data with high-risk third parties – advertising and advertising-related platforms, including Google, Facebook, Yahoo, and Twitter.

Figure 10:  Risk Attribution Analysis Across Apps with SDKs

Figure 11, below, shows the percent of Android/Apple apps that contain medium/high/very high-risk SDKs, as a percent of the total number of Android or Apple apps that include SDKs.

Android apps are about 8 times more likely than iOS apps to include very high-risk SDKs. Android apps are 3.5 times more likely than iOS apps to include high-risk SDKs.

Figure 11:  Risk Analysis of SDK-including Apps by Operating System

Similarly, Figure 12, below, shows the percent of public/private-school apps that contain medium/high/very high-risk SDKs, as a percent of the total number of public/private-school apps that include SDKs.

Very high-risk SDKs were found only in public-school apps; nearly 20% of public-school apps that had SDKs included very high-risk SDKs. While very high-risk SDKs were found only in public-school apps, public- and private-school apps are comparable in the likelihood of including high-risk SDKs.

Figure 12:  Risk Analysis of SDK-including Apps by School Type

4.8     What Third Parties are Receiving Student Data?

The apps in our study used 56 unique SDKs.

Of those SDKs, the owners of the most SDKs were Google (12), Facebook (7), Apple (7), Amazon (3), Square (3), Twitter (2) and Adobe (2).

SDKs owned by Google and Facebook were included 306 times in our studied apps.  Said another way, 63% of all SDKs used by the studied educational apps were owned by either Google (48.6%) or Facebook (14.4%).  Additionally:

100% of the Android apps that included SDKs sent data to Google.
9 (12%) of the apps shared data to Google’s AdMob SDK, sending personal information to Google’s mobile advertising products.
At least 17 (23%) of apps with Google SDKs had six or more Google SDKs installed, including Google Maps, Google Sign-In, AdMob, Tag Manager and other Google products. One of Google’s SDK products in the school apps is Fabric, which was previously owned by Twitter.
19 (22%) of the apps shared data to Facebook, with data going to 4 or more Facebook SDKs including Facebook Analytics, Facebook Share, Facebook Login and Facebook Bolts SDKs.
5 (7%) of the apps shared data to Twitter SDKs, with most of them being for Twitter’s Login SDK, and at least one app (South Dakota, iOS app) sharing data to MoPub, Twitter’s mobile advertising subsidiary (and very high-risk attribution).
6 (8%) of the apps shared data to Yahoo’s Flurry Analytics SDK, which has advertising ingestion functionality (high-risk attribution).
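As an illustration of how SDK presence can be attributed to an owner and a risk level, here is a hedged sketch of a package-prefix scan over a decompiled app’s class names. The prefixes and risk labels are examples chosen for this sketch, not the study’s actual attribution table or methodology.

```python
# Illustrative sketch (not the study's methodology): attribute SDKs to owners
# and risk levels by matching known package prefixes against the class names
# found in a decompiled app. Prefixes and risk labels here are examples only.
KNOWN_SDK_PREFIXES = {
    "com.google.android.gms.ads": ("Google AdMob", "very high"),
    "com.google.firebase":        ("Google Firebase", "medium"),
    "com.facebook":               ("Facebook", "high"),
    "com.flurry":                 ("Yahoo Flurry", "high"),
    "com.mopub":                  ("Twitter MoPub", "very high"),
}

def detect_sdks(class_names: list[str]) -> set[tuple[str, str]]:
    """Return the (owner, risk) pairs whose package prefix appears in the app."""
    found = set()
    for cls in class_names:
        for prefix, owner_risk in KNOWN_SDK_PREFIXES.items():
            if cls.startswith(prefix):
                found.add(owner_risk)
    return found

sample_classes = [
    "com.example.schoolapp.MainActivity",
    "com.google.firebase.analytics.FirebaseAnalytics",
    "com.facebook.login.LoginManager",
]
print(detect_sdks(sample_classes))
# {('Google Firebase', 'medium'), ('Facebook', 'high')}
```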

The top 10 most frequently included SDKs are shown below in Table 1.

Table 1: Top 10 Most Included SDKs Across All Apps

As can be seen from Table 1, six of the top 10 most included SDKs are owned by Google. And six of the 10 most included SDKs are deemed high-risk SDKs.

As stated earlier, the most troubling aspect of this kind of data sharing is that there is no way to know how these large platforms are handling the data of school-age children, especially those under the age of 13.  Are these companies tracking the age of the data subjects whose information they are collecting?

In a recent groundbreaking legal win[1] for consumers – and children, in particular – several large developers of apps for children were mandated by the U.S. District Court for the Northern District of California to remove or disable tracking – which is typically accomplished through SDKs.  One of the authors of this paper tracked the publicly available settlement information[2] which holds both the developers of the apps and the companies behind the included SDKs responsible and accountable.  The settlement agreements include actions such as:

Removing or disabling SDKs in all of their children’s apps within a limited time frame (four months for Disney),
Deleting all ingested children’s data (Twitter), and
Being banned from ingesting device IDs for three years (Comcast).

In the settlement information, four SDK developers from our school app research were named as “SDK Developer Defendants”: Twitter, AdColony, Flurry, and InMobi, which means that all four of these developers have specific obligations relating to the data of children under 13 years of age.

[1]  “Disney and Ad-Tech Firms Agree to Privacy Changes for Children’s Apps”, Natasha Singer, New York Times, April 13, 2021, https://www.nytimes.com/2021/04/13/technology/advertising-children-privacy.html

[2] https://twitter.com/thezedwards/status/1382182587628544000

4.9     App Labeling Deficiencies

4.9.1   Inadequate Information About Which Third-Party SDKs Are Included

Google doesn’t have a privacy label similar to Apple’s, so Android users have no privacy information at all supplied at the Google Play Store level.  However, Android apps do list all of an app’s permissions before you download the app.

Neither Apple nor Android discloses the names of the SDKs and SDK owners to users, either in the app or in the app stores.  So even though Google, Facebook, Twitter and several other well-known companies are the primary recipients of student data coming from apps, the people using the apps have no real way to know which platforms will receive their data.  Once again returning to Me2B vernacular, users are in undisclosed Me2P relationships with the SDK owners.

4.9.2   App Stores Privacy Policy Links

Both the Apple and Google Play app stores include links to Privacy Policies.  Two types of problems with the privacy policy links were observed:

Missing or broken links, and
Policies that focused on the developer’s website practices, not policies that applied to the mobile app.

In both of these cases, the end result is that there is literally no privacy policy or data use information relating to the app prior to downloading it, especially if the Apple Privacy Label is also missing or, for Android apps, where no privacy label exists at all.

4.9.3   App Age Rating Older Than Youngest Students

For four apps, the app’s age rating in the app store indicated an older age than the youngest students in the school.

4.10     Privacy Policy Deficiencies

4.10.1 Privacy Policies Apply to the Developer’s Website

Ten of the apps’ Privacy Policies appeared to cover only the app developer’s website, not the mobile app. It’s not unusual for websites and apps to have distinct privacy policies, particularly in the case of websites and apps that are developed, hosted, or operated by different entities, since different developers/operators may have different internal privacy practices and technology even if they are working on behalf of the same school. In these cases, however, the app’s privacy policy and data use practices remain unclear or unknown prior to download and installation.

4.10.2 Privacy Policies Apply to the Educational Institution

Many of the Privacy Policies make clear that the policy covers the relationship with the Client, meaning the educational institution, and that the end users’ – students’ and parents’ – information is covered by the institution and the institution’s privacy policy.  The institution’s privacy policies, however, are not linked in the app stores.

Here’s an example from Privacy Policy for an app developed by Apptegy, Inc. [https://www.apptegy.com/privacy-policy/]:

“Simply put: when we process personal information about you that is provided by your educational institution or organization (or by you or another individual under your educational institution’s or organization’s account), we are not responsible for the disclosures made by your educational institution or organization (or those individuals under the institution or organization account). It is your educational institution or organization that has the responsibility to protect your privacy.”

4.10.3 Privacy Policy Exclusions for Children Under the Age of 13

Several Privacy Policies explicitly exclude children under the age of 13 even though the apps are for schools that include students under the age of 13.  Here’s an example from the Apptegy privacy policy excerpted above, which is connected with a K-12 school district that is labeled by the app store as “E for Everyone:”

“We do not permit individual users or individual accounts for individuals under the age of 13. In addition, we do not knowingly collect any information about or from children under the age of 13, except when an educational institution or organization (or an individual under an institution or organization account) provides information about or from a student under the age of 13 via our goods and services.” 

It should be apparent to the developers of an app for a K-12 school district that some students would, by their very nature, be individuals under the age of 13. The statement that users under 13 are not permitted on the app either lacks nuance and needs to be revised for clarity, or is simply a “legal fiction” included in the policy by the developer as an attempt to stay out of the purview of COPPA (a difficult-to-comply-with law that significantly regulates data collection from users under 13).

4.11      Average Age of Studied Apps

4.11.1 Most Apps Were Not Current with the Latest Privacy Protections

As noted above, disclosures were lacking on mobile apps and app stores. The inadequate disclosures are exacerbated by slow updates, which means even when new standards are deployed, educational apps may be late to adopt them. For instance, the recent Apple privacy label is only required for apps which have been updated since December 2020.

The majority (74%) of Apple apps reviewed (28 out of 38 apps) had not been updated since December 2020, and so were not subject to Apple’s recent requirement to provide the Apple privacy label. A small but significant percentage of the apps, almost 7% (5 out of 73), hadn’t been updated in as much as four years, since 2017.

Due to slow updates, these apps may lag – possibly by up to a year – in incorporating emerging major privacy protections such as the newly required Apple AppTrackingTransparency framework, which requires the app to obtain permission for any data sharing with tracking third parties, such as advertising networks.  It’s likely that, at publication time, 100% of the iOS apps in this study will not be compliant, and users of these school apps still won’t be able to opt out of third-party tracking.

The average age of the educational apps was 11.5 months. (See Figure 13.) Educational apps are being updated on average about once a year. Private-school apps were about four months more current than public-school apps.

The oldest apps on average were Apple apps for public schools. Since Apple apps are likelier not to include SDKs, this is one privacy concern these apps may not have, but they may still be missing newer privacy updates.

Figure 13: Average Age of Educational Apps

4.12     Developer Analysis

Of the school apps studied, 99% were developed by professional app development companies. There was one instance where the app appeared to be “home grown”.  Notably, this app had the highest number of very high-risk SDKs included. (The Alliance reached out to the school in question to alert them of this finding; at publication time, no response had been received.)

Of all school apps studied, 77% were developed by six specialized “ed tech” companies: LegitApps, Apptegy, Blackboard, SchoolInfoApp, Straxis and Web4u Corp.

Virtually identical to our overall findings, SDKs appeared in 61% of the apps built by these top six developers.

Figure 14:  Top Six Developers Apps Sharing Data

Figure 15: Data Sharing behavior in Apps Built by Top 6 Developers

Somewhat higher than the behavior across all apps, 79% of the top six developers’ apps sharing data were Android apps (compared to 73% across all apps).

Figure 16:  Data Sharing in Apps Built by Top 6 Developers by Operating System

In the 34 apps developed by the top six developers that included SDKs, 68% of them were for public schools. This compares to the total sample, which had 66% of the apps with SDKs being for public schools.

Figure 17:  Data Sharing in Apps Built by Top 6 Developers by School Type

The top developers were closely examined because schools rely on these companies for their expertise in understanding the complex privacy issues involved with working with personal student data. With the majority of these apps sharing student information with third parties, there are several outstanding questions:

Are the developers aware of the risks of the SDKs in their apps?
Are the developers alerting their clients as to the potential risks?
Are the developers responsible for submitting deletion requests on behalf of the students to the SDKs that ingested their data?

4.13     Missing Android Apps

Three of the schools/districts with Apple apps had no corresponding Android app.

While this is a small percentage of our findings at 7.9% of the schools, this finding could be indicative of a larger problem where students who only have Android devices will be left behind. Since Apple devices are historically more expensive than Android devices[1], this could be another facet of the digital divide.

[1] “Here’s Why Developers Keep Favoring Apple Over Android”, Jim Edwards, Slate, April 4, 2014, https://slate.com/business/2014/04/apple-vs-android-developers-see-a-socioeconomic-divide.html.

5        RECOMMENDATIONS

The Me2B Alliance offers the following recommendations to make school apps safer for students and their families.

School apps for primary and secondary schools must not share data with third parties, and thus must not include high-risk or very high-risk SDKs.
App developers creating apps for use by schools and school districts must be transparent, up to date, principled, and rigorous about their privacy protection practices and processes. Schools can’t be expected to be technical and privacy experts.
Apple’s App Privacy Labeling is a step in the right direction. We urge Google to offer the same protections in their Play store – especially as they have recently added Privacy Labels to the Chrome Extension store.
The Apple and Google app stores must update (or create, in the case of Google) privacy labels to include a list of all the SDKs that are installed in every app, including the name of the SDK owner/parent company. That way parents and students can make informed decisions about where they want to send their data.
Google must revisit its Family Ads Program to ensure adequate protections and practices are in place. Additionally, it must perform independent validation of the self-provided assessments.
Apps used by children must improve their Privacy Policies. Privacy Policies for institutional clients must make clear who has access to students’ data (the app developer, the institution, or both), and who is responsible for ensuring that data is appropriately protected, particularly for students under the age of 13.

6       FOR FUTURE STUDY

This research is just a starting point for additional investigation – either by the Me2B Alliance or by other interested organizations.  Several questions warrant deeper exploration:

Should taxpayer money ever be used to build apps that send data to high-risk advertising and analytics companies such as Google, Facebook, Twitter, Yahoo, etc.? Some examples may be government or civic apps.
What is the process by which public schools requisition educational apps, and is there anything that can be learned from it?
What sorts of auditing and governance can be applied to companies that ingest student data, especially for students under the age of 13 who are protected by COPPA? Does it make sense to mandate any practices, such as mandatory data deletion for data subjects under the age of 13? How can public and elected officials be assured that advertising SDKs are upholding the requirements?
Should schools be required to update apps for students at some regular interval? Is once a year sufficient? How can schools efficiently keep apps updated with emerging privacy enhancements, whether industry norms (like Apple’s privacy label), or local or federal regulations?
Should there be an “access” requirement (potentially under the 1964 Civil Rights Act) that requires schools or school districts to make sure that versions of their apps are available to all users on all platforms, to ensure equity of access to educational tools?
Why are developers routinely including questionable APIs in Android apps, but not in Apple apps?

7       CONCLUSIONS

Our analysis was not intended to be comprehensive. In our review, we chose schools and school districts, and the educational apps they use, at random. Nevertheless, the frequency with which apps were using SDKs and sharing personal data of students with high-risk advertising and analytics companies is disturbing.  Table 2 below summarizes the likelihood that an educational app will share personal data with third parties, based on our sample set.

Table 2: Summary Findings

The main conclusions of this research are:

The majority of apps (58%) were sending student data to high-risk advertising and analytics third parties.
iOS apps are safer/less risky, on the whole. Android apps are much more likely to send data to third parties.
100% of the Android apps that included SDKs were sending student data to Google. 67% of Apple apps that included SDKs were sending student data to Google. 49% of total apps were sending student data to Google.
Public-school apps are more frequently sending data to third parties, as compared to private-school apps, but both are sharing student data in more than 50% of the apps.
Taxpayers may be paying for educational apps to collect and share student data with some of the leading advertising and analytics companies in the world.
App labels and other information in the app stores don’t provide adequate or accurate information.
School apps aren’t being updated in a timely manner relative to the introduction of privacy-enhancing practices (like the Apple Privacy Label, Apple’s imminent AppTrackingTransparency Framework, and other practices).
SDKs don’t discriminate data sharing practices based on the age of the data subject; they just send data. Student data from children under the age of 13 is being sent to high-risk advertising and analytics companies with no distinguishing tags, and likely is not being handled with the precautions required.

The Me2B Alliance, as a nonprofit fostering the respectful treatment of people by technology, is a new type of standards development organization that is working to define the standard for respectful technology. Scenarios like the ones described in this report – where user data is being abused, even inadvertently – highlight the types of issues we are driven to prevent through independent testing, as well as education, research, policy work, and advocacy.

We welcome your thoughts and feedback.  If you are interested in learning more about our independent testing and audits, contact us at services@me2ba.org.

If you are interested in supporting the Me2B Alliance to perform more research like this, contact us at admin@me2ba.org.

The data used in this report is available upon request by contacting us at admin@me2ba.org.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/

Sunday, 02. May 2021

Me2B Alliance

Protected: School Apps Student Data Sharing Behavior Data Spreadsheet

There is no excerpt because this is a protected post.


Friday, 30. April 2021

Omidyar Network

Reimagining Capitalism Series: Building an Anti-Racist and Inclusive Economy


This post expands on the second key pillar for building a new economic paradigm as outlined in Our Call to Reimagine Capitalism. Read the first post in this series, “An Introduction to Ideas, Rules, and Power and How They Shape Our Democracy and Economy” here.

By Audrey Stienon, Analyst, Reimagining Capitalism

The current context

Last summer, the protests and demonstrations that erupted across the country in response to the murders of George Floyd, Breonna Taylor, and too many others prompted many funders, including Omidyar Network, to think more critically about our role within a system that perpetuates racial injustice. We wanted to determine ways in which we could use our power and privilege to contribute to dismantling white supremacy (you can read more about our commitments here and here).

While criminal justice and policing reform is not one of our focus areas at Omidyar Network, there is a close link between economic injustice and racial violence, which is why we called out the need to build an explicitly anti-racist and inclusive economy as the second pillar upholding our reimagined vision for capitalism. It is impossible to build a more just and equitable economy without directly addressing the ways in which our existing economy was intentionally designed to uphold white supremacy by excluding people from opportunities to participate fully in markets based on their race — not to mention gender, class, nationality, disability, and other forms of human diversity.

Our conviction in the necessity of this pillar has only been compounded by the evidence of the past year.

In the six weeks that it has taken to write and edit this piece, there have been two mass shootings in Atlanta and Indianapolis that targeted Asian Americans. Police officers killed 20-year-old Daunte Wright in Minnesota, 13-year-old Adam Toledo in Chicago, and 16-year-old Ma’Khia Bryant in Ohio. A ceaseless drumbeat of violence and trauma continues unabated, with only a glimmer of accountability in the trial and conviction of Derek Chauvin for the murder of George Floyd.

Meanwhile, the pandemic has exposed the extent to which the prejudices of the past remain embedded in the institutions and systems that we rely on today — to disastrous effect.

During the initial wave of the pandemic, many spoke of COVID as a great equalizer — a universal threat indifferent to our race, nationality, or class. Yet it quickly became clear that, although the virus itself cannot discriminate, our healthcare and economic structures certainly can and do. Every aspect of the pandemic and our healthcare response was impacted by race. The virus itself spread more quickly in communities of color, and took a greater toll on welfare and lives. Meanwhile, COVID tests and lifesaving PPE were more readily available in white neighborhoods at the beginning of the pandemic — and many of these inequities persist.

The effects of the economic recession are similarly shaped by race. As new data from our recent Great Jobs survey with Gallup confirms, Black, Latinx, and Asian-American and Pacific Islander workers are over-represented in essential, frontline industries such as health care, transportation, and retail, causing them to face greater risk of COVID exposure. Even as our economy has inched toward recovery, Black and Latinx workers, and especially women, are regaining their jobs much more slowly than their white counterparts. Meanwhile, women, and especially women of color, were more likely either to lose their jobs or to find it necessary to dedicate their time exclusively to unpaid care work, erasing decades of gains in women’s labor force participation and wages.

These inequalities were not inevitable. Rather, this uneven toll stands as a physical manifestation of our history, etched into our society with every job, home, and life lost. An enduring lesson from this pandemic must be that, even when we face a threat like a virus that cannot intentionally discriminate between us, the institutional and cultural legacies that we have inherited from past efforts to purposefully divide us will ensure that our experiences are unequal and unjust.

A long legacy of exclusion

This persistence of social disparities should surprise no one, since intentional exclusion has long been the norm in the United States. For centuries, the power to shape our laws, government, markets, culture, and biases has been restricted by race, gender, and class. Varied forms of exclusion — from slavery and disenfranchisement to imprisonment, institutionalization, and deportation — have been justified by those in power (predominantly white men) in order to concentrate the benefits of American prosperity into their own hands. Acts — and threats — of physical violence against people of color have played a central role in upholding this dynamic, echoes of which we have seen play out in the violent snapshots of police brutality across the country today.

America’s history of racial exclusion has been integral to shaping our economy, denying people of color the opportunity to profit and accumulate wealth from their own contributions to our economy. The earliest form of American capitalism was one that enriched white Americans who stole land from Native Americans and labor from enslaved Black people. Later iterations of our economy not only permitted racial discrimination, but also exempted employers in sectors with a majority Black or Latinx labor force (namely agriculture, hospitality, and domestic services) from legal requirements to provide a minimum wage, workplace protections, and benefits. Meanwhile, developers, city planners, and the real estate industry redlined Black families into homes and communities with limited job opportunities, few financial service providers, and failing infrastructure. All of these systems were created with the intent of perpetuating white economic supremacy, and, in many cases, remain the foundation of the economy we experience today.

These patterns of economic marginalization and exploitation are crucial to maintaining power imbalances. Robbing people of the ability to accumulate wealth is tantamount to robbing them of the ability to accumulate power. After all, people without savings find it more difficult to change jobs, to choose where or with whom to live, or to contribute time and money to politics. And those robbed of their wealth have less to pass on to their children, thereby perpetuating this theft of opportunity across generations.

In recent decades, efforts to intentionally exclude people of color, and especially Black Americans, from participating in the markets and accumulating wealth and power may have been less overt, but not less effective. For the past fifty years, we have relied on markets to solve our social problems, even though markets are mechanisms in which people with existing wealth and assets use investments to gain even more wealth and assets. Superimposed over an unequal society, free markets can only exacerbate those inequalities. Meanwhile, many of those in power have worked to weaken the very institutions — government, labor unions, our social safety net — that could mitigate these market imbalances.

Supporting organizations working to build an anti-racist economy

Omidyar Network is committed to building an economy that is explicitly anti-racist and inclusive. An economy which simultaneously rejects the justifications for racism and consciously works to remove the structures of our society that perpetuate racial injustice. We are working toward a society where everyone has the power and ability to live the life of their choosing — free to choose their job, choose where they will live, choose how to invest their time and resources, and choose who represents them in government.

A first step toward achieving this vision is to correct the power imbalances that prevent people of color from having a voice in how the economy impacts them. That’s why we support Action Center on Race and the Economy (ACRE), an organization that provides support to groups on the ground campaigning for structural change of, and increased accountability from, the financial sector. ACRE helps grassroots organizations identify the ways in which the financial sector contributes to such challenges as healthcare, housing, the environment, and policing and incarceration. During the pandemic, ACRE was part of a coalition that supported the We Strike Together campaign, using rent and mortgage strikes to secure relief for renters during the crisis.

Similarly, Community Change represents networks of local organizations building power with people of color so they have a say in policies that impact their lives. Its strategy includes efforts to strengthen organizing infrastructure in communities of color, and to increase their influence and develop new models of sustainable, volunteer-led community organizations.

A second task is to address the ways that existing economic rules perpetuate exclusion, either by changing policies or finding innovative ways to work around an unjust system. One of our partners, National Domestic Workers Alliance (NDWA), advocates on behalf of the 2.5 million nannies, house cleaners, and care workers across the country, most of whom are women, especially Black, Latinx, Native, and Asian American and Pacific Islander women. During the pandemic, NDWA distributed $400 in emergency support to more than 40,000 domestic workers who had lost their jobs during the lockdowns, and helped successfully advocate for the American Jobs Plan to include a $400 billion investment in care jobs.

Finally, we must weed out and uproot all remaining vestiges of the ideas used to justify exclusion and division, and help new ideas take root. That’s why we support the work of academics, researchers, and other movement and thought leaders who study the stratifications of our economy and propose pathways forward. Most notably, we have worked closely with Professor Darrick Hamilton of The New School, a leading economist whose work explores the ways that existing systems perpetuate racial inequality and offers proposals — such as baby bonds or a federal job guarantee — to make the future economy more equitable. We also support the work of Demos, a think tank for race-forward progressive policy ideas and an advocacy network that aims to increase the power that communities of color have over economic decisions. It has, for example, designed a platform of policies for equitably addressing the climate crisis and expanding the availability of debt-free college. Demos is also releasing a number of policy proposals to ensure that our pandemic recovery brings us closer to a more democratic and inclusive economy. We believe it is imperative that these ideas are not cordoned off in think tanks or academic institutions, but rather shaped in partnership with those communities that are most impacted — building a ladder from the grassroots all the way up the ivory tower.

Far from being a time of solidarity and common endurance, this past year exposed a whole host of enduring racial injustices. Our institutions, built on foundations of intentional exclusion, deepened and perpetuated divisions built on old biases and hatreds that have been nourished for centuries. However, even as the country witnessed the devastation of the pandemic — layered over the continued translation of hate into outright violence against people of color — many responded by joining the movement for racial justice.

The incredible work of our Reimagining Capitalism grantees is setting a course to build power with communities of color, rewrite the rules that have privileged white and wealthy people for so long, and reshape our ideas of racial justice and equity.

As America rebuilds, it is the mutual responsibility of funders to be anti-racist in rectifying the consequences of centuries of systemic racism that have robbed communities of color of their wealth, power, and rights, and to ensure that the economic system that is rebuilt supports an inclusive society in which everyone can live with dignity and thrive.

Reimagining Capitalism Series: Building an Anti-Racist and Inclusive Economy was originally published in Omidyar Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


DIF Blog

Setting Interoperability Targets

Our short-term roadmaps need testable, provable alignment goals that we can all agree on for our little communities and networks of technological thinking to converge gradually. Simply put, we need a few checkpoints and short-term goals, towards which we can all work together.

Part 1: Conformance Testing for Measurable Goals

[Written in consultation with the Interoperability Working Group chairs]

A recurring topic in the Interoperability Working Group is that of defining short-, medium- and long-term goals or “targets” for interoperability. The topic comes up again every few months, and each time it does, the chairs try to tackle it head-on for a full meeting or two, some emails, and various off-line discussions, but doing so never seems to arrive at a satisfactory or definitive answer. Each time, what feels like a Herculean outlay of effort only gets us a tentative resolution, as if we’d deflected a debt collector with a minimum payment. Dear reader, we would like to refinance, or at least restructure, this debt to our community!

In this two-part overview of our goals for 2021, we would like to survey the landscape of “provable,” testable interoperability targets and give some guidance on what we would like to see happen next in the testable interop world. Then, in a companion article, we will lay out a clear proposal for parallel, distributed work on multiple fronts, such that progress can be distributed across various subcommunities and hubs of open cooperation, each reasonably confident that they are helping the big picture by “zooming in” on one set of problems.  

A seemingly uncontroversial target state

Interoperation is a deceptively transparent etymology: any two things that perform a function are inter-operating if they can perform their operation together and mutually. Two fax machines interoperate if one sends a fax and the other receives it, which is a far more impressive feat if the two machines in question were designed and built by different companies on different continents, fifteen years apart. This is the kind of target state people usually have in mind when they imagine interoperability for an emerging technology.

This everyday example glosses over a lot of complexity and history, however. Standards bodies had already specified exact definitions of how the “handshake” is established between two unfamiliar fax machines before either model of fax machine was a twinkle in a design team’s eye. In addition to all the plodding, iterative standardization, there has also been a lot of economic trial and error and dead-ends we never heard about.  Even acceptable margins of error and variance from the standards have been prototyped, refined, socialized, and normalized, such that generations of fax machines have taken them to heart, while entire factories have been set up to produce standard components like white-label fax chips and fax boards that can be used by many competing brands. On a software level, both design teams probably recycled some of the same libraries and low-level code, whether open-source or licensed.

Decomposing this example into its requirements at various levels shows how many interdependent and complex subsystems have to achieve internal maturity and sustainable relationships. A time-honored strategy is to treat each of these subsystems in a distinct, parallel maturation process and combine them gradually over time. The goal, after all, is not a working whole, but a working ecosystem.  Architectural alternatives, for example, have to be debated carefully, particularly for disruptive and emerging technologies that redistribute power within business processes or ecosystems. Sharing (or better yet, responsibly open-sourcing and governing) common software libraries and low-level hardware specifications is often a “stitch in time” that saves nine later stitches, since it gives coordinators a stable baseline to work from, sooner.

Naturally, from even fifty or a hundred organizations committed to this target state, a thousand different (mostly sensible) strategies can arise. “How to proceed?” becomes a far-from-trivial question, even when all parties share many common goals and incentives. For instance, almost everyone:

Wants to grow the pie of adoption and market interest
Holds privacy and decentralization as paramount commitments
Strives to avoid repeating the mistakes and assumptions of previous generations of internet technology

And yet, all the same… both strategic and tactical differences arise, threatening to entrench themselves into camps or even schools of thought. How to achieve the same functional consensus on strategy as we have on principles? Our thesis here is that our short-term roadmaps need testable, provable alignment goals that we can all agree on for our little communities and networks of technological thinking to converge gradually. Simply put, we need a few checkpoints and short-term goals, towards which we can all work together.

Today’s test suites and interoperability profiles

Perhaps the biggest differences turn out not to be about target state or principles, but about what exactly “conformance” means relative to what has already been specified and standardized. Namely, the core specifications for DIDs and VCs are both data models written in the World Wide Web Consortium (W3C), which put protocols out of scope. This stems partly from the decisions of the groups convened in the W3C, and partly from a classic division of labor in the internet standards world between W3C, which traditionally governs the data models of web browsers and servers, and a distinct group, the Internet Engineering Task Force (IETF).

VCs were specified first, with a preference (but not a requirement) for DIDs, with exact parameters for DIDs and DID systems deferred to a separate data model.  Then, to accommodate entrenched and seemingly irreconcilable ideas about how DIDs could best be expressed, the DID data model was made less representationally-explicit and turned into an abstract data model. This shift to a representation-agnostic definition of DIDs, combined with the still-tentative and somewhat representation-specific communication and signing protocols defined to date, makes truly agnostic and cross-community data model conformance somewhat difficult to test. This holds back interoperability (and objective claims to conformance)!

W3C: Testing the core specifications

The only test suite associated with the core W3C specifications is the VC-HTTP-API test suite for VC data model conformance and the fledgling DID-core test suite, both worked on in the W3C-CCG. The former tests implementations of VC-handling systems against some pre-established sample data and test scripts through a deliberately generalized API interface that strives to be minimally opinionated with respect to context- and implementation-specific questions like API authentication. The latter is still taking shape, given that the DID spec has been very unstable in the home stretch of its editorial process arriving at CR this last week.

The VC-HTTP-API interface, specified collectively by the first SVIP funding cycle cohort for use in real-world VC systems, has been used in some contexts as a de facto general-purpose architecture profile, even if it is very open-ended on many architectural details traditionally specified in government or compliance profiles.  Its authors and editors did not intend it to be the general-purpose profile for the VC data model, but in the absence of comparable alternatives, it is sometimes taken as one; it has perhaps taken on more of a definitive role than originally intended.

Following the second iteration of the SVIP program and the expansion of the cohort, the API and its test suite are poised to accrue features and coverage to make it more useful outside of its original context. Core participants have established a weekly public call at the CCG and a lively re-scoping/documentation process is currently underway to match the documentation and rationale documents to the diversity of contexts deploying the API.
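To give a feel for what exercising an API of this shape looks like, here is a hedged sketch of a client call against a VC-HTTP-API style issuance endpoint and the kind of check a test script might make on the response. The host, endpoint path, and credential fields are illustrative assumptions rather than the normative test suite, and real deployments add their own authentication.

```python
# Hedged sketch: calling a VC-HTTP-API style issuance endpoint and checking
# that the response carries a proof. Host, path, and credential fields are
# illustrative assumptions; real deployments add their own authentication.
import json
import urllib.request

ISSUER_ENDPOINT = "https://issuer.example.org/credentials/issue"  # hypothetical

sample_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "issuanceDate": "2021-05-01T00:00:00Z",
    "credentialSubject": {"id": "did:example:subject"},
}

def issue(credential: dict) -> dict:
    """POST a credential to the issuer endpoint and return the signed result."""
    body = json.dumps({"credential": credential}).encode()
    request = urllib.request.Request(
        ISSUER_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

verifiable_credential = issue(sample_credential)
assert "proof" in verifiable_credential  # the kind of check a test suite makes
```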

Aries: End-to-end conformance testing

Other profiles, like the Aries interoperability profile, serve an important role, but it would be misleading to call the Aries profile a VC data model test suite; it is more like an end-to-end test harness for showing successful implementation of the Aries protocols and architecture. Here “interoperability” means interoperability with other Aries systems, and conformance with the shared Aries interpretation of the standard VC data model and the protocols this community has defined on the basis of that interpretation.

Many matters specified by the W3C data model are abstracted out or addressed by shared libraries in Ursa, so its scope is not exactly coterminous with the W3C data model. Instead, the Aries interoperability profile has its own infrastructural focus, which focuses on scaling the privacy guarantees of blockchain-based ZKP systems. In many ways, this focus complements rather than supplants that of the W3C test suites.

Many of the trickiest questions on which SSI systems differ are rooted in what the Aries and Trust-over-IP communities conceptualize as “layer 2,” the connective infrastructural building-blocks connecting end-users to VCs and DIDs. As more and more features get added to be testable across language implementations, and as feature-parity is achieved with other systems (such as support for LD VCs), the case for productive complementarity and deep interoperability gets easier and easier to make.

The first wave of local profiles and guidelines

Other specifications for decentralized-identity APIs and wallets, modelled on that W3C CCG and/or extending the work of Aries-based infrastructures, are starting to crop up around the world.  So far these have all arisen out of ambitious government-funded programs to build infrastructure, often with an eye to local governance or healthy competition. Canada and the European Commission are the most high-profile ones to date, building on earlier work in Spain, the UK, and elsewhere; Germany and other countries funding next-generation trust frameworks may soon follow suit.

It is important, however, to avoid framing these tentative, sometimes cautious attempts at bridging status quo and new models as universal standards. If anything, these frameworks tend to come with major caveats and maturity disclaimers, on top of having carefully narrowed scopes.  After all, they are generally the work of experienced regulators, inheriting decades of work exploring identity and infrastructural technologies through a patchwork of requirements and local profiles that tend to align over time. If they are designed with enough circumspection and dialogue, conformance with one should never make conformance with another impossible. (The authors would here like to extend heartfelt sympathy to all DIF members currently trying to conform to multiple of these at once!)

These profiles test concrete and specific interpretations of a shared data model that provide a testing benchmark for regulatory “green lighting” of specific implementations and perhaps even whole frameworks. Particularly when they specify best practices or requirements for security and API design, they create testable standardization by making explicit their opinions and assumptions about:

approved cryptography,
auditing capabilities,
privacy requirements, and
API access/authentication

These will probably always differ and make a universal abstraction impossible; and that’s not a bad thing! These requirements are always going to be specific to each regulatory context, and without them, innovation (and large-scale investment) are endangered by regulatory uncertainty. Navigating these multiple profiles is going to be a challenge in the coming years, as more of them come online and their differences come into relief as a stumbling block for widely-interoperable protocols with potentially global reach.

The Interoperability working group will be tracking them and providing guidance and documentation where possible. Importantly, though, there is a new DIF Working Group coming soon, the Wallet Security WG, which will dive deeper into these profiles and requirements, benefiting from a narrow scope and IPR protection, allowing them to speak more bluntly about the above-mentioned details.


Energy Web

Acting our way into new thinking…

A progress update on implementing decentralized SLAs and the EWT Escrow Model

At the end of February our CEO Walter Kok laid out the concept of a decentralized SLA and how it will form the basis of a new type of service-level agreement (SLA) for enterprises and vendors alike. Unlike a traditional, bilateral SLA, a DSLA approach is inherently distributed and establishes multilateral trust that the services will be delivered according to the agreement. This new type of business and technical architecture will enable very transparent reporting on the actual, real-world service quality.

We see the DSLA as an important aspect of the Utility Layer of the Energy Web tech stack. The Utility Layer comprises a variety of ‘digital machinery’ (i.e., services) that power the dApps running on the Energy Web Chain. That’s why we unveiled the EWT Escrow Model, as a way to unlock the full potential of the Energy Web tech stack to achieve the delivery of a decentralized service operation and its objectives. The EWT Escrow Model allows for both Providers (i.e., vendors providing Utility Layer services) and Patrons (e.g., community members who may stake some amount of EWT in support).

The entire process leverages EW Switchboard to manage DIDs and roles for Customers, Providers, and Patrons.

Our goal is to launch the first Utility Layer services, complete with escrow and subscriptions, in production by Q3 2021. Between now and then, we’re aiming for several key milestones, including: a) an alpha release on our testnet, Volta (this is when Patrons will be able to test an early version of the Escrow Model); b) a beta release on the Energy Web Chain; and c) scaling the Utility Layer service network, onboarding additional Providers and Patrons from the wider Energy Web community, adding new Utility Layer services, and integrating the escrow and subscription models with exchanges and other external systems to make it even easier to participate.

If you’ve watched our pipeline of major announcements and initiatives in recent weeks and months, you know that the Energy Web team and the broader ecosystem have been incredibly busy. But we’ve also been making progress on the EWT Escrow Model, and are excited to share this update with you.

SPOILER ALERT: DEAR PATRONS AND PROVIDERS, STAKING IS STILL COMING AND Q3 IS STILL A REALISTIC TARGET.

We finalized the architecture, designed the UI, and got the resources (almost) in place…

The picture below lays out the architecture of the decentralized service operations. It breaks down into four important blocks: 1) Customer Tasks (consuming the service, in green), 2) Provider Tasks (delivering the service, in yellow), 3) The Watchtower (monitoring and reporting on the service operations, in red), and 4) Governance Tasks (smart contracts executing the DSLAs, in blue).

And yes, there at the top are the Patrons, who can stake together with their service provider of choice. Let’s get into a little bit more detail.

THE SERVICE PROVIDER AND THE PATRON

Service Providers (“Providers”) are organizations that operate Utility Layer service nodes. In order to become a qualified Provider, an organization must pass a basic KYC check and deposit a minimum balance of EWT into an escrow smart contract for a multi-year period. Once qualified, Providers can launch service nodes by depositing an incremental amount of EWT in escrow and claim they are ready to deliver the service. The Watchtower will validate the claim (more on the Watchtower below). The following animation shows the User Interface we have developed so far to onboard Service Providers and Patrons:

To meet the minimum escrow balance requirement, Providers can either acquire the necessary EWT themselves or accept EWT from external Patrons in the wider community. As long as Providers deliver services in accordance with the prescribed rules, they will earn a stable income on their deposited EWT.
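As a rough illustration of what a deposit into such an escrow contract could look like from a script, here is a hedged sketch using web3.py against the EVM-compatible Energy Web Chain. The contract address, ABI, and function name are assumptions made for this illustration, not the published Escrow Model contracts.

```python
# Rough sketch only (not the published Escrow Model contracts): a Provider or
# Patron depositing EWT into an escrow contract on the EVM-compatible Energy
# Web Chain via web3.py. Contract address, ABI, and function name are
# assumptions made for this illustration.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.energyweb.org"))      # public EWC RPC
ESCROW_ADDRESS = "0x0000000000000000000000000000000000000000"  # hypothetical
ESCROW_ABI = [{
    "name": "deposit", "type": "function", "stateMutability": "payable",
    "inputs": [], "outputs": [],
}]

def deposit_ewt(private_key: str, amount_ewt: float) -> str:
    """Send `amount_ewt` (EWT is the chain's native token) into the escrow."""
    account = w3.eth.account.from_key(private_key)
    escrow = w3.eth.contract(address=ESCROW_ADDRESS, abi=ESCROW_ABI)
    tx = escrow.functions.deposit().build_transaction({
        "from": account.address,
        "value": w3.to_wei(amount_ewt, "ether"),  # EWT uses 18 decimals
        "nonce": w3.eth.get_transaction_count(account.address),
    })
    signed = account.sign_transaction(tx)
    return w3.eth.send_raw_transaction(signed.rawTransaction).hex()
```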

THE CUSTOMER

Customers follow an onboarding process similar to that of Providers. We want the system to be trustless, so the Providers (and Patrons) need to be assured that the Customer will pay for the service once it has been delivered. So basically, they put their pre-payments into escrow, too. Once the Customer claims that they are ready to consume services, the Watchtower will also validate this claim. The following animation shows the user interface we have developed so far to onboard customers and how they can choose the services and SLAs they want to consume:

THE WATCHTOWER AND GOVERNANCE TASKS

The Watchtower will use a decentralized architecture similar to the one we deploy for the validators of the Energy Web Chain. Watchtower nodes will also follow the Proof-of-Authority consensus algorithm. They are trusted entities, similar to our validators (we expect a subset of our validators to also run a Watchtower node).

Each Watchtower will validate the claim of readiness from both Provider and Customer and individually report on it. The smart contract (part of the governance tasks) will move both to the operational status when there is consensus amongst the Watchtowers. The services are now ready to be delivered and consumed and all the Watchtowers will start to monitor the actual performance. When it is their turn, they will measure and publish their results individually. Again, the consensus algorithm will ensure a decision of the actual performance for the interval, taking all scores into account.

We expect that these intervals will aggregate to a daily / weekly Service Level Score. Then depending on the Service Level Agreement, the payment / slashing logic will kick in. The necessary evidence will be stored on the Energy Web Chain so that at any time it can be accessed (taking into account privacy preserving requirements, etc.).
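The core of that logic can be sketched in a few lines: each Watchtower reports a measurement for the interval, a robust rule (here, the median) fixes the agreed score, and the DSLA threshold decides whether payment or slashing applies. The threshold and settlement rule below are illustrative assumptions, not Energy Web’s actual smart-contract logic.

```python
# Illustrative sketch (not Energy Web's contract logic): aggregate individual
# Watchtower reports into a consensus service-level score, then apply the
# DSLA threshold to decide between payment and slashing. Numbers are made up.
from statistics import median

SLA_THRESHOLD = 0.99  # e.g. 99% availability promised in the DSLA

def consensus_score(reports: list[float]) -> float:
    # The median tolerates a minority of faulty or malicious Watchtowers.
    return median(reports)

def settle(reports: list[float]) -> str:
    score = consensus_score(reports)
    return "release payment" if score >= SLA_THRESHOLD else "slash provider escrow"

print(settle([0.995, 0.998, 0.997, 0.991]))  # release payment
print(settle([0.950, 0.970, 0.960, 0.992]))  # slash provider escrow
```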

So what can you expect next…

Looking ahead at the coming months, we’re forging ahead with the schedule we laid out in the original EWT Escrow Model article:

In May 2021, we will conduct an alpha release of escrow and subscription tools on our testnet Volta. This is when our Patrons will be able to test an early version of the staking feature.
In June and July 2021, we will conduct a beta release of escrow and service components on the Energy Web Chain (EWC), with an initial cohort of Service Providers from the EWC validator community.
Throughout the remainder of the year, we will focus on scaling the Utility Layer service network, onboarding additional Providers and Patrons from the wider Energy Web community, adding new Utility Layer services, and integrating the escrow and subscription models with exchanges and other external systems to make it even easier to participate.

Expect to continue to hear from Energy Web over the coming months. If you have questions or ideas, drop us a note in the comments below! To stay up to date with the latest information, follow us on Twitter or join our Telegram community.

The EWT Escrow Model is closer than ever to becoming a reality — including the 10–20% return on deposited EWT we mentioned when we first announced the Escrow Model. We are now seeing a convergence of positive developments: major projects deploying on the Energy Web tech stack in partnership with our members, a robust community around EWT, and a stronger-than-ever appetite for digital solutions that address the urgent climate challenge. We continue to act our way into new thinking indeed.

Acting our way into new thinking… was originally published in Energy Web Insights on Medium, where people are continuing the conversation by highlighting and responding to this story.


Me2B Alliance

Me2BA Privacy Policy

The Me2BA Privacy Policy has been updated to include info about our Auditing Services >

Own Your Data Weekly Digest

MyData Weekly Digest for April 30th, 2021

Read in this week's digest about: 7 posts, 1 Tool

Thursday, 29. April 2021

Energy Web

Share&Charge Becomes a Part of Energy Web


Zug, Switzerland—29 April 2021 — Today Energy Web announced that the Share&Charge Foundation has become a part of the global nonprofit. In March 2020, Share&Charge launched its Open Charging Network (OCN) on the Energy Web Chain. Share&Charge’s OCN is a decentralized e-roaming solution for seamless electric vehicle (EV) charging across different charge point networks. Today’s announcement makes the Share&Charge team an extension of Energy Web’s core team, while the OCN becomes more tightly integrated with the EW tech stack.

“In the global energy transition, EVs are one of the fastest-growing segments, with customers investing trillions in all-electric cars and trucks, while automakers, charge point operators, and utilities are investing heavily in charging infrastructure,” explained Jesse Morris, chief commercial officer of Energy Web. “EVs sit at the intersection of several major use cases for open-source digital technology like the Energy Web tech stack. Giving EVs self-sovereign identities enables them to easily and efficiently register in energy markets to provide services to grid operators. They’re a major new source of demand for renewable energy. And it will become increasingly important to track their batteries throughout their lifecycle, in alignment with regulations around the world. Bringing Share&Charge into the Energy Web family better prepares us to support the coming e-mobility future.”
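As a purely illustrative aside on the self-sovereign identities mentioned above, a W3C DID document for a vehicle might look roughly like the following; the did:ethr method, key material, and field values are assumptions for the sketch rather than an actual Share&Charge or OCN record.

```typescript
// Hypothetical, simplified example of a W3C DID document for an EV.
// The did:ethr identifier and the key value are placeholders for illustration.
const evDidDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:ethr:0x1234567890abcdef1234567890abcdef12345678",
  verificationMethod: [
    {
      id: "did:ethr:0x1234567890abcdef1234567890abcdef12345678#key-1",
      type: "EcdsaSecp256k1VerificationKey2019",
      controller: "did:ethr:0x1234567890abcdef1234567890abcdef12345678",
      // Placeholder public key; a real document would carry the vehicle's key.
      publicKeyHex:
        "02b97c30de767f084ce3080168ee293053ba33b235d7116a3263d29f1450936b71",
    },
  ],
  authentication: [
    "did:ethr:0x1234567890abcdef1234567890abcdef12345678#key-1",
  ],
};

// An EV holding the corresponding private key can prove control of this
// identifier when enrolling with a charge point network or a grid operator.
console.log(JSON.stringify(evDidDocument, null, 2));
```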

Share&Charge will continue to operate as an independent brand, although it will now do so in closer alignment with the Energy Web team and strategy. “This is an exciting transition for Share&Charge and we are happy to be joining forces with EW,” said Dietrich Sümmermann, chair of the Share&Charge Foundation board. “As more and more companies join Energy Web and its global ecosystem, we stand ready to support seamless e-roaming as EVs become the dominant form of mobility in the decades ahead.”

Energy Web is working together with the OCN community through already-established channels, supporting both the current OCN and the development of a DID-based OCN 2.0. Stay tuned for more on this front.

About Energy Web
Energy Web is a global, member-driven nonprofit accelerating the low-carbon, customer-centric energy transition by unleashing the potential of open-source, digital technologies. We enable any energy asset, owned by any customer, to participate in any energy market. The Energy Web Chain — the world’s first enterprise-grade, public blockchain tailored to the energy sector — anchors our tech stack. The Energy Web ecosystem comprises leading utilities, grid operators, renewable energy developers, corporate energy buyers, IoT / telecom leaders, and others.

Share&Charge Becomes a Part of Energy Web was originally published in Energy Web Insights on Medium, where people are continuing the conversation by highlighting and responding to this story.


Commercio

Breaking News: COM Token enters Cosmos Hub’s Gravity DEX Competition.

Commerc.io has decided to sponsor with prize of $10,000 in COM Token the Gravity DEX testnet that will start on May 4, 2021 with a prize pool of $200,000 in ATOM and other Cosmos ecosystem coins. GravityDEX.io is a fully decentralized exchange that uses the Inter-Blockchain Communication (IBC) protocol to enable swaps and pools of digital assets between two blockchains […] L'articolo Breaking Ne

Commerc.io has decided to sponsor, with a prize of $10,000 in COM Token, the Gravity DEX testnet that will start on May 4, 2021 with a prize pool of $200,000 in ATOM and other Cosmos ecosystem coins.

GravityDEX.io is a fully decentralized exchange that uses the Inter-Blockchain Communication (IBC) protocol to enable swaps and pools of digital assets between two blockchains within the Cosmos ecosystem. GravityDEX.io is a great innovation in the DEX field because it has superior efficiency compared to other AMMs due to its innovative equivalent exchange pricing model.

Competitors in this trading competition compete for the best score, which is based on a combination of assets and profits. One-third of the top finishers will receive prizes proportional to their final position on May 11, 2021. Participants must make deposits, withdrawals and swaps on at least 3 pools to be eligible.

The deadline to register for free is April 30, at this link: https://airtable.com/shrnBmHrFYOJaCgNm


The article Breaking News: COM Token enters Cosmos Hub’s Gravity DEX Competition. first appeared on commercio.network.


DIF Blog

Introducing DIF Grants

DIF is kicking off a program to administer narrowly-scoped financial support for community initiatives, ranging in format from grants to more competitive implementation bounties, hackathon-style open collaborations, and security reviews.

Lightweight micro-grants to further the DIF mission

DIF is kicking off a program to administer narrowly-scoped financial support for community initiatives, ranging in format from grants to more competitive implementation bounties, hackathon-style open collaborations, and security reviews. Although the complexity and number of stakeholders involved will naturally vary, all of these bounties will be structured around public “challenges” that directly reward efforts helpful to the community as a whole.

Each of these challenges will culminate in a library-, repository-, or narrow specification-sized donation that closes interoperability gaps, generates usable documentation, or solves other tooling and data problems that move the decentralized identity needle. These could be signature suites, libraries, toolkits, or even detailed tutorials and implementation guides.

Prompts for whitepapers or non-normative implementation guides are harder to define in testable terms, but still welcome; one way of defining them is by listing a significant, finite set of input documents (such as specifications and regulatory profiles) and defining success as simple, useful documentation that incorporates all their prescriptions and limitations.


To make the awarding of these grants as objective and non-controversial as possible, we will be administering them in the form of technically-specified “challenges” that break down into specific, testable criteria. Submissions will be defined as donations to DIF that meet some or all of the criteria of a given challenge; the Technical Steering Committee will administer the adjudication of testing submissions against challenges.

Grant money can be offered directly from DIF’s budget by decision of the DIF’s governing board (the Steering Committee) and/or offered by a member company and approved by the SC. The donator of the grant funds can choose a specific WG or combination of WGs to specify and fine-tune the challenge. These WGs work with the Technical Steering Committee to ratify successful completion and authorize release of funds once a conforming donation has been received.

FAQ for DIF Grants:

Who can offer a Grant?
Any DIF member and/or the DIF Steering Committee can offer a grant (or matching funds) as long as the scope of the challenge benefits the entire DIF community. The DIF SC will review grant requests before the challenge is prepared and published.

Who writes the challenge?
Grant challenges are written by the chairs and/or the members of the WG chosen by the donors to administer the grant. The WG is responsible for writing a challenge acceptable to the SC and the grantor. WG chairs should submit the challenge at least one week before the deadline for Grantor approval (aka the “go/no go decision”), to allow for fine-tuning or negotiation of terms.

What makes a good challenge?
A good challenge is testable, to make the decision about its success objective rather than subjective, and it should include its own testable definition of success and, where appropriate, usable test fixtures.

A good challenge also serves the broader community equally (for instance, not just JWT-, LD-, or Aries-based systems). A challenge should not favor any given commercial product (or infrastructure such as specific blockchains) substantially over others, and care should be taken to think through what the “equivalences” or counterparties are if a challenge comes from a specific community, credential format, etc.

Who can submit and who reviews it?
Participation is open to the broad community (no prior DIF membership is required), but submissions will only be considered upon donation to the challenge-owning DIF WG. The donation should be “clean IP,” i.e. comprised entirely of IP generated by the donator(s), and the challenge owner may also impose some limitations on external dependencies.

Is there a minimum and maximum donation size that DIF will administer as a grant?
For now, DIF has decided not to use this structure for grants under US$1,000, and it cannot manage grants of over US$50,000 at this time.

How is/are winner(s) selected?
Each challenge can specify the relationship between testable criteria and winner(s). The template for challenge authors specifies a few common patterns, i.e., pre-approved grantee(s), bounty-style competition, and sponsored lead/editor for an open work item.

Payment is handled through a contract signed with JDF.

Checklist for defining a good challenge:

- Define the solution to the challenge as simply as possible, ideally in one noun phrase, i.e. “A lossless, general-purpose translator between DID Document representations”, “A sample implementation that reimplements DIDComm access via Bluetooth in a second major language,” or “A specification for how EDVs could be accessed over NFT written by someone with broad NFT and security experience”.
- Consider redundancy of effort. Do people apply with an idea and get approved to execute (i.e., “grant”), working in reasonable confidence of post-success payment if they meet the deadline? Or do they compete with an unknown number of other teams, only applying to get paid after completion? It is recommended that challenge owners decide early between these three modes of operation:
  - “Mini-grant” mode: applicants get pre-approved before beginning work and work privately until donating.
  - “Work item” mode: applicants get pre-approved to lead a public work item at DIF, which may accept volunteer contributions from other DIF members. Work should be donated even if the challenge is not completed or is completed sooner by another party.
  - “Open participation” mode, aka implementation bounty: anyone can work privately and successful donation(s) get(s) the award.
- A good challenge should not be met merely by addressing known bugs, repackaging prior art, fixing superficial/minor bugs, etc.
- Where in the interoperability WG’s map of implementer decisions does the challenge “live”, and which sections does it directly affect? Answering this question may help argue for direct or indirect benefits to the whole community.
- How familiar are the authors of the challenge with that specific problem space, in and outside of decentralized identity? Have experts been consulted from outside the WG, or better yet, outside of the decentralized identity community?
- How evenly distributed would the benefits be? Please consider (and write out) how various subcommunities (i.e., Aries-based solutions, JWT-/JSON-based solutions, and LD-based solutions) would benefit from this solution. If the benefits would be unevenly distributed, consider adding an additional challenge criterion (perhaps even awardable separately) for extending extra benefits to the least-served community, such as an implementation guide, @Context file, vocabulary to aid translation, etc.
- Define criteria for success, ideally testable ones, to minimize the subjective judgments to be made by the TSC. If possible, include dummy data and other test vectors. Securing formal review by an outsider with specified qualities or experience can be written in as a success criterion or payment trigger. Vaguely-worded requirements of “originality” are hard to assess, but negative requirements like “MUST/SHOULD not re-implement existing solutions linked from the reference section of this spec” can help make requirements clear to all involved.
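To illustrate how a challenge can be pinned down in specific, testable criteria, here is a hypothetical TypeScript sketch of a challenge record. The field names, the mode labels, and the example values are our own shorthand for the checklist above, not a schema defined by DIF.

```typescript
// Hypothetical sketch of a machine-readable challenge definition.
// Field names are illustrative, not a DIF-defined schema.

type ChallengeMode = "mini-grant" | "work-item" | "open-participation";

interface TestableCriterion {
  description: string;       // e.g. "round-trips all published test vectors"
  testFixtureUrl?: string;   // dummy data / test vectors, if available
  requiredForAward: boolean; // MUST vs SHOULD criteria
}

interface GrantChallenge {
  title: string;              // one noun phrase describing the solution
  owningWorkingGroup: string; // WG chosen by the grantor to administer it
  mode: ChallengeMode;        // decided early by the challenge owner
  awardUsd: number;           // within the band DIF administers
  criteria: TestableCriterion[];
}

const exampleChallenge: GrantChallenge = {
  title:
    "A lossless, general-purpose translator between DID Document representations",
  owningWorkingGroup: "Interoperability WG",
  mode: "open-participation",
  awardUsd: 5000,
  criteria: [
    {
      description: "Round-trips the published test vectors without data loss",
      requiredForAward: true,
    },
    {
      description:
        "Includes an implementation guide for the least-served community",
      requiredForAward: false,
    },
  ],
};

console.log(exampleChallenge.title, exampleChallenge.criteria.length);
```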

Wednesday, 28. April 2021

GLEIF

The Seven Step Process to Becoming a Validation Agent

In 2020, GLEIF launched a transformative new role into the Global LEI System: the Validation Agent. Having listened to the challenges that financial institutions (FIs) were facing when it came to onboarding new clients, we introduced this new operational model to help FIs overcome delays and save time by removing duplicative processes across onboarding and LEI issuance. The Validation Agent

In 2020, GLEIF launched a transformative new role into the Global LEI System: the Validation Agent.

Having listened to the challenges that financial institutions (FIs) were facing when it came to onboarding new clients, we introduced this new operational model to help FIs overcome delays and save time by removing duplicative processes across onboarding and LEI issuance.

The Validation Agent Framework empowers FIs to leverage their know your customer (KYC), anti-money laundering (AML) and other regulated business as usual onboarding processes, to obtain an LEI for their customers when verifying a client’s identity during initial onboarding or during a standard client refresh update. As a result, FIs acting as Validation Agents save time and cost while enhancing their client onboarding experience by removing the duplication of data validation processes.

By becoming Validation Agents financial institutions can also streamline, accelerate and diversify their use of the LEI, and ensure their autonomy as they look to digitize their business processes.
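For readers less familiar with the identifier itself: an LEI is a 20-character alphanumeric code defined by ISO 17442 whose last two characters are check digits computed with the ISO 7064 MOD 97-10 scheme (the same family used for IBANs). The sketch below validates only the format and check digits; it is an illustration, not GLEIF tooling, and it does not confirm that a code is actually registered in the Global LEI System.

```typescript
// Minimal LEI check-digit validation sketch (ISO 17442 / ISO 7064 MOD 97-10).
// Illustrative only; it checks format and check digits, not registration.
function isValidLei(lei: string): boolean {
  // 18 alphanumeric characters followed by 2 numeric check digits.
  if (!/^[0-9A-Z]{18}[0-9]{2}$/.test(lei)) return false;

  // Replace each letter with its numeric value (A=10 ... Z=35), then take the
  // resulting large number modulo 97; a valid LEI yields a remainder of 1.
  const expanded = lei
    .split("")
    .map(c => (c >= "A" && c <= "Z" ? (c.charCodeAt(0) - 55).toString() : c))
    .join("");

  let remainder = 0;
  for (const digit of expanded) {
    remainder = (remainder * 10 + Number(digit)) % 97;
  }
  return remainder === 1;
}

// Usage: pass any candidate 20-character code.
console.log(isValidLei("00000000000000000000")); // false: check digits fail
```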

GLEIF signed JP Morgan as its first Validation Agent at the end of 2020 and is continuing to invite FIs to participate in the Framework’s trial phase and establish first-mover status in the market.

GLEIF is working to standardize the trial engagement process, but also acknowledges that every FI employs its own processes and departmental functions differently. This is why GLEIF is committed to working with each triallist individually to explore and define the Validation Agent role ‘in context’, and to ensure the Global LEI System continues to meet the sector’s needs.

The trial process comprises seven steps, as illustrated in the graphic below.

To provide more detail around what this process entails, our latest eBook details what FIs can expect at each stage of this trial and the benefits they can look to enjoy working as part of the Framework, and answers some frequently asked questions.

Interested in finding out more? You can download the full eBook by clicking here, and if that doesn’t answer all your questions, our first eBook on the role of the Validation Agent, which can be found here, might.


OpenID

Welcoming Gail Hodges as Our New Executive Director

The OpenID Foundation is thrilled to welcome Gail Hodges as the new Executive Director of the OpenID Foundation. Those of you who already know Gail know that she’s passionate about enabling digital identity to serve the public good. She has extensive experience both in the digital identity space – for instance, having founded the Future […] The post Welcoming Gail Hodges as Our New Executive Dire

The OpenID Foundation is thrilled to welcome Gail Hodges as the new Executive Director of the OpenID Foundation. Those of you who already know Gail know that she’s passionate about enabling digital identity to serve the public good. She has extensive experience both in the digital identity space – for instance, having founded the Future Identity Council, and in the payments space – among other accomplishments, having headed Apple Pay in Latin America and having been the global head of digital payments at HSBC.

The Foundation’s former Executive Director and now Non-Executive Member of the Board of Directors, Don Thibeau, said, “I’m looking forward to working with Gail as a new leader for a new era for the Foundation. She brings international operational experience, a fresh perspective, and new energy to the community.”

In her own words, Gail affirmed “I’m honored to be joining the OpenID Foundation at this critical juncture, when robust Identity infrastructure is ever more important to people and society. I look forward to working with the OIDF Board and members, as well as the wider community, to help solve the Identity problems of our time.”

We look forward to the fresh views that Gail will bring to the OpenID Foundation. We see her joining as a unique opportunity to both review the programs and strengths of the OpenID Foundation as it exists today and to refine our vision and strategic plans.

Welcome Gail!

— The OpenID Foundation Board of Directors

The post Welcoming Gail Hodges as Our New Executive Director first appeared on OpenID.

Commercio

100 possible uses for blockchain

100 possible uses for blockchain Ledra Capital, a New York-based venture capital firm, has listed a range of potential uses for Blockchain technology. Some of these categories include financial instruments, public, private and semi-public, physical keys, intangibles and other potential applications. Commercio Network has developed three protocols CommercioID for creating Self Sovereign Identity; C

100 possible uses for blockchain

Ledra Capital, a New York-based venture capital firm, has listed a range of potential uses for Blockchain technology.

Some of these categories include financial instruments, public, private and semi-public records, physical keys, intangibles and other potential applications.

Commercio Network has developed three protocols: CommercioID for creating Self Sovereign Identity, CommercioSIGN for electronic signatures, and CommercioDOC for exchanging electronic data and documents. It is also developing three new fintech protocols for paying, tokenizing, issuing and exchanging Crypto-Assets on different blockchains belonging to the Cosmos ecosystem.

CommercioPAY-MINT tokenizes assets, credits and debits by issuing an NFT (non-fungible token) and issuing a SEPA payment; CommercioKYC is a module to issue eKYC (Know Your Customer) credentials; and CommercioDEX enables the direct exchange of tokens.

Financial instruments, registries and templates

Currency, Private equity, Shares of listed companies, Bonds, Derivatives (futures, forwards, swaps, options), Voting rights, Commodities, Public registers of expenditure, Public registers of trading, Public registers of mortgages/loans, Public registers of maintenance, Leasing contracts, Insurance services, Crowdfunding, Microfinance, Microcharity

Public Registers 

Land titles, Notarial deeds, Vehicle registers, Business licenses, Incorporation/dissolution of companies, Company shareholder registers, Official Gazette, Criminal records, Passports, Car licenses, Boat licenses, Flight licenses, Identity cards, Birth certificates, Death certificates, Voter cards, Elections, Health/safety inspections, Building permits, Firearms licenses, Judicial decisions, Court documents, Voting records, Nonprofit records, State accounting/transparency

Private records (anonymized) 

Contracts, Signatures, Wills, Trusts, Escrows, GPS Routes (personal)

Semi-Public Records (anonymized)

College Graduation Certificates, Vocational Certificates, School Grades, Human Resources (pay stubs, CUDs), Medical Records, Accounting Records, Business Document Exchange, Genome Data, GPS Routes (Institutional), Delivery Documents, Arbitrations

Physical asset keys 

Home/apartment keys, Vacation home/part-time sharing keys, Hotel room keys, Car keys, Rental car keys, Lease car keys, Locker keys, Parcel delivery, Betting records, Fantasy Sport records

Intangible assets 

Coupons, Vouchers, Membership cards, Reservations (restaurants, hotels, office queues, etc.), Movie tickets, Patents, Copyrights, Trademarks, Taxi licenses, Software licenses, Video game licenses

Other

Evidentiary documents (photos, audio, video), Data records (sports scores, temperatures, cold chain, etc.), SIM cards, GPS network identities, Weapons release codes, Nuclear launch codes, Antispam (micropayments for sending mail), Car sharing, Energy and consumption management, Wedding lists, Food chain tracking, Wills and inheritances, Control of seals and labels


The article 100 possible uses for blockchain first appeared on commercio.network.

Friday, 23. April 2021

Elastos Foundation

Elastos Bi-Weekly Update – 23 April 2021

...

Gelaxy Team: What’s Ahead in 2021

...
...

Oasis Open

Invitation to comment on CACAO Security Playbooks v1.0

This specification defines the schema and taxonomy for cybersecurity playbooks and how cybersecurity playbooks can be created, documented, and shared. The post Invitation to comment on CACAO Security Playbooks v1.0 appeared first on OASIS Open.

Third public review ends May 22nd

OASIS and the OASIS Collaborative Automated Course of Action Operations (CACAO) for Cyber Security TC are pleased to announce that CACAO Security Playbooks v1.0 is now available for public review and comment. This 30-day review is the third public review for this specification.

About the specification:

To defend against threat actors and their tactics, techniques, and procedures, organizations need to identify, create, document, and test detection, investigation, prevention, mitigation, and remediation steps. These steps, when grouped together, form a cyber security playbook that can be used to protect organizational systems, networks, data, and users.

This specification defines the schema and taxonomy for cybersecurity playbooks and how cybersecurity playbooks can be created, documented, and shared in a structured and standardized way across organizational boundaries and technological solutions.

The documents and related files are available here:

CACAO Security Playbooks Version 1.0
Committee Specification Draft 03
20 April 2021

Editable source (Authoritative):
https://docs.oasis-open.org/cacao/security-playbooks/v1.0/csd03/security-playbooks-v1.0-csd03.docx
HTML:
https://docs.oasis-open.org/cacao/security-playbooks/v1.0/csd03/security-playbooks-v1.0-csd03.html
PDF:
https://docs.oasis-open.org/cacao/security-playbooks/v1.0/csd03/security-playbooks-v1.0-csd03.pdf
Change-marked PDF:
https://docs.oasis-open.org/cacao/security-playbooks/v1.0/csd03/security-playbooks-v1.0-csd03-DIFF.pdf

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open.org/cacao/security-playbooks/v1.0/csd03/security-playbooks-v1.0-csd03.zip

How to Provide Feedback

OASIS and the CACAO TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.

The public review starts 23 April 2021 at 00:00 UTC and ends 22 May 2021 at 23:59 UTC.

Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility which can be used by following the instructions on the TC’s “Send A Comment” page (https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=cacao).

Comments submitted by TC non-members for this work and for other work of this TC are publicly archived and can be viewed at:

https://lists.oasis-open.org/archives/cacao-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the CACAO TC can be found at the TC’s public home page:
https://www.oasis-open.org/committees/cacao/

Additional information related to this public review, including a complete publication and review history, can be found in the public review metadata document [3].

Additional references

[1] https://www.oasis-open.org/policies-guidelines/ipr

[2] https://www.oasis-open.org/committees/cacao/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#Non-Assertion-Mode
Non-Assertion Mode

[3] Public review metadata document:
https://docs.oasis-open.org/cacao/security-playbooks/v1.0/csd03/security-playbooks-v1.0-csd03-public-review-metadata.html

The post Invitation to comment on CACAO Security Playbooks v1.0 appeared first on OASIS Open.


Commercio

How to implement Blockchain in your company in 5 steps

To implement any new technology like Blockchain, you need to follow a skills acquisition path so that you don’t make missteps and create cathedrals in the desert that embarrass your company and waste its resources. We’ve identified five areas of expertise divided into five phases of implementation Most companies cannot develop competencies in all of these areas, but they can […] L'articolo How t

To implement any new technology like Blockchain, you need to follow a skills acquisition path so that you don’t make missteps and create cathedrals in the desert that embarrass your company and waste its resources.
We’ve identified five areas of expertise divided into five phases of implementation.
Most companies cannot develop competencies in all of these areas, but they can partner with outside firms for specific aspects of these phases.

A network of Italian companies has developed a Blockchain training path called BlockchainWorkshop.it that allows them to quickly acquire all the necessary knowledge.
Knowing the fundamentals of Blockchain is an essential skill as important as developing applications.

Step 1 Education: learn the basic features of Blockchain technology, what classes of problems it can generically solve, and the opportunities that Blockchain offers.
Step 2 Problem/Solution Fit: identify areas of business problems that current technology does not solve and analyze if and where Blockchain can be the solution.
Step 3 App Design: what functional solutions will we need to address the problem we discovered in the previous step? How will it affect what we are doing, including business processes, contractual and legal requirements?
Step 4 Software Development: technology selection, vendor selection, integration and implementation of the Blockchain in the enterprise, and security audits.


The article How to implement Blockchain in your company in 5 steps first appeared on commercio.network.


Own Your Data Weekly Digest

MyData Weekly Digest for April 23rd, 2021

Read in this week's digest about: 6 posts, 1 question

Thursday, 22. April 2021

WomenInIdentity

Member interview with Azadeh Dindayal

What do you do and what is it about your job that gets you out of bed in the morning? There are so many things about my job that gets… The post Member interview with Azadeh Dindayal appeared first on Women in Identity.
What do you do and what is it about your job that gets you out of bed in the morning?

There are so many things about my job that get me out of bed in the morning. First, I love that I have the opportunity to solve for a trustworthy, informed and transparent way to control our personal information. We need a simple digital identity in so many facets of our digital lives and it’s sorely lacking. Second, I love knowing that I’m contributing to a safer digital future for my children and the next generation. Finally, and most importantly, I love the people I get to share the experience with – the team at IDENTOS, our customers and the identity community continues to humble me.

How did you get to where you are today?

Hmm. Hard to say. Where am I today? I am a daughter, a sister, a mom, a tech professional and very grateful for it all. I don’t know how I got here since most of this was likely already planned before my time… but for the moments I had the opportunity to make a choice, I would say that I’ve arrived here thanks to having trust in myself and others who wanted the best for me.

What is the most important lesson you have learned along the way?

The most important lesson is to really take stock of the people we interact with, especially the people who share in the same vision and are doing their best to create a better future. The more human and connected we are, the better our solutions will be.

What’s your pitch to CEOs in the identity space? What do you suggest they START / STOP / CONTINUE doing and why?

Keeping this focused on identity, I’d say:

START joining other innovators in your community and offer to work on complementary identity projects.

STOP asking for input from the same groups or people. Surround yourself with different voices and perspectives.

CONTINUE bringing your best in identity solutions to build a privacy-respecting, sustainable digital future for citizens, patients, students, parents, children, et al.

In one sentence, why does diversity matter to you?

Solutions will fall flat unless we can recognize our blind spots.

What book/film/piece of art would you recommend to your fellow members? Why?

In a field that is dreaming up the future, it probably comes as no surprise that I like to spend a lot of time with movies that have done the same. Aside from diversity, sci-fi movies also have the power to offer different perspectives. Flicks like Gattaca, A.I., The Hunger Games, Battlestar Galactica, and Westworld all have an identity component in the plot that’s interesting to observe. I love the impact it has on the characters’ quality of life, not to mention the freedoms and liberties of some.

What advice would you give to the teenage ‘you’?

As a teenager, I was full of big dreams and wanted to fulfill them all. Personally, this caused a lot of stress, and after a while it also didn’t leave a lot of room for connecting with the people and things I cared most for. If I had the opportunity, I’d tell my teenage self:  “Continue to dream big and have a compass, but don’t be so hard on yourself. Instead, do your best with what you have while giving yourself space to do the things you find most fulfilling.”

Connect with Azadeh on Twitter: @amahinpou and LinkedIn: https://www.linkedin.com/in/azadehd/

The post Member interview with Azadeh Dindayal appeared first on Women in Identity.


Sovrin (Medium)

A Deeper Understanding of Implementing Guardianship

Sovrin releases two new Guardianship Credentials papers at Internet Identity Workshop #32 In December 2019 the Sovrin Guardianship Task Force released a whitepaper titled “On Guardianship in Self-Sovereign Identity.” This groundbreaking paper explored guardianship in the context of SSI, and provided two use cases: one for a refugee, Mya, and one for an elderly living with dementia, Jamie. (W
Sovrin releases two new Guardianship Credentials papers at Internet Identity Workshop #32

In December 2019 the Sovrin Guardianship Task Force released a whitepaper titled “On Guardianship in Self-Sovereign Identity.” This groundbreaking paper explored guardianship in the context of SSI, and provided two use cases: one for a refugee, Mya, and one for an elderly person living with dementia, Jamie. (Watch the video illustration of these two use cases at the end of this article.)

Recognising the need to develop the work beyond the whitepaper, the Sovrin Foundation chartered a Sovrin Guardianship Working Group (SGWG) in December 2019. Two key documents were identified as outputs for the working group: a Technical Requirements for Guardianship document and an Implementation Guidelines document. After more than a year of hard work, the two papers were completed and will be publicly presented and released at Sovrin’s breakout session at the Internet Identity Workshop (IIW) #32 during April 20–22, 2021.

“When we started the work on defining technical requirements for guardianship in early 2020, we viewed the task as relatively simple. ‘All’ we had to do was extract the guardianship requirements from the use cases in the previous whitepaper… It turned out that this view was optimistic,” said John Phillips, one of the authors and Chairs of SGWG in Asia Pacific. “We found that the gap between the two use cases in the whitepaper and technical requirements was too broad. So we realised we needed to revisit our thinking. We needed a conceptual bridge, a ‘mental model’ to understand Guardianship broadly so we could write appropriately narrow technical requirements. The lesson we have (re)learnt in our journey is that, before you can describe something in simple terms, you need to make sure you have a broad and deep enough understanding of the topic.”

The first paper is called the Guardianship Credentials Implementation Guidelines and its purpose is to provide readers with the background they need to implement IT systems that support various kinds of guardianship. In particular, it focuses on what they need to know when using Verifiable Credentials and Decentralised Identifiers, the building blocks of an SSI framework, within the frameworks of Sovrin Governance and Trust Over IP (ToIP).

This paper is intended for all people interested in the design, build and operation of a Guardianship implementation using an SSI framework. It outlines the conceptual framework for Guardianship and provides implementation guidelines. The document introduces and uses a mental model to help understand the interplay and relationships of the key entities and actors involved in the establishment, running and ending of a Guardianship Arrangement.

While this document provides background explanations of the thinking that led to the technical requirements guidelines, it does not mandate or warrant specific rules for the implementation of Guardianship as these should be Jurisdiction specific, which is defined by the mental model in the document.

The second paper is called Guardianship Credentials Technical Requirements which was developed by the technical requirements working group within the SGWG. The purpose of this document is twofold: i) provide principles under which guardianship scenario designs and requirements are considered and defined; and ii) provide technical requirements for SSI solutions that offer the capability of guardianship.

The intended audience for this document includes: i) individuals looking to understand how guardianship should be implemented for their use case; ii) SSI solution designers that require guardianship to support the use of VCs in their specific use case; iii) readers who want to understand how Guardianship can work in an SSI context.

The requirements described in this paper, together with other SSI standardization, describe when an SSI solution offers the capability of guardianship. In this case, the requirements are mainly for the technical building blocks that live in the bottom three layers of the Technical “Stack” of the Sovrin/ToIP framework. The mental model, however, is also relevant for the top layer (ecosystem), as guardianship is intricately linked to what we will be defining as Jurisdictions, and this enables human governance to be applied to the first three layers.
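As a purely illustrative reading of that mental model, the sketch below expresses the key entities named in the papers (jurisdiction, guardian, dependent, arrangement) as TypeScript types. The field names are our own shorthand, not the working group's normative data model.

```typescript
// Illustrative shorthand for the guardianship mental model; not the SGWG's
// normative data model.
interface Jurisdiction {
  name: string;                // legal or governance context for the arrangement
  governanceFramework: string; // rules under which guardianship is recognised
}

interface GuardianshipArrangement {
  jurisdiction: Jurisdiction;
  guardianDid: string;   // holder of the guardianship credential
  dependentDid: string;  // person under guardianship (e.g. Mya or Jamie)
  permissions: string[]; // what the guardian may do on the dependent's behalf
  validFrom: Date;
  validUntil?: Date;     // arrangements are established, run, and eventually end
}

// Hypothetical example loosely inspired by the refugee use case.
const exampleArrangement: GuardianshipArrangement = {
  jurisdiction: {
    name: "Example Camp Authority",
    governanceFramework: "Sovrin Governance Framework",
  },
  guardianDid: "did:sov:guardian-example",
  dependentDid: "did:sov:dependent-example",
  permissions: ["receive-rations", "enrol-in-school"],
  validFrom: new Date("2021-04-22"),
};

console.log(exampleArrangement.permissions);
```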

This document should be read in conjunction with the Sovrin Guardianship Credentials Implementation Guidelines mentioned above. In particular, readers who are looking to understand the thinking behind these requirements are encouraged to refer to the Implementation Guidelines.

These two papers are the first public release of implementation guidelines and technical requirements for guardianship in the context of SSI. There are sections at the end of the documents that propose areas of future work, and it is expected that these two documents will be updated as technology evolves and open discussion and decision areas are resolved.

Download the two papers here.

Sovrin Guardianship Use Case #1— Mya (a refugee girl)

Sovrin Guardianship Use Case #2 — Jamie (an elderly person living with dementia)

Video credit: John Phillips

Originally published at https://sovrin.org on April 22, 2021.

This all-volunteer Sovrin Guardianship Working Group is open to anyone with a genuine interest in and willingness to contribute to digital guardianship. For more information, please see its webpage for details.

A Deeper Understanding of Implementing Guardianship was originally published in Sovrin Foundation Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Oasis Open

Invitation to comment on AMQP Addressing v1.0

AMQP Addressing extends AMQP network concept as a federation of AMQP containers whose nodes communicate with each other either directly or via intermediaries. The post Invitation to comment on AMQP Addressing v1.0 appeared first on OASIS Open.

First public review of this draft specification - ends May 21st

OASIS and the OASIS Advanced Message Queuing Protocol (AMQP) TC are pleased to announce that AMQP Addressing Version 1.0 is now available for public review and comment.


The Advanced Message Queuing Protocol (AMQP) is an open standard for passing business messages between applications or organizations. It connects systems, feeds business processes with the information they need and reliably transmits onward the instructions that achieve their goals.

AMQP Addressing v1.0 further defines the “AMQP network” concept introduced in the main AMQP specification as a federation of AMQP containers whose nodes communicate with each other either directly or via intermediaries. This specification also defines the semantics of the “address” archetype that was left undefined in the main AMQP specification, and the syntax for the AMQP URI scheme and a matching restriction of the AMQP “address-string” type.

The documents and related files are available here:

AMQP Addressing Version 1.0
Committee Specification Draft 01
17 March 2021

Editable source:
https://docs.oasis-open.org/amqp/addressing/v1.0/csd01/addressing-v1.0-csd01.md (Authoritative)
HTML:
https://docs.oasis-open.org/amqp/addressing/v1.0/csd01/addressing-v1.0-csd01.html
PDF:
https://docs.oasis-open.org/amqp/addressing/v1.0/csd01/addressing-v1.0-csd01.pdf

For your convenience, OASIS provides a complete package of the specification documents and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open.org/amqp/addressing/v1.0/csd01/addressing-v1.0-csd01.zip

Metadata records [3] describing the publication and public review history of this specification are published along with the specification files.

How to Provide Feedback

OASIS and the AMQP TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.

The public review starts 22 April 2021 at 00:00 UTC and ends 21 May 2021 at 23:59 UTC.

Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility which can be used by following the instructions on the TC’s “Send A Comment” page (https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=amqp).

Comments submitted by TC non-members for these works and for other work of this TC are publicly archived and can be viewed at:
https://lists.oasis-open.org/archives/amqp-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the AMQP TC can be found at the TC’s public home page:
https://www.oasis-open.org/committees/amqp/

Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr

[2] https://www.oasis-open.org/committees/amqp/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#RF-on-RAND-Mode
RF on RAND Mode

[3] Public review announcement metadata:
– https://docs.oasis-open.org/amqp/addressing/v1.0/csd01/addressing-v1.0-csd01-public-review-metadata.html

The post Invitation to comment on AMQP Addressing v1.0 appeared first on OASIS Open.


Energy Web

Top 5 Ways Energy Web is Accelerating the Clean Energy Transition

The first Earth Day — held April 22, 1970 — is considered the birth of the modern environmental movement. More than half a century later, it has taken on renewed focus amidst the climate crisis. Here at Energy Web, we are developing open-source digital infrastructure to accelerate the low-carbon energy transition and expand access to clean energy markets for all. Here are five specific ways we — t

The first Earth Day — held April 22, 1970 — is considered the birth of the modern environmental movement. More than half a century later, it has taken on renewed focus amidst the climate crisis. Here at Energy Web, we are developing open-source digital infrastructure to accelerate the low-carbon energy transition and expand access to clean energy markets for all. Here are five specific ways we — the Energy Web team, Energy Web members, and the broader Energy Web ecosystem — are working together to speed global decarbonization.

1. Supporting flexibility markets, making it possible for DERs to help balance renewable grids

Distributed energy resources (DERs) — from batteries and electric vehicles to “behind the meter” devices like smart thermostats — are fast becoming crucial components of increasingly decentralized and dynamic power grids that are running on growing amounts of renewable energy. Such DERs can play a valuable role providing grid services, such as helping to balance supply and demand in flexibility markets. Thus we believe it is essential to make it easy for DERs to enroll, participate, and get compensation for delivered grid services. This is why we built and keep advancing Energy Web Flex as a software development toolkit that grid operators like Austrian Power Grid can use to tap into the full potential of DERs at the lowest administrative cost possible.

2. Digitizing renewable energy markets, making it faster and easier to source renewables

We are building open-source tools to help buyers more easily achieve their proof-of-impact needs and companies around the globe launch user-friendly, digitized renewable energy market platforms. For example, Energy Web Origin is a software development toolkit we built and continue advancing that digitizes the entire energy attribute certificate lifecycle — from certificate issuance to trading and cancellations/claims — for any clean energy market. We are also working directly with various companies around the world to help them use EW Origin to launch their own digitized renewable energy marketplace platforms across Asia, Europe, and the Americas. In addition, Energy Web is developing Energy Web Zero, a renewable energy search-and-match engine that helps buyers discover all available renewable energy products and make direct purchases from platforms (like those built with EW Origin). Think Skyscanner or Kayak.com meets renewable energy markets.
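To make the certificate lifecycle mentioned above concrete, here is a simplified state-machine sketch in TypeScript. The states and transition rules are illustrative assumptions, not the actual EW Origin data model.

```typescript
// Simplified, illustrative energy attribute certificate lifecycle;
// not the EW Origin schema.
type CertificateStatus = "issued" | "listed" | "transferred" | "claimed";

interface EnergyAttributeCertificate {
  id: string;
  generatorDid: string; // identity of the device that produced the MWh
  owner: string;        // current certificate holder
  mwh: number;
  status: CertificateStatus;
}

// Allowed transitions: issuance -> listing/trading -> claim (retirement).
const allowed: Record<CertificateStatus, CertificateStatus[]> = {
  issued: ["listed", "transferred", "claimed"],
  listed: ["transferred"],
  transferred: ["listed", "claimed"],
  claimed: [], // a claimed certificate is retired and cannot move again
};

function transition(
  cert: EnergyAttributeCertificate,
  next: CertificateStatus,
  newOwner?: string
): EnergyAttributeCertificate {
  if (!allowed[cert.status].includes(next)) {
    throw new Error(`Cannot move certificate from ${cert.status} to ${next}`);
  }
  return { ...cert, status: next, owner: newOwner ?? cert.owner };
}

// Example: issue, trade, then claim one certificate.
let cert: EnergyAttributeCertificate = {
  id: "cert-001",
  generatorDid: "did:ethr:0xexample", // hypothetical device identity
  owner: "RenewableCo",
  mwh: 1,
  status: "issued",
};
cert = transition(cert, "transferred", "CorporateBuyer");
cert = transition(cert, "claimed");
console.log(cert.status); // "claimed"
```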

3. Enabling lifecycle asset tracking, such as batteries, as part of a circular green economy

We know that it is important to understand and track the environmental impacts of all energy devices — from batteries to solar panels and wind turbines — from cradle to grave. This lifecycle tracking is also part of major legislation such as the European Green Deal, not to mention major movements such as the circular economy. Energy Web is leveraging our tech stack and developing tools that companies are using to bring solutions to the market, such as our work with BeBat and Fluvius in Belgium to support EasyBat. EasyBat makes it possible to holistically manage customer-owned batteries (both private & professional use) participating in Belgian electricity markets, including legal takeback obligations and/or extended producers responsibility (EPR) of battery manufacturers.

4. Advancing the Crypto Climate Accord to decarbonize an entire sector

We spearheaded the Crypto Climate Accord (CCA) earlier this month in partnership with RMI and the Alliance for Innovative Regulation (AIR) to decarbonize the global crypto and blockchain industry. We are excited to drive it forward with a fast-growing community of now 40 CCA Supporters. More importantly, we have already started identifying, defining, and developing a fast-emerging suite of open-source solutions to help decarbonize crypto. If your company can help us advance solutions in this area, we encourage you to become a CCA Supporter. For more frequent updates on progress related to the CCA, check out the new CCA Medium.

5. Fostering interoperability of digital twins across tech and markets

Historically, power grids and electricity markets — and the technologies used to run them — were heavily fragmented and siloed. Important information was locked away in proprietary systems. Customer enrollment in various utility programs, such as selling surplus solar energy to a grid operator or participating in a demand response program, was cumbersome and duplicative. Even worse, customer devices in some cases were prohibited from providing services in certain electricity markets, even if they were technologically able to do so. But those metaphorical walls are coming down, with new regulations such as FERC Order 2222 in the United States and a focus on customer-centric market design in the EU. The key to unlocking this new potential is with digital twins and ‘passports’ anchored to the Energy Web Chain.

This is just a snapshot of all the leading-edge work happening at Energy Web and among our global member community. Stay tuned for more updates and news in the weeks and months ahead!

Top 5 Ways Energy Web is Accelerating the Clean Energy Transition was originally published in Energy Web Insights on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 21. April 2021

GLEIF

#2 in the LEI Lightbulb Blog Series - Spotlight on China: A Nation Advancing LEI Usage Through Policy and Innovation

Significant advances are taking place right now in China relative to LEI adoption. Regulatory authorities within the region are proactively driving LEI usage and have set ambitious plans for substantial near-term increases in LEI volume. For those who missed it, the People’s Bank of China (PBOC) published an LEI implementation roadmap at the end of 2020, which looks ahead to 2022. This roadmap

Significant advances are taking place right now in China relative to LEI adoption. Regulatory authorities within the region are proactively driving LEI usage and have set ambitious plans for substantial near-term increases in LEI volume.

For those who missed it, the People’s Bank of China (PBOC) published an LEI implementation roadmap at the end of 2020, which looks ahead to 2022. This roadmap is part of the ‘One Belt One Road’ initiative, which is described in the following way on Wikipedia: ‘[…] incorporated into the Constitution of China in 2017. The Chinese government calls the initiative "a bid to enhance regional connectivity and embrace a brighter future."’ The roadmap unveils national plans to increase LEI issuance by a minimum of 170% between the end of 2020 (there were 37,000 active LEIs) and the end of 2022 (the target is a minimum LEI volume of 100,000).

In a separate but complementary development, just this month GLEIF has welcomed the news that the Chinese Financial Certification Authority (CFCA) has launched the first commercial demo of LEIs embedded within digital certificates. CFCA has simultaneously become the first Certification Authority to assume a Validation Agent role within the Global LEI System. This innovation from CFCA shows a further commitment from the Chinese market, this time to facilitating and promoting LEI issuance in digital certificates. It is hoped that CFCA’s early mover status on this front will act as a catalyst for other similar demos to emerge.

The combined effect of these two initiatives is significant. It shows an advanced level of proactivity from regulatory authorities within the region, which is driving LEI issuance to support financial services and banking ecosystems, simplify international trade and enhance digital identity across the region.

Understandably, GLEIF advocates these steps being taken to advance LEI adoption across China and applauds the foresight and proactivity of regulators. A commitment to boosting LEI issuance across such a vast region will not only deliver benefits to Chinese market participants, but has the potential to motivate regulators across other countries and regions to follow suit. GLEIF certainly hopes that this is the case.

A more detailed summary of each initiative is given below.

China’s LEI Implementation Roadmap: 2020-2022

In Q4 2020, four financial regulators in China released a report detailing a roadmap for LEI implementation. The four organizations involved were: PBOC, the China Banking and Insurance Regulatory Commission (CBIRC), the China Securities Regulatory Commission (CSRC) and the State Administration of Foreign Exchange (SAFE).

The report, which is published in Chinese on the PBOC website, outlines the general objective of the roadmap and details key LEI implementation milestones over the next two years.

As stated in a Regulation Asia article, the intention of wide LEI implementation in China is to help the country connect with the international market, support China’s bid to open up its financial sector, and facilitate cross-border trade and financial transactions.

In line with a translation from the report on the PBOC website, GLEIF understands that the general objective of the roadmap is to establish a comprehensive LEI usage policy system across China’s financial system in line with international standards by the end of 2022. It envisions that the LEI will become an assisting tool for financial management authorities to maintain financial stability and implement financial supervision. It will become an important means for financial infrastructure, financial industry associations and financial institutions to carry out customer identification of legal persons involved in cross-border transactions. It will also become a corporate passport.

This is in line with the PBOC’s role as a member of the Regulatory Oversight Committee (ROC). The ROC aims to promote the broad public interests to improve the quality of data used in financial data reporting, improving the ability to monitor financial risk, and lowering regulatory reporting costs through the harmonization of these standards across jurisdictions.

A number of phase objectives are then outlined for Chinese authorities:

By end of 2020: total LEI volume in China mainland to reach 30,000, covering all financial institutions, member institutions of financial infrastructure and industry associations, and listed companies in China. Proposed LEI application rules in scenarios such as RMB cross-border payments, digital RMB cross-border business, qualified foreign institutional investor (QFII) and RMB qualified foreign institutional investor (RQFII) access, derivatives trading, securities trading, and listed company supervision. Establish a mechanism for mapping and updating LEI with the codes of financial institutions, unified social credit codes, and codes of information systems related to major financial infrastructure.

By end of 2021: total LEI volume in China mainland reaches 50,000, with a focus on improving coverage among importers and exporters, trading enterprises, and non-financial enterprises involved in cross-border transactions. Proposed LEI application rules in areas such as the financial market transaction reporting system, credit rating, and application of special institutional codes by overseas institutions. Launch and operate the cross-border legal person information service and digital authentication platform to provide value-added LEI-based data services to financial management departments, financial infrastructure, financial industry associations and financial institutions.

By end of 2022: total LEI volume in China mainland reaches 100,000. Continued improvements in coverage among non-financial enterprises involved in cross-border transactions. Use LEI in scenarios such as digital identification of cross-border legal persons. Establish a mechanism for commercially sustainable operation of LEI.


CFCA Paves the Way for Increased LEI Usage in Mass Market Digital Identity Products

CFCA has launched the first commercial demonstration of LEIs embedded within digital certificates. It has also become the first Certification Authority to act as a Validation Agent in the Global LEI System, streamlining LEI issuance with digital ID product and service provision.

These developments by CFCA follow the recent launch of the GLEIF CA Stakeholder Group, created as a platform for GLEIF to collaborate with CAs and Trust Service Providers (TSPs) on the coordination and promotion of a global approach to LEI usage across digital identity products. CFCA’s advances are significant because they are the earliest reported successes aligned to the direction of this industry initiative, which is aimed at achieving a critical mass of LEIs embedded within digital certificates.

Stephan Wolf, CEO of GLEIF comments: “This progress by CFCA on both fronts is very welcome as it moves us one step closer to broad LEI usage in digital certificates globally. Realizing the goal of universal LEI usage in digital identity products will be an important step in enhancing trust and creating innovation opportunities across private sector digital identity management applications. Digital certificates linked by an LEI to verified, regularly updated and freely available entity reference data held within the Global LEI System are easier to manage, aggregate and maintain. The result will be significant efficiencies and far less complexity for certificate owners and the provision of greater transparency for all users of the internet and participants within digital exchanges.”

For more details, please read the corresponding press release here.

The ‘LEI Lightbulb Blog Series’ from GLEIF aims to shine a light on the breadth of acceptance and advocacy for the LEI across the public and private sectors, geographies and use cases by highlighting which industry leaders, authorities and organizations are supportive of the LEI and for what purpose. By demonstrating how success derived from strong regulatory roots is giving rise to a ground swell of champions for further LEI regulation and voluntary LEI adoption across new and emerging applications, GLEIF hopes to educate on both the current and future potential value that ‘one global identity’ can deliver for businesses, regardless of sector, world-wide.


Omidyar Network

Statement from Omidyar Network on Derek Chauvin Verdict: The First Step to Justice

Eleven months after Derek Chauvin stole George Floyd’s final breaths, jurors took less than 11 hours to find Chauvin guilty on all three counts. The jury’s verdict shows that sometimes — but too rarely — justice can prevail, and it is very welcome, given the long history of acquittals that come before it. It does not, however, bring George Floyd back to life, nor the dozens of other unarmed

Eleven months after Derek Chauvin stole George Floyd’s final breaths, jurors took less than 11 hours to find Chauvin guilty on all three counts. The jury’s verdict shows that sometimes — but too rarely — justice can prevail, and it is very welcome, given the long history of acquittals that come before it.

It does not, however, bring George Floyd back to life, nor the dozens of other unarmed Black and Brown people who have been killed by the police since May 25, 2020. Since the trial began, police have killed at least 64 people, more than half of whom were Black or Latinx. Just 10 miles from the courthouse, Daunte Wright was shot dead, and across the Great Lakes in Chicago, police took the life of 13-year-old Adam Toledo.

So, while we can briefly exhale today, we know there is so much work that remains to be done to build toward a more just system overall, so that outcomes like today’s are less exceptional.

Listening to the testimony during the trial reopened painful wounds that have been inflicted relentlessly since our country was founded and reflected the continued need for deep systemic change in our society. We know it won’t happen overnight, nor clearly in just one year, but it must happen. As George Floyd’s girlfriend, Courteney Ross, said, “Floyd was one man. George Floyd is a movement.”

And at Omidyar Network, while criminal justice reform is not one of our focus areas, there is a close link between economic injustice and racial violence. And between those two areas and true democracy. We know we will not bring about the kind of world we aspire to unless and until Black people are no longer trapped by systems that were intentionally created to lock them into poverty and strip them of voice and opportunity.

Last summer, as our nation once again began a racial justice reckoning, we looked deeply at our own systems and policies. We committed to becoming an anti-racist organization, and while we know we still have quite a long journey, we have taken the first of many steps. We gave more than $1 million to organizations fighting to protect and advance Black lives and another $4 million to organizations looking to reimagine our economy through a racial justice lens. Again, these are the first of many steps we are taking to live up to our commitment, and we know there is so much more we can and must do until Black Lives Matter.

As Ibram X. Kendi wrote in How to Be an Antiracist, “After taking this grueling journey to the dirt road of anti-racism, humanity can come upon the clearing of a potential future: an antiracist world in all its imperfect beauty. It can become real if we focus on power instead of people, if we focus on changing policy instead of groups of people. It’s possible if we overcome our cynicism about the permanence of racism.”

Statement from Omidyar Network on Derek Chauvin Verdict: The First Step to Justice was originally published in Omidyar Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Commercio

The three components of the Commercio.network Blockchain

Blockchain is an enabling technology that is changing the way we think about and implement business applications, but to understand it we need to review the three components that make it possible: The Software. The Software is a set of three components: an append-only database, a networking system that connects multiple computers together (Peer-to-Peer), and a consensus mechanism that allows […]

Blockchain is an enabling technology that is changing the way we think about and implement business applications, but to understand it we need to review the three components that make it possible:

The Software
The Software is a set of three components: an append-only database, a networking system that connects multiple computers together (Peer-to-Peer), and a consensus mechanism that decides which transactions can be written and which cannot.

The Token
Cryptoeconomics is the combination of Token + Game Theory. The latter has nothing to do with gaming; it is the study of mathematical models of conflict and cooperation between intelligent, rational decision makers. Made famous by the movie “A Beautiful Mind” about the life of Nobel Prize-winning economist John Nash, it is linked to the Blockchain through the solution to the famous Byzantine Generals problem, in which some generals may lie about the coordination of their attack in order to hand victory to their opponent. Implementing “Byzantine Fault Tolerance” (BFT) is important because it assumes that no one can be trusted. It is the cryptoeconomics of the Token, not the technology alone, that makes a Blockchain secure. Through a process called Mechanism Design, cryptoeconomic incentives can be created that push people to behave in the right way. On the Blockchain, it costs less to be honest than to be dishonest: a validating node receives tokens if it validates transactions and loses tokens if it goes absent. An illustrative sketch of this incentive structure follows.
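The following toy Python sketch illustrates that incentive asymmetry (a reward for validating, a loss for being absent). The numbers and names are invented for the example and are not Commercio.network's actual staking parameters.

```python
# Toy illustration of cryptoeconomic incentives. The reward and penalty values
# are invented for the example and are not Commercio.network parameters.
REWARD = 1.0    # tokens earned when a validator validates a block
PENALTY = 5.0   # tokens lost when a selected validator is absent

balances = {"alice": 100.0, "bob": 100.0}
# Simulated behaviour over ten rounds: True = validated, False = absent.
behaviour = {"alice": [True] * 10, "bob": [True, False] * 5}

for round_index in range(10):
    for name in balances:
        if behaviour[name][round_index]:
            balances[name] += REWARD
        else:
            balances[name] -= PENALTY

print(balances)  # honest alice ends ahead; frequently absent bob falls behind
```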

Cryptography
Cryptography is used in various parts to provide security to a Blockchain network and is based on three basic concepts: hashing, keys, and digital signatures. A “hash” is a unique fingerprint that helps verify that a certain piece of information has not been altered, without the need to actually see it. Keys are used in pairs: one public and one private. As an analogy, imagine a door that needs two keys to be opened. In this case, the public key is used by the sender to encrypt information that can only be decrypted by the owner of the private key. You never reveal the private key to anyone. A digital signature is a mathematical calculation that is used to prove the authenticity of a (digital) message or document. Cryptography here is based on the concept of the public/private key pair: public visibility, but private control. It’s kind of like your business address: you can post a company website, but that doesn’t give any information about how your production processes take place. You’ll need your private key to get into the company, and because you’ve declared that address as yours, no one else can claim a similar address. Although the concepts of cryptography have been around for a while, on the Blockchain they are combined with the innovation of Game Theory, where uncertainty is limited by mathematical certainty. You can mathematically prove that something has been done without necessarily having to show it to others. Very cool.
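As a concrete illustration of those three concepts, here is a minimal Python sketch that hashes a document, generates a public/private key pair, and creates and verifies a digital signature. It assumes the third-party `cryptography` package is installed; it is illustrative only and not Commercio.network code.

```python
# Illustrative only -- not Commercio.network code.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"Invoice #42: 100 EUR payable to ACME S.r.l."

# 1. Hashing: a unique fingerprint that changes if even one byte changes.
fingerprint = hashlib.sha256(document).hexdigest()
print("SHA-256 fingerprint:", fingerprint)

# 2. Keys: a private key kept secret, and a public key that can be shared.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# 3. Digital signature: proves the document was signed by the private key holder
#    and has not been altered since.
signature = private_key.sign(document)

try:
    public_key.verify(signature, document)
    print("Signature valid: document is authentic and unaltered.")
except InvalidSignature:
    print("Signature invalid: document was altered or signed by someone else.")
```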

 

The article The three components of the Commercio.network Blockchain appeared first on commercio.network.


Hyperledger Ursa

Why Distributed Ledger Technology (DLT) for Identity?

As we continue our pandemic journey that is 2021, more and more people are getting vaccinated against COVID-19. Once vaccinated, people are (finally!) able to do more “in the real... The post Why Distributed Ledger Technology (DLT) for Identity? appeared first on Hyperledger.

As we continue our pandemic journey that is 2021, more and more people are getting vaccinated against COVID-19. Once vaccinated, people are (finally!) able to do more “in the real world.” However, in some cases such as international travel, there is a need to prove that you have been vaccinated before you can participate. In the past, that proof has been accomplished in the form of the paper World Health Organization Carte Jaune/Yellow Card. But in our 21st century pandemic, a handwritten paper document is not particularly trusted. It’s just too easy to buy or make your own. The sudden, urgent need to be able to prove health information in a safe, privacy-preserving and secure way has brought the spotlight on the concept of verifiable credentials and, for Hyperledger, on the three identity-focused projects in the community, Indy (a distributed ledger for identity), Aries (data exchange protocols and implementations of agents for people, organizations and things), and Ursa (a cryptographic library underlying Indy and Aries).

While people understand that paper credentials are insufficient and that a trusted digital solution is needed, they don’t understand why verifiable credentials, or more generally, identity, works extremely well with distributed ledger technology (DLT)—a distributed database spread across multiple nodes, of which blockchain is an example. To be clear from the start, it is not to put the credentials on a public ledger so everyone can see them! We’ll reiterate that a lot in this post. No private data ever goes on the blockchain!!!

To understand why DLT is useful for identity, we need to go back to the basics—paper credentials, how that model has worked for thousands of years, and how the use of DLTs with verifiable credentials allows us to transition the great parts—security and privacy—of that model to the digital age.


Since as far back as 450 BC, people have used paper credentials to enable trusted identity. Legend has it that King Artaxerxes of the Persian Empire signed and gave Nehemiah a paper “safe transit” authorization that he used in travels across the empire. People have been using such documents ever since. In technical terms, a credential is an attestation of qualification, competence, or authority issued to an entity (e.g., an individual or organization) by a third party with a relevant or de facto authority or assumed competence to do so. Examples of credentials issued to people include a driver’s license, a passport, an academic degree, proof-of-vaccination and so on. Credentials are also issued to companies, such as business registrations, building permits, and even health inspection certifications.

Examples of Paper Credentials
By Peter Stoyko, peter.stoyko@elanica.com, Licensed under CC BY 4.0

A typical paper credential, say a driver’s license, is issued by a government authority (an issuer) after you prove to them who you are (usually in person using your passport or birth certificate) and that you are qualified to drive. You then hold this credential (usually in your wallet) and can use it elsewhere whenever you want—for example, to rent a car, to open a bank account or in a bar to show that you are old enough to drink. When you do that, you’re proving (or presenting) the credential to the verifier. The verifier inspects the physical document to decide if it is valid for the business purpose at hand. Note that in verifying the paper credential, the verifier does not call the issuer of the document. The transaction is only between the holder and the verifier. Further, it is the holder’s choice whether they want to share the piece of paper. If they want, they can keep it to themselves.

 

The Paper Credential Model
By Peter Stoyko, peter.stoyko@elanica.com, Licensed under CC BY 4.0

Verification in the paper credential model (ideally) proves:

Who issued the credential.
That the credential was issued to the entity presenting it.
That the claims have not been altered.

The caveat “ideally” is included because of the real possibility of forgery in the use of paper credentials. Back to our “proof-of-vaccination” problem.

Let’s see how the good parts of the paper credential model are retained in the verifiable credentials model. With verifiable credentials:

An authority decides you are eligible to receive a credential and issues you one.
You hold your credential in your (digital) wallet—it does not go on the distributed ledger!
At some point, a verifier asks you to prove the claims from one or more credentials.
If you decide to share your data with the verifier, you provide a verifiable presentation to the verifier, proving the same three things as with the paper credentials. Plus:
You may be able to prove one more thing—that the issued credentials have not been revoked.

As we’ll see, verifiable credentials and presentations are not simple documents that anyone can create. They are cryptographically constructed so that a presentation of the claims within a credential proves four attributes:

Who issued the credential–their identifier is part of the credential and they signed the credential.
Who holds the credential–there is a cryptographic binding to the prover.
The claims have not been altered–they were signed at the time of issuance.
The credential has not been revoked.

Unlike a paper credential, those four attributes are evaluated not based on the judgment and expertise of the person looking at the credential, but rather by machine using cryptographic algorithms that are extremely difficult to forge. Like the paper credential, the verifier does not go back to the issuer to ask about the credential being presented. Only the prover and verifier, the participants in the interaction, need to know about the presentation. So where do the prover and verifier get the information they need for their transaction? We’re just getting to that…


The Verifiable Credentials Model
By Peter Stoyko, peter.stoyko@elanica.com, Licensed under CC BY 4.0

Compared to the paper credentials model, verifiable credentials are far more secure. When the cryptographic verification succeeds, the verifier can be certain of the validity of the data—those four attributes stemming from verifying the presentation. They are left only with the same question that paper credentials have—do I trust the issuer enough?

So where does the DLT fit in?

Three of the four things that the verifier has to prove (listed above) involve published data from the issuer that has to be available in some trusted, public, distributed place, a place that is not controlled by a central authority (hmm…sounds like a DLT!). In Indy and Aries, data published to a DLT is used to verify the credential without having to check with the issuer. In particular:

The verifier has to know who issued the credential based on an identifier and cryptographic signature. From the presentation, it gets an identifier for the issuer, looks it up on a DLT to get a public key associated with the issuer to verify the signature in the presentation. Thus, the identity of the issuer is known.
The verifier has to verify that the claims data has not been altered by verifying a cryptographic signature across the data. Based on an identifier for the type of credential, the verifier gets from a DLT a set of public keys and verifies the signatures. Thus, the verifier knows no one has tampered with the claims data.
The issuer periodically updates a revocation registry on a DLT indicating the credentials that have been revoked. If the holder’s credential is revoked, they are unable to create a proof of non-revocation (yes, that’s a double negative…). If the holder can generate that proof, the verifier can check it. Thus, the verifier knows the credential has not been revoked.

The fourth attribute (the binding of the credential to the holder) in Indy is done using some privacy-preserving cryptographic magic (called a Zero Knowledge Proof) that prevents having a unique identifier for the holder or credential being given to the verifier. Thus, no PII is needed for sharing trusted data.
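Here is a minimal Python sketch of the flow just described, with a plain dictionary standing in for the DLT. The credential layout, DID strings, and revocation check are simplified illustrations rather than the actual Indy/Aries data structures (Indy, for instance, uses a zero-knowledge non-revocation proof instead of a direct registry lookup), and the sketch assumes the third-party `cryptography` package.

```python
# Simplified illustration of ledger-assisted verification. This is not the
# actual Hyperledger Indy/Aries data format.
# Requires the third-party "cryptography" package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer setup: the issuer publishes its public key and a revocation registry
# to the "ledger" (a plain dict standing in for the DLT). No private data here.
issuer_key = Ed25519PrivateKey.generate()
ledger = {
    "did:example:issuer": {
        "public_key": issuer_key.public_key(),
        "revoked_credential_ids": set(),
    }
}

# Issuance: the issuer signs the claims and hands the credential to the holder.
claims = {"id": "cred-001", "issuer": "did:example:issuer", "vaccinated": True}
credential = {
    "claims": claims,
    "signature": issuer_key.sign(json.dumps(claims, sort_keys=True).encode()),
}

# Verification: the verifier talks only to the holder (who presents the
# credential) and the ledger; it never contacts the issuer.
def verify(presented: dict, ledger: dict) -> bool:
    issuer_record = ledger.get(presented["claims"]["issuer"])
    if issuer_record is None:
        return False  # unknown issuer
    try:
        canonical = json.dumps(presented["claims"], sort_keys=True).encode()
        issuer_record["public_key"].verify(presented["signature"], canonical)
    except InvalidSignature:
        return False  # claims were altered or not signed by this issuer
    # Check the issuer's revocation registry published on the ledger.
    return presented["claims"]["id"] not in issuer_record["revoked_credential_ids"]

print(verify(credential, ledger))  # True until the issuer revokes "cred-001"
```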

So why DLT? First, we can get the good parts of paper credentials—private transactions between holders and verifiers and no callback to the issuer. Second, the issuer gets a trusted, open and transparent way to publish the cryptographic material needed for those private holder-verifier transactions. Third, there is no need to have a “Trusted Third Party” participating in the interactions.

And did I mention, no private data goes on the DLT!!! 

Hyperledger Indy, Aries and Ursa are enabling this approach to “self-sovereign identity” in a big way,  bringing about a new layer of trust on the Internet that will let us preserve our privacy and give us control over our identity and data—where it belongs. There is a lot to learn. If you’re curious, a great place to start is this Linux Foundation edX course.

Cover image by Nick Youngson CC BY-SA 3.0 Alpha Stock Images

The post Why Distributed Ledger Technology (DLT) for Identity? appeared first on Hyperledger.


Digital Identity NZ

April newsletter: Words from new ED

This is my first newsletter as the new Executive Director for DINZ and the very first thing I’d like to do is thank Andrew Weaver, our outgoing ED, for his tremendous leadership throughout our establishment phase and to wish him all the very, very best on the next stage of his journey. Andrew has worked … Continue reading "April newsletter: Words from new ED" The post April newsletter: Words fro

This is my first newsletter as the new Executive Director for DINZ and the very first thing I’d like to do is thank Andrew Weaver, our outgoing ED, for his tremendous leadership throughout our establishment phase and to wish him all the very, very best on the next stage of his journey. Andrew has worked tirelessly over the last two and a half years to ensure we are working towards a world where people can safely express their identity to fully participate in the digital economy and society overall. 
 
Good luck Andrew and all the best. E te rangatira, tēnā rawa atu koe. Kia pai te haere
 
I’d also like to reflect a little on why I feel so privileged to have been offered the opportunity to take up this role.
 
Clearly, we are living in an age of unprecedented advances across a broad range of technologies, not just in digital but in biotech, in engineering, in pharmaceuticals, in energy, in advanced materials, in 3-D printing, in nanotech, in robotics; the list is a very long one!
 
If you are familiar with work from the likes of Peter Diamandis (co-founder of Singularity University and X-Prize), Jeremy Rifkin (of the zero marginal cost society fame) and Salim Ismail (author of Exponential Organisations) you will be aware that they evangelise a coming age of abundance, driven by the mass adoption of these advancing technologies.
 
However, anyone who’s read or listened to Erik Brynjolfsson and Andrew McAfee around what they call The Great Decoupling, will be aware that access to this abundance has not been equitable to date, and that the technology-driven rise in labour productivity seen in the last few years has not been matched by an equivalent rise in prosperity for the majority of people.
 
To me it is very clear that equitable access to technology is the key driver in ensuring that the benefits of a tech-driven age of abundance can be provided to all people, not just the lucky few. It is also very clear to me that digital identity is at the very heart of ensuring that equitable access.

Ngā Mihi,

Michael Murphy
Executive Director
Digital Identity New Zealand

To receive our full newsletter including additional industry updates and information, subscribe now

The post April newsletter: Words from new ED appeared first on Digital Identity New Zealand.

Tuesday, 20. April 2021

DIF Medium

Sidetree Protocol reaches V1

The DIF Steering Committee has approved the first major release of the Sidetree Protocol specification, “v1” so to speak. Here is a snapshot of the four companies and four implementations that stretched and built the specification. Scalable, Flexible Infrastructure for Decentralized Identity This week, the DIF Steering Committee officially approved the first major release of the Sidetree Protoco

The DIF Steering Committee has approved the first major release of the Sidetree Protocol specification, “v1” so to speak. Here is a snapshot of the four companies and four implementations that stretched and built the specification.

Scalable, Flexible Infrastructure for Decentralized Identity

This week, the DIF Steering Committee officially approved the first major release of the Sidetree Protocol specification, “v1” so to speak. This protocol has already been implemented, and four of its implementers have been collaborating intensively for over a year on expanding and extending this specification together.

What exactly is a “Sidetree”?

Sidetree is a protocol that extends “decentralized identifiers” (DIDs), one of the core building blocks of decentralized identity. Decentralized identifiers (DIDs) enable a person or entity to securely and directly “anchor” their data-sharing activities to a shared ledger, secured by cryptography. The first generation of DID systems accomplished this with a 1-to-1 relationship between “blockchain addresses” (cryptographic identities) and the more flexible, powerful addresses called DIDs. These latter functioned as privacy-preserving extensions of the blockchain addresses to which they were closely coupled. In this way, each DID effortlessly inherited the formidable security guarantees of those blockchains — but in many cases, they also inherited scalability problems and economic models that were a bad fit for many DID use-cases.

Sidetree is a systematic, carefully-engineered protocol that loosens that coupling between anchor-points on a distributed data system (usually a blockchain) and the DID networks anchored to them. Crucially, it replaces the 1-to-1 relationship with a 1-to-many relationship, pooling resources and security guarantees. Depending on the use-case and implementation strategies chosen, the protocol can optimize for scalable performance, for developer-friendly ergonomics and SDKs, for the portability of DIDs and networks of DIDs across multiple anchoring systems, and even for high-availability in low-connectivity contexts where a global blockchain cannot be relied upon directly.

The name “sidetree” combines two hints as to its early technical inspirations and superpowers. Each Sidetree network functions as a kind of identity-specific “Layer 2” overlay network where participating nodes root aggregated operational data into transactions of the underlying chain. This mechanism has many high-level conceptual similarities with the “sidechains” of other “Layer 2” systems, such as the Lightning network running atop Bitcoin or state channel implementations on Ethereum. It also shares with merkle “trees” (and DAGs like IPFS) the self-certifying property of content-addressability, a core building block of decentralized and distributed systems.
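The following Python sketch illustrates that 1-to-many relationship: many DID operations are batched off-chain, and only a single content hash of the batch is anchored in one transaction on the underlying ledger. The field names and hashing scheme are invented for illustration and do not match the actual Sidetree wire format.

```python
# Conceptual illustration of Sidetree-style batching. The field names and the
# flat SHA-256 hash are invented and do not match the real Sidetree wire format.
import hashlib
import json

def content_address(obj: dict) -> str:
    """Hash-based address for a JSON object (stand-in for an IPFS-style CID)."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# Thousands of DID operations are collected off-chain by a Sidetree node...
operations = [
    {"type": "create", "did_suffix": f"did-{i}", "initial_keys": [f"key-{i}"]}
    for i in range(10_000)
]
batch = {"operations": operations}
batch_cid = content_address(batch)  # the batch lives in content-addressed storage

# ...while a single, tiny anchor transaction is written to the underlying ledger,
# so many DIDs share one on-chain transaction (the 1-to-many relationship).
anchor_transaction = {"anchor_string": batch_cid, "operation_count": len(operations)}
print(anchor_transaction)
```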

Leveraging concepts from sidechains and “Layer 2” network protocols, Sidetree was first proposed by Microsoft’s Daniel Buchner and has been incubated in the DIF community, evolving along the way with major contributions from a growing list of DIF members.

The team that delivered the specification
Microsoft (Redmond, WA, USA)

A global consumer and enterprise app, service, hardware, and cloud infrastructure provider whose mission is to empower every person to achieve more. Microsoft is proud to have worked on Sidetree and implemented the Sidetree protocol via its contributions to ION. As a key piece of infrastructure that is foundational to its Decentralized Identity work, Microsoft is committed to the continued development of Sidetree and ION in DIF.

SecureKey (Toronto, ON, Canada)

SecureKey is a leading digital identity and authentication provider, and is a champion of the ecosystem approach to decentralized identity and verifiable credentials, revolutionizing the way consumers and organizations approach identity and attribute sharing in the digital age. This ecosystem-first philosophy informs our investment in Sidetree as a protocol for extensibility and scalability, one that can evolve its feature set and its network model over time. Of particular technological interest to us is how Sidetree can be overlaid on a wide variety of ledger and propagation systems. This will enable identity systems that span many use cases and work across public blockchains, federation and witness protocols, and permissioned blockchains without being locked to any particular ledger technology.

Transmute Industries (Austin, TX, USA)

Transmute uses decentralized identifiers (DIDs) and verifiable credentials (VCs) to secure critical trade data by digitizing key trade documents so that they’re traceable and verifiable anywhere in the world, easily accessible and selectively shareable, searchable and auditable, and impossible to forge or alter. Transmute contributed to Sidetree’s development because it leverages batch processing capabilities to achieve enterprise scale and retains maximum optionality for our customers, allowing their business to span many blockchains and trust frameworks. Transmute sees Sidetree-based networks as necessary for scaling up decentralized identity capabilities to a global enterprise scale, where thousands of verifiable transactions per second can be processed at an unbeatable price.

MATTR Global (Auckland, New Zealand)

Mattr works with communities and a growing network of companies to shift industries like digital identity towards a more equitable future, providing tools to support digital inclusion, privacy and end-user control. Sidetree represents a significant leap forward in thinking around how to create truly decentralized infrastructure for resilient identifiers. We welcome the agnostic and extensible approach not just to distributed ledgers but also to content addressable storage and other building-blocks of flexible infrastructure. We look forward to integrating many of the DID systems coming out of the Sidetree standardization effort.

The first generation of Sidetree Systems

Transmute maintains Sidetree ledger adapters for Ethereum, Amazon QLDB, Bitcoin and Hyperledger Fabric. We also support interoperability tests with DID Key, the Universal Wallet Interop Spec, the VC HTTP API, and Traceability Vocabulary. Transmute has built Sidetree.js, an implementation of the Sidetree protocol based on the DIF’s codebase that focuses on modularity: it is a Typescript monorepo where each component of a Sidetree node (Ledger, Content Addressable Storage, Cache database) can be substituted with different implementations that use a common interface.

SecureKey has created a ledger-agnostic Go implementation of Sidetree along with Orb and Hyperledger Fabric variations built on top. The did:orb method enables independent organizations to create decentralized identifiers that are propagated across a shared decentralized network without reliance on a common blockchain. By extending Sidetree into a Fediverse of interconnected registries, Orb provides the foundation for building digital ecosystems on top of decentralized identifiers using a federated, replicated and scalable approach.

Microsoft is a primary contributor to ION, an open source, public, permissionless implementation of Sidetree on the Bitcoin ledger. There are several repositories and public utilities that make working with ION easier, including:

ION GitHub repo: the main repository for the code that powers ION nodes
ION Tools: JavaScript libraries for Node.js and browser environments that make using DIDs and interacting with the ION network easier for web developers
ION Install Guide: a step-by-step guide for installing an ION node
ION Explorer: a graphical interface for viewing DIDs and auditing other data transactions published to the public ION network

What’s next for Sidetree

One significant feature on the horizon is to add support for pruning of verbose lineage data (which is no longer needed to maintain the secure backbone of DIDs in a Sidetree implementation) at Sidetree’s anchor points. This addition will allow Sidetree-based networks to purge upwards of 95% of legacy operation data in a decentralized way that maintains all of the security guarantees the protocol currently makes.

Another near-future feature is the so-called “DID Type Table.” DIDs in various DID Method implementations may be typed to provide an indication as to what the DID might represent. The Sidetree WG will publish a table of types (not including human-centric types) that stand for organizations, machines, code packages, etc., which DID creators can use if they want to tag a DID with a given type.

The medium-term roadmap is up for discussion, so if you have ideas get involved!

Sidetree Protocol reaches V1 was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.


GLEIF

Q1 2021 in review: The LEI in Numbers

Visibility is a core value for the Global LEI Foundation (GLEIF) as we provide transparency and accessibility to all information related to legal entity identification and data services. This is why we make our quarterly Global LEI System Business Report accessible to all, and why we are committed to providing consistent updates on LEI adoption worldwide. These reports are always free of charge an

Visibility is a core value for the Global LEI Foundation (GLEIF) as we provide transparency and accessibility to all information related to legal entity identification and data services. This is why we make our quarterly Global LEI System Business Report accessible to all, and why we are committed to providing consistent updates on LEI adoption worldwide. These reports are always free of charge and support GLEIF’s aim of providing open, unrestricted access to LEI data on a global scale.

In our latest report, covering Q1 2021, we have seen continued growth in LEI issuance, with 63,000 LEIs issued within the quarter, an increase of 1,000 from Q4 2020. As a result, the total active LEI population now stands at 1.77 million.

The below infographic contains the key statistics from Q1 2021.

In Q1, we continue to see progress across our key growth markets, with Estonia taking the top spot as the largest growth market (12.1% growth rate), overtaking China (12%) for the first time in over a year. Additionally, we are pleased to have onboarded two new issuers to our network of Local Operating Units (LOUs): Keler Central Depository Ltd., based in Hungary, and Tunisie Clearing, based in Tunisia. These additions to the LOU network present an excellent opportunity to grow the presence of the LEI in two key markets.

For the full report which includes further detail on the state of play of LEI issuance and growth potential, the level of competition between LEI issuing organizations in the Global LEI System and Level 1 and 2 reference data, please visit the Global LEI System Business Reports page.

If you are interested in reviewing the latest daily LEI data, our Global LEI System Statistics Dashboard contains daily statistics on the total and active number of LEIs issued. This feature now enables any user to review historical data by geography, increasing transparency on the overall progress of the LEI.

For further detail, or to access historical data, please visit the Global LEI System Business Report Archive. We look forward to sharing our progress each quarter as we continue to drive LEI adoption in 2021.


SelfKey Foundation

Where to Get the SelfKey Token (KEY)

Want to know where you can get the SelfKey token – KEY? Check out this list of the many cryptocurrency exchanges that have listed the KEY token. The post Where to Get the SelfKey Token (KEY) appeared first on SelfKey.

Want to know where you can get the SelfKey token – KEY? Check out this list of the many cryptocurrency exchanges that have listed the KEY token.

The post Where to Get the SelfKey Token (KEY) appeared first on SelfKey.

Monday, 19. April 2021

Commercio

The most frequently asked questions about the Commercio.Network project in one PDF

Our project is growing exponentially and every day we receive many questions. We have created a pdf document that collects all the answers to those important questions. Here are some of the most frequent questions we are asked: What is the mission of Commercio.network? What is digital transformation? Why is it fundamental to use a blockchain? What […]

Our project is growing exponentially and every day we receive many questions. We have created a pdf document that collects all the answers to those important questions.  

Here are some of the most frequent questions we are asked:

What is the mission of Commercio.network?
What is digital transformation?
Why is it fundamental to use a blockchain?
What have you developed so far?
When was the idea born?
When was the startup born?
When was the consortium born?
What is the new SPA project?
How do you plan to reach the market?
What are you developing now?
What is a trusted network?
How many tokens exist and how are they distributed?
What is a Token?
What is a Credit?
What is the exchange ratio between COM and CCC?
Where can I buy Token for my Blockchain project?
When will the ICO of Token Commerce be done?
When will the Commerce Token be listed on an exchange?
When will DEX be launched?
What is CommerceDEX?
What will be the main functions of DEX?
How will the token price be determined on the DEX?
Why was Commercio Consortium created?
How much does it cost to adhere to the Consortium?
Which advantages do I get joining the Consortium?
Do I have to join the Consortium to be part of Commercio.network?
Who decides the functions to implement in the software project?
Who owns the software, the brand and the software project?

All these answers are available in a PDF document at this address

 

https://commercio.network/faq/

The article The most frequently asked questions about the Commercio.Network project in one PDF appeared first on commercio.network.


decentralized-id.com

Object Capability Model

Computer scientist E. Dean Tribble stated that in smart contracts, identity-based access control did not support well dynamically changing permissions, compared to the object-capability model. He analogized the ocap model with giving a valet the key to one's car, without handing over the right to car ownership.
Awesome Object Capabilities and Capability-based Security

Capability-based security enables the concise composition of powerful patterns of cooperation without vulnerability. What Are Capabilities? explains in detail.

Object Capabilities - SourceCrypto • Object Capability Model - wiki.c2

Computer scientist E. Dean Tribble stated that in smart contracts, identity-based access control did not support well dynamically changing permissions, compared to the object-capability model. He analogized the ocap model with giving a valet the key to one’s car, without handing over the right to car ownership.

The structural properties of object capability systems favor modularity in code design and ensure reliable encapsulation in code implementation.

These structural properties facilitate the analysis of some security properties of an object-capability program or operating system. Some of these – in particular, information flow properties – can be analyzed at the level of object references and connectivity, independent of any knowledge or analysis of the code that determines the behavior of the objects. As a consequence, these security properties can be established and maintained in the presence of new objects that contain unknown and possibly malicious code.

Object Capabilities - eRights

The capability model is, in a sense, the object model taken to its logical extreme. Where object programmers seek modularity – a decrease in the dependencies between separately thought-out units – capability programmers seek security, recognizing that required trust is a form of dependency. Object programmers wish to guard against bugs: a bug in module A should not propagate to module B. Capability programmers wish to guard against malice. However, if B is designed to be invulnerable to A’s malice, it is likely also invulnerable to A’s bugs.

Literature
Authorization Capabilities for Linked Data v0.3 - An object capability framework for linked data systems - CCG

Authorization Capabilities for Linked Data (ZCAP-LD for short) provides a secure way for linked data systems to grant and express authority utilizing the object capability model. Capabilities are represented as linked data objects which are signed with Linked Data Proofs. ZCAP-LD supports delegating authority to other entities on the network by chaining together capability documents. “Caveats” may be attached to capability documents which may be used to restrict the scope of their use, for example to restrict the actions which may be used or providing a mechanism by which the capability may be later revoked.

[DIDAuth + Obj. Cap.](https://iiw.idcommons.net/DIDAuth%2B_Obj._Cap.)

What is DIDAuth and how is it compatible with Object Capabilities?
We started by defining and describing object capabilities:

A Capability is a Transferable Unforgeable Permission. It can be implemented with unguessable URLs or signed objects. A Java program object reference is a capability; it allows for actions on the subject (the object instance). A stronger implementation of object capabilities involves a digital certificate issued by a public key, for a resource with a set of supported methods:
Issuer: AlicePubKey
Resource: did:dad:0x123
Actions: Read,Write
Signature: 0x456
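Building on the sketch above, the following Python example illustrates a capability as a signed object, including an attenuated (read-only) copy. The field names are invented, the example is not the ZCAP-LD format, and it assumes the third-party `cryptography` package.

```python
# Illustrative capability-as-signed-object sketch. Field names are invented and
# this is not the ZCAP-LD format. Requires the third-party "cryptography" package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

alice_key = Ed25519PrivateKey.generate()   # Alice controls the resource
alice_pub = alice_key.public_key()

def sign_capability(doc: dict, key: Ed25519PrivateKey) -> dict:
    """Return the capability document with the issuer's signature attached."""
    payload = json.dumps(doc, sort_keys=True).encode()
    return {**doc, "signature": key.sign(payload).hex()}

def verify_capability(cap: dict, issuer_pub) -> bool:
    """Check that the capability really was issued (signed) by the issuer."""
    body = {k: v for k, v in cap.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    try:
        issuer_pub.verify(bytes.fromhex(cap["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Alice issues a capability for her resource, allowing Read and Write. Holding
# it *is* the authorization: like a valet key, it works no matter who presents it.
root_cap = sign_capability(
    {"issuer": "AlicePubKey", "resource": "did:dad:0x123", "actions": ["Read", "Write"]},
    alice_key,
)

# Attenuation: Alice can hand out a narrower, read-only capability without
# granting write access (a simple stand-in for a "caveat").
read_only_cap = sign_capability(
    {"issuer": "AlicePubKey", "resource": "did:dad:0x123", "actions": ["Read"]},
    alice_key,
)

print(verify_capability(root_cap, alice_pub), read_only_cap["actions"])
```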
Applying the Principle of Least Authority to User Interaction by Bill Tulloh - RWoT 8

Object capabilities (ocaps) are increasingly recognized as an important tool for achieving the goals of self-sovereign identity. Many of the principles of self-sovereign identity, such as minimization and protection, can best be achieved through the disciplined pursuit of the principle of least authority that ocaps enable. This paper examines how POLA can be extended to better protect users when exercising their self-sovereign identity.

Introductory Capability DHT by James Foley - RWoT 8

The Object Capability software design paradigm is a powerful philosophy for the programming of decentralized applications particularly in the realms of security and rights management.

Models of Identity by Joe Andrieu, Nathan George, Christophe Macintosh, Ouri Poupko, Antoine Rondelet, Andrew Hughes – RWoT 7
Security • Liberty • Data • Relationship • Capability

Considering different models for handling identity information allows reconciliation, and creates opportunities to address primary use cases across paradigms, increasing overall strength and security of a solution.
[…]
In the Object Capabilities model, authorization is managed by creating, sharing, attenuating, and using “capabilities” instead of, for example, access control lists. If you have a valid “capability”, you have the authorization. Like a car key, Object Capabilities may be used no matter who you are. This model shifts the burden of identification from error-prone correlations to directly work with individuals’ actual capabilities.

Cryptographic and Data Modeling Requirements from RWoT by Manu Sporny, Dave Longley, and Chris Webber - RWoT 7

This paper introduces the uninitiated to the requirements that have been identified over the years that are driving the community toward certain technological solutions.

Rebooting the Web of Trust is a community that is attempting to create a decentralized ecosystem that enables people to be in control of various aspects of their data and identity information. The group often talks about Decentralized Identifiers, Verifiable Credentials, Object Capabilities, ed25519 keys, cryptographic identifiers, and other technologies but rarely spends time documenting how we got here.

Recent happenings with Linked Data Capabilities By Christopher Lemmer Webber – RWoT 6

One of the outputs from Rebooting Web of Trust Fall 2017 was a writeup on Linked Data Capabilities based on discussions from the workshop (with particular thanks to the guidance of Mark S. Miller’s longstanding work on object capabilities). While the writeup speaks for itself, in short Linked Data Capabilities provide a way to encode object capability security to linked data systems. Much has happened since then.

After the workshop, ideas from the paper were reified into specification form, and the W3C Credentials Community Group has taken on the specification as an official work item of the group. Some changes have happened in the design of Linked Data Capabilities since the initial Rebooting Web of Trust paper.

Credentials CG Telecon Minutes for 2017-11-14 - The W3C Credentials Community Group

Topics

Introduction to Mark Miller (Google)
DID Spec Review
Capabilities in Verifiable Credentials
W3C TPAC 2017 Update
Smarm: Requirements for a smart-signatures Scheme By Christopher Lemmer Webber and Christopher Allen - RWoT 5

Smart signatures are desirable, but how to implement them? We need a language that is powerful and flexible enough to meet our needs while safe and bounded to run while remaining simple enough to feasibly implement.

Scheme is a Turing-complete language with a (at least stated) fondness for minimalism. Unfortunately Scheme on its own is neither “safe” nor (necessarily) deterministic. Thankfully we can get the properties we want through:

Making object capabilities a core part of the language. Specifically, Jonathan Rees’ “W7 security kernel” demonstrates that a pure lexically scoped environment is itself an appropriate substrate for object capabilities.
Restricting space and time precisely in a way that is deterministic and reproducible.
Removing sources of external side effects.
Identity Hubs Capabilities Perspective by Adrian Gropper, Drummond Reed, Mark S. Miller – RWoT 5

Identity Hubs as currently proposed in the Decentralized Identity Foundation (DIF) are a subset of a general Decentralized Identifier (DID) based user-controlled agent, based on ACLs rather than an object-capabilities (ocap) architecture. The current approach has both security and scalability issues. Transitioning the Hubs design to an ocap model can be achieved by introducing an UMA authorization server as the control endpoint. This avoids creating confused-deputy security issues and expands scale by enabling the hub to delegate access to resources not stored in the hub itself.

Linked Data Capabilities By Christopher Lemmer Webber and Mark S. Miller

Linked Data Signatures enable a method of asserting the integrity of linked data documents that are passed throughout the web. The object capability model is a powerful system for ensuring the security of computing systems. In this paper, we explore layering an object capability model on top of Linked Data Signatures via chains of signed proclamations. We call this system “Linked Data Capabilities”, or “ld-ocap” for short.

Saturday, 17. April 2021

decentralized-id.com

Covid Credentials Initiative

The COVID-19 Credentials Initiative (CCI) is an open global community looking to deploy and/or help deploy privacy-preserving verifiable credential projects in order to mitigate the spread of COVID-19 and strengthen our societies and economies.

COVID-19 Credentials Initiative (CCI) is a part of Linux Foundation Public Health (LFPH), an initiative by Linux Foundation to build, secure, and sustain open-source software to help public health authorities (PHAs) combat COVID-19 and future epidemics. After initial success with deploying exposure notification apps, LFPH started to host CCI in December 2020 to advance the use of Verifiable Credentials (VCs) and data and technical interoperability of VCs in the public health realm, starting with vaccine credentials.

Website • Blog • Twitter • Linkedin • Forum

CCI Knowledge Base

The COVID-19 Credentials Initiative (CCI) is a global community of more than 300 individuals from over 100 organizations (and counting) looking to deploy and/or help to deploy privacy-preserving verifiable credential projects in order to mitigate the spread of COVID-19 and strengthen our societies and economies.

60 strong Self Sovereign Identity group targets COVID-19 immunity passports, credentials

The COVID Credentials initiative (CCI) has launched to use digital identity to address the spread of COVID-19. The aim is to develop “immunity passports” and much more. The group includes individuals who are part of Evernym, ID2020, uPort, Dutch research organization TNO, Microsoft, ConsenSys Health and consultants Luxoft. So far, at least 69 have signed up.

COVID-19 Immunity Credentials and Contact Tracing Solutions Report Identity Review

Immunity credentials can allow those who are not COVID-positive to return to daily in-person routines, travel or going back to work, providing evidence that they are low-risk of transmitting the COVID-19 virus. This could be accomplished through a national digital identification system that monitors and tracks the health status of its citizens. The details of how digital immunity credentials would be implemented are still up in the air, but may include important features such as the ability to share results remotely, interoperability across systems, proof of authenticity and the potential for individuals to have full ownership over their health data.

CCI GF Task Force

This page describes the COVID-19 Credentials Initiative Governance Framework Task Force (the “CCI GF TF”). It was created by Sankarshan Mukhopadhyay and Chris Raczkowski.

Medcreds Conforms to the CCI Governance Framework

One of the important efforts by the CCI has been completed by the Rules and Governance Workstream. This workstream is in charge of defining the rules of how VC technology is to be used, as well as the algorithmic and human trust mechanisms to ensure sensitive personal data remains secure, private, and tamper-proof.

Zerion Joins Covid Credentials Initiative

As such, Zerion Software is proud to announce our participation in the Covid Credentials Initiative: a global, cross-sector community of organizations committed to finding ways to use digital identities to mitigate the spread of COVID-19 while rebooting public trust.

CCI Blog Hello World from the COVID-19 Credentials Initiative

The COVID-19 pandemic has, in a few months’ time, taken the lives of almost half a million people worldwide and brought economies into lockdown globally. While many are struggling with the effects of social distancing, financial distress, or fear of contracting the virus, here at the COVID-19 Credentials Initiative (CCI), nearly 300 individuals from 100 organizations have united around a cause worthy of our collective efforts: supporting projects that deploy privacy-preserving Verifiable Credentials (VCs) to mitigate the spread of COVID-19 and strengthen our societies and economies.

Bringing emerging privacy-preserving technology to a public health crisis

We submitted this position statement to the “Privacy & Pandemics Workshop: Responsible Uses of Technology and Health Data During Times of Crisis — An International Tech and Data Conference” by the Future of Privacy Forum. The aim is to, from two participants’ point of view, share an abbreviated case study of CCI and to highlight key challenges that arose in our efforts to responsibly use new privacy-preserving technologies to mitigate the spread of COVID-19. We wanted to share our submission with the CCI community and the public in the hope to invite some further discussions.

Carrying Your COVID-19 Credentials in a Physical “Wallet”

We are all painfully aware of the economic and social restrictions imparted on us as a result of the COVID-19 pandemic. Reopening offices, restaurants, local private and public facilities — and most importantly, international borders — will likely require a flexible, interoperable, and ubiquitous system that preserves individual agency and privacy. The COVID-19 Credentials Initiative (CCI) focuses its efforts on supporting technology projects that work to meet these requirements through the utilization of W3C-compliant ‘Verifiable Credentials (VCs)’, tamper-evident credentials whose authorship can be cryptographically verified.

CCI has joined Linux Foundation Public Health!

When the COVID-19 Credentials Initiative (CCI) was formed in April 2020, we were a self-organizing group of companies and individuals, held together by a few mailing lists and working groups, to explore how Verifiable Credentials (VCs), an open standard and an emerging technology, could be used for the public health crisis unfolding with COVID-19. Recognizing our limits early on as an informal group, we quickly pivoted from developing a solution together to supporting each other to build for their local contexts. Over the course of nine months, we have seen over 20 projects present their work to the CCI community and developed an MVP governance framework that can be adapted to specific COVID-19 use cases.


SSI and Decentralized Identity Podcasts

The SSI Orbit Podcast Mathieu Glaude Let’s Talk About Digital Identity (LTADI) – Ubisecure Definitely Identity – Tim Bouma Federal Blockchain News - Not always about ID, covers work with DHS PSA Today – Privacy, Surveillance and Anonymity by Kaliya Identity Woman and Seth Goldstein ID Talk – FindBiometrics Identity, Unlocked – Auth0 (really great!) Identity North Podcast – Identity North
The SSI Orbit Podcast – Mathieu Glaude
Let’s Talk About Digital Identity (LTADI) – Ubisecure
Definitely Identity – Tim Bouma
Federal Blockchain News – not always about ID, covers work with DHS
PSA Today – Privacy, Surveillance and Anonymity, by Kaliya Identity Woman and Seth Goldstein
ID Talk – FindBiometrics
Identity, Unlocked – Auth0 (really great!)
Identity North Podcast – Identity North

Episodes
Analytics Neat – Episode 37: What is a Decentralized Identity (DID)?
State Change #41: Unpacking Digital Identity – Consensys
Hygiene for a computing pandemic – FOSS&CRAFTS

The last of these features Christopher Lemmer Webber discussing the object capability security approach. It’s a generalization not specific to VCs, continuing from the conversation on the CCG mailing list, Hygiene for a computing pandemic: separation of VCs and ocaps/zcaps, which we shared last month.


Global Legal Entity Identifier Foundation (GLEIF)

Established by the Financial Stability Board in June 2014, the Global Legal Entity Identifier Foundation (GLEIF) is tasked to support the implementation and use of the Legal Entity Identifier (LEI). The foundation is backed and overseen by the LEI Regulatory Oversight Committee, representing public authorities from around the globe that have come together to jointly drive forward transparency withi

Website • Blog • Twitter • Youtube • Linkedin • Crunchbase

Established by the Financial Stability Board in June 2014, the Global Legal Entity Identifier Foundation (GLEIF) is tasked to support the implementation and use of the Legal Entity Identifier (LEI). The foundation is backed and overseen by the LEI Regulatory Oversight Committee, representing public authorities from around the globe that have come together to jointly drive forward transparency within the global financial markets. GLEIF is a supra-national not-for-profit organization headquartered in Basel, Switzerland. - This is GLEIF

Introducing the Verifiable LEI (vLEI)

The vLEI infrastructure will be a network-of-networks of true universality and portability, developed using the KERI (Key Event Receipt Infrastructure) protocol. It will support the full range of blockchain, self-sovereign identity and other decentralized key management platforms. vLEIs will be hostable on both ledgers and cloud infrastructure supporting both the decentralization of ledgers plus the control and performance of cloud. Portability will enable GLEIF’s vLEI ecosystem to unify all ledger-based ecosystems that support the vLEI.

GLEIF Advances Digital Trust and Identity for Legal Entities Globally 

Drummond Reed, Steering Committee Member, Trust-over-IP-Foundation, comments: “The vLEI has the potential to become one of the most valuable digital credentials in the world because it is the hallmark of authenticity for a legal entity of any kind. The family of digital credentials in the GLEIF vLEI Governance Framework can serve as a chain of trust for anyone needing to verify the legal identity of an organization or of a person legally acting on that organization’s behalf. The demand this will create for LEIs — and the impact it will have on adoption of self-sovereign identity — cannot be overestimated. It will be a sea change for digital trust infrastructure that will benefit every country, company, and citizen in the world.”

LEIs to enable corporate digital ID with verifiable credentials

The Global Legal Entity Identifier Foundation (GLEIF) is the umbrella body that delegates responsibility for issuing LEIs to local organizations. It’s such a pressing issue that it was raised by the OECD and B20 (G20 business) just three months ago when they suggested a Global Value Chain (GVC) Passport.

GLEIF Launches New Stakeholder Group to Accelerate the Integration of LEIs in Digital Certificates

GLEIF has launched a CA Stakeholder Group to facilitate communication between GLEIF, CAs and TSPs from across the world, as they collectively aim to coordinate and encourage a global approach to LEI usage across digital identity products. Participation has already been confirmed by China Financial Certification Authority (CFCA), DigiCert Inc, InfoCert, Entrust Datacard, ICAI India, and SwissSign.

Self-sovereign digital identity, vLEI as identification standard for InfoCert DIZME network

Through vLEIs, companies, government organizations, and other legal entities around the world will have the ability to identify themselves unambiguously, even outside of the financial markets, to conduct a growing number of activities digitally, such as:

GLEIF and uPort Test Verified Data Exchange in Financial Transactions - The Ethereum-enabled identity solution is used for permissioned issuance and verification of digitally verifiable credentials using LEIs.

The Global Legal Entity Identifier Foundation (GLEIF) and uPort​, ConsenSys’ digital identity platform anchored on the Ethereum blockchain, have partnered to support the process of exchanging verified data used in financial, commercial, and regulatory transactions. GLEIF is the G20-backed non-profit foundation tasked with promoting the use of legal entity identifiers (LEIs) as the global standard to unambiguously identify parties doing business. Together with uPort, GLEIF is testing how businesses can leverage the Ethereum-backed identity system to increase the efficiency of verifying business identities and persons acting on its behalf within the LEI ecosystem.

NEWS: GLEIF and Evernym Demo ‘Organization Wallets’ to Deliver Trust and Transparency in Digital Business

The Global Legal Entity Identifier Foundation (GLEIF) and Evernym have piloted a solution which allows organisations to create and manage ‘organisation wallets’, containing digital portable credentials that confirm an organisation’s identity and verify the authority of employees and other representatives to act on behalf of the organisation. These credentials can be used to securely identify authorised representatives when they execute an increasing number of digital business activities, such as approving business transactions and contracts, including client onboarding, transacting within import/export and supply chain business networks and submitting regulatory filings and reports.

Ubisecure announces support for Organisation Verifiable Credentials – the Global LEI Foundation verifiable LEI
The verifiable LEI (vLEI) is an organisation-based Verifiable Credential that asserts trusted organisation identity and the roles of authorised representatives and employees.
Ubisecure and its partner network will issue vLEIs. Pre-registration for the Pilot Programme has opened.
Ubisecure Identity-as-a-Service and Customer IAM solutions will support vLEI adoption to help manage employee rights through its Representation Governance capabilities.
“vLEI brings the LEI to a wider audience,” says Ubisecure.
GLEIF vLEI - Verifiable Credentials containing LEI - RapidLEI

The new service based on a digitally verifiable credential containing the LEI. Based on self-sovereign identity (SSI), the vLEI enables automated trusted identity verification between counterparties.

Podcast – PSA Today: Kaliya & Seth talk LEIs with Simon Wood, CEO of Ubisecure (#1 issuer of Legal Entity Identifiers)

Topics include the evolution of LEIs since the financial crisis of 2008, the difference between high assurance and low assurance, and the relationship between rights and ownership as it relates to identity management of entities.

Friday, 16. April 2021

Berkman Klein Center

Decolonial Humanitarian Digital Governance

“Before we start, I’d like to acknowledge that the decisions we make in this room today may have implications into the future and far beyond the lifetime of this project, team or organisation. We make those decisions with that in mind.”- The Long Time Project Image by Carol Gaessler — shared under Creative Commons-Attribution-NonCommercial-ShareAlike 4.0 International license (CC-BY-NC-SA). T
“Before we start, I’d like to acknowledge that the decisions we make in this room today may have implications into the future and far beyond the lifetime of this project, team or organisation. We make those decisions with that in mind.”- The Long Time Project
Image by Carol Gaessler — shared under Creative Commons-Attribution-NonCommercial-ShareAlike 4.0 International license (CC-BY-NC-SA).

This is the first public ‘writing out loud’ of my dual fellowship with the Berkman Klein Center and the Carr Centre for Human Rights, both at Harvard University. In the spirit of a true exploration, rather than starting from a point of ‘expertise’ and homogeneity, I offer this as an emergent thought process, and to invite in participation. I must also point out that though this blog is written through the lens of my journey, my learning isn’t mine alone — it is greatly informed by so many people, disciplines and research and I am grateful to the countless people that have gracefully shared their knowledge and wisdom so that we might collectively grow together.

“Who and what gets fixed in place to enable progress? What social groups are classified, corralled, coerced and capitalized upon so others are free to tinker, experiment and engineer the future?” Ruha Benjamin

I started off my exploration with this hypothesis: Can humanitarian digital policy be decolonized?

I started with this, as I’ve often argued that the humanitarian aid system perpetuates hierarchical, patriarchal, hegemonic views of what ‘development’ and ‘progress’ look like, ignoring other world-views and the underlying systemic and structural pillars of inequality and bias. As the humanitarian aid system increasingly intersects with technology systems developed in the context of Western capitalism and in small pockets of privileged power, I have not been alone in raising concerns about the implications, for those who are minoritized in the Global South, of the collision of two systems that are fundamentally patriarchal and hegemonic.

To put in context, the range of digital or technology systems that humanitarian actors engage with is incredibly wide. This can range from (as a snapshot):

Systems used to support the coordination of aid efforts — for example, biometric digital identities with refugee assistance programs or earth observing technology in disaster relief
Digital platforms or digital transformation with partners — for example, e-government services that link social protection and citizen welfare
Technology innovations to support affected populations’ access to aid — for example, ICTs, apps, digital ledgers, hardware and many others

Even what we constitute as ‘humanitarian’ has shades of grey — from the pointy end of a crisis to support during peacetime. The wide array of aid provision does not easily exist within the carefully designed terminology found in textbooks and political resolutions.

Additionally, who works on technology or digital innovations within humanitarian institutions is wildly divergent — mostly within institutions’ innovation labs, or within digital or thematic teams — then deployed to the Global South for implementation or testing (the notion of testing on vulnerable groups is already contentious and has its own set of critiques).

The diversity of use cases, contexts, stakeholders and owners does have one common thread however. As humanitarian organisations do not tend to traditionally have the requisite digital or technology expertise in-house, they partner externally to achieve their aims. And this is the sticking point — it easily falls into expertise and partnerships that predominantly come from small communities of public-private, technology partners, and academic institutions from the global north. What this does (whether implicitly or unconsciously) is to reinforce a dominant, hegemonic narrative that assumes:

The experiences of global civil society and its actors are homogeneous
The singular ‘Silicon Valley’ values that underpin such digital policies are the ones that all people aspire to, regardless of where they live or their cultural, societal, economic, geographic bearings
Power dynamics will continue to be affirmed in the hands of those that currently hold it, without considering the cascading impacts of those policy decisions on those that are most going to be affected by it

The appropriateness and impacts of digital technologies and AI within aid systems are still being discovered and understood. An incredible amount of work has gone into establishing data governance guidance and data protection protocols, but not as much into broader policy governing the deployment and use of these technologies sector-wide. Often they are siloed into communities or thematic areas of work, or perhaps resulting in codes of conduct — but not systematically looking at a sector-wide approach that interrogates whether the use of these technologies paradoxically exposes, mitigates or expands harm on often marginalised, minoritised constituents, and whether the digital futures we are chasing merely replicate or reinforce existing or past inequalities.

“..the relationship between tech industries and those populations who are outside their ambit of power — women, populations in the Global South, including black, Indigenous and Latinx communities in North America, immigrants in Europe — is a colonial one” Sareeta Amrute, Data & Society

So, could humanitarian actors play a more intentional role in designing just and equitable digital futures? Could we in fact, unshackle ourselves from our neo-colonial humanitarian mental models, and push back against the hierarchies of techno-chauvinism and meritocracy? Could we use this moment in time to design worlds that don’t imagine some figures, especially populations in the Global South — to merely be passive beneficiaries and outside of the borders of expertise we seek? Could we invert the pathways of tech colonialism in the aid sector?

As I started my exploration, I realised that my original hypothesis needed more rigour. There were tensions inherent in my original starting point. Firstly, what in fact constitutes digital policy? For whom? For what purpose? What influence does policy have in fact over systems? Different actors have their own policy making instruments — that are incumbent on different leverage points that service multiple agendas. Considering the multiplicity of use cases, actors, and contexts — what I was softly heading to was the governance of the deployment and use of digital systems and technologies within the aid sector. Notwithstanding the work in data governance and data protection in aid, there are still gaps in governance in terms of how and when we deploy tech innovation, the digital systems we are creating and supporting to create, supply chains that we are a part of, due diligence processes, accountability and risk ownership, and many other elements. In addition to this, are questions of immunity (traditionally, humanitarian organisations do not go beyond the individual institutional governance mechanisms, that often are bordered by the institution’s immunity); and appropriateness of the technology innovations we deploy.

How then do we design digital governance systems that speak to these complex, intertwined issues? Instead of merely looking at digital governance in terms of control, could weaving in feminist and decolonial approaches help us liberate our digital futures so that it is a space of safety, of humanity — for those whom we are meant to support? Are these approaches ways in which we can design new forms of digital humanism?

“We could build systems for durability, but instead some dipshits told you we needed to move fast and break things” Audrey Watters

Could a digital governance approach consider questions like the following:

How then do we go beyond what we are merely legally required to do versus what is right to do? And importantly, are humanitarian actors willing to go beyond their immunity?
How do we extend digital governance to go beyond the fortresses of individual institutions to a multi-actor, sector-wide approach that is emergent and iterative?
Can and should governance systems help users realize and/or amplify their rights, and in fact use it as a way to hold humanitarian actors to account?
How might governance systems actually flatten power in decision making?
Can governance systems help us monitor our accountability to our promises to affected populations?
Who has the power to draw the conclusions from assessments done?
Just because we *can* deploy a specific asset or tech, *should* we?
How do we increase the percentage of risk/harm that gets absorbed by humanitarian institutions rather than that risk/harm being pushed further down the chain to affected populations?
How might we incentivise governance systems to do the right thing?

Digital technologies and AI mask ideologies of power, and are wed to a market ideology of dominance. To intentionally carve a different type of ideology would require governance systems that bring different knowledge sources into decision making, and that prioritise those who are most impacted by or are on the receiving end of an initiative, rather than centering the privileges of donors or aid institutions. Good digital governance in the vein that this research is pursuing then, must disrupt the idea of ‘solutionism’, and must also critique the systems in which that technology is being deployed and the impacts of that deployment — now and into the future.

And this is when my hypothesis started to fork out. Where I originally thought to use elements of futures methodologies to analyse future states and imaginaries of what we might collectively desire, I realised this was not enough for the rigour of what needed to be achieved. What I have learned in my practice of strategic foresight within systems and institutional transformation is that the facilitation of the method does not automatically result in a change in policy and/or strategic decision making. And that is because the application of insights isn’t weaved into and within how change actually occurs. Often in humanitarian/development work, we are fire-fighting the now — the problem right in front of us. We design solutions and interventions aimed at solving what is immediately in front of us without necessarily assessing a few things:

The complexity of the system in which that issue lives
Just as rights are not static, neither is harm. What is the current and future theory of harm that might arise out of that solution/intervention?
What might be the impact of the solution/intervention on future generations and on our planet?
Who holds the fiduciary duty to future digital selves of affected populations?

I now radically pursue the idea of foresight within ethics systems to inform governance. Can we consider the ethics of intervention through the lens of constraining future good and mitigating future bad? Might this in fact be where it can add rigour and value?

“Our radical imagination is a tool for decolonization, for reclaiming our right to shape our lived reality.” ― Adrienne Maree Brown

Lastly, I realised that my focus on decolonisation was, honestly, somewhat skewed.

Decolonisation is the process of undoing and giving up social and economic power, and restoring what has been taken away in the past, which arguably includes reparations. As Eve Tuck and Wayne Yang argue — decolonisation is also not a metaphor for diversity and inclusion nor is it a replacement for social justice efforts. Would the efforts of humanitarian aid ever include reparations? Could we ethically profess to even do so? We often argue that for the humanitarian system to change, it would involve giving up the status quo that we have a strangle-hold on. However, could this ever be achieved through the result of this one approach/framework? Am I being authentic, am I being honest if I were to declare it so? The answer was no.

What I was unpacking in actuality, was the notion of decoloniality — an aspiration to restore, renew, elevate, rediscover, acknowledge and validate the multiplicity of lives, lived-experiences, culture and knowledge of indigenous people, people of colour, and colonised people as well as to decenter hetero/cis-normativity, gender hierarchies and racial privilege. I think of this as how do we exist in plurality — in a multiverse so to speak. And to include this in governance, not as a tokenistic or virtue-signalling flag, but rather to help us consider different lenses, perspectives, sources of truth in even how we think about what is right, what is fair and what is just. How might decolonial and feminist approaches help us reframe our starting points and in fact influence governance design? This isn’t about just getting different under-represented groups around the table, but rather how might we shift the knowledge and experiences we draw from in the very design and decision making of policy and governance frames. It is to ensure that we are considering the multiplicity of ways in which issues of rights, privacy and agency are understood and experienced the world over, and not imposing just one or a narrow valued judgement on these issues.

Through applying a decolonial lens to governance, might we in fact be able to intentionally design for equity rather than for privilege?

“Decolonisation, expressed by your lips, differs from the decolonisation that comes from within, as a revolutionary concept that speaks about rehumanization — a fundamental planetary project” — Sabelo Ndlovu-Gatsheni

Thus, I am gently landing on Decolonial Humanitarian Digital Governance. An emergent process that is grounded in the following hypothesis: How do we not lock people into future harm, indebtedness or future inequity? It seeks to help start answering the questions unearthed through the journey thus far. Importantly, its primary aim is to shift the focus of current humanitarian digital efforts that prioritise the problem solving of now, to one that aims to mitigate future harm and inequity. It aims not to bind or narrow the governance actions of humanitarian actors to merely their institutional legal liabilities and privileges, but rather to allow self-regulation and a shared responsibility for our collective futures across all actors and constituents in the aid sector.

The Decolonial Humanitarian Digital Governance is a framework that is:

Multi-Dimensional (intentionally includes plurality of perspectives: affected populations, minoritized groups, humanitarian and tech governance actors, activists, and ethicists)
Rigorously analyses future harms and impacts on future generations
Interrogates who absorbs future harm
Grounded in the rights and equity of impacted minoritized people
Is emergent and acts as a compass (not a checkbox)

I see it visualised in three weaves:

And how might we judge this? Though not complete, perhaps some questions we might weave in could include:

1. For the improvements it makes relative to what is replaced

2. For its understanding and active management of unintended consequences

3. For its mitigation and absorption of current and future harm

4. For its ability to cultivate an evolved awareness of rights, accountability and collective humanity

And that is where I am up to. This is a journey that isn’t complete by any measure. In fact, this framework must never be complete. It must never be static. The complexities we are dealing with are continuously evolving, and our commons have irrevocably shifted. We must unpack decades of mental models and behaviours as the range of choices about the type of futures we want to inhabit — has expanded exponentially, and the choices we make now will decide our collective fates.

To design more flourishing and liberated futures for all, we must uncover the plurality that is available to all of us, to frame how we see the world.

Decolonial Humanitarian Digital Governance was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


Between Games and Apocalyptic Robots: Considering Near-Term Societal Risks of Reinforcement…

Between Games and Apocalyptic Robots: Considering Near-Term Social Risks of Reinforcement Learning

An increasingly popular branch of Machine Learning challenges current approaches to algorithmic governance.

With many of us stuck at home this past year, we’ve seen a surge in the popularity of video games. That trend hasn’t been limited to humans. DeepMind and Google AI both released results from their Atari-playing AIs, which have taught themselves to play over fifty Atari games from scratch, with no provided rules or guidelines. The unique thing about these new results is how general the AI agent is. While previous efforts have achieved human performance on the games they were trained to play, DeepMind’s new AI agent, MuZero, could teach itself to beat humans at Atari games it had never encountered in under a day. If this reminds you of AlphaZero, which taught itself to play Go and then chess well enough to outperform world champions, that’s because it demonstrates an advance in the same suite of algorithms, a class of machine learning called Reinforcement Learning (RL).

While traditional machine learning parses out its model of the world (typically a small world pertaining only to the problem it’s designed to solve) from swathes of data, RL is real-time observation based. This means RL learns its model primarily through trial and error interactions with its environment, not by pulling out correlations from data representing a historical snapshot of it. In the RL framework, each interaction with the environment is an opportunity to build towards an overarching goal, referred to as a reward. An RL agent is trained to make a sequence of decisions on how to interact with its environment that will ultimately maximize its reward (i.e. help it win the game).
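To make that loop concrete, here is a minimal, self-contained Python sketch of the trial-and-error pattern described above: a tabular Q-learning agent on a made-up toy environment. It is purely illustrative; the ToyEnvironment, the parameter values and the update rule are assumptions for the sake of the example, not a description of MuZero or any production RL system.

import random
from collections import defaultdict

class ToyEnvironment:
    """A 5-step corridor: the agent gets +1 for moving right, 0 otherwise."""
    def __init__(self):
        self.position = 0
    def reset(self):
        self.position = 0
        return self.position
    def step(self, action):  # action: 0 = left, 1 = right
        self.position = max(0, min(4, self.position + (1 if action == 1 else -1)))
        reward = 1.0 if action == 1 else 0.0
        done = self.position == 4
        return self.position, reward, done

q = defaultdict(float)                 # Q-value estimates for (state, action) pairs
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

env = ToyEnvironment()
for episode in range(200):
    state, done = env.reset(), False
    while not done:
        # Explore occasionally; otherwise pick the action with the best current estimate.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        # Trial-and-error update: nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in [0, 1])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state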

This unique iterative learning paradigm allows the AI model to change and adapt to its environment, making RL an attractive solution for open-ended, real-world problem-solving. It also makes it a leading candidate for artificial general intelligence (AGI) and has some researchers concerned about the rise of truly autonomous AI that does not align with human values. Nick Bostrom first posed what is now the canonical example of this risk among AI Safety researchers — a paperclip robot with one goal: optimize the production efficiency of paperclips. With no other specifications, the agent quickly drifts from optimizing its own paperclip factory to commandeering food production supply chains for the paperclip making cause. It proceeds to place paperclips above all other human needs until all that’s left of the world is a barren wasteland covered end to end with unused paper clips. The takeaway? Extremely literal problem solving combined with inaccurate problem definition can lead to bad outcomes.

This rogue AGI (albeit in more high-stakes incarnations like weapons management) is the type of harm usually thought of when trying to make RL safe in the context of society. However, between an autonomous agent teaching itself games in the virtual world and an intelligent but misguided AI putting humanity at existential risk lie a multitude of sociotechnical concerns. As RL is being rolled out in domains ranging from social media to medicine and education, it’s time we seriously think about these near-term risks.

How the paperclip problem will play out in the near term is likely to be rather subtle. For example, medical treatment protocols are currently popular candidates for RL modeling; they involve a series of decisions (which treatment options to try) with uncertain outcomes (different options work better for different people) that all connect to the eventual outcome (patient health). One such study tried to identify the best treatment decisions to avoid sepsis in ICU patients based off of multitudes of data, including medical histories, clinical charts and doctor’s notes. Their first iteration was an astounding success. With very high accuracy, it identified treatment paths that resulted in patient death. However, upon further examination and consultation with clinicians it turned out that though the agent had been allowed to learn from a plethora of potentially relevant treatment considerations, it had latched onto only one main indicator for death — whether or not a chaplain was called. The goal of the system was to flag treatment paths that led to deaths, and in a very literal sense that’s what it did. Clinicians only called a chaplain when a patient presented as close to death.

You’ll notice that in this example, the incredibly literal yet unhelpful solution the RL agent was taking was discovered by the researchers. This is no accident. The field of modern medicine is built around the reality that connections between treatment and outcomes typically have no known causal explanations. Aspirin, for example, was used as an anti-inflammatory for over seventy years before we had any insight into why it worked. This lack of causal understanding is sometimes referred to as intellectual debt; if we can’t describe why something works, we may not be able to predict when or how it will fail. Medicine has grown around this fundamental uncertainty. Through strict codes of ethics, industry standards, and regulatory infrastructure (i.e. clinical trials), the field has developed the scaffolding to minimize the accompanying harms. RL systems aiming to help with diagnosis and treatment have to develop within this infrastructure. Compliance with the machinery medicine has around intellectual debt is more likely to result in slow and steady progress, without colossal misalignment. This same level of oversight does not apply to fields like social media, the potential harms of which are hard to pin down and which have virtually no regulatory scaffolding in place.

We may have already experienced some of the early harms of RL based algorithms in complex domains. In 2018 YouTube engineers released a paper describing an RL addition to their recommendation algorithm that increased daily watch time by 6 million hours in the beta testing phase. Meanwhile, anecdotal accounts of radicalization through YouTube rabbit holes of increasingly conspiratorial content (e.g., NYTimes reporting on YouTube’s role in empowering Brazil’s far right) were on the rise. While it is impossible to know exactly which algorithms powered the platform’s recommendations at the time, this rabbit hole effect would be a natural result of an RL algorithm trying to maximize view time by nudging users towards increasingly addictive content.

In the near future, dynamic manipulation of this sort may end up at odds with established protections under the law. For example, Facebook has recently been put under scrutiny by the Department of Housing and Urban Development for discriminatory housing advertisements. The HUD suit alleges that even without explicit targeting filters that amount to the exclusion of protected groups, its algorithms are likely to hide ads from ‘users whom the system determines are unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users.’ Given the types of (non-RL) ML algorithms FB currently uses in advertising, proving this disparate impact would be a matter of examining the data and features used to train the algorithm. While the current lack of transparency makes this challenging, it is fundamentally possible to roll out benchmarks capable of flagging such discrimination.

If advertising were instead powered by RL, benchmarks would not be enough. An RL advertising algorithm tasked with ensuring it does not discriminate against protected classes, could easily end up making it look as though it were not discriminating instead. If the RL agent were optimized for profit and the practice of discrimination was profitable, the RL agent would be incentivized to find loopholes under which it could circumvent protections. Just like in the sepsis treatment case, the system is likely to find a shortcut towards reaching its objective, only in this case the lack of regulatory scaffolding makes it unlikely this failure will be picked up. The propensity of RL to adapt to meet metrics, while skirting over intent, will make it challenging to tag such undesirable behavior. This situation is further complicated by our heavy reliance on ‘data’ as a means to flag potential bias in ML systems.

Unlike RL, traditional machine learning is innately static; it takes in loads of data, parses it for correlations, and outputs a model. Once a system has been trained, updating it to accommodate a new environment or changes to the status quo requires repeating most or all of that initial training with updated data. Even for firms that have the computing power to make such retraining seamless, the reliance on data has allowed an ‘in’ for transparency. The saying goes, machine learning is like money laundering for bias. If an ML system is trained using biased or unrepresentative data, its model of the world will reflect that. In traditional machine learning, we can at least follow the marked bills and point out when an ML system is going to be prone to discrimination by examining its training data. We may even be able to preprocess the data before training the system in an attempt to preemptively correct for bias.

Since RL is generally real-time observation-based rather than training data-based, this ‘follow-the-data’ approach to algorithmic oversight does not apply. There is no controlled input data to help us anticipate or correct for where an RL system can go wrong before we set it loose in the world.

In certain domains, this lack of data-born insight may not be too problematic. The more we can specify what the moving parts of a given application are and the ways in which they may fail–be it through an understanding of the domain or regulatory scaffolding–the safer it is for us to use RL. DeepMind’s use of RL to lower the energy costs of its computing centers, a process ultimately governed by the laws of physics, deserves less scrutiny than the RL based K-12 curriculum generator Google’s Ed Chi views as a near-term goal of the field. The harder it is to describe what success looks like within a given domain, the more prone to bad outcomes it is. This is true of all ML systems, but even more crucial for RL systems that cannot be meaningfully validated ahead of use. As regulators, we need to think about which domains need more regulatory scaffolding to minimize the fallout from our intellectual debt, while allowing for the immense promise of algorithms that can learn from their mistakes.

Between Games and Apocalyptic Robots: Considering Near-Term Societal Risks of Reinforcement… was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


decentralized-id.com

Hyperledger Ursa

Hyperledger Ursa is a shared cryptographic library; it enables implementations to avoid duplicating other cryptographic work and hopefully increase security in the process. The library is an opt-in repository (for Hyperledger and non-Hyperledger projects) to place and use crypto. Hyperledger Ursa consists of sub-projects, which are cohesive implementations of cryptographic code or interfaces to cryptographic code.

Website • Wiki • GitHub • RFCs • Docs • Mailing List • Chat

Welcome Hyperledger Ursa!

Ursa aims to include things like a comprehensive library of modular signatures and symmetric-key primitives built on top of existing implementations, so blockchain developers can choose and modify their cryptographic schemes with a simple configuration file change. Ursa will also have implementations of newer, fancier cryptography, including things like pairing-based signatures, threshold signatures, and aggregate signatures, and also zero-knowledge primitives like SNARKs.

Sovrin contributes to Hyperledger Ursa: A win for cryptography, security, and interoperability

Ursa is a library for cryptography and the result of a collaborative effort from teams at ACM, Bitwise, DFINITY, Evernym, Fujitsu, Intel, the Linux Foundation, the Sovrin Foundation, and State Street. Duplication among blockchain project features heightens security risks. Ursa, however, gathers up crypto implementations across projects and compiles them into a single metalibrary, creating a central repository for crypto code. This way, projects can select sources of code from Ursa, instead of the original source, decreasing duplication security risks and boosting interoperability among projects.

Kiva Protocol, Built on Hyperledger Indy, Ursa and Aries, Powers Africa’s First Decentralized National ID system

Kiva Protocol is built using Hyperledger Indy, Aries, and Ursa, and as implemented in Sierra Leone, allows citizens to perform electronic Know Your Customer (eKYC) verifications in about 11 seconds, using just their national ID number and a fingerprint. With this verification, it is possible for the nation’s unbanked to open a savings account and move into the formally banked population.


Own Your Data Weekly Digest

MyData Weekly Digest for April 16th, 2021

Read in this week's digest about: 5 posts, 1 question, 1 Tool

Thursday, 15. April 2021

eSSIF-Lab

Meet the eSSIF-Lab’s ecosystem: 2nd batch of winners of Infrastructure Development Instrument


eSSIF-Lab has already kicked off the programme for another 7 selected proposals, out of 29 that were submitted before the second deadline of the Infrastructure-oriented Open Call. The selected projects are to contribute open source technical enhancements and extensions of the SSI Framework of the eSSIF-Lab project.

2nd tranche winners are the following: 

Verifier Universal Interface by Gataca España S.L.: Building Standard APIs for Verifier components to enable SSI interoperability
Automated data agreements to simplify SSI work flows by LCubed AB: Adopt SSI and make it consumable for both organisations and end-users (operated under the brand iGrant.io)
Presentation Exchange – Credential Query Infra by Sphereon B.V.: Presentation Exchange Interop and Integration
Letstrust.org by SSI Fabric GmbH: Self-Sovereign Identity for everyone: Enterprise & Consumer Cloud Wallet (OIDC-based), Credentials & SDKs as a basis for applications – free
WordPreSSI Login by Associazione Blockchain Italia
SSI Java Libraries by Danube Tech GmbH: Improving and completing a set of generic, open-source Java libraries for working with DIDs and VCs
NFC DID VC Bridge by Gimly: Enabling the use of NFC secure elements as DID and VC transport for off-line and online identity, authorizations and access management

The Infrastructure Development Instrument will support these innovators to provide scalable and interoperable open source SSI components for eSSIF-Lab Framework with up to € 155,000 funding.

Selected companies under this instrument will have the opportunity to take part in a very active and collaborative ecosystem with other eSSIF-Lab participants to:

improve the framework’s vision, architecture, specifications etc.
ensure interoperability (at the technical and process levels) and address each other’s issues jointly.

Would you like to join them?

The second deadline has passed, but the Infrastructure-oriented Open Call remains open. eSSIF-Lab is accepting your applications for the next deadline on the 30th of June, 2021!

Apply NOW!

Follow the updates of this initial batch of winners, about the current open call and about the next deadline (on June 2021) in the eSSIF-Lab space of the NGI Online Community!


decentralized-id.com

The ID2020 Alliance

The ability to prove who you are is a fundamental and universal human right. Because we live in a digital era, we need a trusted and reliable way to do that both in the physical world and online.

Website • Blog • GitHub • Twitter • LinkedIn • Crunchbase

ID2020, ID4D aim to bring legal, binding, digital IDs to all world’s citizens

It was late June of 2014 when businessman John Edge was invited to a screening of a short film directed by actress Lucy Liu. “Meena” is about an 8-year-old girl sold to a brothel and forced into sex slavery for more than a decade. It’s based on a true story. “It’s horrific,” Edge says.

A panel of experts took questions afterward, including Susan Bissell, chief of child protection at international humanitarian group UNICEF. “Susan articulated that one of the biggest problems in protecting children who are at risk of sexual violence is a lack of birth certificates or identity,” Edge says.

ID2020 to kick start digital identity summit at UN with PwC support PWC Press Release

Identity 2020 Systems (ID2020) have announced PwC, the global professional services network, as the lead sponsor of the landmark ID2020 Summit to create technology-driven public-private partnerships to achieve the United Nations 2030 Sustainable Development Goal of providing legal identity for everyone on the planet.

ID2020: Digital Identity with Blockchain - Accenture

Accenture has joined the ID2020 alliance and leverages its unique identity service platform. Learn about our digital identity with blockchain solutions.

Mastercard, Microsoft Join Forces to Advance Digital Identity Innovations

PURCHASE, N.Y. and REDMOND, Wash. – December 3, 2018 – Mastercard (NYSE: MA) and Microsoft (Nasdaq “MSFT” @microsoft) today announced a strategic collaboration to improve how people manage and use their digital identity. Currently, verifying your identity online is s…

Projects aim for legal identity for everyone - ID2020, ID4D aim to bring legal, binding, digital IDs to all world’s citizens

The Alliance Manifesto

1. The ability to prove one’s identity is a fundamental and universal human right.
2. We live in a digital era. Individuals need a trusted, verifiable way to prove who they are, both in the physical world and online.
3. Over 1 billion people worldwide are unable to prove their identity through any recognized means. As such, they are without the protection of law, and are unable to access basic services, participate as a citizen or voter, or transact in the modern economy. Most of those affected are children and adolescents, and many are refugees, forcibly displaced, or stateless persons.
4. For some, including refugees, the stateless, and other marginalized groups, reliance on national identification systems isn’t possible. This may be due to exclusion, inaccessibility, or risk, or because the credentials they do hold are not broadly recognized. While we support efforts to expand access to national identity programs, we believe it is imperative to complement such efforts by providing an alternative to individuals lacking safe and reliable access to state-based systems.
5. We believe that individuals must have control over their own digital identities, including how personal data is collected, used, and shared. Everyone should be able to assert their identity across institutional and national borders, and across time. Privacy, portability, and persistence are necessary for digital identity to meaningfully empower and protect individuals.
6. Digital identity carries significant risk if not thoughtfully designed and carefully implemented. We do not underestimate the risks of data misuse and abuse, particularly when digital identity systems are designed as large, centralized databases.
7. Technical design can mitigate some of the risks of digital identity. Emerging technology — for example, cryptographically secure, decentralized systems — could provide greater privacy protection for users, while also allowing for portability and verifiability. But widespread agreement on principles, technical design patterns, and interoperability standards is needed for decentralized digital identities to be trusted and recognized.
8. This “better” model of digital identity will not emerge spontaneously. In order for digital identities to be broadly trusted and recognized, we need sustained and transparent collaboration aligned around these shared principles, along with supporting regulatory and policy frameworks.
9. ID2020 Alliance partners jointly define functional requirements, influencing the course of technical innovation and providing a route to technical interoperability, and therefore trust and recognition.
10. The ID2020 Alliance recognizes that taking these ideas to scale requires a robust evidence base, which will inform advocacy and policy. As such, ID2020 Alliance-supported pilots are designed around a common monitoring and evaluation framework.

ID2020 - Rebooting Web-of-Trust Design Workshop

The second RWoT workshop ran in conjunction with the UN’s ID2020 Summit in New York that May; clearly a significant time for decentralized identity:

1.1 Billion people live without an officially recognized identity — This lack of recognized identification deprives them of protection, access to services, and basic rights. ID2020 is a public-private partnership dedicated to solving the challenges of identity for these people through technology. - id2020.org

ID 2020 Design Workshop - EventBrite

The two main goals of the UN summit are:

by 2020, be able to create a legally valid digital identity for every last person without an identity
by 2030, to have rolled this capability out to at least 1 billion at-risk people to make them visible and restore them into society both personally and economically

WebOfTrustInfo/rwot2-id2020 - RWOT2 for the ID2020 UN Summit (May 2016). RWoT2 - Topics & Advance Readings

1.1 Billion people live without an officially recognized identity — This lack of recognized identification deprives them of protection, access to services, and basic rights. ID2020 is a public-private partnership dedicated to solving the challenges of identity for these people through technology.

Identity Crisis: Clear Identity through Correlation - Christopher Allen [info] [slideshare] details the overarching history of internet identity standards. His germinal work (submitted to the ID2020/RWoT workshop), The Path to Self-Sovereign Identity [ϟ], details the history of identity standards leading up to self-sovereign identity and lays out the 10 principles of self-sovereign identity.

I am part of the team putting together the first ID2020 Summit on Digital Identity at the United Nations

Evident from the other whitepapers submitted to that Workshop, the DID identifier had begun to emerge:

Decentralized Identifiers (DIDs) and Decentralized Identity Management (DIDM)
Requirements for DIDs

“Respect Network is conducting a research project for the U.S. Department of Homeland Security, HSHQDC-16-C-00061, to analyze the applicability of blockchain technologies to a decentralized identifier system.

Identity System Essentials

Members

Private sector engagement is critical for solving at scale. Alliance partners include companies with a collective footprint in the billions and a shared commitment to an ethical approach to digital ID. Decisions about how Alliance funds are administered, which programs to fund, and which technical standards to support are made jointly by Alliance partners through a transparent governance process, preventing dominance by any single institution or sector.

Founders:

Accenture
Gavi
IDEO
Microsoft
Rockefeller Foundation

Partners:

Berkeley, University of California
BLOK
FHI360
Hyperledger
ICC International Computing Center
iRespond
Kiva
Mastercard
Mercy Corps
National Cybersecurity Center
Panta Transportation
Simprints

Wednesday, 14. April 2021

OpenID

Guest Blog: Financial-grade API (FAPI), Explained by an Implementer – Updated


NOTE: This article was updated to align to the FAPI 1.0 Final version which was published in March, 2021.

Introduction

Financial-grade API (FAPI) is a technical specification that the Financial-grade API Working Group of the OpenID Foundation has developed. It uses OAuth 2.0 and OpenID Connect (OIDC) as its base and defines additional technical requirements for the financial industry and other industries that require higher API security.

OpenID Foundation Working Groups and Financial-grade API Stack

History

Implementer’s Draft 1 — The initial version of the FAPI specification was published in 2017. The version is called Implementer’s Draft 1 (ID1).

Implementer’s Draft 2 — The second version was published in October, 2018. The version is called Implementer’s Draft 2 (ID2). In this version, the FAPI specification was renamed from “Financial API” to “Financial-grade API” for wider adoption across various industries.

Final Version — The final version was published in March, 2021. In this version, the two main parts of the FAPI specification, “Part 1: Read-Only Security Profile” and “Part 2: Read and Write API Security Profile”, were renamed to “Part 1: Baseline Security Profile” and “Part 2: Advanced Security Profile”, respectively.

FAPI 2.0 — The FAPI WG has started to discuss the next version of the FAPI specification, which is called “FAPI 2.0”. The FAPI FAQ published on March 31, 2021 (announcement) mentions FAPI 2.0. Authlete is mentioned in the answer to the question “Are there FAPI 2.0 implementations?” because Authlete has already implemented new technical components of FAPI 2.0 such as PAR (OAuth 2.0 Pushed Authorization Requests), RAR (OAuth 2.0 Rich Authorization Requests) and DPoP (OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer).

History of Financial-grade API Specifications

The core parts of the FAPI specification are Part 1 and Part 2. Their previous and final versions are available here:

Implementer’s Draft 1 (Part 1: February 2, 2017 / Part 2: July 17, 2017)

Financial Services — Financial API — Part 1: Read Only API Security Profile
Financial Services — Financial API — Part 2: Read and Write API Security Profile

Implementer’s Draft 2 (October 17, 2018)

Financial-grade API — Part 1: Read-Only API Security Profile
Financial-grade API — Part 2: Read and Write API Security Profile
Differences between ID1 and ID2

Final Version (March 12, 2021)

Financial-grade API Security Profile 1.0 — Part 1: Baseline
Financial-grade API Security Profile 1.0 — Part 2: Advanced
Differences between ID2 and Final

In addition, another specification was released in August, 2019 that lists additional requirements applied when FAPI and CIBA (Client Initiated Backchannel Authentication) are used together. The specification is called FAPI-CIBA Profile.

Financial-grade API: Client Initiated Backchannel Authentication Profile

For details about CIBA, please read the following article.

“CIBA”, a new authentication/authorization technology in 2019, explained by an implementer

Concept of CIBA

Certification Program

Certification for FAPI OpenID Providers

The Certification Program for FAPI OpenID Providers officially started on April 1, 2019 (announcement). Two vendors were granted certification on the start day. Authlete, Inc., the company founded by the author of this article (me), is one of the two vendors.

Certified Financial-grade API OpenID Providers on April 1, 2019

Two years have passed since then, and now more than 30 solutions and deployments are listed as certified FAPI OPs.

The certification program for the FAPI Final version has not started yet as of this writing (April, 2021), but Authlete 2.2 already supports the FAPI Final version. See the announcement and the release note published on February 4, 2021 for details.

Certification for FAPI-CIBA OpenID Providers

The Certification Program for FAPI-CIBA OpenID Providers started on September 16, 2019 (announcement). Authlete was the only solution that was granted certification on the start day.

Certified FAPI-CIBA Profile OpenID Providers on September 16, 2019

As of this writing (April, 2021), three solutions including Authlete are listed as certified FAPI-CIBA OPs.

Prior Knowledge Basic Specifications

The format of the FAPI specification is a terse list of technical requirements, so the document is not long. On the other hand, a lot of prior knowledge is required to read it smoothly. In particular, you have to know RFC 6749 and RFC 6750 (the core of OAuth 2.0) and OpenID Connect Core 1.0 (the core of OpenID Connect) by heart.

In addition, because specifications related to JWT (JWS, JWE, JWK, JWA and JWT) are prior knowledge to understand OIDC Core, they are of course prior knowledge to read the FAPI specification. Therefore, you need to understand them perfectly.

JWS Compact Serialization (RFC 7515 Section 7.1)

Furthermore, PKCE (RFC 7636) which was published in September, 2015 is now regarded as a part of the basic set of OAuth 2.0 specifications as well as RFC 6749 and RFC 6750.

The following is a list of specifications that you should read at least once before the FAPI specification.

RFC 6749 — The OAuth 2.0 Authorization Framework
RFC 6750 — The OAuth 2.0 Authorization Framework: Bearer Token Usage
RFC 7515 — JSON Web Signature (JWS)
RFC 7516 — JSON Web Encryption (JWE)
RFC 7517 — JSON Web Key (JWK)
RFC 7518 — JSON Web Algorithms (JWA)
RFC 7519 — JSON Web Token (JWT)
RFC 7523 — JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants
RFC 7636 — Proof Key for Code Exchange by OAuth Public Clients
OpenID Connect Core 1.0
OpenID Connect Discovery 1.0
OpenID Connect Dynamic Client Registration 1.0
OAuth 2.0 Multiple Response Type Encoding Practices
OAuth 2.0 Form Post Response Mode

Articles below may help understanding these specifications.

The Simplest Guide To OAuth 2.0
Diagrams And Movies Of All The OAuth 2.0 Flows
Diagrams of All The OpenID Connect Flows
Understanding ID Token

Mutual TLS

In general, “Mutual TLS” means that a client is also required to present its X.509 certificate in a TLS connection. However, in the context of FAPI, Mutual TLS means the following two which are defined in “RFC 8705 OAuth 2.0 Mutual TLS Client Authentication and Certificate-Bound Access Tokens” (MTLS).

OAuth client authentication using a client certificate
Tokens bound to a client certificate

OAuth Client Authentication using a Client Certificate

When a confidential client (RFC 6749, 2. Client Types) accesses a token endpoint (RFC 6749, 3.2. Token Endpoint), client authentication (RFC 6749, 2.3. Client Authentication) is required. Client authentication is a process where a client application proves it has its confidential authentication information.

Client Authentication at Token Endpoint

There are several ways for client authentication. The following are client authentication methods listed in OIDC Core, 9. Client Authentication (except none).

client_secret_basic — Basic Authentication using a pair of client ID and client secret
client_secret_post — Embedding a pair of client ID and client secret in a request body
client_secret_jwt — Passing a JWT signed by a key based on a client secret with a symmetric algorithm
private_key_jwt — Passing a JWT signed by a private key with an asymmetric algorithm

In client_secret_basic and client_secret_post, a client application directly shows the server its client secret to prove that it has the confidential information.
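As a rough illustration (not taken from the FAPI article itself), the following Python sketch shows what client_secret_basic looks like on the wire: the client ID and client secret, both placeholders here, are form-urlencoded, joined with a colon, Base64-encoded and sent in the Authorization header (RFC 6749, 2.3.1).

import base64
from urllib.parse import quote

CLIENT_ID = "my-client"          # placeholder client ID
CLIENT_SECRET = "my-secret"      # placeholder client secret

# RFC 6749, 2.3.1: form-urlencode the credentials, join them with ":", Base64-encode
# the result, and send it as an HTTP Basic Authorization header with the token request.
credentials = quote(CLIENT_ID, safe="") + ":" + quote(CLIENT_SECRET, safe="")
authorization_header = "Basic " + base64.b64encode(credentials.encode("ascii")).decode("ascii")
print(authorization_header)  # Basic bXktY2xpZW50Om15LXNlY3JldA==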

Client Authentication using Basic Authentication

In client_secret_jwt, a client application indirectly proves that it has the client secret by signing a JWT with the client secret and passing the JWT to the server. On the other hand, in private_key_jwt, signing is performed with an asymmetric private key and the server verifies the signature with the public key corresponding to the private key.
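And here is a rough sketch of private_key_jwt using the PyJWT library (with the cryptography extra installed). Every identifier, URL and file name below is a placeholder assumption; the claims shown (iss, sub, aud, jti, exp, iat) follow the pattern described in OIDC Core section 9 and RFC 7523.

import time
import uuid
import jwt  # pip install pyjwt[crypto]

CLIENT_ID = "my-client"                               # placeholder
TOKEN_ENDPOINT = "https://server.example.com/token"   # placeholder
private_key_pem = open("client_private_key.pem", "rb").read()  # placeholder key file

now = int(time.time())
client_assertion = jwt.encode(
    {
        "iss": CLIENT_ID,          # issuer and subject are both the client ID
        "sub": CLIENT_ID,
        "aud": TOKEN_ENDPOINT,     # audience is the token endpoint
        "jti": str(uuid.uuid4()),  # unique ID to prevent replay
        "exp": now + 300,
        "iat": now,
    },
    private_key_pem,
    algorithm="RS256",
)

# The assertion is then sent with the token request instead of a client secret.
token_request = {
    "grant_type": "authorization_code",
    "code": "...",  # authorization code omitted here
    "redirect_uri": "https://client.example.com/callback",  # placeholder
    "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    "client_assertion": client_assertion,
}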

Client Authentication using JWT

Apart from the above, “2. Mutual TLS for OAuth Client Authentication” of RFC 8705 introduces new client authentication methods below.

tls_client_auth — Utilizing a PKI client certificate used in a TLS connection
self_signed_tls_client_auth — Utilizing a self-signed client certificate used in a TLS connection

These two utilize the client certificate used in a TLS connection between the client and the token endpoint for client authentication.

Client Certificate for Client Authentication

In tls_client_auth, the PKI client certificate used in a TLS connection established between a client and a server is used for client authentication. The server verifies the client certificate (this should be done even in a context irrelevant to OAuth) and then checks whether the Subject Distinguished Name or Subject Alternative Name matches the pre-registered one. For this process, client applications that want to use tls_client_auth for client authentication must register Subject Distinguished Name or Subject Alternative Name into the server in advance. The specification newly defines the following client metadata for this purpose (RFC 8705, 2.1.2. Client Registration Metadata).

tls_client_auth_subject_dn
tls_client_auth_san_dns
tls_client_auth_san_uri
tls_client_auth_san_ip
tls_client_auth_san_email

In self_signed_tls_client_auth, a self-signed client certificate is used instead of a PKI client certificate. To use this client authentication method, client applications have to register a self-signed client certificate into the server in advance.

The following table is the list of client authentication methods mentioned in the FAPI specification.

Client Authentication Methods

For detailed explanation about client authentication, please read “OAuth 2.0 Client Authentication”. Also, if you are not familiar with X.509 certificate, please read “Illustrated X.509 Certificate”.

X.509 Certificate Chain

X.509 Certificate in PEM Format

Certificate-Bound Tokens

Once a traditional access token is leaked, an attacker can access APIs with the access token. Traditional access tokens are just like a train ticket which anyone can use once it is stolen.

An idea to mitigate this vulnerability is to check whether the API caller bringing an access token matches the legitimate holder of the access token when an API call is made. This is just like the boarding procedure for international flights where passengers are required to show not only a plane ticket but also their passport.

This idea is called “Proof of Possession” (PoP), and FAPI lists “Mutual TLS” as the only possible option for PoP (in the previous versions (ID1 & ID2), “Token Binding” was mentioned as a PoP mechanism, but it was dropped by the final version). In this context, “Mutual TLS” means the specification defined in “3. Mutual-TLS Client Certificate-Bound Access Tokens” of RFC 8705.

Because Mutual TLS has several meanings as explained above and I actually experienced a problematic conversation like below,

Me: The API management solution of your company does not support Mutual TLS (as a PoP mechanism).
The company: Not correct. Our solution supports Mutual TLS (because it can be configured to request a client certificate for TLS communication).

I’ve personally decided to call Mutual TLS as a PoP mechanism “Certificate Binding”. This naming is not so bad (at least for me) because it sounds symmetrical to Token Binding and because actual implementations will eventually become just binding a certificate to an access token and won’t care whether the certificate has been extracted from a mutual TLS connection or has come from somewhere else.

In an implementation of Certificate Binding, when the token endpoint of an authorization server issues an access token, it calculates the hash value of the client certificate presented by the client application in the TLS connection and remembers the binding between the access token and the hash value (or embeds the hash value into the access token if the implementation of the access token is a self-contained JWT). When the client application accesses an API of the target resource server, it uses the same client certificate that was previously used in the communication with the token endpoint. The implementation of the API extracts an access token and a client certificate from the request, calculates the hash value of the client certificate and checks the hash value matches the one that is associated with the access token. If they match, the API implementation accepts the request. If not, it rejects the request.
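A minimal sketch of that check in Python (using the cryptography library) might look as follows. RFC 8705 places the certificate's SHA-256 thumbprint in the access token's cnf claim under the key x5t#S256 for JWT access tokens, and the resource server simply recomputes and compares it; the function names and token layout here are illustrative assumptions.

import base64
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

def certificate_thumbprint(pem_bytes: bytes) -> str:
    """Base64url-encoded (unpadded) SHA-256 hash of the DER-encoded certificate."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    digest = hashlib.sha256(cert.public_bytes(Encoding.DER)).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def access_token_is_bound_to(token_claims: dict, presented_cert_pem: bytes) -> bool:
    """Accept the API call only if the presented certificate matches the bound thumbprint."""
    expected = token_claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == certificate_thumbprint(presented_cert_pem)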

Certificate Binding

It is relatively easy to implement Certificate Binding because all it requires is access to the client certificate. On the other hand, Token Binding is relatively hard because it is necessary to modify multiple layers such as the TLS layer and the HTTP layer. In addition, the future is uncertain, as Chrome has removed the Token Binding feature although the community strongly urged the Chrome team to rethink it (“Intent to Remove: Token Binding”). In any case, the related specifications were promoted to RFCs at the beginning of October 2018.

RFC 8471 — The Token Binding Protocol Version 1.0 RFC 8472 — Transport Layer Security (TLS) Extension for Token Binding Protocol Negotiation RFC 8473 — Token Binding over HTTP

“OAuth 2.0 Token Binding” (its status is “expired”) is a specification that defines rules to apply Token Binding to OAuth 2.0 tokens based on the specifications listed above.

NOTE: The final version of the FAPI specification dropped Token Binding due to its unlikeliness of future availability.

JARM

JARM is a new specification which was approved at the same time as FAPI Implementer’s Draft 2. JARM is referred to in FAPI Part 2.

Financial-grade API: JWT Secured Authorization Response Mode for OAuth 2.0 (JARM)

The specification defines new values for the response_mode request parameter as shown below.

query.jwt
fragment.jwt
form_post.jwt
jwt

If one of the above is specified, the response parameters of an authorization response are packed into a JWT, and the JWT is returned as the value of a single response parameter named response.

For example, a traditional authorization response in the authorization code flow looks like below. code and state response parameters are included separately.

HTTP/1.1 302 Found Location: https://client.com/callback?code={CODE}&state={STATE}

On the other hand, if response_mode=query.jwt is added to an authorization request, the authorization response will become like below.

HTTP/1.1 302 Found Location: https://client.com/callback?response={JWT}

JARM example

Because the JWT is signed by a key of the server, a client can confirm that the response has not been tampered with by verifying the signature of the JWT.
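As a rough sketch of what that verification might look like on the client side (using the PyJWT library; the key handling and identifiers are placeholder assumptions), the client validates the signature, issuer, audience and expiry before trusting the embedded code and state:

import jwt  # pip install pyjwt[crypto]

def parse_jarm_response(response_jwt: str, server_public_key_pem: bytes,
                        expected_issuer: str, client_id: str) -> dict:
    claims = jwt.decode(
        response_jwt,
        server_public_key_pem,
        algorithms=["RS256"],   # must match authorization_signed_response_alg
        issuer=expected_issuer,
        audience=client_id,     # JARM responses are audienced to the client
    )
    # Only after successful verification are the usual parameters extracted.
    return {"code": claims["code"], "state": claims.get("state")}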

Client Metadata for JARM

Before using JARM, client applications have to set a value for the authorization_signed_response_alg metadata in advance. The metadata represents the algorithm used to sign response JWTs. If the value of the response_mode request parameter is one of the *.jwt values but the metadata is not set, the authorization request fails, because the specification requires that response JWTs always be signed.

To encrypt response JWTs, algorithms have to be set in advance to the authorization_encrypted_response_alg metadata and the authorization_encrypted_response_enc metadata. To use an asymmetric algorithm, configuration about client’s public key is necessary, too.
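For illustration only, a hypothetical registration payload (written here as a Python dictionary) that sets the JARM-related metadata discussed above might look like this; the algorithm choices are example values, not recommendations.

jarm_client_metadata = {
    "redirect_uris": ["https://client.example.com/callback"],  # placeholder
    "response_types": ["code"],
    # Signing is mandatory for JARM responses.
    "authorization_signed_response_alg": "RS256",
    # Encryption is optional; both values below must be set together to enable it.
    "authorization_encrypted_response_alg": "RSA-OAEP",
    "authorization_encrypted_response_enc": "A256GCM",
}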

The screenshot below is client-side settings for JARM in Authlete’s web console that is provided for client management.

Authorization Response Algorithms (in Developer Console provided by Authlete)

Server Metadata for JARM

Discovery information of authorization servers that support JARM includes one or more of query.jwt, fragment.jwt, form_post.jwt and jwt in the list of supported response modes (response_modes_supported). Also, discovery information includes the following metadata related to algorithms used for response JWTs.

authorization_signing_alg_values_supported — supported algorithms for signing

authorization_encryption_alg_values_supported — supported algorithms for key encryption

authorization_encryption_enc_values_supported — supported algorithms for payload encryption

Discovery information of authorization servers that support JARM completely will include data as shown below.

Server Metadata related to JARM

Part 1: Baseline

Now that the prior knowledge has been introduced, let’s start the main part of this article, beginning with “Part 1”, which defines the baseline security profile.

Part 1: Requirements for Authorization Server

“5.2.2. Authorization server” in “Part 1” lists requirements for the authorization server. Let’s take a look at them one by one.

Part 1: 5.2.2. Authorization server, 1.

shall support confidential clients;

Part 1: 5.2.2. Authorization server, 2.

should support public clients;

The definition of “confidential clients” and “public clients” is described in “2.1. Client Types” of RFC 6749. I don’t explain the difference between the client types here as it is prior knowledge for those who read the FAPI specification. However, the relationship between client types and OAuth 2.0 flows is often misunderstood even by those who are familiar with OAuth 2.0. It is only the combination of a “public client” and “client credentials flow” that RFC 6749 explicitly prohibits. Other combinations are allowed. Without this understanding, you would misread the FAPI specification.

Combinations of Flow and Client Type (RFC 6749)

Just FYI: it is confidential clients only that are allowed to make backchannel authentication requests, which are defined in CIBA.

Combinations of Flow and Client Type (CIBA)

Part 1: 5.2.2. Authorization server, 3.

shall provide a client secret that adheres to the requirements in section 16.19 of OIDC if a symmetric key is used;

OIDC Core states that a value calculated based on a client secret must be used as the shared key when a symmetric algorithm is used for signing and encryption. If the entropy of the client secret is lower than that required by the algorithm, the strength of the algorithm is weakened. Therefore, "16.19. Symmetric Key Entropy" requires that client secrets have entropy strong enough for the algorithms used. For example, when HS256 (HMAC using SHA-256) is used as the signing algorithm for ID tokens, client secrets must have 256-bit entropy at minimum.
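For illustration, a client secret with enough entropy for HS256 can be generated from 32 random bytes (256 bits). The sketch below uses only the JDK; the class name is hypothetical.

import java.security.SecureRandom;
import java.util.Base64;

public class ClientSecretGenerator
{
    // Generate a client secret with 256 bits of entropy, which satisfies
    // the requirement for HS256.
    public static String generate()
    {
        byte[] secret = new byte[32]; // 32 bytes = 256 bits

        new SecureRandom().nextBytes(secret);

        // base64url-encode the random bytes so that the secret can be
        // handled as a string.
        return Base64.getUrlEncoder().withoutPadding().encodeToString(secret);
    }
}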

Part 1: 5.2.2. Authorization server, 4.

shall authenticate the confidential client using one of the following methods:

1. Mutual TLS for OAuth Client Authentication as specified in section 2 of MTLS;

2. client_secret_jwt or private_key_jwt as specified in section 9 of OIDC;

Note that client_secret_basic and client_secret_post defined in RFC 6749 are not allowed as client authentication methods at the token endpoint.

Client Authentication Methods allowed in FAPI Part 1

Part 1: 5.2.2. Authorization server, 5.

shall require and use a key of size 2048 bits or larger for RSA algorithms;

Part 1: 5.2.2. Authorization server, 6.

shall require and use a key of size 160 bits or larger for elliptic curve algorithms;

For example, when private_key_jwt is used as the client authentication method and RSA is used for signing the JWT, the key size must be 2048 bits or larger. Likewise, when an elliptic curve algorithm is used, the key size must be at least 160 bits.

Part 1: 5.2.2. Authorization server, 7.

shall require RFC7636 with S256 as the code challenge method;

It is required to implement RFC 7636 (PKCE), which is a countermeasure against the "authorization code interception attack".

Authorization Code Interception Attack

RFC 7636 added the code_challenge and code_challenge_method request parameters to the authorization request and the code_verifier request parameter to the token request. Because the default value of code_challenge_method is plain, authorization requests that comply with FAPI must include code_challenge_method=S256 explicitly.
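For reference, the following sketch (JDK only; the class and method names are mine) shows how a client derives the S256 code challenge from a code verifier: it is the base64url encoding, without padding, of the SHA-256 hash of the ASCII code verifier.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class Pkce
{
    // code_challenge = BASE64URL(SHA256(ASCII(code_verifier)))
    public static String computeS256Challenge(String codeVerifier) throws Exception
    {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(codeVerifier.getBytes(StandardCharsets.US_ASCII));

        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }
}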

See “Proof Key for Code Exchange (RFC 7636)” for details about PKCE.

Part 1: 5.2.2. Authorization server, 8.

shall require redirect URIs to be pre-registered;

In RFC 6749, registration of redirect URIs is not required under some conditions. For FAPI, registration of redirect URIs is always required.

Part 1: 5.2.2. Authorization server, 9.

shall require the redirect_uri parameter in the authorization request;

In RFC 6749, the redirect_uri request parameter of an authorization request can be omitted under some conditions. For FAPI, the request parameter must always be included. OIDC has the same requirement.

Part 1: 5.2.2. Authorization server, 10.

shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;

When an authorization server checks whether the value of the redirect_uri request parameter matches a pre-registered one, the rule described in "6. Normalization and Comparison" of RFC 3986 (Uniform Resource Identifier (URI): Generic Syntax) is applied unless the pre-registered one is an absolute URI. On the other hand, FAPI (and OIDC also) requires that simple string comparison always be used to check whether the redirect URIs match.

Part 1: 5.2.2. Authorization server, 11.

shall require user authentication to an appropriate Level of Assurance for the operations the client will be authorized to perform on behalf of the user;

It is required that user authentication performed during the authorization process satisfy an appropriate level of assurance. ID1 and ID2 required LoA (Level of Assurance) 2, which is defined in X.1254 (Entity authentication assurance framework). However, the Final version made the requirement more abstract (= changed the requirement from "LoA 2" to "appropriate LoA").

FYI: The following is the definition of LoA 2 described in “6.2 Level of assurance 2 (LoA2)” of X.1254.

At LoA2, there is some confidence in the claimed or asserted identity of the entity. This LoA is used when moderate risk is associated with erroneous authentication. Single-factor authentication is acceptable. Successful authentication shall be dependent upon the entity proving, through a secure authentication protocol, that the entity has control of the credential. Controls should be in place to reduce the effectiveness of eavesdroppers and online guessing attacks. Controls shall be in place to protect against attacks on stored credentials.

For example, a service provider might operate a website that enables its customers to change their address of record. The transaction in which a beneficiary changes an address of record may be considered an LoA2 authentication transaction, as the transaction may involve a moderate risk of inconvenience. Since official notices regarding payment amounts, account status, and records of changes are usually sent to the beneficiary’s address of record, the transaction additionally entails moderate risk of unauthorized release of PII. As a result, the service provider should obtain at least some authentication assurance before allowing this transaction to take place.

Part 1: 5.2.2. Authorization server, 12.

shall require explicit approval by the user to authorize the requested scope if it has not been previously authorized;

Of course.

Part 1: 5.2.2. Authorization server, 13.

shall reject an authorization code (Section 1.3.1 of RFC6749) if it has been previously used;

Prohibiting reuse of authorization codes and checking that an authorization code has never been used before are different things. If an authorization server implementation uses randomly-generated strings as authorization codes and removes them from the database once they are used, literally rejecting a "previously used" code would require keeping authorization codes in the database even after their use, just for that verification. If the strings that represent authorization codes are generated randomly with high enough entropy, it is wasteful to keep them in the database after use. A certain famous engineer says, "Most implementations prevent reuse of authorization codes by deleting the corresponding database records and don't check whether they have been used previously, and such implementations are sufficient."

Part 1: 5.2.2. Authorization server, 14.

shall return token responses that conform to Section 4.1.4 of RFC6749;

This is not a FAPI-specific requirement. Every authorization server implementation that claims it supports OAuth 2.0 must conform to Section 4.1.4 of RFC 6749.

Part 1: 5.2.2. Authorization server, 15.

shall return the list of granted scopes with the issued access token if the request was passed in the front channel and was not integrity protected;

In RFC 6749, the scope response parameter can be omitted unless requested scopes and granted ones are different (RFC 6749, 5.1. Successful Response). In FAPI, the scope response parameter is required (even if the requested scopes and granted ones are equal) if the authorization request is passed in the front channel and is not integrity protected.

“Integrity protected” here means that a Request Object (OIDC Core Section 6 or JAR) is used.

Part 1: 5.2.2. Authorization server, 16.

shall provide non-guessable access tokens, authorization codes, and refresh token (where applicable), with sufficient entropy such that the probability of an attacker guessing the generated token is computationally infeasible as per RFC 6749 Section 10.10;

ID2 requires that access tokens have a minimum of 128 bits of entropy, but the Final version avoids mentioning the exact size of the minimum entropy and just says “sufficient entropy”.

Part 1: 5.2.2. Authorization server, 17.

should clearly identify the details of the grant to the user during authorization as in 16.18 of OIDC;

Suppose that a client application requests a payment scope. A typical authorization page will tell the user only that the client application is requesting the payment scope. However, recent regulations in financial industries require that details be explained to the user, for example, the purpose of the payment, the amount of money transferred, and so on. Generally speaking, recent regulations require that the grant be more specific.

UK Open Banking has invented "Lodging Intent" for this purpose. In the mechanism, (a) a client application registers details of the grant it wants into an authorization server in advance, (b) the authorization server issues an intent ID that represents the registered details, and (c) the client makes an authorization request with the intent ID. As a result, the authorization server can generate an authorization page which includes the details of the authorization request.

To make the lodging intent pattern available as standards, the OpenID Foundation has developed two separate specifications: "OAuth 2.0 Pushed Authorization Requests" (PAR) and "OAuth 2.0 Rich Authorization Requests" (RAR). These specifications will be mentioned again later.

Part 1: 5.2.2. Authorization server, 18.

should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a client as in 16.18 of OIDC;

It should be noted that, if the format of access tokens is the self-contained type (e.g. JWT), the access tokens cannot be revoked unless the system implements and operates a mechanism like the CRL (Certificate Revocation List) or OCSP (Online Certificate Status Protocol) of PKI (Public Key Infrastructure). If the system does not provide such a mechanism, it means that the system has decided to give up revocation of access tokens. In this case, the duration of access tokens must be short enough to mitigate the damage of access token leakage. See "OAuth Access Token Implementation" for further discussion.

Part 1: 5.2.2. Authorization server, 19.

shall return an invalid_client error as defined in 5.2 of RFC6749 when mis-matched client identifiers were provided through the client authentication methods that permits sending the client identifier in more than one way;

FAPI Part 1 requires MTLS (tls_client_auth, self_signed_tls_client_auth) or JWT (client_secret_jwt, private_key_jwt) for client authentication.

MTLS uses a client certificate but a certificate does not include the client identifier of the client which tries to authenticate itself with the certificate. Therefore, the client_id request parameter needs to be given explicitly.

On the other hand, JWT-based client authentication methods present a JWT as the value of the client_assertion request parameter and the JWT contains the client identifier as the value of the iss claim. Therefore, the client_id request parameter is not necessary. In addition, according to RFC 7523, 3. and OIDC Core, 9., the sub claim also holds the client identifier when a JWT is used for client authentication.

JWT-based Client Authentication and Client Identifiers

In MTLS, it is only the client_id request parameter that represents a client identifier. On the other hand, in JWT-based client authentication, both the iss claim and the sub claim hold a client identifier. The values of the claims must match. Also, if the client_id request parameter is redundantly given although JWT-based client authentication is used, the value of the request parameter must match the client identifier, too.
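A sketch of such a consistency check on the authorization server side might look like the following. It assumes the Nimbus JOSE+JWT library and omits signature verification and the other validation steps of the client assertion; the class and method names are hypothetical.

import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class ClientAssertionIdentifierCheck
{
    // 'clientAssertion' is the value of the 'client_assertion' request parameter.
    // 'clientIdParameter' is the value of the 'client_id' request parameter,
    // or null if the parameter was not given.
    public static String extractClientId(String clientAssertion, String clientIdParameter) throws Exception
    {
        JWTClaimsSet claims = SignedJWT.parse(clientAssertion).getJWTClaimsSet();

        String iss = claims.getIssuer();
        String sub = claims.getSubject();

        // Both 'iss' and 'sub' hold the client identifier, so they must match.
        if (iss == null || !iss.equals(sub))
        {
            throw new Exception("The iss and sub claims in the client assertion do not match.");
        }

        // If 'client_id' is given redundantly, it must match, too.
        if (clientIdParameter != null && !clientIdParameter.equals(iss))
        {
            throw new Exception("The client_id request parameter does not match the client assertion.");
        }

        return iss;
    }
}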

Part 1: 5.2.2. Authorization server, 20.

shall require redirect URIs to use the https scheme;

This sentence added by FAPI Implementer’s Draft 2 is short but has a big impact. Because of this sentence, developers cannot use custom schemes in FAPI any more. To process redirection on client side only without preparing an external Web server, developers have to use the method described in “7.2. Claimed “https” Scheme URI Redirection” of BCP 212 (OAuth 2.0 for Native Apps).

Part 1: 5.2.2. Authorization server, 21.

should issue access tokens with a lifetime of under 10 minutes unless the tokens are sender-constrained; and

This requirement was added by the FAPI Final version. “sender-constrained” here means that access tokens have to be bound to a client certificate (MTLS).

Part 1: 5.2.2. Authorization server, 22.

shall support OIDD, may support RFC8414 and shall not distribute discovery metadata (such as the authorization endpoint) by any other means.

This requirement was added by the FAPI Final version. OIDD here is short for “OpenID Connect Discovery 1.0”. Therefore, authorization servers for FAPI must implement a “discovery endpoint” which is defined in OIDD Section 4.

Part 1: 5.2.2.1. Returning authenticated user’s identifier

Further, if it is desired to provide the authenticated user’s identifier to the client in the token response, the authorization server:

Section 5.2.2.1. lists requirements that an authorization server must follow when the authenticated user’s identifier is requested. In other words, when an ID token is requested.

Part 1: 5.2.2.1. Returning authenticated user’s identifier, 1.

shall support the authentication request as in Section 3.1.2.1 of OIDC;

"3.1.2.1. Authentication Request" of OIDC Core is the definition of a request to an authorization endpoint in the context of OpenID Connect. RFC 6749 calls a request to an authorization endpoint "authorization request". OIDC Core calls it "authentication request". Aside from the names, considering that the specification of an authorization endpoint is the main part of OIDC Core, the FAPI's requirement is almost equal to stating "shall support OIDC Core".

Part 1: 5.2.2.1. Returning authenticated user’s identifier, 2.

shall perform the authentication request verification as in Section 3.1.2.2 of OIDC;

Part 1: 5.2.2.1. Returning authenticated user’s identifier, 3.

shall authenticate the user as in Section 3.1.2.2 and 3.1.2.3 of OIDC;

Part 1: 5.2.2.1. Returning authenticated user’s identifier, 4.

shall provide the authentication response as in Section 3.1.2.4 and 3.1.2.5 of OIDC depending on the outcome of the authentication;

Part 1: 5.2.2.1. Returning authenticated user’s identifier, 5.

shall perform the token request verification as in Section 3.1.3.2 of OIDC; and

Part 1: 5.2.2.1. Returning authenticated user’s identifier, 6.

shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of OIDC with its sub value corresponding to the authenticated user and optional acr value in ID Token.

In summary, the requirements above say "follow the OIDC Core specification." Nothing special for FAPI.

Part 1: 5.2.2.2. Client requesting openid scope

If the client requests the openid scope, the authorization server

1. shall require the nonce parameter defined in Section 3.1.2.1 of OIDC in the authentication request.

OIDC Section 3.1.2.1 (Authorization Code Flow) states that nonce is optional. On the other hand, OIDC Section 3.2.2.1 (Implicit Flow) states that nonce is mandatory.

The FAPI requirement above requires nonce even in the authorization code flow if openid is included in scope.

Part 1: 5.2.2.3. Clients not requesting openid scope

If the client does not request the openid scope, the authorization server

1. shall require the state parameter defined in Section 4 of RFC6749

In RFC 6749, the state parameter is optional. FAPI makes the parameter mandatory when openid is not included in scope.

Part 1: Requirements for Public Client

"5.2.3. Public client" of "Part 1" lists requirements for public clients. Let's take a look at them one by one.

Part 1: 5.2.3. Public client, 1.

shall support RFC7636;

RFC 7636 is PKCE.

Part 1: 5.2.3. Public client, 2.

shall use S256 as the code challenge method for the RFC7636;

This means “an authorization request must include code_challenge_method=S256.”

Part 1: 5.2.3. Public client, 3.

shall use separate and distinct redirect URI for each authorization server that it talks to;

Part 1: 5.2.3. Public client, 4.

shall store the redirect URI value in the resource owner’s user-agents (such as browser) session and compare it with the redirect URI that the authorization response was received at, where, if the URIs do not match, the client shall terminate the process with error;

These requirements are so clear that further explanation is not needed.

Part 1: 5.2.3. Public client, 5.

(withdrawn); and

“(withdrawn)” here indicates that the requirement which existed in the previous FAPI versions has been withdrawn. You’ll see more “withdrawn”s in following sections, too.

Part 1: 5.2.3. Public client, 6.

shall implement an effective CSRF protection.

In normal cases, CSRF protection is implemented on server side. What is CSRF protection as a requirement for public clients? This is CSRF protection for redirect URIs. The following is an excerpt from “10.12. Cross-Site Request Forgery” of RFC 6749.

The client MUST implement CSRF protection for its redirection URI. This is typically accomplished by requiring any request sent to the redirection URI endpoint to include a value that binds the request to the user-agent’s authenticated state (e.g., a hash of the session cookie used to authenticate the user-agent). The client SHOULD utilize the “state” request parameter to deliver this value to the authorization server when making an authorization request.

In addition to the requirements from “Public client, 1” to “Public client, 6”, “if it is desired to obtain a persistent identifier of the authenticated user”, that is, if an ID token is requested, an authorization request by a public client:

Part 1: 5.2.3. Public client, 7.

shall include openid in the scope value; and

Part 1: 5.2.3. Public client, 8.

shall include the nonce parameter defined in Section 3.1.2.1 of OIDC in the authentication request.

On the other hand, “If openid is not in the scope value”, an authorization request by a public client:

Part 1: 5.2.3. Public client, 9.

shall include the state parameter defined in section 4.1.1 of RFC6749;

Part 1: 5.2.3. Public client, 10.

shall verify that the scope received in the token response is either an exact match, or contains a subset of the scope sent in the authorization request; and

Part 1: 5.2.3. Public client, 11.

shall only use Authorization Server metadata obtained from the metadata document published by the Authorization Server at its well known endpoint as defined in OIDD or RFC 8414.

Part 1: Requirements for Confidential Client

"5.2.4. Confidential client" of "Part 1" lists requirements for confidential clients. The requirements are positioned as additions to the requirements for public clients. Therefore, confidential clients must follow not only the requirements in 5.2.4 but also the requirements in 5.2.3.

Part 1: 5.2.4. Confidential client, 1.

shall support the following methods to authenticate against the token endpoint:

1. Mutual TLS for OAuth Client Authentication as specified in Section 2 of MTLS, and

2. client_secret_jwt or private_key_jwt as specified in Section 9 of OIDC;

Note that client authentication methods defined in RFC 6749 (client_secret_basic and client_secret_post) cannot be used.

Part 1: 5.2.4. Confidential client, 2.

shall use RSA keys with a minimum 2048 bits if using RSA cryptography;

Part 1: 5.2.4. Confidential client, 3.

shall use elliptic curve keys with a minimum of 160 bits if using Elliptic Curve cryptography; and

Part 1: 5.2.4. Confidential client, 4.

shall verify that its client secret has a minimum of 128 bits if using symmetric key cryptography.

These requirements apply when encrypted JWTs are used.

Part 1: Requirements for Protected Resources

"6.2.1. Protected resources provisions" of "Part 1" lists requirements for protected resources.

Part 1: 6.2.1. Protected resource provisions, 1.

shall support the use of the HTTP GET method as in Section 4.3.1 of RFC7231;

Part 1: 6.2.1. Protected resource provisions, 2.

shall accept access tokens in the HTTP header as in Section 2.1 of OAuth 2.0 Bearer Token Usage RFC6750;

That is, protected resource endpoints must support the HTTP GET method and be able to accept an access token in the format of Authorization: Bearer {AccessToken}.

Request to Protected Resource Endpoint

Part 1: 6.2.1. Protected resource provisions, 3.

shall not accept access tokens in the query parameters stated in Section 2.3 of OAuth 2.0 Bearer Token Usage RFC6750;

That is, protected resource endpoints must not accept a query parameter in the format of access_token={AccessToken}.

Part 1: 6.2.1. Protected resource provisions, 4.

shall verify that the access token is neither expired nor revoked;

Part 1: 6.2.1. Protected resource provisions, 5.

shall verify that the scope associated with the access token authorizes access to the resource it is representing;

Part 1: 6.2.1. Protected resource provisions, 6.

shall identify the associated entity to the access token;

Part 1: 6.2.1. Protected resource provisions, 7.

shall only return the resource identified by the combination of the entity implicit in the access and the granted scope and otherwise return errors as in Section 3.1 of RFC6750;

These are general steps of access token verification that protected resource endpoints are expected to take.

"3.1. Error Codes" of RFC 6750 defines three error codes: invalid_request, invalid_token and insufficient_scope. One point that may feel strange to those who are not familiar with RFC 6750 is that an error code is embedded not in the response body but in the WWW-Authenticate HTTP header.

RFC 6750 Error Response
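For example, an error response for an expired access token looks like the one below, following the style of the examples in RFC 6750; the error code appears in the WWW-Authenticate header, not in the response body.

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="example", error="invalid_token", error_description="The access token expired"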

Part 1: 6.2.1. Protected resource provisions, 8.

shall encode the response in UTF-8 if applicable;

Part 1: 6.2.1. Protected resource provisions, 9.

shall send the Content-type HTTP header Content-Type: application/json; if applicable;

Protected resource endpoints in FAPI are expected to return their responses in JSON format.

Part 1: 6.2.1. Protected resource provisions, 10.

shall send the server date in HTTP Date header as in Section 7.1.1.2 of RFC7231;

The format of the Date header is defined in "7.1.1.1. Date/Time Formats" of RFC 7231. Below is an example.

Date: Sun, 06 Nov 1994 08:49:37 GMT

Part 1: 6.2.1. Protected resource provisions, 11.

shall set the response header x-fapi-interaction-id to the value received from the corresponding FAPI client request header or to a RFC4122 UUID value if the request header was not provided to track the interaction, e.g., x-fapi-interaction-id: c770aef3-6784-41f7-8e0e-ff5f97bddb3a;

This is a requirement specific to FAPI. Responses from FAPI protected resource endpoints must include an x-fapi-interaction-id header.

When an incoming request has x-fapi-interaction-id, the same value of the header must be included in the response. Otherwise, the protected resource endpoint must generate a new value for x-fapi-interaction-id.
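A minimal sketch of this behavior, written as a framework-agnostic helper (the class and method names are mine), looks like this:

import java.util.UUID;

public class FapiInteractionId
{
    // Decide the value of the 'x-fapi-interaction-id' response header.
    // 'requestHeaderValue' is the value of the same header in the incoming
    // request, or null if the header was not provided.
    public static String decideResponseValue(String requestHeaderValue)
    {
        if (requestHeaderValue != null && !requestHeaderValue.isEmpty())
        {
            // Echo back the value received from the client.
            return requestHeaderValue;
        }

        // Generate a new RFC 4122 UUID when the client did not provide one.
        return UUID.randomUUID().toString();
    }
}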

Part 1: 6.2.1. Protected resource provisions, 12.

shall log the value of x-fapi-interaction-id in the log entry; and

This is also specific to FAPI. This requirement doesn’t have any impact on request and response formats, but this can make it easy to correlate server-side logs and client-side logs.

Part 1: 6.2.1. Protected resource provisions, 13.

shall not reject requests with a x-fapi-customer-ip-address header containing a valid IPv4 or IPv6 address.

Part 1: 6.2.1. Protected resource provisions, 14.

should support the use of Cross Origin Resource Sharing (CORS) [CORS] and or other methods as appropriate to enable JavaScript clients to access the endpoint if it decides to provide access to JavaScript clients.

For example, if a protected resource endpoint wants to allow JavaScript clients to access it from anywhere, the endpoint should include an Access-Control-Allow-Origin: * header in responses.

Part 1: Requirements for Clients to Protected Resources

"6.2.2. Client provisions" of "Part 1" lists requirements for clients to follow in accessing protected resources.

Part 1: 6.2.2. Client provisions, 1.

shall send access tokens in the HTTP header as in Section 2.1 of OAuth 2.0 Bearer Token Usage RFC6750; and

That is, clients send an access token in the format of Authorization: Bearer {AccessToken}.

Part 1: 6.2.2. Client provisions, 2.

(withdrawn);

Part 1: 6.2.2. Client provisions, 3.

may send the last time the customer logged into the client in the x-fapi-auth-date header where the value is supplied as a HTTP-date as in Section 7.1.1.1 of RFC7231, e.g., x-fapi-auth-date: Tue, 11 Sep 2012 19:43:31 GMT;

Part 1: 6.2.2. Client provisions, 4.

may send the customer’s IP address if this data is available in the x-fapi-customer-ip-address header, e.g., x-fapi-customer-ip-address: 2001:DB8::1893:25c8:1946 or x-fapi-customer-ip-address: 198.51.100.119; and

Part 1: 6.2.2. Client provisions, 5.

may send the x-fapi-interaction-id request header, in which case the value shall be a RFC4122 UUID to the server to help correlate log entries between client and server, e.g., x-fapi-interaction-id: c770aef3-6784-41f7-8e0e-ff5f97bddb3a.

These are FAPI-specific HTTP headers. It is up to clients whether to send the headers or not.

FAPI-specific HTTP Headers

Part 1: Security Considerations

"7. Security considerations" of "Part 1" lists security considerations. A summary follows.

7.1. — Follow BCP 195. Use TLS 1.2 or newer. Follow RFC 6125.

7.2. — Part 1 doesn’t authenticate authorization request and response.

7.3. — Part 1 doesn’t assure message integrity of authorization request.

7.4.1. — Part 1 doesn’t discuss encryption of authorization request.

7.4.2. — Be careful not to leak information through logs.

7.4.3. — Be careful not to leak information through referrer. Make duration of access tokens short.

7.5. — Native applications shall follow BCP 212 but must not support “Private-Use URI Scheme Redirection” and “Loopback Interface Redirection”. They must use https for the scheme of redirect URI as introduced in “Claimed https Scheme URI Redirection”.

7.6. — Both FAPI implementation and underlying OAuth/OIDC implementation must be complete and correct. See OpenID Certification.

7.7. — Use a separate issuer per brand if multiple brands need to be supported.

“Part 2” provides solutions for security considerations listed in “Part 1”, for example, by making “request object” mandatory. “Part 2” is recommended when higher security than “Part 1” is needed.

Part 2: Advanced

Next, let’s read “Part 2” which defines advanced security profile.

Detached Signature

"5.1.1. ID Token as Detached Signature" of "Part 2" states that an "ID token" is used as a "detached signature".

An ID token is signed by an authorization server, so even if an attacker tampered with the content of the ID token, it could be detected. A client application that has received an ID token can confirm that the ID token has not been tampered with by verifying the signature of the ID token.

If an authorization server embeds hash values of response parameters (such as code and state) into an ID token, a client application can confirm that the values of the response parameters have not been tampered with by computing hash values of the response parameter values and comparing them to the hash values embedded in the ID token. In this context, the ID token is regarded as a detached signature.

ID Token as Detached Signature

For the code response parameter that represents an authorization code, c_hash has already been defined in OIDC Core as a claim that represents the hash value of code. Likewise, at_hash has been defined as a claim that represents the hash value of access_token.

What is missing is a claim that represents the hash value of the state response parameter. So, “5.1.1. ID Token as Detached Signature” defines s_hash for that purpose.

s_hash
State hash value. Its value is the base64url encoding of the left-most half of the hash of the octets of the ASCII representation of the state value, where the hash algorithm used is the hash algorithm used in the alg header parameter of the ID Token’s JOSE header. For instance, if the alg is HS512, hash the state value with SHA-512, then take the left-most 256 bits and base64url encode them. The s_hash value is a case sensitive string.
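For the common case where the ID token is signed with PS256 or ES256 (both based on SHA-256), the s_hash computation can be sketched as follows (JDK only; the class name is mine).

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.Base64;

public class StateHash
{
    // s_hash = base64url encoding of the left-most half of
    // SHA-256(ASCII(state)), for ID tokens whose 'alg' is based on SHA-256.
    public static String compute(String state) throws Exception
    {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(state.getBytes(StandardCharsets.US_ASCII));

        byte[] leftHalf = Arrays.copyOf(digest, digest.length / 2);

        return Base64.getUrlEncoder().withoutPadding().encodeToString(leftHalf);
    }
}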

Because "Part 2" uses ID tokens as detached signatures, even if client applications don't need ID tokens in their application layer, they have to send authorization requests that require an ID token. To be exact, they have to include id_token in the response_type request parameter. This is the reason why the second requirement in "5.2.2. Authorization Server" says "shall require the response_type values code id_token".

However, since Implementer’s Draft 2, ID tokens don’t have to be used as detached signatures when JARM is used. It is because the entire set of response parameters is packed into a JWT.

Part 2: Requirements for Authorization Server

"5.2.2. Authorization server" of "Part 2" lists requirements for authorization servers.

Part 2: 5.2.2. Authorization server, 1.

shall require a JWS signed JWT request object passed by value with the request parameter or by reference with the request_uri parameter;

The request and request_uri parameters are defined in “6. Passing Request Parameters as JWTs” of OIDC Core. To use these parameters, the first step is to pack request parameters into a JWT. This JWT is called “request object”. An authorization request (1) passes the request object as the value of the request parameter directly or (2) puts the request object somewhere accessible from the authorization server and passes the URI pointing to the location as the value of the request_uri parameter.

Passing a Request Object by Value

Signing a request object is not mandatory in OIDC Core, but signing is mandatory in FAPI Part 2. If request objects are signed, authorization servers can confirm that the request parameters have not been tampered with by verifying the signatures of the request objects.

To be honest, what surprised me most when I read the FAPI specification for the first time (many years ago) was this requirement, because I knew from experience that the request object feature is hard to implement on the authorization server side. As the feature is hard to implement and optional in OIDC, there are many authorization server implementations that claim to support OIDC but don't support request objects. Be careful not to choose an authorization server implementation that doesn't support request objects if you want to build a system that supports FAPI Part 2.

Part 2: 5.2.2. Authorization server, 2.

shall require

1. the response_type values code id_token, or

2. the response_type value code in conjunction with the response_mode value jwt;

To use an ID token as a detached signature, even if an ID token is not needed in the application layer, id_token must be included in the response_type request parameter.

But, as mentioned in the previous section, id_token doesn't have to be included in the response_type request parameter when JARM is used. "When JARM is used" is, to be concrete, "when the response_mode request parameter is included and its value is one of query.jwt, fragment.jwt, form_post.jwt and jwt".

NOTE: ID2 requires that response_type be either code id_token or code id_token token when JARM is not used, but the Final version has removed code id_token token.

Part 2: 5.2.2. Authorization server, 3.

(moved to 5.2.2.1);

Part 2: 5.2.2. Authorization server, 4.

(moved to 5.2.2.1);

Part 2: 5.2.2. Authorization server, 5.

shall only issue sender-constrained access tokens;

In ID2, this clause was “shall only issue authorization code, access token, and refresh token that are holder of key bound;”. However, because the requirement was impractical, it was changed to the current one. See FAPI Issue 202 for details if you are interested.

Part 2: 5.2.2. Authorization server, 6.

shall support MTLS as mechanism for constraining the legitimate senders of access tokens;

In ID2, this clause was "shall support [OAUTB] or [MTLS] as a holder of key mechanisms;". However, OAUTB (Token Binding) was removed from the Final version because it was unlikely to become available in the future.

Part 2: 5.2.2. Authorization server, 7.

(withdrawn);

Part 2: 5.2.2. Authorization server, 8.

(moved to 5.2.2.1);

Part 2: 5.2.2. Authorization server, 9.

(moved to 5.2.2.1);

Part 2: 5.2.2. Authorization server, 10.

shall only use the parameters included in the signed request object passed via the request or request_uri parameter;

In ID2, this requirement was “shall require that all parameters are present inside the signed request object passed in the request or request_uri parameter;”. The expression was changed but the point remains the same. A request object must include all request parameters to conform to FAPI Part 2.

This is different from OIDC Core which allows request parameters to be put inside or outside a request object and merges them.

Part 2: 5.2.2. Authorization server, 11.

may support the pushed authorization request endpoint as described in PAR;

The “pushed authorization request endpoint” is a new endpoint defined in “OAuth 2.0 Pushed Authorization Requests” (PAR). A client application can register an authorization request at the endpoint and obtain a Request URI which represents the registered authorization request. The client specifies the issued Request URI as the value of the request_uri request parameter when sending an authorization request to the authorization endpoint.

The following diagram excerpted from “Illustrated PAR: OAuth 2.0 Pushed Authorization Requests” shows the authorization code flow which utilizes the pushed authorization request endpoint.

Authorization Code Flow with Pushed Authorization Request Endpoint
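As a rough sketch, the exchange looks like the following. The endpoint path and placeholder values are illustrative only; in FAPI, the pushed parameters are carried in a signed request object and the client is also authenticated, for example by MTLS.

Pushed authorization request:

POST /par HTTP/1.1
Host: server.example.com
Content-Type: application/x-www-form-urlencoded

client_id={CLIENT_ID}&request={REQUEST_OBJECT}

Response from the pushed authorization request endpoint:

HTTP/1.1 201 Created
Content-Type: application/json

{"request_uri":"{REQUEST_URI}","expires_in":90}

Subsequent request to the authorization endpoint:

GET /authorize?client_id={CLIENT_ID}&request_uri={REQUEST_URI} HTTP/1.1
Host: server.example.com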

HISTORY: The 7th section of ID2 showed an idea about pre-registration of an authorization request. The section named the endpoint for the pre-registration “request object endpoint”. The specification of PAR was developed based on the idea. As a result, the FAPI Final version has withdrawn the 7th section.

Part 2: 5.2.2. Authorization server, 12.

(withdrawn)

Part 2: 5.2.2. Authorization server, 13.

shall require the request object to contain an exp claim that has a lifetime of no longer than 60 minutes after the nbf claim;

OIDC Core does not require that request objects include the exp claim. In contrast, FAPI Part 2 requires exp as a mandatory claim.

Furthermore, the Final version has added a requirement “a lifetime of no longer than 60 minutes after the nbf claim”. Because of this requirement, the nbf claim has become mandatory.
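A sketch of building such a request object on the client side, assuming the Nimbus JOSE+JWT library and a PS256 client key (all concrete values are illustrative placeholders), might look like this:

import java.security.interfaces.RSAPrivateKey;
import java.util.Date;

import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.crypto.RSASSASigner;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class RequestObjectBuilder
{
    public static String build(RSAPrivateKey clientKey) throws Exception
    {
        long now = System.currentTimeMillis();

        JWTClaimsSet claims = new JWTClaimsSet.Builder()
                .issuer("{CLIENT_ID}")
                .audience("https://server.example.com") // the OP's Issuer Identifier URL
                .notBeforeTime(new Date(now))           // nbf is mandatory in FAPI Part 2
                .expirationTime(new Date(now + 10 * 60 * 1000)) // exp within 60 minutes after nbf
                .claim("client_id", "{CLIENT_ID}")
                .claim("response_type", "code id_token")
                .claim("redirect_uri", "https://client.com/callback")
                .claim("scope", "openid payment")
                .claim("state", "{STATE}")
                .claim("nonce", "{NONCE}")
                .claim("code_challenge", "{CODE_CHALLENGE}")
                .claim("code_challenge_method", "S256")
                .build();

        SignedJWT jwt = new SignedJWT(new JWSHeader.Builder(JWSAlgorithm.PS256).build(), claims);

        jwt.sign(new RSASSASigner(clientKey));

        return jwt.serialize();
    }
}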

The new requirement is a breaking change from a viewpoint of client applications because authorization servers now reject authorization requests whose request object does not include the nbf claim. As a matter of fact, some test cases in the official conformance suite had to be updated for the new requirement.

Authorization server implementations may provide a mechanism to mitigate the impact of the breaking change. For example, Authlete has defined Service.nbfOptional flag that indicates whether the nbf claim in the request object is optional even when the authorization request is regarded as a FAPI-Part2 request. The value of the flag can be changed by “nbf Claim” in the Service Owner Console.

Service Configuration: nbf Claim

Part 2: 5.2.2. Authorization server, 14.

shall authenticate the confidential client using one of the following methods (this overrides FAPI Security Profile 1.0 – Part 1: clause 5.2.2-4):

1. tls_client_auth or self_signed_tls_client_auth as specified in section 2 of MTLS, or

2. private_key_jwt as specified in section 9 of OIDC;

It should be noted that client_secret_jwt is not allowed in Part 2. This is different from Part 1.

Client Authentication Methods in FAPI

Part 2: 5.2.2. Authorization server, 15.

shall require the aud claim in the request object to be, or to be an array containing, the OP's Issuer Identifier URL;

This requirement was added by the Final version. Client applications have to put the aud claim in request objects. The value of “OP’s Issuer Identifier URL” can be found in the discovery document as the value of the issuer metadata (cf. OpenID Connect Discovery 1.0, 3. OpenID Provider Metadata).

Part 2: 5.2.2. Authorization server, 16.

shall not support public clients;

This requirement is a new one added by the Final version, but it is said that it has been logically impossible to support public clients in the context of FAPI Part 2 since older FAPI versions.

Part 2: 5.2.2. Authorization server, 17.

shall require the request object to contain an nbf claim that is no longer than 60 minutes in the past; and

The 13th requirement implies that the nbf claim is mandatory. This 17th requirement states it explicitly.

Part 2: 5.2.2. Authorization server, 18.

shall require PAR requests, if supported, to use PKCE (RFC7636) with S256 as the code challenge method.

“PAR” here is “OAuth 2.0 Pushed Authorization Requests”.

5.2.2.1. ID Token as detached signature

In addition, if the response_type value code id_token is used, the authorization server

Section 5.2.2.1. lists requirements for authorization servers which are applied when an ID token is used as a detached signature.

5.2.2.1. ID Token as detached signature, 1.

shall support OIDC;

5.2.2.1. ID Token as detached signature, 2.

shall support signed ID Tokens;

5.2.2.1. ID Token as detached signature, 3.

should support signed and encrypted ID Tokens;

From a viewpoint of OIDC, these requirements are not new. By definition, ID tokens are always signed. Encryption of ID tokens is optional.

Part 2: 5.2.2.1. ID Token as detached signature, 4.

shall return ID Token as a detached signature to the authorization response;

This requires that an authorization server issue an ID token. However, because the condition written at the top of Section 5.2.2.1 already requires that id_token be included in response_type, an ID token is issued as a natural consequence, so this requirement doesn't strictly have to exist.

Part 2: 5.2.2.1. ID Token as detached signature, 5.

shall include state hash, s_hash, in the ID Token to protect the state value if the client supplied a value for state. s_hash may be omitted from the ID Token returned from the Token Endpoint when s_hash is present in the ID Token returned from the Authorization Endpoint; and

When JARM is used, this requirement doesn’t have to be followed.

Part 2: 5.2.2.1. ID Token as detached signature, 6.

should not return sensitive PII in the ID Token in the authorization response, but if it needs to, then it should encrypt the ID Token.

PII is short for “Personally Identifiable Information”.

The feature of ID token encryption has existed since OIDC Core. When the encryption algorithm for ID tokens is an asymmetric one, the authorization server must either (1) manage public keys of client applications directly in its database or (2) fetch JWK Set documents from the locations pointed to by clients’ jwks_uri metadata and extract public keys from the documents.

For signing ID tokens, an authorization server only has to handle server-side keys.

ID Token Signing

In contrast, if an authorization server wants to support encryption of ID tokens, the authorization server has to handle client-side keys, too.

ID Token Encryption

This is the reason why quite a few authorization server implementations don't support ID token encryption.

5.2.2.2. JARM

In addition, if the response_type value code is used in conjunction with the response_mode value jwt, the authorization server

5.2.2.2. JARM, 1.

shall create JWT-secured authorization responses as specified in JARM, Section 4.3.

This clause does not include any FAPI-specific requirements. It just says that JARM implementations should function as the JARM specification requires.

When response_type does not contain id_token, no ID token is issued. Therefore, an ID token cannot be used as a detached signature. In this case, JARM has to be used to assure that the authorization response has not been tampered with.

Part 2: Requirements for Confidential Client

The FAPI Final version has renamed Part 2: Section 5.2.3 from “Public client” to “Confidential client”.

Part 2: 5.2.3. Confidential client, 1.

shall support MTLS as mechanism for sender-constrained access tokens;

That is, the authorization server must issue certificate-bound access tokens as defined in Section 3 of RFC 8705.

Part 2: 5.2.3. Confidential client, 2.

shall include the request or request_uri parameter as defined in Section 6 of OIDC in the authentication request;

As listed in the list of requirements for authorization servers, either the request parameter or the request_uri parameter must be included. Note that OIDC Core says “If one of these parameters is used, the other MUST NOT be used in the same request.”

Part 2: 5.2.3. Confidential client, 3.

shall ensure the Authorization Server has authenticated the user to an appropriate Level of Assurance for the client’s intended purpose;

This requirement states just that the user shall be authenticated appropriately. The FAPI Final version removed the requirement “by requesting the acr claim as an essential claim” which once existed in the clause.

HISTORY:

In ID2, this requirement was “shall request user authentication at LoA 3 or greater by requesting the acr claim as an essential claim as defined in section 5.5.1.1 of [OIDC];”.

When a client wants to require claims as essential ones, the acr_values request parameter cannot be used. Instead, a client must use the claims request parameter, pass JSON as its value, and include {"essential":true} inside the JSON. The following is an example of JSON that needs to be given as the value of the claims request parameter in order to mark urn:mace:incommon:iap:silver as an essential ACR.

Claims for Essential ACR

BTW, this requirement is loosened in UK Open Banking which is based on FAPI Part 2. That is, clients don’t have to require ACRs as essential. Probably, it is not intentional. I guess that the snapshot of FAPI specification which was referred to when Open Banking Profile (OBP) was developed didn’t contain the sentence, “by requesting the acr claim as an essential claim”.

The official Financial-grade API conformance test suite (conformance-suite) developed and maintained by FinTechLabs.io contains test cases for OBP. When FinTechLabs ran the OBP test cases using Authlete to test the test suite itself, they encountered an error. Because Authlete strictly follows FAPI specification, Authlete reported “acr claim is not required as essential.” However, the expected behavior in the context of OBP is to ignore the FAPI requirement.

The right approach for the error was to amend OBP (to make OBP compliant with the latest FAPI specification). However, I was given an explanation like "if the official conformance test suite did it, all the existing OBP implementations wouldn't be able to pass the official tests. Changing the tests at this timing might cause delay in the officially-announced schedule of Open Banking."

Therefore, I decided to tweak Authlete and added an OPEN_BANKING option in addition to the FAPI option.

Supported Service Profiles (in Service Owner Console provided by Authlete)

If OPEN_BANKING is enabled, Authlete deliberately does not check whether the acr claim is required as essential, even in the context of FAPI Part 2. The code snippet below is the actual implementation excerpted from Authlete's source code.

Code to judge whether acr should be required as an essential claim

As a result of this, Authlete is listed as a platform vendor that has passed “Open Banking Security Profile Conformance”.

Authlete listed in Open Banking Security Profile Conformance

~HISTORY END

However, again, the FAPI Final has removed the requirement "by requesting the acr claim as an essential claim", so Authlete no longer checks whether ACRs are requested as essential ones. Therefore, the OPEN_BANKING flag is no longer meaningful.

Part 2: 5.2.3. Confidential client, 4.

(moved to 5.2.3.1);

Part 2: 5.2.3. Confidential client, 5.

(withdrawn);

Part 2: 5.2.3. Confidential client, 6.

(withdrawn);

Part 2: 5.2.3. Confidential client, 7.

(moved to 5.2.3.1);

Part 2: 5.2.3. Confidential client, 8.

shall send all parameters inside the authorization request’s signed request object

Part 2: 5.2.3. Confidential client, 9.

shall additionally send duplicates of the response_type, client_id, and scope parameters/values using the OAuth 2.0 request syntax as required by Section 6.1 of the OpenID Connect specification if not using PAR;

If request parameters are all put into a request object, either the request parameter or the request_uri parameter is sufficient. However, if parameters that are mandatory in OAuth 2.0 / OIDC Core (e.g. client_id and response_type) are omitted, the request is no longer compliant with OAuth 2.0 / OIDC Core. Therefore, parameters that are mandatory in OAuth 2.0 / OIDC Core must also be duplicated outside the request object even if they exist inside it.

The FAPI Final version has added the condition "if not using PAR". This implies that the set of request parameters doesn't have to be compliant with OAuth 2.0 / OIDC when PAR is used. This incompatibility comes from JWT Secured Authorization Request (JAR). See "Implementer's note about JAR (JWT Secured Authorization Request)" for details.

response_type requirement in OAuth 2.0, OIDC and JAR

Part 2: 5.2.3. Confidential client, 10.

shall send the aud claim in the request object as the OP’s Issuer Identifier URL;

Part 2: 5.2.3. Confidential client, 11.

shall send the exp claim in the request object that has a lifetime of no longer than 60 minutes;

The same requirements can be found in Section 5.2.2. Authorization server.

Part 2: 5.2.3. Confidential client, 12.

(moved to 5.2.3.1);

Part 2: 5.2.3. Confidential client, 13.

(moved to 5.2.3.1);

Part 2: 5.2.3. Confidential client, 14.

shall send a nbf claim in the request object;

The same requirement can be found in Section 5.2.2. Authorization server.

Part 2: 5.2.3. Confidential client, 15.

shall use RFC7636 with S256 as the code challenge method if using PAR; and

That is, an authorization request must include the code_challenge_method=S256 request parameter when PAR is used.

Part 2: 5.2.3. Confidential client, 16.

shall additionally send a duplicate of the client_id parameter/value using the OAuth 2.0 request syntax to the authorization endpoint, as required by Section 5 of JAR, if using PAR.

The PAR specification requires that authorization servers handle request objects based on the rules defined in JAR. The JAR specification has made the response_type request parameter optional, but the client_id remains mandatory. See “Implementer’s note about JAR (JWT Secured Authorization Request)” for details.

Part 2: 5.2.3.1. ID Token as detached signature

In addition, if the response_type value code id_token is used, the client

Section 5.2.3.1. lists requirements for client applications which are applied when an ID token is used as a detached signature.

Part 2: 5.2.3.1. ID Token as detached signature, 1.

shall include the value openid into the scope parameter in order to activate OIDC support;

This is not a FAPI-specific requirement. OIDC Core requires that an OIDC request include openid in the scope parameter. See the explanation about the scope parameter written in Section 3.1.2.1. Authentication Request in OIDC Core for details.

Part 2: 5.2.3.1. ID Token as detached signature, 2.

shall require JWS signed ID Token be returned from endpoints;

Nothing new from OIDC’s viewpoint. By definition, ID tokens are always signed. And when response_type is code id_token and scope contains openid, both the authorization endpoint and the token endpoint return an ID token. See “Diagrams of All The OpenID Connect Flows” for details about what the endpoints return.

Part 2: 5.2.3.1. ID Token as detached signature, 3.

shall verify that the authorization response was not tampered using ID Token as the detached signature;

That is, client applications have to compute hash values of response parameters outside the issued ID token and compare the values to the hash values in the ID token.

Part 2: 5.2.3.1. ID Token as detached signature, 4.

shall verify that s_hash value is equal to the value calculated from the state value in the authorization response in addition to all the requirements in 3.3.2.12 of OIDC; and

NOTE: This enables the client to verify that the authorization response was not tampered with, using the ID Token as a detached signature.

This requirement particularly mentions the state parameter and the s_hash claim in the ID token, although they are just one of the parameter/hash pairs that have to be considered.

Part 2: 5.2.3.1. ID Token as detached signature, 5.

shall support both signed and signed & encrypted ID Tokens.

By definition, ID tokens are always signed. When ID tokens are encrypted, the order of signing and encrypting is “sign then encrypt”. As a result, an encrypted ID token takes the form of “Nested JWT” as illustrated below.

Nested JWT (JWS in JWE pattern)
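A sketch of reading such a nested ID token on the client side, assuming the Nimbus JOSE+JWT library and RSA keys (the class and method names are mine), might look like this:

import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;

import com.nimbusds.jose.crypto.RSADecrypter;
import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jwt.EncryptedJWT;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class NestedIdTokenReader
{
    // 'idToken' is an encrypted ID token (JWE) whose payload is a signed JWT (JWS).
    public static JWTClaimsSet read(String idToken,
                                    RSAPrivateKey clientPrivateKey,
                                    RSAPublicKey serverPublicKey) throws Exception
    {
        // Decrypt the outer JWE with the client's private key.
        EncryptedJWT jwe = EncryptedJWT.parse(idToken);
        jwe.decrypt(new RSADecrypter(clientPrivateKey));

        // The payload of the JWE is the signed ID token (Nested JWT).
        SignedJWT jws = jwe.getPayload().toSignedJWT();

        // Verify the signature of the inner JWS with the server's public key.
        if (jws == null || !jws.verify(new RSASSAVerifier(serverPublicKey)))
        {
            throw new Exception("The signature of the ID token is invalid.");
        }

        return jws.getJWTClaimsSet();
    }
}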

See “Understanding ID Token” for details about the structure of ID tokens.

Part 2: 5.2.3.2. JARM

In addition, if the response_type value code is used in conjunction with the response_mode value jwt, the client

Part 2: 5.2.3.2. JARM, 1.

shall verify the authorization responses as specified in JARM, Section 4.4.

See “Section 4.4. Processing rules” of JARM for details about the verification steps.

Part 2: 5.2.4.

(withdrawn)

Part 2: 5.2.5.

(withdrawn)

Part 2: 6.2.1. Protected resource provisions, 1.

shall support the provisions specified in clause 6.2.1 Financial-grade API Security Profile 1.0 – Part 1: Baseline; and

Part 2: 6.2.1. Protected resource provisions, 2.

shall adhere to the requirements in MTLS.

Part 2: 6.2.2. Client provisions

The client supporting this document shall support the provisions specified in clause 6.2.2 of Financial-grade API Security Profile 1.0 – Part 1: Baseline.

Simply put, Section 6 of Part 2 states that protected resource endpoints and client applications shall use certificate-bound access tokens and follow requirements in Part 1.

Part 2: 7. (Withdrawn)

The 7th section of ID2 was “Request object endpoint”. The section was removed by the FAPI Final version because it was replaced with “OAuth 2.0 Pushed Authorization Requests” (PAR). See “Illustrated PAR: OAuth 2.0 Pushed Authorization Requests” for overview of PAR.

Part 2: Security Considerations

"8. Security considerations" of "Part 2" lists security considerations. A summary follows.

8.1 — This specification references security considerations in Section 10 of RFC 6749 and RFC 6819.

8.2 — Protected resource endpoints shall accept only certificate-bound access tokens.

8.3.1 — Clients should use a different redirect URI per authorization server.

8.3.2 — Authorization codes and client secrets are passed to attackers if developers are deceived into using a fake token endpoint.

8.3.3 — Hybrid flow or JARM can be used as a countermeasure for IdP mix-up attack.

8.3.4 — (removed)

8.3.5 — Because an access token is bound to an X.509 certificate, stolen access tokens cannot be used without corresponding certificates.

8.4.1 — RFC 6749 doesn’t assure message integrity of authorization request and response.

8.4.2 — Using request objects prevents authorization request parameter injection attack.

8.4.3 — Using hybrid flow or JARM prevents authorization response parameter injection attack.

8.5 — Cipher suites for TLS 1.2 are restricted.

"8.5. TLS considerations" of "Part 2" permits only the following cipher suites for TLS communication when the TLS version in use is below 1.3.

1. TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
2. TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
3. TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
4. TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

But, from a viewpoint of interoperability of web browsers, additional cipher suites allowed in the latest BCP 195 are permitted for authorization endpoints.

Because I couldn’t find any good reasons to exclude the following cipher suites,

5. TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

6. TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

I created Issue 216 (TLS_ECDHE_ECDSA cipher suites) to suggest adding them to the list of permitted cipher suites. 1 year and 4 months later, the issue was closed with the reason that FAPI now allows TLS 1.3.

8.6 — PS256 and ES256 only are allowed for JWS signature algorithm.

Signing algorithms of JWS are listed in “3.1. “alg” (Algorithm) Header Parameter Values for JWS” of RFC 7518 (JSON Web Algorithms). Among the 13 algorithms, “8.6. Algorithm considerations” of “Part 2” permits PS256 and ES256 only. The section explicitly states that RSASSA-PKCS1-v1_5 (e.g. RS256) should not be used and none must not be used.

JWS algorithms permitted by Financial-grade API, Part 2

FYI: JWT is used at the following places in an authorization server implementation.

JWT Usage in Authorization Server Implementation

8.6.1 — RSA1_5 encryption algorithm must not be used.

This requirement about encryption algorithms was added by the FAPI Final version. FAPI prohibits RSA1_5. The algorithm identifier is defined in "4.1. "alg" (Algorithm) Header Parameter Values for JWE" of RFC 7518 (JSON Web Algorithms). The identifier represents RSAES-PKCS1-v1_5.

8.7 — Use certified FAPI implementations.

8.8 —Don’t allow privileged actions without an access token.

8.9 — Keys for signature verification should be accessible via the jwks_uri or jwks client metadata (cf. RFC 7591) and the jwks_uri server metadata (cf. RFC 8414).

8.10 — A compromise of any client that shares the same key with other clients would result in a compromise of all the clients.

8.11 — JWK sets should not contain multiple keys with the same kid, but other key attributes may be used to select one among multiple key candidates.

FAPI implementation

This chapter picks up some topics related to FAPI implementation.

Baseline or Advanced?

When a client application requests an access token and accesses APIs with the access token, which security profile should apply, FAPI Part 1 or FAPI Part 2, or neither of them?

Some implementations may configure themselves statically and others may make the decision dynamically at runtime. The FAPI specification mentions nothing about how to determine which security profile should apply.

A simple approach would be "Regard all authorization requests as FAPI Part 2 requests." Actually, UK Open Banking has adopted this approach. A hard-coded implementation like this may be acceptable if the system development is a one-time effort.

However, this approach is not appropriate for a generic authorization server implementation. It’s because a hard-coded implementation hinders flexibility of system design too much. Therefore, in a generic implementation, it is better to judge dynamically at runtime whether an authorization request is for FAPI Part 1 or for FAPI Part 2 (or for normal OAuth 2.0 / OIDC).

If so, how should the decision be made dynamically? The conclusion everyone will eventually reach is the same: judge by checking the requested scopes.

(Note: Another possible way would be to utilize the resource request parameter defined in “RFC 8707 Resource Indicators for OAuth 2.0”.)

For example, (1) prepare scopes named read and write, (2) adopt a rule where the read scope requires FAPI Part 1 requirements be satisfied and the write scope requires FAPI Part 2 requirements be satisfied, and (3) implement APIs so that they interpret the scopes accordingly. If APIs are implemented in this way, the implementation of an authorization endpoint can change its behavior dynamically by (a) applying FAPI Part 2 requirements when the scope request parameter includes the write scope, (b) applying FAPI Part 1 requirements when the scope request parameter does not include the write scope but includes the read scope, and (c) applying normal OAuth 2.0 and OIDC requirements when the scope request parameter includes neither the read scope nor the write scope.

How to implement the scope-based switch? For instance, one approach might be to regard scopes whose name starts with read as scopes for FAPI Part 1. However, this approach imposes heavy restrictions on scope names. If that is the case, what approach has Authlete adopted?

As the first step, Authlete implemented a generic mechanism for attaching arbitrary attributes to each scope. On top of this mechanism, Authlete treats the attribute name fapi in a special way: an attribute with name fapi and value r represents Read-Only (Baseline), and an attribute with name fapi and value rw represents Read-and-Write (Advanced).

The web console for FAPI-aware Authlete (version 2.0+) provides UI for scope attributes. The screenshot below defines a scope named read with an attribute of fapi=r.

Scope Settings for FAPI Read-Only

Authlete's /auth/authorization API, which parses authorization requests, checks the scopes listed in the scope request parameter. If the scope list includes a scope that has an attribute of fapi=rw, the request is regarded as a request for FAPI Part 2. If the scope list includes no scope with fapi=rw but includes a scope with fapi=r, the request is regarded as a request for FAPI Part 1. In other cases, the request is treated as a normal OAuth 2.0 / OIDC request.
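
In pseudocode form, the selection logic reads roughly as follows. The data model (a scope object carrying a dictionary of attributes) is an assumption made for illustration and is not Authlete's actual internal representation.

    # Illustrative sketch of profile selection driven by the "fapi" scope attribute.
    # The scope objects (with .name and .attributes) are assumed for illustration only.
    def profile_from_scope_attributes(requested_scopes) -> str:
        fapi_values = {s.attributes.get("fapi") for s in requested_scopes}
        if "rw" in fapi_values:
            return "FAPI_PART_2"       # Advanced (Read-and-Write)
        if "r" in fapi_values:
            return "FAPI_PART_1"       # Baseline (Read-Only)
        return "PLAIN_OAUTH_OIDC"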

NOTE: In ID2, the names of FAPI Part 1 and Part 2 were "Read-Only Security Profile" and "Read and Write Security Profile". The FAPI Final version renamed them to "Baseline Security Profile" and "Advanced Security Profile". The values r and rw for the fapi attribute were chosen based on the old names.

Mutual TLS

“Mutual TLS” has three meanings as listed below (as already explained previously).

- TLS communication using a client certificate
- Client authentication using a client certificate
- Certificate binding

The first part is handled by API management solutions. On the other hand, the second and the third parts don’t necessarily have to be handled by the API management layer. Rather, a better system architecture would handle them in a different layer that is independent of the API management layer.

Because of its unique architecture, Authlete doesn't take on any task in the API management layer; that is, Authlete does nothing for the first part. On the other hand, Authlete supports the second and the third parts. Thanks to this, systems using Authlete can support MTLS (RFC 8705 OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens), as required by FAPI, on any API management solution that developers want to use. I actually tried MTLS on Amazon API Gateway and wrote an article titled "Financial-grade Amazon API Gateway" that explains how to achieve it.

Example of Component Deployment for MTLS on Amazon API Gateway

Any API management solution can support MTLS by using Authlete as long as the solution provides a mechanism which enables developers to access the client certificate used in TLS communication.
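
To make the third part concrete, the sketch below shows the certificate-binding check that a protected resource performs under RFC 8705: it compares the SHA-256 thumbprint of the client certificate presented on the TLS connection with the cnf["x5t#S256"] value bound to the access token. The sketch assumes the API management layer forwards the certificate in PEM form and that token introspection exposes the bound thumbprint; it is an illustration, not Authlete's API.

    # Sketch of the certificate-binding check defined by RFC 8705.
    # Assumes the gateway forwards the client certificate as PEM text and that the
    # introspection result exposes the bound thumbprint as cnf["x5t#S256"].
    import base64
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def certificate_thumbprint(pem_cert: str) -> str:
        cert = x509.load_pem_x509_certificate(pem_cert.encode())
        der = cert.public_bytes(serialization.Encoding.DER)
        # Base64url-encode the SHA-256 digest of the DER certificate, without padding.
        return base64.urlsafe_b64encode(hashlib.sha256(der).digest()).rstrip(b"=").decode()

    def is_token_bound_to_certificate(introspection_result: dict, pem_cert: str) -> bool:
        expected = introspection_result.get("cnf", {}).get("x5t#S256")
        return expected is not None and expected == certificate_thumbprint(pem_cert)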

Existing API management solutions may try to implement MTLS directly. However, that would take time, and above all, it is not good system design to support the functionality directly in the API management layer. At the time of this writing, if you use an API management solution provided by one of the giant cloud vendors, Authlete is the best answer for MTLS.

The video below is a session from the "Financial APIs Workshop" that took place in Tokyo on July 24, 2018. In the video, Justin Richer, one of the most famous software engineers in the community and the author of "OAuth 2 in Action", explains Authlete's MTLS implementation.

The material and transcript of the presentation are available at “Authlete FAPI Enhancements”.

Access Token Duration

This is not related to FAPI, but I explain this feature here because customers often consult me about it in the context of banking APIs when they want to make the duration of access tokens for remittance shorter than that of access tokens for other purposes.

The functionality can be achieved by making access token duration shorter when an authorization request contains a scope for remittance. For example, if an API for remittance requires a scope named remit, the authorization server would shorten access token duration when an authorization request contains the scope.

Authlete supports the functionality by treating a scope attribute named access_token.duration in a special way.

Authlete checks all scope attributes of requested scopes, picks up the smallest value among values of access_token.duration attributes, and uses it as the duration of an access token being issued. If none of the requested scopes has an access_token.duration attribute, Authlete uses the default value of access token duration set per authorization server instance. If the default value is smaller than the smallest value of access_token.duration attributes, the default value is used.
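
Expressed as a sketch, the rule looks like the following. Again, the scope objects carrying an attributes dictionary are an assumption made for illustration, not Authlete's internal data model.

    # Sketch of the duration rule described above: use the smallest access_token.duration
    # among the requested scopes, and never exceed the service-wide default.
    # The scope objects with an .attributes dict are assumed for illustration only.
    def access_token_duration(requested_scopes, default_duration: int) -> int:
        durations = [
            int(s.attributes["access_token.duration"])
            for s in requested_scopes
            if "access_token.duration" in s.attributes
        ]
        if not durations:
            return default_duration
        return min(min(durations), default_duration)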

The screenshot below shows how to set access_token.duration=300 as a scope attribute.

Scope Settings for Access Token Duration

Likewise, duration of refresh tokens can be set by utilizing refresh_token.duration attributes.

NOTE: Authlete 2.1 and newer versions support “access token duration per client”. See “How Authlete determines token duration” on Authlete Knowledge Base for details.

Access Token with Transaction Information

This feature is not related to FAPI either, but I explain it here because customers often consult me about it in the context of banking APIs when they want to associate detailed transaction information with an access token. I hear that some regulations in Europe require that an access token be issued per transaction under certain conditions.

This functionality cannot be achieved with the scope attributes explained in "Access Token Duration", because it requires that data be handled per access token, not per scope.

Authlete has long provided a mechanism to set arbitrary key-value pairs on an access token. This feature can be utilized to associate transaction information with an access token. Technical details are explained in "Extra Properties". See also "How to add extra properties to an access token" in the Authlete Knowledge Base.

However, note that it is not a good idea to associate detailed information, such as monetary amounts, with an access token directly. Instead, the recommended approach is to (1) store the detailed transaction information in a separate database and (2) associate the unique identifier of the database record with the access token, as sketched below.
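
A conceptual sketch of that pattern follows. The database helper and the property name transaction_id are assumptions made for illustration; they are not part of Authlete's API.

    # Conceptual sketch: persist transaction details in your own store and bind only the
    # record's identifier to the access token as a (hidden) key-value property.
    # The "database" object and the property name "transaction_id" are illustrative assumptions.
    import uuid

    def prepare_token_properties(transaction_details: dict, database) -> list:
        transaction_id = str(uuid.uuid4())
        database.save(transaction_id, transaction_details)   # full details live outside the token
        # Key-value pair to attach to the access token at issuance time.
        return [{"key": "transaction_id", "value": transaction_id, "hidden": True}]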

Authorization Details

Since version 2.2, Authlete has supported "OAuth 2.0 Rich Authorization Requests" (RAR). The specification adds a request/response parameter, authorization_details.

The authorization_details parameter is used to enable an access token to hold detailed information about authorization. For example, detailed information about payment such as “How much?”, “To whom?”, etc.
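
For concreteness, a payment-style value might look like the sketch below, loosely modeled on the examples in the RAR specification; the field names inside the object are illustrative rather than normative.

    # Illustrative authorization_details value for a payment, loosely modeled on the
    # examples in the RAR specification; the inner field names are illustrative.
    authorization_details = [
        {
            "type": "payment_initiation",
            "actions": ["initiate"],
            "locations": ["https://example.com/payments"],
            "instructedAmount": {"currency": "EUR", "amount": "123.50"},
            "creditorName": "Merchant A",
            "creditorAccount": {"iban": "DE02100100109307118603"},
        }
    ]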

According to the specification, the authorization_details parameter can be used anywhere the scope parameter is used. For instance, (a) in the authorization request, (b) in the token response, (c) in the introspection response, and so on.

(a) authorization_details in Authorization Request (RAR Section 3)

(b) authorization_details in Token Response (RAR Section 7)

(c) authorization_details in Introspection Response (RAR Section 8.2)

RAR is an open standard to describe details about authorization and tie the information to an access token. RAR is to be adopted as a component of FAPI 2.0 (cf. “Are there FAPI 2.0 implementations?” in FAPI FAQ).

Authlete's Extra Properties can be used for the same purpose. One functional difference is that each extra property can be either exposed or hidden. Hidden extra properties never appear in any OAuth/OIDC responses but can be retrieved by Authlete's introspection API (/auth/introspection). There are use cases where you want to tie information to an access token but hide it from the client application and the user. In such cases, Extra Properties are useful.

Finally

Thank you for reading this long post till the end.

Authlete already supports the FAPI 1.0 Final version and the known technical specifications of FAPI 2.0, as mentioned in the FAPI FAQ. You can try FAPI through the Authlete API Tutorials with an Authlete account (signup).

 

Takahiko Kawasaki
Co-founder and representative director of Authlete, Inc.
https://www.authlete.com/

The post Guest Blog: Financial-grade API (FAPI), Explained by an Implementer – Updated first appeared on OpenID.

Blockchain Commons

2021 Q1 Blockchain Commons Report


In Q1 2021, Blockchain Commons largely focused on working with companies to integrate our Gordian architecture, best practices, specifications, reference libraries, and reference applications into their wallets and services. This included:

- Released three Gordian reference apps for public beta testing on Apple's TestFlight;
- Planned the creation of an independent Gordian Recovery app for use with third-party wallets;
- Updated our keytool-cli command-line interface app for new UR (Universal Resource) usages and future specifications;
- Refined our Lifehash specification to optimize for black & white usage;
- Tested new improvements to LetheKit;
- Supported the adoption of Gordian best practices by a variety of manufacturers; and
- Worked directly with our sustaining sponsor Bitmark to architect their new Autonomy bitcoin app.

We also did work to support the wider community, including:

- Produced a major design document on multisigs;
- Supported DID's advancement on its standards track;
- Worked to develop the did:onion DID method;
- Developed packages to support activists; and
- Testified to legislatures in Nevada, North Dakota, and Wyoming.

(Also see our previous, Q4 2020 report.)

Gordian Work

Our advances on Gordian were linked to a large-scale transition in the meaning of the Gordian system. Prior to 2021, we were offering a variety of applications, but we were simultaneously working with companies to create specifications. This created a real tension. Now that we’ve seen the beginnings of adoption of our best practices, we’ve been able to change the focus of our applications from being consumer-focused to being exemplars for other companies to study. In Q2, we expect this to mature into a Gordian Seal™ program that denotes companies who are producing products that follow the Gordian principles and our best practices. Like similar projects such as the FIDO Alliance, we expect the Gordian Seal to make it easier for everyone in the blockchain ecosystem to work together, creating bizdev and interoperability opportunities.

Gordian Testing. Blockchain Commons now has three iOS reference apps available for public beta testing via Apple’s Testflight: Gordian Cosigner, Gordian Guardian, and Guardian Wallet. Wallet was our original app, recently updated to support the newest Lifehash and crypto-request features; Cosigner is the companion signing app that we introduced last quarter; and Guardian is our newest release, a key-storage tool for iOS. As reference apps, these projects are mainly meant as exemplars, demonstrating the best practices and principles suggested by the Gordian architecture as well as exemplifying how multiple apps can interact through Airgaps and Universal Resources (URs). See our video for a real-life example of Gordian Cosigner, Guardian, and Wallet working together to securely create a multisig account and sign a PSBT, and this video that demonstrates AirGap signing between LetheKit and Gordian Cosigner.

Independent Recovery. We are already working to split a new app, "Gordian Recovery", off of Gordian Guardian. One of our best practices for the Gordian system demands that a user can recover funds from a Gordian-approved wallet even if the company producing it disappears. Gordian Recovery will provide one way to do so, allowing a user to independently use the app to recover and sweep funds held in a multisig wallet, no matter the status of the wallet manufacturer. At its debut, we expect Gordian Recovery to support at least three different companies with their own multisig hardware or software wallet systems. Generally, Gordian Recovery, like Gordian Cosigner, demonstrates how Blockchain Commons will interact with the wallet ecosystem by providing not just specifications, best practices, and references, but also complementary apps that can support third-party wallets.

Keytool Updates. Our keytool CLI (command-line interface) app also received major upgrades in Q1. We expanded it to support the new UR (Universal Resource) crypto-request and crypto-response features that add support for arbitrary requests. This is a first step in moving away from the m/48' derivation used by current hardware wallets for multisig, which often results in master or cosigner xpub reuse and thus a risk to privacy. This means that we now have the infrastructure to demonstrate different solutions to the problems of m/48'xpub reuse, but there’s still a lot of legacy use cases that need to be resolved. (We also reviewed one solution for the m/48' derivation problem and found it unnecessarily complex, so: onward.)

Lifehash Improvements. Most of the Gordian specifications that we’ve created while working with our Airgapped Wallet Community focus on interoperability, particularly moving data across airgaps. Lifehash is something else: it’s a user-interface specification meant to give users trust in the continuity of cryptocurrency accounts without having to depend on reading text addresses, which aren’t user-intuitive. Instead, Lifehash creates unique, colored icons for each account that remain consistent over time. This quarter, we adjusted the colors used by Lifehash so that they look better in black & white printing, and simultaneously experimented with dithering on the 200x200 LetheKit display, to ensure that we could display meaningful information on a worst-case small black & white display.

Lethekit Updates. The DIY LetheKit has been our prime reference for testing hardware implementations of our Gordian best practices. Not only did we test it out with dithered Lifehashes last quarter, but we also were able to use its display to test out animated QRs for transferring PSBTs. It worked! (Albeit, slowly.) See also this video of LetheKit exporting derived keys to Gordian Signer.

Gordian Adoption. Finally, as noted, we’re very happy to see continued adoption of Gordian. Last quarter, we talked about all of the software and library development being done by wallet companies. This quarter, we’re seeing more companies committing to including SSKRs (Sharded Secret Key Reconstruction), URs (Universal Resources), and other Gordian features in their wallets. Sparrow Wallet has been expanding its capabilities, while Foundation Devices was the newest to announce their integration of some UR features, including animated PSBTs. Foundation and Sparrow have both been working with us for a while: we’re thrilled to see both hardware and software wallet companies incorporating Blockchain Commons specifications! We’re also aware of two more software-wallet companies who haven’t made announcements yet, and we’ve been extensively working with a third company to produce a Gordian-approved wallet that entirely matches our principles and best practices and adds on some great UI besides. We can now announce that third company is Bitmark …

Autonomy Architecting. Blockchain Commons has been working with Bitmark to design Autonomy, a bitcoin wallet and digital assets manager. Their goal is to make it easy for families to gain independence and preserve generational wealth. Autonomy combines a mobile app with your own dedicated full node running privately in the cloud. We used a partitioned architecture (keys across app+node) and a personal recovery network (using SSKR) to remove all single points of failure. When you transact, everything is multisig between your app and full node. Any single key can fail and you won’t lose your funds. Nothing else like this currently exists in the market.

Other Advances

Here’s some details on our other major projects:

Multisig Documents. This quarter, we released our first major expansion to our #SmartCustody tutorials since 2019: a 10,000-word document on Designing Multisig for Independence & Resilience. The state of multisig wasn’t sufficiently advanced to provide specific advice when we originally wrote #SmartCustody, so we’re now happy to include it by breaking down the design of multisig addresses into theoretical, transactional, operational, and functional elements and providing design patterns and examples to help you put those elements together. Want to know why Blockchain Commons is today focused on multisig technology and how to create them to meet your own needs? It’s all here.

DID Recommendation. We’ve been working with the DID specification since it was incubated at early Rebooting-the-Web-of-Trust workshops, and co-authored by Blockchain Commons founder Christopher Allen. We’re thrilled that it’s now a Candidate Recommendation. There might still be revisions and new candidates, but this is the beginning of the last stage in creating an official internet standard for decentralized identity on the internet.

DID Method Expansion. Meanwhile, Blockchain Commons is doing its own work on DIDs, with a new did:onion method implementation. We’re happy to say that Spruce Systems is already considering it for their DID resolver. We’re also considering returning to work on our own BTCR DID method, which was one of the first DID methods, but has gotten somewhat out of date with recent updates to Bitcoin.

Apt Packages. We have produced apt packages for installing seedtool and keytool for use with Debian and Tails OSes. This is a first step in providing support tools for human-rights activists, which we expect to get more emphasis in our work in Q3.

Legislative Work. Finally, Christopher Allen’s work with legislatures multiplied last quarter. In Wyoming, he led efforts that resulted in the Wyoming Digital Identity Act, which includes a specific definition of digital identity that works toward the principles of self-sovereign identity. Work on a private-key-protection bill was less successful, as it had to be withdrawn after being spoiled by an amendment. Christopher has also recently testified before legislatures in North Dakota and Nevada. This is all crucial work because actual engineering will ultimately be limited and directed by what states legislate, so we want to make sure the laws fit the technology and the needs of the individual.

Intern Search. We're thrilled that our Summer 2021 internship program has been sponsored by the Human Rights Foundation. We've thus put out a call for interns, and we're expanding our usual development work to also include direct support for activists, who need help to maintain their privacy in potentially hostile regimes. However, really supporting activists requires knowing what they need. So, we're also considering working with interns to do research and conduct interviews to create user engagement models for activists. This would be similar to the Amira model from RWOT. If you are interested in having developers work with or mentor blockchain interns, want to suggest intern projects, or even have engineers who might be interested in working on Blockchain Commons projects briefly during the summer, please mail us.

Taproot & Schnorr First Steps. We may see a major upgrade for Bitcoin as soon as this fall, if Speedy Trial is approved for Taproot and Schnorr. The first will increase privacy for #SmartCustody features such as timelocks and multisig, while the second will allow aggregated multisig, which offers several advantages over traditional ECDSA signatures. Our brand-new musign CLI app provides the first experimental support for Schnorr signatures leveraging the Taproot/Schnorr BIPs, but independent from bitcoind. More to come as we ramp up to this expansion.

Supporting the Future

We've laid out our initial roadmap for the next two cycles of Blockchain Commons efforts, covering spring and summer. Important topics include finalizing our Gordian Seal program, solving problems with xpub reuse in Bitcoin multisig, supporting and advancing SSKR (Sharded Secret Key Reconstruction) for not just cryptographic seeds but other data, continuing to support and advance URs (Universal Resources) with encrypted and/or signed CBOR, and architecting a new QuickConnect 2.0 for initiating TorGap-based services between peers. If you'd like to know more about our roadmap, especially if you are considering becoming a sustaining sponsor or sponsoring a specific project on our roadmap, please contact us directly.

If you'd like to support our work at Blockchain Commons, so that we can continue to develop new specifications, architectures, reference applications, and reference libraries for the whole community, please become a sponsor. You can alternatively make a one-time bitcoin donation at our BTCPay.

Thanks to our sustaining sponsors, Bitmark and Blockchainbird, our new project sponsor Human Rights Foundation (@HRF), as well as our GitHub monthly sponsors, who include Flip Abignale (@flip-btcmag), Dario (@mytwocentimes), Foundation Devices (@Foundation-Devices), Adrian Gropper (@agropper), Eric Kuhn (@erickuhn19), Trent McConaghy (@trentmc), Mark S. Miller (@erights), @modl21, Jesse Posner (@jesseposner), Protocol Labs (@protocol), Dan Trevino (@dantrevino), and Glenn Willen (@gwillen).

Christopher Allen, Executive Director, Blockchain Commons


decentralized-id.com

IOTA Foundation

The IOTA Foundation is the Next-Generation Blockchain and was initiated with a very clear and focused vision of enabling the paradigm shift of the Internet of Things, Industry 4.0 and a trustless ‘On Demand Economy’ through establishing a de facto standardized ‘Ledger of Everything'. It aims to enable all connected devices through verification of truth and transactional settlements which incentiviz

Website • Blog • Linkedin • Docs • GPlay

IOTA Foundation is a non-profit organization and creator of the Tangle, a permissionless, multi-dimensional distributed ledger, designed as a foundation of a global protocol for all things connected.

The Case for a Unified Identity Our Vision for a Unified Identity Protocol on the Tangle for Things, Organizations, and Individuals Establishing Trust between People, Organizations and Things (Video)

The concept of digital identity, implemented in the Tangle Identity Eclipse project, provides the layer of trust that the online world requires. Built on IOTA's Tangle and the World Wide Web Consortium (W3C) standards for DIDs and Verifiable Credentials, people, organizations, and things can identify each other, share data, and instantly verify the integrity of this data. They remain fully in control over their data in a privacy-first process. A few examples of how Tangle Identity can be used:

- Digitizing physical documents such as passports and licenses, creating reusable Know Your Customer (KYC) information.
- Verifiable company registrations and proof of employment to prevent phishing and fraud.
- Proof of authenticity, signed by the manufacturer, creating trust in devices and their capabilities.

Tangle EE Eclipse Working Group

Releasing IOTA Identity Alpha: A Standard Framework for Digital Identity

In this blog, you will find the alpha-release of IOTA Identity, open-sourcing our Selv app, and the announcement of the Identity X-Team. For those that participate at Odyssey Momentum, we also prepared a hackathon package near the bottom of this blog.

iotaledger/identity.rs

IOTA Identity is a Rust implementation of decentralized digital identity, also known as Self-Sovereign Identity (SSI). It implements standards such as the W3C Decentralized Identifiers (DID) and Verifiable Credentials and the DIF DIDComm Messaging specifications alongside supporting methods. This framework can be used to create and authenticate digital identities, creating a trusted connection and sharing verifiable information, establishing trust in the digital world.

Selv (GitHub)

Share your health status and other personal credentials securely and privately.

Iota Identity Experience Team - GitHub

The IOTA Identity Experience Team is a collaborative effort to provide help, guidance and spotlight to the IOTA Identity Community through offering feedback and introducing consistent workflows around IOTA Identity.

Persistent Selv - A self-sovereign digital identity (SSID) empowering individuals to engage in heritage and legacy-planning, establishing trusted connections with future generations and their environment.

Dark Matter Labs and IOTA Foundation — with significant conceptual contributions from Futures Literacy experts at UNESCO and Finland Futures Research Centre at University of Turku — are launching Persistent Selv; an exploratory demo app empowering individuals to improve their ecological footprints, by prospecting their environmental legacies and establishing trusted connections with future generations.

series on IOTA in the Deep Demonstration on Long-Termism. IOTA Foundation and EIT Climate KIC on the Road to a Long-Term Future Social Impact

In this short series, we want to share with you our learning and insight from this novel approach. In this part, we will explore the concept of Long Termism, how EIT Climate KIC orchestrates a diverse group to design new tools and interventions and how the IOTA Foundation contributes to this initiative.

Long-Term Cooperation: IOTA Foundation signs Memorandum of Understanding with EIT Climate KIC

At the IOTA Foundation, we fundamentally believe that a new time requires novel approaches to governance. As the first non-profit foundation in the European Union that was financed with a cryptocurrency endowment, we pioneered such novel arrangements in what we think will serve the IOTA protocol best in the very long term. Throughout the year we have been sharing our learning and have been working with diverse thought leaders and renowned organizations from the field of sustainable finance and economics to strategize how we can create the structure to support long-term impact initiatives.

Persistent Selv: An interactive demo around long-term digital identity

Today, we want to share with you the result of an experimental demonstration project we have been working on with a number of compelling partners. First and foremost, this demo is the result of a collaboration with Dark Matter Labs, a strategic discovery, design and development lab. Built on Selv, IOTA's self-sovereign identity (SSID) demonstration platform, we want to give you a glimpse into digital identities and how they can impact mankind's sustainability in the future. It is important to stress that this is an experimental and collaborative thought experiment, and we are grateful to EIT Climate KIC for having the foresight to support such bold developments.

Tuesday, 13. April 2021

Oasis Open

Call for Consent for Open Document Format for Office Applications (OpenDocument) V1.3 as OASIS Standard

Specifying characteristics of an open application-independent digital document file format. OpenDocument Format v1.3 is an update to the international standard Version 1.2, which was approved by the International Organization for Standardization (ISO) as ISO/IEC 26300 (2015).

Free, open XML-based document file format for office applications is presented to members as a candidate for OASIS Standard.

The OASIS Open Document Format for Office Applications (OpenDocument) TC members [1] have approved submitting the following Committee Specification to the OASIS Membership in a call for consent for OASIS Standard:

Open Document Format for Office Applications (OpenDocument) Version 1.3
Committee Specification 02
30 October 2020

This is a call to the primary or alternate representatives of OASIS Organizational Members to consent or object to this approval. You are welcome to register your consent explicitly on the ballot; however, your consent is assumed unless you register an objection [2]. To register an objection, you must:

Indicate your objection on this ballot, and Provide a reason for your objection and/or a proposed remedy to the TC.

You may provide the reason in the comment box or by email to the Technical Committee on its comment mailing list or, if you are a member of the TC, to the TC’s mailing list [3]. If you provide your reason by email, please indicate in the subject line that this is in regard to the Call for Consent.

This Committee Specification was approved by the Technical Committee and was submitted for the required 60-day public review [3]. All requirements of the OASIS TC Process having been met [4][5], the Committee Specification is now submitted to the voting representatives of OASIS Organizational Members.

Details

The Call for Consent opens on 14 April 2021 at 00:00 UTC and closes on 27 April 2021 at 23:59 UTC. You can access the ballot at:

Internal link for voting members: https://www.oasis-open.org/apps/org/workgroup/voting/ballot.php?id=3608

Publicly visible link: https://www.oasis-open.org/committees/ballot.php?id=3608

OASIS members should ensure that their organization's voting representative responds according to the organization's wishes. If you do not know who your organization's voting representative is, go to the My Account page at

http://www.oasis-open.org/members/user_tools

then click the link for your Company (at the top of the page) and review the list of users for the name designated as “Primary”.

Information about the Committee Specification

The OpenDocument Format is a free, open XML-based document file format for office applications, to be used for documents containing text, spreadsheets, charts, and graphical elements. OpenDocument Format v1.3 is an update to the international standard Version 1.2, which was approved by the International Organization for Standardization (ISO) as ISO/IEC 26300 (2015). OpenDocument Format v1.3 includes improvements for document security, clarifies under-specified components, and makes other timely improvements.

The OpenDocument Format specifies the characteristics of an open XML-based application-independent and platform-independent digital document file format, as well as the characteristics of software applications which read, write and process such documents. It is applicable to document authoring, editing, viewing, exchange and archiving, including text documents, spreadsheets, presentation graphics, drawings, charts and similar documents commonly used by personal productivity software applications.

The TC has received 3 Statements of Use from The Document Foundation, CIB labs GmbH, and Collabora Productivity [5].

URIs
The prose specification document and related files are available here:

Part 1: Introduction
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part1-introduction/OpenDocument-v1.3-cs02-part1-introduction.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part1-introduction/OpenDocument-v1.3-cs02-part1-introduction.html
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part1-introduction/OpenDocument-v1.3-cs02-part1-introduction.pdf

Part 2: Packages
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part2-packages/OpenDocument-v1.3-cs02-part2-packages.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part2-packages/OpenDocument-v1.3-cs02-part2-packages.html
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part2-packages/OpenDocument-v1.3-cs02-part2-packages.pdf

Part 3: OpenDocument Schema
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part3-schema/OpenDocument-v1.3-cs02-part3-schema.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part3-schema/OpenDocument-v1.3-cs02-part3-schema.html
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part3-schema/OpenDocument-v1.3-cs02-part3-schema.pdf

Part 4: Recalculated Formula (OpenFormula) Format
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part4-formula/OpenDocument-v1.3-cs02-part4-formula.odt (Authoritative)
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part4-formula/OpenDocument-v1.3-cs02-part4-formula.html
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/part4-formula/OpenDocument-v1.3-cs02-part4-formula.pdf

XML/RNG schemas and OWL ontologies:
https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/schemas/

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:

https://docs.oasis-open.org/office/OpenDocument/v1.3/cs02/OpenDocument-v1.3-cs02.zip

Additional information

[1] OASIS Open Document Format for Office Applications (OpenDocument) TC
https://www.oasis-open.org/committees/office/

TC IPR page
https://www.oasis-open.org/committees/office/ipr.php

[2] OpenDocument TC comment mailing list: office-comment@lists.oasis-open.org
(You must be subscribed to send to this list. To subscribe, see https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=office.)

Main mailing list: office@lists.oasis-open.org

[3] Candidate OASIS Standard Special Majority Vote:
https://www.oasis-open.org/committees/ballot.php?id=3562

[4] Public reviews:

- CS02 approved as a candidate for OASIS Standard on 02 February 2021: https://www.oasis-open.org/committees/ballot.php?id=3562
- Committee Specification 02 approved 30 October 2020: https://www.oasis-open.org/committees/ballot.php?id=3529
  The changes made between CSD03 and CS02 are documented in https://github.com/oasis-tcs/odf-tc/pull/30#issuecomment-715261301. The TC judges these changes to be Non-Material Changes.
- Committee Specification Draft 03 approved 31 August 2020: https://lists.oasis-open.org/archives/office/202008/msg00096.html
  CSD03 approved by TC for public review 31 August 2020: https://lists.oasis-open.org/archives/office/202008/msg00096.html
  15-day public review of CSD03 opened 15 September 2020 and closed 29 September 2020: https://lists.oasis-open.org/archives/members/202009/msg00003.html
  Comment resolution log: N/A
  The differences between CS01 and CSD03 are marked in:
  https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part1-introduction/OpenDocument-v1.3-csd03-part1-introduction-DIFF.pdf
  https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part2-packages/OpenDocument-v1.3-csd03-part2-packages-DIFF.pdf
  https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part3-schema/OpenDocument-v1.3-csd03-part3-schema-DIFF.pdf
  https://docs.oasis-open.org/office/OpenDocument/v1.3/csd03/part4-formula/OpenDocument-v1.3-csd03-part4-formula-DIFF.pdf
- Committee Specification 01 approved 25 December 2019: https://www.oasis-open.org/committees/ballot.php?id=3460
  The changes made between CSPRD02 and CS01 are documented in https://lists.oasis-open.org/archives/office/201912/msg00007.html. The TC judges these changes to be Non-Material Changes.
  The differences between CSPRD02 and CS01 are marked in:
  https://docs.oasis-open.org/office/OpenDocument/v1.3/cs01/part1-introduction/OpenDocument-v1.3-cs01-part1-introduction-DIFF.pdf
  https://docs.oasis-open.org/office/OpenDocument/v1.3/cs01/part2-packages/OpenDocument-v1.3-cs01-part2-packages-DIFF.pdf
  https://docs.oasis-open.org/office/OpenDocument/v1.3/cs01/part3-schema/OpenDocument-v1.3-cs01-part3-schema-DIFF.pdf
  https://docs.oasis-open.org/office/OpenDocument/v1.3/cs01/part4-formula/OpenDocument-v1.3-cs01-part4-formula-DIFF.pdf
- Committee Specification Draft 02 / Public Review Draft 02 approved 04 November 2019.
  15-day public review of CSPRD02 opened 28 November 2019 and closed on 12 December 2019: https://lists.oasis-open.org/archives/members/201911/msg00007.html
  Comment resolution log: https://docs.oasis-open.org/office/OpenDocument/v1.3/csprd02/OpenDocument-v1.3-csprd02-comment-resolution-log.txt
  The differences between CSPRD01 and CSPRD02 are marked in: https://docs.oasis-open.org/office/OpenDocument/v1.3/csprd02/part3-schema/OpenDocument-v1.3-csprd02-part3-schema-DIFF.pdf
- Committee Specification Draft 01 / Public Review Draft 01 approved 26 August 2019.
  30-day public review of CSPRD01 opened on 26 September 2019 and closed on 25 October 2019: https://lists.oasis-open.org/archives/members/201909/msg00004.html
  Comment resolution log: https://docs.oasis-open.org/office/OpenDocument/v1.3/csprd01/OpenDocument-v1.3-csprd01-comment-resolution-log.txt

[5] Statements of Use:

- The Document Foundation – https://lists.oasis-open.org/archives/office-comment/202101/msg00003.html
- CIB labs GmbH – https://lists.oasis-open.org/archives/office/202012/msg00020.html
- Collabora Productivity – https://lists.oasis-open.org/archives/office-comment/202101/msg00000.html

The post Call for Consent for Open Document Format for Office Applications (OpenDocument) V1.3 as OASIS Standard appeared first on OASIS Open.


Energy Web

Google Backs Energy Web to Harmonize Low-Carbon Electricity Markets Across Europe


Zug, Switzerland — 13 April 2021 — Today Energy Web announced a new initiative, funded by a €1 million grant from Google.org’s Impact Challenge program, to provide a digital framework for coordinating distributed energy resources (DERs) across the transmission and distribution market interface of Europe’s power grid. Climate-KIC, the EU’s leading knowledge and innovation community focused on climate, helped to select winning grantees.

The initiative will leverage the open-source Energy Web stack to make it easy for prosumers and DERs to register and participate in local and regional energy markets. Working with mobile network operators, IoT service providers, original equipment manufacturers (OEMs), and grid operators across Europe, the initiative will foster the procurement of flexibility services in a transparent, non-discriminatory, and market-based way, as prioritized by the EU Clean Energy Package.

Energy Web’s grant comes from the Google.org Impact Challenge on Climate, which commits €10M to fund bold ideas that aim to use technology to accelerate Europe’s progress toward a greener, more-resilient future. It is expected to accelerate the strong momentum Energy Web has already generated working with European grid operators. This grant and initiative will build upon existing open-source software developed to support near-identical use cases with transmission system operators (TSOs) Austrian Power Grid, 50 Hertz, and Elia as well as distribution system operators (DSOs) Electra Caldense and Fluvius. The common theme across these projects is easing the digitization, registration, and participation of DERs in energy markets managed by both DSOs and TSOs. More recently, this same architecture underpinned EasyBat, used to track the lifecycle of residential and commercial batteries in Belgium.

“This Google.org grant and the work it will fund supporting a pan-European electric flexibility market is the culmination of Energy Web’s mission to enable any customer and any asset to participate in any energy market,” explained Walter Kok, CEO of Energy Web. “We are able to harness the flexibility of prosumers to provide value throughout the electricity value chain to make DERs the backbone of the European grid.”

The initiative will focus on integrating the Energy Web stack with a number of electricity market participants, including but not limited to:

- Mobile Network Operators: in order to assign digital identities to all GSM-enabled energy assets via SIM cards.
- IoT Service Providers: in order to co-develop SIM cards with electricity-sector roaming capabilities (i.e., independent of the Mobile Network Operator) that will allow energy assets to directly register with the Energy Web technology stack.
- OEMs: in order to enable new generations of energy assets to directly embed an identity on the stack.
- DSOs / TSOs: once integrated, DSOs will be able to tap into the flexible capacity of DERs connected to their networks and use them to procure flexibility. Likewise, TSOs will be able to simultaneously leverage DERs for regional / national markets in support of safer, greener, more-resilient electricity grids.

Since it is common for Mobile Network Operators, IoT service providers, and payment solution providers to operate their own proprietary data systems and architectures, the project will ensure interoperability between these systems — including multiple blockchains — and the public Energy Web stack. Ultimately, at least 1 million DERs will integrate into electricity markets as part of the Google.org-funded project.

“We have received an overwhelming number of applications for our Google.org Impact Challenge on Climate,” said Rowan Barnett, head of Google.org in EMEA and APAC. “The Energy Web project is so exciting because it has tremendous transformative potential, not just for Europe, but for power grids around the world. We are catching a glimpse of the future electricity system — one that is inherently decentralized and democratized.”

About Energy Web
Energy Web is a global, member-driven nonprofit accelerating the low-carbon, customer-centric energy transition by unleashing the potential of open-source, digital technologies. We enable any energy asset, owned by any customer, to participate in any energy market. The Energy Web Chain — the world’s first enterprise-grade, public blockchain tailored to the energy sector — anchors our tech stack. The Energy Web ecosystem comprises leading utilities, grid operators, renewable energy developers, corporate energy buyers, IoT / telecom leaders, and others.

For more, please visit https://energyweb.org.

Google Backs Energy Web to Harmonize Low-Carbon Electricity Markets Across Europe was originally published in Energy Web Insights on Medium, where people are continuing the conversation by highlighting and responding to this story.


Omidyar Network

Portals to Beautiful Futures: Trends to Watch in 2021 and Beyond


By David T. Hsu, Exploration & Future Sensing, Omidyar Network

The PORTALS report, in collaboration with Guild of Future Architects, flows from a yearlong process of imagining futures beyond the pandemic. Download the full report here.

It’s a warm spring day. Clouds hover over the horizon. The San Gabriel foothills smell of sage and soil after the rain. I feel my muscles begin to relax for the first time in a while.

I’m not alone. Though serious hurdles remain in the United States and globally, increasing numbers of American adults see hope on the horizon. A recurring Axios poll shows “hopeful” taking the lead over “stressed” and “frustrated” for the first time since the pandemic began. Roughly one-third of US adults have been vaccinated; the rest will be eligible by the end of April. Recent jobs reports show signs of an economy waking from hibernation. The American Rescue Plan and proposed American Jobs Plan would invest a combined $4 trillion in recovery and infrastructure.

These new possibilities bring a rush of questions. What does life look like after the pandemic for my community; my generation? How will we juggle work, education, and care to make ends meet? Whose lives and livelihoods are valued? After a year of isolation, where will we find belonging?

When we look back on 2020, will we remember it as the year everything changed?

“Disasters are extraordinarily generative,” Rebecca Solnit writes, but “there is no simple formula for what arises: it has everything to do with who or what individuals or communities were before the disaster.”

Our new PORTALS report imagines possible futures arising out of today’s disasters. It is the latest installment of the annual Trends to Watch series, curated by our Exploration and Future Sensing team to support Omidyar Network’s mission of reimagining systems to build more inclusive and equitable societies. Across our focus areas of Reimagining Capitalism, Responsible Technology, and Pluralism, the work of “reimagining” requires that we interrogate our own assumptions about how systems need to change by listening to diverse viewpoints.

Created in collaboration with Guild of Future Architects, the 2021 report distills four open-ended provocations, designed to stir imagination and action, from a yearlong process of collective foresight. Between November 2019 and February 2021, the Guild invited 1,000 people to participate in a series of virtual sessions asking how systems that shape our daily lives could become more just and inclusive by 2036. A critical part of looking ahead was also looking back at how we arrived at today’s social, economic, and political realities.

Here are the provocations:

- What if shared well-being became the standard of success for our nations?
- Are we ready to move from an era that rewards extraction to one that prioritizes regeneration?
- How will we move from an era of destabilizing information into an age of trusted wisdom?
- Can we dismantle industrial-age silos between work, home, education, play, and community?

Each one pairs with a “Spectrum of Possibility” exercise to stir your own thinking about how imagined futures could become reality. Taken together, we hope that these provocations offer useful foundations for people working now to build tomorrow’s systems.

Download the full report here.

The PORTALS report is a collaboration between Omidyar Network and the Guild of Future Architects. It was envisioned and written by a diverse group of fellows, employees, and consultants, based upon foresight sessions convened by the Guild throughout 2020 involving 1,000+ participants.

Guild of Future Architects contributors include Sharon Chang, Kamal Sinclair, Rachel Yezbick, Sheena Matheiken, Jessica Clark, Tony Patrick, Amelia Winger-Bearskin, and Robert Sinclair.

Omidyar Network’s Exploration and Future Sensing team includes Eshanthi Ranasinghe, David Hsu, Julia Solano, and Nicole Allred. We welcome your feedback at explorations@omidyar.com.

Portals to Beautiful Futures: Trends to Watch in 2021 and Beyond was originally published in Omidyar Network on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 12. April 2021

Oasis Open

Call for Consent for Exchange Header Envelope (XHE) V1.0 as an OASIS Standard

Defining a business-oriented artifact either referencing (as a header) or containing (as an envelope) a payload of one or more business documents. The post Call for Consent for Exchange Header Envelope (XHE) V1.0 as an OASIS Standard appeared first on OASIS Open.

Business document electronic header standard developed jointly by UN/CEFACT and OASIS is presented to members as a candidate for OASIS Standard.

The OASIS Business Document Exchange (BDXR) TC members [1] have approved submitting the following Committee Specification to the OASIS Membership in a call for consent for OASIS Standard:

Exchange Header Envelope (XHE) Version 1.0
Committee Specification 03
13 December 2020

This is a call to the primary or alternate representatives of OASIS Organizational Members to consent or object to this approval. You are welcome to register your consent explicitly on the ballot; however, your consent is assumed unless you register an objection [2]. To register an objection, you must:

Indicate your objection on this ballot, and Provide a reason for your objection and/or a proposed remedy to the TC.

You may provide the reason in the comment box or by email to the Technical Committee on its comment mailing list or, if you are a member of the TC, to the TC’s mailing list. If you provide your reason by email, please indicate in the subject line that this is in regard to the Call for Consent.

This Committee Specification was approved by the Technical Committee and was submitted for the required 60-day public review [3]. All requirements of the OASIS TC Process having been met [4][5], the Committee Specification is now submitted to the voting representatives of OASIS Organizational Members.

Details

The Call for Consent opens on 12 April 2021 at 00:00 UTC and closes on 25 April 2021 at 23:59 UTC. You can access the ballot at:

Internal link for voting members: https://www.oasis-open.org/apps/org/workgroup/voting/ballot.php?id=3607

Publicly visible link: https://www.oasis-open.org/committees/ballot.php?id=3607

OASIS members should ensure that their organization's voting representative responds according to the organization's wishes. If you do not know who your organization's voting representative is, go to the My Account page at

https://www.oasis-open.org/members/user_tools

then click the link for your Company (at the top of the page) and review the list of users for the name designated as “Primary”.

Description

The Exchange Header Envelope (XHE) has been developed jointly by UN/CEFACT and OASIS as the successor to the UN/CEFACT Standard Business Document Header (SBDH) version 1.3 and the OASIS Business Document Envelope (BDE) Version 1.1.

XHE defines a business-oriented artefact either referencing (as a header) or containing (as an envelope) a payload of one or more business documents or other artefacts with supplemental semantic information about the collection of payloads as a whole. An exchange header envelope describes contextual information important to the sender and receiver about the payloads, without having to modify the payloads in any fashion. This vocabulary is modeled using the UN/CEFACT Core Component Technical Specification Version 2.01.

The TC received 4 Statements of Use from IBM, ph-xhe open source project, Chasquis Consulting, and Efact [5].

URIs

The Committee Specification and related files are available here:

Exchange Header Envelope (XHE) Version 1.0
Committee Specification 03
13 December 2020

Editorial source (Authoritative)
https://docs.oasis-open.org/bdxr/xhe/v1.0/cs03/xhe-v1.0-cs03.xml

HTML
https://docs.oasis-open.org/bdxr/xhe/v1.0/cs03/xhe-v1.0-cs03-oasis.html

PDF:
https://docs.oasis-open.org/bdxr/xhe/v1.0/cs03/xhe-v1.0-cs03-oasis.pdf

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:

https://docs.oasis-open.org/bdxr/xhe/v1.0/cs03/xhe-v1.0-cs03-oasis.zip

Additional information

[1] OASIS Business Document Exchange (BDXR) TC
https://www.oasis-open.org/committees/bdxr/

TC IPR page
https://www.oasis-open.org/committees/bdxr/ipr.php

[2] Comments may be submitted to the TC through the use of the OASIS TC Comment Facility as explained in the instructions located at https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=bdxr

Comments submitted to the TC are publicly archived and can be viewed at https://lists.oasis-open.org/archives/bdxr-comment/

Members of the TC should send comments directly to bdxr@lists.oasis-open.org.

[3] Candidate for OASIS Standard Special Majority Vote:
https://www.oasis-open.org/committees/ballot.php?id=3560

[4] Public reviews:

15-day public review, 02 November 2020: https://lists.oasis-open.org/archives/members/202011/msg00001.html

Comment resolution log: http://docs.oasis-open.org/bdxr/xhe/v1.0/csd03/xhe-v1.0-csd03-comment-resolution-log.xlsx

60-day public review, 10 February 2021: https://lists.oasis-open.org/archives/members/202102/msg00003.html

Comment resolution log: TBD – no comments received

[5] Statements of Use

- IBM: https://lists.oasis-open.org/archives/bdxr/202101/msg00015.html
- ph-xhe open source project: https://lists.oasis-open.org/archives/bdxr/202101/msg00011.html
- Chasquis Consulting: https://lists.oasis-open.org/archives/bdxr/202101/msg00010.html
- Efact: https://lists.oasis-open.org/archives/bdxr/202101/msg00009.html

The post Call for Consent for Exchange Header Envelope (XHE) V1.0 as an OASIS Standard appeared first on OASIS Open.


Berkman Klein Center

What happened with digital rights in Africa in Q1 2021?

It was business as usual

"Human rights protected offline should also be protected online"

This was the resolution by the United Nations Human Rights Council (HRC) — the resolution on “the promotion, protection and enjoyment of human rights on the Internet” at the 38th Session of the Human Rights Council of July 2018 which affirmed what we all held sacrosanct — that digital rights are human rights.

Internet and digital access have deepened across Africa in the past 20 years, in the process ushering in new vistas of human development. An important dimension of this development has been the expansion of new media, where technologies such as social media have given voice to the hitherto voiceless and amplified once-stifled perspectives.

Given that the tight control many authoritarian regimes on the continent exercised relied on a firm grip on public discourse, the Internet and the new media it spawned soon found themselves in the crosshairs of governments from Cairo to Cape Town. Africa has one of the most censored digital environments in the world.

Pushing back against this censorship is a community of digital rights defenders. A basic, yet important tool in their efforts to promote media freedom is documentation. In the aid of digital rights defenders everywhere, here’s a summary of some of the most important developments in digital rights in Africa in Q1 2021.

Photo: Pixabay

Internet shutdowns and website blocks

Sage-like, I had written presciently in the first week of January that it was not a matter of if, but when, Internet shutdowns would occur in 2021. Like clockwork, the first Internet shutdown happened shortly afterwards in Uganda, on January 13, on the eve of presidential elections which saw young activist Robert Kyagulanyi (aka Bobi Wine) challenge Yoweri Museveni, the Ugandan President. The Internet shutdown was ordered by the government in retaliation for Facebook's move to block a number of pro-government accounts.

On November 4, 2020, the Ethiopian government cut off telephone and Internet service in the northern region of Tigray at the commencement of a military offensive against the Tigray People's Liberation Front (TPLF). The restrictions on telephone and Internet service persisted into 2021. Curiously, numerous Twitter accounts were created elsewhere in the country in the following days to fill the information vacuum created in the Tigray region.

Following opposition protests after run-off elections in Niger on February 21, mobile Internet service providers disrupted service. The election was won by Mohamed Bazoum, a former Interior Minister and candidate of the ruling party in the first democratic transition in the country.

On March 5, Facebook, YouTube, and Whatsapp were restricted on the Orange/Sonatel network in Senegal following protests against the arrest of opposition politician Ousmane Sonko of the Pastef party. Ousmane Sonko had been earlier accused of rape — an allegation he claimed was politically motivated.

On election day, March 21, there was a disruption to Internet connectivity in Congo. The election, won by incumbent President Denis Sassou Nguesso, who was seeking his fourth term in office, was marked by low voter turnout. The incumbent's main opponent, Guy-Brice Parfait Kolélas, died of Covid-19 on election day, after being flown to France for medical attention.

In January, People’s Gazette, a critical news website in Nigeria, was blocked, ostensibly on orders from the Nigerian government, following an investigative report about senior officials of the President.

Legal and Policy developments

In Kenya, the Finance Act of 2019 empowered the Kenya Revenue Authority to announce a Digital Services Tax in 2020. The tax came into effect on January 1 2021. The tax is 1.5% of the total value of business services rendered through online platforms and is expected to hinder the development of online businesses.

Also in Kenya, the Statute Law Miscellaneous Amendment Act, an amendment to the Official Secrets Act, was signed on December 11 2020. According to AccessNow, it gives sweeping powers to the Cabinet Secretary of Interior and Coordination of National Security to access data from any phone or computer.

In a landmark judgment in favour of privacy in February, South Africa’s Constitutional Court declared the bulk interception of communications by the country’s spy agencies unlawful. The case was based on evidence that the state spied on investigative journalist Stephen Patrick Sole of amaBhungane Centre for Investigative Journalism while communicating with a source. The applicants including amaBhungane Centre argued that the Regulation of Interception of Communications Act of 2002 (RICA) and the National Strategic Intelligence Act 39 of 1994 (NSIA) violate the right to privacy.

In Uganda, the Tax Amendment Bills, 2021 are before Parliament for debate. If passed into law, they will take effect from 1 July 2021. Among their goals are repealing the over-the-top (OTT) services tax (reducing the over-the-top tax rate from 20% to 12%) and introducing a new tax on internet data.

Zambia enacted a Cybersecurity and Cybercrime Act which gives sweeping powers to the government to exercise greater control over social media and conduct communications surveillance without a court order. The law was quickly challenged in the High Court of Zambia by a coalition of civil society organizations asking the court to declare it unconstitutional.

Arrests and detentions

In Zimbabwe, Devine Panashe Maregere and his wife Vongai Nomatter Chiminya were arrested in January for sending a WhatsApp message claiming that President Emmerson Mnangagwa had died of Covid-19. They were charged at Beitbridge magistrate court with publishing or communicating falsehoods.

On February 9, 2021, Innocent Bahati, a poet and singer, was reported missing. He was last seen a few days prior in Nyanza, Southern Province. Innocent’s poems, which he recited in videos posted on YouTube, focused on issues such as poverty and criticism of the government, for which he had once been detained.

Countering Business as usual in Q2 2021

The persistence of digital rights violations in their most brazen forms, such as Internet shutdowns and clampdowns on freedom of expression, has roots in the broader dysfunctional state of politics and society on the continent. We can only improve human rights standards when there has been a more fundamental change in the way our societies are run, which will include key institutional reforms in electoral democracy and economic justice. It is a campaign we commenced in Q1 2021 and will gladly continue in Q2 2021.

What happened with digital rights in Africa in Q1 2021? was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


DIF Blog

🚀DIF Monthly #17 (April, 2021)

📯  Sidetree has reached v1, lots of groups are preparing sessions and demos for IIW (see below for DIF coupon!), Steering Committee elections are just around the corner, a whole hackathon kicked off organized by a member organization and devoted in large part to building on DIF specs and libraries, and work continues apace!

Table of contents

1. Group Updates
2. Member Updates
3. Funding
4. DIF Media
5. Community
6. Jobs
7. Metrics
8. Join DIF

🚀 Foundation News

Register for the upcoming IIW now! Use the coupon code DIF_XXXII_20 to get 20% off your ticket(s)!

Steering Committee Elections are just around the corner!

We made a quick explainer on our blog to clear up the procedure in anticipation of next month's elections.

🛠️ Group Updates

☂️ InterOp WG (cross-community)

- Justin Richer & Adrian Gropper on the newest GNAP core spec. GNAP (Grant Negotiation and Authorization Protocol) defines a mechanism for delegating authorization to a piece of software, and conveying that delegation to the software. This delegation can include access to a set of APIs as well as information passed directly to the software.
- Discussion on Micro-Grant/Implementation-Bounty setting by DIF for the community (ideation and planning).
- Machine readable governance framework straw man (Telegramsam); trust frameworks at ToIP (Sankarshan).
- Stephen Curran gave a tour of the recently-launched cross-agent Aries test harness, and an overview of the interop landscape of Aries, the different profiles, and the Code With Us grant program to bring us closer to more full-featured interoperability with the non-Aries world.
- Anil John and some SVIP cohort members gave a Plugathon report-out focusing specifically on the open-sourced conformance testing aspect of the program: slides and demos from the event, the W3C-CCG VC-HTTP-API specification, and the updated W3C-CCG/Digital Bazaar CHAPI Test Suite.
- In other interop news from the broader community, there was also an Interop Testing Survey to gauge interest and motivation to advance DIF member Gataca's Verifier Universal Interface, and a video guided tour of DIF member iGrant.io’s Aries Testing Journey and their open-sourced Aries Interop Playground (both done through the interop program of ESSIF-LAB).

💡 Identifiers & Discovery

- Discussion on multisig and delegation in DID methods and verification methods: review of Verifiable Conditions; discussion on advanced verification methods involving smart contracts and others.
- New did:solid method (but not related to Solid) by DIF member Civic; discussion of the mechanics of the Solana blockchain, which the method utilizes.
- Member-run universal resolver; Universal Registrar discussion.
- DID WG: timeline and what the current "CR" state means; review and discussion of the DID WG test suite; policies of the DID Method Registry from the last few DID-WG topic calls.

🛡️ Claims & Credentials

- Presentation Exchange reached v1.0.0 status. It is now an official DIF Ratified Specification.
- Work and discussion on current Credential Manifest and VC Marketplace items.
- Credential Manifest: how issuers differ from verifiers in their publication needs and mechanisms; should CM support output of multiple credentials?
- VC Marketplace: the VC business-model/marketplace-use-case sandbox/discussion group held detailed discussions not just about payment and incentivization but also discovery mechanisms, semantic definition/propagation, and other ecosystem-scale design issues. The group is moving its materials into Spec-Up format and editing them in a spec-ward direction, and adding more use-case categories to its already extensive list.
- Discussion of the current status of the LDAP VC-Revocation mechanism proposed by Spherity (Germany) and AutoSeg (Brazil) as a VC data model extension.
- Quick overview of UVI (eSSIF-Lab), an extension of the VC-HTTP-API to help discovery and verification from a VC verifier's point of view.

🔓 DID Auth

- DIF-OIDF joint meetings - follow along on the Bitbucket issues if the timing doesn't work for you!
- On 29th April, SIOP/DIF will be represented by Kristina at the OIDF workshop.
📻 DID Comm

Recently merged issues:
- The long-discussed 157 - JSON-LD Context
- 167 - property to accept media type
- 171 - media type discussion
- 166 - support for Curve P-384
- Repository restructure to fix image display in the spec - 170 (DIF blog post on how to use Spec-Up coming soon!)

Still open issues:
- Fix inconsistencies with to/next attributes in a forwarded message
- 172 - Attachments WIP
- 161 - Encrypted Attachments

Revisited issues:
- 165 - cty of JWM
- 162 - Rewrapping forwarded messages - awaiting PR

Peer DID Method 2 produced some good questions and was eventually merged - pull 26

📦 Secure Data Storage

- Secure Data Storage Features: EDV Client Features, EDV Server Features, HUB Features.
- Decentralized Twitter (Dewitter) Requirements List: Assumptions, Principles, Requirements, and Other Considerations.
- Confidential Storage Specification Refactoring

🔧 KERI

- Steady improvement and extension of the Q&A document continues in parallel to the spec/whitepaper and the implementations.
- KERI is moving from a first-cut "promiscuous mode" proof of concept to the next stage of development: securing an internal interface for local events to protect a controller's authoritative key event log from contamination by potentially malicious external events and receipts. This entails designing a query mechanism for communicating requests for key events that manages the multisig escrow and the duplicity-detection log.
- Key threads to follow on GitHub: the Query Mode discussion (a whopper!), a roadmapping thread, and a conceptual/mental-model alignment thread on how the concept of the transaction-event log ("TEL") is evolving as KERI gets more complex and starts layering on security, duplicity/corruption-detection, multi-sig, etc.

⚙️ Product Managers

- Discussion on Wallet Security WG.
- Product intros: GlobalID, Serto.
- DHS SVIP Interoperability update: Deck, Transmute post.

🪙 Finance & Banking SIG

- Alex David, Global Business Development Manager @ Raon, gave a presentation followed by discussion during the last meeting.

🏥 Healthcare SIG

- Meetings will resume after IIW with a recap of relevant demos and sessions.

📋 Additional Agenda Items

- Wallet Security - soon to be a WG now that chairs have committed to lead the group!
- Hospitality & Travel SIG still chartering and holding exploratory meetings to tease technical requirements out of use cases founding members have been working on.
- Documentation Corps started their work to create a FAQ page for the decentralized identity space.

🦄 Member Updates

DIF Associate members are encouraged to share tech-related news in this section. For more info, get in touch with operations.

Condatis

External Identities at Microsoft Ignite: Condatis roundup. This blog focuses on specific advances within the world of digital identity that our customers and readers will find helpful.

Affinidi is hosting their first ever virtual hackathon on the topic of Verifiable Credentials. They are inviting developers to leverage open source technologies (mostly developed and/or managed in DIF!) and Affinidi’s SDKs and APIs to build applications for Healthcare, Fintech and Open categories.
March 26th to May 9th, 2021

💰 Funding

eSSIF LAB (EU) - Infrastructure-Oriented call

Infrastructure-Oriented Open Call, with grants of up to 155 000 € (9-month projects). The call is open to European innovators and focuses on the development and interop testing of open-source SSI components. Some examples of SSI components include wallets, server proxies, revocation, cryptographic enforcer policies, integration, interoperability, and compatibility, just to name a few. Please note the final-round deadline: 30 June 2021, 13:00 CET (Brussels local time).

Apply here

NGI Open Calls (EU)

Funding is allocated to projects using short research cycles targeting the most promising ideas. Each of the selected projects pursues its own objectives, while the NGI RIAs provide the program logic and vision, technical support, coaching and mentoring, to ensure that projects contribute towards a significant advancement of research and innovation in the NGI initiative. The focus is on advanced concepts and technologies that link to relevant use cases, and that can have an impact on the market and society overall. Applications and services that innovate without a research component are not covered by this model. Varying amounts of funding.

Learn more here.

🖋️ DIF Media

Drilling down: Co-Development

Having gone over the subtleties and complexities of open-source software and open standards in general, we will now drill down into how exactly DIF advances both, in the form of a “Frequently Asked Questions” (FAQ) document. In this FAQ, we'll cover what co-development means in general, at DIF, beyond DIF, and in legal terms. Co-development isn't just in the DNA of DIF; DIF's primary purpose is to set the stage for co-development, because we feel it's the fastest way forward for the adoption and evolution of our field.

Steering Committee Elections are just around the corner!

We made a quick explainer on our blog to clear up the procedure in anticipation of next month's elections.

Sidetree has reached v1 status!

After years of robust collaboration and hard work, the Sidetree Protocol has been iterated and specified to DIF's standards. If you have only a hazy notion of what Sidetree is and how it compares to "regular" DID methods and blockchains, now might be a good time to read our new blog post outlining the major implementations and showcasing the contributing companies.

🎈 Events

Internet Identity Workshop #32
April 20 - 22, 2021 (Online Event)

The Internet Identity Workshop has been finding, probing and solving identity issues twice every year since 2005. IIW32 will be online, on an altered schedule to accommodate more time zones. Every IIW moves topics, code and projects further forward. People from the traditional IAM industry and tech giants collaborate deeply with researchers, journalists, activists, startups, tinkerers and thinkers! Use this coupon code for 20% off your ticket(s): DIF_XXXII_20.

Identiverse 2021
June 21 - 23, 2021: Hybrid Experience
June 23 - July 2, 2021: Continued Experience

Check out the Agenda of Identiverse 2021!

OpenID Foundation Virtual Workshop
April 29, 2021 - 6:00 PM - 9:00 PM CEST (Online Event)

OpenID Foundation Workshops provide technical insight and influence on current Internet identity standards. Among others, this workshop will provide updates on all active OpenID Foundation Working Groups as well as the OpenID Certification Program. Technologists from member organizations and others will provide updates on key issues and discuss how they help meet social, enterprise and government Internet identity challenges.

💼 Jobs

Members of the Decentralized Identity Foundation are looking for:

A remote lead for an SSI standards/strategy team!

Check out the available positions here.

🔢 Metrics

Medium: 1.3k followers | 3,932 minutes read
Twitter: 4,339 followers | 19.3k impressions | 3,793 profile visits
Website: 28,776 unique visitors

In the last 30 days.

🆔 Join DIF!

If you would like to get involved with DIF's work, please join us and start contributing.

Can't get enough of DIF?
follow us on Twitter
join us on GitHub
subscribe on YouTube
read us on our blog
or read the archives

Got any feedback regarding the newsletter?
Please let us know - we are eager to improve.


Sidetree Protocol reaches V1

The DIF Steering Committee has approved the first major release of the Sidetree Protocol specification, "v1" so to speak. Here is a snapshot of the four companies and four implementations that stretched and built the specification.

Scalable, Flexible Infrastructure for Decentralized Identity

This week, the DIF Steering Committee officially approved the first major release of the Sidetree Protocol specification, "v1" so to speak. This protocol has already been implemented, and four of its implementers have been collaborating intensively for over a year on expanding and extending this specification together.

What exactly is a “Sidetree”?

Sidetree is a protocol that extends “decentralized identifiers” (DIDs), one of the core building blocks of decentralized identity. Decentralized identifiers (DIDs) enable a person or entity to securely and directly “anchor” their data-sharing activities to a shared ledger, secured by cryptography. The first generation of DID systems accomplished this with a 1-to-1 relationship between “blockchain addresses” (cryptographic identities) and the more flexible, powerful addresses called DIDs. The latter functioned as privacy-preserving extensions of the blockchain addresses to which they were closely coupled. In this way, each DID effortlessly inherited the formidable security guarantees of those blockchains, but in many cases it also inherited scalability problems and economic models that were a bad fit for many DID use-cases.

Sidetree is a systematic, carefully-engineered protocol that loosens that coupling between anchor-points on a distributed data system (usually a blockchain) and the DID networks anchored to them. Crucially, it replaces the 1-to-1 relationship with a 1-to-many relationship, pooling resources and security guarantees. Depending on the use-case and implementation strategies chosen, the protocol can optimize for scalable performance, for developer-friendly ergonomics and SDKs, for the portability of DIDs and networks of DIDs across multiple anchoring systems, and even for high-availability in low-connectivity contexts where a global blockchain cannot be relied upon directly.
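
To make the 1-to-many idea concrete, here is a minimal conceptual sketch in TypeScript. It is not the actual Sidetree wire format or file structure (which the specification defines precisely); the operation shape, the batching step, and all names are illustrative assumptions only.

```typescript
import { createHash } from "crypto";

// Illustrative only: the real Sidetree batch/anchor file formats are defined in the spec.
interface DidOperation {
  didSuffix: string;            // unique suffix of the DID being created or updated
  type: "create" | "update" | "recover" | "deactivate";
  delta: unknown;               // signed patch data (simplified here)
}

// Many DID operations are aggregated off-chain into a single content-addressed batch...
function batchOperations(ops: DidOperation[]): { batch: string; anchorString: string } {
  const batch = JSON.stringify(ops);                        // in practice: CAS files (e.g. IPFS)
  const anchorString = createHash("sha256").update(batch).digest("hex");
  return { batch, anchorString };
}

// ...and only the small anchor string is written to the underlying ledger,
// so one on-chain transaction can secure a very large number of DID operations.
const { anchorString } = batchOperations([
  { didSuffix: "EiA...", type: "create", delta: {} },
  { didSuffix: "EiB...", type: "update", delta: {} },
]);
console.log(`anchor this single hash on-chain: ${anchorString}`);
```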

The name “sidetree” combines two hints as to its early technical inspirations and superpowers. Each Sidetree network functions as a kind of identity-specific “Layer 2” overlay network where participating nodes root aggregated operational data into transactions of the underlying chain. This mechanism has many high-level conceptual similarities with the “sidechains” of other “Layer 2” systems, such as the Lightning network running atop Bitcoin or state channel implementations on Ethereum. It also shares with Merkle “trees” (and DAGs like IPFS) the self-certifying property of content-addressability, a core building block of decentralized and distributed systems.

Leveraging concepts from sidechains and “Layer 2” network protocols, Sidetree was first proposed by Microsoft’s Daniel Buchner and has been incubated in the DIF community, evolving along the way with major contributions from a growing list of DIF members.

The team that delivered the specification

Microsoft (Redmond, WA, USA)

A global consumer and enterprise app, service, hardware, and cloud infrastructure provider whose mission is to empower every person to achieve more. Microsoft is proud to have worked on Sidetree and implemented the Sidetree protocol via its contributions to ION. As a key piece of infrastructure that is foundational to its Decentralized Identity work, Microsoft is committed to the continued development of Sidetree and ION in DIF.

SecureKey (Toronto, ON, Canada)

SecureKey is a leading digital identity and authentication provider, and is a champion of the ecosystem approach to decentralized identity and verifiable credentials, revolutionizing the way consumers and organizations approach identity and attribute sharing in the digital age. This ecosystem-first philosophy informs our investment in Sidetree as a protocol for extensibility and scalability, one that can evolve its feature set and its network model over time. Of particular technological interest to us is how Sidetree can be overlaid on a wide variety of ledger and propagation systems. This will enable identity systems that span many use cases and work across public blockchains, federation and witness protocols, and permissioned blockchains without being locked to any particular ledger technology.

Transmute Industries (Austin, TX, USA)

Transmute uses decentralized identifiers (DIDs) and verifiable credentials (VCs) to secure critical trade data by digitizing key trade documents so that they’re traceable and verifiable anywhere in the world, easily accessible and selectively shareable, searchable and auditable, and impossible to forge or alter. Transmute contributed to Sidetree’s development because it leverages batch processing capabilities to achieve enterprise scale and retains maximum optionality for our customers, allowing their business to span many blockchains and trust frameworks. Transmute sees Sidetree-based networks as necessary for scaling up decentralized identity capabilities to a global enterprise scale, where thousands of verifiable transactions per second can be processed at an unbeatable price.

MATTR Global (Auckland, New Zealand)

Mattr works with communities and a growing network of companies to shift industries like digital identity towards a more equitable future, providing tools to support digital inclusion, privacy and end-user control. Sidetree represents a significant leap forward in thinking around how to create truly decentralized infrastructure for resilient identifiers. We welcome the agnostic and extensible approach not just to distributed ledgers but also to content addressable storage and other building-blocks of flexible infrastructure. We look forward to integrating many of the DID systems coming out of the Sidetree standardization effort.

The first generation of Sidetree Systems

Transmute maintains Sidetree ledger adapters for Ethereum, Amazon QLDB, Bitcoin and Hyperledger Fabric. We also support interoperability tests with DID Key, the Universal Wallet Interop Spec, the VC HTTP API, and Traceability Vocabulary. Transmute has built Sidetree.js, an implementation of the Sidetree protocol based on DIF's codebase that focuses on modularity: it is a TypeScript monorepo where each component of a Sidetree node (Ledger, Content Addressable Storage, Cache database) can be substituted with different implementations that use a common interface.
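
As a rough illustration of that modularity, the sketch below shows how a node might be assembled from swappable components behind shared interfaces. The interface and class names are invented for illustration and are not the real Sidetree.js API.

```typescript
// Illustrative interfaces only -- not the real Sidetree.js type definitions.
interface LedgerAdapter {
  write(anchorString: string): Promise<string>;   // returns a transaction id
  read(sinceTransaction?: string): Promise<string[]>;
}

interface ContentAddressableStorage {
  put(content: Uint8Array): Promise<string>;      // returns a content hash / CID
  get(hash: string): Promise<Uint8Array>;
}

interface CacheDatabase {
  save(key: string, value: unknown): Promise<void>;
  load(key: string): Promise<unknown>;
}

// A node is assembled from whichever implementations fit the deployment:
// Ethereum or QLDB ledger, IPFS or S3-backed CAS, in-memory or MongoDB cache, etc.
class SidetreeNode {
  constructor(
    private ledger: LedgerAdapter,
    private cas: ContentAddressableStorage,
    private cache: CacheDatabase,   // used for resolved-state lookups (omitted here)
  ) {}

  async anchorBatch(batch: Uint8Array): Promise<string> {
    const hash = await this.cas.put(batch);   // store the batch off-chain
    return this.ledger.write(hash);           // anchor only its hash on the ledger
  }
}
```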

SecureKey  has created a ledger-agnostic Go implementation of Sidetree along with Orb and Hyperledger Fabric variations built on top. The did:orb method enables independent organizations to create decentralized identifiers that are propagated across a shared decentralized network without reliance on a common blockchain. By extending Sidetree into a Fediverse of interconnected registries, Orb provides the foundation for building digital ecosystems on top of decentralized identifiers using a federated, replicated and scalable approach.

Microsoft is a primary contributor to ION, an open source, public, permissionless implementation of Sidetree on the Bitcoin ledger. There are several repositories and public utilities that make working with ION easier, including:

- ION GitHub repo: the main repository for the code that powers ION nodes
- ION Tools: JavaScript libraries for Node.js and browser environments that make using DIDs and interacting with the ION network easier for web developers
- ION Install Guide: A step-by-step guide for installing an ION node
- ION Explorer: A graphical interface for viewing DIDs and auditing other data transactions published to the public ION network.

What's next for Sidetree

One significant feature on the horizon is to add support for pruning of verbose lineage data (which is no longer needed to maintain the secure backbone of DIDs in a Sidetree implementation) at Sidetree’s anchor points. This addition will allow Sidetree-based networks to purge upwards of 95% of legacy operation data in a decentralized way that maintains all of the security guarantees the protocol currently makes.

Another near-future feature is the so-called “DID Type Table.” DIDs in various DID method implementations may be typed to provide an indication as to what the DID might represent. The Sidetree WG will publish a table of types (not including human-centric types) that stand for organizations, machines, code packages, etc., which DID creators can use if they want to tag a DID with a given type.

The medium-term roadmap is up for discussion, so if you have ideas get involved!


Trust over IP

ToIP Foundation Hosts the Interoperability Working Group for Good Health Pass


Digital health passes — often mischaracterized as “vaccine passports” in the popular press — are making headlines as a key component in the drive to restore global travel and restart the global economy after the massive impact of the COVID-19 pandemic.

Enabling individuals to receive and selectively share proof-of-test, proof-of-vaccination, and proof-of-recovery with the highest standards for security, privacy and data protection will allow destination countries and travel systems worldwide to accept credentials from multiple market vendors. But concerns related to equity and access can only be addressed if these health pass implementations are designed to be interoperable.

As the leading global consortium for interoperable digital trust infrastructure, the ToIP Foundation has partnered with the Good Health Pass Collaborative – a project of ID2020 – to host a new Working Group focused on the core issues of interoperability, privacy, and equity for digital health passes. The Interoperability Working Group for Good Health Pass consists of nine drafting groups, each focused on a specific interoperability challenge as defined in the Interoperability Blueprint Outline.

“The Good Health Pass Collaborative is bringing people together to solve a set of problems that affect the entire world,” said John Jordan, executive director of the ToIP Foundation. “This ambitious effort uniquely aligns with the mission of ToIP because it requires interoperable digital credentials that can be accepted and verified anywhere they are needed. Getting this right, and doing so now, will not only make it safe for people to travel again, it will open the door for new tools and services that can solve other challenging problems that also require global-scale digital trust. For these reasons, ToIP is honored to contribute to this urgent global mission by hosting the Interoperability Working Group on behalf of the Good Health Pass Collaborative.”

Each drafting group, consisting of volunteer representatives from the health, travel, technology, and policy sectors around the world, will first conduct an intensive 30-day sprint to develop an initial set of draft recommendations. This will be followed by a second 30-day community and public review process to develop a final set of recommendations.

“Digital health passes – if properly designed and implemented – could offer a path to safely restore domestic and international travel, resume certain aspects of public life, and restart the global economy,” said ID2020 Executive Director Dakota Gruener. “Collaboration is critical at this juncture. Our organizations share a commitment to ensuring that digital health passes are designed and implemented in ways that serve the needs of the individuals and institutions that rely on them, while simultaneously protecting core values like privacy, civil liberties, and equity. ToIP has developed a powerful set of tools and models for digital trust frameworks, and we are delighted to be partnering with them in this critically important effort.”

The nine drafting groups collaborating within the new Working Group are:

- Paper Based Credentials will define how a paper-based alternative can be created for any digital health pass so access will be available to all.
- Consistent User Experience will specify the common elements required so that individuals can easily, intuitively, and safely use digital health pass implementations.
- Standard Data Models and Elements will determine the core data items needed across all digital health pass implementations for both COVID-19 testing and vaccinations.
- Credential Formats, Signatures, and Exchange Protocols will specify the requirements for technical interoperability of Good Health Pass implementations.
- Security, Privacy, and Data Protection will define the safety requirements for Good Health Pass compliant implementations.
- Trust Registries will specify how verifiers can confirm that a digital health pass has been issued by an authorized issuer.
- Rules Engines will define how digital health pass apps can access different sources of policy information to determine what test or vaccination status is needed for a specific usage scenario.
- Identity Binding will specify the options for verifying that the holder of a digital health pass is the individual who received the test or vaccination credential.
- Governance Framework will define the overall set of policies that must be followed for an implementation to qualify as Good Health Pass compliant.

By adhering to the Good Health Pass Interoperability Blueprint that will be synthesized from the outputs of these nine drafting groups, airlines, airports, hospitality industries, international customs officials and others will be able to process visitors easily without requiring additional unnecessary steps mandated by proprietary systems. Travelers will not be confused about which credential they need for each point of verification. Moreover, since individuals will be fully in control of their own personal data in credentials in their own wallets or devices, they can be confident that their private health data is not being tracked or misused.  

Interested organizations are invited to join the ToIP Foundation to participate directly in this new Working Group or in the public comment period in May. They are also encouraged to join the Good Health Pass Collaborative at ID2020 to participate in the construction, adoption, and advocacy of the Good Health Pass Interoperability Blueprint.

The post ToIP Foundation Hosts the Interoperability Working Group for Good Health Pass appeared first on Trust Over IP.

Sunday, 11. April 2021

Ceramic Network

What is Ceramic?

Ceramic is a decentralized content computation network for a world of open source information.

Ceramic is a public, permissionless, open source protocol that provides computation, state transformations, and consensus for all types of data structures stored on the decentralized web. Ceramic's stream processing enables developers to build with dynamic information without trusted database servers to create powerful, secure, trustless, censorship-resistant applications.

This overview introduces how:

- Decentralized content computation gives rise to a new era of open source information
- Stream processing provides an appropriate framework for dynamic, decentralized content
- You can use Ceramic to replace your database with a truly decentralized alternative

To skip ahead and get started building, try Playground to demo Ceramic in a browser application, the Quick Start guide to learn the basics using the Ceramic CLI, or follow the Installation page to integrate Ceramic into your project.

This post was originally published on the Ceramic documentation site. Contents here may fall out of date, but you can always find the most up to date version here.

The internet of open source information

At its core, the internet is a collection of applications running on stateful data sources – from identity systems and user tables to databases and feeds for storing all kinds of content generated by users, services, or machines.

Most of the information on today's internet is locked away on application-specific database servers designed to protect data as a proprietary resource. Acting as trusted middlemen, applications make it difficult and opaque for others to access this information by requiring explicit permissions, one-off API integrations, and trust that returned state is correct. This siloed and competitive environment results in more friction for developers and worse experiences for users.

Along other dimensions, the web has rapidly evolved into a more open source, composable, and collaborative ecosystem. We can observe this trend in open source software enabled by Git's distributed version control and in open source finance enabled by blockchain's double-spend protection. The same principles of open source have not yet been applied to content.

The next wave of transformative innovation will be in applying the same open source principles to the world's information, unlocking a universe of content that can be frictionlessly shared across application or organizational boundaries. Achieving this requires a decentralized computation network designed specifically for content with flexibility, scalability, and composability as first class requirements.

Decentralized content computation

Open sourcing the content layer for applications requires deploying information to a public, permissionless environment where files can be stored, computation can be performed, state can be tracked, and others can easily access content.

Advancements in other Web3 protocols have already achieved success in decentralized file storage. As a universal file system for the decentralized web, IPFS (including IPLD and Libp2p) provides an extremely flexible content naming and routing system. As a storage disk, durable persistence networks (such as Filecoin, Arweave, and Sia) ensure that the content represented in IPFS files are persisted and kept available. This stack of Web3 protocols performs well for storing static files, but on its own lacks the computation and state management capacity for more advanced database-like features such as mutability, version control, access control, and programmable logic. These are required to enable developers to build fully-featured decentralized applications.

Ceramic enables static files to be composed into higher-order mutable data structures, programmed to behave in any desired manner, and whose resulting state is stored and replicated across a decentralized network of nodes. Ceramic builds upon and extends the IPFS file system and underlying persistence networks, as well as other open standards in the decentralized ecosystem, with a general-purpose decentralized content computation substrate. Due to Ceramic's permissionless design and unified global network, anyone in the world can openly create, discover, query, and build upon existing data without needing to trust a centralized server, integrate one-off APIs, or worry if the state of information being returned is correct.

Streams

Ceramic's decentralized content computation network is modeled after various stream processing frameworks found in Web2. In these types of systems, events are ingested, processed as they arrive, and the resulting output is applied to a log. When queried and reduced, this log represents the current state of a piece of information. This is an appropriate framework for conceptualizing how dynamic information should be modeled on the decentralized web. Furthermore, because the function that processes incoming events on any particular stream can be custom-written with logic for any use case, it provides the general-purpose flexibility and extensibility needed to represent the diversity of information that may exist on the web.

On Ceramic, each piece of information is represented as an append-only log of commits, called a Stream. Each stream is a DAG stored in IPLD, with an immutable name called a StreamID, and a verifiable state called a StreamState. Streams are similar in concept to Git trees, and each stream can be thought of as its own blockchain, ledger, or event log.
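
In code terms, a stream can be pictured roughly as follows. These are simplified, illustrative types only, not Ceramic's actual interfaces.

```typescript
// Illustrative only -- simplified from the description above, not Ceramic's real types.
type StreamID = string;          // immutable, content-derived name of the stream
type CID = string;               // IPLD content identifier of a single commit

interface Commit {
  prev?: CID;                    // link to the previous commit, forming a DAG / append-only log
  payload: unknown;              // signed content or patch carried by this update
}

interface StreamState {
  streamId: StreamID;
  log: CID[];                    // the append-only list of commit CIDs
  content: unknown;              // current state after applying every commit in order
}
```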

StreamTypes

Each stream must specify a StreamType, which is the processing logic used by the particular stream. A StreamType is essentially a function that is executed by a Ceramic node upon receipt of a new commit to the stream that governs the stream's state transitions and resulting output. StreamTypes are responsible for enforcing all rules and logic for the stream, such as data structure, content format, authentication or access control, and consensus algorithm. If an update does not conform to the logic specified by the StreamType, the update is disregarded. After applying a valid commit to the stream, the resulting StreamState is broadcast out to the rest of the nodes on the Ceramic Network. Each of the other nodes that are also maintaining this stream will update their StreamState to reflect this new transaction.
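
Conceptually, a StreamType behaves like a state-transition function folded over the commit log. The following is an illustrative sketch under simplified assumptions about commit and state shapes; it is not the real StreamType interface.

```typescript
// Illustrative sketch of the "StreamType = state transition function" idea.
interface SimpleCommit {
  signer: string;                 // DID that signed this commit
  patch: Record<string, unknown>; // proposed content changes
}

interface SimpleState {
  controller: string;             // DID allowed to update the stream
  content: Record<string, unknown>;
}

// Executed by a node whenever a new commit for the stream arrives.
// Commits that violate the StreamType's rules are simply disregarded; valid ones
// produce the next state, which the node broadcasts to other nodes pinning the stream.
function applyCommit(state: SimpleState, commit: SimpleCommit): SimpleState {
  if (commit.signer !== state.controller) {
    return state;                                   // authentication rule not met: ignore
  }
  return { ...state, content: { ...state.content, ...commit.patch } };
}

// Reducing the whole log yields the current StreamState content.
const currentState = [{ signer: "did:3:abc", patch: { name: "alice" } }].reduce(
  applyCommit,
  { controller: "did:3:abc", content: {} },
);
```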

Ceramic's flexible StreamTypes framework enables developers to deploy any kind of information that conforms to any set of arbitrary rules as a stateful stream of events. Ceramic clients come pre-packaged with a standard set of StreamTypes that cover a wide range of common use cases, making it easy to get started building applications:

- Tile Document: a StreamType that stores a JSON document, providing similar functionality as a NoSQL document store. Tile Documents are frequently used as a database replacement for identity metadata (profiles, social graphs, reputation scores, linked social accounts), user-generated content (blog posts, social media, etc.), indexes of other StreamIDs to form collections and user tables (IDX), DID documents, verifiable claims, and more. Tile Documents rely on DIDs for authentication and all valid updates to a stream must be signed by the DID that controls the stream.
- CAIP-10 Link: a StreamType that stores a cryptographically verifiable proof that links a blockchain address to a DID. A DID can have an unlimited number of CAIP-10 Links that bind it to many different addresses on many different blockchain networks. CAIP-10 Links also rely on DIDs for authentication, same as the Tile Document.
- Custom: You can implement your own StreamType and deploy it to your Ceramic node if the pre-packaged StreamTypes are not suitable for your use case.

Authentication

StreamTypes are able to specify their authentication requirements for how new data is authorized to be added to a particular stream. Different StreamTypes may choose to implement different authentication requirements. One of the most powerful and important authentication mechanisms that Ceramic StreamTypes support is DIDs, the W3C standard for decentralized identifiers. DIDs are used by the default StreamTypes (Tile Documents and CAIP-10 Links).

DIDs provide a way to go from a globally-unique, platform-agnostic string identifier to a DID document containing public keys for signature verification and encryption. Ceramic is capable of supporting any DID method implementation. Below, find the DID methods that are currently supported by Ceramic:

- 3ID DID Method: A DID method that uses Ceramic's Tile Document StreamType to represent a mutable DID document. 3IDs are typically used for end-user accounts. When 3IDs are used in conjunction with IDX and the 3ID Keychain (as is implemented in 3ID Connect), a 3ID can easily be controlled with any number of blockchain accounts from any L1 or L2 network. This provides a way to unify a user's identity across all other platforms.
- Key DID Method: A DID method statically generated from any Ed25519 key pair. Key DIDs are typically used for developer accounts. Key DID is lightweight, but the drawback is that its DID document is immutable and has no ability to rotate keys if it is compromised.
- NFT DID Method (coming soon): A DID method for any NFT on any blockchain. The DID document is statically generated from on-chain data. The DID associated with the blockchain account of the asset's current owner (using CAIP-10 Links) is the only entity authorized to act on behalf of the NFT DID, authenticate in DID-based systems, and make updates to streams or other data owned by the NFT DID. When ownership of the NFT changes, so do the controller permissions.
- Safe DID Method (coming soon): A DID method for a Gnosis Safe smart contract on any blockchain. Typically used for organizations, DAOs, and other multi-sig entities.

Ceramic Network

The Ceramic Network is a decentralized, worldwide network of nodes running the Ceramic protocol that communicate over a dedicated topic on the Libp2p peer-to-peer networking protocol. Ceramic is able to achieve maximum horizontal scalability, throughput, and performance due to its unique design.

Sharded execution environment

Unlike traditional blockchain systems where scalability is limited to a single global virtual execution environment (VM) and the state of a single ledger is shared between all nodes, each Ceramic node acts as an individual execution environment for performing computations and validating transactions on streams – there is no global ledger. This "built-in" execution sharding enables the Ceramic Network to scale horizontally to parallelize the processing of an increasing number of simultaneous stream transactions as the number of nodes on the network increases. Such a design is needed to handle the scale of the world's data, which is orders of magnitude greater than the throughput needed on a financial blockchain. Another benefit of this design is that a Ceramic node can perform stream transactions in an offline-first environment and then later sync updates with the rest of the network when it comes back online.

Global namespace

Since all nodes are part of the same Ceramic Network, every stream on Ceramic exists within a single global namespace where it can be accessed by any other node or referenced by any other stream. This creates a public data web of open source information.

Additional node responsibilities

In addition to executing stream transactions according to StreamType logic, Ceramic nodes also maintain a few other key responsibilities:

- StreamState storage: A Ceramic node only persists StreamStates for the streams it cares to keep around, a process called "pinning." Different nodes will maintain StreamStates for different streams, but multiple nodes can maintain the state of a single stream.
- Commit log storage: A Ceramic node maintains a local copy of all commits to the streams it is pinning.
- Persistence connectors: Ceramic nodes can optionally utilize an additional durable storage backend for backing up commits for streams they are pinning. This can be any of the persistence networks mentioned above, including Filecoin, Arweave, Sia, etc. (coming soon).
- Query responses: Ceramic nodes respond to stream queries from clients. If the node has the stream pinned it will return the response; if not, it will ask the rest of the network for the stream over libp2p and then return the response.
- Broadcasting transactions: When a Ceramic node successfully performs a transaction on a stream, it broadcasts this transaction out to the rest of the network over libp2p so other nodes also pinning this stream can update their StreamState to reflect this new transaction.

Clients

Clients provide standard interfaces for performing transactions and queries on streams, and are installed into applications. Clients are also responsible for authenticating users and signing transactions.

Currently there are three clients for Ceramic. Additional client implementations can easily be developed in other programming languages:

- JS HTTP client: A lightweight JavaScript client which connects to a remote Ceramic node over HTTP. The JS HTTP client is recommended for application developers.
- JS Core client: A JavaScript client which also includes a full Ceramic node. The JS Core client is for those who want the maximum decentralization of running the full Ceramic protocol directly in a browser application.
- CLI: A command line interface for interacting with a Ceramic node.

Getting started

Try Ceramic

To experience how Ceramic works in a browser application, try the Playground app.

Installation

Getting started with Ceramic is simple. Visit the Quick Start guide to learn the basics using the Ceramic CLI or follow the Installation page to integrate Ceramic into your project.
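
For orientation, here is a rough sketch of creating and updating a Tile Document with the JS HTTP client and a Key DID, based on the documentation around the time of writing. The package names, node URL, and method signatures are assumptions that may have changed, so treat this as a sketch rather than copy-paste code and check the current docs.

```typescript
// A minimal sketch, assuming the JS HTTP client and Tile Document StreamType
// as documented at the time of writing; names and signatures may have changed.
import CeramicClient from '@ceramicnetwork/http-client'
import { TileDocument } from '@ceramicnetwork/stream-tile'
import { DID } from 'dids'
import { Ed25519Provider } from 'key-did-provider-ed25519'
import KeyResolver from 'key-did-resolver'

async function main() {
  // Connect to a remote Ceramic node over HTTP (placeholder URL).
  const ceramic = new CeramicClient('https://your-ceramic-node.example.com')

  // Authenticate with a Key DID derived from a 32-byte seed (use real randomness in practice).
  const seed = new Uint8Array(32)
  const did = new DID({ provider: new Ed25519Provider(seed), resolver: KeyResolver.getResolver() })
  await did.authenticate()
  ceramic.did = did

  // Create a Tile Document stream, then append an update commit; the node applies
  // the StreamType logic and broadcasts the new StreamState to the network.
  const doc = await TileDocument.create(ceramic, { hello: 'world' })
  await doc.update({ hello: 'ceramic' })
  console.log(doc.id.toString(), doc.content)
}

main()
```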

Tools and services

In addition to various standards referenced throughout this document, the Ceramic community has already begun developing many different open source protocols, tools, and services that simplify the experience of developing on Ceramic. Here are a few notable examples:

- 3ID Connect: An authentication SDK for browser-based applications that allows your users to transact with Ceramic using their blockchain wallet.
- IDX: A protocol for decentralized identity that allows a DID to aggregate an index of all their data from across all apps in one place. IDX enables user-centric data storage, discovery, and interoperability. It is effectively a decentralized, cross-platform user table. IDX can reference all data source types, including Ceramic streams and other peer-to-peer databases and files.
- IdentityLink: A service that issues verifiable claims which prove a DID owns various other Web2 social accounts such as Twitter, GitHub, Discord, Discourse, Telegram, Instagram, etc. Once issued, claims are stored in the DID's IDX.
- Tiles: An explorer for the Ceramic Network.
- Documint: A browser-based IDE for creating and editing streams.

Saturday, 10. April 2021

OpenID

The 7 Laws of Identity Standards


The OpenID Foundation is proud to participate in the first ever ‘Identity Management Day,’ an annual awareness event that will take place on the second Tuesday in April each year. The inaugural Identity Management Day is April 13, 2021.

Founded by the Identity Defined Security Alliance (IDSA), the mission of Identity Management Day is to educate business leaders and IT decision makers on the importance of identity management and key components including governance, identity-centric security best practices, processes, and technology, with a special focus on the dangers of not properly securing identities and access credentials.

In addition, the National Cyber Security Alliance (NCSA) will provide guidance for consumers, to ensure that their online identities are protected through security awareness, best practices, and readily-available technologies.

From the point of view of an open identity standards development organization, and with a hat tip to Kim Cameron’s 7 Laws of Identity, here are my 7 Laws of Identity Standards:

1. An identity standard’s adoption is driven by the value of the reliability, repeatability and security of its implementations.
2. A standard’s value can be measured by the number of instances of certified technical conformance extant in the market.
3. Certified technical conformance is necessary but insufficient for global adoption.
4. Adoption at scale requires widespread awareness, ongoing technical improvement and an open and authoritative reference source.
5. When libraries, directories and registries act as authoritative sources they amplify awareness, extend adoption and promote certification.
6. Certified technical conformance importantly complements legal compliance, and together they optimize interoperability.
7. Interoperability enhances security, contains costs and drives profitability.

On behalf of the OpenID Foundation, I want to thank Julie Smith, Executive Director of the IDSA for her and her colleagues’ leadership. The OpenID Foundation looks forward to supporting this important event in the years to come.


Don Thibeau
OpenID Foundation

The post The 7 Laws of Identity Standards first appeared on OpenID.

Friday, 09. April 2021

Elastos Foundation

Elastos Bi-Weekly Update – 09 April 2021

...

eSSIF-Lab

Verifier Universal Interface by Gataca España S.L.


Verifier Universal Interface (VUI) is an interoperability working group that aims to build a complete set of standard APIs for Verifier components in SSI ecosystems.

As different technology providers build SSI solutions, it becomes critical to ensure interoperability between these solutions. Available standards for SSI still have important gaps, leading us to an ecosystem of full-stack providers whose approach to interoperability is building proprietary plug-ins for each one of the other available solutions. This approach to interoperability is not scalable.

The underlying problem is that building standards takes time. That is why we propose a practical and focused approach to enable scalable interoperability in the SSI community.

We propose to start with a specific SSI component, namely the Verifier component, and lead the definition of the minimum set of standard APIs necessary to implement or interoperate with such a module. That is, a role-centric approach to standardization at the API level.

To date, 12 organisations are contributing to this initiative. The VUI working group has already drafted a first version of a generic spec that integrates existing standards and interop efforts and fills the gaps to provide a complete set of APIs. This draft version can be found at https://bit.ly/3h5VE7P and has been built using ReSpec.

This draft version of VUI today includes 6 APIs:

- Presentation Exchange
- Consent Management
- Schema resolution
- Issuer resolution
- ID resolution
- Credential status resolution
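
Purely as an illustration of the ground such a role-centric verifier interface covers, a hypothetical API surface might look like the sketch below. The method names and shapes are invented for illustration; the authoritative endpoint definitions are the ones in the draft specification linked above.

```typescript
// Hypothetical illustration only -- see the draft spec (https://bit.ly/3h5VE7P) for the real definitions.
interface VerifierApi {
  // Presentation Exchange: request and receive verifiable presentations
  createPresentationRequest(definition: object): Promise<{ requestId: string }>;
  submitPresentation(requestId: string, presentation: object): Promise<{ verified: boolean }>;

  // Consent Management
  recordConsent(subjectDid: string, purpose: string): Promise<{ consentId: string }>;

  // Schema / Issuer / ID / Credential status resolution
  resolveSchema(schemaId: string): Promise<object>;
  resolveIssuer(issuerDid: string): Promise<object>;
  resolveId(did: string): Promise<object>;                  // DID resolution
  resolveCredentialStatus(credentialId: string): Promise<{ revoked: boolean }>;
}
```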

Next steps
As next steps, the Working Group (WG) needs to take this groundwork to a more mature level. That is, to further define the specification by achieving consensus in the broader community, and bridging perspectives from DIF, W3C, EBSI, and Aries.

The WG is organized into Working Packages (WP), one for each interface. Any participant can lead or contribute to a WP, which shall include at least 2 Implementors and 1 Integrator. Implementors are responsible for defining the API, a set of interoperability tests, and service endpoints for Integrators to execute those tests.

The WG has launched a survey in the broad SSI community and two of the 6 interfaces have been selected as initial WPs:

Presentation Exchange
Issuer Resolution

Ready to contribute?
To subscribe to this WG please refer to https://groups.io/g/vui

Country: Spain
Further information: https://gataca.io
Team: Gataca Spain

GitLab: https://gitlab.grnet.gr/essif-lab/infrastructure_2/gataca


Own Your Data Weekly Digest

MyData Weekly Digest for April 9th, 2021

Read in this week's digest about: 6 posts

Thursday, 08. April 2021

DIF Blog

The Steering Committee is growing

An operating addendum was adopted last month which contains all the procedures for periodic elections and distribution requirements. This article repackages those policies in the form of a Frequently Asked Questions document.
And nominations are now being accepted

DIF was established to represent our fast changing community and to create a safe space for the co-development of next-generation code and specifications. From the original handful of members 4 years ago, the foundation has grown to become an organization with over 200 member companies, representing thousands of PRs and a strong commitment to decentralizing identity software.

Since its inception, DIF has been primarily governed by a Steering Committee. The Steering Committee is a group of member representatives who lead DIF, set strategy, assure the high quality of ratified work items, and react to the community’s needs, among other responsibilities.

Photo courtesy of Element5digital

Over the last year, one of the main foci of the current Steering Committee has been restructuring itself to ensure it will represent the interests and diversity of its growing membership. Among the conclusions of this analysis was that a larger steering committee would garner more trust and visibility into DIF's internal governance as an organization. An operating addendum was adopted last month which formalizes procedures for periodic elections and distribution requirements. The following repackages those policies in the form of a Frequently Asked Questions document: for further clarification, see the addendum itself.

When are the dates and deadlines for this election cycle?
- Announcement of election + nomination starts - 8th April (Thu)
- Last day for questions to candidates - 6th May (Thu)
- Nomination closes - 13th May (Thu)
- SC ballot opens - 20th May (Thu)
- SC ballot closes - 27th May (Thu)
- New SC first meeting - 3rd June (Thu)

Who can be nominated and by whom?

Any individual who is active in DIF can nominate a DIF member to the Steering Committee, and/or get nominated for the Steering Committee by someone else. We recommend nominating members of the DIF community with high visibility and recognition, whether among DIF's membership or among the wider community. Only one individual can represent each DIF member organization, and there are distribution criteria to keep the Steering Committee balanced, but those considerations are applied *after* the election. This means that multiple colleagues from the same member organization can be nominated and accept that nomination, but only one of them can sit on the Steering Committee. The DIF membership includes Associate members, Contributors, individuals who signed a Feedback Agreement, and adjacent/partner organizations referred to as Liaisons (e.g., OIDF or Hyperledger).


Nominations should be sent to the nominations at identity.foundation email address.


How and when do nominated people become candidates?

Nominees will be contacted by DIF staff and must accept the nomination before the nomination period closes. Along with confirmation that they would serve if elected, they also provide a short biography and an optional statement describing their interest in and qualifications for serving on the Committee. Early nominations are encouraged.


How do voting members select which candidates to support?

DIF Staff will share this list of accepting nominees and their statements with the membership via Slack. Since all Associate Members are eligible for one vote, it is recommended that colleagues within one member company confer or discuss in backchannel before submitting a vote on behalf of their whole company.


How can interested DIF members ask questions of all the candidates to help them decide?

If there are specific issues of interest to all or most of the DIF membership, any DIF member (whether they work at an Associate Member organization or not) may send clear questions to DIF staff. These questions should be sent to the nominations at identity.foundation email address, and a representative cross-section of them (lightly edited or unedited) will be compiled into a document sent to all nominees.


Who votes and how?

Only Associate Members can vote, and each votes as a whole on one ballot. This too is done via email: ballots should be sent to the nominations at identity.foundation email address. These will be audited confidentially and not published in any way (not even as tallies). If one member organization accidentally votes more than once, DIF staff will reply to the emails to clarify the issue, so please use a monitored/normal email address in the organization's domain rather than personal accounts or "alts". All ballots are assumed to be collective/company-wide decisions even if multiple are received.


How many candidates can each associate member vote for?

Each ballot should contain between 0 and X names, where X is the number of open seats. In most elections, X should be between 4 and 6. If fewer than X are named, the rest of the organization's votes are forfeited; each vote counts singly.


When is the actual election period?

DIF elections run for one week, from the distribution of the final list of candidates to one week later at midnight EST. In this election, those dates will be Thursday 20 May and Thursday 27 May, respectively. In the weeks leading up to 20 May, DIF staff will also contact all associate members to clarify any questions about the process and confirm the dates, the number of seats, the nominee statements, etc.


How often will the Steering Committee elections take place?

They are expected to be annual, with 2-year terms to preserve continuity across elections. If seats are unexpectedly made available between elections, they will stay open until the next election.



Energy Web

Making Crypto Green with Energy Web Zero


Energy Web is thrilled to be one of the three founding organizations behind the newly announced Crypto Climate Accord (CCA). In the next year, we anticipate dozens of novel approaches coming to market focused on decarbonizing the crypto industry in the spirit of the CCA. But we’ve also developed an open-source application that can play a major role in decarbonizing Crypto: Energy Web Zero (EW Zero).

EW Zero is a public, low-cost application built on top of the open-source Energy Web stack that helps renewable energy buyers drive their carbon footprint to zero via purchases of different renewable energy products, such as impact-rich renewable energy certificates from emerging economies and power-purchase agreements. We can use EW Zero beginning today to help the crypto industry achieve the provisional targets of the CCA by 1) helping crypto holders directly decarbonize crypto holdings while 2) enabling entire crypto networks to decarbonize from the bottom-up.

Demand-side solution: help crypto holders directly decarbonize their crypto holdings

Because of how energy markets are structured, cryptocurrency producers (such as miners) in many places around the world have very little choice in exactly where their power comes from. And even where that choice exists, given the extremely fragmented nature of renewable energy markets and regulatory frameworks with varying degrees of support for renewables, finding the right renewable energy product is a daunting task. EW Zero attacks both of these problems head-on by providing renewable energy buyers with a digital application for locating, procuring, and claiming emissions reductions from verified renewables around the world in a user-friendly, streamlined way.

In the spirit of the CCA, EW Zero can be used by crypto holders to make their crypto holdings green. Whether you are an institutional cryptocurrency investor, a retail crypto hodler, an application developer, a crypto service provider, or an exchange, EW Zero enables you to purchase renewable energy equal to the non-renewable energy used to create and maintain your cryptocurrency holdings.

This isn’t a new concept: corporates from around the world have been purchasing low-carbon products to help decarbonize supply chains in support of corporate sustainability objectives. In climate-speak, this concept of covering the power consumption from crypto purchases/holdings is analogous to actions to reduce Scope 3 carbon emissions. What’s new here is that we’re using EW Zero to apply these kinds of transactions to decentralized sources of power consumption: blockchains.

Here's a simple example: you are an institutional cryptocurrency investor who purchased 20,000 BTC. Based on publicly available data, we can estimate the amount of electricity attributable to your 20,000 BTC holdings each year. By extension, we can estimate the total carbon footprint of your BTC by making assumptions about how green various grids are around the world (in fact, one of the provisional objectives of the CCA is to develop an industry-wide, open-source accounting standard to make this even easier). With that information in hand, you can use EW Zero to search for different renewable energy products around the world. You can then buy enough renewables to make up for the non-green energy that went into your 20,000 BTC. There are several renewable energy products to choose from: energy attribute certificates (EACs) from existing and future assets, power-purchase agreements, and onsite solar. Energy Web will encourage buyers to support impact-rich EACs, in part because locationality standards are less applicable compared to standard corporate renewables procurement. And once you've bought your renewables, you can "claim" the emissions reduction from them, since EW Zero makes it easy for buyers to trace the full lifecycle of every megawatt-hour of clean power they bought all the way back to the specific power plant that the energy came from.
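As a rough, back-of-the-envelope illustration of the arithmetic in this example, the sketch below estimates the renewable purchase volume; the per-BTC energy figure, the grid renewable share, and the function name are placeholder assumptions, not CCA or Energy Web figures.

```python
def renewables_to_buy_mwh(btc_held, mwh_per_btc_year, grid_renewable_share):
    """Estimate the renewable energy purchase (MWh per year) needed to cover
    the non-renewable electricity attributable to a BTC holding.

    All three inputs are assumptions the buyer has to source themselves
    (public network statistics, grid-mix data, etc.); the CCA's planned
    open-source accounting standard would formalize this step.
    """
    attributable_mwh = btc_held * mwh_per_btc_year          # energy attributed to the holding
    return attributable_mwh * (1.0 - grid_renewable_share)  # non-renewable share to cover

# Hypothetical figures only: 20,000 BTC, 5 MWh attributed per BTC per year,
# and an estimated 39% renewable share in the relevant grid mix.
print(f"{renewables_to_buy_mwh(20_000, 5.0, 0.39):,.0f} MWh of EACs or PPAs to purchase")
```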

This is just one example for an institutional buyer. We can make the same kinds of transactions possible in a more automated way for retail crypto holders via exchanges (think “green crypto” features directly on your favorite exchange) and integrations with wallets.

This is just what we can do with EW Zero today. Our vision is for a whole new category of “negative emissions” products to make their way onto Zero. Here, instead of purchasing renewable energy, cryptocurrency holders invest in different negative emissions projects from around the world to remove and lock away carbon associated with their crypto holdings.

Supply-side solution: enable crypto networks to decarbonize from the bottom-up

Grids around the world are decarbonizing, and so are the blockchains running on them. However, verifying and credibly measuring the share of crypto production powered by renewables is extremely challenging, since most data come from self-reported surveys and proxies (see Vox's article on this topic), and entities like crypto miners don't normally get the certificates or bills they need to prove where their renewables come from.

EW Zero and other components of the Energy Web stack address exactly this issue, irrespective of geography or market. Crypto miners and other blockchain actors will be able to automatically track renewables consumption at any given time and report it while preserving privacy. At the same time, these claims will be verifiable using a validation mechanism. For example, imagine a miner has onsite solar connected to the grid. The solar asset and the miner each have unique, self-sovereign digital identities verified by trusted sources like the corresponding solar manufacturer and local grid operator. All relevant events, such as estimated power production and excess power sent to the grid, are documented and validated on a highly granular level (e.g., every 30 minutes). That way, this specific miner can provide verifiable proof of renewably powered, mining-related consumption anywhere in the world.
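To make the idea of granular, identity-bound consumption claims more concrete, here is a small sketch of what one 30-minute claim record might look like; the class, the field names, and the DID values are invented for illustration and do not follow any official Energy Web or EW Zero schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GreenMiningClaim:
    """One 30-minute interval of renewably powered mining. Field names are
    invented for this sketch and do not follow any official Energy Web schema."""
    miner_did: str                 # self-sovereign identifier of the mining facility
    asset_did: str                 # identifier of the onsite solar asset
    interval_start: datetime
    interval_minutes: int
    solar_generation_kwh: float    # validated by the device / local grid operator
    mining_consumption_kwh: float  # metered consumption of the miner

    def renewable_share(self) -> float:
        """Share of mining consumption covered by onsite generation in this interval."""
        if self.mining_consumption_kwh == 0:
            return 1.0
        return min(1.0, self.solar_generation_kwh / self.mining_consumption_kwh)

claim = GreenMiningClaim(
    miner_did="did:ethr:ewc:0xMinerExample",   # hypothetical DIDs
    asset_did="did:ethr:ewc:0xSolarExample",
    interval_start=datetime(2021, 4, 1, 12, 0),
    interval_minutes=30,
    solar_generation_kwh=420.0,
    mining_consumption_kwh=500.0,
)
print(f"Renewable share this interval: {claim.renewable_share():.0%}")
```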

The last point is key for all industries, not just crypto. Today, corporates face significant challenges with reporting their impact in countries without an official tracking system. Energy Web's open-source tech stack solves this issue. The proposed concept is similar to how Google and Microsoft are working to make their data centers 100% renewably powered on an hourly basis, matching when their data centers actually consume electricity, in line with the emerging EnergyTag standard for EACs.

Join the Crypto Climate Accord

We are excited to act on the goals of the CCA and decarbonize blockchains. Interested companies from the energy and crypto sectors can join the CCA by contacting Energy Web directly via doug.miller@energyweb.org.

Making Crypto Green with Energy Web Zero was originally published in Energy Web Insights on Medium, where people are continuing the conversation by highlighting and responding to this story.


Lissi

Test self-sovereign identity with the Lissi demo

Test self-sovereign identity now with the Lissi demo.

This article describes how to use the Lissi demo. The article is also available in English.

Lissi provides institutions and end users with software applications for exchanging trusted information with maximum data sovereignty. We offer a demonstration for the Lissi Wallet (iOS / Android) with which you can try out the flow of receiving credentials and answering information requests.

You can test the demonstration here. The demo can be opened either on a smartphone or in a desktop browser on a computer. We recommend using the Lissi Wallet for an optimal experience, but the demo also works with other compatible wallets.

The demo contains various use cases to illustrate a possible user experience. The use cases are adapted to the demo setup, meaning that in a real scenario additional steps may be necessary depending on the use case.

Use case 1: Online ID (eID, electronic identification) Receive your digital ID.

In the first step you receive your online ID. This credential is issued to you by a citizens' registration office (Bürgeramt). Imagine you are in this office and identify yourself in person. You can also obtain this credential via the eID function of your national identity card and read out your data digitally via NFC, without visiting the office in person.

Use case 2: Credit card Receive a credit card from your bank.

As soon as you have your online ID in your wallet, you can use it to obtain a credit card. As a new customer, you can simply apply for a credit card at a bank. The card is made available to you once you have released your data, which is a necessary regulatory step. The answered information request replaces, for example, the video-ident or Postident procedure.

Use case 3: Hotel check-in Experience fast, contactless check-in at a hotel.

You want to perform a digital check-in for your hotel stay. You can simply provide your personal data and a payment option to receive your room card. Simple and straightforward!

Use case 4: Student enrolment and course certificates Enrol as a student and receive course certificates.

You have successfully completed school and now want to enrol at a university. You can use your Base-ID and phone number to provide the university with the necessary information.

We hope the demonstration helps to better understand the user experience and the interactions in a digital identity ecosystem such as IDunion. The exact circumstances and the required level of assurance depend on the respective use case and can vary considerably. Regulated use cases have legal requirements regarding the required level of trust. The Lissi team is in discussion with trust service providers, authorities, municipalities, agencies, associations and other parties to meet all necessary requirements and provide you with the best user experience.

We would love to hear your feedback! What do you like about the demo, and where do you see room for improvement? Contact us via info@lissi.id.

Cheers!
The Lissi Team


Testing self-sovereign identity with the Lissi demo

Experience self-sovereign identity in action with the Lissi demo.

This article explains the demonstration offered by Lissi. It is also available in German.

Lissi provides institutions and end-users with software to exchange trusted information while having maximal data sovereignty. We now offer a demonstration for the Lissi Wallet, which enables you to experience the user flow of receiving credentials and answering information requests.

You can find the demonstration on www.lissi.id/demo. You can either open the demo on your phone or on a desktop browser. While we recommend using the Lissi Wallet (iOS / Android) for an optimal experience, the demo also works with other compatible wallets.

The demo contains different example use cases to illustrate a potential user journey. The use cases are adjusted to the demonstration setup, meaning that in a real scenario there might be additional steps necessary depending on the use case in question.

Use case 1: Base-ID (eID — electronic Identification) Get your Base-ID

In the first step, you get your Base-ID. This credential is issued to you by a municipal office. Imagine you are in this municipal office and first show your physical ID card. You could also obtain this credential by using the eID function of your national identity card and reading out your data digitally via NFC from your eID card.

Use case 2: Credit Card Get your digital credit card from your bank.

Once you have your Base-ID in your wallet, you can use it to get a credit card. You can easily request a credit card as a new customer of a consumer bank. You will be provided with the card once you have shared your data, which is a necessary step from a regulatory perspective.

Use case 3: Hotel check-in Check in to your favourite hotel.

You want to do a digital check-in for your hotel stay. You can easily present your personal information as well as a payment option to get your room card.

Use case 4: Apply for a university and get a course certificate Apply for a university and get a course certificate.

You have successfully completed school and now want to enrol at a university. You can use your Base-ID and phone number to provide the university with the necessary information.
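For readers who prefer code to prose, here is a highly simplified sketch of the receive-credential and answer-request flow the demo walks through; the class names, attribute names, and issuer label are invented for illustration and do not reflect the Lissi Wallet's actual APIs or credential schemas.

```python
from dataclasses import dataclass, field

@dataclass
class Credential:
    """A credential held in the wallet; issuer and attribute names are illustrative only."""
    issuer: str
    attributes: dict

@dataclass
class Wallet:
    credentials: list = field(default_factory=list)

    def receive(self, credential: Credential) -> None:
        """Use case 1: store a credential offered by an issuer."""
        self.credentials.append(credential)

    def answer_request(self, requested_attributes: list) -> dict:
        """Use cases 2-4: present only the attributes a verifier asks for."""
        presentation = {}
        for credential in self.credentials:
            for name in requested_attributes:
                if name in credential.attributes:
                    presentation[name] = credential.attributes[name]
        return presentation

wallet = Wallet()
wallet.receive(Credential("Municipal office", {"name": "Max Mustermann", "date_of_birth": "1990-01-01"}))
# The bank's information request only needs these two attributes.
print(wallet.answer_request(["name", "date_of_birth"]))
```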

We are convinced this demonstrated user flow can help to better understand the interactions in a digital identity ecosystem such as IDunion. The exact circumstances are always dependent on the use case at hand and might vary regarding their required level of assurance, which is a standardised measure to determine the trust you can allocate to the information presented to you. Regulated use cases have legal requirements regarding the level of trust needed. The Lissi team is in discussion with trust service providers, authorities, municipalities, agencies, associations and other relevant stakeholders to meet all the necessary requirements and provide you with the best user experience.

We would love to hear your feedback. What do you like about the demo or where do you see room for improvement? Feel free to reach out to us via info@lissi.id

Cheers,
The Lissi Team

Wednesday, 07. April 2021

Nyheder fra WAYF

Join the WAYF experience-exchange meeting on Zoom!


On Thursday 20 May 2021, from 9:30 to 12:30 and possibly continuing from 13:00, WAYF will hold its twelfth experience-exchange meeting (erfamøde). The last time we held the meeting on Zoom it was a great success, which we will try to repeat. A link to the meeting room will be published here on the site ahead of the meeting.


SelfKey Foundation

All Data Breaches in 2019 – 2021 – An Alarming Timeline


Your data is valuable and should belong to you. Nevertheless, our online records are exposed on an almost daily basis, with potentially devastating consequences. This blog post aims to provide an up-to-date list of data breaches and hacks.

The post All Data Breaches in 2019 – 2021 – An Alarming Timeline appeared first on SelfKey.


Sovrin (Medium)

An open letter to Financial Crimes Enforcement Network advocating for financial inclusion in its…

An Open Letter to Financial Crimes Enforcement Network Advocating for Financial Inclusion in its Regulations

April 7, 2021

On December 23, 2020, the Financial Crimes Enforcement Network (FinCEN) of the US Treasury Department published a Notice of Proposed Rulemaking (NPRM) regarding “Requirements for banks and money services businesses” related to certain transactions involving convertible virtual currency (CVC) or digital assets with legal tender status (“legal tender digital assets” or LTDA). FinCEN is identifying additional statutory authority for the proposed rule under the Anti-Money Laundering Act of 2020, providing additional information regarding the reporting form, and reopening the comment period for the proposal.

The Sovrin Compliance and Inclusive Finance Working Group (CIFWG) took the opportunity to submit its formal response to FinCEN on March 29, 2021 — focusing explicitly on how their proposed rules will impact financial inclusion.

Sovrin Foundation asked Amit Sharma, Chair of the Sovrin CIFWG, to tell us more about the potential impact of the NPRM, their response and recommendations to FinCEN.

Q: Can you tell us a little bit about the Sovrin CIFWG? What’s your mission and who are your group members?

The purpose of the Sovrin CIFWG is to advance the mission of financial inclusion globally, by addressing the challenges and opportunities presented by innovations in the financial services and payments landscape, and the attendant financial and regulatory compliance implications. We are an open group of traditional bank and non-bank financial institutions, regulators, policymakers, technologists, ethicists, and legal experts who monitor the challenges faced by the financially excluded and under-served. CIFWG focuses on how economic and regulatory technologies can bridge the gap between traditional banking compliance and associated risks injected by innovation. We have developed and been actively promoting the Sovrin Compliance and Inclusive Finance Rulebook, an innovative best practices framework that extends traditional banking compliance and payments guidance to emerging fintech and virtual asset service providers (VASP) processes.

Q: What do you mean by global financial inclusion, and why is it important?

Global financial inclusion remains a desirable and necessary development goal: 1.7 billion adults lack a bank account[1], and millions more have limited or no access to traditional financial services around the world. Furthermore, "de-risking" (the efforts of financial institutions to terminate or restrict relationships with certain clients and customers) has continued to be amplified by the continued growth of global anti-money laundering (AML) / counter-terrorism financing (CTF) controls[2]. The result is a disproportionate impact on the financially under-served, the global poor, and the institutions and sectors that provide services to these segments of the economy. The attendant consequences of financial exclusion cannot be overstated, given that remittances from migrant workers alone total over $500 billion a year (three to four times foreign aid) and are a vital source of finance for poor countries.

Q: What is the NPRM regarding “Requirements for banks and money services businesses” proposed by FinCEN about?

The December NPRM proposed to address the threat of illicit finance with respect to certain transactions involving CVC or LTDA by (a) establishing new reporting requirements for certain CVC or LTDA transactions, analogous to existing currency transaction reports, and (b) establishing new recordkeeping requirements for certain CVC or LTDA transactions that are similar to the recordkeeping and travel rule regulations pertaining to funds transfers and transmittal of funds.

On January 1, 2021, the Anti-Money Laundering Act of 2020 (Division F of Pub. L. 116–283) (“AML Act of 2020”) became law. FinCEN proposed that by regulation, CVC and LTDA are monetary instruments because they are “similar material” to “coins and currency of a foreign country, travelers checks, bearer negotiable instruments, bearer investment securities, bearer securities, [and] stock on which title is passed on delivery[3].”

Q: What do you think about the NPRM? How does it impact the fintech sector and its key stakeholders?

We believe combating illicit finance activities is necessary and a top priority for FinCEN. But efforts to serve both law enforcement equities and financial inclusion do not present a binary choice. An effective anti-money laundering and counter-terrorism financing regime should also prioritize increased engagement of financially underserved, de-risked and/or excluded parties to foster a financial system that provides enhanced participation, greater transparency, and innovation in domestic financial systems.

Further, the data on illicit finance risks in the CVC sector don’t necessarily support the need for the proposed rule changes which are motivated to strengthen law enforcement and regulatory oversight efforts. According to a recently released report by Chainalysis, illicit activities or crime related to virtual assets has continued to decline, with the illicit share of cryptocurrency activity falling to just 0.34% in 2020.

Importantly, within the data analyzed by Chainalysis, illicit activity has been shown to be concentrated primarily in a "small group of shady cryptocurrency services, mostly operating on top of large exchanges, [who] conduct most of the money laundering that cybercriminals rely on to make cryptocurrency-based crime profitable[4]." This data would seem to call for more targeted efforts by law enforcement to concentrate investigations of criminals by identifying the owners of these deposit addresses and the organizations that are conducting deliberate money laundering operations at scale (among otherwise legitimate activities).

As such, we are concerned with the proposed rules as they impact a growing sector that is increasingly providing solutions to marginalized and financially excluded constituencies. Also, organizations operating in CVCs, including fintech companies, virtual asset service providers (VASPs), and other similar organizations, may fall under the broader definition of nonbank financial institutions or money services businesses/money transfer operators (MSBs/MTOs), which already face enormous scrutiny from mainstream financial institutions when those institutions evaluate the risks of onboarding them as new accounts. The Financial Action Task Force (FATF) itself has acknowledged that de-risking disproportionately impacts certain sectors like MSBs/MTOs[5].

Q: What is the role of the CVC services in regards to global financial inclusion, and how secure are they?

As CVC services continue to evolve and grow, so do the applications and opportunities inherent in facilitating greater inclusion of underserved and marginalized communities and organizations in global financial ecosystems. Self-hosted wallets are playing an increasingly important role with virtual assets, as global financial operations continue to be unbundled and the trend toward decentralized financial services continues unabated[6].

Self-hosted wallets enable anyone with an internet connection to transact with others in digital assets on a peer-to-peer (P2P) basis. Additionally, these wallets can be used to store value or digital assets securely, and they give the user/consumer the ability to personally hold resources that let them interact in both fiat and digital financial contexts. This enables consumers to transact with counterparties directly, without the need for a third-party intermediary, similar to making a cash transaction to purchase a good, pay an expense, or transfer value to a friend or family member.

When combined with blockchain technology, P2P transactions are arguably more secure and more transparent than activities undertaken in cash, as the convenience of cash and the efficiencies that come with electronic payments are combined with the risk controls associated with pseudonymous transactions that are not otherwise dependent on a specific financial intermediary[7].

Q: In sum, what are your recommendations to FinCEN regarding the NPRM?

We really appreciated the opportunity to comment on the NPRM. We believe that any proposed rule should explicitly include a thorough assessment of the threats to financial inclusion that may come about with such a proposal, and the Agencies should consider the concerns outlined in the response in our letter as part of such an assessment.

Secondly, increased targeted enforcement and transaction-monitoring requirements would better serve the law enforcement intent of the proposed rules than a wholesale application of recordkeeping and reporting to all noncustodial wallet holders and their activities. In short, a small group of actors is engaged in such activity, and enforcement and regulatory measures should be targeted at them rather than at the industry as a whole, especially in light of the financial exclusion implications of the proposed rules.

Strengthening financial inclusion, including by encouraging innovation in the financial services arena with technology applications that extend beyond existing banking and payments systems, can actually provide enhanced security, risk, and compliance controls while bringing greater opportunities to marginalized and otherwise financially underserved or excluded communities. The underlying technologies include the following:

The facilitation of cross-border remittances via P2P and blockchain networks that provide efficiency, cost advantages and security;
The enhanced auditability and traceability of transactions conducted via these applications, which better enable financial services regulators and law enforcement interests; and
The application of digital and sovereign identity management capabilities that strengthen know-your-customer processes and customer information programs while preserving the essential privacy protections inherent in financial services.

We welcome follow up with Agencies and participating members that are interested in providing guidance, examples and use cases of inclusion efforts — including the application of self-hosted/un-hosted wallets in particular.

Thank you Amit!

Anyone can download the full letter here. Please feel free to reach out to Amit Sharma (Amit@finclusive.com) and the Sovrin Foundation (info@sovrin.org) if you have any questions or suggestions regarding the NPRM.

For more details about the Sovrin Compliance and Inclusive Finance Working Group and instructions on how to join, please see its webpage or contact cifwg@sovrin.org.

Reference materials:

[1] Global Findex Database: https://bit.ly/2OlEybX

[2] De-risking in the Financial Sector: https://bit.ly/2Q0g3kX

[3] Requirements for Certain Transactions Involving Convertible Virtual Currency or Digital Assets: https://bit.ly/3ugrOme

[4] The 2021 Crypto Crime Report: https://bit.ly/31MOQ8k

[5] FATF (2016). Correspondent Banking Services. Paris: Financial Action Task Force: https://bit.ly/3mlMcQf (accessed July 11, 2019).

[6] World Bank staff estimates based on data from IMF Balance of Payments Statistics database. For UAE, estimates are based on reports from its Central Bank.

[7] Are regulators poised to demand cryptocurrency address whitelisting? Probably not: https://bit.ly/3mlMcQf

An open letter to Financial Crimes Enforcement Network advocating for financial inclusion in its… was originally published in Sovrin Foundation Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 06. April 2021

Nyheder fra WAYF

Reykjavík University now part of WAYF


Reykjavík University (RU) has today joined WAYF as a user organisation. Employees and students at RU can now identify themselves as such to a wide range of web services in both WAYF and eduGAIN. RU users are already accessing, for example, the ERASMUS service OLA via WAYF.


Monday, 05. April 2021

Good ID

Authenticator Certification Hits a New Milestone with First L3+


By: FIDO Alliance staff

A major milestone has been realized, with the German Federal Office for Information Security (BSI - Bundesamt für Sicherheit in der Informationstechnik) becoming the first organization to achieve the Certified Authenticator Level 3+ designation, which is the highest level of validation currently offered by the FIDO Alliance.

The path toward the Level 3+ designation has been several years in the making.

Dr. Rae Rivera, Certification Director for the FIDO Alliance explained that the Certified Authenticator program was originally launched in August 2018 in a bid to define greater levels of assurance for FIDO authenticators. She noted that the FIDO Specifications include an inherent amount of security and privacy. The goal with the Certified Authenticator program is to provide additional security assurances for the authenticators themselves. 

With the first Certified Authenticator Level 3+ designation now granted, Rivera expects other organizations will follow, helping to improve strong authentication for users and organizations around the world.

“We’re continuing to see more pickup and uptake in the Certified Authenticator program,” Rivera said. “At each higher level, there’s less risk of a vulnerability.”

Understanding the Different Certified Authenticator Levels

There are three core levels (L1, L2, and L3) in the Certified Authenticator program, with each level building on the requirements of the preceding level. Incremental additional assurance can be obtained to allow a vendor to achieve a "+" within each level (L1+, L2+, L3+).

The program evaluates authenticators to answer the question: how well does the authenticator protect the private key? The most basic entry level is L1, which Rivera said a vendor can achieve by supporting and implementing the FIDO specifications. An authenticator certified at L1 provides protection against phishing and credential abuse.

Moving up to L2, Rivera noted that restricted operating environments are required to protect against malware attacks. When you get to L3 and L3+, Rivera said that it’s all about looking at hardware authenticators, and how they provide protection against brute force attacks. 

“One of the core attributes of our higher level programs, specifically level three and three plus, is that they require the product to have what we call a companion program certification,” Rivera said. 

She noted that the companion program certification defined for those higher levels is Common Criteria, which provides sets of evaluations and designations to help define the security posture of a given device or service.

“The higher level that you go, the less vulnerable the authenticator is to any kind of attack,” Rivera said.
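As a quick reference, the snippet below summarizes the levels as they are described in this article; the wording is paraphrased from the interview, not taken from FIDO Alliance requirements documents.

```python
# Informal summary of the Certified Authenticator levels as described above;
# wording is paraphrased from this article, not from FIDO Alliance
# requirements documents.
FIDO_AUTHENTICATOR_LEVELS = {
    "L1": "Implements the FIDO specifications; protects against phishing and credential abuse",
    "L2": "Adds a restricted operating environment to protect against malware attacks",
    "L3": "Hardware authenticator protections, e.g. against brute-force attacks; "
          "requires a Common Criteria companion certification",
    "L3+": "Incremental additional assurance on top of L3; the highest level currently offered",
}

for level, protection in FIDO_AUTHENTICATOR_LEVELS.items():
    print(f"{level:>3}: {protection}")
```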

Why the Level 3+ Certification is Significant

With BSI now certified at L3+ the door is open to others to follow the same path toward the highest level of security assurance.

“Personally I feel like this is a huge leap forward for the program,” Rivera said.

Rivera noted that to date there have been many products that have been certified at the lower levels of the Certified Authenticator program. Now that the first L3+ has been achieved she anticipates that there will be more interest from organizations to go through the program to gain that additional higher level of assurance.

“This certification clearly demonstrates the value of our certified authenticator program – particularly at the higher levels,” she said. “Government and regulated industries such as finance, healthcare, energy and education often have more sensitive use cases that require specific types of authentication into their networks. Vendors and relying parties in these markets see this as a benefit because it meets the need for hardware protection and is also Common Criteria certified.” 

How Others Can Benefit from the First Level 3+ Certification

Now that BSI has achieved the Level 3+ certification, there is quite literally a path for others to follow.

Rivera explained that with the L3+ certification there is a protection profile associated with it. The protection profile contains all the components that are used to achieve the L3+. As such, another vendor could utilize the protection profile to develop their product to get certified at the higher level.

“The protection profile serves as good guidance for those that are seeking the higher levels as to what they need to do and what modifications they need to make to their implementation,” Rivera said. “BSI getting certified at Level 3+ has made it a little easier for others to start achieving this level.”

The post Authenticator Certification Hits a New Milestone with First L3+ appeared first on FIDO Alliance.

Sunday, 04. April 2021

OpenID

OpenID Connect Federation Specification Resulting from a Year of Implementation Experience


The OpenID Foundation held three interop events for implementations of the OpenID Connect Federation specification during 2020. The learnings from each were used to iteratively refine the specification after each round, just as we did when developing OpenID Connect itself. The current specification is the result of the iterative rounds of improvement, informed by the experiences of multiple implementations.

Special thanks to Roland Hedberg for organizing these useful interop events and to all those who brought their code to test with one another.

The specification is published at:

https://openid.net/specs/openid-connect-federation-1_0-14.html and https://openid.net/specs/openid-connect-federation-1_0.html

The post OpenID Connect Federation Specification Resulting from a Year of Implementation Experience first appeared on OpenID.

Saturday, 03. April 2021

decentralized-id.com

Twitter Collection – 2021-04-02

Decentralized Identity - Curated 2021-04-02

Own Your Data Weekly Digest

MyData Weekly Digest for April 2nd, 2021

Read in this week's digest about: 12 posts

Thursday, 01. April 2021

Oasis Open

Invitation to comment on three new AMQP specifications

The Advanced Message Queuing Protocol (AMQP) is an open standard for passing business messages between applications or organizations. The 3 specification drafts extend its capabilities. The post Invitation to comment on three new AMQP specifications appeared first on OASIS Open.

First public reviews for 3 extensions to popular messaging standard Advanced Message Queuing Protocol (AMQP) are open through April 29th

OASIS and the OASIS Advanced Message Queuing Protocol (AMQP) TC are pleased to announce that three new AMQP specifications are now available for public review and comment:

Event Stream Extensions for AMQP Version 1.0
AMQP Filter Expressions Version 1.0
AMQP Claims-based Security Version 1.0

The Advanced Message Queuing Protocol (AMQP) is an open standard for passing business messages between applications or organizations. It connects systems, feeds business processes with the information they need and reliably transmits onward the instructions that achieve their goals.

Event Stream Extensions for AMQP Version 1.0 defines a set of AMQP extensions for interaction with event stream engines, including annotations for partition selection and filter definitions for indicating offsets into an extension stream to which a link is attached.

AMQP Filter Expressions Version 1.0 describes a syntax for expressions consisting of property selectors, functions, and operators that can be used for conditional transfer operations and for configuring a messaging infrastructure to conditionally distribute, route, or retain messages.

AMQP Claims-based Security Version 1.0 describes an AMQP authorization mechanism based on claims-based security tokens.

The documents and related files are available here:

Event Stream Extensions for AMQP Version 1.0
Committee Specification Draft 01
17 March 2021

Editable source:
https://docs.oasis-open.org/amqp/event-streams/v1.0/csd01/event-streams-v1.0-csd01.md (Authoritative)
HTML:
https://docs.oasis-open.org/amqp/event-streams/v1.0/csd01/event-streams-v1.0-csd01.html
PDF:
https://docs.oasis-open.org/amqp/event-streams/v1.0/csd01/event-streams-v1.0-csd01.pdf

AMQP Filter Expressions Version 1.0
Committee Specification Draft 01
17 March 2021

Editable source:
https://docs.oasis-open.org/amqp/filtex/v1.0/csd01/filtex-v1.0-csd01.docx (Authoritative)
HTML:
https://docs.oasis-open.org/amqp/filtex/v1.0/csd01/filtex-v1.0-csd01.html
PDF:
https://docs.oasis-open.org/amqp/filtex/v1.0/csd01/filtex-v1.0-csd01.pdf

AMQP Claims-based Security Version 1.0
Committee Specification Draft 01
17 March 2021

Editable source:
https://docs.oasis-open.org/amqp/amqp-cbs/v1.0/csd01/amqp-cbs-v1.0-csd01.docx (Authoritative)
HTML:
https://docs.oasis-open.org/amqp/amqp-cbs/v1.0/csd01/amqp-cbs-v1.0-csd01.html
PDF:
https://docs.oasis-open.org/amqp/amqp-cbs/v1.0/csd01/amqp-cbs-v1.0-csd01.pdf

ZIP distribution files

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP files at:

https://docs.oasis-open.org/amqp/event-streams/v1.0/csd01/event-streams-v1.0-csd01.zip

https://docs.oasis-open.org/amqp/filtex/v1.0/csd01/filtex-v1.0-csd01.zip

https://docs.oasis-open.org/amqp/amqp-cbs/v1.0/csd01/amqp-cbs-v1.0-csd01.zip

Public review announcement metadata records [3] are published along with the specification files.

How to Provide Feedback

OASIS and the AMQP TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.

The public reviews start 31 March 2021 at 00:00 UTC and end 29 April 2021 at 23:59 UTC.

Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility which can be used by following the instructions on the TC’s “Send A Comment” page (https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=amqp).

Comments submitted by TC non-members for these works and for other work of this TC are publicly archived and can be viewed at:
https://lists.oasis-open.org/archives/amqp-comment/

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specifications and the AMQP TC can be found at the TC’s public home page:
https://www.oasis-open.org/committees/amqp/

Additional references

[1] https://www.oasis-open.org/policies-guidelines/ipr

[2] https://www.oasis-open.org/committees/amqp/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr#RF-on-RAND-Mode
RF on RAND Mode

[3] Public review announcement metadata:
Event Stream Extensions for AMQP Version 1.0
– https://docs.oasis-open.org/amqp/event-streams/v1.0/csd01/event-streams-v1.0-csd01-public-review-metadata.html
AMQP Filter Expressions Version 1.0
– https://docs.oasis-open.org/amqp/filtex/v1.0/csd01/filtex-v1.0-csd01-public-review-metadata.html
AMQP Claims-based Security Version 1.0
– https://docs.oasis-open.org/amqp/amqp-cbs/v1.0/csd01/amqp-cbs-v1.0-csd01-public-review-metadata.html

The post Invitation to comment on three new AMQP specifications appeared first on OASIS Open.


Digital Scotland

Neatebox awarded the Inclusion and Empowerment Award by WSA 2020

Neatebox, a Scottish pioneer of “Social Tech”, has been awarded the Inclusion and Empowerment Award by WSA 2020 and is announced as the only UK business within the WSA Global Top 40.

The World Summit Awards select and promote digital solutions that address global issues and improve society. Backed by the UN, the final jury of 43 internationally recognised experts selected 40 solutions from 26 countries. The winning organisations are diverse, and united by their impactful solutions for local communities, their technical finesse and strength in design.

This prestigious win for Scotland and the UK is important because it further cements their position as global digital front-runners and central to the digital accessibility conversation. It highlights the need for businesses to be responsive to their customers' needs, safety and comfort in the continuing public-health crisis. It helps us understand that safe, affordable solutions are available to ensure we are more disability aware.

Neatebox's Founder and CEO, Gavin Neate, comments: "To be recognised at any time is fantastic, but to represent your country by winning a World Summit Award for Inclusion and Empowerment at a time when disabled people are under such dreadful strain due to the effects of COVID-19 is truly inspiring for us all. We very much hope that this award will raise awareness of what is possible and lead us to work with those countries and businesses who see the provision of services for disabled and vulnerable people as a reflection of their international standing in the world."

The post Neatebox awarded the Inclusion and Empowerment Award by WSA 2020 appeared first on Digital Scotland.

Wednesday, 31. March 2021

OpenID

Picking up Speed on the FAPI Roadmap: From the FAPI RW Implementers Draft 2 to FAPI 1.0 Advanced Final


The FAPI Working Group published a FAPI Frequently Asked Questions (FAQ) as a resource for those new to FAPI and implementing FAPI. This FAQ will evolve over time as the Foundation engages with new members like the Australian Competition & Consumer Commission as well as colleagues in the US, Australia, Brazil and other jurisdictions on their open banking initiatives which were recently highlighted in an OIDF blog.

Earlier this month, the OpenID Foundation achieved a major milestone in publishing the Financial-grade API (FAPI) Parts 1.0 and 2.0 final specifications. This is a true accomplishment for open banking initiatives worldwide.

Many organizations and jurisdictions have utilized the FAPI Implementers Draft 2 as the starting point of their adoption of FAPI in defining their open banking solution stacks. To assist these organizations and jurisdictions in transitioning from the FAPI RW Implementers Draft 2 to the FAPI 1.0 Advanced Final specification, Joseph Heenan, who leads the OpenID certification work, published a non-normative list of changes document.

This document is a non-normative list of the changes between the second implementers draft of FAPI1 part 1 (read only / baseline) and part 2 (read-write / advanced) and the final versions:

Changes are listed as major if they may require alterations to implementations.
Changes are listed as minor if they are generally clarifications and are unlikely to require alterations to existing implementations that were already applying best current practice.
Most changes have both client and server clauses that reflect the same change to the protocol; such a change is only listed once below in the interests of brevity.

The progress and pace of standards development is a function of the contribution of human capital, especially domain expertise. The Foundation would like to thank member Authlete, a consistent key contributor to the FAPI standards. Authlete's contribution of Joseph Heenan's time and talent was key to the ongoing improvements in the evolution of the FAPI standard.

New to open banking and Financial-grade API (FAPI)? Get started and learn more at the FAPI microsite.

The post Picking up Speed on the FAPI Roadmap: From the FAPI RW Implementers Draft 2 to FAPI 1.0 Advanced Final first appeared on OpenID.

Elastos Foundation

Elastos DID: What’s Ahead for 2021

...

Tuesday, 30. March 2021

Ceramic Network

Community Call 12

Discussion of NFT and music projects, NFT:DID for turning NFTs into identities, and critical updates en route to mainnet.

Monday, 29. March 2021

Digital Identity NZ

Announcing new Executive Director and farewell to Andrew Weaver


Introducing the new Executive Director for Digital Identity NZ and formally farewelling Andrew Weaver as the outgoing Executive Director. 

We are excited to announce that Michael Murphy joins us as the new Executive Director for DINZ. With a wide and diverse background, Michael is active in the investment and startup communities along with business mentoring. He had some early involvement in the Digital Identity Forum, the precursor to DINZ, and so joins us with a broad understanding of the importance of digital identity in furthering a trust-based economy and society.

Over the next few months Michael will be connecting with our members and stakeholders to better understand what people need and desire from the work of Digital Identity NZ and from the wider ecosystem activity, including the Trust Framework development being led by DIA. Please do make Michael feel welcome as he appears in various events in the near future.

Executive Director – Michael Murphy

Across several roles, Michael is focused on helping Aotearoa’s best and boldest entrepreneurs in creating truly world-class companies that will employ coming generations of Kiwis and underpin the economic and social development of our country. 

As Executive Director at Digital Identity New Zealand, Michael sees trust and digital identity as being at the very heart of how people, communities, businesses and economies will thrive and prosper in a future driven by disruptive innovation. He is highly motivated to explore how the many threads of digital identity and trust can be most effectively woven together to create the firm foundations needed for the development of a truly trust based economy in Aotearoa, for the benefit of us all. 

In his other roles as investor, director, advisor and mentor, Michael works with globally ambitious start-up and early-stage ventures as well as more established businesses, looking to innovate and adopt a more exponential mindset to drive their growth. Working across both groups, cross pollinating ideas and methods, is where Michael is most energised: bringing experience and disciplined entrepreneurship to early stage ventures and helping more mature businesses explore the opportunities for growth and manage the risks presented by these massively disruptive times.

Outgoing Executive Director – Andrew Weaver

From the 31st March we bid farewell to Andrew Weaver who has so ably steered DINZ as the inaugural Executive Director since November 2018. Andrew has been such an active and positive influence in the digital identity sector it is hard to imagine him moving on. As the Chair of the Executive Council of DINZ I have had the pleasure of working with Andrew weekly over the past two years and have great admiration for his approach to the role.  

He has developed a fantastic platform from which we can continue the work of enabling a world where people can express their identity using validated and trusted digital means in order to fully participate in a digital economy and society. The fact that the past 12 months has seen DINZ adapt seamlessly to a world with COVID-19 is a testament to Andrew’s flexibility and professionalism in getting the work done.

Andrew, thank you for all you have contributed to the work of DINZ. We truly wish you all the best in wherever your journey takes you. 

Michael, a warm welcome and we look forward to working with you!

The post Announcing new Executive Director and farewell to Andrew Weaver appeared first on Digital Identity New Zealand.


Omidyar Network

Legacy, Community, and Equity: Reflecting on Women’s History Month


By Beth Kanter, Chief Advocacy & Strategic Communications Officer

For many women, March symbolizes more than just a month of observance. It’s a time where women across all industries and geographies celebrate the achievements of their peers and colleagues today and take a critical look at the opportunities they’re creating for young women just entering the workforce. It’s also a chance for us to celebrate the historic achievements of the pioneering women that came before us. At Omidyar Network, we’re constantly working to create a culture and world where all women have equitable access to the systems and powers that govern our daily lives.

Like many, I was inspired to do this work by the first woman in my life — my mother. My commitment to social justice was developed while watching her work as a special education teacher for more than 30 years. As an educator, she worked passionately to support and empower students who were often marginalized because of their race, class, or learning abilities. As an active member of her union, she understood why it was critical for people to have power and voice in the workplace. She taught me the importance of being engaged in my community and how to live my values and support my community through politics — one of my first memories is stuffing envelopes for our neighbor who was running for the Montgomery County Council and reading her NOW newsletter.

Since her sudden death in 2000, I've often wondered what she would think of some of the unfortunate events the country has endured over the years. Part of me is relieved that she did not have to witness the horrors of 9/11, the recession, and the pandemic. On the other hand, I think she would have been amazed to see the remarkable women who have trailblazed their way through the world by challenging the status quo, reimagining what's possible, and ensuring equality for the leaders of the future. My mom, along with other women before me, would have beamed with pride to bear witness to the accomplishments of Supreme Court Justice Sonia Sotomayor, Speaker of the House Nancy Pelosi, and most recently, Vice President Kamala Harris.

I am constantly awed and inspired by the incredible women grantees we work with at Omidyar Network who are shaping a new inclusive economy, ensuring a responsible tech ecosystem, and positively impacting our society. We have the honor to work with a powerhouse team of women leaders such as:

Ifeoma Ozoma with Earthseed, who's taking on Big Tech as a brave whistleblower at Pinterest, holding the company to account for racial bias and inequality;
Lisa Donner from Americans for Financial Reform, who's working to reimagine our economic system by way of representation and ensuring equality; and
Solana Rice from Liberation in a Generation, who's providing equitable access to communities for meaningful social change.

Click here for a look at how we’ve highlighted our grantees for our Women’s History Month campaign!

I feel an obligation and a great sense of joy to carry out the legacy built by these women, my mother, and many others, through the work Omidyar Network does every day. We’re committed to showing up for each other and addressing all instances of biases and inequality. I see so much of what my mother believed in and stood for in Omidyar Network’s mission and through our phenomenal team of women grantees, including:

Legacy, Community, and Equity: Reflecting on Women’s History Month was originally published in Omidyar Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


DIF Medium

Drilling Down: [Co-]Development in the Open

Drilling down: Co-development

Competitors working together in the open

Having gone over the subtleties and complexities of open-source software and open standards in general, we will now drill down into how exactly DIF advances both, in the form of a “Frequently Asked Questions” (FAQ) document. Topics covered include:

What "standardization" means to DIF and what DIF means to standardization.
A newbie-friendly survey of how DIF relates to nearby organizations with overlapping or related foci.
What "co-development" and "coöpetition" really mean, concretely.

The next installment in this series will offer some concrete steps that a DIF member organization (or even a non-member organization) can take to roll up their sleeves and get involved.

We can start with this 2020 diagram of how specific specifications fit in the landscape of open-source and open-standards organizations. Click on the image below to download a clickable PDF file.

Src: E.D. Rouven Heck 2019, updated by Interoperability Working Group, 2020

What is DIF's role in "standardizing" decentralized identity?

The governance of the decentralized identity standards space is, by design, decentralized: many different complementary organizations make steady, focused progress on specifications and codebases. In these coextensive communities, each garners consensus and buy-in for a focused set of data models and protocols with overlapping scopes and functions. Claiming the mantle of central authority for decentralization would be a pyrrhic victory for any Standards Development Organization (SDO).

Founded as a joint development project, DIF has a guiding mission to effect harmonization, interoperability, and usability among early adopters, growing rather than controlling the development community. This nurtures healthy competition in the market growing around these technologies and provides a space for competitors to work together openly but safely, balancing “code-first” open source approaches and “standards-first” or protocol-driven ones. Allowing any single standards body (even DIF itself) to set the tone for all of decentralized identity or speak on its behalf would run counter to this mission.

Instead, the DIF exists to support, promote and facilitate two processes that seem to contradict each other at first blush. On the one hand, DIF accelerates the advance of standards and pre-standard specifications and libraries for emerging, decentralized identity systems. On the other hand, DIF strives to decentralize the landscape of identity technology and encourage new forms of data governance, new business models, and meaningful competition in our corner of the sector.

We unite these two processes with a pragmatic, considered, and adoption-ready strategy that is grounded in cooperation with external stakeholders and adjacent organizations. Most of these focus more squarely on technical standards or industry coordination and development, or some narrower set of technologies or protocols; decentralizing identity requires a both/and approach, which is deeply encoded in the DNA of DIF as an organization.

Who are the other major organizations in decentralized identity?

Of the various standards development organizations with which DIF cooperates closely, the World Wide Web Consortium (W3C) clearly has pride of place. The core specifications of decentralized identity were incubated there. The verifiable credential specification was ratified (or in W3C parlance, moved to “candidate recommendation” status) over a year ago, and the decentralized identifier specification joined it two weeks ago. As its name would imply, the WWW consortium is squarely focused on the standards structuring the WWW: browsers, cookies, web servers, and other “agents” in the topology of public web browsing.
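For readers who have not seen these data models, the sketch below shows the rough shape of a DID document and a Verifiable Credential as TypeScript object literals. It is illustrative only: the identifiers, key material, credential type, and claims are placeholders chosen for this example, not values taken from either specification.

```typescript
// Illustrative sketch of the two core W3C data models. All identifiers,
// keys, and claims below are placeholders, not real or normative values.

// A DID document binds a decentralized identifier to verification material.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123456789abcdefghi",
  verificationMethod: [
    {
      id: "did:example:123456789abcdefghi#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:example:123456789abcdefghi",
      publicKeyMultibase: "z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
    },
  ],
  authentication: ["did:example:123456789abcdefghi#key-1"],
};

// A Verifiable Credential wraps signed claims an issuer makes about a subject.
const verifiableCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AlumniCredential"],
  issuer: "did:example:123456789abcdefghi",
  issuanceDate: "2021-05-18T00:00:00Z",
  credentialSubject: {
    id: "did:example:ebfeb1f712ebc6f1c276e12ec21",
    alumniOf: "Example University",
  },
  proof: {
    // A real credential carries a linked-data proof or JWS here; omitted.
  },
};
```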

Many important standards and specifications for data formats outside of the web problem space are developed at the Internet Engineering Task Force (IETF), particularly around data representations, transports, and security engineering. This includes many primitives and building blocks of modern programming, like JSON representations, cryptographic primitives, and universal tokens and authentication standards, usage of which extends far beyond the public web. There is a traditional division of labor according to which the web’s data models are standardized at W3C and its protocols are standardized in the IETF, leading to a kind of odd-couple dynamic whereby two very different cultures have grown up around these specialties.

Another major site of coordination for primitives and building blocks is the more constitutively open-source-focused Organization for the Advancement of Structured Information Standards (OASIS). Originally a governance group for XML and the earliest forms of what we now call “Big Data”, OASIS saw its purview and power extended when various global trade and messaging standards were built as extensions of XML. To this day, the Open Office document standards and much of the open-source tooling for PDF, DITA, and other non-proprietary document standards are still governed and advanced through OASIS.

DIF supports decentralized identity by incubating and co-developing prototypes and specifications, some of which are handed off to these more deliberative organizations for formal review and standardization where appropriate. Importantly, though, not all “pre-standards” work has as its goal a standardized or universalized version of that work. In many cases, a specification that was good enough to guide an experimental or contingent implementation making its way to market might serve its purpose well enough without further hardening.

These “DIF-terminal” specifications and sample implementations are apt contributions to the field and the market without going on to formal review. They serve to bootstrap and jumpstart other development by getting more credentials or identifiers into circulation, while growing the pool of developers experienced and excited about the technology and the community. This “build early” approach is crucial to refining and hardening other parts of the stack, as well as building up a market and its workforce.

How does DIF work together with these organizations?

DIF is not the only pre-standards organization in which specifications and implementations are co-developed by early adopters and explorers of this family of technologies and applications. Anywhere companies and individual experimenters collaborate on open-source projects, “co-development” is happening, but these too vary in the kinds of collaboration for which they have designed and optimized their processes.

The earliest work in this space, as well as much of the long-view-oriented work developing tooling for open-world data models and Linked Data, takes place in the Credentials Community Group (CCG) of the W3C. The CCG is a community group operated through the W3C but open to participants that do not pay W3C dues. It promotes and hosts pre-standards work and maintains dialogue with various other W3C groups, not just those chartered to work on the VC and DID specifications. Since the CCG is hosted by the W3C, it inherits many of its procedures and specification-publication processes from the members-only work of the Consortium. The Secure Data Storage working group is cosponsored by the CCG and DIF to maximize community-wide participation in creating a pre-standards specification for Confidential Storage structures that can serve many different kinds of decentralized identity and data systems.

The Linux Foundation houses a complex family of Hyperledger projects, which coordinate global contributors to massive open-source codebases that power blockchains and other decentralized data and identity systems. Indy, the first major fit-for-purpose permissioned distributed ledger optimized for decentralized identity applications, is the oldest and most high-profile of the identity-focused projects in Hyperledger, designed to anchor decentralized identities and governed by the Sovrin Foundation.

Another major Hyperledger project is the modular and progressively more blockchain-agnostic Aries community, which evolved out of Indy-specific open-source systems. The Aries Project offers many common libraries, entire single-language frameworks, and cross-framework protocols so that small companies can quickly build into an ecosystem of interoperable service providers and business models. The DID-Communications working group is co-sponsored by Aries and DIF, allowing members of either group to participate directly in the specification of a DID-based communications protocol for the entire decentralized identity community, not just the ecosystem built around the Aries codebases.

Other groups also overlap and coordinate with DIF in their development of implementations and specifications in advance of formal standards. One of these is the OpenID Foundation (OIDF), which governs increasingly widespread standards for authentication that federate identity systems across most of the commercial consumer internet. As with Hyperledger Aries and CCG, this relationship is formalized as a “liaison agreement” between the two organizations, and also entails a Liaison Officer to coordinate the relationship; DIF’s DID Auth WG was put on pause in mid 2020 to focus on work happening in OIDF working groups, and hopes to re-open regular meetings to work on items on the DIF side again soon.
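As a point of reference for what that federation looks like on the wire, here is a minimal sketch of a standard OpenID Connect authorization request, the kind of flow OIDF standardizes. The provider URL, client identifier, and redirect target are hypothetical placeholders rather than real endpoints.

```typescript
// Minimal sketch of an OpenID Connect authorization request (code flow).
// The provider, client_id, and redirect_uri are hypothetical placeholders.
const authorizeUrl = new URL("https://op.example.com/authorize");
authorizeUrl.search = new URLSearchParams({
  response_type: "code",           // authorization code flow
  scope: "openid profile",         // "openid" marks this as an OIDC request
  client_id: "example-relying-party",
  redirect_uri: "https://rp.example.com/callback",
  state: "af0ifjsldkj",            // opaque value echoed back; protects against CSRF
  nonce: "n-0S6_WzA2Mj",           // binds the returned ID token to this request
}).toString();

// The relying party redirects the user's browser to this URL; after the user
// authenticates, the OpenID Provider returns an authorization code that the
// relying party exchanges for an ID token describing the user.
console.log(authorizeUrl.toString());
```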

Another is the Linux Foundation’s first project devoted exclusively to data governance and related legal and ecosystem management issues, the Trust-over-IP Foundation (ToIP), where many DIF members work on task forces and working groups on compliance issues, consent tracking, data-capture best practices, and industry-specific governance issues. The decentralization-friendly but wider-scoped Kantara Initiative, which promotes user-centricity and data subject rights in legacy identity systems, is also an important touchstone for regulatory and legal issues. The Sovrin Foundation continues to maintain the active Indy production network and house many important collaborations and working groups, and is represented in many DIF working groups.

Coöpetition and DIF’s particular brand of co-development

The way DIF relates to other organizations working in the same space arose naturally from DIF optimizing its processes for neutrality and collaboration between companies of different sizes and market positions. DIF’s culture is one of business collaboration aligned with the increasingly buzzy term “co-opetition” (sometimes spelled “coöpetition” to emphasize the pronunciation). The term is a portmanteau combining cooperation and competition, but the clearest definition might be “coöperation between competitors, who remain competitors before and after coöperating”.

“Coöpeting” early in the design process makes for a unique pact between competitors, one that is crucial to the DNA of DIF: it is a forward commitment, before the design process, to cooperate through and after release of an open standard or codebase. This kind of pact makes the most sense in situations where network effects and portability of data and users matter more than platform control and vendor lock-in. Indeed, when your business plan is to maximize openness and interoperability, you end up cooperating earlier and more deeply than in more siloed forms of “open source” development. After all, much of what we consider open-source was engineered for a single vendor’s business goals and opened up after release, once all the major design decisions had been made and tested. The difference of timeline and design process is hard to overstate!

Put bluntly, committing to coöpetitive development of common standards and protocols is committing in advance to a truly open standard, rather than one which advantages its designers on its path to general adoption. This kind of pact is particularly useful when forged between companies that are direct competitors or companies of vastly different sizes.

Cooperating in the patent-free zone

In large part, competitors and companies of different sizes find it far easier to work together substantively on new ideas once a safe space has been established for them. Startups stay up at night worrying that their best ideas could be patented out from beneath them by collaborators with well-staffed legal departments, and large enterprises hesitate to do any work in the open that might endanger their R&D inventions. The solution for both is often coming together openly on a precisely-scoped working group or project with proper “IPR protections.”

Put in plain language, the IPR agreements protecting DIF’s work prevent all contributors from taking legal action against the results of the group, whether on the basis of previously-held patents, or the new ideas originating in the group.

The terms of these IPR agreements can seem cumbersome to people coming from non-profit or academic research, but in the private sector and particularly at industrial or global scale, the enforceability in global courts of ownership over ideas and patents is absolutely essential to software investments, infrastructural governance, and even geopolitics. The protocols, specifications, and libraries co-authored in an IPR-protected group can be thought of as safe dependencies, which cannot have their open-source status (i.e., their royalty-free status!) endangered by the current or pre-existing intellectual property of any of its contributors. The relative degree of this safety is of huge importance to infrastructural decision-making and long-term planning. These assurances can be an important insurance policy against one major risk in software development — patent action against code or its dependencies.

Another crucial way in which this kind of “co-development” protects participants from the risks inherent in cooperating with competitors is that DIF, not the contributing member organizations, “owns” the products of its members’ collective labors. In concrete terms, this means the ongoing governance, maintenance, and licensing of the products of working groups, often referred to as “technical control” of resulting code or specifications, stays with the Foundation rather than with any of the legal entities contributing. The risk that a trusted co-development partner will change its strategy, its culture, or its ownership and thus its governance structure is only one of the many unsavory surprises a product or a company can meet on the road to market.

This means that if the relationship sours between two parties collaborating on a standard or a reference implementation, the work might “fork” and take on two different future paths, but the version ratified by DIF stays open to the public under DIF’s management. In such a case, any DIF member can join the relevant group and maintain the project’s main branch, or influence future versions if the group continues the open work. In fact, DIF’s licensing even imposes some restrictions on contributing organizations forking a DIF collaboration to continue the work elsewhere, whether further development is carried out in the open or not.

It is important to note that DIF is not original here, but rather, standing on the shoulders of giants. DIF was founded within the Joint Development Foundation, now part of the Linux Foundation, and inherits not only its licenses and legal structures but also its culture of co-development and IPR protection. While the staff of DIF is happy to help socialize, navigate, and enforce these structures, none of us are lawyers and we defer to the professionals (including those available to DIF through the JDF) for all serious matters and conflict resolution on the subject of intellectual property.

Drilling further down

We’ve sketched out the different kinds of open source development and the many ways variously-open standards can structure markets, and positioned DIF among all these concepts and its neighbors in the community. There’s not much further down we can drill in general terms: it’s time to get particular and start positioning you and your organization in this landscape! In our next piece, we’ll sketch out how you can get strategic about open source, roll up your sleeves, and get your hands dirty at DIF.

Drilling Down: [Co-]Development in the Open was originally published in Decentralized Identity Foundation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Berkman Klein Center

‘There has been less of a buffer’: discussing intimate partner violence during the pandemic

Berkman Klein Center event explores how technology factors into pandemic response

Kendra Albert, Tanya Cooper, Roslyn Satchel, and Thema Bryant-Davis during a Berkman Klein Center event on “Marginalized Women, Technology, COVID-19, and Intimate Partner Violence.” Screenshot by Lydia Rosenberg.

The COVID-19 pandemic has upended myriad aspects of everyday life, from education to the economy to interpersonal relationships. For victims and survivors of intimate partner violence, the many struggles of escaping or finding support are particularly exacerbated by the pandemic. An event hosted by the Berkman Klein Center explored how technology factors into this problem — how it can help, and how it can harm.

Moderated by Roslyn Satchel, a fellow at the Berkman Klein Center and the Blanche Seaver Professor of Communication at Pepperdine, the event convened an interdisciplinary group of scholars to discuss various issues surrounding intimate partner violence, particularly for marginalized women, during the pandemic. The discussion covered fostering safe spaces online, access to resources, and navigating the legal system during the pandemic.

Thema Bryant-Davis, an associate professor of psychology at Pepperdine University, and director of the Culture and Trauma Research Lab, spoke about coping with trauma and supporting victims and survivors.

“The reality is that COVID-19 has also increased the risk of intimate partner violence when we see the large number of people who are practicing physical distancing and social distancing. There has been less of a buffer, less of an opportunity to come outside of the home and to seek services. And so we are mindful of the urgency of this work, as well as the additional barriers that make help-seeking difficult,” Bryant-Davis said.

“Part of what we are mindful of in the midst of COVID is the need for technology to be utilized in order to reach survivors and in order to protect survivors and help us to heal and restore,” Bryant-Davis said. “Many clinicians like me are currently practicing telehealth solely, so working with people in their healing journey by phone and by Internet which has benefits and challenges.” She pointed to easier access to resources not requiring travel, but also the dangers of sharing spaces with abusers.

Tanya Asim Cooper, associate clinical professor of law and director of the Restoration and Justice Clinic, shared her experience working with victims of intimate partner violence from a legal perspective and illustrated her work through a case study.

“Studies show racial disparities in domestic violence for victims of color, predominantly women and they are in the greatest danger,” she said. “From my experience and based on my research, victims of color generally are perceived as less credible victims, suffer more serious violence, and require more concrete evidence of abuse, especially physical violence. They need photographs, they need not just medical records, they need medical professional live testimony.”

These demands are hard for people to meet — especially during the pandemic, with courts closing and forcing victims to wait. Cooper described how her clinic transferred their work online, but how she worries about people lacking technological skills or access to the internet.

“If law enforcement and courts can’t or won’t assist, especially during the pandemic, let’s equip faith communities and other online communities where marginalized women go, and let’s equip them to help,” she said.

Several participants mentioned the idea of fostering online communities and safe digital spaces during the event. Kendra Albert, a clinical instructor at Harvard Law School’s Cyberlaw Clinic, prompted the panel to question some of the values embedded in technologies and online tools and how those can be challenged in the intimate partner violence context.

As examples, Albert cited how phone companies may provide data about recently contacted phone numbers because “insiders are assumed to be safe.” Albert also pointed to Facebook’s “People You May Know” feature as a tool that has been shown to suggest mutual friends but that may expose degrees of connection that should remain private, such as therapist relationships. “It’s just one example of these technologies that are often built on the assumption that more connection is better, that Facebook wants to connect to all of us or thinking about sharing as a net good, while not considering the very real reasons that people’s sharing or engagement with these platforms might be deeply contextual and they may have concerns about this information getting shared more broadly,” Albert said.

Although these contexts and situations are often not considered when developing such technology or their values, marginalized people are able to adapt the technologies to their circumstances, circumventing the intended uses and norms, Albert added.

“The reality is marginalized folks have been using, thriving, and changing technologies that were built without them in mind forever, for as long as we’ve had technologies,” they said. “And that we can think of examples of marginalized folks innovating and actually driving these technologies forward as the things that create change and in some cases, create money for these platforms. So I think while still keeping in mind that domestic violence victims and survivors, especially domestic violence victims and survivors that are women of color, Black and brown women could be much better served by these platforms and that their needs for contextual controls and control over one’s information.”

Towards the end of the conversation, Satchel drew connections between the speakers; she highlighted the theme of safety and security and acknowledged that the event is continuing an important conversation about centering the experiences of marginalized people and the value of cross-disciplinary discourse.

“This question of safety and security is really animating the very core of this discussion. And as a communication ethicist, as a person who actually studies communication in a variety of contexts, I really value interdisciplinarity. Why? Because it allows for conversations like this to happen,” she said. “The beauty is that we’re coming back to a common core, marginalized women. Women who are marginalized by race, socioeconomic status, ability, gender, sexual orientation, language, ethnicity, immigration status, and many more caste markers have very unique experiences, unique experiences that may very well cause them to call for different solutions and different options for justice.”

A podcast version of the event is available here. For more BKC events, visit our website and sign up for our events newsletter.

‘There has been less of a buffer’: discussing intimate partner violence during the pandemic was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


Good ID

FIDO Recognition for European Digital Identity Systems and eIDAS Grows


Contributed by Sebastian Elfors, Senior Solutions Architect, Yubico

Recognition of the value of FIDO in European digital identity systems and eIDAS continues to grow.  This month has featured two new updates in Europe on the FIDO front: the release of a landmark ENISA report that discusses the role FIDO2 plays in eIDAS, and the accreditation by the Czech government of a new eID solution using FIDO2.

In March 2021, the EU Cybersecurity Agency (ENISA) issued the report Remote ID Proofing, which describes the current regulatory landscape and supporting standards for the European countries’ remote identity proofing laws, regulations and practices. ENISA’s report is based on the ETSI TR 119 460 and ETSI TS 119 461 documents, which describe the policies and practices for remote identity proofing among trust service providers in the EU. Notably, the eIDAS regulation, the AMLD5 directive to prevent money laundering, and the EU directives on issuing ID cards and exchanging identity information have been taken into account from a legal perspective.

Several methods for remote identification are proposed in the ENISA report: video-recorded sessions, identification based on eID schemes or electronic signatures, bank identification, scanning of existing ID cards, or a combination of several methods. In particular, the option to identify a user with an eID scheme is of interest from a FIDO perspective. The following statement is written in section “2.2.4 Electronic identification means” of the ENISA report:

“A protocol used by several electronic identity means providers is OpenID connect. It is an authentication layer on top of OAuth 2.0 and is specified by the OpenID foundation. This protocol allows to verify the identity of the applicant based on the authentication performed by an Authorization Server, and by obtaining basic information about the applicant. Another technology that can be used in eID solutions is FIDO2. The FIDO Alliance explains in a whitepaper how FIDO2 can be used for eID means corresponding to eIDAS article 8.”

In the very same month, the Czech Ministry of the Interior issued eIDAS accreditation for the Czech domain registry CZ.NIC, meaning that its identity provider mojeID can deploy FIDO2 as an eID scheme at eIDAS level of assurance High under the following conditions:

- The FIDO2 authenticator is FIDO certified at Level 2 (or higher)
- The FIDO2 authenticator is based on a secure element that is certified at FIPS 140-2 Level 3 or Common Criteria EAL4 + AVA_VAN.5
- The FIDO2 authenticator has a PIN set and the PIN is required for all transactions at level of assurance High
- Username and password are used in conjunction with FIDO2
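To make these conditions more concrete, here is a minimal, hypothetical sketch of the browser side of a FIDO2 (WebAuthn) registration. The relying-party identifiers, user record, and challenge handling are placeholders; a production eID deployment would generate the challenge server-side and verify the returned attestation against the certification requirements listed above.

```typescript
// Hypothetical sketch of a FIDO2/WebAuthn registration in the browser.
// Names and values are placeholders, not taken from the mojeID deployment.
async function registerFidoCredential(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      // In a real flow the challenge is issued and later checked by the server.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { id: "eid.example.cz", name: "Example eID Provider" },
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "jan.novak@example.cz",
        displayName: "Jan Novák",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        // "required" forces user verification (e.g. the authenticator PIN),
        // mirroring the PIN condition for level of assurance High.
        userVerification: "required",
        authenticatorAttachment: "cross-platform", // roaming security key
      },
      // "direct" attestation lets the server examine the authenticator's
      // certification (FIDO Level 2, FIPS/Common Criteria secure element).
      attestation: "direct",
    },
  });
}
```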

Both ENISA’s report on remote identity proofing and the official approval of CZ.NIC’s FIDO-based eID scheme are great examples of how FIDO has been recognized as a viable authentication protocol for eIDAS compliant eID schemes in the EU.

The post FIDO Recognition for European Digital Identity Systems and eIDAS Grows appeared first on FIDO Alliance.