Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!
The post Education Design Lab’s Bill Hughes’ keynote from Velocity’s Education Ecosystem event appeared first on Velocity.
The post Beyond Academics’ Matt Alex’s keynote from Velocity’s Education Ecosystem event appeared first on Velocity.
The post DEMO: SmartResume’s Ian Davidson on talent matching appeared first on Velocity.
The post DEMO: Nuevosmedios’ Enrique José Peña Fajardo on learning management systems appeared first on Velocity.
The post DEMO: Credential Engine’s Deborah Everhart on semantic interoperability appeared first on Velocity.
The post DEMO: Talview’s Sanjoe Jose on talent assessment appeared first on Velocity.
The post DEMO: Greenlight’s Shrikant Jannu on the path from education to work appeared first on Velocity.
The post DEMO: Sertifier’s Arda Helvacılar on cross-border mobility appeared first on Velocity.
The not-for-profit, Digital Identity New Zealand (DINZ), has provided a briefing to incoming Ministers Collins, Lee and van Velden highlighting the opportunity to move Aotearoa New Zealand forward to become a more productive digital economy through the adoption and use of digital identity (identification).
In its briefing, DINZ emphasises the critical role digital identity plays in fostering trust in online transactions and services.
“A trusted digital economy necessitates assurances that users are who they claim to be, that the services we use are genuine and that products we purchase are what they claim to be. This ecosystem of mutual trust is crucial for any government promoting its country’s digital economy, domestically and to the world,” says DINZ Executive Director Colin Wallis.
The briefing outlines opportunities and challenges, including improving digital identity laws and implementation in New Zealand, alongside recommendations to help move New Zealand forward to a more efficient and productive economy.
Opportunities and Challenges:
Globally, the digital identity verification check market was valued at $11.6 billion USD in 2022, expected to reach $20.8 billion USD by 2027.
DINZ estimates that embracing digital identity could boost New Zealand’s economy by 0.5%-3% of GDP, equating to roughly $1.5 billion to $9 billion NZD.
Despite the positive step of passing the Digital Identity Services Trust Framework (DISTF) Act in April 2023, development of digital identity in New Zealand has arguably been slower than in comparable common law jurisdictions. However, DINZ acknowledges the positive efforts by DIA in compiling the rules for accreditation and its intentions regarding market uptake.
DINZ sees five opportunities to improve digital identity related laws and implementation:
Work with DINZ through the implementation of the DISTF Act
Make sure the digital identity related requirements in the Customer & Product Data bill harmonise with the DISTF
Ensure the Privacy Act 2020 appropriately directs how biometrics technology collects, processes and stores information, with input from DINZ
Fill the current vacuum in child online safety by leveraging work undertaken in the UK and EU
Work with DINZ to make authoritative sources of identity available digitally as verifiable credentials.
“DINZ is enthusiastic about engaging with government officials over the next five years. We look forward to contributing to the development of key digital identity initiatives, educating the public and businesses, and supporting the implementation of the DISTF,” says Colin.
View DINZ’s Briefings to Incoming Ministers here.
For media inquiries, please contact:
Colin Wallis
DINZ Executive Director
colin.wallis@digitalidentity.nz
021 961 955
The post Digital Identity New Zealand (DINZ) Briefs New Ministers on Advancing Digital Identity Landscape appeared first on Digital Identity New Zealand.
Working on the “Document, Review, and Implement Hyperledger AnonCreds ZKP Cryptographic Primitives” Hyperledger Mentorship project has been a thrilling journey, filled with challenges, learnings, and a deep sense of contribution to a broader technical community. As a mentee of this project, I delved into various aspects that not only expanded my technical skills but also provided insights into the vast potential of verifiable credential solutions.
Also: Implementation Guidance for ECF v4.1 - Committee Note
We are pleased to announce that Electronic Court Filing Version 4.1 & Version 5.01 and Electronic Court Filing Web Services Service Interaction Profile Version 4.1 & Version 5.01 from the LegalXML Electronic Court Filing TC [1] have been approved as OASIS Committee Specifications, and are now available.
In addition, the ECF TC members have published the Committee Note “Implementation Guidance for Electronic Court Filing Version 4.1.” It provides non-normative guidance to implementers of the LegalXML Electronic Court Filing Version 4.1 specification.
ECF defines a technical architecture and a set of components, operations and message structures for an electronic court filing system, and sets forth rules governing its implementation.
Version 4.1:
LegalXML Electronic Court Filing Version 4.1 (ECF v4.1) consists of a set of non-proprietary XML and Web Services specifications, along with clarifying explanations and amendments to those specifications, that have been added for the purpose of promoting interoperability among electronic court filing vendors and systems. ECF Version 4.1 is a maintenance release to address several minor schema and definition issues identified by implementers of the ECF 4.0 and 4.01 specifications.
Electronic Court Filing Web Services Service Interaction Profile defines a Service Interaction Profile, as defined in section 5 of the ECF v4.1 specification. The Web Services Service Interaction Profile may be used to transmit ECF 4.1 messages between Internet-connected systems.
Version 5.01:
Electronic Court Filing Version 5.01 (ECF v5.01) consists of a set of non-proprietary XML and Web Services specifications developed to promote interoperability among electronic court filing vendors and systems. ECF v5.01 is a minor release that adds new functionality and capabilities beyond the scope of the ECF 5.0, 4.0 and 4.01 specifications that it supersedes.
Electronic Court Filing Web Services Service Interaction Profile defines a Service Interaction Profile (SIP), as defined in section 7 of the ECF v5.01 specification. The Web Services SIP may be used to transmit ECF 5.01 messages between Internet-connected systems.
The documents for these four Committee Specifications and related files, as well as the new Committee Note, are available here:
Electronic Court Filing Version 4.1
Committee Specification 01
29 September 2023
Editable source (Authoritative):
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/ecf-v4.1-cs01.docx
HTML:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/ecf-v4.1-cs01.html
PDF:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/ecf-v4.1-cs01.pdf
XML schemas:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/xsd/
XML sample messages:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/xml/
Model and documentation:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/model/
Genericode code lists:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/gc/
Specification metadata:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/xsd/metadata.xml
Complete package in ZIP file:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/cs01/ecf-v4.1-cs01.zip
************************
Electronic Court Filing Web Services Service Interaction Profile Version 4.1
Committee Specification 01
29 September 2023
Editable source (Authoritative):
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v4.1/cs01/ecf-webservices-v4.1-cs01.docx
HTML:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v4.1/cs01/ecf-webservices-v4.1-cs01.html
PDF:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v4.1/cs01/ecf-webservices-v4.1-cs01.pdf
WSDL files:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v4.1/cs01/wsdl/
WSDL examples:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v4.1/cs01/wsdl/examples/
Complete package in ZIP file:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v4.1/cs01/ecf-webservices-v4.1-cs01.zip
***************************
Electronic Court Filing Version 5.01
Committee Specification 01
29 September 2023
Editable source (Authoritative):
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/ecf-v5.01-cs01.docx
HTML:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/ecf-v5.01-cs01.html
PDF:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/ecf-v5.01-cs01.pdf
XML schemas and Genericode code lists:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/schema/
XML example messages:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/examples/
Model and documentation:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/model/
UML model artifacts:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/uml/
Complete package in ZIP file:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/cs01/ecf-v5.01-cs01.zip
************************
Electronic Court Filing Web Services Service Interaction Profile Version 5.01
Committee Specification 01
29 September 2023
Editable source (Authoritative):
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v5.01/cs01/ecf-webservices-v5.01-cs01.docx
HTML:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v5.01/cs01/ecf-webservices-v5.01-cs01.html
PDF:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v5.01/cs01/ecf-webservices-v5.01-cs01.pdf
WSDL schemas:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v5.01/cs01/schema/
XML WSDL examples:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v5.01/cs01/examples/
Complete package in ZIP file:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v5.01/cs01/ecf-webservices-v5.01-cs01.zip
***************************
Implementation Guidance for Electronic Court Filing Version 4.1
Committee Note 01
16 October 2023
Editable source (Authoritative):
https://docs.oasis-open.org/legalxml-courtfiling/ecf-guide/v4.1/cn01/ecf-guide-v4.1-cn01.docx
HTML:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-guide/v4.1/cn01/ecf-guide-v4.1-cn01.html
PDF:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-guide/v4.1/cn01/ecf-guide-v4.1-cn01.pdf
Complete package in ZIP file:
https://docs.oasis-open.org/legalxml-courtfiling/ecf-guide/v4.1/cn01/ecf-guide-v4.1-cn01.zip
***************************
Members of the ECF TC [1] approved these specifications by Special Majority Vote. The specifications had been released for public review as required by the TC Process [2]. The vote to approve as Committee Specifications passed [3], and the documents are now available online in the OASIS Library as referenced above.
Our congratulations to the TC on achieving these milestones and our thanks to the reviewers who provided feedback on the specification drafts to help improve the quality of the work.
========== Additional references:
[1] OASIS LegalXML Electronic Court Filing TC
https://www.oasis-open.org/committees/legalxml-courtfiling/
[2] History of publications, including public reviews:
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v4.1/csd02/ecf-v4.1-csd02-public-review-metadata.html
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v4.1/csd02/ecf-webservices-v4.1-csd02-public-review-metadata.html
https://docs.oasis-open.org/legalxml-courtfiling/ecf/v5.01/csd03/ecf-v5.01-csd03-public-review-metadata.html
https://docs.oasis-open.org/legalxml-courtfiling/ecf-webservices/v5.01/csd03/ecf-webservices-v5.01-csd03-public-review-metadata.html
[3] Approval ballot:
https://www.oasis-open.org/committees/ballot.php?id=3796
The post Electronic Court Filing v4.1 & v5.01 and ECF Web Services SIP v4.1 & v5.01 Committee Specifications Published appeared first on OASIS Open.
“Consumers demand more information about the products they’re purchasing, regulators require the disclosure of more information and there’s an ongoing need to more effectively track and trace products through the supply chain. We can resolve this with 2D barcodes with GS1 standards inside – a single barcode that has the power to provide all the information consumers need and desire, improve traceability through the supply chain, and scans at checkout.”
Jon R. Moeller,
Chairman of the Board, President and Chief Executive Officer, Procter & Gamble
P&G CEO, Jon Moeller, urges industry leaders to transition from traditional barcodes to 2D barcodes with GS1 standards inside to improve consumer experiences and supply chain efficiencies.
In a letter sent to the Board of Directors of the Consumer Goods Forum (CGF), the organisation bringing together retailers and manufacturers globally, Jon Moeller has called for a global transition to more powerful 2D barcodes on all product packages. The replacement of the traditional barcode, currently present on 1 billion products, with the more capable new version should be a “primary focus” for industry, according to Moeller.
The omnipresent GS1 barcode used on today’s packaging is now 50 years old. Its adoption in 1973 revolutionized supply chains and forever changed how we buy and sell products. Barcodes are currently scanned over 10 billion times per day and continue to bring numerous benefits to industry. However, there is a critical need to take barcode technology to the next level.
Shoppers’ expectations and behaviors have changed dramatically in the last few years. Consumers are now “hyper-connected” and research product information while shopping. There is a growing appetite for sustainability related information, including recycling and general guidance on how best to use products.
The transition to a more powerful 2D barcode using GS1 standards can make a great difference. According to Moeller, CEO of the world’s largest consumer goods company, “2D barcodes using GS1 standards provide strong benefits to consumers and shoppers, as they can gain access to very specific product information beyond what is displayed on the label: for example, usage instructions, product safety, ingredients, nutrition, certifications, recycling, expiration dates, promotions, and more.”
2D barcodes are far superior to traditional barcodes due to their capacity to hold significantly more data. They also provide a better consumer experience. When a 2D barcode is encoded using the GS1 Digital Link standard, it can be scanned by a smartphone, giving consumers, retailers, regulators, and manufacturers access to vastly more information. It is this wealth of data in everyone’s pockets that will put consumers in the driver’s seat to make more sustainable, safer, and informed decisions.
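To make the mechanics concrete, here is a small, hedged TypeScript sketch of how a GS1 Digital Link URI can be assembled from standard GS1 Application Identifiers (01 for GTIN, 10 for batch/lot, 21 for serial, 17 for expiry). The domain and product values are placeholders, and production encodings should follow the GS1 Digital Link URI syntax standard rather than this simplified illustration.

```typescript
// Illustrative sketch of a GS1 Digital Link URI, the web-friendly encoding
// that lets a smartphone resolve a 2D barcode to product information.
// Domain and values are placeholders; real URIs must follow the
// GS1 Digital Link URI syntax standard.

interface ProductIdentifiers {
  gtin: string;    // AI 01 – Global Trade Item Number
  lot?: string;    // AI 10 – batch/lot number
  serial?: string; // AI 21 – serial number
  expiry?: string; // AI 17 – expiration date, YYMMDD
}

function buildDigitalLink(base: string, p: ProductIdentifiers): string {
  let path = `/01/${p.gtin}`;
  if (p.lot) path += `/10/${encodeURIComponent(p.lot)}`;
  if (p.serial) path += `/21/${encodeURIComponent(p.serial)}`;
  const query = p.expiry ? `?17=${p.expiry}` : '';
  return `${base}${path}${query}`;
}

// Example: a hypothetical product with a GTIN, lot number, and expiry date.
console.log(buildDigitalLink('https://id.example.com', {
  gtin: '09506000134352',
  lot: 'ABC123',
  expiry: '261231',
}));
// -> https://id.example.com/01/09506000134352/10/ABC123?17=261231
```

A retailer's point-of-sale scanner can read the same symbol for checkout, while a shopper's phone resolves the URI to a product page.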
50 years after the adoption of the original barcode, GS1 is working with industry to gradually replace all existing barcodes with the new technology. We are entering a transition period that will see an increasing number of products carrying both existing and 2D barcodes. The goal is to ensure comprehensive rollout by 2027.
“We expect that 2D barcode adoption will grow at different rates around the world, but one thing is certain: those that accelerate through this transformation the fastest will be best positioned to unlock valuable new capabilities and benefits”, claims Jon Moeller.
You can access the P&G CEO letter here, and find more information about 2D barcodes here.
We had the pleasure of chatting with the Identity Woman herself, Kaliya Young, in the latest episode of The Identity at the Center Podcast. We delved into the captivating world of decentralized identity, exploring its challenges and potential. We covered a range of topics, including Kaliya's journey into the field of identity and her involvement in co-founding the Internet Identity Workshop. We also discussed the definition of decentralized identity and its impact on existing models in the world of IAM. Additionally, we explored the progress made by other countries, like Estonia, in implementing decentralized identity and the challenges faced by the USA. Kaliya also shared her insights on digital wallets and the key takeaways for our audience. Tune in to episode #248 now on idacpodcast.com or in your podcast app.
#iam #podcast #idac
Elastos, a pioneer in decentralised internet solutions, is working on a significant innovation. Since its inception, Elastos has maintained a close link with Bitcoin, sharing a merge-mined history since early 2018. Now, Elastos introduces ‘Bitcoin Elastos Layer2,’ codenamed ‘BeL2,’ a Layer 2 solution for Bitcoin, signifying a significant evolution in its journey. Next week, Elastos will release the ‘Be Your Own Bank’ BeL2 whitepaper. To be the first to read it, turn on notifications on Elastos Twitter for its announcement and join the community’s Telegram group. Here is some supporting information in the lead-up!
Bitcoin and Elastos History
Elastos’ journey with Bitcoin dates back over five years, beginning when BTC.com merged-mined its first block for Elastos’ mainchain at no extra cost, earning ELA rewards. This contributed to gaining over 50% of Bitcoin’s hash power security in the subsequent years. Merge mining with Bitcoin leverages its robust and battle-tested Proof of Work infrastructure, offering unparalleled security to Elastos at a fraction of the cost. This approach not only simplifies the SmartWeb ecosystem by supporting resource-sharing but also fosters a symbiotic relationship, enhancing both security and rewards across networks. It’s Elastos’ fundamental belief that without Bitcoin’s security, no ecosystem is truly decentralised. You can learn more about Elastos’ Bitcoin-powered architecture here.
Bitcoin-secured ELA has been powering transactions across its ecosystem, from gas to staking, and it’s used yearly in its Cyber Republic DAO governance layer for election voting and as collateral for council member participants. This historical connection with Bitcoin now forms the foundation for Elastos’ new direction – leveraging its long-standing relationship to build “BeL2”, a Bitcoin Layer 2 solution aimed at making the $700 billion worth of value on its Layer 1 more adaptable and intelligent using Elastos technology for various applications in the digital economy.
The BeL2 Architecture
BeL2 represents an exciting move for Elastos, aligning with Bitcoin’s ethos while expanding its utility. BeL2 will augment Bitcoin’s capabilities without altering its core principles. BeL2 aims to address Bitcoin’s limitations – transaction speed, smart contract complexity, and privacy issues, by layering Elastos’ SmartWeb technology atop Bitcoin’s robust infrastructure.
Potential Use Cases
BeL2 revolutionises how value is leveraged within the Bitcoin ecosystem. For example, pledging Bitcoin on BeL2 can unlock USDT loans, usable across various platforms.
Roadmap
Months of planning have led to an architecture plan that harnesses the strengths of both Elastos and Bitcoin. The upcoming BeL2 whitepaper, scheduled for early December, will detail the operational mechanics and timeline.
Project Leadership
The Elastos Foundation will be sponsoring BeL2. Sasha Mitchell, the CEO and Founder of Elacity and a long-standing member of the Cyber Republic Council (CRC), has been asked to lead BeL2. Working alongside him is Jon Hargreaves, known for launching platforms like Cosmos and LinkedIn, and recently backed by the Cyber Republic DAO. Alongside other ecosystem teams such as Infinity and the Guardians, their mission is ambitious yet clear – to drive market-wide utilisation of Elastos through BeL2 and ignite interest in the SmartWeb.
2024 is shaping up to be a very exciting year. Not only will we witness the release and enhancement of Elacity DRM for video, showcasing Rong Chen’s 2017 vision for the first time, but Elastos’ move to develop BeL2 represents more than a strategic shift; it’s a redefinition of Bitcoin’s capabilities using SmartWeb technologies. Next week, Elastos will release the BeL2 whitepaper and immediately begin its execution. To be among the first to read it, turn on notifications on Elastos’ Twitter for its announcement and join the community’s Telegram group.
The post Elastos Introduces BeL2: Revolutionising Bitcoin’s Layer 2 Infrastructure appeared first on Elastos.
We’ve written a lot about how to develop healthy communities, but what we’ve not explicitly written about in this context is the overlap between “education” and “work”. CoPs provide an excellent way to continue life-long learning, network with other professionals who are interested in a topic and create change in areas we strive to improve. All of these benefits are only possible, though, if you have a healthy, sustainable community.
We’re pleased that the Open Recognition is for Everybody community continues to grow. Not only do new people pop into the CoP, members are developing real relationships with one another. In this post, we’ll share some outputs from our annual birthday survey and show how trust and connection are keystones to a CoP.
Data
To measure the health of a community we can look at raw numbers, such as the number of new members that have joined in a specified time period. With over 430 people in the ORE community, it’s grown 27% over the past year.
But that doesn’t tell the whole picture, which is why it’s important to use structured surveys to discover other data, including sentiment. Last year, we showed you how to gather data on a CoP using an approach from McMillan and Chavis’s 1986 paper ‘Sense of community: A definition and theory’.
This time, we again made sure that we not only asked the same questions, but also included a box for free-text entry so people could give us unfiltered feedback. We’re sharing some of the positive results from the survey. It wasn’t all sunshine and rainbows, but the forecast is healthy.
Trust within a community is crucial for collective success and is in itself a form of growth. The more people trust each other, the easier it is to move together towards a collective vision. High scores on our 4-point scale, which intentionally lacks a neutral middle option, indicate strong trust levels.
That’s all very well in practice, but what about in theory?
Last week, we shared short recap videos of Community Conversations, a three-part workshop series about making the most of your Community of Practice (CoP).
In this post, instead of walking you through the methodologies and metrics, we’re simply going to share a few highlights from the results of the ORE birthday survey, linking it to what we covered in Community Conversations.
Value
Value cycles in relation to CoPs refer to the different stages of value creation that occur as members of a CoP interact, share knowledge, and learn from each other. These cycles help to assess and understand the value generated by learning in social contexts.
“I appreciate opportunities to connect with peers, regardless of differences in geographical location and time zones.” (survey participant)
One of the key things that members of the ORE community value is connection, a feeling of going deeper than the ‘Immediate value’ of Cycle 1 and gaining fulfilment from being able to apply what’s learned in the community to their own work (Cycle 3).
‘Vibes’ are not something easy to measure, but they are important for any kind of CoP, including the ORE community. Thinking about the different kinds of value that communities can generate was the focus of our first Community Conversations workshop.
Maturity
The work we have done around maturity models is all about navigating your community through its growth stages. We’ve been heavily influenced by the work of Emily Webber, Bailey Richardson, Kevin Huynh & Kai Elmer Sotto.
“This community is emergent and resilient. I feel we have been coming out of the closet in the last year and building momentum like never before… signals are that we might be reaching the inflection point of the exponential curve.” (survey participant)
There are ups and downs in the life of any online community, but the idea is that community members feel like they are part of something that is evolving, and that they feel a connection to others.
Again, ‘connection’ is a difficult thing to measure, but asking how important the community is to members is a good way of trying to figure this out. The second Community Conversations workshop explored maturity models to help your CoP move in the right direction.
Influence
Ultimately, the reason people become members of CoPs is to make a difference. They want to improve their own practice, but also have an impact on the world. By being part of an online community, individual members have the ability to influence things which they may not be able to do alone.
“I feel belonging to this community more than any other I inhabit professionally. In fact, this community has been the constant while my professional affiliation has changed a few times…” (survey participant)
One way in which influence happens is for one community to influence another. In the case of the ORE community, there is a reasonably high confidence that what we are advocating for (Open Recognition! Open Badges!) is something that is having a wider impact.
In the third Community Conversations workshop we discussed communities as change agents, introducing the two loops model and our work around systems convening. The latter is based on the work of Etienne and Beverly Wenger-Trayner.
Conclusion
We’ve selected some of the results from our survey to illustrate points relating to our series of Community Conversations workshops. We have work to do in the ORE community, for example, in helping people have more agency and giving members more and different ways to meet up synchronously.
What we do know, based on the work we’ve done (and shared!) around value cycles, maturity models, and influence, is that the ORE community is on the right track.
If you’d like to join the community, check out badges.community. If you’d like WAO to help you with your work around Communities of Practice, get in touch!
With input provided by Laura Hilliger
Professional Belonging: Networking through Communities of Practice was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post Getting people into the right jobs faster appeared first on Velocity.
The post A unified approach to verifying education and skills appeared first on Velocity.
Speculation in the cryptocurrency market is nothing new, and the Stellar network with its native currency, Lumens (XLM), has not been immune to it. A recent topic of interest among investors and enthusiasts is the potential for XLM to be backed by silver. This notion stirs up discussions about stability and intrinsic value, aligning digital assets with traditional precious metals.
However, as of now, Stellar Lumens is not backed by silver or any other physical commodity. It remains a decentralized protocol aimed at facilitating cross-border transactions. The speculation around XLM being backed by silver is purely hypothetical and not grounded in any announced plans or developments by the Stellar Development Foundation.
For investors, it’s important to differentiate between actual plans and rumors or wishful thinking. While the idea of backing a cryptocurrency with a tangible asset like silver is intriguing, it would represent a significant shift from the current operation and philosophy of the Stellar network. Such a move would aim to bring the perceived stability and historical reliability of silver to the digital currency space.
In conclusion, while the concept is thought-provoking, it’s essential to approach such speculation with caution. Investors should base their decisions on verified information and understand the speculative nature of discussions about silver backing for XLM or any other cryptocurrency.
Stellar Lumens (XLM), the native digital asset of the Stellar network, has a rich history and a substantial presence in the cryptocurrency market. Created in 2014 by Jed McCaleb, founder of Mt. Gox and co-founder of Ripple, along with former lawyer Joyce Kim, Stellar Lumens was envisioned as a decentralized protocol for fast and low-cost cross-border monetary transactions.
Initially, Stellar had 100 billion lumens, but the supply limit has since been reduced to 50 billion lumens, aligning with a focus on managing inflation and maintaining the asset’s value. The Stellar Development Foundation, which supports the Stellar network, has implemented significant upgrades over the years, including a new consensus algorithm in 2015, designed by Stanford professor David Mazières.
The Stellar network has expanded its reach through various partnerships and projects, such as the launch of Lightyear.io, a commercial entity of Stellar in 2017, and later Interstellar, which resulted from Lightyear’s acquisition of Chain, Inc. in 2018. In 2021, Franklin Templeton used Stellar to launch the first “tokenised” US mutual fund, marking a significant milestone in the adoption of blockchain technology in traditional finance.
While the notion of XLM being backed by silver remains speculative, the currency’s actual history, market capitalization, and strategic partnerships highlight its role as a dynamic player in the digital currency space.
History, Capitalization and Distribution details provided by ChatGPT 4.0
The post Navigating the Waves of Silver Speculation and Crypto Innovation appeared first on Lions Gate Digital.
Masca is an open-source, decentralized identity wallet that lets you manage your off-chain identifiers and attestations. It is a MetaMask Snap that extends MetaMask’s base functionality with support for decentralized identifiers (DIDs) and verifiable credentials (VCs), turning MetaMask into an identity hub.
Masca is developed by Blockchain Lab:UM, an R&D laboratory specializing in blockchain technology, decentralized identity, and decentralized artificial intelligence.
Ceramic, one of the supported storage solutions in Masca, plays a crucial role in enabling users to store their data and have it available on different devices, and it will serve as a foundation for data interoperability with other projects in the future.
Decentralized Identity and Data Management With Masca
As decentralized applications (dApps) and other decentralized solutions become more complex and personalized for users, there is a growing need to manage user identity and associated data. Given that the amount of data is expected to grow exponentially, off-chain solutions are essential to handle this vast amount in a scalable, efficient, and inexpensive way.
For quite some time, MetaMask has been working on a feature called MetaMask Snaps, a system that extends the base functionality of MetaMask through Snaps. Snaps are JavaScript programs, each running in its own isolated execution environment inside MetaMask, and they can run arbitrary logic, significantly extending what MetaMask can do.
How Does Masca Work?
Masca is a MetaMask Snap that provides a straightforward way for users to manage their identity, data and attestations. At the same time, Masca also offers dApp developers a simple-to-use interface to integrate decentralized identity functionalities into their applications. There are (almost) endless attestation types, but some examples are conference attendance, government, KYC, education, and various gaming credentials.
The decentralized identity space is still at a relatively early stage; various solutions and protocols are being developed and tested. Masca offers flexibility, with multiple integrations, so developers can choose what suits their use cases and needs. Masca supports multiple DID methods (did:pkh, did:ethr, did:key, did:jwk, did:polygonid, etc.) and cryptographically-signed data formats (JWT, JSON-LD, EIP712). Some of the integrated projects include Ceramic Network, Polygon ID, and EBSI.
Masca uses Veramo Client for DID management, as well as for the creation, presentation, and validation of Verifiable Credentials. In alignment with Masca’s mission, the snap also offers flexibility around data storage, allowing users to toggle between the local MetaMask Snap state (which exists fully off-chain and exclusively in the user’s browser) and storage on the Ceramic Network.
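From a dApp's perspective, this interface is exposed through MetaMask's standard Snaps JSON-RPC methods. The sketch below is a minimal, hedged TypeScript example: the snap ID `npm:@blockchain-lab-um/masca` and the `queryCredentials` method name are assumptions made for illustration, and the actual method names and parameters are defined in the Masca documentation.

```typescript
// Minimal sketch: a dApp enabling an identity Snap and invoking a method on it
// through MetaMask's standard Snaps JSON-RPC methods. The snap ID and the
// 'queryCredentials' method name are illustrative assumptions, not the
// confirmed Masca API; consult the Masca documentation for the real interface.

const SNAP_ID = 'npm:@blockchain-lab-um/masca'; // assumed package/snap ID

async function connectAndQuery(): Promise<void> {
  const ethereum = (window as any).ethereum;
  if (!ethereum) {
    throw new Error('MetaMask is not installed');
  }

  // Ask MetaMask to install/enable the Snap (standard Snaps RPC call).
  await ethereum.request({
    method: 'wallet_requestSnaps',
    params: { [SNAP_ID]: {} },
  });

  // Invoke a hypothetical Snap method to list credentials held by the wallet.
  const credentials = await ethereum.request({
    method: 'wallet_invokeSnap',
    params: {
      snapId: SNAP_ID,
      request: { method: 'queryCredentials', params: {} }, // hypothetical method name
    },
  });

  console.log('Credentials returned by the Snap:', credentials);
}

connectAndQuery().catch(console.error);
```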
Masca was successfully rolled out during the Open Beta release of MetaMask Snaps in September 2023. For more information, you can visit this link and the Snaps directory.
What issues did Masca face before using Ceramic?
In the beginning, users could only store their data locally, meaning directly in their MetaMask wallet in encrypted storage. That presents a problem when switching to a different device (using the same seed phrase), because the data is not available or synchronized. Local-only storage also made it impossible for users to share their data in a structured and composable way, which is why Masca chose to build on and integrate Ceramic.
“Ceramic Network is a data ledger that builds on IPFS (InterPlanetary File System) and creates an abstraction layer that makes it much more suitable for endless use cases, one of them being data vaults for identity data.” - Masca team
Why Masca Chose Ceramic
Ceramic offers a scalable network for data, builds on top of the same primitives as Masca (DIDs), and provides developer-friendly tools for development. Structuring the otherwise unstructured data into data streams, controlled by user accounts, creates a powerful solution for managing identity data.
Ceramic also has a broad ecosystem of projects working on diverse challenges and problems. By being part of this ecosystem, the Masca team says it aims to connect with other projects in the identity space and work together to tackle those challenges, such as conforming to the same data models and making data truly composable across different applications. Building more rich and complex decentralized identities can also help solve the challenges with reputation and Sybil attacks in the future.
How Masca Uses Ceramic
Masca is now live on Ceramic Mainnet! Ceramic is currently used in the Masca ecosystem for two primary purposes:
Verifiable Credential Storage: Since VCs are off-chain attestations, users must store them somewhere. Ceramic offers a scalable way to store attestations.
Schemas: Each VC has a strictly defined structure and data model, which are defined with schemas. Using schemas, VCs of the same type are consistent across different applications.

Users can freely migrate their credentials between Snap-encrypted storage and Ceramic Network as they wish. Selection of storage most often depends on the type of attestations and level of security/privacy needed.
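To make the schema point concrete, here is a minimal, illustrative sketch of the general shape a W3C Verifiable Credential takes. Every value is a placeholder and the credential type is invented for the example; the schema's job is to pin down the structure of the claims so that credentials of the same type look the same everywhere.

```typescript
// Illustrative shape of a W3C Verifiable Credential; all values are placeholders.
// A schema fixes the structure of `credentialSubject`, so credentials of the
// same type remain consistent across applications that consume them.
const exampleCredential = {
  '@context': ['https://www.w3.org/2018/credentials/v1'],
  type: ['VerifiableCredential', 'ConferenceAttendanceCredential'], // second type is hypothetical
  issuer: 'did:ethr:0x1234...abcd',                 // issuer DID (placeholder)
  issuanceDate: '2023-11-20T00:00:00Z',
  credentialSubject: {
    id: 'did:pkh:eip155:1:0x5678...ef01',           // holder DID (placeholder)
    eventName: 'Example Conference 2023',
    role: 'attendee',
  },
  // A real credential also carries a cryptographic `proof` (e.g. a JWT or
  // EIP-712 signature) produced by the issuer; omitted here for brevity.
};

console.log(JSON.stringify(exampleCredential, null, 2));
```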
Masca has several exciting features in the pipeline, such as:
Enabling users to store encrypted data
Supporting selective disclosure for VCs
Adding support for ComposeDB on Ceramic

The Masca team is excited to work with other projects in the Ceramic ecosystem and help contribute to the world's composable data stack.
Create Your Credential
Visit Masca to create your decentralized identity and first off-chain verifiable credential. You can store the credential on Ceramic and make it available to share across dApps that integrate Masca (such as ReputeX) or those that use Ceramic. If you’re a dev interested in integrating Masca into your application, check out the documentation page.
Decentralized identity is becoming a core element of the decentralized web, addressing the challenge of verifying data in the ever-increasing volume of content generated. Masca's upcoming migration to ComposeDB will further improve data management and interoperability.
The OpenID Foundation board of directors unanimously approved updating the Foundation’s Bylaws by a 75% supermajority vote as required by the Bylaws (Section 9.2 Bylaw Amendments) at the November 16, 2023 board meeting. The new Bylaws are effective as of November 16, 2023.
From the founding of the OpenID Foundation, the Foundation has intended to maintain a balance between Sustaining and Individual (“Community”) representatives, and later Corporate representatives, on the board. For many years, the Foundation has had 6-7 Sustaining members alongside 3 Community and 1 Corporate representative. The Foundation currently has 12 Sustaining members and over 150 Corporate members.
The Executive Committee (EC) and Board recently revisited the topic of balance of board members and agreed to proceed with updating the ratios of Community and Corporate representatives on the board relative to the number of Sustaining members. The EC and Board noted that adding more representatives brings more voices and expertise to the Board, balancing the Sustaining member voices.
A summary of key changes to the Bylaws:
Update Individual representative to Community representative to better reflect the position
Add additional Community representatives to the Board in line with the ratio in the Bylaws, as of December 1st of each year
Add an additional Corporate representative to the Board when more than 140 corporate entities are members, as of December 1st of each year
Create a minimum number of Board members of 5 people, and a maximum of 38, as per the proposal

All changes to the Bylaws are documented in the “Summary of Changes to Bylaws”.
The 2024 OpenID Foundation election will be updated based on the Bylaws to include:
2 Community representative seats for 2-year terms (1 seat prior)
2 Corporate representative seats for 1-year terms (1 seat prior)

The 2024 OpenID Foundation election will kick off on Monday, December 11, 2023, with full details, including the election schedule, published then.
The post OpenID Foundation Updates Bylaws first appeared on OpenID Foundation.
Picture a world where healthcare is not just a service but a promise of transparency and safety. In this ever-evolving landscape, a groundbreaking transformation is taking place, driven by the demands of both consumers and patients. They are clamoring for information that is clear, traceable, and trustworthy. And at the heart of this shift lies the adoption of two-dimensional (2D) barcodes and Radio Frequency Identification (RFID) technologies.
But how do you prepare for this monumental transition? To answer that question, we spoke to Gwen Volpe, a visionary leader from Fresenius Kabi, a trailblazing force in healthcare.
Gwen shares insights into the benefits of these technologies, the importance of customer involvement, and the exciting potential of artificial intelligence in healthcare. If you are wondering how to make sure you’re ready for the transition, this episode is for you.
Key takeaways:
Fresenius Kabi is leading the way in integrating RFID and 2D barcodes into their pharmaceutical products, enhancing patient safety and streamlining clinician workflows.
There is a significant impact of RFID and 2D barcodes on automating inventory management and reducing manual data input, ultimately improving patient safety in healthcare.
GS1 standards are revolutionizing medication technology, optimizing supply chain processes, and elevating the standard of patient care in the healthcare industry.
Connect with GS1 US:
Our website - www.gs1us.org
Reimagining How Technology Can Transform Your Institution
Hosted in partnership with Kean University, EdgeCon Autumn brought together attendees from around the region to explore how technology can transform their institution and accelerate important initiatives involving cybersecurity, campus networks, cloud strategy, and student support applications. Held on November 2 at Kean University, EdgeCon Autumn provided a panel discussion and a full agenda of breakout sessions allowing participants to dive deeper into a variety of topics. With 93% of attendees rating the event 4-5 stars (with 5 being the top end of the scale), this premier event also provided an opportunity for members of the higher education technology community to connect, share valuable insight, and build relationships with industry leading vendors who support technology transformation across the institution.
Balancing Innovation and Risk
To kick off the conference’s exciting day of events, attendees were invited to a panel discussion, Balancing Innovation and Risk Through the Process of Digital Transformation, led by David Sherry, Chief Information Security Officer, Princeton University and Ed Wozencroft, Vice President for Digital Strategy and Chief Information Officer, New Jersey Institute of Technology. This presentation provided college and university leaders with a unique perspective on balancing innovation and risk through an accelerated period of digital transformation.
Especially as technology increasingly integrates with every facet of higher education, many organizations are concerned about compliance, data privacy, and the performance and financial health of their institution. The discussion emphasized ways that leaders across an organization can participate in a cultural shift to take advantage of new tools such as AI, data analytics, and innovative SaaS applications, while responsibly managing institutional risk through strategic planning and collaboration. Participants were able to explore how effective planning and collaboration can mitigate risk and ways to identify areas on their campus where digital transformation can produce positive change.
How to Fail Innovation
In today’s academic landscape, universities expect their Information Technology (IT) Departments to be hubs of innovation. The breakout session, How to Fail Innovation 101, aimed to redefine innovation and highlight its potential within IT departments. Dr. Robert Clougherty, CIO, Drew University, discussed the motivations for innovation, common missteps, and how to develop operational strategies that foster, not stifle, innovation. Exploring how a positive environment for innovation can be beneficial to an entire IT team, Dr. Clougherty delved into various theories of innovation and how to implement these advancements effectively, while also addressing potential pitfalls and challenges.
“You’re in the sweet spot now – balancing being large enough to have meaningful content and networking while not being so large as to lose the personal connections.”
K. Willey
Exec. Director of Enterprise Tech
Bucknell University
Successful Online and Hybrid Course Development
Instructional Designers, Ann Oro, Lisa Bond, and Kate Sierra, from Seton Hall University led the breakout session, Creating a Path for Faculty Success in Online and Hybrid Course Development, where they explored course design, objectives and alignment, student engagement strategies, accessibility, and ideas for assessment and feedback using the University’s chosen quality assurance rubric as the foundation. This interactive session shared best practices in creating and delivering an online course and ideas for creating engaging lectures and using the learning management system and templates.
High Performance Computing
In a world where massive amounts of data are being created and collected every single second, turning that data into insight requires continuous dedication to innovative ways of deciphering large amounts of data and transforming it into wisdom. Breakout session, High Performance Computing – Value Proposition in Research, presented by Paul Attallah, National Account Manager, DataBank, discussed how High Performance Computing (HPC) is the foundation for scientific, industrial, and societal advancements and how this technology will be essential in helping transform data into human advancement and progression.
Starting Your Zero Trust Journey
As cybersecurity challenges continue to grow, the Zero Trust security model has become well established in the industry to help secure organizations and improve cyberthreat defense. John Bruggeman, Consulting CISO, CBTS, led the breakout session, Find Out Where to Start Your Zero Trust Journey, to help answer the question, How do you know where to start your Zero Trust journey? Attendees learned how using a Zero Trust readiness assessment can show an institution what parts of the people, process, and technology triad they already have in place, what gaps exist, and how to create a scalable security model that protects all data, systems, and networks.
“Loved the opportunity to network, the breakout sessions were wonderful, and the vendors were great – well spaced out but easy to locate and speak with.”
A. Stoll
Senior Director, MIS Enterprise Applications
Thomas Edison State University
Leveraging CX Solutions
Presenters Manish Wadhwa, Associate Provost Academic Applications & Technology, Fairleigh Dickinson University, and Anthony Humphreys, President, BlackBeltHelp, led the breakout session, Leveraging CX Solutions to Enhance the Student Experience, Boost Enrollment, and Increase Retention, to discuss how to empower higher education institutions to unify their applications into a simple yet powerful, analytics-driven, centralized hub for students, faculty, and staff. Users can then access a wide range of services, including technical assistance and support services for Admissions, Records, Registration, Financial Aid, Bursar’s Office, and Accounts Receivable all conveniently located in a single platform. The discussion also included recent examples and case studies of how to use conversation-driven AI-powered bots for natural, personalized customer assistance, and how this technology can provide always available, immediate service.
AI and the Evolution of Strategic Planning and Assessment
Merging theoretical discourse with practical demonstrations, this breakout session explored the nascent role of AI in higher education. Centenary University’s Director of Institutional Research and Assessment, Viktoria Popova, discussed GPT-4.0 and its potential for transforming the strategic planning process, the use of AI in designing new strategic planning objectives, and embedding sustainable but agile strategic planning assessments in existing initiatives. Popova also shared the socio-cultural implications of integrating AI into higher education, highlighting its powerful “reasoning” capabilities while acknowledging its limitations and potential risks. In this session, attendees also learned about the vital role of human responsibility in the effective use of AI, particularly our own unique abilities in language mastery and logical reasoning, which, when synergized with AI, enable us to attain achievements beyond what AI could reach on its own.
Using Chatbots in the Online Learning Experience
Chatbots can help improve the online learning experience by assisting users navigating library websites in search of information. Michelle Ehrenpreis, Electronic Resources Librarian/Assistant Professor, Lehman College, and John DeLooper, Web Services-Online Learning Librarian/Assistant Professor, Lehman College, led the discussion, Using Chatbots to Improve the Online Learning Experience. The presentation shared details of a current project that is conducting a content analysis of questions posed to the Lehman Library’s chatbot, which uses the ChatGPT AI. Attendees learned about the chatbot’s weaknesses, how the information gleaned from the analysis may improve the library website, and how projects using artificial intelligence, like chatbots, can lead to a better online learning experience for library users.
“Great event, good venue, and great sessions. I liked going straight into the breakout sessions and having the main panel mid-day. Well run and executed. Great job Edge!”
G. Sotirion
CIO
Brookdale Community College
Modernizing the ECM and SharePoint
George Sotirion, CIO, Brookdale Community College, and Moe Rahman, CIO, Rider University, led the breakout session, Modernizing the ECM and SharePoint: Simplifying Services and Collaboration through Digital Transformation. This presentation explored Brookdale’s Digital Transformation Initiative that began in 2022 with the modernization of the enterprise content management system including eForm integration and advanced workflows. Throughout the session, presenters discussed the initiative from the planning to execution phases and the important role of involving stakeholders during the entire process.
Technology Initiatives Utilizing Limited Resources
Patricia Kahn, CIO, Mark Lewental, Director, and Doriann Pieve-Hyland, Director, from the College of Staten Island (CSI) led a breakout session, Technology Initiatives Utilizing Limited Resources that Adapt to a Changing World. Participants learned how CSI implemented hyflex technology, migrated its on-prem email system to M365, and provided technology and training support, all on a limited budget. Attendees enjoyed a live demonstration of the hyflex technology and explored the M365 rollout strategy, communication plan, experiences, lessons learned, and future initiatives.
Modernizing and Enhancing Analytics Infrastructure
Rob Stirton, VP of Institutional Effectiveness and CIO at County College of Morris (CCM), joined John Van Weeren, VP Higher Education, GCOM Software, to demonstrate how CCM uses insights from their analytics platform for real-time decision making across the institution. This solution leveraged their existing business intelligence software investment and dramatically improved the availability and accuracy of information through visual analytics for all audiences, while significantly reducing the workload on IT and researchers.
Attendees also learned about CCM’s modernization in the Cloud with GCOM’s SSA Cloud platform and how it increases access and usability and reduces management costs, while seamlessly integrating data sources with a hybrid cloud architecture. With the move to the Cloud, CCM shared how they are better equipped to adapt to changing technical business needs, while leveraging built-in capabilities, such as machine learning models that identify admissions and financial aid fraud.
“Great lineup of speakers – way to go!”
S. Mierzwa
Asst. Director & Lecturer
Kean University
Building Data Analytics Centers
The afternoon breakout session, Building Data Analytics Centers for Colleges & Departments, led by Rowan University’s Alex Luu, MS-ITM, Data Analyst, Benjamin Hyman, Senior Data Analyst, and Carlos Mercado, Senior Data Analyst, shared valuable insight into building successful data analytics centers. Attendees learned how the team at Rowan used an Extract, Transform, & Load (ETL) process to build complex and common datasets, predictive models, visualizations, dashboards, and reports based on these datasets and models, and packaged them into different data centers.
Automating Identity and Access Management
Jeremy Livingston, Chief Information Security Officer, Rafat Azad, Security Engineer, and Kristen Conti, Identity Systems Administrator, from Stevens Institute of Technology led the session, Automating Identity and Access Management, which outlined how the institution used Okta to automate onboarding and account creation processes and set up automatic deprovisioning of accounts and licenses for employees and graduating students. Presenters also shared how the organization was able to enhance security and provide opportunities for further community engagement with alumni portal access. Attendees also saw how the institution saved money and time with automated license management for key applications and reduced workload/overhead.
Ransomware Preparation and Security Success
David Sherry, Chief Information Security Officer at Princeton University, led the session, How Important Is Culture to Security Success?, to explore how the challenges of budgets, staffing, and external threats can hinder enterprise success and how to create a culture of security within your own institution.
Later in the afternoon, the breakout session, Collaboration Among Academia, Local and Federal Law Enforcement, Professionals in Ransomware Preparation, explained how a community approach is required to effectively prepare for a potential ransomware event. Presenter Stan Mierzwa, Center for Cybersecurity, Kean University, discussed the crucial preparation, response, and recovery tasks and categories associated with ransomware across disciplines, and the role ethics can play in contending with ransomware. Participants also reviewed a harmonizing ransomware incident response checklist that can be used to help guide organizations through the varied categories of tasks associated with ransomware recovery.
Data Management Trends and Tools
Melissa Handa, Program Director, IEEE, and Forough Ghahramani, Associate Vice President for Research, Innovation, and Sponsored Programs, Edge, joined together to present Data Management Trends and Tools for Discovery and Open Science where they reviewed the challenges and opportunities associated with research data management. Attendees learned about the fundamental requirements for a unified data and collaboration platform which researchers can leverage to efficiently store, share, access, and manage research data, accelerating institutional research efforts. The session also explored the features of the IEEE Dataport platform, an easy-to-use, globally accessible data management platform.
Improve Monitoring Effectiveness
For many institutions, the effectiveness of leveraging monitoring tools is not aligned with the organization’s service improvement goals. Joseph Karam, Director, Enterprise Monitoring Services, Princeton University, discussed this topic in the breakout session, Establishing Procedures to Improve Monitoring Effectiveness. For the past two years, Princeton has been implementing improved monitoring strategies for the entire IT organization that have identified gaps in monitoring areas and allowed them to consolidate tools to improve how monitoring is used. The session provided an overview of the University’s monitoring services and the strategies that have been implemented to provide improved IT services to the Princeton community.
This panel discussion provided college and university leaders with a unique perspective on balancing innovation and risk through an accelerated period of digital transformation. As technology increasingly integrates with every facet of higher education, concerns about compliance, data privacy, and the performance and financial health of institutions are paramount in the strategic decision making process. The discussion emphasized ways that leaders across the university can participate in a cultural shift to take advantage of new tools such as AI, data analytics, and innovative SaaS applications, while responsibly managing institutional risk through strategic planning and collaboration.
The panel discussion explored:
How institutional culture must evolve to support innovation and digital transformation
How effective planning and collaboration can mitigate risk
How to identify areas on campus where digital transformation can produce positive change

Panelists:
David Sherry
Chief Information Security Officer
Princeton University
Ed Wozencroft
Vice President for Digital Strategy and Chief Information Officer
New Jersey Institute of Technology
Maryam Mirza
Senior Director IT, Client Experiences and Strategic Initiatives
Stevens Institute of Technology
8:00-8:30 am – Check-In & Networking
8:30-9:15 am — Welcome, Breakfast, & Exhibitor Connections
9:20-10:00 am — Breakout Sessions
10:10-10:50 am — Breakout Sessions
11:00 am-12:00 pm — EdgeCon Panel Discussion: Balancing Innovation and Risk Through the Process of Digital Transformation
12:00-1:00 pm — Lunch, Networking, & Exhibitor Connections
1:10-1:50 pm — Breakout Sessions
2:00-2:40 pm — Breakout Sessions
2:50-3:30 pm — Breakout Sessions
3:30-4:30 pm — Snacks/Coffee, Networking, & Exhibitor Connections
9:20 – 10:00 am – Breakout Session 1
How to Fail Innovation 101
In today’s academic landscape, universities expect their Information Technology (IT) Departments to be hubs of innovation. However, the prevalent understanding of ‘innovation’ often leans towards adopting market trends rather than being truly forward-looking. Unfortunately, these ‘innovations’ are often external, sourced from third-party software, rather than emerging from the IT team’s internal processes and practices.
This presentation aims to redefine innovation and highlight its potential within IT departments. We will explore the motivations for innovation, common missteps, and how to develop operational strategies that foster, not stifle, innovation.
A positive environment for innovation can benefit the entire IT team. It encourages employees to develop more effective processes and practices, and it empowers IT leadership to create an environment that nurtures innovation and removes existing barriers to it.
We will delve into various theories of innovation, drawing analogies from a range of domains including rainforests, gardens, machines, and even Plato’s Divine Frenzy. The presentation underscores the unique pressures on IT departments to innovate, making a clear distinction between innovation and invention.
Innovation, according to our perspective, isn’t just about creating something new—an act of invention—but about altering product, process, or practice life cycles to enhance efficiency, effectiveness, and advantage. We will provide insights on implementing innovation effectively, while also addressing potential pitfalls and challenges.
Presenter:
Dr. Robert Clougherty, CIO, Drew University
Creating a Path for Faculty Success in Online and Hybrid Course Development
When our first Online Teaching Certificate debuted in 2017, it was a five-session series offered in person. Over time, we added a second five-session level two and transitioned to a hybrid, then fully asynchronous experience. Both levels are suitable for both new and experienced professors who want to transition to online teaching or enhance their existing skills. Using the University’s chosen quality assurance rubric as the foundation, we cover a wide range of topics, including course design, objectives and alignment, student engagement strategies, accessibility, and ideas for assessment and feedback. Our course is designed to be practical, interactive, and engaging, providing best practices in creating and delivering an online course. Level two expands on the quality assurance rubric and introduces the Transparency in Learning and Teaching process, creating engaging lectures, and utilizing the learning management system and templates. Faculty leave both levels with completed materials to use in their own online or hybrid course.
Presenters:
Ann Oro, Senior Instructional Designer, Seton Hall University
Lisa Bond, Instructional Designer, Seton Hall University
Kate Sierra, Instructional Designer, Seton Hall University
High Performance Computing – Value Proposition in Research
Given all the data being created worldwide every second, the frontier of data science, driven by the quest for knowledge, will require continuous dedication to innovative ways of deciphering large amounts of data and transforming it into wisdom, thereby aiding human advancement and progress.
High Performance Computing (HPC) is the answer. HPC is the foundation for scientific, industrial, and societal advancements.
Presenters:
Paul Attallah, National Account Manager, DataBank
Find Out Where to Start Your Zero Trust Journey
How do you know where to start your Zero Trust journey? Do you have all the parts of Zero Trust in place to control risk to your crown jewels? If you don’t know where to start, you probably need to assess your situation. Using a Zero Trust readiness assessment you can see what parts of the people, process, and technology triad you already have in place and what you don’t. With a readiness assessment you can figure out what to focus on first in order to tackle the remaining parts of the Zero Trust jigsaw.
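To make the idea of a readiness assessment concrete, the short sketch below tallies which controls are already in place across the people, process, and technology triad; the categories, control names, and scoring are illustrative assumptions, not the presenters' actual methodology.

# Hypothetical Zero Trust readiness tally across the people/process/technology triad.
# Control names and categories are illustrative only.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    category: str  # "people", "process", or "technology"
    in_place: bool

controls = [
    Control("Security awareness training", "people", True),
    Control("Role-based access reviews", "process", False),
    Control("MFA on all privileged accounts", "technology", True),
    Control("Network micro-segmentation", "technology", False),
]

def readiness_by_category(controls):
    """Return the fraction of controls already in place for each category."""
    summary = {}
    for cat in ("people", "process", "technology"):
        scoped = [c for c in controls if c.category == cat]
        summary[cat] = sum(c.in_place for c in scoped) / len(scoped) if scoped else 0.0
    return summary

# The lowest-scoring categories suggest where to focus first.
print(readiness_by_category(controls))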
Presenters:
John Bruggeman, Consulting CISO, CBTS
Bradley Morton, VP of Information Technology
Demetrios Roubos, Information Security Officer, Stockton University
10:10 – 10:50 am – Breakout Session 2
Technology Initiatives Utilizing Limited Resources that Adapt to a Changing World
At the College of Staten Island (CSI), technology upgrades and the support of teaching and learning remain at the forefront of our priorities. With a limited budget, CSI implemented hyflex technology, migrated CSI's on-premises email system to M365, and provided technology and training support without missing a beat. This presentation will provide a live demonstration of our hyflex technology as well as speak to our M365 rollout strategy, communication plan, experiences, lessons learned, and future initiatives.
Presenters:
Patricia Kahn, CIO, College of Staten Island (CUNY)
Mark Lewental, Director, College of Staten Island (CUNY)
Doriann Pieve-Hyland, Director, College of Staten Island (CUNY)
AI and the Evolution of Strategic Planning and Assessment in Higher Education
Merging theoretical discourse with practical demonstrations, this session will explore the nascent role of AI in higher education. We will focus particularly on GPT-4 and its potential for transforming the strategic planning process. Key areas of focus will include the use of AI in designing new strategic planning objectives and embedding sustainable but agile strategic planning assessments in existing objectives. Live demonstrations will exhibit the versatility of GPT-4 as we delve into "prompt engineering" to generate explicit and actionable outcomes. Furthermore, we will explore the socio-cultural implications of integrating AI into higher education, highlighting its powerful "reasoning" capabilities, while acknowledging its limitations and potential risks. In this session, we will highlight the vital role of human responsibility in the effective use of AI. Particularly, we will focus on our own unique abilities in language mastery and logical reasoning, which, when synergized with AI, enable us to attain achievements beyond what AI could reach on its own. Our objective is to provide a transformative understanding of how AI, particularly GPT-4, can be leveraged as a versatile resource in managing the complexities of strategic planning.
Presenter:
Viktoria Popova, Director of Institutional Research and Assessment, Centenary University
Using Chatbots to Improve the Online Learning Experience
Chatbots improve the online learning experience by assisting users navigating library websites in search of information. Understanding users' questions to the chatbot and placing them in subject categories reveals the types of information they are searching for. This presentation will discuss a current project conducting a content analysis of questions posed to the Lehman Library's chatbot, which uses the ChatGPT AI. The presenters will highlight some of the chatbot's weaknesses, specifically the questions users pose that it cannot reliably answer. How the information gleaned from the analysis may improve the library website through content additions and layout changes will be discussed. The presenters will argue that projects using artificial intelligence, like chatbots, can lead to a better online learning experience for library users, who rely on both the chatbot and the library website to provide them with information on library and campus-related services and resources.
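As a loose illustration of this kind of content analysis (the categories and keywords below are invented for the example, not the Lehman Library coding scheme), a first pass might bucket questions with simple keyword matching before a librarian reviews the results:

# Illustrative-only categorization of chatbot questions by keyword matching.
CATEGORIES = {
    "hours": ["open", "close", "hours"],
    "research help": ["database", "article", "cite", "citation"],
    "borrowing": ["renew", "checkout", "loan", "return"],
}

def categorize(question: str) -> str:
    q = question.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in q for keyword in keywords):
            return category
    return "other"  # often the questions the chatbot cannot answer reliably

questions = ["What time does the library close?", "How do I renew a book?", "Where is the writing center?"]
print([categorize(q) for q in questions])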
Presenters:
Michelle Ehrenpreis, Electronic Resources Librarian/Assistant Professor, Lehman College
John DeLooper, Web Services-Online Learning Librarian/Assistant Professor, Lehman College
Modernizing the ECM and SharePoint: Simplifying Services and Collaboration through Digital Transformation
Brookdale's Digital Transformation Initiative officially began in 2022 with the modernization of the enterprise content management system, including eForm integration and advanced workflows. The process of developing the strategy and support behind this initiative, which has led to streamlined processes and an intuitive, agile platform, started well before 2022. In this session we will discuss the initiative from the planning to execution phases and the important role of involving stakeholders throughout.
Presenters:
George Sotirion, CIO, Brookdale Community College
Moe Rahman, CIO, Rider University
Panel Discussion: Balancing Innovation and Risk Through the Process of Digital Transformation
This panel discussion will provide college and university leaders with a unique perspective on balancing innovation and risk through an accelerated period of digital transformation. As technology increasingly integrates with every facet of higher education, concerns about compliance, data privacy, and the performance and financial health of institutions are paramount in the strategic decision making process. This discussion will emphasize ways that leaders across the university can participate in a cultural shift to take advantage of new tools such as AI, data analytics, and innovative SaaS applications, while responsibly managing institutional risk through strategic planning and collaboration.
The panel discussion will explore:
How institutional culture must evolve to support innovation and digital transformation
How effective planning and collaboration can mitigate risk
How to identify areas on campus where digital transformation can produce positive change
Participants:
David Sherry, Chief Information Security Officer, Princeton University
Ed Wozencroft, Vice President for Digital Strategy and Chief Information Officer, New Jersey Institute of Technology
Maryam Mirza, Senior Director IT, Client Experiences and Strategic Initiatives, Stevens Institute of Technology
1:10 – 1:50 pm – Breakout Session 3
Leveraging CX Solutions to Enhance the Student Experience, Boost Enrollment, and Increase Retention
Join Manish Wadhwa, Associate Provost Academic Applications & Technology, Fairleigh Dickinson University, and Anthony Humphreys, President, BlackBeltHelp, as they discuss how to empower higher education institutions to unify their applications into a simple yet powerful, analytics-driven, centralized hub where students, faculty, and staff can access a wide range of services, including technical assistance and support for Admissions, Records, Registration, Financial Aid, the Bursar's Office, Accounts Receivable, and more—all conveniently located in a single platform. Their discussion will include how best to use conversation-driven, AI-powered bots for natural, personalized customer assistance that provides always-available, immediate service, along with recent university examples and case studies.
Presenters:
Manish Wadhwa, Associate Provost Academic Applications & Technology, Fairleigh Dickinson University
Anthony Humphreys, President, BlackBeltHelp
Modernizing and Enhancing Analytics Infrastructure While Moving to the Cloud
Rob Stirton, VP of Institutional Effectiveness and CIO at County College of Morris, will demonstrate how CCM uses insights from their analytics platform for real-time decision making across the institution. CCM initially implemented GCOM's on-premises Student Success Analytics (SSA) platform to integrate data across the entire student lifecycle and develop a clear line of sight between the college's strategic goals and daily operations. This solution leveraged their existing business intelligence software investment and dramatically improved the availability and accuracy of information through visual analytics for all audiences, while significantly reducing workload on IT and researchers. Eliminating manual data processes allows CCM department chairs to establish goals for enrollment and retention and measure progress, grant writers to include valuable details about student populations to strengthen their applications, and student services staff to improve retention for transfer students.
Building on this success, CCM took the natural next step in their analytics journey: modernizing in the cloud with GCOM’s SSA Cloud platform. SSA Cloud increases access and usability and reduces management costs while seamlessly integrating data sources with hybrid cloud architecture. The cloud environment is managed by GCOM, reducing CCM’s need to staff expensive, in-demand skillsets to support the infrastructure. With the move to the cloud, CCM is better equipped to adapt to changing technical business needs, while leveraging built-in capabilities such as machine learning models that identify admissions and financial aid fraud, and dynamic data exploration and ad-hoc reporting that limits the need for users to write queries.
Presenters:
Rob Stirton, VP of Institutional Effectiveness and CIO, County College of Morris
John Van Weeren, VP Higher Education, GCOM Software
AI Ops on the Campus
Pivoting to an AI-Driven Infrastructure on the Campus
Presenters:
Shane Praay, Practice Leader for R&E Networks, Juniper Networks
Ken Lecompte, Infrastructure Architect, Rutgers University
NIST 800-171 & GLBA: Understanding Compliance Requirements So You Can Confidently Self-Assess
There are 14 sections related to security and privacy within NIST 800-171 that higher education institutions will be assessed on, and with which they may be expected to comply by July 2024 under the new Federal Tax Information (FTI) handling procedures of the FAFSA Simplification Act. This requirement is in line with the current and future GLBA Safeguards Rule, which applies to all Title IV colleges and universities and is included in both the SAIG agreement and the Federal Single Audit.
Compliance with NIST 800-171 is also broadly a contractual obligation for research institutions handling Controlled Unclassified Information on their networks for the purposes of grant-funded research, and these organizations are expected to conduct self-assessments to determine and maintain compliance. This session will help you understand these requirements and the way ahead towards assessing your institution to ensure you are on track.
Presenters:
Dr. Dawn Dunkerley, Lead Virtual Chief Information Security Officer, Edge
Building Data Analytics Centers for Colleges & Departments
Every college, department, and functional office at Rowan University needs data at its fingertips for decision making, but building one set of common dashboards and models does not meet everyone's needs, and tweaking those common dashboards for every audience makes them complex and unusable. To solve this problem, we use an Extract, Transform, and Load (ETL) process to build common datasets, build predictive models on top of them, and package the datasets, models, visualizations, KPIs, dashboards, and reports into data portals that we call data centers. Each data center gives a college or department all the data it needs, in a form it can easily consume and act on, in one place.
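The sketch below shows the general pattern described here in miniature, assuming hypothetical file names and columns rather than Rowan's actual sources: extract records from source systems, transform them into one shared dataset, and load that dataset where dashboards and predictive models can consume it.

# Minimal ETL sketch: build one common dataset that many dashboards and models reuse.
# File names and columns are hypothetical.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def transform(enrollments: pd.DataFrame, outcomes: pd.DataFrame) -> pd.DataFrame:
    merged = enrollments.merge(outcomes, on="student_id", how="left")
    merged["retained"] = merged["enrolled_next_term"].fillna(False)
    return merged[["student_id", "department", "term", "retained"]]

def load(dataset: pd.DataFrame, path: str) -> None:
    dataset.to_csv(path, index=False)  # one shared dataset, many consumers

if __name__ == "__main__":
    common = transform(extract("enrollments.csv"), extract("outcomes.csv"))
    load(common, "student_success_dataset.csv")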
Presenters:
Alex Luu, MS-ITM, Data Analyst, Rowan University
Benjamin Hyman, Senior Data Analyst, Rowan University
Carlos Mercado, Senior Data Analyst, Rowan University
Automating Identity and Access Management
This session will outline how Stevens Institute of Technology used Okta to automate onboarding and account creation processes and to set up automatic deprovisioning of accounts and licenses for employees and graduating students, while enhancing security and providing opportunities for further community engagement through alumni portal access. Additionally, see how we saved money and time with automated license management for key applications and reduced workload and overhead.
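As a hedged sketch of what this kind of automation can look like (the org URL, API token, and user IDs are placeholders, and this is not Stevens' actual implementation), a scheduled job might call Okta's user lifecycle API to deactivate accounts flagged by an HR or student-information feed:

# Illustrative deprovisioning job using Okta's user lifecycle API.
# OKTA_DOMAIN, API_TOKEN, and the user IDs are placeholders.
import requests

OKTA_DOMAIN = "https://example.okta.com"
API_TOKEN = "REPLACE_WITH_OKTA_API_TOKEN"

def deactivate_user(user_id: str) -> None:
    """Deactivate a single Okta user via the lifecycle endpoint."""
    response = requests.post(
        f"{OKTA_DOMAIN}/api/v1/users/{user_id}/lifecycle/deactivate",
        headers={"Authorization": f"SSWS {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()

# In practice these IDs would come from an HR or SIS separation feed.
for user_id in ["00u1abcd", "00u2efgh"]:
    deactivate_user(user_id)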
Presenters:
Jeremy Livingston, Chief Information Security Officer, Stevens Institute of Technology
Rafat Azad, Security Engineer, Stevens Institute of Technology
Kristen Conti, Identity Systems Administrator, Stevens Institute of Technology
How Important Is Culture to Security Success?
Security is no longer the troglodyte of the IT world (more on that in the presentation), but an integral part of enterprise success. But with the challenges of budgets, staffing, and external threats, what does it take to be successful? This talk will focus on creating a culture of security for overall security mission success.
Presenters:
David Sherry, Chief Information Security Officer, Princeton University
The Journey from the Edge to the Cloud
Join us for an illuminating panel presentation featuring esteemed customers who have implemented Aruba OS-CX switching and Aruba WiFi 6E access points within their networks. Facilitated by Turn-key Technologies Inc., this session will take you on a virtual walk, tracing the path of data from the initial connection to the access point and following it through the network infrastructure, including the switch, uplink, core switch, firewall, and ultimately, the internet.
Starting at the connection point, we will delve into the significance of Aruba WiFi 6E access points in establishing robust wireless connectivity. These advanced access points leverage the power of the 6 GHz spectrum, delivering lightning-fast speeds, minimal latency, and increased network capacity to ensure an exceptional user experience.
Our exploration will then shift to the Aruba OS-CX switching platform, meticulously configured and supported by Turn-key Technologies Inc. We will discuss how these switches intelligently manage and optimize data traffic, ensuring seamless communication and efficient data transfer throughout the network.
As we progress, we will explore the vital role of the uplink and core switch, which serve as crucial network bridges, facilitating smooth data flow between different segments. Our panelists will share insights into the design principles and best practices that enable scalable and reliable network performance.
Furthermore, we will focus on the firewall, a critical component for network security. We will explore how it safeguards the network from potential threats and discuss the robust security features offered by Turn-key Technologies Inc., supported by Aruba Central’s centralized monitoring and configuration capabilities.
Finally, we will conclude our journey by examining the data’s path as it reaches the internet. We will emphasize the significance of a reliable and high-speed internet connection and highlight how the integrated solutions provided by Turn-key Technologies Inc. and Aruba contribute to an optimized and secure network infrastructure.
In summary, this panel presentation, backed by the expertise of Turn-key Technologies Inc., will provide valuable insights into the seamless data journey within a network powered by Aruba OS-CX switching and Aruba WiFi 6E access points. Attendees will gain a comprehensive understanding of the end-to-end connectivity process, network optimization, security measures, and the exceptional user experience offered by this robust solution.
Presenters:
Chris Voll, VP of Technical Operations, Turn-key Technologies
Michael McGuire, Network Operations Manager, Monmouth University
2:50 – 3:30 pm – Breakout Session 5
Collaboration Among Academia, Local and Federal Law Enforcement, and Other Professionals in Ransomware Preparation
Preparing for the potential of a ransomware event, and the resulting response procedures should an organization be attacked with the malware, requires a genuine community approach. The variety of expertise necessary to properly arrange forward-thinking planning for ransomware events is vast and should include cross-discipline, cross-sector, and public-private partnership interaction among professionals in information technology, cybersecurity, information security, higher education, former federal (FBI and Secret Service) and local law enforcement, emergency management, and business. As a result of this amalgam of inputs, the theme and content of the published research and ransomware framework to be presented reflect a true diversity of disciplines.
Several key organization ransomware and recovery preparations content points covered in the presentation will include:
A non-technical understanding of crucial preparation, response, and recovery tasks and categories associated with ransomware, serving cross-disciplinary functions (Financial, Insurance, Board/Executive, Audit, Information Technology, Legal, Education, Communications, Law Enforcement).
An understanding of the role ethics can play in contending with ransomware, including a constraint diagram to help discuss the topic with critical participants.
A brief overview of the FEMA Incident Command System (ICS) and how, with modifications, it can be used to help with coordination, chain-of-command establishment, and responsibilities.
A harmonizing ransomware incident response checklist that can be used to help guide organizations through the varied categories of tasks associated with ransomware recovery.
Presenters:
Stan Mierzwa, Center for Cybersecurity, Kean University
Data Management Trends and Tools for Discovery and Open Science
This session will review the challenges and opportunities associated with research data management. Attendees will learn about the fundamental requirements for a unified data and collaboration platform which researchers can leverage to efficiently store, share, access, and manage research data, accelerating institutional research efforts. Features of the IEEE Dataport platform, an easy-to-use, globally accessible data management platform, will be explored. IEEE (http://www.ieee.org) is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity.
Presenters:
Melissa Handa, Program Director, IEEE
Forough Ghahramani, Associate Vice President for Research, Innovation, and Sponsored Programs, Edge
Establishing Procedures to Improve Monitoring Effectiveness
We have all been monitoring our networks, servers, and applications for years. However, our effectiveness in leveraging monitoring tools is rarely in alignment with our service improvement goals. For the past two years we have been implementing improved monitoring strategies for the entire IT organization to leverage. These strategies have identified gaps in monitoring areas and allowed us to consolidate monitoring tools to improve how we use monitoring. This session will provide an overview of our monitoring services and the strategies we have implemented to provide improved IT services to the Princeton community.
Presenters:
Joseph Karam, Director, Enterprise Monitoring Services, Princeton University
The Wheel Already Exists: Why Standards and Models are the Fuel for Innovation in Digital Learning
In this session, we’ll explore the critical role of standards and models in higher education, and how they provide a solid foundation for innovative digital learning experiences. By leveraging insights from successful models, educators can develop sustainable, engaging, and effective online and digitally-enabled courses, ensuring a future of continuous improvement and student success. Join us to discover how embracing “the ordinary” can be a catalyst for educational innovation, leading to transformative and enduring learning experiences to ensure your institution’s competitiveness in a changing higher education landscape.
Presenters:
Joshua Gaul, AVP and Chief Digital Learning Officer, Edge
The post EdgeCon Autumn 2023 appeared first on NJEdge Inc.
The official voting period will be between Wednesday, December 6, 2023 and Wednesday, December 13, 2023, following the 45-day review of the specifications. For the convenience of members who have completed their reviews by then, voting will actually open a week early on Wednesday, November 29, 2023, with the voting period still ending on Wednesday, December 13, 2023.
If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.
A description of OpenID Connect can be found at https://openid.net/connect/. The working group page is https://openid.net/wg/connect/. The vote will be conducted at https://openid.net/foundation/members/polls/325.
— Marie Jordan, OpenID Foundation Board Secretary
The post Notice of Vote to Approve Proposed Second Errata Set for OpenID Connect Specifications first appeared on OpenID Foundation.
Published November 21, 2023, revised November 22, 2023 to include Deutsche Telekom quote.
The OpenID Foundation is pleased to announce that it has joined the Linux Foundation’s CAMARA project as an Associate Member.
CAMARA is an open source project within the Linux Foundation that defines, develops, and tests the APIs enabling seamless access to telco network capabilities. Initiated in 2021 by a small number of telco operators, vendors, and hyperscalers, CAMARA launched in February 2022 with 22 initial partners; it has since grown to over 250 participating organizations with over 750 contributors.
Having supported the technical development of GSMA Mobile Connect and the Mobile Network Operator (MNO) community, the Foundation is ready to contribute to CAMARA, drawing on its experience working with similar industry organizations as well as leading the global adoption of technical standards for Open Banking.
Shilpa Padgaonkar with Deutsche Telekom and Active Maintainer of CAMARA commented, “Deutsche Telekom is a member of the OpenID Foundation and one of the founding members of CAMARA and we are very excited about the liaison agreement. We strongly believe that this cooperation will accelerate valuable contributions to the project on several identity and consent topics and help us drive a shared vision on the OpenID Foundation standards and create sustainable CAMARA-based solutions for the future.”
OpenID Foundation MODRNA Co-Chair, Bjorn Hjelm said, “As a global technical Standards Development Organization (SDO) and using the experience from the Telco industry, the OpenID Foundation is in a unique position to facilitate and accelerate technical development by CAMARA to benefit all stakeholders.”
For more information on the Linux Foundation CAMARA go to https://camaraproject.org/.
About the OpenID Foundation
The OpenID Foundation's vision is to help people assert their identity wherever they choose. And our mission is to lead the global community in creating identity standards that are secure, interoperable, and privacy-preserving.
Founded in 2007, the OpenID Foundation (OIDF) is a non-profit open standards body developing identity and security specifications that serve billions of consumers across millions of applications.
Learn more here: https://openid.net/foundation/
The post OpenID Foundation Joins CAMARA first appeared on OpenID Foundation.
A “DIDComm 101” presentation at the Internet Identity Workshop (IIW) in Mountain View last month inspired the creation of a new tool that enables anyone to easily send and receive DIDComm messages. Colton Wolkins, Senior Software Engineer at Indicio, explains.
Please provide a quick introduction to DIDComm and the DIDComm Demo
DIDComm is short for DID Communication, a protocol which lets people and software use Decentralized Identifiers (also known as DIDs) to communicate securely and privately over many channels including the Internet, Bluetooth, mobile push notifications, and QR codes.
The idea for the DIDComm Demo came about just before the Internet Identity Workshop (IIW) in California last month. Sam Curren (Indicio's Senior Systems Architect and Deputy CTO) realized that everything he was presenting at IIW using slides could be shown with a simple self-serve tool that helps people see for themselves how DIDComm works.
The DIDComm Demo allows you to connect to another person, a computer, a phone or simply another window in a different browser tab so you can see DIDComm messages traverse back and forth after the messages have been decrypted.
The reason we wrote this as a separate app was to allow people to see how DIDComm works without needing to sift through or learn a substantial stack like Hyperledger Aries Cloud Agent Python (ACA-Py).
Who is the Demo designed for?
Anyone who wants to understand how the technology works. For example, developers who want to build their own solutions on top of DIDComm can connect to the Demo from whatever they are building, to test that messages are being sent correctly.
We used a chat window approach to make it easy to understand. The tool is stripped down to the essentials, so it’s easy to see what’s going on. If you open up your browser’s network inspector, you’ll find all your messages are encrypted and your data is secure.
The DIDComm Demo is entirely open source. If anyone wants to look at code, to see how it works and learn from it, they can.
Please explain the concept of mediators in DIDComm, and their role in the DIDComm Demo
DIDComm doesn’t provide an inherent way to store messages. Also your phone’s IP address is always changing, so there’s no way to talk directly to it. Mediators are cloud-based agents that enable DIDComm messages to be stored and routed.
For example, if your travel agent sends you a DIDComm message about your flight, the mediator stores the message if your phone is offline. When the phone comes back online it talks to the mediator and says “Here I am, please send me any messages.”
Mediators tend to be hosted on the public Internet, so they have a domain name which points to the mediator’s IP address. They can either deliver messages directly to mobile devices, or collect messages from many agents at a single endpoint for future delivery.
A mediator must be accounted for when the sender encrypts a message. For example: If I am connected to you and I send you a message through your mediator, I first encrypt it so only you can decrypt it, then I encrypt it again so only your mediator can read whom to send the message to.
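A rough sketch of that double wrapping is below; encrypt_for() is a stand-in for real DIDComm envelope encryption rather than an actual library call, and the message shapes are simplified from the DIDComm v2 routing protocol.

# Simplified illustration of sender-side wrapping for routing through a mediator.
import json

def encrypt_for(recipient_did: str, payload: dict) -> dict:
    # Placeholder: a real implementation produces an encrypted DIDComm envelope (JWE).
    return {"ciphertext_for": recipient_did, "payload": payload}

# Inner layer: only Bob can decrypt the actual message.
inner = encrypt_for("did:peer:2...bob", {
    "type": "https://didcomm.org/basicmessage/2.0/message",
    "body": {"content": "Hi Bob"},
})

# Outer layer: only Bob's mediator can decrypt it, and it only reveals where to forward.
outer = encrypt_for("did:peer:2...bobs-mediator", {
    "type": "https://didcomm.org/routing/2.0/forward",
    "body": {"next": "did:peer:2...bob"},
    "attachments": [inner],
})

print(json.dumps(outer, indent=2))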
The DIDComm Demo is connected to a mediator. That’s what enables two people across the world to send messages to each other. At the moment, it happens to use the Indicio mediator (we don’t gain any insight into whether a message is coming from the Demo or any other application). We’re hoping to support more mediators, as other organizations bring WebSocket-capable mediators online.
DIDComm uses did:peer:2 DIDs to establish a connection. The public keys are included in the DID document — one for signing, one for encryption — together with a service endpoint (a location to talk to), in this case the Indicio Mediator's DID. When you send a message, the Demo app resolves the mediator's DID and finds the URL to which to relay the message.
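To make that concrete, here is a simplified, illustrative shape of a resolved did:peer:2 DID document; the identifier, key values, and endpoint are placeholders, not real key material or Indicio's actual mediator DID.

# Illustrative structure only; all values are placeholders.
did_document = {
    "id": "did:peer:2.Ez...enc.Vz...sig.SeyJ0Ijoi...",
    "verificationMethod": [
        {"id": "#key-1", "type": "Ed25519VerificationKey2020",   # signing key
         "publicKeyMultibase": "z6Mk...sig"},
        {"id": "#key-2", "type": "X25519KeyAgreementKey2020",    # encryption key
         "publicKeyMultibase": "z6LS...enc"},
    ],
    "authentication": ["#key-1"],
    "keyAgreement": ["#key-2"],
    "service": [
        {"id": "#didcomm-1", "type": "DIDCommMessaging",
         # Routing through a mediator: the service endpoint points at the mediator's DID.
         "serviceEndpoint": {"uri": "did:peer:2...indicio-mediator"}},
    ],
}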
For more information on mediators, see https://book.didcomm.org/routing/
How can someone access the DIDComm Demo?
Go to demo.didcomm.org. You’ll see a Help button there with a tutorial, plus a link to the github repo.
This article, like "Echoes from History: Designing Self-Sovereign Identity with Care", is drawn from a book I have in process called FOREMEMBRANCE, which chronicles how identity was abused in WWII and how those dangers linger to the present day.
ABSTRACT: eIDAS v2.0, released as a provisional agreement on November 16, 2023, offers the first large-scale codification of digital identity. Unfortunately, it contains not just technical problems but also major philosophical missteps that could make it highly dangerous to its users, especially given the increasing threat of regime change in Europe.
The history of identity is a sobering one. It is just over a century old in its modern, codified, and organized form, but in that time identity has been responsible for atrocities. In "Echoes from History: Designing Self-Sovereign Identity with Care", I wrote about one of the worst misuses of identity, when Jacobus L. Lentz supported the Nazis in their use of Dutch identity records during World War II, resulting in the death of 75% of the country's Jews. It didn't have to be that way: in France, another identity pioneer by the name of René Carmille had control of a similar trove of identity records, but only selectively disclosed some of his data to the Nazis. Only 23% of the Jewish people in France fell to genocide as a result.
However, the stories of Lentz and Carmille are not mere historical prologue. They are powerful warnings about the care that must be taken in the use of personally identifiable data in the twenty-first century and beyond, as the collection of that personal information multiplies and enters the digital space. These warnings are particularly important today, as the European Union just reached provisional agreement on their next-generation digital-identity regulation, eIDAS, on November 16th. This was despite the fact that it is filled with both technical oversights and poor identity designs that could allow a repeat of the Dutch disaster under Lentz.
It is disturbingly ironic that six days later, on November 22nd, Geert Wilders rose to prominence in the Netherlands in a shock election win1 after running on a platform of hate and intolerance toward Muslims. What happens when isolationists and nationalists such as Geert Wilders, Viktor Orbán, and Marine Le Pen take hold of a badly flawed identity system? Unfortunately, history offers us dark lessons.
Two Roads Diverged
The digital world is a new frontier, and like all frontiers it has enjoyed waves of migrations. First came the pioneers: rugged individuals who set their own norms and created their own communities. Then came the corporations: companies intent on monetizing that frontier. Last came the governments: existing authorities intent on setting their own rules and regulations for this new world.
Today, the digital world stands at a crossroads. The conflicting ideas of the pioneers and the corporations are already writ large in differing models for identity on the internet. But now, governments are looking to exert their control over digital space, and it will ultimately be they who decide which of those models persists.
Despite the internet being ultimately founded by governments, who enabled the centralized authority of institutions such as IANA, ICANN, and various Certificate Authorities, the users of the internet have always yearned to be free. They’ve long wanted to control their own digital identities, to define their own relationships with other networked individuals.
One of the first clear statements of this was Phil Zimmermann's development of the PGP software in 1991. Superficially, it was a technology that allowed for the encryption of data so that it could only be read by its intended recipient. But PGP went far beyond that. It gave each user a personal identifier that they controlled. They could decide what information to release in correlation with their PGP key and to whom.
Following Zimmermann's initial deployment of PGP, generations of digital pioneers offered their own takes on digital identity. Most of this occurred courtesy of organizations such as the Identity Commons2, the Internet Identity Workshop3, and Rebooting the Web of Trust4. Their goal has been to allow for the creation of "decentralized identity" that is not controlled by any one entity. It was a very different take from either Lentz or Carmille, with a user in control of their identity information rather than giving it over to the state.
A number of those ideas about self-sovereign identity have become reality with the standardization of decentralized identifiers (DIDs)5 and Verifiable Credentials (VCs)6 by the W3C. Now users can have personal control of an identifier that says who they are, and they can associate signed credentials with it, from recommendations to licenses to certifications. It's a model for digital identity that, if deployed correctly, can avoid the worst dangers of identification, because a user can decide what information to release and to whom.
Unfortunately, it’s not the only way.
Just as users have been building up models for digital identity that give power back to individuals by allowing them to hold their personal data close, corporations have been going in the opposite direction, trying to link together as much information as they can.
You look at a product on Amazon, and suddenly every other site on the internet is trying to sell you similar products. This is a purposeful invasion of your digital privacy that advertising companies celebrate7. But it's just the tip of the iceberg.
Companies all across the internet collect huge amounts of data and share it with each other8, creating the potential for huge honeypots of personal information. Imagine a malevolent party accessing every single bit of data collected by Google, potentially including what websites you visit, what items you purchase, and even where your Android phone is located at any time. It becomes even worse when internet companies require your real name, as Facebook9 does, potentially creating a real-world connection to that digital data trove. The possibilities are catastrophic if a sufficiently bad actor gets a hold of that information.
These are the two possibilities for digital identity as we enter the third phase of the development of the digital world, where countries and other polities are beginning to lay down their own rules and regulations. We could go the way of the self-sovereign pioneers, who want to minimize information and correlation in a way that would have made Carmille proud, or we could go the way of the corporations that want to collate and correlate everything for their own selfish self-interest, something that Lentz likely would have blindly supported.
Unfortunately eIDAS v2.0, one of the first major definitions of digital identity, could go either way.
The EU's Unintentional Consequences
eIDAS v2.0, the new extended version of the Electronic Identification, Authentication and Trust Services regulation that is a product of the European Union, was released as a provisional agreement on November 16, 2023. The EU has been a major force in proclaiming individual rights on the internet, and they seem to have the best intentions of empowering and protecting their citizens. Unfortunately, their biggest initiatives to date have been mixed successes due to their inability to recognize the unintentional consequences of their regulatory design.
The GDPR 10 and the EPrivacy Directives11 had laudable goals 12 but have resulted in endless clicking of buttons to access websites in Europe and have largely benefited large corporations who are able to fulfill the regulations13. The EU Directive on Copyright in the Digital Single Market14 has resulted in countries trying to grab cash from search engines15 and getting cut off from news as a result16.
Generally, these digital regulations demonstrate a concerning repetition of unintended consequences and insufficient understanding of the digital space in EU legislation. If those problems were to recur in identity-focused legislation, it could be catastrophic.
eIDAS on the March
eIDAS v2.0 details the legal and technical requirements for countries to offer digital identities that can be stored on devices and reused17. It requires the creation of digital wallets that can be filled with both an identifier and credentials. If you've long dreamed of having a driver's license, university records, or health records that are digitally stored in a portable way, that's what eIDAS v2.0 does18.
There’s a lot of theoretically good material built into eIDAS v2.0. It supports interoperable digital wallets with high security standards. It also enables digital signing and can be set up remotely without having to present physical documents in person19.
There's also hope that self-sovereign identity (SSI) could still be a possibility with the release of eIDAS. After all, the provisional agreement states that one of its goals is "to provide people with control over their online identity and data"20. One study on the topic thus suggested there was no innate contradiction between eIDAS and the individual-empowering principles of SSI21. Another laid out bridge scenarios22. Yet another author noted that "selective disclosure" of information, where users individually decide what identity elements to release, had been a principle since eIDAS 1.0. However, nothing is guaranteed: that final article also noted that the user's personal, self-sovereign control of data is purposefully limited by the design23 of eIDAS.
Ultimately, whether eIDAS will create new digital honeypots of information will depend on eIDAS' final deployment and how it's arbitrated by countries' laws and courts. Unfortunately, the EU's record on digital legislation is problematic, and as proposed, eIDAS is full of holes that could lead to big problems in our future.
The biggest problem with eIDAS as it currently stands is a worldwide web-security issue. eIDAS mandates that web browsers accept security certificates from individual member states and the EU can refuse to revoke them even if they're dangerous24. The provisional agreement does note that "The obligation of recognition, interoperability and support of QWACs is not to affect the freedom of web-browser providers to ensure web security, domain authentication and the encryption of web traffic in the manner and with the technology they consider most appropriate."20. However, that's clearly a contradiction: web browsers cannot ensure web security if there's an obligation to recognize CAs from member states.
That obligation not only reverts the web to an outdated model for security management, but it also puts every single transaction on the web at the mercy of the least trustworthy administrator in the least trustworthy EU state, including states such as The Netherlands and Poland, now being taken over by nationalists. Potentially, millions of people would have the ability to break web security. Perhaps worse, it allows the surveillance of users at European websites by European governments25.
There are numerous problems with the actual identity model of eIDAS as well.
This begins with the identifier at the heart of the proposal, which is problematic because it’s meant to be persistent26. That allows for extreme correlation of everything a person ever does with their identity, over an infinite amount of time, and thus creates huge honeypots of personal information that will be massive targets for attacks. Earlier proposals even called for the identifier to be unique for each person, which would have multiplied the problem a hundredfold (and violated the Constitutions of some member states)27. The provisional agreement leaves the question of uniqueness up to the member states themselves but at least offsets that somewhat with support for pseudonyms28.
The cryptographic technology used by eIDAS is also at issue because of the very conservative and limited choices of the EU. Modern signature schemes and other cryptography that could offer better protection for the enormous amount of personal data likely to be compiled under eIDAS are all prohibited29. This leaves it very vulnerable.
eIDAS data is ultimately collected in wallets, which will be government-backed and will use identities provided by corporations. This allows the possibility for either entity to illicitly collect personal data30. Though the current governments and selected corporations might be trustworthy, the same guarantee can’t be made for future controllers of those entities. Thus, the data and app model at the core of eIDAS is just a mortgage for future problems.
In other words, there are problems up and down the entire eIDAS stack, including identity creation, identity assignment, cryptographic protection, web-based security, and app design. Too much data is being collected in one place, it’s potentially available to too many people, and it’s too vulnerable.
The stories of Lentz and Carmille should have taught us the core importance of minimizing data and intrinsically protecting it from misuse.
eIDAS does not.
The problem is that the EU is creating a digital identity without understanding modern theories for how to create safe identities that can benefit their users without making them vulnerable.
Identities need to minimize data correlation. The EU's original take on a unique identity shows how badly misguided they are on this criterion, and even now the decision is going to get punted to the member states. But there should never be a single identity that correlates your taxes, your transit through different countries, your rental of a bicycle, your usage of digital euros31, and whatever else gets jammed into the eIDAS framework. And that data shouldn't last forever (and in fact doing so goes against the GDPR's right to be forgotten32).
Identities need to minimize identification. That same person renting a bike should need to reveal nothing more than their ability to pay to rent and possibly replace the bicycle. A person buying an alcoholic beverage should only need to reveal their age. A person reporting a public-works issue, such as a pothole, should potentially reveal nothing. Ultimately, identity is about managing the risk between two parties. The risk to individuals must be minimized by minimizing the identification data they're required to share.
For eIDAS identities to successfully meet the human-focused goals of the EU, they need to reflect the power imbalance between the state and the individual, and in those situations where their needs are equal, or where the needs of the individual are greater, they need to maximize the power of the individual.
To go back to the example of an individual reporting a pothole, the risk that the state faces for a false report is low: they might waste resources if they schedule a non-existent public-works repair, but they are very large and can easily amortize that loss. In contrast, the risk that an individual faces is much greater. If their identity were revealed, they might be “outed” as a troublemaker or they might face retribution from a worker who had previously failed to fix that pothole. (Obviously, these repercussions would be even greater for a matter of more import, but the fact that they exist for even a small issue is notable.) As a result, despite the power imbalance where the state holds the most power, it must give the most protection to the individual in this situation, by not requiring identification data.
Certainly, there are technical solutions that can resolve some of the most extreme problems remaining in eIDAS. The state-mandated CAs need to be removed, identifiers must be required to be non-persistent and non-unique, cryptography must be expanded, and the creation of identifiers and wallets must be decentralized.
However, the philosophical issues are just as important. The minimization of data collection, data correlation, and personal identification must be incorporated into eIDAS as actual regulations or the very strongest of recommendations. eIDAS ultimately must say not just what things must be done for compliance, but what things must not be done to preserve the privacy of user data.
Conclusion
In order to protect our human rights, we need to ensure that modern-day information systems protect our data in meaningful ways. As we learned through the lessons of Lentz and Carmille, there are two paths before us.
On the one hand, we have the path of Jacobus Lentz. He maximized the information that he collected, he happily stored it all together, and he thought merely about the efficiency of its use. His information thus wasn’t protected against a regime change, and the Nazis used it to commit genocide against 75% of the Netherlands’ Jewish population.
On the other hand, we have the path of René Carmille. He collected very similar information to Lentz, but he was aware of the sensitivity of some information, so he ensured that it couldn't be correlated with other personally identifiable information. A third as many Jews died in France as in the Netherlands (by percentage), and it seems likely that Carmille's protection of personal records was a major factor.
Moving across the 20th and 21st centuries, we've unfortunately more frequently cleaved to the path of Lentz than Carmille. The Census Bureau turned over records of Japanese-Americans to President Roosevelt, and thousands died and tens of thousands more were deeply scarred. Records of young DACA immigrants passed from President Obama to President Trump, and their lives were thrown into grave uncertainty. The smaller stories of individual people whose lives were destroyed due to the theft or sharing of data (legal or not) are uncountable. Lentz remains ascendant; his lessons unlearned.
But the digital frontier offers a new opportunity to do things right. We can ensure that identities are controlled by people and used to benefit those people and society as a whole. We can protect people and their identities against regime change, which has become all too much of a possibility in Europe, as demonstrated by Poland and the Netherlands. We just have to do so before it’s too late.
The biggest danger and the biggest opportunity in the current day remains eIDAS. With the release of a provisional agreement, it might already be too far along to prevent its ratification in that form. But either now or afterward we must take the opportunity to legislatively minimize the data collected and correlated for identities.
We must give individuals the ability to control their personal identity as much as is possible in a shared society.
Two paths signboard Image by rawpixel.com on Freepik.
Footnotes
“Populist Rage Gives Dutch Far Right a Worrying Shot at Power”. Foreign Policy. https://foreignpolicy.com/2023/11/27/netherlands-dutch-election-coalition-geert-wilders-pvv-far-right/. ↩
The Identity Commons. https://www.idcommons.org/. ↩
Internet Identity Workshop. https://internetidentityworkshop.com/. ↩
Rebooting the Web of Trust. https://www.weboftrust.info/. ↩
“Decentralized Identifiers (DIDs) v1.0”. W3C. https://www.w3.org/TR/did-core/. ↩
“Verifiable Credentials Data Model v1.1”. W3C. https://www.w3.org/TR/vc-data-model/. ↩
“What is ReTargeting and How Does it Work?”. ReTargeter. https://retargeter.com/what-is-retargeting-and-how-does-it-work/. ↩
“The Data Big Tech Companies have On You”. Security.org. https://www.security.org/resources/data-tech-companies-have/. ↩
“How Facebook’s real-name policy changed social media forever”. Protocol. https://www.protocol.com/policy/anonymity-real-names-jeff-kosseff. ↩
“General Data Protection regulation: GDPR”. Intersoft Consulting. https://gdpr-info.eu/. ↩
“Cookies, the GDPR, and the ePrivacy Directive”. GDPR.eu. https://gdpr.eu/cookies/. ↩
“The EU General Data Protection Regulation: Questions and Answers”. Human Rights Watch. https://www.hrw.org/news/2018/06/06/eu-general-data-protection-regulation. ↩
“Unintended Consequences of GDPR”. Regulatory Studies Center (George Washington University). https://regulatorystudies.columbian.gwu.edu/unintended-consequences-gdpr. ↩
“Directive on Copyright in the Digital Single Market”. Wikipedia. https://en.wikipedia.org/wiki/Directive_on_Copyright_in_the_Digital_Single_Market ↩
“French Regulator Says Google Must Pay to Link to News Sites”. Wired. https://www.wired.com/story/french-regulator-says-google-must-pay-to-link-to-news-sites/. ↩
“Google News to relaunch in Spain after mandatory payments to newspapers scrapped”. The Verge. https://www.theverge.com/2021/11/3/22761041/google-news-relaunch-spain-payments-publishers-eu-copyright-directive. ↩
“eIDAS 2.0 - Explained in 90 seconds”. IDNow Youtube Channel. https://www.youtube.com/watch?v=W14StXJPB-U. ↩
“EIDAS 2.0 – WHAT’S NEW?”. Cryptomathic. https://www.cryptomathic.com/news-events/blog/eidas-2.0-whats-new. ↩
“eIDAS 2.0 and its potential impact on online security and fraud prevention”. Rupanjan Mukherjee on LinkedIn. https://www.linkedin.com/pulse/eidas-20-its-potential-impact-online-security-fraud-mukherjee/ ↩
“European Digital Identity - Provisional Agreement”. European Parliament. https://www.europarl.europa.eu/committees/en/european-digital-identity-provisional-ag/product-details/20231116CAN72103. ↩ ↩2
“Self-Sovereign-Identity & eIDAS: a Contradiction? Challenges and Chances of eIDAS 2.0*”. European Review of Digital Administration & Law - Erdal. https://www.erdalreview.eu/free-download/979125994752910.pdf. ↩
“SSI eIDAS Legal Report”. European Commission. https://joinup.ec.europa.eu/sites/default/files/document/2020-04/SSI_eIDAS_legal_report_final_0.pdf. ↩
“Digital Identity in Germany a winter's tale?”. Medium. https://medium.com/@schwalm.steffen/digital-identity-in-germany-a-winter-s-tale-ce7ced9f7635 ↩
“eIDAS 2.0 Sets a Dangerous Precedent for Web Security”. EFF. https://www.eff.org/deeplinks/2022/12/eidas-20-sets-dangerous-precedent-web-security. ↩
“Could Big Brother EU Digital Identity Laws Impact Digital Euro Privacy Claims?”. Ledger Insights. https://www.ledgerinsights.com/eu-digital-identity-laws-impact-digital-euro-privacy-claims/. ↩
“Commission says single identifier in eIDAS reform ‘not necessary’”. https://www.euractiv.com/section/digital/news/commission-says-single-identifier-in-eidas-reform-not-necessary/ ↩
“A Unique Identification Number for Every European Citizen”. Verfassungsblog. https://verfassungsblog.de/digital-id-eu/ ↩
“9 facts about the EU Digital Identity Wallet”. IDEMIA. https://www.idemia.com/insights/9-facts-about-eu-digital-identity-wallet. ↩
“Electronic Signatures and Infrastructures (ESI); Cryptographic Suites”. ETSI. https://www.etsi.org/deliver/etsi_ts/119300_119399/119312/01.02.01_60/ts_119312v010201p.pdf. ↩
“Understanding the eIDAS 2.0 and its implication for Individual’s Privacy and Data Protection Rights.” Medium. https://medium.com/datafrens-sg/unerstanding-the-eidas-2-0-and-its-implication-for-individuals-privacy-and-data-protection-rights-df0ae62eaafa. ↩
“The ECB Moves Forward on Digital Euro Development”. Fintech Nexus. https://www.fintechnexus.com/the-ecb-moves-forward-on-digital-euro-development/ ↩
“Art. 17 GDPR – Right to erasure (‘right to be forgotten’)”. Intersoft Consulting. https://gdpr-info.eu/art-17-gdpr/. ↩
The widely-adopted XML schema for marking up books of all kinds is now presented for public review prior to its submission to the members of OASIS as a candidate for OASIS Standard.
OASIS and the DocBook TC [1] are pleased to announce that The DocBook Schema Version 5.2 is now available for public review and comment.
DocBook is a schema (available in languages including RELAX NG, SGML and XML DTDs, and W3C XML Schema) that is particularly well suited to books and papers about computer hardware and software.
Because it is a large and robust schema, and because its main structures correspond to the general notion of what constitutes a “book,” DocBook has been adopted by a large and growing community of authors writing books of all kinds. DocBook is supported “out of the box” by a number of commercial tools, and there is rapidly expanding support for it in a number of free software environments. These features have combined to make DocBook a generally easy to understand, widely useful, and very popular schema. Dozens of organizations are using DocBook for millions of pages of documentation, in various print and online formats, worldwide.
The TC received four Statements of Use from Norm Tovey-Walsh, XML Press, the SUSE documentation team, and Jirka Kosek [3].
The candidate specification and related files are available here:
The DocBook Schema Version 5.2
Committee Specification 01
19 July 2023
Editable source (Authoritative):
https://docs.oasis-open.org/docbook/docbook/v5.2/cs01/docbook-v5.2-cs01.docx
HTML:
https://docs.oasis-open.org/docbook/docbook/v5.2/cs01/docbook-v5.2-cs01.html
PDF:
https://docs.oasis-open.org/docbook/docbook/v5.2/cs01/docbook-v5.2-cs01.pdf
For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open.org/docbook/docbook/v5.2/cs01/docbook-v5.2-cs01.zip
Public Review Period
The 60-day public review starts 20 November 2023 at 00:00 UTC and ends 18 January 2024 at 23:59 UTC.
This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.
Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility as explained in the instructions located via the button labeled “Send A Comment” at the top of the TC public home page, or directly at:
https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=docbook
Comments submitted for this work and for other work of this TC/OP are publicly archived and can be viewed at:
https://lists.oasis-open.org/archives/docbook-comment/
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review of "DocBook V5.2," we call your attention to the OASIS IPR Policy [4] applicable especially [5] to the work of this technical committee. All members of the TC/OP should be familiar with this document, which may create obligations regarding the disclosure and availability of a member's patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information
[1] DocBook TC
https://www.oasis-open.org/committees/docbook
[2] Approval ballot:
https://www.oasis-open.org/committees/ballot.php?id=3806
[3] Statements of Use:
Norm Tovey-Walsh, XML Press, the SUSE documentation team, and Jirka Kosek
[4] http://www.oasis-open.org/policies-guidelines/ipr
[5] https://www.oasis-open.org/committees/docbook/ipr.php
RF on Limited Terms Mode
https://www.oasis-open.org/policies-guidelines/ipr/#RF-on-Limited-Mode
The post Invitation to comment on DocBook Schema V5.2 before call for consent as OASIS Standard – ends January 18th appeared first on OASIS Open.
Back to our Developer Showcase Series to learn what developers in the real world are doing with Hyperledger technologies. Next up is Harsh Multani, a Hyperledger Mentorship alumnus and, now, Blockchain Developer at Siemens. (Read about Harsh’s mentorship project, Hyperledger Fabric – Hyperledger Aries Integration to support Fabric as blockchain ledger, here.)
Start your Thanksgiving holiday party right with the latest episode of the Identity at the Center Podcast! We had the pleasure of chatting with Matt Caulfield, Founder and CEO of Oort (now part of Cisco). We dove into the big problem areas in identity and discussed the challenges in ITDR, the data plane side of identity, machine identity, and entitlement entropy. Matt shared some incredible insights and expertise in these areas.
We also delved into Matt's journey into the field of identity and how his role at Cisco has evolved since the acquisition. And of course, we wrapped up the episode on a lighter note by answering what our dream businesses related to outdoor adventures would be.
Don't miss out on this conversation! Tune in to episode #247 at idacpodcast.com or in your favorite podcast app.
Bill & Melinda Gates Foundation Joins UNDP in Pioneering Digital Infrastructure Drive
The United Nations Development Program has launched an ambitious initiative, the “50-in-5” campaign, to boost the digital public infrastructure (DPI) across the globe. This strategic move, strongly supported by the Bill & Melinda Gates Foundation, aims to see 50 countries develop and scale at least one DPI component by 2028. These components include digital payments, ID, and data exchange systems that are integral to achieving the UN’s Sustainable Development Goals.
A cohort of 11 nations, including technological front-runners like Estonia and emerging economies like Bangladesh, have stepped forward as the first movers in this groundbreaking journey. The Gates Foundation’s involvement underscores its ongoing commitment to DPI, highlighted by a $200 million fund established in September 2022. Melinda French Gates has emphasized the transformative potential of mobile devices in furthering digital identities, which are central to DPI’s expansion.
India’s Aadhaar, the world’s largest biometric ID system, is poised to play a significant role in this global narrative. Recognized at the G20 Summit in New Delhi, Aadhaar could serve as a blueprint for countries in Africa and Asia to roll out their digital identity technologies. With such a robust backing and a clear target, the “50-in-5” campaign is set to embark on a path that could reshape how nations and their citizens interact with technology and governance.
Self-Sovereign Identity vs. UNDP’s Digital Public Infrastructure
In the digital era, the quest for a secure and efficient identity management system has led to the emergence of two distinct philosophies: Self-Sovereign Identity (SSI) and the United Nations Development Program’s (UNDP) digital public infrastructure (DPI) approach as recently endorsed by the Bill & Melinda Gates Foundation.
Self-Sovereign Identity (SSI): A User-Centric Model
SSI is a user-centric model that grants individuals complete control over their digital identities. Unlike traditional identity systems that rely on centralized authorities, SSI is built on blockchain technology, allowing for a decentralized and autonomous structure. Users can manage their identities without intermediary oversight, deciding when and to whom they reveal their information. Christopher Allen’s principles of SSI emphasize that control over personal data is fundamental, positioning users not merely as subjects within digital realms but as their sovereigns. This system is geared towards a future where identity verification is both secure and private, with verifiable credentials and decentralized identifiers ensuring accessibility across the globe at any time.
UNDP’s Digital Public Infrastructure: A Collaborative Framework
The UNDP’s DPI initiative, on the other hand, encompasses a network of digital components like payments, ID, and data exchange systems. This infrastructure is designed to aid countries in achieving the UN’s Sustainable Development Goals. With the Gates Foundation’s backing, the “50-in-5” campaign aims to implement DPI components in 50 countries by 2028. Unlike SSI, the DPI approach involves a more collaborative framework, possibly relying on centralized or semi-centralized systems that can be adopted at a national scale. The focus here is on interoperability and inclusiveness, aiming for a safe and universally accessible digital ecosystem.
The Core Differences
The key difference between SSI and DPI lies in the locus of control and the approach to decentralization. SSI is about empowering the individual as the sole owner of their identity, with blockchain technology ensuring that control remains with the user. DPI, while it may utilize some decentralized features, is more about creating a common digital infrastructure that can be shared among nations and governed by a collective of stakeholders, including governments and international organizations.
In conclusion, while both SSI and DPI aim to address the challenges of identity in the digital age, they offer different paths forward. SSI is a bottom-up approach that starts with individual autonomy, whereas DPI represents a top-down strategy that focuses on creating a shared digital environment. As the digital landscape evolves, the interaction between these two models will likely shape the future of digital identity governance.
UNDP Photo by Xabi Oregi: https://www.pexels.com/photo/flags-of-countries-in-front-of-the-united-nations-office-at-geneva-16459372/
The post Self-Sovereign Identity vs. UNDP’s Digital Public Infrastructure appeared first on Lions Gate Digital.
We’re excited to share the latest release of Ceramic, version 3.0. This upgrade is packed with significant improvements:
Ceramic Node Software 3.0
We’re kicking things off with the launch of Ceramic 3.0, the latest version of our node software. We urge you to upgrade your Ceramic packages to this version for an optimal experience.
Node.js Version 20 Support
A key highlight of Ceramic 3.0 is its compatibility with Node.js version 20. Please note that older versions of Node.js, such as v16, are no longer supported with the new Ceramic versions. Therefore, upgrading your Ceramic node to v3.0 necessitates a simultaneous upgrade to Node.js v20.
This change allows you to leverage the most recent features and performance enhancements offered by Node.js.
Upgrade Your Ceramic Node
For those running a Ceramic node, we strongly advise upgrading to the latest version. While older nodes may continue to function, they will no longer receive technical support or bug fixes. To ensure you’re well-supported and prevent future compatibility issues, we recommend updating your node.
Prior to transitioning, ensure your client application builds against the latest versions of the @ceramicnetwork/http-client package. This step will guarantee a seamless experience with future updates.
Click here for more info on upgrading your Ceramic node.
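If you want a quick smoke test that your application still works against the upgraded client, a minimal sketch (not from the release notes; the node URL and stream ID below are placeholders) might look like this:

```typescript
// A minimal sketch, assuming a hypothetical Ceramic 3.0 endpoint and stream ID.
// Requires Node.js v20 and the latest @ceramicnetwork/http-client.
import { CeramicClient } from '@ceramicnetwork/http-client'

const ceramic = new CeramicClient('https://ceramic.example.com') // placeholder node URL

async function main() {
  // Load any existing stream to confirm the client and upgraded node interoperate.
  const stream = await ceramic.loadStream('kjzl6...placeholder-stream-id')
  console.log(stream.content)
}

main().catch(console.error)
```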
Goodbye to Non-Standard DID Methods
We’ve retired some older, unused code: specifically, the DID:ETH, DID:NFT, and DID:SAFE features. These were previously available but were never recommended for production use. We’re now officially discontinuing them. If you’ve been using these DID methods, it’s time to let go as they’re no longer supported.
Future Features on the Roadmap
Ceramic 3.0 paves the way for exciting features that we anticipate in upcoming releases, such as worker threads for parallelized signature verification. These changes are expected to boost performance, setting the stage for more future enhancements.
As always we'd love to hear your feedback on the forum!
The U.S. government has embraced FIDO authentication, and is now looking for further guidance around how to implement this technology into the government’s existing PIV-centric ecosystem used to manage enterprise access for government employees and contractors.
To provide this guidance, the FIDO Alliance published a paper, “FIDO Alliance Guidance for U.S. Government Agency Deployment of FIDO Authentication.”
Read the paper
This resource is the first output of a new committee formed by the FIDO Alliance’s Board of Directors at the request of the White House Office of Management and Budget (OMB) and the Cybersecurity and Infrastructure Security Agency (CISA). The Committee, whose goal is to improve and accelerate adoption of FIDO technology within federal agencies, includes representatives from CISA, the National Institute of Standards and Technology (NIST), the General Services Administration (GSA), and the Department of Defense, in addition to other FIDO Alliance members.
The Committee is aligned with the government’s efforts to modernize identity to counter threats, and encourages agencies to advance their Zero Trust Architecture journeys by implementing identity capabilities that support both FIDO and PKI-based phishing-resistant MFA.
It also provides guidance on implementation of FIDO credentials within the federal digital identity ecosystem in order to meet immediate priorities defined in OMB M-22-09, the Federal Zero Trust Strategy, and to advance cybersecurity outcomes by enabling future phases of Federal Zero Trust Architecture efforts.
Alternative options for phishing-resistant authentication are necessary in the federal workforce: for example, for individuals who are not PIV-eligible, for new employees waiting for their PIV to be issued, or for individuals who work remotely and don’t need access to federal facilities.
This document highlights areas where FIDO offers the best value to address U.S. Government use cases as an enhancement of existing infrastructure, while minimizing rework as agencies advance their zero trust strategies with phishing-resistant authentication tied to enterprise identity as the foundation.
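For readers unfamiliar with the mechanics, FIDO credentials are created and exercised through the browser’s WebAuthn API. The sketch below is purely illustrative (it is not from the FIDO Alliance paper, and the relying-party and user values are placeholders); it shows a registration call whose origin binding is what makes the resulting credential phishing-resistant.

```typescript
// Illustrative WebAuthn (FIDO2) registration; all values are placeholders.
// In a real deployment the challenge and user details come from the agency's
// identity provider, and the response is sent back to it for verification.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: 'Example Agency', id: 'agency.example.gov' },
    user: {
      id: new TextEncoder().encode('employee-1234'),
      name: 'employee-1234',
      displayName: 'Example Employee',
    },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // ES256
    authenticatorSelection: { userVerification: 'required' },
  },
});

// The credential is scoped to the rp.id above, so it cannot be replayed
// against a look-alike phishing site.
console.log(credential?.id);
```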
The FIDO Alliance will host a webinar, “Deploying FIDO Authentication in U.S. Government Agencies,” covering the essential information in this white paper on November 28, 2023 at 1:00 PM ET / 11:00 AM PT. To register for the webinar, click here.
To engage with the FIDO Alliance’s new committee regarding this paper, please contact feedback@fidoalliance.org.
About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.
The post Blog: FIDO Alliance Publishes Guidance for U.S. Government Agency Deployment of FIDO Authentication appeared first on FIDO Alliance.
Organizations from Around the World Come Together to Define the Defending Against Disinformation Common Data Model (DAD-CDM)
Boston, MA – 16 November 2023 – OASIS Open, the international open source and standards consortium, launched the DAD-CDM project, an open source initiative to develop data exchange standards for normalizing and sharing data about disinformation and influence campaigns. DAD-CDM will serve as a valuable resource, particularly in the identification of, and alerting on, AI-empowered attacks.
“DAD-CDM won’t try to define disinformation. Instead, the standard will offer a framework for assessing both technical and non-technical aspects of disinformation campaigns. This will alert influence operation (IO)-fighting teams worldwide and give them the means to communicate their analyses using a standardized language. The ultimate decision on classifying an attack as disinformation rests with analysts,” said Jean-Philippe Salles of Filigran, co-chair of the DAD-CDM Project Governing Board (PGB).
In the context of IO campaigns, AI-driven attacks often exhibit remarkable sophistication and adaptability. The use of AI as a tool can exacerbate the impact of these operations and is a growing concern. A common data model, such as DAD-CDM’s data exchange standards, will provide a structured and standardized approach for sharing critical information regarding these AI-empowered attacks, thus enabling early detection, rapid alerting, and improved response strategies. This will enable stakeholders, including governments and cybersecurity experts, to effectively identify, counter, and mitigate the influence of AI in propagating disinformation and interference campaigns, ultimately reinforcing our defenses against this evolving and pervasive challenge.
“In a world characterized by increasing global division exacerbated by the proliferation of IO campaigns including disinformation, misinformation, and FIMI, it becomes increasingly evident that a shared data model is imperative. Such a model would streamline the sharing and analysis of information concerning influence operations, allowing for a unified approach in response coordination and resource allocation,” said Dr. Georgianna Shea of the Foundation for the Defense of Democracies, co-chair of the DAD-CDM PGB. “The DAD-CDM, in the realm of influence operations, serves a purpose akin to what MITRE ATT&CK accomplishes in the context of cyber-attacks. Both are indispensable instruments, essential for navigating the intricate and multifaceted terrain of contemporary threats, ensuring a more cohesive and effective response strategy.”
Building upon the OASIS Structured Threat Information eXpression (STIX) threat intelligence sharing standard, DAD-CDM will leverage models, protocols, and policies widely used today in the cybersecurity field. DAD-CDM will focus on manipulative behaviors and so will also expand on the DISARM Framework, which standardizes the codification of adversary behavior Tactics, Techniques, and Procedures (TTPs).
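To make the reuse of STIX concrete, here is a rough, hypothetical sketch of how an influence-operation observation might be captured as a standard STIX 2.1 indicator with a DISARM-style technique label attached. None of this comes from the DAD-CDM work itself, whose object types are still being defined; the identifiers, pattern, and labels are placeholders.

```typescript
// Hypothetical illustration only: a STIX 2.1 "indicator" object describing a
// piece of a disinformation campaign. All IDs and values are placeholders.
const indicator = {
  type: 'indicator',
  spec_version: '2.1',
  id: 'indicator--3f1c2a9e-1111-4222-8333-444455556666', // placeholder UUID
  created: '2023-11-16T00:00:00.000Z',
  modified: '2023-11-16T00:00:00.000Z',
  name: 'Coordinated inauthentic amplification cluster',
  description: 'Accounts amplifying the same fabricated narrative in lockstep.',
  pattern: "[url:value = 'https://example.com/fabricated-story']", // placeholder pattern
  pattern_type: 'stix',
  valid_from: '2023-11-16T00:00:00.000Z',
  labels: ['disinformation', 'DISARM-technique'], // a DISARM TTP reference could go here
};

// In practice such objects would be bundled and exchanged over TAXII between
// IO-fighting teams, just as cyber threat intelligence is shared today.
console.log(JSON.stringify(indicator, null, 2));
```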
“The concept of ‘defending’ involves real-time action and strategic planning to confront the threat of disinformation, which poses a significant threat to peace, democracy, and critical policy areas like pandemics and climate change,” said Pablo Breuer, Ph.D., Chair, DISARM Foundation. “DAD-CDM offers several advantages, including faster alert sharing, easier response coordination, increased report sharing, and the ability to compare and contrast actor behavior over time. Together, we will create a stronger defense against the spread of disinformation and online harms and help safeguard the integrity of information worldwide.”
Leadership and funding for the project are provided by AdTechCares, Crisp Thinking Group, Cyabra, Debunk.org, DISARM Foundation, Filigran, Foundation for Defense of Democracies, Global Disinformation Index, Johns Hopkins University Applied Physics Laboratory, Limbik, Logically, MarvelousAI, and sFractal Consulting. Participation in the DAD-CDM Open Project is open to all interested parties. Contact join@oasis-open.org for more information.
Support for the DAD-CDM OP
Crisp
“Crisp has witnessed a transformation in the scale and maturity of disinformation threats that we support our partners in tackling, particularly with the recent surge in generative AI. Being part of this common approach within the DAD-CDM project is an important development that Crisp is proud to support. We eagerly anticipate the collective industry advancement in countering disinformation that this partnership promises, in the face of ever more sophisticated bad actor networks.”
– David Hunter, VP Trust & Safety, Crisp
Cyabra
“Social media is the new attack surface and the fastest-growing arena for bad actors. Ten years after the creation of the ATT&CK framework, which has helped guide the growth of the cybersecurity sector, Cyabra is excited to join OASIS Open and lead the emergence of the formal counter-disinformation industry.”
– Rafi Mendelsohn, VP Marketing, Cyabra
Debunk.org
“The creation of a common language and model through the DAD-CDM initiative is pivotal for professional FIMI defenders. It significantly enhances our efficiency in responding to FIMI attacks. As the Head of Debunk.org, I am proud to join this initiative that promises to unify and strengthen our efforts against the sophisticated FIMI attacks threatening our information landscape.”
– Viktoras Daukšas, Head of Debunk.org
DISARM Foundation
“To effectively identify and expose foreign manipulation before it negatively impacts our democracies, we need to quickly establish a comprehensive threat overview. STIX enables this by connecting the dots. For example, with STIX we will be able to assert, ‘Another community member already analyzed this inauthentic Facebook account and shared their report,’ or ‘These specific deceptive behaviors strongly resemble prior election interference campaigns by this nation-state.’ To address hybrid threats such as cyber-enabled influence (e.g., hack-and-leak) or influence-enabled cyber (e.g., clickbait) we need to model Indicators of Compromise (IOCs) and Indicators of Manipulation (IOMs) in the same way. With STIX we will be able to do that.”
– Stephen H. Campbell, CTO, DISARM Foundation
Filigran
“OASIS Open’s DAD-CDM initiative is a pivotal step forward in addressing the growing challenges of online disinformation, particularly FIMI. At Filigran, we recognize that tackling this multifaceted problem requires collaboration and unified efforts. We’re proud to support a project that not only builds on the trusted foundation of the STIX cybersecurity standard but also fosters real-time, scalable defenses against AI-powered threats, ensuring a more secure and trustworthy digital environment for all.”
– Samuel Hassine, Co-founder & CEO, Filigran
Limbik
“In the years Limbik has been leveraging cognitive AI against disinformation, we have seen incredible progress and ingenuity within the community. This new project is an extremely promising advancement, bringing the best technology, insights, and lessons learned together towards the vaunted whole-of-society response that is desperately needed. A common approach to disinformation that aligns academia, industry, government, and non-profits toward proactive resilience building and risk mitigation represents a huge advancement in an increasingly risky world.”
– Robert Schaul, VP Strategy, Limbik
Logically
“The sheer scale of the disinformation threat facing global society is vast and requires coordinated collaboration from a host of stakeholders if we are to limit the impact of bad actors who utilize disinformation campaigns to cause confusion, disruption and harm. We are delighted to be a part of the DAD-CDM project and look forward to working with our partners to detect disinformation campaigns earlier and improve our response strategies.”
– Lyric Jain, CEO, Logically
MarvelousAI
“In the pursuit of truth and transparency, we are excited to harness the collective power of open-source, guided by the STIX and TAXII standards, to illuminate the shadows cast by information influence operations.”
– Danielle Deibler, Co-founder and CEO, MarvelousAI
sFractal Consulting
“Standards are essential in countering disinformation influence campaigns because standards provide a common framework and guidelines for identifying, mitigating, and preventing disinformation. Standards promote sharing which helps level the playing field and ensures consistency in response strategies. Utilizing standards allows authorities, organizations, and individuals to better coordinate efforts, mitigate the spread of disinformation and safeguard the integrity of the information ecosystem.”
– Duncan Sparrell, Chief Cyber Curmudgeon, sFractal Consulting
Additional Information
DAD-CDM FAQ
STIX is complemented by the OASIS Trusted Automated Exchange of Intelligence Information (TAXII) standard, a protocol used for the secure exchange of cyber threat intelligence.
Foreign Information Manipulation and Interference (FIMI) is defined as a “mostly non-illegal pattern of behavior that threatens or has the potential to negatively impact values, procedures and political processes. Such activity is manipulative in character, conducted intentionally and in a coordinated manner. Actors of such activity can be state or non-state actors, including their proxies inside and outside of their own territory.”
Media Inquiries
communications@oasis-open.org
The post OASIS Mobilizes Open Source Community to Combat the Spread of Disinformation and Online Harms from Foreign State Actors appeared first on OASIS Open.
WAO, together with the lovely people from Participate, has been running three conversational workshops over the last month.
The goal of the workshops was to unlock the potential of Communities of Practice, explore Open Recognition, and experience the Participate platform through this curated series of online events.
The three sessions were designed to offer comprehensive insights into methodologies such as Value Cycles, Maturity Models, and Convening Systems, and to help others grow their communities. They were aimed at people interested in establishing Communities of Practice, current community managers seeking to enhance value and impact, and professionals interested in innovative online education approaches.
This post recaps the three sessions and shares some resources, together with a quick video summary of each workshop.
1. Creating Value in your Community
In this first workshop we explored Value Cycles, inspired by the work of Etienne and Beverly Wenger-Trayner, to cultivate thriving Communities of Practice.
The workshop covered:
- Understanding the different kinds of value community members bring to a CoP
- Imagining context-specific scenarios that lead to community-created value
- Recognizing and applying created value
Resources:
- badges.community
- Link to the article from Wenger-Trayner
- #1 Community Conversation Workshop Value Creation Story Badge
2. Helping your Community Mature
The second workshop was all about navigating your community through its growth stages using a Maturity Model, based on the work of Emily Webber, Bailey Richardson, Kevin Huynh & Kai Elmer Sotto.
The workshop covered:
- Using our remixed Community of Practice Maturity Model to understand how Communities of Practice mature
- Reflecting on the phase of your own CoP and highlighting areas that can help the community progress towards becoming self-sustaining
- Identifying activities and intervention strategies to help a community develop
Resources:
- badges.community
- Tacit’s Community Maturity Model
- Get Together Book
- Whimsical activity board
- #2 Community Conversation Workshop Maturity Story Badge
3. Communities as Change Agents
In the third and last workshop we learned how to transform your community into a catalyst for change using a Convening Systems model, introduced by the Social Learning Lab.
The workshop covered:
- Recognising and documenting alternative systems using the two loops model
- Applying the seven areas of systems convening to your own work
- Seeing relationships between value cycles, the maturity model and convening systems
Resources:
- badges.community
- Video of the two loops model
- Whimsical activity board
- Book excerpt from “After Now” by Bob Stilger explaining the two loops model
- Systems Convening book by the Social Learning Lab
- #3 Community Conversation Workshop Convening Story Badge
Next steps
WAO is eager to help you grow your community. Get in touch if you need assistance, whether it’s with adding value to your community, thinking about open recognition pathways, or working out how to apply systems change in your own context!
Community Conversations was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
Polygon is a decentralized blockchain network that allows for the permissionless exchange of value.
Polygon ID, part of the Polygon ecosystem, provides a Self-Sovereign Identity solution leveraging the power of Zero-Knowledge Proofs. It consists of a set of libraries, tools and ready-made applications that can be used to facilitate trusted, secure and privacy-preserving relationships between identity holders, issuers and verifiers.
Polygon ID is open-source and can be deployed to any EVM-compatible chain.
In our latest guest blog, Polygon ID protocol engineer Oleksandr Brezhniev, Technical Sales Lead for the American region Otto Mora, and Digital Project Manager Alex Rosales answer our questions about the company's new product release.
Your website states that Polygon ID is "The first identity solution that allows users to use zero-knowledge proofs generated from off-chain verifiable credentials to interact with smart contracts." What use cases do you envisage and what are the benefits of this approach?
Know Your Customer (KYC) is a key use case. We've seen multiple Decentralized Finance (DeFi) projects moving in this direction, including Uniswap, which has recently made it possible to allow only verified liquidity providers in the new version of its protocol. Centralized Finance (CeFi) apps (such as centralized exchanges, on-ramp services, etc.) can also benefit from better user experiences coming from reusable verifications. No more passing KYC on each platform separately!
But also there's a range of use cases not related to KYC. For example:
- Decentralized Autonomous Organisations (DAOs) may allow voting only to their GitHub contributors.
- Limit who can join a DAO based on belonging to a social group, like "Women in Tech."
- Build communities to solve issues and fund local development projects.
- Distribute tokens (airdrop) among “Active Steam players” to attract new users to a Web3 game.
In regard to voting and airdrops there is an interesting use case, Sybil resistance. Based on a Proof-of-Uniqueness credential (for example, issued by an identity verification provider), a user can prove that they are unique without disclosing any personal information and can vote only once, no matter how many Ethereum addresses and Decentralized Identifiers the user has. Protection from Sybil attacks and bots is crucial for decision making and fair funds distribution.
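As a rough illustration of what such a check could look like to a verifier, the sketch below shows a proof request in the shape of Polygon ID's zkQuery format. The credential type, JSON-LD schema URL, and issuer DID are hypothetical placeholders, not real Polygon ID schemas.

```typescript
// Hypothetical proof request: the credential type, schema context and issuer
// DID are placeholders. The structure mirrors Polygon ID's zkQuery format.
const proofRequest = {
  id: 1,
  circuitId: 'credentialAtomicQuerySigV2',
  query: {
    allowedIssuers: ['did:polygonid:polygon:main:EXAMPLE_ISSUER'], // placeholder issuer
    type: 'ProofOfUniqueness', // hypothetical credential type
    context: 'https://example.com/schemas/proof-of-uniqueness.jsonld', // placeholder schema
    credentialSubject: {
      unique: { $eq: true }, // prove the claim without revealing anything else
    },
  },
};

// A wallet holding a matching credential answers with a zero-knowledge proof,
// so the verifier learns only that the claim holds, not who the holder is.
console.log(JSON.stringify(proofRequest, null, 2));
```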
Another problem this capability can solve is recovery of access to user accounts. For example, an Abstracted Account (a new way to make Ethereum accounts more secure, with a better user experience) allows you to change the account “owner” if the new “owner” proves to be the same person, using credentials from a trusted KYC provider or from preselected, trusted friends.
To sum up, bringing off-chain data in the form of Verifiable Credentials on-chain improves security and compliance without sacrificing user privacy, reduces fraud, increases automation, improves user experience and opens new possibilities for Decentralized App (DApp) developers.
And with the recent implementation of an on-chain issuer in the protocol, it's also possible to decouple on-chain data from the user's Ethereum address and prove (on-chain and off-chain), for example, that their balance is over some threshold without revealing the address or the exact amount.
Your website also states that "Polygon ID meets W3C Verifiable Credential and Decentralized Identifier (DID) standards." Why is this important and what business benefits does it deliver?
This statement signifies our alignment with key industry standards and protocols that relate to decentralized identity and verifiable credentials, which is important to us because of:
- Interoperability and Compatibility: Conformity to the W3C standards for Verifiable Credentials (VC) and Decentralized Identifiers (DID) means that Polygon ID aims to work seamlessly with other platforms and systems that also adhere to these standards. It simplifies integration with existing tools and solutions on the market, increasing its utility and versatility.
- Trust and Security: W3C VC and DID standards are designed to provide a secure and trustable way of handling identity and credentials in a decentralized and privacy-preserving manner. By aligning with these standards, Polygon ID can assure its users that their identity and credentials are handled in a secure and privacy-conscious way. This can be especially important in applications involving sensitive personal or financial data, such as in the finance and healthcare sectors.
- Future-proofing: Many identity solutions, especially in the Web3 space, develop their own incompatible ways of representing identifiers and credentials. Being an early adopter and promoter of standards, combined with our novel privacy-preserving method to prove statements and do selective disclosure based on ZKPs, can give Polygon ID a competitive edge in the rapidly evolving decentralized identity space. It demonstrates our commitment to innovation and a future-proof approach to technology development. This can be appealing to both developers and businesses looking for long-term, stable solutions.
Consequently, this positions us for greater adoption and brings us closer to fulfilling our vision of empowering individuals and giving them back control over their identities and personal data.
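For readers who have not seen one, a W3C-conformant verifiable credential is simply a structured document with a small set of standard fields. The sketch below is illustrative only; the DIDs, schema context, and claim values are placeholders rather than real Polygon ID identifiers.

```typescript
// Minimal illustration of the W3C Verifiable Credentials data model.
// All identifiers, contexts and claim values are placeholders.
const verifiableCredential = {
  '@context': [
    'https://www.w3.org/2018/credentials/v1',
    'https://example.com/schemas/kyc-age.jsonld', // placeholder schema context
  ],
  type: ['VerifiableCredential', 'KYCAgeCredential'],
  issuer: 'did:polygonid:polygon:main:EXAMPLE_ISSUER', // placeholder issuer DID
  issuanceDate: '2023-11-20T00:00:00Z',
  credentialSubject: {
    id: 'did:polygonid:polygon:main:EXAMPLE_HOLDER', // placeholder holder DID
    birthday: 19900101, // example claim
  },
  proof: {
    // In a real credential this carries the issuer's cryptographic signature;
    // omitted here for brevity.
  },
};

console.log(verifiableCredential.credentialSubject);
```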
How does being a member of the Decentralized Identity Foundation help Polygon ID align to the W3C standards and related DIF specifications?
Through our membership in the DIF, the Polygon ID team can interact with various industry working groups that are very relevant to us:
- DIDComm: Our protocol utilizes a subset of DIDComm called iden3comm that was developed specifically for Iden3 / Polygon ID. In this working group we can contribute this technology and align with other members interested in DIDComm.
- Claims and Credentials Working Group: allows the alignment of credentials and schema standards, including “basic identity credentials” that we are currently creating with our Polygon ID Common Schemas initiative. Several of the KYC issuers in the Polygon ID ecosystem are also members of the DIF, which will facilitate standardization.
- Integration with the Universal Resolver: We believe in interoperability as promoted by Markus Sabadello and other DIF members. Through our support of the Universal Resolver for Polygon ID DID resolution we get closer to that objective.
Also, through our DIF membership we are able to support other events including hackathons, which helps us connect with startups and developers in the Decentralized Identity space.
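To show what the Universal Resolver support mentioned above means in practice, the sketch below resolves a did:polygonid identifier over HTTP against a public Universal Resolver instance; the DID itself is a placeholder, not a real identifier.

```typescript
// Illustrative only: resolve a placeholder did:polygonid DID through a
// Universal Resolver instance and print the resulting DID document.
const did = 'did:polygonid:polygon:main:EXAMPLE_HOLDER'; // placeholder DID

async function resolve(did: string) {
  const response = await fetch(
    `https://dev.uniresolver.io/1.0/identifiers/${encodeURIComponent(did)}`
  );
  if (!response.ok) throw new Error(`Resolution failed: ${response.status}`);
  const result = await response.json();
  return result.didDocument; // standard DID resolution result field
}

resolve(did).then(console.log).catch(console.error);
```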
Are you able to give us a sneak preview of your product roadmap?
After many years of research and subsequent development of the iden3 protocol and a complete suite of tools to serve all actors in the “triangle of trust” (issuers, holders and verifiers), such as an issuer node, the wallet SDK, and the verifier SDK, we finally launched the first production version of Polygon ID in February this year.
Since the launch, we have been gathering positive and constructive feedback from developers and companies implementing Polygon ID, which has enabled us to improve our tools and add new ones to make it easier for developers and end users.
Among the new tools we have already added to the Polygon ID suite:
- The js-sdk, designed to build browser-based applications.
- The query builder, to make it easier for developers of applications to construct queries based on the zkQuery language.
- The schema builder, which helps issuers find existing schemas (hence achieving standardization and interoperability) or create new ones.
Now that the suite of tools is mature, our next goal is to focus on growing the liquidity of credentials in the Polygon ID ecosystem. To achieve this, the new version (launched earlier this week) will make it easier to generate credentials, and will add new possibilities for doing so:
- New credentials marketplace: Makes it easy for you as a developer to select which type of credential to use, and which issuer issues the credential.
- New on-chain issuer: A smart contract that allows the generation of on-chain credentials. The sources of information can be:
  - On-chain: you will be able to generate credentials based on public data already available on-chain. This will allow you, for example, to generate a credential that proves how many tokens you own without having to reveal your address.
  - Off-chain: you will be able to generate on-chain credentials based on off-chain documents. The client application will take the off-chain document and transform it into a verifiable credential that will remain in the user’s identity wallet. Additionally, a zero-knowledge proof will be sent to the on-chain issuer to certify this. For example, you will be able to create a credential based on your identity card or on a PDF issued by the government.
- Improvements in the Issuer Node:
  - Making it easier to install and use
  - Available in cloud marketplaces such as Google Cloud Marketplace (GCM) and Amazon Web Services (AWS) Marketplace
  - Possibility to store revocation trees on-chain
  - Possibility to use a DID with Ethereum addresses: this will allow the issuer to use naming services and have cheaper state transitions
It is important to highlight that Polygon ID is open source, which enables developers to propose improvements and audit the code.
During the next months, we plan to launch exciting new projects (some of them currently in the research phase) that we believe can be a breakthrough for the space. So stay tuned, follow us on X, subscribe to our newsletter, or contact our business development team.
First public review for Version 1.3 - ends December 8th
OASIS and the OASIS Virtual I/O Device (VIRTIO) TC are pleased to announce that Virtual I/O Device (VIRTIO) Version 1.3 is now available for public review and comment.
Specification Overview
This document describes the specifications of the ‘virtio’ family of devices. These devices are found in virtual environments, yet by design they look like physical devices to the guest within the virtual machine – and this document treats them as such. This similarity allows the guest to use standard drivers and discovery mechanisms. The purpose of virtio and this specification is that virtual environments and guests should have a straightforward, efficient, standard and extensible mechanism for virtual devices, rather than boutique per-environment or per-OS mechanisms.
The documents and related files are available here:
Virtual I/O Device (VIRTIO) Version 1.3
Committee Specification Draft 01
06 October 2023
Editable source (Authoritative):
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/tex/
HTML:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01.html
PDF:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01.pdf
Example driver listing:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/listings/
PDF file marked to indicate changes from Version 1.2 Committee Specification 01:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01-diff-from-v1.2-cs01.pdf
For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01.zip
A public review metadata record documenting this and any previous public reviews is available at:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01-public-review-metadata.html
How to Provide Feedback
OASIS and the OASIS Virtual I/O Device (VIRTIO) TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of this technical work.
The public review starts 09 November 2023 at 00:00 UTC and ends 08 December 2023 at 23:59 UTC.
Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility which can be used by following the instructions on the TC’s “Send A Comment” page (https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=virtio).
Comments submitted by TC non-members for this work and for other work of this TC are publicly archived and can be viewed at:
https://lists.oasis-open.org/archives/virtio-comment/
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information about the specification and the VIRTIO TC can be found at the TC’s public home page:
https://www.oasis-open.org/committees/virtio/
Additional references:
[1] https://www.oasis-open.org/policies-guidelines/ipr/
[2] https://github.com/oasis-tcs/virtio-admin/blob/master/IPR.md
https://www.oasis-open.org/policies-guidelines/ipr/#Non-Assertion-Mode
Non-Assertion Mode
The post Invitation to comment on Virtual I/O Device (VIRTIO) Version 1.3 appeared first on OASIS Open.
Elastos has entered a strategic partnership with the Blockchain Game Alliance (BGA), supporting our integration of cutting-edge Web3 technology into the gaming industry. This collaboration embodies our commitment to pioneering a decentralised digital economy and marks a pivotal step in enhancing our influence and reputation within the gaming sector.
- Enhanced Visibility and Credibility: By aligning with the Blockchain Game Alliance, Elastos significantly increases its visibility in the dynamic gaming industry, establishing itself as a credible and influential player. This is crucial for attracting innovative projects into our ecosystem.
- Networking, Collaboration, and Partnerships: Our BGA membership opens doors to invaluable networking opportunities, fostering potential collaborations and partnerships with other industry visionaries. This network will serve as a bedrock for growth and development, especially for our project, Destiny Calling.
- Community Building and Industry Insights: Being part of BGA allows us to actively engage in community building, gaining deep industry insights. This knowledge is vital in staying ahead of trends and aligning our strategies with the evolving landscape of gaming technology.
- Early Access and Industry Integration: Our involvement grants us early access to emerging information and developments within the gaming sector, positioning Elastos favourably for integration and technical collaboration.
- Boost for Destiny Calling: Utilising BGA’s extensive network, we aim to propel Destiny Calling to new heights, attracting interest and fostering potential collaborations.
Elastos will participate in the New Member Presentation hosted by BGA. Elavation representative Jon will introduce Elastos to the gaming community, showcasing its vision and capabilities (date TBC). Elavation will also use connections from the Blockchain Game Alliance (BGA) for the support and success of Destiny Calling.
At its core, Elastos offers a robust, decentralized infrastructure, capable of revolutionising how games are developed, distributed, and monetised. By joining forces with BGA, Elastos gains an influential platform to enhance its visibility and credibility within the gaming sector. This partnership is not merely about membership in an organisation; it’s a strategic alignment of goals and a shared vision for the future of gaming.
The post Elastos Joins the Blockchain Game Alliance: Revolutionising Web3 Gaming appeared first on Elastos.
This article was originally published as an advance reading for RWOT12 in Köln, Germany on August 9, 2023. It has been slightly edited for this reprint.
ABSTRACT: Self-sovereign identity represents an innovative new architecture for identity management. But, we must ensure that it avoids the pitfalls of previous identity systems. During World War II, two identity pioneers, the Dutch Jacobus L. Lentz and the French René Carmille, took different approaches toward the collection and recording of personally identifiable data. As a result, 75% of Dutch Jews fell victim to the Holocaust versus 23% of the Jews in France. Our foundational work on self-sovereign identity today could have similar repercussions down the road, so it’s imperative that we design this foundation responsibly with diligence and foresight, especially as the threat of significant regime change toward authoritarian governments looms ever larger across the world.
On January 26, 2020, on the 75th anniversary of the liberation of Auschwitz during World War II, Prime Minister Mark Rutte of the Netherlands offered a historic apology for how the country had failed its Jewish citizens during that War, which resulted in over 100,000 of them being deported and murdered by the Nazis. He acknowledged that one of the problems was the Netherlands’ civil service, stating: “When state authority became a threat, our public institutions failed in their duty as guardians of justice and security. To be sure, within the government too there was resistance on an individual level. But too many Dutch officials simply did as they were told by the occupying forces.”1
A similar acknowledgement had been offered almost 25 years earlier by President Jacques Chirac of France, who stated that “the criminal folly of the occupiers was seconded by the French, by the French state”.2 In France, some 76,000 French and foreign Jews were deported by the Nazis, but that was out of a larger overall population that also included a larger Jewish community.
When distilled down to the bare and heartless statistics, the difference between the countries is stark: in the Netherlands, 75% of Jews tragically lost their lives, compared to 23% in France. The difference lies in how each country dealt with identity: what they were willing to record and what they were not. Since regimes inevitably change, since data inevitably changes hands from one person to another, this type of consideration is crucial.
It is especially crucial today because regimes have been changing quickly and drastically across the world and many of the new governments have been raised up through their declarations of intolerance and hatred. The freshest example is ironically in the Netherlands, where Islamophobe and isolationist Geert Wilders’ PVV party won a plurality of votes, giving him the opportunity to become Prime Minister3. He’s promised to ban the Quran, and one shudders to think of what he might do with identity records of Muslims. But, he’s far from alone. Viktor Orbán, the authoritarian Prime Minister of Hungary, has conducted attacks against the LGBT community4. Marine Le Pen has made increasingly credible runs for French President as the head of the National Front. In the United States, Donald Trump, the Fifth Circuit Court of Appeals, and even the Supreme Court have attacked disadvantaged peoples and taken away rights. Many of these ingredients are the same as those seen in World War II, but our data collection has reached new heights, and thus offers even greater dangers.
These harsh lessons remain especially important for the new technologies of digital identity, which have boosted information compilation even beyond that of the physical world. That includes both self-sovereign identity, an ideology to reclaim human dignity and authority in the digital world and an emerging suite of applications designed to enable that movement, and other new digital-identity systems. We must heed the grave lessons of history as we look forward, to ensure that digital identity technology of all types is never used in a similar genocide.
Impeccable Identity: Lentz’s Tragic Legacy
The story of identity in the Netherlands during World War II is still writ on the landscape of Amsterdam today, at the National Holocaust Names Monument, where Prime Minister Rutte gave his famous speech in 2020, and a few blocks away at the intersection of Plantage Kerklaan and Plantage Middenlaan, where the municipal building that held the records of the region’s population once lay.
Today, there’s nothing special about the structure that once housed the records building, other than a somber plaque affixed to a wall. It reads, “27 Maart 1943: Vernieling Bevolkingsregister”. March 27, 1943: Destruction of the population register. Therein lies the crux of the story of how the Netherlands recorded too much identity information, and how they made one last attempt to destroy it after the Nazis gained control of the country.
That story begins, as Prime Minister Rutte acknowledged, in the Dutch Civil Service, which the Nazis recognized as “Germanic” in its orderly thoroughness. So much so that they left the cadres of Dutch civil servants alone to run things, with Germanic leadership installed above them. In particular, the story begins with one operational functionary by the name of Jacobus L. Lentz.
In 1932, Lentz was appointed the head of the National Inspectorate of Population Registers in The Netherlands. This was a critical post in the 1930s because of the Great Depression: the Inspectorate ensured that all citizens had equitable access to basic services. Thanks to its efficiency, the small nation fared better than even the greatest European powers during the 1930s, helping to keep its citizens out of the poverty that was blanketing the globe. Looking to ensure that this happy efficiency was maintained and even bettered, the Dutch government tasked Lentz with promoting yet greater order by bringing consistency and uniformity to population registers throughout the country.
Lentz was thus instrumental in what followed. By 1936, a decree required that every resident in the Netherlands must have a personal identity card, one copy to be carried on their person and a duplicate to be filed in the civil archives. These archives also contained a cornucopia of personally identifying information, including gender, race, ethnicity, occupation, residence, familial relations, and religion. They were centralized in a single office in each officially recognized region of the Netherlands, all of which used the same systems so that the data was interoperable, ensuring that Lentz’s data was useful in civil governance and planning.
Lentz meticulously rationalized, standardized, and organized the Netherlands’ records held in that building at the intersection of Plantage Kerklaan and Plantage Middenlaan. Thanks to him, the comprehensive efficiency of the archives, which even incorporated special filing cabinets of his own patented design aimed at expediting searches and cross-checks, made it possible for the Netherlands to provide amply and justly for its citizens during the depths of the greatest economic crisis in modern world history. This aligned with Lentz’s vision of creating “the paper man.”5 His work was recognized at the highest level by a royal award bestowed on him by no less a figure than the nation’s queen.
The thoroughness and accessibility of the centralized registries throughout the Netherlands made them high-priority targets for capture by the Nazi invaders following their occupation of the Netherlands in May 1940. The Nazis understood their immense value in accelerating the hunt for Jews and other “undesirables”. Lentz himself was recognized as an especially valuable local asset. No sooner did the Dutch government capitulate than occupation authorities asked him to create a comprehensive national personal identity card that was extremely difficult to alter or forge. For Lentz, this, in fact, had been a pet project for years, one his superiors had consistently resisted as being “un-Dutch.” Now he was being asked to do it! He tackled the project with remarkable enthusiasm.
Soon, Lentz had cards that could be compared against the corresponding files in a central civil registry, which he had redesigned for accuracy and access. Even if a card was forged, the registry was thus a backup. By September 1941, these cards and files included a thorough census of Dutch Jews, each of whom now carried a card emblazoned with a large letter J.
On the night of March 27-28, 1943, resistance operatives, some of whose names are inscribed on that plaque, disguised themselves as local police officers and, in a daring operation, attempted to burn down the registry and the files it housed. Brilliantly planned and executed though it was, the attack fell short of achieving its objective. To be sure, some 800,000 identity cards fell victim to the flames and flood, but that destruction only amounted to 15 percent of the registry’s records. It was soon back in business, and the Jewish genocide ramped up.
Lentz was arrested by Netherlands police in May 1945 for his collaboration with the Nazis and eventually sentenced to three years in prison. But it was far too late. This perfection of record keeping, this creation of a “paper man” who could be cross-referenced through a closely held certificate and a central registry, was core to the high rate of murder of the Netherlands’ Jews, which Prime Minister Rutte apologized for on that cold winter day in 2020.
This dark chapter should serve as a stark warning for our work on self-sovereign identity — as should the fact that things can be very different, as evidenced by a similar situation in France with a very different conclusion.
Subversive Systems: Carmille’s Quiet Resistance
In France, the story instead begins with René Carmille, an engineer, a World War I officer, and a spy for France’s Deuxième Bureau during the First World War. His own move toward the registration of identity came not because of the Great Depression, but instead because of the needs of the military.
Carmille’s work was founded on early punch-card technology, something that had also been used in The Netherlands, but was more central to Carmille’s work in France. By 1935, he was developing a registry for the French army that was to be used for conscription and mobilization. Carmille proposed a twelve-digit personal registration number as the core of this registry (which later became 13 digits after the occupation and division of France). This ID could be used to determine a person’s date of birth and place of birth, as well as a “complete personal profile”6, which included details on professional skills as well as other attributes.
By 1940, the Nazis were attempting to produce censuses of the Jews in France, but they were facing notable obstacles. Prime among them was the fact that the government had not polled on the question of religion since 1872 and had no interest in doing so now. There also weren’t sufficient electronic tabulators to catalogue the results from France’s immense population: France instead depended on Remington typewriters or even pen and paper for much of their tabulation.
Enter René Carmille. In November 1940, he created France’s “Demographic Service in Vichy”, opened offices on both sides of the occupation border running down the middle of France, contracted for 36 million Francs worth of tabulating machines, and announced plans for a new census of French citizens. He would replace the “anarchic” methodology that France had previously used to record census data with something more modern; afterward, French citizens would have to carry “uniform identity cards”, which linked to precise details on vocational expertise. Carmille’s new census would also reverse France’s long-standing omission of religious data by including data in “column 11” that would require participating Jews to not only report their own religion, but also that of their grandparents. Like Lentz, Carmille ultimately had his own version of a paper man, saying: “We are no longer dealing with general censuses, but we are really following individuals.”7
It appeared to be exactly what the Nazis were seeking.
If Carmille had ever properly tabulated this data, the results likely would have been similar to those in the Netherlands: identity turned to discrimination and ultimately genocide. But, that didn’t happen. Instead, Carmille purposefully programmed his machines to never punch data for column 11 and hid more than 100,000 punched cards of Jews in his office. Call it an early form of data minimization and selective disclosure8. Carmille ensured that the personal data most likely to harm people was not made available to the people most likely to use it for harm.
There is some disagreement about Carmille’s role in the Vichy government and what damage he might have done there. However, the historical record seems to support that he continued to be a counter-intelligence officer and that he purposefully sabotaged the census of the Jews while simultaneously using his data collection apparatus to compile a listing of 800,000 French soldiers ready to rise up against the German occupation, 300,000 of whom were ready for near-instantaneous mobilization. Which was exactly what happened: on December 5, 1942, French troops captured the French National Statistics Service office in Algiers and used Carmille’s data to mobilize thousands of French troops.
René Carmille was arrested by the Nazis in February 1944, interrogated by a Nazi torturer, and sent to Dachau, where he died in January 1945. Though his work may have saved the lives of tens or even hundreds of thousands of Jews in France, it required sacrifice.
Once we collect identity data, its use is almost inevitable: pushing against that tide is the work of giants.
Two Models, One Truth
Self-sovereign identity has a dual nature.
On one hand, there exists a clear need for more defined identity. This was the main focus for self-sovereign identity when it was first discussed at RWOT2 and ID2020 in 2016. Then, one of our primary use cases was the stateless refugee who could be denied access to government services. It echoed (albeit with better foresight) the challenge that Jacobus Lentz faced with migration in the 1930s, when he put together a system to make sure all Dutch could have access to civic support during the Great Depression.
Conversely, there’s a call for minimizing identity data. This asks a crucial question that’s of both historical and ethical importance: What is the minimum identifying data access necessary to give you the right to be able to do things without unduly impinging on your privacy, dignity, and entitlement to respect? It builds on the truth that René Carmille discovered as he purposefully excluded religious information from his census: personal information can be dangerous, damaging, and even deadly.
Often, we share our personal details and other identity information, erring on the side of oversharing, grounded in our current trust toward the data collector. The contrasting approaches of Lentz and Carmille demonstrate why that isn’t sufficient: the regime can always change. The intended use of data can be easily misinterpreted or manipulated. Lentz’s Great Depression data could be used by the Nazis in their genocide of Dutch Jews; and what the Nazis thought was their French census data could be used by Carmille to raise an army. And this isn’t just a historic concern: under Trump’s administration, there were similar attempts to repurpose the voluntary registrations of “Dreamers” from the previous administration, threatening residents who had entered the country illegally as children with deportation.
Ultimately, data will be used to the fullest extent that it can be and it may be used for the worst purposes possible, entirely at odds with the original purpose of the collection. As architects of the next generation of self-sovereign identity systems, we must prioritize user empowerment, enabling each individual to fully control their identity. We have to think about strategies of data minimization and selective disclosure. We have to consider what data needs to be collected, and what does not. We do not want to make a new paper (electronic) man.
We must instead remember the past when identity was weaponized and six million and more died as a result. But we need to operationalize that remembrance by transforming it from reflection into a vision for the present and the future. Call it remembering forward. Call it foremembrance.
Conclusion
The time for action is now. The dreams of self-sovereign identity that were first imagined at RWOT2 and ID2020 have been made realities by standards such as DIDs9 and Verifiable Credentials10. Numerous companies have emerged to support these standards, while governments are beginning to adopt them. Simultaneously, the European Union is in the process of rolling out eIDAS11 as an ecosystem of electronic identity. We are at a tipping point where the standards for future identity will be set in the days, months, and scant years ahead.
At this crossroads, we could go the way of recording too much information. We could create great honeypots of data for use or abuse by future regimes, or even by criminals. By recording our gender, our sexual orientation, our religion, our political affiliation, or even just our favorite books, movies, and songs, we could open ourselves up to future discrimination or worse. This would be the way of Lentz.
But there’s another path, one that normalizes the minimization of information. As self-sovereign identity designers, we must ask if we are protecting our users; we can do so by following the original goals of self-sovereign identity, by allowing the person represented by an identity to decide what information goes out. Simultaneously, we can do our best to influence the design of eIDAS and other more centralized systems to similarly adopt rules of data minimization and selective disclosure. This would be the way of Carmille.
We are at a crucial crossroads in the design of digital identity. When the next history books are written, we must be Carmille, not Lentz.
Footnotes
Uncredited. 2020. “Prime Minister of the Netherlands Issues Historic Apology”. International Holocaust Remembrance Alliance. https://www.holocaustremembrance.com/news-archive/prime-minister-netherlands-issues-historic-apology. ↩
Simons, Marlise. 1995. “Chirac Affirms France’s Guilt in Fate of Jews”. New York Times. https://www.nytimes.com/1995/07/17/world/chirac-affirms-france-s-guilt-in-fate-of-jews.html. ↩
Faiola, Anthony, Emily Rauhala, and Loveday Morris. 2023. “Dutch Election Shows Far Right Rising and Reshaping Europe”. Washington Post. https://www.washingtonpost.com/world/2023/11/25/europe-far-right-netherlands-election/. ↩
Beauchamp, Zack. 2021. “How hatred of gay people became a key plank in Hungary’s authoritarian turn”. Vox. https://www.vox.com/22547228/hungary-orban-lgbt-law-pedophilia-authoritarian. ↩
Rood, Juriën. 2022. Lentz, msp. 45. English translation of the original Dutch manuscript of a book in progress. ↩
Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation, p. 321. ↩
Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation, p. 323-324. ↩
Allen, Christopher. 2023. Musings of a Trust Architect: Data Minimization & Selective Disclosure. https://www.blockchaincommons.com/musings/musings-data-minimization/ ↩
W3C. 2022. DIDs v1.0. https://www.w3.org/TR/did-core/. ↩
W3C. 2022. Verifiable Credentials Data Model. https://www.w3.org/TR/vc-data-model/. ↩
European Commission. Retrieved 2023. eIDAS Regulation. https://digital-strategy.ec.europa.eu/en/policies/eidas-regulation. ↩
The voting period will be between Tuesday, November 28, 2023 and Tuesday, December 5, 2023, once the 45-day review of the specifications has been completed. The Shared Signals working group page is https://openid.net/wg/sharedsignals/. If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. The vote will be conducted at https://openid.net/foundation/members/polls/322.
– Marie Jordan, OpenID Foundation Secretary
The post Notice of Vote for Proposed Implementer’s Draft of Shared Signals Framework Specification first appeared on OpenID Foundation.
Elastos’ groundbreaking Elastic Consensus incorporates three core technologies: Auxiliary Proof of Work (AuxPoW), Bonded Proof of Stake (BPoS), and Proof of Integrity (PoI). To celebrate our robust blockchain framework, recent partnership with Staking Rewards, and the Elastos Growth team’s participation at the Staking Summit, we have collaborated with Morfyus—a blockchain-based social network and job platform—to commission three unique, limited edition Elastic Consensus NFTs. Additionally, we’ve infused these NFTs with special utility features! To recap, Elastos’ Elastic Consensus is a revolutionary blend of three synergistic mechanisms:
Auxiliary Proof of Work (AuxPoW): Leverages the hash power of Bitcoin miners to help secure the Elastos main chain at no extra cost.
Bonded Proof of Stake (BPoS): Allows stakeholders to lock ELA and gain voting rights, based on which BPoS nodes work with Bitcoin miners to validate transactions.
Proof of Integrity (PoI): Establishes a democratic governance model, with community-elected council members validating Elastos’ sidechains and governance proposals.
The transition from DPoS to BPoS this year amplified the network’s security and participatory governance, making it a great opportunity to become a validator with Elastos’ Elastic Consensus, aligning with the ethos of blockchain democracy.
Celebrate with Elastic Consensus NFTs!
To celebrate, we created three distinct Elastic Consensus NFTs, each representing a pillar of our consensus mechanism and granting specific utilities.
Launch Date: LIVE!
Scarcity: 264 NFTs in total, 88 of each.
Price: 20 ELA each
Blockchain: Elastos Smart Chain (ESC)
Elastic Consensus NFTs Utility
1) AuxPoW NFT: Claim and stake this NFT to earn Glide tokens until February 15th on Elastos’ DEX, Glide.
2) BPoS NFT: Claim a 50% discount on validator support services from the Elastos node provider, Elasafe.
3) Proof of Integrity (PoI) NFT: Claim a 20% discount on Elacity’s generative AI service, Flint.
Exclusive Bonus!
Holders of all three NFTs receive the benefits of each as well as the ability to earn additional ELA: all primary and secondary sales go towards daily ELA rewards, which can be claimed by those who hold all three NFTs. This campaign will run until February 15th. You will be able to see eligibility and claim daily ELA rewards in the Elastic Consensus dashboard here.
If you’d like to purchase a specific NFT, you can do so from Elacity’s marketplace with buy now, auction and offer features. The contracts for trading can be found here:
How to Participate
To mint, simply head over to the official mint page. Sign in with your decentralised identity and click the mint button. If you don’t have ELA on the Elastos Smart Chain to purchase an NFT, you can follow this guide.
Elastos is revolutionising digital communication and asset exchange by establishing a secure, trustless, and decentralised Internet platform. Launch into the future of our blockchain with these fun Elastic Consensus Celebratory NFTs—where innovation meets tangible utility.
Mint your Elastic Consensus Celebratory NFTs Here!
The post Elastic Consensus NFTs: A Celebration of Innovation and Utility appeared first on Elastos.
A roundup of the most commonly asked questions on our Discord and Forum. If you have questions for the team please ask them on our Forum.
General FAQ
What is Ceramic?
Ceramic is a shared data network for managing verifiable data at scale, providing the trust of a blockchain with the flexibility of an event streaming system. Thousands of developers use it to manage reputation data, store attestations, log user activity, and build novel data infrastructure. Ceramic frees developers from the constraints of traditional data infrastructure, enabling them to tap into a shared data ecosystem and network effects, so they can focus on bringing their unique vision to life.
What is ComposeDB?
ComposeDB, built on Ceramic, is a decentralized graph database that uses GraphQL to offer developers a familiar interface for interacting with data stored on Ceramic. This enables seamless data model publication, discovery, and reuse, fostering a composable data ecosystem.
What is the difference between Ceramic and ComposeDB?
ComposeDB provides a graph structure for interacting with data on the Ceramic network (i.e. native indexing). The nodes in the graph are ‘accounts’ or ‘documents’, each possessing a globally unique ID, while ‘edges’ are queryable relationships. You can read more about these concepts in our docs.
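To make the “nodes and edges” framing concrete, here is a minimal TypeScript sketch of querying ComposeDB with its JavaScript client. The Profile model, its displayName field, the generated definition path, and the local node URL are illustrative assumptions, not anything shipped with ComposeDB.

```typescript
import { ComposeClient } from '@composedb/client'
// Runtime composite compiled beforehand from your models (hypothetical path)
import { definition } from './__generated__/definition.js'

const compose = new ComposeClient({
  ceramic: 'http://localhost:7007', // your Ceramic node (assumption)
  definition,
})

// Each model compiled into the composite gets an index query such as `profileIndex`,
// returning document nodes connected by queryable edges.
// (Top-level await assumes an ES module context.)
const result = await compose.executeQuery(`
  query {
    profileIndex(first: 5) {
      edges {
        node {
          id
          displayName
        }
      }
    }
  }
`)
console.log(result.data)
```

The generated GraphQL schema is what gives the network its graph-database feel: you query documents and follow relationships rather than replaying a raw event log.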
Why did we choose GraphQL for ComposeDB?
We achieve interoperability with a wide ecosystem of existing programming environments by allowing GraphQL queries over JSON documents.
What is the difference between Ceramic and The Graph?
The Graph is a service for indexing and querying data on L1 and L2 blockchains. You do not write data to The Graph; you write data to the blockchain and then query it with The Graph.
Ceramic is more like a decentralized version of a traditional database (like Postgres or MongoDB). It’s optimized for writes and speed. It provides its own built-in indexing and querying functionality (via ComposeDB).
The Graph is good for querying large amounts of data from blockchains. Ceramic is good for storing, managing and querying large amounts of data.
What makes Ceramic a better choice than alternatives? What’s the difference between Ceramic and a traditional blockchain?
Unlike financial asset blockchains that rely on slow and unscalable synchronous global state changes, Ceramic uses fast, asynchronous, parallel transaction processing and eventual consistency to enable the scale and throughput needed by large data applications.
Current blockchain protocols cannot scale to support the level of processing needed for data applications, for the simple fact that there are so many data events occurring all day, every day, in every application. That’s why Ceramic’s data consensus properties can be treated differently than value consensus properties, like those you’d find on Bitcoin or an L2. Ceramic supports data that changes (i.e. mutable) with fast, decentralized, high-volume transaction processing.
What’s the difference between using Ceramic to store and query data and storing the data on the blockchain?
Ceramic shares many similarities with a blockchain, while simultaneously offering features that optimize for low-cost mutable data at scale, thus underlining key differences. Below are some similarities and differences:
Accounts: Ceramic uses the Decentralized Identifier standard for user accounts (DID PKH and Key DID are supported in production). Similar to blockchains, they require no centralized party or registry. Additionally, both PKH DIDs and Key DIDs ultimately rely on public key infrastructure (PKH DIDs enable blockchain accounts to sign, authorize, and authenticate transactions, while Key DIDs expand cryptographic public keys into a DID document).
Signing Transactions: data in Ceramic (organized into streams) supports CACAO (Chain Agnostic Capability Object), which lets one account authorize another account to construct signatures over limited data on its behalf. Therefore, unlike manually signing individual transactions on a blockchain, Ceramic allows accounts to cryptographically authorize multiple writes in a given window, which is ideal for settings in which a database would be used to support user data that’s meant to change over time. These transactions also occur at no cost to individual users, which is yet another key difference between writing data to Ceramic vs. a blockchain (a minimal sketch of this session-based authorization follows this list).
Immutability: streams in Ceramic are organized into immutable event logs, with each commit representing an unchangeable, tamper-evident snapshot of a piece of data. Similar to how blockchains act as an immutable ledger revealing a history of the deterministic outputs of accounts transacting with other accounts and smart contracts, event logs in Ceramic give developers insight into both the immutable data provenance and lineage of data relevant to their users.
Querying: while developers can leverage libraries like Web3.js to read past transaction history or actively listen to transactions across accounts and contracts, blockchains (like Ethereum) do not offer native network support for querying or indexing blockchain data. This should come as no surprise - they are not designed to do this. Conversely, Ceramic is specifically designed to provide simple structure and open access to data events as they occur in the network, while simultaneously offering credibly decentralized qualities.
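As a concrete illustration of the session-based authorization mentioned under “Signing Transactions” above, here is a minimal TypeScript sketch using the did-session library referenced later in this FAQ. The browser wallet setup and the resource scope shown are assumptions rather than a prescribed configuration.

```typescript
import { DIDSession } from 'did-session'
import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'

// Ask the user's browser wallet (e.g. MetaMask) for an account
const ethProvider = (window as any).ethereum
const accounts = await ethProvider.request({ method: 'eth_requestAccounts' })
const accountId = await getAccountId(ethProvider, accounts[0])

// One wallet signature produces a CACAO that delegates scoped, time-limited
// write access to a session key, so later writes need no further prompts.
const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)
const session = await DIDSession.authorize(authMethod, {
  resources: ['ceramic://*'], // scope of the delegated capability (assumption)
})

// session.did can now sign Ceramic/ComposeDB writes on the user's behalf,
// e.g. compose.setDID(session.did)
```

The design point is that the user approves a session once, rather than approving every individual write the way they would approve every blockchain transaction.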
Ceramic’s underlying data structure (a self-certifying event log) also allows for endless database “flavors” to be built on top to further improve the developer experience. ComposeDB, for example, offers native GraphQL support, automatically splits read/write load capability, and allows developers to index on data “families” (or model instance documents related to preexisting schema definitions) across the network.
How much does it cost to use Ceramic Network?
At the moment, the only costs associated with building on Ceramic are the costs of running a Ceramic node on the cloud provider of your choice. The protocol doesn’t charge a direct fee, nor are there any explicit costs to store and load data from the network. This fee structure is subject to change as the Ceramic community may decide to support crypto-economic incentives and rewards.
Which blockchains does Ceramic support?
The standard use of SIWX, CACAO and DID:PKH allows anyone to implement support for another blockchain or account type to authenticate and authorize writes to the Ceramic Network. Additionally, a few standard interfaces enable you to implement an auth and verification library that allows anyone to use it with did-session, the primary library for using DID-based accounts with Ceramic. There are just a few steps you have to take, outlined here.
Ceramic does not currently have a native token.
Technical FAQ
Do I have to run a node if I want to use ComposeDB?
It depends on your use case. If you just want to play around to learn and understand ComposeDB, it is not necessary to run a node. You can use our ComposeDB Sandbox or Wheel (which can run in memory). For development purposes, you can either use a third-party node operator, such as hirenodes.io, or run your own node (follow our guide here).
Is running a node a public, permissionless thing or does one need to sign up for it?
Running a node is permissionless! However, to anchor commits to mainnet, it is necessary to register your node’s DID with the Ceramic Anchor Service.
What are the benefits of running a node?
You have control over the performance and costs of your own data without relying on others.
What operating system do I need to run a Ceramic node?
Currently, Ceramic nodes use Node.js, so any system that can run Node.js may be used. Ceramic nodes are most often run on Linux servers but have also been run on macOS. To run on Windows, we recommend using WSL2 (Windows Subsystem for Linux).
Is there an official RPC that can be used directly for Ceramic?
When used in a Web3 context, an RPC (Remote Procedure Call) node consists of a server that generally runs a protocol’s client software. An RPC endpoint is the network location that an application can access to perform requests (for example, a dApp retrieving blockchain data for its users). The Ceramic Network does not currently offer hosted RPC node-running (and corresponding community endpoints) as a service. Instead, for both decentralization and performance purposes, we encourage developers to either run their own node or work with a node-running service (such as Hirenodes) to set up performant and available endpoints for their applications.
What is a DID?
DID stands for Decentralized Identifier, which is a W3C standard. Whenever a document is updated, it is signed by the user's DID, so that the ownership and provenance of all writes is verifiable.
What’s the “admin DID” for?
It’s the identity required on the “server” side of ComposeDB to set up a composite for your node, which instructs your node which Models it should index. You can write data into your Ceramic node with any DID, but the admin DID is the only one able to change the set of Models used by the node.
What’s the difference between the “node DID” and the “admin DID”?
The node DID is used for anchoring and the admin DID is used to deploy composites.
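For context, an admin DID is commonly a Key DID derived from a secret 32-byte seed and attached to the HTTP client that talks to your node. The sketch below shows that pattern in TypeScript; the ADMIN_SEED environment variable and the localhost endpoint are assumptions.

```typescript
import { CeramicClient } from '@ceramicnetwork/http-client'
import { DID } from 'dids'
import { Ed25519Provider } from 'key-did-provider-ed25519'
import { getResolver } from 'key-did-resolver'
import { fromString } from 'uint8arrays/from-string'

// 32-byte seed kept secret on the server side (hypothetical env variable)
const seed = fromString(process.env.ADMIN_SEED!, 'base16')

const did = new DID({
  provider: new Ed25519Provider(seed),
  resolver: getResolver(),
})
await did.authenticate() // (top-level await: ES module context assumed)

// Attach the admin DID to the node client; per the answer above, this is the
// DID allowed to change which Models the node indexes.
const ceramic = new CeramicClient('http://localhost:7007')
ceramic.did = did
```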
Can I use an Ethereum address as a DID?
Yes, you can! You can use Ethereum as well as Solana addresses via the did:pkh DID method. Check this link for details.
Why is ComposeDB a graph database?
ComposeDB is a graph database because the documents within it can reference other documents and form a graph, using GraphQL to query that relationship graph. You should reference the supported scalars.
Who can modify the data?
Each document in ComposeDB is represented by a Ceramic stream that has a single DID that is the controller. The controller DID is the only one allowed to write data into that stream, and updates to the stream must be signed by the controller DID’s private key.
Is the data on Ceramic Network public or private?
All the data on Ceramic (including ComposeDB) is public by default. One can encrypt the data before putting it onto the network. You can follow the Encrypted Data tutorial for one possible method, and you can also utilize the integration between Ceramic and Lit Protocol.
Can data be deleted on Ceramic Network?
All data written to Ceramic are signed payloads that are organized in the core data structure (a self-certifying event log) that the protocol relies on.
This structure relies on IPLD content addressing to form an immutable log (on top of which applications, or databases like ComposeDB, can be built). Ceramic nodes also encompass several layers of the libp2p stack to help communicate stream updates in the network.
It’s important to understand how the services described above play an important role in answering the question of whether data on Ceramic can be deleted. While developers can pin streams on their node to prevent Ceramic’s default garbage collection behavior (and unpin, or discontinue indexing on models if using ComposeDB), data in its raw format may still be accessible within the IPFS network (though, if not pinned by any Ceramic nodes, it will effectively be “deleted” from the Ceramic network).
Do I need my private key to create a composite on ComposeDB?
Yes! While creating a composite, you sign some bytes with your private key. The signature becomes part of the composite. Don’t worry, private keys are not leaked through composites. We encourage making composites public.
Can you transfer the ownership of a stream on Ceramic Network?
Ceramic uses a DID as the controller of the stream. This means that if you use a DID that can be transferred, you can transfer ownership by transferring the DID. Most of the time you will not need to transfer ownership, but rather sign a CACAO to grant some device or service the capabilities that you wish to delegate.
How can I incorporate ComposeDB into my dApp?
We have a starter application here.
What is a CACAO?
Chain Agnostic Capability Object.
Company FAQ
What is the relationship between 3Box Labs and Ceramic?
While 3Box Labs is the original inventor of the Ceramic Network and has been integral to bootstrapping the Ceramic ecosystem, we're committed to migrating Ceramic to an independently-governed foundation and stepping back to play a reduced role over time. In a way, 3Box Labs is just like any other contributor to an early-stage protocol.
What is the corporate structure of 3Box Labs?
3Box Labs is a US corporation registered in Delaware with employees and contractors located across the US and Europe.
How will Ceramic be governed?
Ceramic is already 100% open source, peer-to-peer software, and anyone can participate in the network by simply running a Ceramic node. Formal governance of the network will progressively decentralize over time, eventually exiting to the community and a standalone foundation.
💡 Have a question that’s not here? Check out our Forum to either search or post your question!
Canada’s trusted digital ID leader, the DIACC, releases its Pan-Canadian Trust Framework (PCTF) Trust Registries Final Recommendation V1.0, signalling it’s ready for inclusion in their Certification Program.
Why is the PCTF Trust Registries component important?
A Trust Registry is a critical component of the new and emerging decentralized digital identity architecture, playing a crucial role in establishing and maintaining trust in digital identity ecosystems, especially regarding identity verification and authentication. The Trust Registries component provides conformance criteria for the development and certification of Governance Requirements (business structure, ecosystem scope, governance processes, policies, and standards), Trust Registry Operations (technology & infrastructure management and technical services), and Registration and Certification Management (certification/verification/trustmark services, registration, suspension, and revocation processes), to ensure that a Trust Registry is, in fact, trustworthy.
What problems does the PCTF Trust Registries component solve?
Digital identity ecosystems and their associated Trust Registries use a Trust Framework (such as the PCTF) to define how ecosystem actors like Issuers, Verifiers, Holders, and Digital Wallets should or must operate to be considered trustworthy. As an essential requirement in the decentralized identity architecture, with its interconnection with many Holders, Digital Wallets, Issuers, Verifiers, and other Trust Registries, ecosystems must strive for interoperability (locally, regionally, and internationally). Adherence to standards and frameworks must be a prioritized goal for ecosystem governance organizations, which is reflected in many of the criteria of the Trust Registries component.
Who does the PCTF Trust Registries component help?
The PCTF Trust Registries component is a valuable resource for entities to use as guidance to develop their Trust Registries. Entities with Trust Registries currently in use can leverage the conformance criteria as a risk management tool to ensure their registry is as robust as possible. Having an organization’s Trust Registry certified through DIACC’s Voila Verified Certification Program sends a clear market signal that the certified Trust Registry is reliable, secure, and can be used confidently.
Find the PCTF Trust Registries component here.
The importance of trust in the field of Artificial Intelligence (AI) continues to dominate the headlines — whether it’s the challenges of managing intellectual property or the tendency of Large Language Models (LLMs) like ChatGPT to produce errors or “hallucinations.” Only a few days ago we saw Elon Musk enter this field with xAI’s Grok, and in an interview with Lex Fridman he highlighted the importance of receiving responses you can trust when interacting with AI products.
“…it (LLM) unfortunately hallucinates most when you least want it to hallucinate. When you’re asking the important and difficult questions that’s where it tends to be confidently wrong. So we’re really trying hard to say how do we be as grounded as possible so you can count on the results?”
The truth, however, is an elusive concept, especially when a single organization or product attempts to capture it. A better approach is through connectivity and transparency, achieved by leveraging multiple open source technologies. Turing Award winner Dr. Bob Metcalfe explained this idea, saying,
“… through connectivity, decentralized knowledge graphs, blockchains and AI are converging — and it’s an important convergence, because it is going to help us with one of the biggest problems we have nowadays, which is the truth.”
OriginTrail has been, since inception, committed to fighting one of truth’s greatest arch-nemeses — misinformation. It continues on this mission in the age of AI, where the misinformation problem is growing exponentially. The growing challenges, however, also offer a growing field of opportunities, which are being unlocked with the OriginTrail Decentralized Knowledge Graph (DKG) used as the foundation for a verifiable web.
The key building blocks of the verifiable web became available with V6 of the OriginTrail DKG, which enabled some of the most notable achievements of the Turing phase. These were showcased at the Decentralized Knowledge Graph conference (DKGcon) in early October and have to date produced around two million AI-ready Knowledge Assets.
DKGcon - Decentralized Knowledge Graph Conference
The Turing phase achievements also showed how the OriginTrail DKG acts as an enabler, rather than a competitor, to the AI systems created by Microsoft, Google and xAI. Through the ChatDKG framework initiative we have seen the first version of a Microsoft Copilot integration, with GoogleAI integrations and xAI enablement already being researched as part of the Metcalfe phase.
Enabler, not a competitor of the likes of Microsoft, Google, and xAI.
Metcalfe phase — growing knowledge by 100,000x
Metcalfe phase — fighting misinformation with decentralized AI
The Metcalfe phase of the updated roadmap pursues an ambitious goal of creating the world’s largest verifiable web for AI, consisting of 100 billion Knowledge Assets and bringing a 100,000x scalability increase. As the name of the phase suggests, it will seek to produce network effects across the OriginTrail DKG and use novel techniques to pursue autonomous DKG growth based on the genesis knowledge foundation being created by organizations and individuals alike. The Genesis part of the Metcalfe phase also introduces knowledge mining and knowledge signaling capabilities to drive constant growth of the Verifiable Web.
Following the Genesis stage and the transition to an AI-native V8, further capabilities will become available in the Convergence stage. By leveraging the advancements in AI and the DKG, we will unlock autonomous knowledge mining which in turn leads to an autonomous DKG. At that stage, new knowledge gets added to the DKG with very limited human involvement. Services such as AI agents will be performing knowledge inferencing directly on the DKG to find any “blind spots” in the knowledge they can fill as well as search for new knowledge and bring it to the DKG in accordance with knowledge signaling.
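For a sense of what a Knowledge Asset is in practice, here is a hedged TypeScript sketch of publishing one with the open-source dkg.js client. The node endpoint, blockchain identifier, wallet variables, and option names are assumptions based on the public documentation and may differ for your setup; treat this as an illustration, not OriginTrail's prescribed recipe.

```typescript
// Illustrative sketch using the open-source dkg.js client (details may differ).
import DKG from 'dkg.js'

const dkg = new DKG({
  endpoint: 'http://localhost', // your OriginTrail node (assumption)
  port: 8900,
  blockchain: {
    name: 'otp:2043', // chain identifier (assumption)
    publicKey: process.env.WALLET_PUBLIC_KEY,
    privateKey: process.env.WALLET_PRIVATE_KEY,
  },
})

// A Knowledge Asset is essentially linked data (JSON-LD) anchored on the DKG.
const content = {
  public: {
    '@context': 'https://schema.org',
    '@type': 'Product',
    name: 'Example product',
    description: 'Illustrative AI-ready Knowledge Asset',
  },
}

const asset = await dkg.asset.create(content, { epochsNum: 2 })
// The result typically includes the asset's UAL (Universal Asset Locator).
console.log(asset)
```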
First steps into the Metcalfe phase start today
The Metcalfe phase activities are already under way and today, two important updates become available in their beta versions:
OriginTrail World — the first version of the platform that will combine all relevant knowledge about OriginTrail technology and how to use it, including integration guidelines with other products on the market.
ChatDKG on X — the beta version of ChatDKG running on the X platform that you can use to prompt and receive responses based on the knowledge available in the DKG. The first version is currently ring-fenced to knowledge available in the OriginTrail World platform and the demo dataset of the chatAnalyst.ai product covering Tesla’s SEC filings and quarterly reports. ChatDKG on X starts by supporting 100 prompts daily and also comes with the possibility to subscribe for the first version of Knowledge Mining — all the details are available directly on the ChatDKG links.
Go ahead, tweet a question and tag @ChatDKG on X to test it out.
This is just the beginning of the exciting journey. Happy prompting!
About OriginTrail
OriginTrail is an ecosystem dedicated to making the global economy work sustainably by enabling a universe of AI-ready Knowledge Assets, allowing anyone to take part in trusted knowledge sharing. It leverages the open source Decentralized Knowledge Graph that connects physical and digital worlds in a single connected reality driving transparency and trust. Advanced knowledge graph technology currently powers trillion-dollar companies like Google and Facebook.
By reshaping it for Web3, the OriginTrail Decentralized Knowledge Graph provides a crucial fabric to link, verify, and value data on both physical and digital assets.
Web | Twitter | Facebook | Telegram | LinkedIn | GitHub | Discord
Initiating Metcalfe phase: Verifiable web for decentralized AI was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
The OASIS Board of Directors are integral to the organization's success. Read our Q&A to gain a better sense of who they are and why they serve the OASIS community.
Meet Anish Karmarkar, Ph.D., an accomplished software professional whose career spans over two decades in the computer software industry. Anish’s active involvement in diverse Working Groups, Technical Committees, and Expert Groups serves as a testament to his unwavering commitment to advancing innovation.
What can you tell us about your current role?
I’m a Senior Director in the Standards Strategy and Architecture team that reports directly to the Chief Corporate Architect at Oracle. We are responsible for overseeing our participation in technical standards and standards setting organizations across all business units and geographies. This involves coordinating with internal stakeholders in business units and working with developers, executives, and legal and policy professionals to create and execute coherent strategies.
My work also involves getting directly engaged in standards activities of strategic interests and representing Oracle in standards setting organizations at the managerial, governance, oversight, policy, and technical level. We serve as a point of contact for standards related matters and communication.
What inspired you to join the OASIS Board of Directors?
OASIS has a very important role to play in the standards and open source ecosystem and to that end it is important to have a strong, capable, and experienced Board of Directors to oversee it. I believe my experience and passion for standards and standards setting organizations allows me to contribute to OASIS’ continued success.
Additionally, my team at Oracle is very supportive of me taking this role, as Oracle values industry-driven consensus standards and has been a supporter of OASIS for decades, going back to the days of ODF, SAML, and XACML. OASIS standards are important to their products and to their customers. Oracle continues to participate in various cloud- and security-related Technical Committees (TC) such as CSAF and VIRTIO.
What has been your involvement at OASIS?
I became involved in OASIS during the heydays of XML standards. Along with W3C and OMG, this is where I cut my standards-teeth. I always liked the openness, the transparency, the people involved in OASIS, and the attitude of enabling everyone to bring their ideas forward. OASIS always had the principle of letting a thousand flowers bloom and allowing the market to decide what is adopted and what can achieve success.
I’ve been actively engaged in a technical capacity in OASIS since 2003. I’ve been a contributor to eleven different OASIS TCs, editor of several OASIS specifications, and co-chair of three TCs. I did some work on the Service Component Architecture (SCA) under the Open Composite Services Architecture (Open CSA) Member Section and was a member of the Open CSA’s Steering Committee. I also played a role in facilitating the transfer of work from an organization called Web Services Interoperability (WS-I). It involved transferring their specifications, IPR, and funds to OASIS for future development and maintenance. This was overseen by the OASIS WS-I Member Section and I served on its Steering Committee.
What types of skills/expertise do you bring to the OASIS Board?
More than 25 years ago I was involved in implementing standards from OSF/DCE and POSIX pthreads. Since then, I have been involved with standards development and implementation in a technical, managerial, governance, policy, and leadership capacity. In 2022, ANSI named me a recipient of the Meritorious Service Award in recognition of my record of significant contribution to voluntary standards. My experiences and perspectives as a contributor, author, editor, chair, member of various boards, and executive and leadership positions at different standards-setting organizations give me valuable expertise crucial for serving on the OASIS Board of Directors. As OASIS builds on its successes and charts its future for its fourth decade in the ever-changing world of IT, I hope my broad experience and perspective in standards along with my skills in collaboration strategies, enterprise software development, and architecture can contribute to the organization’s endeavors.
How do you hope to make an impact as a board member during your term?
I hope to leverage my experiences from other SSOs/SDOs in implementing general best practices to make OASIS an even better organization. I think OASIS is truly in a unique place because it bridges both open source and open standards. As a member of the OASIS Board Governance, Finance, and Process Committees I hope to have an impact on the organization by advocating for changes that I think would make OASIS a better place for collaborations on standards and open source.
The current Board, I’m glad to say, brings different perspectives and representatives to the table. From large companies to SMEs to start-ups, academia to industry, and from North America to Europe, we bring much-needed diverse experiences and points of view. I believe this brings together a complementary set of skills needed for fiduciary and strategic oversight of an organization like OASIS.
What excites you about OASIS and why are you passionate about its mission?
OASIS has the right processes in place and the right IPR policies, including the much-preferred royalty-free option. OASIS is a flexible SDO with light-weight processes that places minimal constraints on its TCs. OASIS allows ideas to flourish and lets implementers and industry decide what makes sense. If it gains traction it thrives, else it withers. That’s how it is supposed to work. There is no top-down planning or architecture that is imposed. We are fortunate to have an Executive Director, Francis Beland, who brings a wealth of diverse experiences. With Francis at the helm, we are presented with exciting opportunities to forge ahead and gain traction in new and different areas.
OASIS and its community are open, welcoming, and transparent—key aspects for successful collaborations. If you want standards with Open Source implementations it is important that the community and standards be open, transparent, and have no-cost and readily available specifications. At every stage in OASIS processes, specifications are freely and readily available with the ability for the public to comment. In addition, all the technical discussions, issues, and progress are made public. It is also important to point out that the entry point for initiating projects at OASIS is low and one can go from chartering a new TC to finalizing the standard in a short time, assuming that there is consensus in the community.
What are some reasons why companies, organizations, and individuals should bring their projects to OASIS?
Participating in OASIS offers a multitude of benefits. OASIS has pathways to get an OASIS standard to be an ISO, IEC, or an ITU standard, which is particularly valuable if you want regulations to be based on international standards. OASIS is an approved ISO/IEC JTC 1 PAS submitter organization, which means it has met all the criteria set by JTC 1. This allows OASIS to submit its standards to JTC 1 for approval by its members. Once approved, the standards get the ISO and IEC imprimatur.
If you look at the World Trade Organization (WTO)’s Technical Barriers to Trade (TBT) Agreement, it encourages international standards as a basis for regulation—that means the big “I” standards: ISO, IEC, and ITU. OASIS has the ability to make its standards an ISO standard, an IEC standard, and/or an ITU standard, making this a significant advantage of participating in OASIS activities.
As a thirty-year-old organization there is a lot to talk about. From ebXML, ODF, UBL, DocBook, XACML, and SAML, to the various current security-related standards and projects, OASIS is a mature and proven organization that allows both open standards and open source to work well together. This is not easy if you think about the agreements that you need to have in place, for example, to deal with IPR issues. OASIS allows one to do that through Open Projects. Add to that the openness, transparency, royalty-free IPR, and lightweight, developer-friendly processes, and you have an organization that should be on everyone’s list for collaboration.
Do you have an impact story regarding your work in open source or open standards, or work that you’ve done at OASIS?
While I’ve been on the OASIS Board since 2021, I got involved in OASIS in 2003 working on various XML-based Web services standards (WS-RM, WS-RX, WS-TX, WSRF and WSN). Around 2006 I got involved in the Service Component Architecture effort (SCA-Assembly, SCA-Policy, SCA-BPEL, SCA-J, and SCA-Bindings) including the related OpenCSA Member Section. I was also involved in bringing Web Services Interoperability work to OASIS and was involved in the WS-I BRSP work and its associated WS-I Member Section and its Steering Committee. In 2012, I was one of the proposers for the CAMP TC, which was an early attempt to bring standards to cloud computing. I ended up being its main editor and later its chair.
What trends or changes do you see in the industry that are most exciting?
Globally, various jurisdictions are worried about their citizens’ data and their location. There are concerns around security, privacy, and data locality. There is a possibility of fragmentation of approaches taken around the world. Standards can help address this and OASIS can position itself as a leader in this space. I think Francis Beland has some great ideas regarding OASIS’ future. There is a need for standards in several verticals, OASIS should target some of them; perhaps health care and health informatics could be a start. Globally we have a lot of challenges with respect to climate change and issues around ESG (environmental, social & governance) and AI. Several technologies could play a role in this space. For example, ensuring that supply chains adhere to specific ESG related standards is just one example among many.
Can you tell us about a role model you’ve had in your career?
There are far too many to list here and I’m afraid I’m going to miss mentioning someone. But all of them have taught me to become more curious. Some of them have taught me how to take the “forest view” and look at things strategically, especially as it applies to my role as a Board member. They have also taught me how to collaborate better, transforming it to a win-win situation, and more importantly how to make the world slightly better every day.
Best piece of advice that you’ve received so far?
It is always a group effort and one can achieve anything if they are willing to give the credit to someone else. As someone who focuses on collaboration, this advice has served me well.
What’s a fun fact about you?
I like to run and cross-country ski. I’ve run three marathons (so far) and several half-marathons. I find that running gives me time and space to relax and think. I like training for a race more than the actual race. My long weekend runs are just about unwinding, endorphins, and the pleasure of being outside and physically active after being in front of a screen all the time. One other fun fact about me is that I play a percussion instrument called the tabla, which is used in Indian classical music. I’m not very good at it, but it is great fun to listen to and to play.
The post OASIS Board Member Spotlight Series: Q&A with Anish Karmarkar, Ph.D. appeared first on OASIS Open.
Initial considerations for an electronic patient record (EPR) began around 2005, and the Federal Act on the Electronic Patient Record (EPDG in German) entered into force on April 15, 2023. Often criticized as a «PDF graveyard», the EPR held only around 20,000 records in mid-2023, or 2 per mille of the population, for a variety of reasons.
To improve this unsatisfactory situation, the Federal Office of Public Health (FOPH) conducted a consultation on a comprehensive revision of the EPDG from June to October 2023. Due to the political processes, an update of the EPDG is not expected until 2028 at the earliest.
Alongside many other organizations, DIDAS also submitted a statement as part of this consultation, the most important points of which are listed below:
Person centric in order to grant every citizen access to her or his complete health data
Technology neutrality in order to take into account future wallet-based solutions
Openness and standards to ensure the highest possible level of interoperability
Immediate measures to generate benefits before 2028
Official DIDAS statement
For the third time, the FIDO Alliance’s annual online authentication barometer provides insights into the global use and acceptance of various forms of authentication.
The post Security Insider: Consumers are demanding password alternatives appeared first on FIDO Alliance.
In a byline, FIDO Alliance executive director Andrew Shikiar discusses the recent cyberattacks on MGM Resorts International and Caesars Entertainment which showcased the widespread effects data breaches can have on an organization. These attacks, as well as so many other high-profile breaches over the past few years, happened because of continued reliance on legacy sign-in credentials like passwords and SMS one-time passcodes that can be easily given away and reused.
The post Dark Reading: MGM and Caesars Attacks Highlight Social Engineering Risks appeared first on FIDO Alliance.
Bitwarden has launched passkey management, enabling every user to create, manage, and store passkeys in their vaults. Users can now quickly and securely log into passkey-enabled websites through the Bitwarden web extension. The synchronized passkeys are encrypted in users’ vaults for a more convenient passwordless login experience.
The post helpnetsecurity: Bitwarden launches passkey management for passwordless authentication across accounts appeared first on FIDO Alliance.
In the latest Elastos Bi-Weekly update, the core development ecosystem has witnessed substantial advancements led by Trinity, Gelaxy and Elacity, reflecting a strong commitment to innovation and strategic collaboration.
The Trinity team’s role in advancing Elastos’ ecosystem is marked by their focus on enhancing user experience and security, as well as their dedication to resolving functional challenges across various projects such as the Elastos DID Web Service, KYC-me, the Essentials wallet and Carrier networking. In the Elastos DID web service, they have delivered a significant upgrade. This includes introducing the ability to unbind email accounts in the Security Center, facilitating the import and display of Verifiable Credentials, and improving the access key generation process. Key issues such as incorrect issuer displays in VCs and browser compatibility problems in the user activities list have been addressed, leading to ongoing updates in the DID web service’s UI/UX design. These improvements are now available in both the staging and production environments, and more information on the release will be announced soon.
The team’s efforts with the Web3Essentials wallet have been geared towards enhancing its reliability and functionality. This includes resolving the complications with NFT trading on OpenSea, updating the integration with kyc-me, and preparing for the release of version 3.0.13 of Web3Essentials, which includes fixes for several accumulated bugs.
In their work on the kyc-me initiative, the Trinity team has tackled the issue of the app icon not displaying correctly during Web3Essentials wallet logins and implemented a backend interface to clear cached user data. They have also boosted the OCR return parameter’s Confidence value to refine identification accuracy, especially in cases of blurry identity document photos. Trinity is continuously improving the kyc-me code, with a particular focus on ensuring its compatibility and effectiveness on desktop browsers. Moreover, under the Trinity Team’s direction, the Carrier service has experienced notable improvements and enhanced community support.
The Gelaxy Team has made significant strides in enhancing the Mainchain and Elastos Smart Chain (ESC) and Elastos ID (EID). For the Mainchain, they’ve nearly completed increasing the side chains’ gas price, optimised state transitions of BPoS nodes and CR members for better stability, and improved the main chain browser to support more detailed statistics. A notable update is the now-online rewards calculator on the main chain explorer.
In the ESC/EID domain, the team resolved an issue where the increased minimum Gas price limit on the ESC sidechain was not effective. They also began repairing the ESC browser to display intra-contract transactions accurately and addressed occasional block instability on the main network EID, ensuring smoother and more reliable blockchain operations.
For Elastos Runtime infrastructure, the Elacity team’s primary goal is to enhance both the security and the handling efficiency of media playback with DRM-encrypted video capsules. To achieve this, they have focused on fundamental aspects such as strengthening the interaction layers between modules, refining the process of Elliptic Curve Diffie-Hellman (ECDH) key generation, and incorporating robust AES-128-CBC encryption to protect media content. Their approach to debugging and playback optimisation, aiming for simplicity and effectiveness, has significantly improved media continuity and reliability.
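Elacity’s implementation lives in C and Go, but the general pattern described here, an ECDH key agreement feeding an AES-128-CBC content key, can be sketched with the standard Web Crypto API in TypeScript. This is purely illustrative and is not Elacity’s actual key derivation scheme.

```typescript
// Illustrative only: ECDH agreement deriving an AES-128-CBC key (Web Crypto API).
async function demo(): Promise<void> {
  // Each party generates an ECDH key pair on P-256
  const alice = await crypto.subtle.generateKey({ name: 'ECDH', namedCurve: 'P-256' }, false, ['deriveKey'])
  const bob = await crypto.subtle.generateKey({ name: 'ECDH', namedCurve: 'P-256' }, false, ['deriveKey'])

  // Alice derives a shared AES-128-CBC key from her private key and Bob's public key
  const sharedKey = await crypto.subtle.deriveKey(
    { name: 'ECDH', public: bob.publicKey },
    alice.privateKey,
    { name: 'AES-CBC', length: 128 },
    false,
    ['encrypt', 'decrypt'],
  )

  // Encrypt a media chunk with a fresh 16-byte IV
  const iv = crypto.getRandomValues(new Uint8Array(16))
  const chunk = new TextEncoder().encode('media payload')
  const ciphertext = await crypto.subtle.encrypt({ name: 'AES-CBC', iv }, sharedKey, chunk)
  console.log('ciphertext bytes:', new Uint8Array(ciphertext).byteLength)
}

demo()
```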
To further streamline media handling, the team has introduced on-demand media creation, allowing audio-only streams for a future music application, and achieved more precise synchronisation using Media Source Extensions (MSE), coupled with a thorough restructuring of the player codebase for enhanced organization and efficiency. The team’s efforts in updating the codebase and tools are evident in the integration of the latest remuxing techniques, upgrading to Go version 1.21 for WebAssembly System Interface (WASI) support, and delving into the potential of WebSocket integration for improved network handling.
Finally, the Elacity team has worked on refining the Elastos runtime’s frontend and playback experience. This includes bolstering the metadata infrastructure to better support frontend processing, methodically resolving memory leaks to enhance performance, and proactively tackling specific playback issues related to Chrome’s keyframe interpretation, all aimed at delivering a more streamlined and reliable user experience.
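For readers unfamiliar with Media Source Extensions, the playback flow referred to throughout this update looks roughly like the following TypeScript sketch. The codec string and segment URLs are placeholders, and the sourceBuffer.mode = 'segments' line corresponds to the temporary workaround noted in the changelog below.

```typescript
// Minimal Media Source Extensions (MSE) flow: append fMP4 segments to a <video>.
// MIME/codec string and segment URLs are placeholders, not Elacity's actual values.
const video = document.querySelector('video')!
const mediaSource = new MediaSource()
video.src = URL.createObjectURL(mediaSource)

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f, mp4a.40.2"')
  // 'segments' orders appended data by the timestamps inside each segment
  sourceBuffer.mode = 'segments'

  for (const url of ['/init.mp4', '/seg-1.m4s', '/seg-2.m4s']) {
    const data = await (await fetch(url)).arrayBuffer()
    await new Promise<void>((resolve) => {
      sourceBuffer.addEventListener('updateend', () => resolve(), { once: true })
      sourceBuffer.appendBuffer(data)
    })
  }
  mediaSource.endOfStream()
})
```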
DID Web Service
Implemented the feature in DID web service to support unbinding of email accounts in the Security Center.
Implement the functionality in the DID web-services Applications page for requesting to import/show related VCs (Verifiable Credentials) information.
Optimize the page for generating access keys in the DID web service.
Resolve the issue in the DID web service where the Issuer is incorrectly displayed when importing VCs.
Fix the issue in the DID web service where the imported Identity Root is not displayed on the import/export page.
Address compatibility issues in getting the browser name, as some browsers show undefined names in the user activities list.
Continue importing UI/UX designs to update the front-end implementation of the DID web service.
Update deployment in staging and production environments.
Essentials
Resolve the issue with NFT trading not being possible on OpenSea.
Update the integration of kyc-me.
Accumulated bug fixes.
Test and release version 3.0.13 of Web3Essentials.
Kyc-me
Resolve the issue where the kyc-me app icon is not displayed when logging in using the Web3Essentials wallet or DID web service.
Fix the issue where there is no pop-up prompt when the scanned identity document photo is placed too far or too close, preventing further verification steps.
The kyc-me service backend has added an interface to clear users’ cached data, supporting front-end clearance of cached user data through API calls.
Increase the OCR return parameter Confidence value to 0.93 to filter out some cases where individuals are incorrectly identified as non-document holders (though the final tests still occasionally result in incorrect identifications).
Update the kyc-me test environment for further testing and verification.
Optimize the kyc-me code implementation based on testing suggestions.
Begin assessing kyc-me support for desktop browsers.
Carrier
Community support and improvements based on community feedback.
Mainchain
Most of the work on the side chains’ gas price increase has been completed in relation to the main chain.
Optimize the state switching (from inactive to active) of BPoS nodes and CR members to improve node stability.
Optimization of the main chain browser to support transaction count, block count and other statistics.
The revenue calculator, which is now online on the main chain explorer, has been updated.
ESC/EID
Tweak and fix the problem where the increase in the minimum Gas price limit of the ESC sidechain does not take effect.
The cause of the problem that some of the ESC browser’s intra-contract transactions cannot be displayed has been investigated and the related repair work has been started.
Handled the problem of occasional block instability on the main network EID.
Runtime
Strengthened the security layer between modules, reinforcing the robustness of interactions and data exchange.
Initiated and refined the key generation process for Elliptic Curve Diffie-Hellman (ECDH), enhancing the cryptographic strength of communications.
Successfully implemented the key agreement flow, adjusting for operational consistency across C and Golang environments.
Integrated AES-128-CBC encryption method to secure media content, augmenting the overall security posture.
Conducted thorough debugging to isolate and rectify the origin of a persistent playback error, improving reliability.
Extracted and analyzed segment creation from legacy remuxing code, pinpointing the underlying bug within the refactoring efforts.
Devised a methodology for testing segmented media outside the WebAssembly (WASM) player, utilizing the Linux command line for direct Media Source Extensions (MSE) feeds.
Achieved seamless video playback using MSE, addressing previous challenges with media continuity.
The segmentation solution was incorporated into the latest WebAssembly player release to ensure compatibility and smooth playback, which relates to addressing playback optimization.
Adjusted the new segmentation logic to ensure the seeking functionality works correctly, addressing issues from the last implementation.
Refined the main codebase to support on-demand media creation, enabling audio-only, video-only, and combined streams.
Ensured broad compatibility with diverse operating environments, notably WASM and browsers like Google Chrome.
Successfully streamed segmented video and audio through Media Source Extensions (MSE), demonstrating improved handling and synchronization.
Refactored the player codebase, moving utility functions to separate files for better organization and readability. This could be seen as enhancing media handling by improving the code structure that supports it.
Adjusted the player’s data flow to enhance performance and address previous issues, directly relating to handling media streams more efficiently.
Integrated the latest remuxing improvements to enhance browser buffering capabilities and optimize MSE performance.
Visualized and analyzed timestamp discrepancies to troubleshoot and rectify Chrome-specific playback anomalies.
Merged cutting-edge remuxing developments into the de/remux branch, leading to better segment buffering strategies.
Updated Go to version 1.21, enabling WebAssembly System Interface (WASI) support, which aligns with the latest web development standards.
Researched the conversion of POSIX sockets to WebSockets and developed a better understanding of network handling and proxying techniques. This update signifies an improvement to the infrastructure supporting the media player.
Attempted manually setting up a WebSocket server to test and possibly integrate into the player’s networking layer. This falls under updating the tooling to enhance the networking aspect of the media player.
Enhanced the metadata infrastructure, incorporating mime codec information to aid frontend processing.
Embedded and operationalized WebAssembly (WASM) code directly within the browser, fortifying the application’s playback capabilities.
Addressed and resolved a critical memory leak, which significantly improved playback performance and resource efficiency.
Identified a Chrome-specific issue with keyframe interpretation, with ongoing efforts to understand the rejection of particular keyframes.
Found a temporary workaround by setting sourceBuffer.mode to “segments,” although a more permanent resolution is being sought within the backend remuxing flow.
The post Elastos Bi-Weekly Update – Nov 13, 2023 appeared first on Elastos.
It’s time for another listener email episode of The Identity at the Center Podcast! We dive into thought-provoking questions from listeners around the world covering topics like integrating IAM with legacy systems, emerging trends in IAM, and the role of artificial intelligence in IAM. Tune in to idacpodcast.com or your podcast app to hear episode #246.
Read Elacity’s whitepaper here
In a significant milestone for the digital asset community, Elacity has officially released its much-anticipated whitepaper “The Access Economy in Web3”, marking a new chapter in the management and monetisation of non-financial digital assets. This release not only demonstrates Elacity’s innovative approach to digital rights management but also highlights its strategic incorporation of Elastos technology.
Elacity Founder Sash stated, “In 2018, I created a basic graphic that stated ‘Elacity – peer-to-peer digital marketplace’. First came our NFT marketplace supporting art markets, and for over the last year and a half, we’ve been engineering our upcoming access economy innovation, setting the stage to revolutionise how digital rights and assets are managed online using Elastos SmartWeb technology. This is completely custom-built and a framework which can be expanded to grow markets for all types of digital assets in the years ahead. I appreciate everyone who has believed in our team and been with us to date. Our upcoming MVP will be released in December, and it will open a new door for supporting online user-owned markets. We will continue to expand business models, digital assets, and integrate partners to drive markets. I encourage everyone to understand what we’ve engineered and what’s to come”.
At its heart, Elacity’s Access Economy mission is to revolutionise how digital assets are accessed, traded, and monetised. The fundamental question they address is: How can we ensure the security, scarcity, and value of digital assets in an increasingly digital world? By reimagining digital assets like audio, video or software as secure, tradable, and exclusive ‘Digital Capsules’, Elacity answers this by providing creators and asset owners with unprecedented control and monetisation opportunities, powered by Elastos’ SmartWeb technology.
The whitepaper provides a comprehensive view of Elacity’s Access Economy Protocol (AEP). More than a technical document, it is a blueprint for a new digital economy where access equates to ownership and every creator or asset owner is empowered to participate in a fair, global marketplace.
As we celebrate this significant release, we invite you to delve deeper into the Elacity ecosystem by reading the whitepaper. Whether you’re a content creator, a digital asset owner, or someone interested in the future of digital rights management, this whitepaper offers valuable insights into a world where digital asset management is secure, equitable, and user-centric. For updates, follow Elacity’s Twitter here.
To explore Elacity’s visionary approach and understand how it is set to transform the digital asset landscape, read Elacity’s whitepaper here.
The post Elacity Releases Whitepaper: The Access Economy in Web3 appeared first on Elastos.
I was listening to the latest Pivot Podcast when Kara Swisher played a clip from Sam Altman‘s keynote at OpenAI’s Developers Day, earlier this week. Spake Sam (at the 35:18 mark),
We believe that AI will be about individual empowerment and agency on a scale we’ve never seen before
Whoa! That’s what we’ve been working toward here at ProjectVRM since 2006.
Shall we call it IEASWNSB? (Pronounced “Eewasnib,” perhaps?) We might have better luck with that than we’ve had with VRM, Me2B, and other initialisms and acronyms.
For fun, I asked Bing Image Create, which uses OpenAI’s DALL-E to produce images, to make art with its boss’s words. It gave me the images above. Here’s the link.
Those are a little too Ayn Randy for me. So I tried just “Empowered individuals,” and got this—
—which is almost the ultra-woke opposite of the first one.
But never mind that. Let’s talk about individual empowerment with AI help. Here’s my personal punch list:
Health. Make sense of all my health data. Suck it in from every medical care provider I’ve ever had, and help me make decisions based on it. Also help me share it on an as-needed basis with my current providers.
Finances. Pull in and help me make sense of my holdings, obligations, recurring payments, incomes, whatever. Match my orders and shipments from Amazon and other retailers with the cryptic entries (always in ALL CAPS) on my credit card bills. I want to run every receipt I collect through a scanner that does OCR for my AI, which will know what receipt is for what, where it goes in the books it helps me keep, and helps me work through my taxes each year. The list can go on.
Property. What have I got? I want to point my phone camera at everything that a good AI can possibly recognize, and make sense of all that too. Know all the books on my shelves by reading their spines. Know my furniture, the stuff in my basement. Help me keep records of my car’s history after I give it the VIN I photographed under the windshield, and run all the records I’ve kept in the glove box through the same scanner I mentioned above. Whatever. Why not?
Correspondence. I have half a million emails here, going back to 1995. (Wish it went back farther.) Lots of texts too, in lots of systems. Help me do a better job of looking back through those than my various clients do. Help me cross-reference those with events I attended and other stuff that may be relevant to some current inquiry.
Contacts. Who do I have in my various directories? How many entries are wrong in one way or another? Go through and correct them, AI helper, based on whatever clever new algorithm works for that.
Calendar. Tell me where I was on a given day, what I was doing, and who I was with. Knowing all that other personal data (above) will help too.
Business relationships. Look into all my subscriptions and help me fight the fuckery behind nearly all of them. Make better sense of all the loyalty programs I’m involved with, and help me unfuck those too, since most of them are about entrapment rather than real loyalty.
Other involvements. What associations do I belong to? How deeply am I involved with any or all of them? Can we drop some? Add some? Have some insights into how those are going, or should go?
Travel. I have 1.6 million miles with United Airlines alone. Where did I go? When? Why? What did I pay? Are there ways to improve my relationships with airlines and other entities (e.g. car rental agencies, Uber/Lyft, AirBnB, cruise lines)?
Our lives are packed with too much data for our mere human minds alone to fully comprehend and put to use. AI is perfect for that. So bring it on.
And don’t bet on any of the bigs, including OpenAI, giving you anything on the punch list above*. They’re too big, too centralized, too stuck in a mainframe paradigm. They look for what only they can do for you, rather than what you can do for yourself—or do better with your own damn AI.
Personal AI today is where personal computing was fifty years ago. We don’t yet have the Apple II, the Osborne, the TRS-80, the Commodore PET, much less the IBM PC or the Macintosh. We just have big companies with big everything, with hooks for developers. And soon an app store (also announced in Sam Altman’s keynote).
Real personal AI is a huge greenfield. Going there is also, to switch metaphors, a blue ocean strategy. Wrote about that here.
*Except by pouring all that data into their LLM. Not yours.
[Press release also available in German]
Lissi, known as a pioneer in identity wallets and verifiable credentials, announces its establishment as an independent startup, supported by investments from neosfer, 9.5 Ventures and the managing directors. The spin-off is also a milestone for neosfer, which launched Lissi in 2019. For Commerzbank’s innovation unit, it is the first spin-off of a self-developed project.
With its Lissi software platform, Lissi offers a comprehensive range of solutions designed to issue, store and verify verifiable credentials and thus enable trustworthy interactions with identity wallets according to European standards. The European Union has recently reached a final agreement on the eIDAS 2.0 regulation to introduce European Digital Identity Wallets (EUDI) across the European Union. This opens up a whole new market for Lissi. The company’s vision is to become the leading software provider for trusted interactions between organisations and EUDI-Wallet users. The focus is on offering software for interaction with EUDI-Wallets and extensive functions that go beyond the standard protocols proposed by the European Union.
The founders Helge Michael, Sebastian Bickerle and Adrian Doerk explain: “We are proud to make a decisive contribution to strengthening trust in our increasingly digital society. The path to the future is already lined with promising collaborations, including municipal use cases with the cities of Cologne, Leipzig and Dresden as well as various use cases with major banks such as Commerzbank AG and ING Deutschland. These early adopters will benefit from improved process efficiency, reduced paperwork and higher data quality, resulting in significant time and cost savings.”
Kai Werner and Matthias Lais, Managing Directors of neosfer: “The spin-off is proof of the entrepreneurial innovation strength that is realised in the neosfer innovation unit. The spin-off of Lissi is a major milestone for us. It confirms how important it is to give new ideas and approaches space and structure for their development. We are delighted that our innovation unit has succeeded in doing just that.” The close collaboration with neosfer as one of the main investors will continue in the future. The founders are also supported by the venture capital firm 9.5 Ventures, which firmly believes that the management will revolutionise the design of trust-based digital interactions in personal and business processes.
Since its foundation as part of neosfer in 2019, Lissi has initiated and led the IDunion research project, which is funded by the German Federal Ministry for Economic Affairs and Climate Action. This is a community of 70 partners and over 350 individual contributors, including well-known companies such as Bundesdruckerei, Deutsche Telekom, Deutsche Bahn and DATEV e.G. Lissi has already implemented 35 pilot projects and received the prestigious Handelsblatt Diamond Award, which emphasises its commitment to the further development of digital trust.
We invite the press to engage with the future of digital identity in Europe and offer our perspective to complement your stories and encourage interested organisations to explore the potential of ID-Wallets together with Lissi.
For more information, please visit our website www.lissi.id
For press enquiries please contact: info@lissi.id
neosfer is the early-stage investor and innovation unit of Commerzbank Group. It investigates future technologies that are relevant to business and society, promotes and develops sustainable, digital solutions, and brings them profitably to the bank and its customers. All of this is done through the three areas of invest, build, connect. It creates access to innovation through strategic venture capital (invest), in-house development of technologies and business models (build), and building ecosystems around the sustainable and digital future of society (connect).
With a portfolio of more than 30 digital and sustainable startups, neosfer has always kept its eyes on the future and is continuously developing. Some successful prototypes, such as the Lissi project, the blockchain-based identity network for self-determined identities, have already emerged from this and are being used in the Commerzbank Group. Through its own events, such as the monthly tech startup event series “Between the Towers” and the Impact Festival, the company strengthens its network in the innovation, venture and sustainability sectors.
neosfer GmbH, or neosfer for short, is a wholly owned subsidiary of Commerzbank AG based in Frankfurt am Main.
About 9.5 Ventures
Ninepointfive is a venture capital fund that co-invests with corporates to take tech-based startups from early stage to maturity. The fund is located in Antwerp, Belgium, and invests in Europe and Israel. Ninepointfive’s dedicated focus on corporate-backed ventures offers its portfolio companies unique acceleration and de-risking opportunities. This is particularly the case in its investment sweet spot: B2B software.
RepConnect, a half-day summit hosted by Ceramic, will bring together developers and thought leaders from Gitcoin, Disco, Karma3, Lit, Veramo Labs, Intuition, and more to discuss the current approaches to building and collaborating on composable reputation systems. The goal of the event is to provide a space for builders and visionaries to confront the technical hurdles of composable reputation and collaborate on its future.
The event takes place on Monday, November 13th with limited spots available! RSVP below:
These panels, lightning rounds and tech talks will cover the roadblocks and vision required to make composable reputation a reality across dApps, DeFi, DAOs, DeSci, AI, loyalty and more.
Doors Open (9:00 a.m.)
Opening Keynote: Ceramic Network (9:20-9:35 a.m.) with Michael Sena
Lightning Talks: Gitcoin, ‘How to Identify Trusted Humans’ (9:35-9:45 a.m.) with Gary Sheng
Trusta Labs, ‘Building AI-Powered On-Chain Reputation for L2 Chains’ (9:45-10 a.m.) with Peet Chen
Panel Discussion: Metamask, Optimism, Karma3 & Base, ‘Building Decentralized Reputation in Communities’ (10-10:40 a.m.) with Dayan Brunie, Sahil Dewan, Ryan Nitz, Jonas Seiferth
Lightning Talks: Lit Protocol, ‘Decentralized Digital Signing & Encryption’ (10:40-10:50 a.m.)
David Sneider, co-founder at Lit, will detail the key management network and how it can be used in the context of onboarding and private data for composable reputation.
Jokerace Contest (10:50-11:00 a.m.)
In this rapid-fire demo, David Phelps will show us how to create a contest, play in a contest, use a contest to build community—and most importantly, leverage it for on-chain reputation.
Technical Talk: Veramo Labs (11:00-11:30 a.m.)
Panel Discussion: Disco, Intuition, Veramo & Verax, ‘Insights and Action Items for Building Real Apps with Decentralized Reputation’ (11:30 a.m.-12:00 p.m.) with Evin McMullen, Billy Leudtke, Simon Brown and Nick Reynolds
Leaders in decentralized identity and reputation will discuss the shortcomings and successes of online reputation, and what we need to do next to build useful apps supercharged with composable reputation.
Technical Talk: Karma3 Labs, ‘Social Emergence, Evolutionary Design: Building the Decentralized Reputation System for Open Communities’ (12:00-12:20 p.m.) with Sahil Dewan
Karma3 Labs will delve into their experiences building decentralized reputation systems for Web3 social protocols and marketplaces, drawing from their expertise in EigenTrust algorithms and open data. They will explore the intricacies of curating social emergence of relevant reputation signals and delve into the open verifiable algorithm computation that underpins these permissionless systems.
Lightning Talks: Index Network, ‘Reclaiming the Contextual Internet With Composable Identities’ (12:20-12:30 p.m.) with Seref Yarar
NewCoin (12:30-12:40 p.m.)
DAO Star (12:40-12:50 p.m.)
You won’t want to miss the knowledge and the connections (oh and the swag!) at RepConnect. Don’t forget to RSVP for event details & we can’t wait to see you in Istanbul!
It’s ambiguous: is artificial intelligence a tool, a weapon, or both? The twist with AI, I think, is much like the paradox in the Schrödinger’s Cat thought experiment: it’s a black box that might harm us or save us. We must hold both ideas at the same time as we think about what we, and our kids, need to know about AI, its negatives and positives.
A tools and weapons approach is a useful rubric for thinking about what guidance about the power and opportunity afforded by AI — broadly, what media literacy — we all need.
Here’s what I believe we need to consider in making media literacy effective for young people.
The Underlying Tenets of Media Literacy Still Hold True
I created and currently produce two media literacy series for PBS KIDS — Search It Up and Ruff Ruffman: Humble Media Genius — as part of my 25 years producing digital content for public media at GBH in Boston. We are using new episodes to showcase what AI is and what it can do, including how kids are using it to make art and text. I’m also working with my colleagues at the Berkman Klein Center on several media literacy initiatives around generative artificial intelligence.
Adults are excited, intrigued, amused, or are wringing their hands in equal measure around the growth of advanced computing and tools that can generate original text, images, audio, or video — loosely called generative AI. But what does this really mean in the context of kids?
In talking to young people, I get a sense that they need as much support in understanding AI as we adults do. And maybe they now need a tad more as generative AI further blends into their technology-rich lives.
They still need to know how media is made, that they themselves can make media, and that it has a purpose — even if it’s AI-assisted.
So, let’s start with some context: What do these three have in common?
The government.
An everyday person.
Elon Musk.
The answer is that they are the most common responses I’ve found in talking to 5th graders about who is responsible for what they find on the internet. There’s a similar range of replies when these 10-year-olds are asked who fact-checks the internet. Popular answers here include the government, no-one, and the ubiquitous Mr. Musk.
These kids are often called “digital natives” — born long after the demise of rotary phones, dial-up, Blockbuster, Myspace, and waiting for a letter in the mail. They are deemed native as if they are somehow born with an ability to reset the router or to attach PDFs to emails. I think they are not. Fish are surrounded by water but may be able to tell you little about it. Our young people need to learn, or be shown, how to stay safe online and how to benefit from the many opportunities access to boundless information affords.
Ruff Ruffman: Humble Media Genius. Image courtesy of GBH.
It’s worth noting that AI is not new. It’s in many of the tools that have been in our hands for a while. For instance, Siri uses predictions to complete a text — and has been trying to break up my marriage long before it was fashionable for more advanced AI to do so. (“I’m in the woods with Natalie,” I texted my wife when she was out of town. My English accent and flaws in Siri’s speech recognition had turned our dog, Nellie, with whom I was enjoying a woodland adventure, into Natalie, my daughter’s twentysomething math coach, with whom I was not.)
Kids are already using AI every day if they’re online or on their phones. What do they actually know about it? The following responses are from 10-year-olds:
“It’s really smart, it’s so smart it can go to websites in its memory chip; it can take all the information and put it inside its brain.”
OK, that’s a little robot overlord-y, but it’s close.
“It’s not good the first time, it learns as it plays.”
That’s pretty much exactly how AI has beaten grandmasters in Go, Chess, and Jeopardy.
“The AI is making the picture, and the AI is coded by humans.”
That’s a pretty accurate view of generative art, although it bypasses issues of intellectual property. And of course, these new images based on real people are now pretty convincing. I still can’t believe that the deepfakes of Keanu are not Keanu. But maybe I don’t want to believe they’re fake.
As I share this now, it’s worth noting that we sometimes deliberately share fake information just as willingly as if we know it to be true. This is one of the confounding challenges around stemming more harmful misinformation and disinformation.
Image by @unreal_keanu via TikTok.
“If it can tell you are sick with a disease of some sort and can tell you about it before it gets too serious by noticing unusual things that don’t always happen on a daily basis.”
This is a great encapsulation of the medical world’s hope for AI, with already-proven success in protein folding.
“The world is full with AI’s and no one can be really sure.”
No, we cannot. And so, we should still consider how to help kids thrive in a world where ideas of provenance, authorship, intention, bias, and even why we share information with each other, are increasingly fuzzy.
There is a belief that AI could weaponize phishing to be more targeted and more plausible. Conversely, reverse image search, an AI-assisted tool, let me investigate a suspicious friend request from someone who looked like a Danish sea captain and was, it transpired, a Danish sea captain — at least the pictures used were. His affable images, I discovered, had been misappropriated and used as a siren call all over the world in phishing attacks.
We Should Avoid Exacerbating Inequalities
If we are not vigilant, new technologies can have a tendency to exacerbate existing digital divides by, for example, creating a heavy reliance on expensive devices or tools. The current generation of generative AI tools relies, at minimum, on having an internet-connected device. Although the mechanics for data sent as cellular data, wi-fi, or Bluetooth are perhaps similar, their differences can be huge for those with limited means, limited data plans, or low bandwidth connectivity. Unless we think intentionally about ensuring equitable access, many children will be under-equipped to use new AI technologies.
We Must Think Creatively about the Medium of Media Literacy
School-based media literacy may provide some of the answers to helping kids learn more, but the presence of formal instruction varies by state, from none to some. The demands on the school day and the multiplicity of technologies can make integrating media literacy instruction challenging for any educator. We must understand the needs of teachers as we develop in-classroom supports and scaffold them with professional development materials as needed.
That said, we know media literacy messaging works well when it’s either baked into media that kids are already consuming or is standalone content that they gravitate to, whether that’s through video, social media, or digital games. For example, we use both of these approaches at GBH; our episodes of Molly of Denali often model positive uses of media and technology.
Molly of Denali. Images courtesy of GBH.
Future Proofing Media Literacy Education Is Key
The usage of generative AI is moving swiftly, with over 5,000 tools now claiming to have AI support and with many being integrated into tools and software kids are already using. Being strategic about what kinds of media literacy to address is key.
This is especially true for those of us making media about technology, professional video, or a high quality media literacy game. These can take months to produce, if we even find the funding to begin with. Our resulting work often has a long tail of use, so for both of these reasons we must be careful to future-proof what we provide, and to not overly focus on one single tool. The Ruff Ruffman: Humble Media Genius videos have been viewed over 100 million times, so getting the message right, with as much timelessness as possible, is important.
And as a new generation of AI tools become intertwined with what our kids interact with — in their searches, in the algorithms that suggest what to listen to or, more importantly perhaps, what friends see their posts, and in the work they do at school — we should take stock and assess whether we’re headed into stormy seas or wide open blue oceans. (That Danish sea captain clearly has left his mark on me.) We should provide media literacy that kids want to engage with; it can’t feel like just another civics lesson.
We Can Learn from the Past to Inform Our Future
There is very little research yet about generative AI, and so we in public media, and many of our colleagues across academia, are trying to conduct it. As a stop-gap, we’re leaning on studies of related technology: for example, how kids interact with chatbots can be informed by 10 years’ study into how they interact with digital voice assistants like Siri; these in turn often look back at prior research into Human Computer Interaction.
How we have evolved to use other digital tools can help us consider how we might use AI. For example, many of us have grown to trust Wikipedia but perhaps wouldn’t use it for a crucial medical diagnosis. And when was the last time you cross-checked Google Maps directions with a paper map before trusting your computer-proposed itinerary? In other words, we decide the trust we place in every new tool we use.
Generative AI is in many ways exciting, new and challenging, and I believe we can and must equip young people with the critical thinking skills to help them use AI effectively.
This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.
You Know, For Kids was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.
Social connection is a fundamental human need. From both a developmental and evolutionary standpoint, nurturing relationships matter. Our social connections with others can help support our basic needs for survival, provide a source of resilience, and enable us to gain a sense of belonging and mattering in our social and cultural world.
The U.S. Surgeon General recently released a report on an “epidemic of loneliness,” suggesting that a lack of social connection poses major threats to individual and societal health. As noted in the report, the mortality impact of feeling disconnected from others is similar to that of smoking 15 cigarettes every day. Research also indicates that loneliness increases the risk of both anxiety and depression among children and adolescents and that such risks continued to exist nine years after loneliness was initially measured. Conversely, social connection can enhance individual-level physical and mental well-being, academic achievement and attainment, work satisfaction and performance, and community-level economic prosperity and safety.
Cover illustration from Our Epidemic of Loneliness and Isolation: The Surgeon General’s Advisory on the Healing Effects of Social Connection and Community (2023).
Over the past year, there has been a rising interest in, and media coverage of, generative AI or what linguist Dr. Emily Bender terms “synthetic media machines” — that is, systems by which one can generate images or, as with large language models (LLMs), “plausible-sounding” text. Despite the hype, these systems are not completely new. The 1940s marked initial forays into language models. What is new is how these systems — which are “more ‘auto-complete’ than ‘search engine’” — are being promoted: they are being made available to the broader public.
How do different users perceive these systems? Preliminary research from IDEO sought out the perspectives of twelve participants ages 13 to 21 in the U.S. around the ways generative AI may impact social connection (among other themes). The company first distilled key sentiments associated with these systems based on large quantities of social media posts and then presented participants with AI-driven hypothetical products, such as “Build a FrAInd: Your ideal bestie come to life, based on celebs and influencers you love” and “New AI, New Me: An avatar trained on your preferences that has experiences for you.” Participants had varying levels of familiarity with generative AI and diverse life experiences (e.g., some participants were in school and others not, some had international backgrounds, etc.). When asked for their thoughts on these products, they emphasized that relationships are all “about you learning as you go,” that humans must “remain at the helm.”
In IDEO’s youth-focused research, respondents also voiced concern around trust.
In the context of human-to-human connection, an important question arises: How will generative AI, such as LLMs, influence the trust we have in other people?
A study from a Stanford and Cornell research team demonstrated that when asked to discern whether online dating, professional, and lodging profiles were generated by an LLM or a human, participants only selected the correct answer about half of the time. Whereas participants could sometimes identify specific markers of text generated by LLMs (i.e. synthetic text) such as repetitive wording, they also pointed to cues such as grammatical mistakes or long words, which, in the study’s data, were more representative of language written by a human. Additional features that participants used to discern human-written text, including first-person pronouns or references to family, were equally present in both synthetic and human-written profiles. Rather than interpreting results as evidence of machine “intelligence,” the Cornell and Stanford team suggested that individuals may use flawed heuristics to detect synthetic text.
The authors proposed that such heuristics may be indicative of human vulnerability: “People are unprepared for their encounters with language-generating AI technologies, and the heuristics developed through . . . social contexts are dysfunctional when applied to . . . AI language systems.” Concerningly, individuals are more likely to share personal information and follow recommendations by nonhuman entities that they view as “human,” raising key privacy questions. At the same time — at least in the short term — they may begin to distrust those who they think are using synthetic text in their communication.
Issues of bias are also central given that systems such as LLMs absorb and amplify the biases in training data. Against the backdrop of the race towards ever larger LLMs, as outlined in Bender and colleagues’ ground-breaking paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” the wider web is not representative of the ways that different people view the world. A number of factors impact 1) who has access to the Internet, 2) who feels comfortable sharing their thoughts and worldviews online, 3) who is represented in the parts of the Internet chosen for the training data, and 4) how the basic filtering applied to training data produces more distortion.
For instance, per the second factor, whereas user-generated content sites (e.g., Reddit) portray themselves as welcoming platforms, structural elements (e.g., moderation practices) may make these sites less accessible to underrepresented communities. Harassment on X (formerly Twitter), for example, is experienced by “a wide range of overlapping groups including domestic abuse victims, sex workers, trans people, queer people, immigrants, medical patients (by their providers), neurodivergent people, and visibly or vocally disabled people.” As the authors of “Stochastic Parrots” point out, there are selected subgroups that can more easily contribute data, which produces a systemic pattern that undermines inclusion and diversity. In turn, this pattern initiates and perpetuates a feedback loop that diminishes the impact of data from underrepresented communities and privileges hegemonic viewpoints.
Automated facial recognition software is another example. Before the widespread use of generative AI, Dr. Joy Buolamwini and Dr. Timnit Gebru found that popular facial recognition systems exhibited intersectional biases: the systems performed significantly worse on individuals of color and, in particular, on women of color. Biases in AI systems have major real-world harms across areas like employment, law enforcement, and education. As more synthetic media is produced, such content is then fed back into future systems, creating a pernicious cycle and perpetuating pernicious biases connected to, as a few examples, race, class, and gender.
In practical terms, what might considerations like these mean for human-to-human connection?
Let’s imagine you are a parent emailing your child’s school counselor to begin a conversation about a behavioral challenge your child is experiencing. You receive a response, but wonder: Was part of this email produced by ChatGPT? If so, which part(s)? Why would the system be used to respond to such a sensitive concern? What might that indicate about the counselor? Perhaps about the school as a whole? Would you fully trust the counselor to assist in the referral of your child?
Furthermore, what if you knew about the significant biases built into and amplified by generative AI? Or about other ongoing harms connected to these systems, such as labor force exploitation, environmental costs that exacerbate environmental racism, and massive data theft? Would this knowledge further erode your trust in communicating with someone whom you suspect may have responded with synthetic text, and, if so, to what degree? Whereas trust may not be the ultimate end goal of human communication, it is still a vital part and outcome of a positive, healthy connection.
There are a number of key questions moving forward. How can we counter the generative AI hype and educate individuals to be critical consumers of these systems — with the understanding that, as Dr. Rumman Chowdhury has pointed out, AI “is not inherently neutral, trustworthy, nor beneficial”? While acknowledging this nuanced landscape, how do we develop regulations that emphasize accountability on the part of the companies that develop and deploy generative AI (especially through a lens of algorithmic justice as described by Deborah Raji); transparency (e.g., the knowledge that one has encountered synthetic media and an understanding of how the system was trained; e.g., “consentful tech”); and the prevention of exploitative labor?
Returning to social connection and human-to-human communication, when we use language, we do so for a given purpose — to ask another person a question, explain an idea to someone, or just to socialize. In the context of LLMs, it is important not to conflate word form and meaning. Referents, actual things and ideas in the world around us, like tulips or compassion, are needed to produce meaning. This meaning is unable to be learned from form alone. Given that LLMs are trained on form, these systems do not necessarily learn “meaning,” but instead some “reflection of meaning into the linguistic form.” As Dr. Bender notes, language is relational by its very nature.
Moving forward, it is essential that we preserve the sanctity of genuine human-to-human connection, with its conflicts, its awkwardness, and its spaces for cultivating relationships built on consistent trust, belonging, and mattering to those in one’s life.
Are you interested in continuing the conversation around social connection? Please fill out the following form! In addition, would you recommend resources that should be included in this piece? Other feedback? Please feel free to reach out to me at any time (alexandra.hasse2556@gmail.com); I am still learning in this space and I so much value learning from you.
This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.
Preserving Social Connections against the Backdrop of Generative AI was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.
As a social scientist who views online phenomena through the lenses of trust, safety, security, privacy, and transparency, I seek to understand the potential for misuse and abuse in this current environment of giddy euphoria related to Generative AI (GenAI). Below, I briefly discuss some forms of victimization that the makers and regulators of these tools must consider, and suggest ways to reduce the frequency and impact of potential harms that may emerge and proliferate.
Photo by Google DeepMind on Unsplash.
If you’ve spent any meaningful amount of time on social media, you’ve likely been exposed not only to harassment, but also to the presence of bots that spam or otherwise annoy you with irrelevant or intrusive content. GenAI allows for the automatic creation of harassing or threatening messages, emails, posts, or comments on a wide variety of platforms and interfaces, and systematizes their rapid spread. In addition, given that malicious social media users have employed thousands of bots to flood online spaces with hateful content, it is reasonable to assume that GenAI can facilitate this at an even greater scale. Indeed, since GenAI bots can converse and interact in more natural ways than traditional bots, responding to the problem may be much more challenging than using typical content moderation methods.
Imagine this occurring in the comment thread of your latest Instagram post, or among the community of friends you’ve carefully built in your Twitch or Discord channel, or on your recent LinkedIn post seeking new employment opportunities. Imagine a flood of bots when you’re trying to seek a romantic partner on a dating app. Recently, an AI chatbot was created to identify the type of women a person is interested in and then initiate flirtatious conversation with them until they agree to a date or share their phone number. Another chatbot has been accused of pursuing prospective romantic partners when they were clearly not interested, even becoming sexually aggressive and harassing. One can easily envision the problematic possibilities when these technologies are combined, refined, and exploited.
Relatedly, I am very concerned about the dissemination and amplification of hate speech, given the ability of GenAI to be used to create and propagate text, memes, deepfakes, and related harmful content that targets specific members of marginalized groups or attacks the group as a whole.
Even if the hate speech is created by human users, accounts created by GenAI can increase the visibility, reach, and virality of existing problematic content by fostering large upswings in engagement for those posts through high volumes of likes, shares, and comments.
It is not clear how proficient platforms are in detecting unnatural behavior of this ilk, and malicious users can easily program frequencies and delays to mimic typical human activity.
Many of us are familiar with how deepfakes have been used over the last decade to compromise the integrity of the information landscape through disinformation campaigns and image-based sexual violence. GenAI technologies not only greatly assist in the creation of deepfakes, but also can intersect with sextortion, catfishing, doxing, stalking, threats, and identity theft. Imagine this: A malicious individual creates a new account on a dating app. An unsuspecting user is then fooled into believing they are talking with a real person in their town, even though the chat conversation is facilitated by GenAI. Soon, the unsuspecting user begins candidly sharing personal information as they build a strong emotional bond with the fake account. When the malicious individual begins to send nude photo and video content to deepen intimacy, the unsuspecting user is unable to discern that it is manufactured. After responding in kind with genuine, private, sexual photos, extortion and threats ensue. Even after the victim responds to the demands, the malicious individual still shares the victim’s private information publicly on other message boards. It’s reasonable to expect new iterations of GenAI tools that can live-search the Internet and integrate queried information, organize it, connect it with other sources, and build a detailed dossier about a person. This would contribute to additional privacy violations, stalking, and threats against the unsuspecting user, as well as fraudulent activity (e.g., counterfeit documents, wide-scale phishing attacks) and identity theft.
Given how many forms of abuse can be aided and abetted by GenAI, an essential question surfaces: What can be done here to mitigate risk and harm?
Initiatives that might be considered low-hanging fruit often involve education of end users to augment their ability to recognize GenAI creations as synthetic and to interpret and react to them accordingly. This can occur in part through improved detection algorithms, labeling/watermarking, notifications/warnings, and in-app or in-platform educational content of a compelling nature (e.g., when TikTok asked top influencers to create short videos that encourage users to take breaks from screentime or teach viewers how to counter online bullying).
Outside of these platform-centric efforts, media literacy education in schools must also require instruction in the use (and possible misuse) of GenAI tools, given their growing adoption among young people. Other theoretically simple solutions involve the ability for creators to easily attach Do Not Train flags to certain pieces of output that should not end up as training data in large language models (LLMs) (e.g., Adobe’s Content Authenticity Initiative is advocating for this on an industry-wide level (h/t Nathan Freitas)). New, elegant, privacy-forward solutions to quickly and consistently verify authentic users — their identity, their voice, their persona in photo and video (and, subsequently, remove non-human users) — must be developed and deployed. To be sure, though, protections must be in place so that human users (especially those historically marginalized) are not algorithmically misclassified because of existing biases in training datasets.
Can tech companies that provide GenAI models to their user base also reasonably mandate rule compliance? That is, can the tool itself (and the messaging that surrounds it) be crafted in a way that deters misuse and promotes prosocial or at least neutral output? Can it be presented to users with both excitement and cautions? Can clear examples of appropriate and inappropriate use be provided? Since being logged-in is likely required, can the platform remind the users that logs are kept to facilitate investigations should policy violations occur? And can gentle reminders and prompts periodically jog the memory of users that appropriate use is expected? All of this seems especially important if the tool is provided seamlessly and naturally within the in-app experience on hugely popular platforms (e.g., My AI on Snapchat was rolled out to 750 million monthly users and fielded 10 billion messages from over 150 million users within two months).
Employees at all levels within AI research and development firms must operate within an ethos where “do no harm” is core to what they build. To be sure, tech workers are learning on the fly in this brave new world, and some must now retrofit solutions that ground human dignity, privacy, security, and the mitigation of bias into their products and services. It is critical. Not only will this reduce the incidence of various risks and harms, but it can contribute to further adoption and growth of their models as the signal to noise ratio of accurate, objective, and prosocial content creation improves.
Partnerships between academia and tech companies continue to hold promise to identify solutions to technological problems, and more initiatives focused on GenAI issues should be supported and promoted. Can researchers gain increased access to publicly available data mined via platform APIs to identify historical and current behavioral clues — as well as anonymized account data (date of creation, average frequency of engagement, relevant components of the social network graph) that readily point to synthetic users? Might they somehow obtain anonymized access not just of adult users but also minors (those 17 years of age and younger) given their comparatively greater vulnerability to the internalization and externalization of harm? And what can be learned from the financial and pharmaceutical sectors when it comes to government involvement and regulation to prevent ethical violations, biases and discriminatory practices, economic disparities, and other outcomes of misuse with GenAI? For instance, can risk profiles be established for all AI applications with baselines for rigor of assessment, mitigation of weaponization and exploitation, and processes for recovery? Those with the highest scores would likely gain the most market share, and keeping those scores would motivate quality control and constant refinement.
Finally, we cannot keep moving ahead at breakneck speed without carefully designed regulatory frameworks for GenAI that establish standards and legal parameters and that set in place sanctions for those entities that transgress.
This includes clearly describing and prohibiting (and designing prevention mechanisms for) edge cases where victimization will likely result. Moreover, proper governance requires detailed protocols for audits, licensing, international collaboration, and non-negotiable safety practices for public LLMs. The Blueprint for an AI Bill of Rights from the US Office of Science and Technology Policy is a good start with great macro-level intentions, but it reads more like a strong suggestion rather than a directive with applied specificity. With regard to data privacy and security in general, the US has failed to keep pace with the comprehensive, forward thinking efforts of other countries. Urgency is needed so this does not happen yet again with GenAI, so that we can grow in confidence that its positives do measurably outweigh its negatives.
This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.
Thinking Through Generative AI Harms Among Users on Online Platforms was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.
DIF is honoured to host the China Academy of Information and Communications Technology (CAICT), one of China’s leading scientific research institutes, founded in 1957, as the founding member of the DIF China Special Interest Group (SIG).
Decentralized technologies are being actively developed in China, with more than 300 blockchain projects launched across the country in 2022, covering multiple industries and fields.
The Special Interest Group will be chaired by Xie Jiagui, Chief Engineer at CAICT’s Institute for Industrial Internet & Internet of Things (pictured above), who is engaged in research and development of network identification, blockchain technology systems, and big data technology.
We asked Mr Xie and Chiye Sun, Global Partnership Manager & Blockchain Researcher at CAICT about the organization’s Xinghuo Astron project, and their plans for the DIF China SIG.
One of the goals of the DIF China SIG is to promote and advocate for the adoption and implementation of DIF specifications and technologies in China. Why do you see this as an important outcome?
Identity verification is the foundation of the digital economy and innovation. Promoting and adopting DIF technologies and related W3C standards in China offers a more secure and decentralized approach to identity verification, providing a more reliable and efficient infrastructure for the digital economy and supporting the development and innovation of the digital industry. DIF's identity verification and data management tools can also help China's digital industry deliver more efficient and convenient public services and strengthen digital capabilities and governance.
DIF technologies and W3C standards can also promote consistency in identity verification and data exchange between different systems, reduce adaptation costs, and improve system interoperability and efficiency.
Finally, implementing these technologies and standards can help protect users' personal privacy, enhance data security and credibility, reduce reliance on centralized identity providers through decentralized authentication and authorization mechanisms, and enhance users' control over their identity and personal data.
CAICT’s customers and partners include chipmakers, device vendors, operators and system suppliers. Do you envisage organizations such as these participating in the China SIG?
It is hard to say for certain who will participate in the SIG. We think DID-related and blockchain-related Chinese enterprises would love to join this SIG. We can also ask universities, research institutes and other government-related institutions to join. We will also promote the DIF China SIG in different Chinese Web3 communities, like Alliance for Blockchain Industry (ABI), which contains more than 600 members, Z-Park and DID Alliance (DIDA).
We are also very much looking forward to hosting international level participants, learning from each other, communicating together, and hope to generate technical ideas, business pilots and governance cooperation.
It sounds as though the W3C DID (Decentralized Identifier) standard is core to your research and development work in the areas of industrial internet and blockchain. For example, the Xinghuo Astron project, a native DID network around the world built on the Xinghuo Blockchain Infrastructure & Facility (Xinghuo BIF), aims to provide blockchain networks with identifier services, realizing interconnectivity across regions and industries. Why did you choose the W3C DID standard for Xinghuo?
We registered Xinghuo’s own DID method, called BID (did:bid), which follows the W3C DID standards. Regarding the technical standard for identifiers, we think it is better to follow a global standard format for interoperability and collaboration globally. A DID is a new type of identifier that is user-generated, without the need for a centralized entity, permanent, cryptographically verifiable and universally resolvable. It is in line with the evolution of cyberspace towards self-sovereign identity management.
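For readers unfamiliar with the W3C data model, a minimal DID document for a did:bid identifier might look like the sketch below. The identifier, key type and key value are invented for illustration and are not taken from Xinghuo Astron.

```typescript
// Illustrative only: a minimal W3C DID Core-style document for a
// hypothetical did:bid identifier. All values are invented examples.
interface DidDocument {
  "@context": string[];
  id: string;
  verificationMethod: {
    id: string;
    type: string;
    controller: string;
    publicKeyMultibase: string;
  }[];
  authentication: string[];
}

const exampleDoc: DidDocument = {
  "@context": ["https://www.w3.org/ns/did/v1"],
  id: "did:bid:example123456789abcdef",
  verificationMethod: [
    {
      id: "did:bid:example123456789abcdef#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:bid:example123456789abcdef",
      publicKeyMultibase: "zExampleMultibaseEncodedPublicKey",
    },
  ],
  authentication: ["did:bid:example123456789abcdef#key-1"],
};

// Resolving the DID would return a document of this shape, which a verifier
// can use to check signatures made by the identifier's controller.
console.log(JSON.stringify(exampleDoc, null, 2));
```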
Use cases for Xinghuo Astron include verifying certificates of origin in cross border trade between China and Malaysia and enabling traceability of carbon emissions data for manufacturers exporting to the EU. How do you envisage the DIF China SIG advancing international collaboration and adoption around these and other use cases?
One of our goals for the DIF China SIG is to promote and advocate for the adoption and implementation of DIF standards and technologies in China. Apart from that, we would like to construct a platform that provides communication and cooperation opportunities between China and other countries.
Planned activities for the SIG include organizing conferences and events, demonstrating W3C / DIF technical standards, lectures, training sessions and promoting projects such as Xinghuo Astron project as well as DIF projects. This sounds like an ambitious agenda! Do you plan to promote the SIG and its activities to an international audience? What language(s) will be used during SIG meetings?
Yes, we would like to promote the DIF China SIG to an international audience. CAICT also involves related work in G20, APEC, Belt and Road Initiative and other international cooperation mechanisms. We could try to introduce DIF and DIF China SIG to these mechanisms when appropriate.
Regarding language, routine meetings and activities will use Mandarin if no English speaking participant comes to join, or we could translate some bullet points for them. When it comes to some international level discussion and cooperation, we can switch to English.
Five months after its inauguration on 6 June 2023, SICPA's unlimitrust campus joins the seven technology parks of the canton of Vaud, under the umbrella of Innovaud. The unlimitrust campus aims to accelerate the creation of technologies and solutions that contribute to the emergence of a genuine economy of trust on a global scale. On 8 November, a celebration took place in the presence of Isabelle Moret, State Councillor of the canton of Vaud's Department of Economy, Innovation, Employment and Heritage.
On 8 November 2023, the unlimitrust campus officially joined the seven other Vaud technology parks! On this occasion, the campus had the honour of welcoming State Councillor Isabelle Moret, head of the DEIEP.
The unlimitrust campus is part of a drive to create a safer, more reliable world by facilitating the emergence of innovative concepts and projects that address the new challenges facing our society, using trust technologies. To this end, it encourages collaboration between expert companies in the field as well as start-ups, researchers and academic institutions such as EPFL.
Adjacent to SICPA's historic headquarters in Prilly, the unlimitrust campus offers 30,000 square metres of collaborative spaces, offices, laboratories and shared services. Run by the Economy of Trust Foundation, a SICPA initiative to promote the concept of an economy of trust, the campus brings together an international community of companies and public and private institutions of all sizes, for example Approach Cyber, Visium and Cyberion, which work in cybersecurity and artificial intelligence. The aim is to stimulate innovation in the digital and physical domains in order to strengthen the sovereignty of institutions and the confidence of citizens in their interactions with each other and with private or public entities, wherever trust is paramount.
"SICPA was born almost 100 years ago here in the canton of Vaud. I am delighted that the innovative solutions created here, particularly in the field of trust, and the values of excellence to which we are committed, find in this campus a setting that showcases them and carries them beyond our borders," explains Philippe Amon, Chairman and CEO of SICPA.
The canton of Vaud, through its Service for the Promotion of the Economy and Innovation (SPEI), Innovaud and the EPFL Innovation Park Foundation (EIP), plays a leading role in innovation in Switzerland and worldwide. They are already actively involved in the overall dynamic of the unlimitrust campus. Several start-ups have already joined the campus, and some of them are hosted at the heart of the Trust Village, an incubation structure supported by the cantonal authorities and SICPA, operated by the EPFL Innovation Park (EIP) and managed by the Trust Valley.
The unlimitrust campus becomes the eighth technology park of the canton of Vaud and Innovaud. Celebration of 8 November 2023 in the presence of Mr Gaudin, Ms Moret and Mr Barbey.
Innovaud is therefore delighted to welcome the unlimitrust campus into the family of Vaud technology parks, which now numbers eight alongside the Agropôle, the Ateliers de Renens, the Biopôle, the EPFL Innovation Park, swiss aeropole, the Technopôle de Sainte-Croix and Y-PARC. "Each of the technology parks works in a technological field of fundamental importance to the Vaud economy and to the canton's international attractiveness. The unlimitrust campus strengthens our region's innovation ecosystem in the field of security and trust," explains Patrick Barbey, director of Innovaud.
Elastos BPoS consensus mechanism revolutionises staking! 2023 saw the advent of Elastos’ BPoS (Bonded Proof of Stake) consensus mechanism, enabling node operators to customise their staking duration and offering greater flexibility to delegators. As a result, this upgrade doubled the number of validators on Elastos in just a few weeks. We’re now excited to share that Staking Rewards, our strategic partner, has added a dedicated Elastos dashboard for BPoS data and insights. Take a look here!
Bonus incentives from the Elastos Foundation and low operational costs create an ideal opportunity to join our validator and staking community today. Staking Rewards is instrumental in sharing this information with prospective validators, including investors looking for assets with ROI. As an extension of our collaborative efforts, the Elastos Growth Team will be present at the Staking Rewards Staking Summit event in Turkey on November 10th, where the team will connect with attendees and showcase the unique features of Elastos’ validator system, along with the opportunities for earning ELA it offers. Follow our Twitter for updates!
“We are excited to bring Elastos staking data onto Stakingrewards.com. With its distinct PoW/PoS model, Elastos has a unique position in the industry and we are looking forward to driving adoption for ELA staking in the ecosystem and beyond.” Mirko, Staking Rewards CEO
“We have recently introduced a new consensus mechanism for Elastos, and now that the details of our blockchain validators are on Staking Rewards, we feel like Elastos has arrived! We look forward to welcoming further node operators to share our Web3 journey. For only 2000 ELA there is an opportunity for node operators to earn pre-halving rewards, including bonuses, at a very reasonable cost – ELA scarcity is part of the economics and locking them for consensus security offers a safe haven for on-chain holders wishing to support us. Staking Rewards adds visibility to the project so we can share what we’re working towards. Elastos has been building for over five years, so it is here for the long run, and with the next bull run taking shape, it presents an opportunity to join a maturing community with resilient technology as the Web3 story charges onwards.” Fakhul, Elastos Head of Growth
What’s next?
Building upon this momentum, Elastos recently partnered with Alibaba Cloud and Tencent Cloud. These collaborations bolster our SmartWeb's resilience with cloud infrastructure and advance DID identity solutions with KYC and verifiable credentials support for privacy-preserving compliance. These partnerships have reinforced Elastos’ standing in secure digital identity verification and enhanced validator confidence in the ecosystem, making today a great time to be a validator.
Next, Elastos is looking to launch an ecosystem accelerator campaign to foster project development across various Web3 applications like DeFi, SocialFi, and NFTs. The campaign will leverage Elastos’ core technologies, including Carrier for communications, EID for identity, Hive for storage, and Elastos Smart Chain for EVM support. Separately, ecosystem project Elacity is advancing DRM technology on Elastos, supporting the development of Elastos’ Runtime technology, an execution environment to support playback of encrypted content like video and music using tradable NFT rights.
Explore the Elastos Staking Rewards Dashboard today and take control of your digital life—identity, finances, and content. With Elastos, it’s possible.
The post Elastos and Staking Rewards Forge a Strategic Partnership: Elevating Web3 Participation and Rewards appeared first on Elastos.
In today’s retail environment, consumers and logistics are constantly evolving. Wondering how these evolutions will affect your in-store shopping experience? Join us as we chat with Omni Talk to discuss retail technology innovations, how they will change the physical store environment, and what’s on the horizon for omnichannel commerce.
Key takeaways:
The future of retail lies in technology and innovation. The episode emphasizes the importance of retailers embracing technological advancements to stay competitive. From computer vision and robots in back rooms to smart carts and curbside pickup, retailers need to adapt their operations and design to meet the changing needs and preferences of consumers.
Curbside pickup is a significant trend that is here to stay. The COVID-19 pandemic accelerated the adoption of curbside pickup, and it has quickly become a popular choice for consumers seeking convenience and contactless retail experiences. Retailers are redesigning parking lots and experimenting with autonomous robot vending machines to cater to the demand for curbside pickup.
Data and integration are crucial for optimizing the customer experience. The use of 2D barcodes and the collection of data attached to them can provide valuable insights into customer behaviors and preferences. Retailers who effectively utilize this data and integrate it into their systems will have a competitive advantage. Additionally, the integration of video commerce and retail media networks presents opportunities for retailers to maintain control over their video content and ensure commerce flows through their platforms.
Connect with GS1 US:
Our website - www.gs1us.org
Connect with guests:
Follow Chris Walton on LinkedIn
Anyone who wants to log into apps or websites usually uses a password to identify themselves. With a passkey, the login works without a password.
The post CHIP: What is a passkey? Easily explained appeared first on FIDO Alliance.
Google’s Diego Zavala, product manager on the authentication team, insists that “Passkeys are the future of online authentication,” citing improved security and convenience versus traditional passwords. Apps that already use Credential Manager to support passkeys include Uber and WhatsApp.
The post DevClass: A further push for passkeys: Android Credential Manager generally available from November 1st appeared first on FIDO Alliance.
Last week, Amazon announced that it would be hopping on the Passkey train. This means that you’ll now be able to use passkeys to log in to some of your Amazon accounts (we’ll get to which ones later). The tech giant joins a slew of other companies who are saying goodbye to passwords and opting for passkeys instead.
The post CNET: Passkeys have come to Amazon. Here’s what you need to know appeared first on FIDO Alliance.
Data provenance, typically used within the broader context of data lineage, refers to the source or first occurrence of a given piece of data. As a concept, data provenance (together with data lineage) is positioned to provide validity and encourage confidence related to the origin of data, whether or not data has mutated since its creation, and who the original publisher is, among other important details.
From tracking the origin of scientific studies to big banks complying with financial regulations, data provenance plays an integral role in supporting the authenticity and integrity of data.
Databases and Data Provenance
When it comes to databases, you can start to imagine how critical data provenance is when organizing and tracking files in a data warehouse or citing references from within a curated database. For consumer applications (take social media platforms such as Twitter, for example) that build entire advertising business models around the engagement derived from user-generated content, the claim of unaltered authorship (apart from account hacks) of a given Tweet is a guarantee made by the platform to its users and investors—trust cannot be built without it.
With the implications of data provenance in mind, organizations that rely on centrally controlled data stores within the context of consumer applications are constantly evolving security protocols and authentication measures to safeguard both their users and business from attacks that could result in data leaks, data alterations, data wipes, and more. However, so long as potential attack vectors and adequate user authentication are accounted for, these organizations benefit from inherent assurances related to the authenticity of incoming writes and mutations—after all, their servers are the agents performing these edit actions.
Data Provenance in Peer-to-Peer Protocols
But what about peer-to-peer data protocols and the applications built on them? How do topics such as cryptographic hashing, digital signatures, user authentication, and data origin verifiability in decentralized software coincide with data provenance and data lineage?
This article is meant to provide an initial exploration of how and where these topics converge and build a specific understanding of the overlap between these ideas and the technical architecture, challenges, qualities, and general functionality of ComposeDB on Ceramic. ComposeDB, built on Ceramic, is a decentralized graph database that uses GraphQL to offer developers a familiar interface for interacting with data stored on Ceramic.
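As a sketch of what that GraphQL interface can look like in practice, the snippet below reads documents through a ComposeDB client. It assumes the @composedb/client package and a compiled composite definition; the postIndex model and its fields are hypothetical and exist only for illustration.

```typescript
// Minimal sketch: reading ComposeDB data through GraphQL.
// Assumes the @composedb/client package and a compiled composite definition;
// the "postIndex" model and its fields are hypothetical.
import { ComposeClient } from "@composedb/client";
import { definition } from "./__generated__/definition.js";

const compose = new ComposeClient({
  ceramic: "http://localhost:7007", // local Ceramic/ComposeDB node
  definition,
});

// A familiar GraphQL query against the models this node indexes.
const result = await compose.executeQuery(`
  query {
    postIndex(first: 5) {
      edges {
        node {
          id
          title
          author {
            id
          }
        }
      }
    }
  }
`);

console.log(result.data);
```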
The following article sections will set out to help accomplish the goals outlined above.
Smart Contract-Supported Blockchains
Blockchains that contain qualities analogous to a distributed state machine (such as those compatible with the Ethereum Virtual Machine) operate based on a specific set of rules that determine how the machine state changes from block to block. In viewing these systems as traversable open ledgers of data, accounts (both smart contract accounts and those that are externally owned) generate histories of transactions such as token transfers and smart contract interactions, all of which are publicly consumable without the need for permission.
What does this mean in the context of data provenance? Given the viability of public-key infrastructure, externally owned accounts prevent bad actors from broadcasting fake transactions because the sender's identity (their public address) is publicly available to verify. When it comes to the transactions themselves (both account-to-account and account-to-contract), the resulting data that's publicly stored (once processed) includes information about who acted as well as their signature.
Transaction verifiability, in this context, relies on a block finalization process that requires validator nodes to consume multiple transactions, verify them, and include them in a block. Given the deterministic nature of transactions, participating nodes can correctly compute the state for themselves, therefore eventually reaching a consistent state about the transactions.
While there are plenty of nuances and levels of depth we could explore related to the architecture of these systems, the following are the most relevant features related to data provenance:
The verifiable origin of each transaction represents the data we care about related to provenance
Transactions are performed by externally owned accounts and contract accounts, both of which attach information about the transaction itself and who initiated it
Externally owned accounts rely on cryptographic key pairs
ComposeDB vs. Smart Contract-Supported Blockchains
There is plenty to talk about when comparing ComposeDB (and the Ceramic Network more broadly) to chains like Ethereum; however, for this post, we'll focus on how these qualities relate to data provenance.
Controlling Accounts
Ceramic uses the Decentralized Identifier standard for user accounts (DID PKH and Key DID are supported in production). Similar to blockchains, they require no centralized party or registry. Additionally, both PKH DIDs and Key DIDs ultimately rely on public key infrastructure (PKH DIDs enable blockchain accounts to sign, authorize, and authenticate transactions, while Key DIDs expand cryptographic public keys into a DID document).
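For concreteness, here is what the two supported account types look like as identifiers; the address and key material below are made up for illustration.

```typescript
// Illustrative DID identifiers for the two account types described above.

// PKH DID: wraps a blockchain account (here an Ethereum mainnet address,
// expressed with its CAIP-10 chain prefix eip155:1). The address is invented.
const pkhDid = "did:pkh:eip155:1:0xab16a96D359eC26a11e2C2b3d8f8B8942d5Bfcdb";

// Key DID: derived directly from a cryptographic public key
// (a multibase-encoded Ed25519 key in this example).
const keyDid = "did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK";
```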
Sign in With Ethereum (SIWE)
Like chains such as Ethereum, Ceramic supports authenticated user sessions with SIWE. The user experience then diverges slightly when it comes to signing transactions (outlined below).
Signing Transactions
While externally-owned accounts must manually sign individual transactions on chains like Ethereum (both when interacting with a smart contract or sending direct transfers), data in Ceramic (or streams) are written by authenticated accounts during a timebound session, offering a familiar, Web2-like experience. The root account (your blockchain wallet if using Ceramic’s SIWE capability, for example) generates a temporary child account for each application environment with tightly-scoped permissions, which then persists for a short period in the user’s browser. For developers familiar with using JWTs in Node.js to authenticate users, this flow should sound familiar.
This capability is ideal for a protocol meant to support mutable data with verifiable origin, thus allowing for multiple writes to happen over a cryptographically authorized period (and with a signature attached to each event that can be validated) without impeding the user’s experience by requiring manual signs for each write.
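A rough sketch of that session flow is shown below, assuming the did-session and @didtools/pkh-ethereum packages; option names and the resource URI are illustrative and may differ between releases.

```typescript
// Sketch of a timebound, tightly-scoped write session using Sign-In With Ethereum.
import { DIDSession } from "did-session";
import { EthereumWebAuth, getAccountId } from "@didtools/pkh-ethereum";

// 1. Ask the user's wallet (the "root" blockchain account) for access.
const ethProvider = (window as any).ethereum;
const addresses = await ethProvider.request({ method: "eth_requestAccounts" });
const accountId = await getAccountId(ethProvider, addresses[0]);

// 2. Authorize a temporary child key with a single wallet signature,
//    scoped to the resources the application is allowed to write.
const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId);
const session = await DIDSession.authorize(authMethod, {
  resources: ["ceramic://*"], // illustrative scope
});

// 3. Writes during the session are signed by the child key, so the user is
//    not prompted to sign each individual update.
console.log("session DID:", session.did.id);
```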
Consensus
Ceramic relies on event streams that offer a limited consensus model that makes it possible for a given stream to allow multiple parallel histories while ensuring any two parties consuming the same events for a stream will arrive at the same state. What this means is that all streams and their corresponding tips (latest events within their event logs) are not known by all participants at any given point in time.
However, a mechanism known as the Ceramic Anchor Service (CAS) is responsible for batching transactions across the network into a Merkle tree and regularly publishing its root in a single transaction to Ethereum. Therefore, Ceramic does offer a consensus on the global ordering of Ceramic transactions.
Immutability
Just as smart contracts provide a deterministic structure that dictates how users can interact with them (while guaranteeing they will not change once deployed), ComposeDB schemas are also immutable, offering guarantees around the types of data a given model can store. When users write data using these definitions, each resulting model instance document can forever only be altered by accounts that created it (or grant limited permission to another account to do so), and can only make changes that conform to the schema’s definition.
Finally, every stream comprises an event log of one or more commits, making it easy for developers to extract not only the provenance of the stream's data, based on the cryptographic signature of the account that created it, but also the stream's data lineage, by traversing the commit history to observe how the data mutated over time.
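As a sketch of how a developer might inspect both properties, the snippet below loads a stream's current state and its commit log, assuming the @ceramicnetwork/http-client package; the stream ID is a placeholder and the exact client method names may vary by release.

```typescript
// Inspecting a stream's provenance (who controls it) and lineage (how it changed).
import { CeramicClient } from "@ceramicnetwork/http-client";

const ceramic = new CeramicClient("http://localhost:7007");
const streamId = "kjzl6kcym7w8y..."; // hypothetical stream identifier

// Provenance: the controller(s) recorded in the stream's state.
const stream = await ceramic.loadStream(streamId);
console.log("controlled by:", stream.state.metadata.controllers);

// Lineage: every signed commit in the stream's event log.
const commits = await ceramic.loadStreamCommits(streamId);
for (const commit of commits) {
  console.log("commit CID:", commit.cid);
}
```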
Publicly Verifiable
Similar to networks like Ethereum, the Ceramic Network is public by default, allowing any participating nodes to read any data on any stream. While the values of the data may be plaintext or encrypted, contingent on the objectives of the applications using them, anyone can verify the cryptographic signatures that accompany the individual event logs (explained above).
Centralized Databases
The broad assumption I'll line up for this comparison is that a traditional “Web2” platform uses a sandboxed database to store, retrieve, and write data on behalf of its users. Apart from the intricate architecture strategies used to accomplish this at scale with high performance, most of these systems rely on the assurances that their servers alone have sole authority to perform writes. Individual user accounts can be hacked into via brute force or socially engineered attacks, but as long as the application's servers are not compromised, the data integrity remains intact (though requiring participants to trust a single point of failure).
ComposeDB vs. Centralized Databases
If this article set out to compare ComposeDB to traditional databases in the context of functionality and performance, we'd likely discuss a higher degree of similarities rather than differences; however, when comparing ComposeDB to the paradigm of a “traditional” database setup in the context of data provenance, we find that the inverse holds true, based on much of what was discussed in the previous section.
Embedded Cryptographic Proof
As previously discussed, all valid events in Ceramic include a required DAGJWS signature derived from the stream’s controlling account. While it’s possible (though logically unwise) that an application using a centralized database could fabricate data related to the accounts of its users, event streams in Ceramic are at all times controlled by the account that created the stream. Even if a Ceramic account accidentally delegates temporary write access to a malicious application that then authors inaccurate data on the controller’s behalf, the controlling account never loses admin access and can revert or overwrite those changes.
Public Verifiability
Unlike Ceramic, the origin of data (along with most accompanying information) is not accessible by design when using a centralized database, at least not in a permissionless way. The integrity of the data within a “traditional” database must therefore be assumed based on other factors requiring trust between the application's users and the business itself. This architecture is what enables many of the business models used by these applications, which ultimately have free rein over how they leverage or sell user data.
Conversely, business models like advertising can be (and are currently being) built on Ceramic data, which flips this paradigm on its head. Individual users have the option to encrypt data they write to the network and have an array of tools at their disposal to enable programmatic or selective read access based on conditions they define. Businesses that want to access this data can therefore work directly with the users themselves to define the conditions under which their data can be accessed, putting the sovereignty of that data into individual users’ hands.
Timestamping and Anchoring
While in a private, sandboxed database, development teams can implement a variety of methods to timestamp entries, those teams don't have to worry about trusting other data providers in a public network to be competent and non-malicious. Conversely, data in Ceramic leverages the IPLD Timestamp Proof specification, which involves frequently publishing the root of a Merkle tree to the blockchain, with sets of IPLD content identifiers representing Ceramic data as the tree's leaves. While the underlying data structure (event log) of each stream will preserve the ordering of its events, with each event pointing to the prior one in the stream, the anchoring process allows developers to use event timestamping in a decentralized, trustless way.
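To make the anchoring idea concrete, here is a toy illustration of batching event identifiers as Merkle tree leaves and deriving a single root to publish on-chain; this is a conceptual sketch, not Ceramic's actual implementation.

```typescript
// Toy Merkle-root construction: many event identifiers in, one root out.
import { createHash } from "node:crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) throw new Error("no leaves");
  let level = leaves.map(sha256);
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] ?? left; // duplicate the last node on odd counts
      next.push(sha256(left + right));
    }
    level = next;
  }
  return level[0];
}

// Leaves stand in for the IPLD content identifiers of recent Ceramic events.
const root = merkleRoot(["bafy...event1", "bafy...event2", "bafy...event3"]);
console.log("root to publish in a single Ethereum transaction:", root);
```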
Verifiable Credentials
Verifiable credentials under the W3C definition unlock the ability for verifiable claims to be issued across a virtually limitless set of contexts, with the guarantee that they can later be universally verified in a cryptographically secure way. This standard relies on several key features (below are only a few of them):
Verifiable Data Registry: A publicly available repository of the verifiable credential schemas one might choose to create instances of
Decentralized Identifiers: Verifiable credentials rely on DIDs both to identify the subject of a claim and within the cryptographic proof created by the issuer
Core Data Model: These credentials follow a standard data model that ensures that the credential's body (made up of one or more claims about a given entity) is inherently tamper-evident, given that the issuer generates a cryptographic proof that guarantees both the values of the claims themselves and the issuer's identity
For example, an online education platform may choose to make multiple claims about a specific student's performance and degree of completion in a specific course they are taking, all of which could be wrapped up into one verifiable credential. While multiple proof formats could be derived (EIP712 Signature vs. JWTs), the provenance of the credential is explicit.
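A minimal sketch of what such a credential could look like follows; the identifiers, dates, type names and proof values are invented for illustration and do not come from the article.

```typescript
// Illustrative W3C Verifiable Credential for the course-completion example above.
// All identifiers and the proof are invented; a real credential would be issued
// and signed by the education platform.
const courseCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "CourseCompletionCredential"],
  issuer: "did:web:education-platform.example",
  issuanceDate: "2023-11-01T00:00:00Z",
  credentialSubject: {
    id: "did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
    course: "Intro to Decentralized Identity",
    progress: 100,
    grade: "A",
  },
  proof: {
    type: "EcdsaSecp256k1Signature2019",
    created: "2023-11-01T00:00:00Z",
    verificationMethod: "did:web:education-platform.example#key-1",
    proofPurpose: "assertionMethod",
    jws: "eyJ...signature-omitted", // placeholder
  },
};
```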
However, unlike blockchains and databases, verifiable credentials are not storage networks themselves and therefore can be saved and later retrieved for verification purposes in a wide variety of ways.
ComposeDB vs. Verifiable Credentials (and other claim formats)
I mentioned earlier that schema definitions (once deployed to the Ceramic network) offer immutable and publicly available data formats that enforce constraints for all subsequent instances. For example, anyone using ComposeDB can deploy a model definition to assert an individual's course completion and progress, and similarly, any participants can create document instances within that model's family. Given the cryptographic signatures and immutable model instance controller identity (automatically attached to each Ceramic stream commit, as discussed above), you can start to see how the qualities verifiable credentials set out to provide, like tamper-evident claims and credential provenance, are inherent to ComposeDB.
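A sketch of such a model definition is shown below using ComposeDB's GraphQL schema syntax; the directive names reflect the ComposeDB documentation as I recall it, and the field names are invented, so treat this as illustrative rather than canonical. Once deployed, a definition like this cannot change, and every document created from it carries its controller's signature in its commit log.

```typescript
// Hypothetical ComposeDB model for a course-completion claim.
const courseCompletionSchema = `
  type CourseCompletion @createModel(
    accountRelation: LIST,
    description: "A claim about a student's progress in a course"
  ) {
    # The controlling account is attached to every commit automatically;
    # this field simply exposes it in queries.
    student: DID! @documentAccount
    courseName: String! @string(maxLength: 200)
    progress: Int!
    completed: Boolean!
    issuedAt: DateTime!
  }
`;
```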
Tamper-Proof
Like a verifiable credential, each commit within a given Ceramic stream is immutable once broadcasted to the network. Within the context of a model instance document within ComposeDB, while the values within the document are designed to be mutated over time, each commit is publicly readable, tamper-evident, and cryptographically signed.
Inherent Origin
We’ve discussed this extensively above—each event provides publicly-verifiable guarantees about the identity of the controlling account.
Publicly Available
Unlike verifiable credentials, which offer just a standard, ComposeDB allows developers both to define claim standards (using schema definitions) and to make those instances publicly available to be read and confirmed by other network participants. ComposeDB is therefore also a public schema registry in itself.
Trustworthiness
In addition to the specific comparisons to other data storage options and verifiable claim standards, what qualities does ComposeDB offer that enable anyone to audit, verify, and prove the origin of data it contains? While parts of this section may be slightly redundant with the first half of this article, we'll take this opportunity to tie these concepts together in a more general sense.
Auditable, Verifiable, and Provable
For trust to be equitably built in a peer-to-peer network, the barrier to entry for running audits must be sufficiently low, in terms of both cost and complexity. This holds especially true when auditing and validating the origin of data within the network. Here are a few considerations and trade-offs related to ComposeDB's auditability.
No Cost Barrier With Open Access to Audit
Developers building applications on ComposeDB do not need to worry about cost-per-transaction fees related to the read/write activity their users perform. They will, however, need to architect an adequate production node configuration (that should be built around the volume a given application currently has and how it expects to grow over time), which will have separate network-agnostic costs.
This also holds for auditors (or new teams that want to audit data on Ceramic before building applications on that data). Any actor can spin up a node without express network permissions, discover streams representing data relevant to their business goals, and begin to index and read them. Whether an organization chooses to build on ComposeDB or directly on its underlying network (Ceramic), as long as developers understand the architecture of event logs (and specifically how to extract information like cryptographic signatures and controlling accounts), they will have fully transparent insight into the provenance of a given Ceramic dataset.
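For example, an auditor who has pulled a signed commit from a node could check its DAG-JWS signature independently; the sketch below assumes the dids, key-did-resolver and pkh-did-resolver packages, and the commit value is a placeholder standing in for data read from the event log.

```typescript
// Independently verifying the DAG-JWS signature on a Ceramic commit.
import { DID } from "dids";
import { getResolver as getKeyResolver } from "key-did-resolver";
import { getResolver as getPkhResolver } from "pkh-did-resolver";

// A resolver-only DID instance is enough for verification (no signing keys needed).
const did = new DID({
  resolver: { ...getKeyResolver(), ...getPkhResolver() },
});

// Placeholder for a DAG-JWS structure read from a stream's event log.
declare const signedCommit: { payload: string; signatures: unknown[] };

const verified = await did.verifyJWS(signedCommit as any);
console.log("signed by:", verified.kid); // the key/controller that produced the signature
```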
Trade-Off: Stream Discoverability
While fantastic interfaces, such as s3.xyz, have been built to improve data and model discoverability within the Ceramic Network, one challenge Ceramic faces as it continues to grow is how to further enable developers to discover (and build on) existing data. More specifically, while it’s easy to explain to developers the hypothetical benefits of data composability and user ownership in the context of an open data network (such as the data provenance-related qualities we’ve discussed in this post), showing it in action is a more difficult feat.
Structured
The Ceramic Network occupies territory that does not fit neatly into the on- or off-chain realm. Just as the Ethereum Attestation Service (EAS) mentions on its Onchain vs. Offchain page, a “verifiable data ledger” category of decentralized storage infrastructure is becoming increasingly appealing to development teams who want to gain the benefits of both credible decentralization and maximum performance, especially when dealing with data that's meant to mutate over time.
As we discussed above, here’s a refresher on key insights into ComposeDB’s structure, and how these impact the provenance of its data.
Ceramic Event Logs
Ceramic relies on a core data structure called an event log, which combines cryptographic proofs (to ensure immutability and enable authentication via DID methods) and IPLD for hash-linked data. All events on the network rely on this underlying data structure, so whether developers are building directly on Ceramic or using ComposeDB, teams always have access to the self-certifying log that they can verify, audit, and use to validate provenance.
ComposeDB Schema Immutability
Developers building on ComposeDB also benefit from the assurances that schema definitions provide, based on the fact that they cannot be altered once deployed. While this may be an issue for some teams who might need regular schema evolution, other teams leverage this quality as a means to ensure constant structure around the data they build on. This feature therefore provides a benefit to teams who care strongly about both data provenance and lineage - more specifically, the origin (provenance) can be derived from the underlying data structure, while the history of changes (lineage) must conform to the immutable schema definition, and is always available when accessing the commit history.
A Decentralized Data Ledger
Finally, Ceramic nodes support the data on Ceramic and the protocol—providing applications access to the network. For ComposeDB nodes, this configuration includes an IPFS service to enable access to the underlying IPLD blocks for event streams, a Ceramic component to enable HTTP API access and networking (among other purposes), and PostgreSQL (for indexing model instances in SQL and providing a read engine). All Ceramic events are regularly rolled into a Merkle tree and the root is published to the Ethereum blockchain.
Within the context of data provenance, teams who wish to traverse these data artifacts back to their sources can use various tools to publicly observe these components in action (for example, the Ceramic Anchor Service on Etherscan), but must be familiar with Ceramic’s distributed architecture to understand what to look for and how these reveal the origins of data.
Trade-Off: Complexity
There’s no question that the distributed nature of the Ceramic Network can be complex to comprehend, at least at first. This is a common problem within P2P solutions that uphold user-data sovereignty and rely on consensus mechanisms, especially when optimizing for performance.
Trade-Off: Late Publishing Risks
As described on the Consensus page in the Ceramic docs, all streams and their potential tips are not universally knowable in the form of a global state that’s available to all participants at any point in time. This setup does allow for individual participants to intentionally (or accidentally) withhold some events while publishing others, otherwise known as engaging in ‘selective publishing’. If you read into the specifics and the hypothetical scenario outlined in the docs, you’ll quickly learn that this type of late publishing attack is illogical in practice since streams can only have one controlling user, so that user would need to somehow be incentivized to attack their data.
What does this have to do with data provenance? While the origin of Ceramic streams (even in the hypothetical situation of a stream with two divergent and conflicting updates) is at all times publicly verifiable, the potential for this type of attack has more to do with the validity of that stream’s data lineage (which is more concerned with tracking the history of data over time).
Portable
Finally, another important notion to consider in the context of data provenance and P2P software is replication and sharing. Developers looking to build on this class of data network should not only be concerned with how to verify and extract the origin of data from the protocol but also need assurances that the data they care about will be available in the first place.
ComposeDB presumes that developers will want options around the replication and composability of the data streams they will build on.
Node Sync
You’ll see on the Server Configurations page that there's an option to deploy a ComposeDB node with historical sync turned on. When configured to the 'off' position, a given node can still write data to a model definition that already exists in the network, but it will only index model instance documents written by that node. Conversely, when toggled 'on', this setting will sync data from other nodes and write data to a canonical model definition (or many). The latter enables the 'composability' factor that development teams can benefit from—this is the mechanism that allows teams to build applications on shared, user-controlled data.
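An illustrative excerpt of the relevant node configuration is shown below; the field names are reproduced from memory of the Ceramic daemon's JSON configuration and may differ between releases, so treat this purely as a sketch of where the toggle lives.

```typescript
// Sketch of a node configuration excerpt (field names are approximate).
const daemonConfigExcerpt = {
  network: { name: "mainnet" },
  indexing: {
    // Relational database used to index model instance documents.
    db: "postgres://ceramic:password@localhost:5432/ceramic",
    // true: also sync and index documents written through other nodes for the
    //       models this node indexes (enables cross-application composability).
    // false: only index model instance documents written through this node.
    "enable-historical-sync": true,
  },
};
```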
Recon (Ceramic Improvement Proposal)
There is an active improvement proposal underway, called Recon, to improve the efficiency of the network. In short, development related to this proposal aims to streamline the underlying process by which nodes sync data, offering benefits such as significantly lifting the load off of nodes that are uninterested in a given stream set.
Trade-Off: Data Availability Considerations
Of course, the question of data portability and replication necessitates conversation around the persistence and availability of information developers care about. In Ceramic terms, developers can provide instructions to their node to explicitly host commits for a specific stream (called pinning), improving resiliency against data loss. However, developers should know that if only one IPFS node is pinning a given stream and it disappears or gets corrupted, the data within that stream will be lost. Additionally, if only one node is responsible for pinning a stream and it goes offline, that stream won’t be available for other nodes to consume (which is why it’s best practice to have multiple IPFS nodes running in different environments pinning the same streams).
Webinar
Tuesday, December 5 • 12:00 PM ET
According to recent surveys, a majority of higher education institutions are planning to replace or otherwise overhaul their ERP system in the next 5 years.
As institutions prepare to make these highly consequential decisions, we invite you to join this session to explore the latest capability models, systems planning strategies, and outcomes-focused procurement. In this session, we’ll outline how the shift to a new ERP system can act as a catalyst for integrated, collaborative strategic planning, enhancing the entire higher education ecosystem, from enrollment to alumni engagement.
In this webinar, you’ll learn:
The importance of planning based on student and institutional outcomes
How capability models can act as a reference point for institutional collaboration
How ERP transformation can inspire the achievement of digital transformation outcomes
How EdgeMarket supports integrated strategic planning and procurement
How Edge can support the development of state of the art strategic plans
This session will empower you to consider whether your current strategic direction aligns your institution with the evolving digital landscape, and plan to not just adapt but embrace the future of higher education success.
Register Now »
The post From ERP to Ecosystem: Charting a Path to Success Through Higher Education Strategy, Procurement, & Technology Integration appeared first on NJEdge Inc.
It’s time for another episode of the Identity at the Center Podcast! We had the privilege of interviewing Dave Middleton, Senior Vice President at Bank of America, who is responsible for IAM and Cryptography Product Management. Dave shared invaluable insights on various topics related to identity and access management (IAM), including the importance of balancing security and usability in IAM solutions, the evolving landscape of Identity Governance and Administration (IGA), and the role of technologies like Zero Standing Privilege (ZSP) and User Behavior Analytics (UBA). Tune in to this fantastic episode on idacpodcast.com or in your podcast app and gain valuable insights from Dave's expertise in the field.
In a startling revelation, an anonymous hacker has claimed to have accessed the biometric digital ID numbers and other sensitive personal information of approximately 815 million Indian citizens. This breach is reported to be the largest in the history of India’s Aadhaar, the world’s most extensive biometric digital ID system. The Aadhaar system, which has been a cornerstone of India’s digital infrastructure, is used for everything from tax filings to accessing social services. The implications of such a breach are profound, touching on issues of privacy, security, and trust in digital systems.
The Magnitude of the Breach
Aadhaar, managed by the Unique Identification Authority of India (UIDAI), has been both lauded for its inclusivity and criticized for potential privacy infringements. It uses biometric data such as fingerprints and iris scans, along with demographic information, to create a unique identity number for each citizen. With the scale of the reported breach, the personal data of more than half the population of India could be at risk. This could lead to widespread identity theft, unauthorized access to bank accounts, and fraud on an unprecedented scale.
The compromised data reportedly includes not just names and ID numbers, but also linked services and biometric data, exponentially magnifying the potential for misuse. This incident raises serious concerns about centralized databases and the mechanisms in place to protect such sensitive information.
Why This Could Not Happen with SSI
In the wake of this massive data breach, it’s crucial to understand why such an event would be highly unlikely with Self-Sovereign Identity (SSI) systems. SSI is a user-centric model that enables individuals to own, control, and present their identity without relying on any centralized authority. It represents a transformative approach to personal data management and security in the digital era. Here’s why SSI systems offer a robust defense against the type of breach Aadhaar experienced:
Decentralization: Unlike Aadhaar, which relies on a centralized database, SSI is inherently decentralized. Personal data is stored on users' devices or on distributed ledgers, ensuring that a single breach does not expose the information of millions.
User Control and Consent: SSI gives individuals control over their data. They consent to share specific information with entities they trust and for specified purposes. This reduces the amount of data that can be exposed in any interaction.
Minimal Disclosure: SSI is built on the principle of minimal disclosure, meaning that users only need to share the information that is absolutely necessary. For instance, a user can prove their age without revealing their birth date (a toy sketch of this idea follows this list).
No Single Point of Failure: Because SSI does not depend on a central repository of data, it lacks a single point of failure. This makes large-scale breaches implausible as each individual's data is siloed and protected through robust encryption.
Verifiable Credentials: SSI relies on cryptographic techniques and blockchain technology, where credentials can be verified without revealing any underlying personal data. Even if a data request is intercepted, the information remains secure.
Recovery and Revocation: In the SSI framework, users can recover their identities through independently established recovery networks, and they can revoke compromised credentials without affecting other aspects of their digital identity.
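As a toy illustration of the minimal disclosure point above, the sketch below derives only the fact a verifier needs (age over 18) from data that never leaves the holder; the names and shapes are invented, and real systems would use selective-disclosure or zero-knowledge credential formats to make the derived claim verifiable.

```typescript
// Minimal disclosure, conceptually: the verifier sees a derived fact, not the source data.
interface FullIdentityRecord {
  name: string;
  dateOfBirth: string; // stays with the holder
  nationalId: string;  // stays with the holder
}

interface AgeOver18Claim {
  subject: string;     // a DID, not a national ID number
  ageOver18: boolean;
}

function deriveAgeClaim(record: FullIdentityRecord, subjectDid: string, now: Date): AgeOver18Claim {
  const birth = new Date(record.dateOfBirth);
  const eighteenthBirthday = new Date(birth);
  eighteenthBirthday.setFullYear(birth.getFullYear() + 18);
  return { subject: subjectDid, ageOver18: now.getTime() >= eighteenthBirthday.getTime() };
}

// The holder shares only the derived claim with a verifier.
const claim = deriveAgeClaim(
  { name: "A. Citizen", dateOfBirth: "1990-05-12", nationalId: "redacted" },
  "did:key:z6Mk...holder", // hypothetical holder DID
  new Date(),
);
console.log(claim); // { subject: 'did:key:z6Mk...holder', ageOver18: true }
```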
Looking Ahead: The Role of SSI in Safeguarding Identity
The alleged Aadhaar breach is a stark reminder of the risks associated with centralized identity management systems. It underscores the necessity of adopting more secure, privacy-preserving identity solutions like SSI. Nations and organizations around the world are exploring SSI as a viable alternative that empowers citizens while bolstering security.
In conclusion, while the Aadhaar breach exposes the vulnerabilities inherent in centralized identity systems, it also serves as a critical lesson for the global community. It highlights the urgency of transitioning to more resilient, decentralized identity models like SSI, where the principles of user control, privacy, and security are not just ideals, but foundational features. As digital identities become more pervasive in our daily lives, adopting SSI could well be the paradigm shift needed to safeguard the personal data of individuals around the globe.
The post Understanding the Aadhaar Data Breach and the Fortitude of Self-Sovereign Identity (SSI) Systems appeared first on Lions Gate Digital.
With four weeks of hacking left, there's still plenty of time to form a team and register for the ongoing DIF Hackathon. This week we had some great sessions including an Intro to DIDs with Markus Sabadello, an Intro to Veramo with Mircea Nistor and an Intro to Trinsic with JP George. All Sessions are now available on YouTube.
Next week we have even more great sessions that you don't want to miss. Here's the lineup:
Ontology’s ONT ID Challenge: Join us to learn more about Ontology’s ONT ID challenge! (Eventbrite)
An Intro to TBD’s Web 5 SDK and Decentralized Web Nodes: Join us to find out what all the buzz about Web 5 is about. (Eventbrite)
Polygon ID’s Iden3 Protocol Challenge: Come learn more about Polygon ID’s Iden3 protocol challenge. (Eventbrite)
An Intro to DIDComm and the Veramo DIDComm Package: Join us to learn more about DIDComm and the Veramo DIDComm Package. (Eventbrite)
We're also pleased to share that for Hackathon participants using the Web5 SDK, TBD will be holding office hours in their Discord Server (https://discord.gg/tbd). Feel free to drop by if you need live help. Here are the dates:
November 15, 2023, 11a EST
November 17, 2023, 11a EST
November 22, 2023, 11a EST
November 29, 2023, 11a EST
We look forward to seeing you in Discord!
Best,
The DIF Team
Authors: Atul Tulshibagwale (SGNL), Apoorva Deshpande (Okta), and Shayne Miel (Cisco Duo).
A new draft of the Shared Signals Framework has been released for public review. Here’s how it is different from the previous version.
The OpenID Shared Signals Working Group (SSWG) has made important changes to the Shared Signals Framework (SSF) from the first implementer’s draft that was published in June 2021. The new draft entered the 45-day public review period on October 13, 2023.
Changes Summary
The new draft is available for review here. Here are the main areas of change:
Specification Name
The draft is now called the “Shared Signals Framework” (SSF), instead of the previous name – “Shared Signals and Events Framework”.
Subjects
Top-level sub_id claim: The draft now complies with the SubIds recommendation of using sub_id as the subject name and places it at the top level of the SET. Existing events continue to have the subject member within the event, but new event types need not have this subject. (An illustrative, non-normative SET payload appears below, after the change summary.)
Format in complex subjects: The complex subject types now have the following field in them: "format": "complex"
Transmitter Metadata
Well Known URL: The well-known URL of the Transmitter is now at /.well-known/ssf-configuration instead of the previous location, which was /.well-known/sse-configuration.
Spec Version: A spec version field is now added to the Transmitter Configuration Metadata (TCM). This is set to the implementer's draft spec version or the final spec version of the document that the Transmitter supports.
Authorization Scheme: An authorization scheme has been added to the TCM to specify how the Transmitter authorizes Receivers.
Optional jwks_url: jwks_url is now optional.
Streams
Multi-Stream Support: The draft now supports multiple streams between the same Transmitter and Receiver. The API has been modified to support creating such streams. The draft still allows a Transmitter to support a single stream per Receiver. However, in either case (single-stream or multi-stream Transmitters), the stream needs to be created. Earlier, Receivers only needed to update the stream configuration in order to establish communication. It is recommended that the endpoint_url is unique.
Poll Delivery URL: The draft clarifies that the Transmitter must supply the endpoint_url field in the stream creation process. It also defines how the Transmitter can specify the poll URL.
Status Restriction: Subjects may no longer be included in Stream Status methods.
Receiver Supplied Description: The stream now includes a receiver-supplied description.
“Control Plane” Events Always Included: Clarified language that the control plane events (Verification and Stream Updated) are always delivered in the stream regardless of the stream configuration.
Events Delivered: The draft specifies that events_delivered is a subset (not necessarily a proper subset) of the intersection of events_supported and events_requested. Earlier, it was required to be the intersection.
Reason in Status: The stream status now includes an optional reason string.
Stream Events
No Subjects in SSF “Control Plane” Events: The Stream Verification and Stream Updated events restrict the subject in these events to only reference the stream as a whole.
Security Considerations
Authorization: The draft no longer recommends using OAuth 2.0 or the client credentials grant flow.
Audience: Events are no longer recommended to have the OAuth 2.0 Client ID as the audience.
Feedback
We welcome your feedback on this draft. Please write to Atul Tulshibagwale, co-chair of the SSWG, with your feedback before the review period ends on November 27, 2023.
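To make the subject and discovery changes concrete, here is an illustrative (non-normative) Security Event Token payload using the new top-level sub_id claim and the "format": "complex" marker, alongside the renamed well-known configuration URL; all identifiers, timestamps and the event type shown are invented for illustration.

```typescript
// Illustrative SET payload under the new SSF draft (values are made up).
const setPayload = {
  iss: "https://transmitter.example.com",
  aud: "https://receiver.example.com",
  iat: 1699999999,
  jti: "756E69717565206964656E746966696572",
  // Subject identified at the top level of the SET (per the SubIds recommendation).
  sub_id: {
    format: "complex",
    user: { format: "email", email: "user@example.com" },
    device: { format: "iss_sub", iss: "https://idp.example.com", sub: "device-1234" },
  },
  events: {
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
      event_timestamp: 1699999999,
    },
  },
};

// Transmitter Configuration Metadata is now discovered at the renamed location
// (previously /.well-known/sse-configuration).
const transmitterConfigUrl =
  "https://transmitter.example.com/.well-known/ssf-configuration";
```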
About OpenID Foundation
The OpenID Foundation’s vision is to help people assert their identity wherever they choose. And our mission is to lead the global community in creating identity standards that are secure, interoperable, and privacy-preserving.
Founded in 2007, the OpenID Foundation (OIDF) is a non-profit open standards body developing identity and security specifications that serve billions of consumers across millions of applications.
Learn more here: https://openid.net/foundation/
The post What’s New in the Shared Signals Framework? first appeared on OpenID Foundation.
Laying the foundations for safety and quality
Location: online
Date and time: 21 November, 10:00 - 11:30 am (Australian Eastern Daylight Time - AEDT)
Join the webinar hosted by the National GS1 Traceability Advisory Group (NGTAG). Speakers include Bronwyn Weir, Board member of the International Building Quality Centre and co-author of the Building Confidence report, and Paul Reichl from the Major Transport Infrastructure Authority, discussing the significance of traceability for Australian building and construction products. Topics will also cover meeting new sustainability requirements whilst boosting productivity.
Register today and join live or re-watch later. Industry stakeholders are also welcome!
We are excited to share our final special episode of the Identity at the Center podcast from the FIDO Alliance Authenticate 2023 conference series!
We had the privilege of taking the stage as part of the opening keynote, joined by three incredible identity product managers: Mahendar Madhavan, Daniel Grube, and Christiaan Brand.
During the discussion, we got into the adoption of FIDO authentication, with a focus on passkeys. It was fascinating to hear valuable insights from our guests about their roles at their respective organizations and their firsthand experiences with implementing FIDO authentication.
But it wasn't all serious business! We also had some lighthearted banter, discussing our hobbies and sharing personal experiences like hiking Yosemite's half-dome. And of course, we couldn't resist asking our guests some fun questions, like which song Daniel would perform to go viral on TikTok and automatically enroll everyone in passkeys!
We want to extend our heartfelt thanks to Andrew Shikiar, Megan Shamas, and Adrian Loth for their invaluable help in bringing this show to the conference. It wouldn't have been possible without them!
Episode #244 is available now at idacpodcast.com and in your favorite podcast app.
At the start of the Covid pandemic, a mutual aid group contacted us to ask for support to better understand student needs during the lockdown period.
Specifically, they hoped to identify where, geographically, students were likely facing significant challenges (as determined by indicators including socioeconomic status, internet connectivity, and local COVID-19 infection rates). To achieve this objective, the team compiled publicly available health, census, and geospatial data using geographic information system (GIS) software. This analysis then enabled them to map out regions where students were most likely to require assistance in the form of food or internet vouchers.
Using Geographic Information Systems (GIS) to gather, manage, analyse and visualise spatial and geographic data
Geographic Information Systems (GIS) is a framework for gathering, managing, and analyzing spatial and geographic data. Essentially, GIS integrates multiple types of data, from geographical and topological data to statistical and qualitative information, into a unified system that allows for sophisticated spatial analysis and visualization.
While traditional data sets may contain location information (“attributes”) like zip codes or city names, GIS takes this a step further by including attributes about geographic areas. For example, a common use involves mapping the amount of flooding in a given location. These separate bits of information can then be layered on top of each other for displaying on maps or 3D models. The complexity of this data requires specialized data formats that can hold all the necessary layers, as well as software for collection, manipulation, analysis, and visualization.
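As a small illustration of the layering idea, the sketch below defines two thematic layers over the same location as standard GeoJSON feature collections; the coordinates and attribute values are invented.

```typescript
// Two thematic GIS layers over the same point, expressed as GeoJSON.
type Feature = {
  type: "Feature";
  geometry: { type: "Point"; coordinates: [number, number] }; // [longitude, latitude]
  properties: Record<string, unknown>;
};

type FeatureCollection = { type: "FeatureCollection"; features: Feature[] };

const floodLayer: FeatureCollection = {
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      geometry: { type: "Point", coordinates: [-76.53, 3.42] },
      properties: { floodDepthMetres: 0.8 },
    },
  ],
};

const connectivityLayer: FeatureCollection = {
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      geometry: { type: "Point", coordinates: [-76.53, 3.42] },
      properties: { householdsWithoutInternetPct: 47 },
    },
  ],
};

// A GIS overlays such layers on a base map so attributes from different
// sources can be compared for the same location.
console.log(floodLayer.features[0].properties, connectivityLayer.features[0].properties);
```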
What GIS can be used for in the social- and environmental-justice sectors
Earlier GIS systems were primarily used by specialized institutions because they required expensive, high-end hardware and software. However, advances in technology have made GIS more affordable and accessible, enabling it to be run even on consumer-grade computers and smartphones.
The creation of cloud-based GIS solutions has also reduced the need for in-house servers and storage, thereby reducing the potential cost and complexity of deploying GIS technology. Additionally, drones and IoT sensors have made remote data collection more do-able at a lower budget.
For those in the social and environmental justice sectors, this has meant increased access to technologies that can help them achieve their mission.
Land rights organisation Cadasta, for example, notes that GIS tools can “help communities gather digital data required for legal land claims while also enabling real-time mapping and monitoring of critical assets such as biodiversity, forest cover, natural resources, and human settlements”. Another organisation, Cultural Survival, has used field-based participatory mapping techniques combined with satellite imagery, aerial photographs, and GIS technology in order to explore economic development strategies for indigenous peoples and their territories in the Amazon Basin.
Potential risks to consider when using GIS technology to map and analyse information about vulnerable populations
With the democratization of these technologies also comes increased data collection on vulnerable populations – which means that organisations using or considering using this technology should do so only if they have considered the risks involved and taken steps to mitigate them (in some cases, a decision might need to be made to not use the technology at all). The list below is offered as a guide to some of the key areas to consider.
Privacy and Security: One of the most immediate concerns around using GIS is the potential for the exposure of sensitive personal information. The geolocation capabilities in GIS data can expose details about individuals’ whereabouts and movements. Without rigorous security protocols, this data could be exposed, putting at risk those who may be already vulnerable.
Ethical Considerations: GIS technologies also raise ethical questions, particularly regarding the potential for misuse of information. Real-time tracking capabilities can limit personal privacy and freedom, especially when used without consent or adequate transparency. Additionally, if the data ends up in the wrong hands, information intended to be used to protect a resource can be used by poachers to easily locate it.
Discrimination: Another risk lies in the potential for GIS data to perpetuate or even exacerbate existing inequities. If GIS data collection methodologies or interpretation algorithms reflect societal biases, the technology can inadvertently contribute to systemic discrimination.
Digital Divide: While GIS technologies are becoming increasingly accessible, there is still a considerable gap in who has the resources and knowledge to leverage these tools effectively. This digital divide means that communities lacking the requisite technological infrastructure or training may be further marginalised, thereby failing to benefit from the advantages that GIS technologies can offer.
Environmental Impact: Lastly, the widespread adoption and use of GIS technologies contribute to the environmental footprint of the digital landscape. The energy-intensive nature of data centres required to process and store voluminous GIS data is a growing concern in the age of climate change.
Should your organisation use GIS technology?
Affordable advanced mapping and real-time monitoring capabilities can empower organisations in the social- and climate-justice sectors to track and visualize trends and allocate resources efficiently. However, it's important to remember that with more advanced data collection and practices come increased risks, particularly for the privacy, security, and well-being of already vulnerable populations. Ethical questions around consent and potential misuse of data need to be surfaced early. It will be crucial for organisations to take the time at the inception of any project that uses geospatial data to carefully assess potential risks and develop strategies for their mitigation.
If you’d like to learn more about GIS systems, our friends at CartONG have made a toolkit on working with GIS: https://cartong.pages.gitlab.cartong.org/learning-corner/en/intro_gis
If you’re working for an organisation focused on environmental or social justice, you’re also welcome to schedule a no-fee call with us to talk through your ideas! We look forward to hearing from you.
Photo by Suho Media on Unsplash
The post Understanding the Impact of Geospatial Data in Social and Climate Justice first appeared on The Engine Room.
At the intersection of blockchain and digital identity, teams and enthusiasts are challenging the status quo of online reputation and authentication. We recently wrapped up the IdentityHackathon, which showcased the potential of Web3 technologies to revolutionize digital identity.
Collaborative Spirit in the Web3 Space
The Ceramic team was excited to collaborate with the NEWFORUM team for the hackathon, in addition to an impressive lineup of ecosystem partners, such as Newcoin, Jokerace, 1kx, Lit Protocol, Disco.xyz, Guild.xyz, Sismo, Gitcoin, Intuition, Orbis, Cyberconnect, and Ethereum Attestation Service.
The challenge? To get hackers to utilize the tools and technologies offered by these aforementioned partners to create novel applications that transform our understanding and verification of online identity and reputation.
Spotlight on Ceramic-Integrated Solutions
We received exceptional submissions! A special mention to the following projects that incorporated Ceramic into their frameworks:
Plurality. Coming in at 2nd place, Plurality pioneers a Web3 onboarding protocol. Leveraging tools like Orbis and Lit Protocol, it paves the way for social media users and content creators to transition seamlessly to Web3 social platforms, anchoring their pre-existing social reputation and interests.
Respect Protocol. Securing 3rd place, Respect Protocol empowers users to sculpt their digital identity and weave a reputation graph reminiscent of SSL certificates. This innovation fosters easier navigation and interaction within the digital ecosystem.
BrainShare. Coming in at 5th place, BrainShare is a decentralized protocol championed by VeramoLabs. It merges the power of Decentralized Identifiers (DIDs) and W3C Verifiable Credentials to champion data portability and harness composable reputation.
Shinjitsu. Dedicated to standardizing rank and expertise within the Brazilian Jiu Jitsu community, Shinjitsu utilizes blockchain to combat rank misrepresentation and foster fairness.
Orbis Chat. This application allows users to craft an AI Avatar embedded with personalized training data and their voice model. Delve deeper into users' profiles, conversing with their AI Avatar, to uncover more about their aptitudes and inclinations.
LinkedTrust. This platform, built with Ceramic, takes Web3 technology to the next level. It integrates signed attestations within a dynamic social graph. More than just stating claims, LinkedTrust is about real-time validation, ensuring every impact declared is both genuine and demonstrable.
Ampy. Using Ceramic, Ampy sets out to craft the first-ever music social graph. It bestows every music aficionado with a digital music passport, enabling them to chronicle and validate their musical journey and fandom.
Milky Pink Space. Built using the synergy of Guild.xyz, Ceramic, and Disco.xyz, Milky Pink Space stands out as a novel crypto journal. It touches upon arts, philosophy, and the multifaceted topics that underpin and influence Web3. By spotlighting identity and relationships on chain through in-depth interviews, the project fosters a richer understanding of the decentralized digital realm.
Forward and Beyond
A heartfelt thank you to all the participants of the IdentityHackathon. Your projects not only showcased the immense potential of Web3 technologies but also laid the groundwork for a more secure and decentralized digital future. We can't wait to see the updates on your projects in the near future!
On October 26, we gathered online with over 20 organisers, activists and journalists from 9 countries in Latin America. We were inspired to hear from speakers Nathaly Espitia (Internews) and Maria Juliana (Universidad Icesi, Cali, Colombia) from Colectivo Noís Radio, Júlia Rocha from Artigo 19, and Ramiro Alvarez Ugarte from CELE, as well as from participants in the call, who joined us in an open discussion about challenges we’re facing in the information ecosystem in the region and opportunities we see for action. In this blog post, we’re sharing some of the key takeaways from our conversation.
A healthy information ecosystem is one where people are able to listen and talk to each other
Our conversation started with learnings from Maria Juliana and Nathaly Espitia from Noís Radio, a collective from Cali (Colombia) that has been working since 2009 on producing live radio programs using voices, music, live sounds and performative actions. They shared that they don’t see themselves as a “traditional” radio station, but rather as a “medio de conversación” (or conversation medium) that creates spaces for conversation and listening, not just one-sided sharing of information.
For Noís Radio, the key to a healthy information ecosystem is that people are able to listen and talk to each other. This approach was echoed by other participants in the call: for Ramiro Ugarte from CELE, it is essential that we think about how to generate more dialogue and conversations, because building spaces where people can have dialogue and connect face-to-face is key to combating polarisation and achieving a healthier information ecosystem.
Building trust is a slow-burning process (but it’s worth it!)
During the call, we talked about some of the factors that serve as a backdrop to our broken information system: the region is going through a period of heightened lack of trust in media and institutions, there is acute political polarisation in various countries, and citizens are dissatisfied with political systems and experiencing information overload.
To respond to this scenario, many participants spoke about the value of creating information initiatives that include processes designed to rebuild trust with people in our communities. For example, in their work, Noís Radio doesn’t approach communities with a fixed, previously defined project – instead, the collective invites people to join them in conversations and share what their needs are, then they work on establishing relationships of trust and “pass the microphone to people in their communities”. During the national strike in Colombia, in 2021, for example, Noís Radio was in Cali recording shows from “puntos de resistencia” (resistance points) with social organisations, young people, artists, mothers and other protesters who trusted their microphones and shared their voices. (You can listen to the shows here!).
When talking about their experience, Nathaly and Maria Juliana shared that, though the trust building process can be slow, it has proven to be essential, because it gives you an important foundation when you need to organise during a moment of crisis. Having already built trust with the people in their communities over time and having meaningfully involved people in their work in the past meant that Noís Radio were able to mobilise quickly in a difficult political context, generating important conversations within their communities.
The value of local information initiatives and fostering a sense of community
The sense that community-led efforts are fundamental to a stronger information ecosystem in the region was echoed by Júlia Rocha, who leads the Access to Information and Transparency team at Artigo 19 in Brazil. In her perspective, “local solutions are the ones that work the most”. In a region where the notion of public interest information has often been shaped by corporate interests, as phrased by Júlia, investing in local, independent, community-led communications is indispensable. Since 2020, her organisation has been supporting the work of popular communication initiatives throughout the country with the campaign #CompartilheInformação, which has given grants to groups providing trustworthy information about health, democracy and elections and, soon, the environment.
Brazilian journalism organisations Agência Mural and Énois (who also joined our call!) are also showing how valuable local initiatives can be for the information ecosystem. Seeing that people lack access to information at the local, city level, they argue that there needs to be more support for the “development of local initiatives that contribute to reducing news deserts”, which provides citizens with the information they need in order to “participate in public life from the territory where they live”. As Izabela Moi and Nina Weingrill put it: “quality local coverage creates and sustains the feeling of belonging to a community and opens spaces for action and citizen participation.“
We see this in the work of Noís Radio too, in their efforts to empower more independent voices in the information ecosystem. In our chat, Nathaly and Maria Juliana talked about the importance of creating initiatives that go beyond capital cities, as well as how crucial it is to support the work coming from indigenous communities, Afro-Colombian communities and migrant communities.
During the call, we also heard from Agencia Baudó in Colombia, which is doing “journalism that connects communities” by working with community storytellers who are not only providers of information, but also local leaders working in their communities for social transformation. Another example shared is the work of +COMUNIDAD, an Argentinian “solutions journalism medium” that investigates, finds and tells stories of people and cities solving their challenges and inspiring others to transform themselves.
Participants also talked about the importance of counterbalancing the lack of “official” data about certain topics with the production of community-driven data or initiatives that democratise access to information, especially in regions that lack access to information about issues such as sexual and reproductive health and rights or the environment. To that end, we talked about initiatives like Artigo 19’s map made for Brazilian women to know where to access abortion care in their states – information that had been previously unavailable.
Fighting harmful trends at a regional level and more support for sustained, long-term alliances
We discussed some of the regional trends in information disorder and how civil society in the region is witnessing the same types of disinformation being shared in many countries. Research led by Chequeado (Argentina), La Silla Vacía (Colombia), Lupa (Brazil), Ocote (Guatemala) and OjoPúblico (Perú), for instance, has shown how groups are organising to spread falsehoods about gender-related issues in Latin America. The recent investigative project Mercenarios Digitales, led by a cross-border and collaborative media alliance, has gathered evidence on the impact of an international network of disinformation actors operating in the region.
In this context, participants shared that it is key to foster more multi-country spaces where civil society organisations, journalists and human rights defenders can work in a coordinated way to build a healthier information ecosystem. Similarly, participants also talked about how important it could be to have regional alliances that would allow them to foresee the types of disinformation and attacks that are emerging on the continent.
Join our next community call!
As we make our way through this multi-year project to contribute to a healthier information ecosystem, our team is in awe of the amazing work being done by journalists, communicators, civil society, community organisers and activists who are figuring out creative ways to make sure valuable information is reaching the people in their communities.
Our research for this project is just getting started and we’re looking forward to continuing working with many of you as we build this work. Our next community call is happening on November 23, at 9am Ciudad de México, 10am Bogotá, 12pm São Paulo. (For this call, the main language of communication will be Spanish!)
Register for the call
We’ll continue to talk about what is needed for a stronger, healthier information ecosystem, this time focusing on questions like: What does it take to rebuild trust and foster a sense of community? And how do we build a less fragmented, more community-driven information ecosystem in our region? If you’re part of a collective or civil society organisation working to provide crucial information and/or combat disinformation in your communities, are working on local journalism or popular communications, or are building community-driven information initiatives, join us!
Photo by Juan Saravia on Unsplash
The post A slow-burning process: to improve the information ecosystem we need to rebuild trust and focus on local, community-driven initiatives first appeared on The Engine Room.

Over the past few years, FIDO has continued its expansion as an authentication standard among eIDAS-compliant identification solutions across the EU. Back in 2020, FIDO was deployed as part of an eID scheme by the Czech domain registry CZ.NIC’s identity provider MojeID, and this FIDO-based eID scheme was recognized as LoA Substantial and High by the Czech Ministry of the Interior. The year after, the Norwegian trust service provider Buypass deployed FIDO2 as an authentication standard for an eIDAS eID scheme of LoA Substantial and High; this solution has been accredited by the Norwegian digitalization agency and is now being rolled out in the Norwegian healthcare sector. In April 2023, the FIDO Alliance published a white paper that describes how FIDO can be used for the EUDI Wallet under the proposed eIDAS2 regulation. FIDO is thus gaining momentum as an authentication standard in the EU.
On top of these success stories, the FIDO standards have recently been referenced by two of the most respected EU organizations within cybersecurity and standardization: ENISA (the EU Cybersecurity Agency) and ETSI (the European Telecommunications Standards Institute).
In July 2023, ENISA published the report “Digital Identity Standards”. The report provides a comprehensive overview of digital identity standards, standardization organizations, and authentication protocols. More specifically, the report describes the FIDO Alliance as “an open industry association launched in February 2013 whose stated mission is to develop and promote authentication standards that ‘help reduce the world’s over-reliance on passwords’”. Furthermore, the ENISA report describes the FIDO standards FIDO2, FIDO U2F and FIDO UAF in technical detail. The ENISA report also explains the concepts of FIDO Authenticators, the FIDO Metadata Service, assertions with Relying Parties, and the WebAuthn and CTAP2 APIs. ENISA concludes that the maturity of the FIDO standards is high. The report reiterates and emphasizes the recommendation to use FIDO for two-factor authentication, which was published in 2022 in the joint CERT-EU and ENISA publication “Boosting your Organisation’s Cyber Resilience”.
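To make those building blocks a little more concrete, the browser side of a FIDO2/WebAuthn registration ceremony looks roughly like the sketch below. It is a minimal TypeScript illustration, not anything prescribed by the ENISA report: the relying party name, user details and challenge handling are placeholder assumptions, and in a real deployment the challenge and options would come from the relying party's server, which would also verify and store the returned attestation.

// Minimal sketch of a FIDO2/WebAuthn registration (passkey creation) in the browser.
// The rp name, user identifiers and challenge handling are illustrative assumptions;
// a real relying party generates the challenge server-side and verifies the response.
async function registerPasskey(): Promise<void> {
  // In practice the challenge is issued by the relying party's server.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example RP" },                  // relying party (assumed name)
      user: {
        id: new TextEncoder().encode("user-1234"), // opaque, stable user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },           // ES256
        { type: "public-key", alg: -257 },         // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",                   // discoverable credential, i.e. a passkey
        userVerification: "required",
      },
    },
  })) as PublicKeyCredential;

  // The attestation response would be posted back to the server, which validates it
  // (optionally against the FIDO Metadata Service) and stores the public key.
  console.log("New credential id:", credential.id);
}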
Next, ETSI published the technical report ETSI TR 119 476, “Analysis of selective disclosure and zero-knowledge proofs applied to Electronic Attestation of Attributes”. The ETSI report analyzes cryptographic schemes for selective disclosure and their potential application to Electronic Attestations of Attributes (EAAs) in line with the proposed eIDAS2 regulation. The purpose is to allow users of the EUDI Wallet to select which attributes they want to share with a verifier. For example, a user may only want to disclose that she is over 18 years old at a restaurant, but no more personal information than that. The ETSI report includes a description of the VC-FIDO solution, which was invented by David Chadwick at the University of Kent. The ETSI report states:
“The VC-FIDO integration is based on the W3C WebAuthn protocol in the FIDO2 standard. The WebAuthn stack is extended with a W3C Verifiable Credentials enrollment protocol, resulting in a client that can enroll for multiple atomic short-lived W3C Verifiable Credentials based on W3C Credential templates. These atomic short-lived W3C Verifiable Credentials can then be (temporarily) stored in an EUDI Wallet, and be combined into a Verifiable Presentation that is presented to the relying party (verifier). Selective disclosure is achieved since the user can enroll for the atomic attributes it needs for a specific use case, and present only those atomic (Q)EAAs to a Relying Party.”
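As a purely illustrative rendering of that flow (not the normative VC-FIDO or ETSI wire format), an atomic "over 18" attestation and the presentation a wallet could hand to a relying party might be shaped roughly as follows. The issuer, DIDs, types and dates are invented placeholders and the proof objects are elided; the authoritative shapes come from the W3C VC Data Model and the eIDAS2/ETSI specifications.

// Illustrative shapes only: the DIDs, issuer, types and dates are invented
// placeholders, and the proofs are elided.
const ageOver18Credential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgeOver18Attestation"],  // atomic, single-attribute claim
  issuer: "did:example:qtsp",                              // assumed (Q)EAA provider
  issuanceDate: "2023-11-01T10:00:00Z",
  expirationDate: "2023-11-01T10:15:00Z",                  // short-lived, as the report describes
  credentialSubject: { ageOver18: true },                  // nothing else is disclosed
  proof: { /* elided */ },
};

const presentationToVerifier = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiablePresentation"],
  holder: "did:example:wallet-holder",
  verifiableCredential: [ageOver18Credential],             // only the attributes this use case needs
  proof: { /* elided */ },
};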
These prominent references in the ENISA and ETSI reports demonstrate that FIDO has achieved a firm position as a viable authentication standard for eIDAS2 and regulated use cases in the EU. It will be interesting to follow the continued development of the EUDI Wallet implementations and the related Large Scale Pilots – it is quite likely that FIDO will be deployed in such solutions across the EU.
Author: Sebastian Elfors, senior architect at IDnow
The post The EU organizations ENISA and ETSI refer to FIDO as authentication standard for eIDAS2 appeared first on FIDO Alliance.
Like a wound in the landscape, the rusty border wall cuts along Arizona’s Camino Del Diablo, the Devil’s Highway. Once the pride and joy of the Trump Administration, this wall is once again the epicenter of a growing political row. President Biden’s May 2023 repeal of the Trump Administration’s Covid-era policy of using Title 42 comes with the introduction of new hardline policies preventing people from claiming asylum in the United States, undergirded by a growing commitment to a virtual smart border extending far beyond its physical frontier.
Racism, technology, and borders create a cruel intersection. From drones used to prevent people from reaching the safety of European shores, to artificial intelligence (AI) lie detectors at various airports worldwide, to planned robodogs patrolling the US-Mexico border, people on the move are caught in the crosshairs of an unregulated and harmful set of technologies. These projects are touted to control migration, bolstering a lucrative multi-billion-dollar border industrial complex. Coupled with increasing international environmental destabilization, more and more people are ensnared in a growing and global surveillance dragnet. Thousands have already died. The rest experience old and new traumas provoked and compounded by omnipresent surveillance and automation.
What do new tools like generative AI mean for this regime of border control?
I have spent the last five years tracking how new technologies of border management — surveillance, automated decision making, and various experimental projects — are playing out in migration control. Through years of travel from Palestine to Ukraine to Kenya to the US-Mexico border, the power of comparison shows me time and again how these spaces allow frontier mentalities to take over, creating environments of silence and violence.
In this era of generative technologies, this work is underpinned by broader questions: Whose perspectives matter when talking about innovation, and whose priorities take precedence? What does critical representation and meaningful participation look like — representation that foregrounds people’s agency and does not contribute to the “poverty porn” that is so common in representations coming from spaces of forced migration? And who gets to create narratives and generate stories that underpin the foundations of tools like GPT-4 and whatever else is coming next?
Clockwise from top left: High-tech refugee camp on Kos Island in Greece; surveillance tower in Arizona; two women cross the Ukraine-Poland border; memorial site in the Sonora desert; protest against new refugee camp on Samos; Calvin, a medical doctor, holds keys from his apartment in Ukraine after escaping across the Hungary border. Photos by Petra Molnar, 2021–2022.
Tools like generative AI are socially constructed by and with particular perspectives and value systems. They are a reflection of the so-called Global North and can encode and perpetuate biases and discrimination. In August of this year, to test out where generative AI systems are at, I ran a simple prompt through the Canva and Craiyon image generation software: “What does a refugee look like?”
What stories do these images tell? What perspectives do they hide?
It is telling that for generative AI, the concept of a “refugee” elicits either forlorn and emaciated faces of Black children or else portraits of doe-eyed and vaguely Middle Eastern people waiting to be rescued. When I sent these depictions to a colleague who is currently in a situation of displacement and identifies as a refugee, she laughed and said, “I sure as hell hope I don’t look like this.”
Generative AI is also inherently exploitative. Its training data are scraped and extracted often without the knowledge or consent of the people who created or are in the data. Menial tasks that allow the models to function fall on underpaid labor outside of North America and Europe. The benefits of this technology do not accrue equally, and generative AI looks to replicate the vast power differentials between those who benefit and those who are the subjects of high-risk technological experiments.
How can we think more intentionally about who will be impacted by generative AI and work collaboratively, and rapidly, with affected populations to build knowledge?
The production of any kind of knowledge is always a political act, especially since researchers often build entire careers on documenting the trauma of others, “stealing stories” as they go along. Being entrusted with other people’s stories is a deep privilege. Generating any type of knowledge is not without its pitfalls, and academia is in danger of falling into the same trap with generative AI research: creating knowledge in isolation from communities, failing to consider the expertise of those we’re purporting to learn from. How can researchers and storytellers limit the extractive nature of research and story collection? Given the power differentials involved, research and storytelling can and should be uncomfortable, and we must pay particular attention to why certain perspectives in the so-called Global North are given precedence while the rest of the world continues to be silenced. This is particularly pertinent when we are talking about a vast system of increasingly autonomous knowledge generation through AI.
The concept of story and knowledge stewardship may be helpful here. Drawn from Indigenous learnings, it recognizes that the storyteller is not exempt from critical analysis of their own power and privilege over other people’s narratives and should instead hold space for stories to tell themselves. This type of framing continually places responsibility at the center (see for example the work of the Canadian First Nations Information Governance Centre). Storytelling and sharing are also a profound act of resistance to simplified and homogenized narratives, often common when there is a power differential between the researcher and their topic. Established methods of knowledge production are predicated on an outside expert parachuting in, extracting data, findings, and stories, using their westernized credentials to further their careers as the expert.
True commitment to participatory approaches requires ceding space, meaningfully redistributing resources, and supporting affected communities in telling their own stories. And real engagement with decolonial methodologies requires an iterative understanding of these framings, a re-framing process that is never complete. By decentering so-called Global North narratives and not tokenizing people with lived experience as research subjects or afterthoughts, researchers can create opportunities that recognize their privilege and access to resources — and then redistribute those resources through meaningful participation, creating an environment for people to tell their own stories. It is this commitment to participatory approaches that we need in generative AI research, especially as it meets up with border control technologies.
One small example is the Migration and Technology Monitor project at York University’s Refugee Law Lab, where I am Associate Director. The Migration and Technology Monitor is a platform and an archive with a focus on migration, technology, and human rights. Our recently launched fellowship program aims to create opportunities for people with lived experience to meaningfully contribute to research, storytelling, policy, and advocacy conversations from the start, not as an afterthought. Among our aims is to generate a collaborative, intellectual, and advocacy community committed to border justice. We prioritize opportunities for participatory work, including the ability to pitch unique and relevant projects by affected communities themselves. Veronica Martinez, Nery Sataella, Simon Drotti, Rajendra Paudel, and Wael Qarssifi are part of our first cohort of fellows from mobile communities from Venezuela to Mexico to Uganda to Nepal to Malaysia. Our hope is that our fellowship creates a community which provides spaces of collaboration, care, and co-creation of knowledge. We are specifically sharing resources with people on the move who may not be able to benefit from funding and resources readily available in the EU and North America. People with lived experiences of migration must be in the driver’s seat when interrogating both the negative impacts of technology as well as the creative solutions that innovation can bring to the complex stories of human movement, such as using generative AI to compile resources for mobile communities.
Participatory methodologies that foreground lived experience as the starting place for generating knowledge inherently destabilize established power hierarchies of knowledge production. These approaches encourage researchers and tech designers to critically interrogate their own positionality and how much space their own so-called expertise takes up in the generation of knowledge at the expense of other realities. These framings and commitments are paramount, especially in contexts with fraught histories and vast power differentials, for example where mobile populations are the abject and feared others and where generative AI models learn from these realities. Especially pertinent for scholars, technologists, and researchers who are themselves part of the so-called Rest of World, a re-imagination of expertise and knowledge must come from the ground up, and any tools which are created must recognize and fight against these power differentials.
It is through participatory methodologies that we may come a step closer towards seeing a world in which many worlds fit, a phrase which, as my BKC colleague Ashley Lee reminds us, comes from the Zapatista Indigenous resistance movement — a world where “nothing about us without us” moves beyond an old community organizer motto towards a real commitment to participation, story stewardship, and public scholarship which honors and foregrounds lived experience.
Thank you to Madeline McGee for her suggestions which greatly improved this piece and to Sam Hinds for her careful edits.
This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.
Building Knowledge about Generative AI with Mobile Populations was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.
The public release of ChatGPT in November 2022 represents a significant breakthrough in generative AI — systems that craft synthetic content based on patterns learned from extensive datasets. This development has heightened concerns about AI’s impact on individuals and society at large. In the brief period since this breakthrough, there has been a surge in lawsuits pertaining to copyright and privacy violations, as well as defamation. One lawyer learned a hard lesson about the dangers of AI “hallucination” after citing seemingly genuine but bogus judicial precedents generated by ChatGPT in a legal brief submitted to court. There are even reports that such systems have been implicated in an individual’s decision to commit suicide.
Given these concerns, there is a growing demand for regulatory action. OpenAI’s CEO, Sam Altman, addressed the US Congress in May and called upon legislators to act.
The EU has taken the lead in legislative endeavors. In April 2021, the European Commission proposed a Regulation on AI (AI Act), marking the first step toward a comprehensive global legal framework on AI. This landmark legislation aims to foster a human-centric AI, directing its development in a way that respects human dignity, safeguards fundamental rights, and guarantees the security and trustworthiness of AI systems.
The proposed AI Act adopts a risk-based approach, categorizing AI systems into three main risk levels: unacceptable risk, high risk, and limited risk. This classification depends on the potential risk posed to health, safety, and fundamental rights. Certain AI systems such as those that generate “trustworthiness” scores, akin to the Chinese Social Credit System, are considered to present unacceptable risks and are completely prohibited. AI systems used in hiring processes and welfare benefit decisions fall into the high-risk category and are subject to stringent obligations. These include conducting a conformity assessment and adhering to certain data quality and transparency requirements. Meanwhile, chatbots and deepfakes are considered limited risk, subject to relatively minimal transparency requirements.
Shortly after the proposal was drafted, and after the release of ChatGPT, it became clear that the Commission’s draft contained a significant hole: it did not address general-purpose AI or “foundational models” like OpenAI’s GPT-n series, which underpins ChatGPT. Fortunately, due to the EU’s multistage legislative process, the release of ChatGPT occurred while the European Parliament was deliberating on the AI Act. This provided a timely opportunity to include new provisions specifically targeting foundational models and generative AI.
Under an amendment adopted by the European Parliament in June, providers of foundational models would be required to identify and reduce risks to health, safety, and fundamental rights through proper design and testing before placing their models on the market. They must also implement measures to ensure appropriate levels of performance and adopt strategies to minimize energy and resource usage. Moreover, these AI systems must be registered in an EU database, with details on their capabilities, foreseeable risks, and measures taken to mitigate these risks, including an account of risks that remain unaddressed. The amendment would impose additional obligations on foundational models employed in generative AI. These obligations include transparency requirements, ensuring users are aware that content is machine generated, and implementing adequate safeguards against the generation of unlawful content. Providers must also publish a detailed summary of copyrighted content used to train their systems.
While the final version of the AI Act will be determined by the trilogue among the European Commission, European Parliament, and European Council, its current form already marks an ambitious and real-time attempt to regulate generative AI, highlighting the challenges of regulating a rapidly evolving target.
On this occasion, the EU’s legislative process kept pace with the latest advancements before the laws were set in stone. However, it raises the question: how often can we count on such fortunate timing, and what proactive measures should be taken?
We must embed flexibility into such laws. Indeed, the EU has taken some steps in this direction, granting the Commission the authority to adapt the law by adding new use cases into the risk categories. Yet, considering previous experiences with the Commission’s implementation of delegated acts, it’s debatable whether such mechanisms alone can keep up with the rapid pace of AI development.
The agile innovation process that permeates the software world, where distributed technologies are frequently released in early stages and iteratively refined based on usage data, necessitates a regulatory system that is designed to learn and adapt.
It is important to embrace a variety of techniques for adaptive regulation, such as regulatory experimentation through pilot projects and embedding systematic and periodic review and revision mechanisms into legislation. Adaptive regulation further necessitates openness to a diversity of approaches across jurisdictions. It encourages learning from one another, which implies that the EU should resist its inclination to solely dictate global standards for AI regulation, and instead regard its efforts as contributions to a collective pool of learning resources.
While adaptive regulation does come with its own costs, clinging to static regulation designed for a hardware world with fully-formed products manufactured in centralized facilities could prove even more costly in the face of rapidly advancing technology.
Simultaneously, the amendment has significantly broadened the Act’s scope. While the Commission’s draft focused on mitigating harms to health, safety, and fundamental rights, the European Parliament’s version extends these concerns to include democracy, the rule of law, and environmental protection. Consequently, providers of high-risk AI systems and foundational models are required to manage risks associated with all these areas. However, this raises concerns that the Act might transform into a catch-all regulation with diluted impact, thereby creating a considerable burden on providers to translate these broad goals into concrete guardrails.
This amendment has exacerbated existing concerns that these broad requirements and accompanying compliance costs might stifle innovation. In an open letter to EU authorities, over 150 executives from companies including Siemens, Airbus, Deutsche Telekom, and Renault criticized the AI Act for its potential to “undermine Europe’s competitiveness and technological autonomy.” One of the significant concerns raised by these companies relates to the legislation’s strict requirements aimed at generative AI systems and foundational models. The letter equates the importance of generative AI with the invention of the internet, considering its potential to shape not only the economy but also culture and politics. The signatories caution that the compliance costs and risks embedded in the AI Act could “result in highly innovative companies relocating their operations overseas, investors retracting their capital from the development of European foundational models, and European AI in general.”
OpenAI has already warned about potentially exiting the EU if the conditions of the AI Act prove too restrictive. There are also indications that even major players are cautious when rolling out their latest services. The launch of Google Bard was delayed in the EU by two months due to compliance concerns with the General Data Protection Regulation. However, it was ultimately introduced with improved privacy safeguards, highlighting the EU’s role in shaping global data policies of such organizations.
For its part, the EU contends that the AI Act is designed to stimulate AI innovation and underscores key enabling measures included in the Act. These encompass regulatory sandboxes, which serve as test beds for AI experimentation and development, an industry-led process for defining standards that assist with compliance, and safe harbors for AI research.
Of course, the concerns from the industry about the AI Act’s impact on innovation, as well as the EU’s responses to these matters, represent an essential part of balancing the inevitable trade-offs inherent in regulating any emerging technology, and time will tell which direction the pendulum swings. During the trilogue negotiations, it is likely that the European Council will push back on some of the amendments from the Parliament. Indeed, there is merit in carefully weighing the benefits of introducing broad objectives such as democracy and the rule of law without concrete measures in place to support these goals. One might argue that efforts are better spent strengthening the safeguards for fundamental rights, which is crucial for safeguarding both democracy and the rule of law. Numerous civil society organizations have already emphasized the need for incorporating fundamental rights impact assessments and empowering individuals and public interest organizations to file complaints and seek redress for harms inflicted by AI.
Moreover, it would be beneficial to concentrate on tangible guardrails, such as facilitating researchers’ access to foundational models, data, and parameters. This approach is likely to be more effective in promoting accountability, democracy, and the rule of law compared to a general requirement to conduct risk assessments based on such broad concepts.
Regardless of the final form of the text, the AI Act is poised to significantly shape AI development and the regulatory landscape in the EU and beyond. Therefore, the AI community must prepare for its impact.
This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.
The EU AI Act was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.
Though the underlying technology was based on years of AI research by many individuals and organizations, the launch of ChatGPT by OpenAI in November of 2022 captured the collective imagination in an extraordinary way. The release started an ongoing conversation about the potential of the technology to improve our lives, and to harm them. In the public consciousness, AI has remained confusing, overwhelming, and a bit scary — even its name can seem imprecise and distracting. And with generative AI, phrases like “hallucination” and “job replacement” only foment more fear. Yet beneath the hype, doomerism, and techno-utopianism sits the fundamental question of what kind of societies we want to live in — and what choices we should make to realize them.
One month after the release of ChatGPT, a group of collaborators — the Nordic Centre at BI Norwegian Business School, the Institute for Technology and Society of Rio de Janeiro, the Technical University of Munich, and the Berkman Klein Center — decided it was time to discuss the implications of generative AI. We knew that if generative AI were to realize its true pro-social potential and to have its harms mitigated, then cross-sector, cross-disciplinary, and cross-national conversation was needed. Many in our community had already begun to explore use cases, governance, accountability, and the systems’ social impact broadly. It was time to rebuild bridges weakened by the COVID pandemic era.
Sabelo Mhlambi and Jenn Louie at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023.
In May 2023, the Berkman Klein Center hosted “Co-Designing Generative Futures: A Global Conversation about AI” in Cambridge, USA, bringing together colleagues old and new with backgrounds from academia, civil society, government, and industry, from over two dozen countries and all of the continents other than Antarctica. We discussed the need for access to data for research purposes and the need for real study.
At the same time, many emphasized that policymakers do not have the time; they need to act now. Action and study need to happen in parallel.
Samson Esayas at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023.
Today we introduce the Co-Designing Generative Futures series, a collection of multidisciplinary, transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence, as seen by members of the Berkman Klein Center community.
In the first set of essays, Samson Esayas addresses the potential implications of policy, speculating on the role the European Union’s AI Act may play in the governance and development of generative technologies. And Petra Molnar challenges us to consider the potential impact of generative AI on the surveillance of borders and of migrants, urging us to engage displaced people using participatory methods to understand their perspectives and protect their safety.
The second installment addresses how generative AI challenges fundamental concerns about our experiences of being human. Alexa Hasse examines how generative AI tools might change trust in our human social relationships. Sameer Hinduja offers a deep dive into the sobering potential of generative AI tools to perpetuate online harassment at massive scale. And Bill Shribman brings the perspective of a children’s media producer as he explores how media literacy education may need to shift in light of advancements in AI technology.
Maroussia Lévesque and Petra Molnar at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023.
As members of our community continue to study generative AI, both the urgency and the need for deeper consideration persist. Policymakers are moving quickly towards substantial legislation in multiple regions across the world. The technology keeps improving. Technology innovators are finding new applications for generative AI, and we likely have only scratched the surface of what is to come. Researchers are publishing new findings about concerns regarding bias, data privacy, data ownership, disinformation, and vast inequities across regions and communities. And they urgently need more access to data. Looking forward, the necessity for continued collaboration across sectoral, national, and disciplinary boundaries seems all the more critical.
As Senior Director of Programs and Strategy at the Berkman Klein Center, I am committed to sustaining co-design efforts. We welcome fresh perspectives and opportunities for engagement from around the globe, and from new sectors and stakeholders.
Co-Designing Shared Futures was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.
Fourth public review - ends November 14th
OASIS and the OASIS Collaborative Automated Course of Action Operations (CACAO) for Cyber Security TC are pleased to announce that CACAO Security Playbooks v2.0 is now available for public review and comment. This 15-day review is the fourth public review for Version 2.0 of this specification.
About the specification draft:
To defend against threat actors and their tactics, techniques, and procedures, organizations need to identify, create, document, and test detection, investigation, prevention, mitigation, and remediation steps. These steps, when grouped together, form a cyber security playbook that can be used to protect organizational systems, networks, data, and users.
This specification defines the schema and taxonomy for cybersecurity playbooks and how cybersecurity playbooks can be created, documented, and shared in a structured and standardized way across organizational boundaries and technological solutions.
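For readers new to the format, a CACAO playbook is a JSON document whose workflow is a graph of typed steps (start, action, end, conditional branches, and so on). The fragment below, written as a TypeScript object, is only a loose sketch of that idea; the field names, step types and command types are approximations rather than a validated v2.0 document, so treat the specification documents linked below as the sole authority.

// Loose sketch of the general shape of a CACAO playbook, NOT a validated v2.0 document.
// Field names, step types and command types are approximations; the normative schema
// and vocabularies are defined in the specification documents below.
const examplePlaybook = {
  type: "playbook",
  spec_version: "cacao-2.0",
  id: "playbook--0c6e7e18-0000-4000-8000-000000000001",          // placeholder identifier
  name: "Isolate host suspected of ransomware activity",
  description: "Illustrative only: quarantine an endpoint and notify the SOC.",
  created: "2023-10-24T00:00:00.000Z",
  modified: "2023-10-24T00:00:00.000Z",
  workflow_start: "start--0c6e7e18-0000-4000-8000-000000000002", // entry point of the step graph
  workflow: {
    "start--0c6e7e18-0000-4000-8000-000000000002": {
      type: "start",
      on_completion: "action--0c6e7e18-0000-4000-8000-000000000003",
    },
    "action--0c6e7e18-0000-4000-8000-000000000003": {
      type: "action",
      name: "Quarantine the endpoint",
      commands: [{ type: "manual", command: "Disable the switch port for the affected host." }],
      on_completion: "end--0c6e7e18-0000-4000-8000-000000000004",
    },
    "end--0c6e7e18-0000-4000-8000-000000000004": { type: "end" },
  },
};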
The documents and related files are available here:
CACAO Security Playbooks Version 2.0
Committee Specification Draft 05
24 October 2023
Editable source (Authoritative):
https://docs.oasis-open.org/cacao/security-playbooks/v2.0/csd05/security-playbooks-v2.0-csd05.docx
HTML:
https://docs.oasis-open.org/cacao/security-playbooks/v2.0/csd05/security-playbooks-v2.0-csd05.html
PDF:
https://docs.oasis-open.org/cacao/security-playbooks/v2.0/csd05/security-playbooks-v2.0-csd05.pdf
PDF marked with changes since previous public review:
https://docs.oasis-open.org/cacao/security-playbooks/v2.0/csd05/security-playbooks-v2.0-csd05-DIFF.pdf
For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open.org/cacao/security-playbooks/v2.0/csd05/security-playbooks-v2.0-csd05.zip
How to Provide Feedback
OASIS and the CACAO TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.
The public review starts 31 October 2023 at 00:00 UTC and ends 14 November 2023 at 23:59 UTC.
Comments may be submitted to the TC by any person through the use of the OASIS TC Comment Facility, which can be used by following the instructions on the TC’s “Send A Comment” page (https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=cacao).
Comments submitted by TC non-members for this work and for other work of this TC are publicly archived and can be viewed at:
https://lists.oasis-open.org/archives/cacao-comment/
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as the feedback of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information about the specification and the CACAO TC can be found at the TC’s public home page:
https://www.oasis-open.org/committees/cacao/
Additional information related to this public review, including a complete publication and review history, can be found in the public review metadata document [3].
Additional references:
[1] https://www.oasis-open.org/policies-guidelines/ipr/
[2] https://www.oasis-open.org/committees/cacao/ipr.php
Non-Assertion Mode: https://www.oasis-open.org/policies-guidelines/ipr/#Non-Assertion-Mode
[3] Public review metadata document:
https://docs.oasis-open.org/cacao/security-playbooks/v2.0/csd05/security-playbooks-v2.0-csd05-public-review-metadata.html
The post Invitation to comment on CACAO Security Playbooks v2.0 appeared first on OASIS Open.
The Identity at the Center Podcast was live at the FIDO Alliance Authenticate 2023 conference and we have even more episodes this week to prove it! Our fifth of six episodes is a conversation we had with Ori Eisen, Founder & CEO at Trusona, about the passkey user experience. Episode #243 is available now at idacpodcast.com and in your podcast app.
GS1 GDSN accepted the recommendation by the Operations and Technology Advisory Group (OTAG) to implement the 3.1.26 standard into the network in February 2024.
Key Milestones:
As content for this release is developed, it will be posted to this webpage, followed by an announcement to the community to ensure visibility.
Data Pools should contact the GS1 GDSN Data Pool Helpdesk to understand the plan for the update. Trading Partners should work with their Data Pools on understanding the release and any impacts to business processes.
Business Message Standards including Message Schemas Updated For Maintenance Release 3.1.26
Trade Item Modules Library 3.1.26 (Oct 2023)
GS1 GDSN Code List Document (Oct 2023)
Delta for release 3.1.26 (Oct 2023)
Delta ECL for release 3.1.26 (Oct 2023)
Unchanged for 3.1.26
Validation Rules (June 2023)
Delta for Validation Rules (June 2023)
Approved Fast Track Attributes (Dec 2022)
BMS Shared Common Library (Dec 2021)
BMS Documents Carried Over From Previous Release
BMS Catalogue Item Synchronisation
BMS Basic Party Synchronisation
Schemas
Catalogue Item Synchronisation Schema including modules 3.1.26 (Oct 2023)
Changed Schemas for 3.1.26 (Oct 2023)
Trade Item Authorisation Schema
Release Guidance
GS1 GDSN Attributes with BMS ID and xPath (Oct 2023)
Packaging Label Guide (June 2023)
Unchanged for 3.1.26
GPC to Context Mapping 3.1.25 (June 2023) May GPC publication
Delta GPC to Context Mapping 3.1.25 (June 2023) May GPC publication
Deployed LCLs (Oct 2023)
Migration Document (May 2023)
GS1 GDSN Module by context (May 2023)
GS1 GDSN Unit of Measure per Category (Apr 2022)
Flex Extension for Price commentary (Dec 2018)
Any questions?
We can help you get started using GS1 standards.
TORONTO, OCTOBER 31st, 2023 — There is a public trust gap in adopting digital trust capabilities that governments must close together with the private sector to amplify Canadian priorities in the global digital economy, according to a new report from DIACC.
Digital trust tools help verify a person or organization’s identity to enhance privacy, security, and transparency using people-centred design to operate digital credentials, digital wallets, networks, and modern authentication.
The report, Committed to Building Trust Together, found no strong call to action from the public to advance digital trust even though Canadians are increasingly frustrated by online scams and identity theft, among other high-profile security and data breaches that affect them. The lack of vocal public demand for digital trust is partly due to a need for education about what it is and, most importantly, how it can benefit and improve their lives.
“Hesitancy is often rooted in data privacy, security, and potential misuse of personal information concerns. People should rightly be concerned because there are often no easily understood rules around where their personal data lives, who owns it, or how others use it,” said Joni Brennan, president of the Digital ID & Authentication Council of Canada (DIACC). “In today’s digital world, trust remains at a premium, and the importance of identity verification is at an all-time high.”
Digital trust capabilities are critical — and long overdue — to support a secure and inclusive digital economy and society. From more efficient business tasks that save millions of dollars to getting the proper health records and making it easier to buy a house or board a plane, privacy and personal data control are potent tools that can help safely confirm a person’s identity.
The report follows two Public Trust Forum sessions that brought together members from academia, non-profit organizations, private sector companies and provincial and territorial governments. Participants spoke in-depth about the challenges, opportunities, and considerations for creating the conditions to enable made-in-Canada solutions.
Public Trust Forum participants reached a consensus on the following key takeaways:
There will never be a universal consensus on acceptance of digital trust capabilities. Universal acceptance should not be the goal.
Use of government-issued digital credentials must be voluntary. People must have the choice to opt-out, as they can opt-out in the private sector.
There’s no strong call to action from the public to advance digital trust. There is a need for education about digital trust and, most importantly, how it can benefit and improve people’s lives.
An effective communications strategy should focus on real-life success stories in different use cases. That means practical examples of how digital trust capabilities improve service delivery, such as in the aviation and financial sectors.
Though it’s been around for a long time, the term “digital identity” may hinder the public’s understanding. Terms such as “verify,” “authenticate,” or “credentials” may make it easier for those who are unsure about new technologies to see the benefits.
DON’T abandon the term “digital identity.” It’s a profession and a globally recognized definition; its use will depend on the audience and situation.
Jurisdictionally, with Canadian news removed from popular social media platforms, there may be a vacuum to spread mis- and disinformation regarding digital modernization.

DIACC made five recommendations that all organizations should prioritize to inform public dialogue and build trust.
Don’t wait for a universal public consensus on adopting digital trust capabilities because it will never come.
Commit and message that public adoption is voluntary. Individuals may choose to use digital trust services or not.
Make significant public education investments at municipal, provincial and federal levels and the private sector to inform the public about well-designed capabilities. Focus on easy use cases (i.e., digital parking or bus passes, obtaining a business licence).
Reduce the temperature by moving public messaging away from the confusing term “digital identity” in certain situations. Terms like “verify,” “authenticate,” and “credential” are more easily understood.
Communicate public safety importance as scenarios where digital services reduce response pressure and help get resources faster to those in need. Pandemic-related personal safety concerns accelerated the demand for modern digital services.
Break transformation down into manageable outcomes rather than trying to boil the ocean with a national or universal strategy.

“The bottom line is that organizations and governments must prioritize transparency, robust data protection measures, and ethical data usage while actively engaging with the public to address concerns and ensure that digital trust capabilities are developed and implemented to protect and enhance individual privacy and security and support organizations’ operational needs,” Brennan said. “Establishing safe and convenient use of digital ID services means establishing trust. People must have confidence and control over their identity data, and on the flip side, they must have evidence that their privacy, security and choices are secured.”
“Citizens and residents vocalize their frustrations with services that aren’t modernized, and they’re looking to governments and the private sector to lead the way in the global digital economy collaboratively,” said Giselle D’Paiva, Partner, Government & Public Sector, Deloitte.
“Trust is local, and designing made-in-Canada solutions for digital access and verification will help build consumer confidence, trust, and broad adoption,” said Neil Butters, VP of Product, Interac Verified.
“Securing digital trust for the supply chain and global digital economy depends on local and international leadership and collaboration to advance frameworks and standards that ensure broad benefits,” said Don Cuthbertson, Chief Executive Officer of Portage CyberTech.
“We’re working to help the public understand that global standards increase trust, reduce fraud and make it safer and more convenient for our customers to transact,” said Marie Jordan, Senior Director at VISA, Global Standards and Industry Engagement.
DIACC will reconvene the Public Trust Forum at intervals to review what’s transpired since the last forum, address critical thematic developments, and continue public literacy research to pinpoint how perception evolves among the public.
Download the executive summary.
DIACC-PublicTrustForum-Exec-Sum
Download the full report.
DIACC_PublicTrustForum_Report_2023_FINAL
Press Inquiries
info@diacc.ca
About DIACC
Established in 2012, DIACC is a non-profit organization of public and private sector members committed to advancing full and beneficial participation in the global digital economy by promoting adoption and establishing a certification framework to verify the assurance and trust practices of services. DIACC prioritizes personal data control, privacy, security, accountability, and inclusive people-centred design.
The DIF Hackathon is well underway with over 200 participants so far and a $21,000 prize pool! We had a fantastic opening session with Limari Navarrete from DIF and an Intro to Decentralized Identity with Aviary Tech's Brian Richter. Both are now available on YouTube.
This coming week we have even more great sessions and we hope you can join us for some lively discussion. Here's this week's rundown.
DIDs, DID Resolution and the Universal Registrar with Markus Sabadello
Have you been curious about the foundational elements of decentralized ID systems? Join us for a fascinating deep dive into DIDs.
Eventbrite: Wednesday Nov 1st at 9am PT
An Introduction to the Veramo Javascript Framework
Veramo will help you create and manage decentralized identifiers + verifiable credentials without worrying about interop and vendor lock-in. (A rough usage sketch follows this list of sessions.)
Eventbrite: Thursday Nov 2nd at 9am PT
An Intro to Trinsic and the BBS Signature Scheme
Come learn more about Trinsic’s SDK and the BBS signature scheme.
Eventbrite: Thursday November 2nd at 10am PT
For the full lineup of hackathon events, head over to Eventbrite to learn more:
https://www.eventbrite.com/organizations/events
https://www.eventbrite.com/e/728586821797?aff=oddtdtcreator
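Ahead of the Veramo session mentioned above, here is a rough sketch of what a minimal agent that creates a did:key identifier and issues a W3C Verifiable Credential can look like. It assumes the current @veramo/* packages with in-memory stores and a local KMS; exact plugin names and options may differ between versions, so check the Veramo documentation for the authoritative setup.

// Rough sketch of a minimal Veramo agent (assumed package names and options;
// consult the Veramo docs for the exact, current setup).
import { createAgent, IDIDManager, IKeyManager, IResolver, ICredentialPlugin } from '@veramo/core'
import { KeyManager, MemoryKeyStore, MemoryPrivateKeyStore } from '@veramo/key-manager'
import { KeyManagementSystem } from '@veramo/kms-local'
import { DIDManager, MemoryDIDStore } from '@veramo/did-manager'
import { KeyDIDProvider } from '@veramo/did-provider-key'
import { DIDResolverPlugin } from '@veramo/did-resolver'
import { Resolver } from 'did-resolver'
import { getResolver as keyDidResolver } from 'key-did-resolver'
import { CredentialPlugin } from '@veramo/credential-w3c'

// In-memory stores keep the sketch self-contained; swap in persistent stores for real use.
const agent = createAgent<IDIDManager & IKeyManager & IResolver & ICredentialPlugin>({
  plugins: [
    new KeyManager({
      store: new MemoryKeyStore(),
      kms: { local: new KeyManagementSystem(new MemoryPrivateKeyStore()) },
    }),
    new DIDManager({
      store: new MemoryDIDStore(),
      defaultProvider: 'did:key',
      providers: { 'did:key': new KeyDIDProvider({ defaultKms: 'local' }) },
    }),
    new DIDResolverPlugin({ resolver: new Resolver({ ...keyDidResolver() }) }),
    new CredentialPlugin(),
  ],
})

async function main() {
  // Create a new did:key identifier managed by the agent.
  const issuer = await agent.didManagerCreate({ provider: 'did:key' })

  // Issue a simple verifiable credential as a JWT (subject DID and claim are placeholders).
  const credential = await agent.createVerifiableCredential({
    credential: {
      issuer: { id: issuer.did },
      credentialSubject: { id: 'did:example:holder', attendedHackathon: 'DIF Hackathon 2023' },
    },
    proofFormat: 'jwt',
  })
  console.log(credential)
}

main().catch(console.error)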
We look forward to seeing you at our events and on Discord.
Happy Hacking!
The DIF Team
The Identity at the Center podcast was live at the FIDO Alliance Authenticate 2023 conference and we have even more episodes this week to prove it! Our fourth of six episodes is a conversation we had with Pedro Martinez, Business Owner for Digital Banking Authentication at Thales Group, about passkeys. Episode #242 is available now at idacpodcast.com and in your podcast app.
Earlier this year, we worked with Change.org to help staff members develop memorable workshops, sessions and keynotes for an organisational gathering. Check out part 1: Preparing for your session.
This post looks at how to run a great session and how to recover and follow-up once your session is over. We’ll also get into general tips. Like our first post, we’ve pulled out the key questions you can ask yourself to help you prepare.
Running your session
Speaking and Facilitation: Tips & Tricks: Part 3 (slides)
Start as you mean to go on, as they say. In order to create a positive atmosphere, be an enlarged version of yourself. If you’re funny, make people laugh. If you’re not, at least try to put people at their ease as best you can! That can be as simple as smiling :)
At the start of your session (cc-by-nd Bryan Mathers)
There may be someone in the room in which you’re going to be running your session immediately before you. If the event planners did their job well, however, you will have at least a few minutes to get set up. If you feel like you’re rushing, just ask everyone to get acquainted with the person next to them.
Even if there’s a tech person on hand, don’t assume that everything is set up to your liking. Check the remotes. Check that the fonts on the slides you sent through look correct. Check the microphone works. Check everything. Twice.
cc-by-nd Bryan Mathers
Aristotle famously said that we become brave by acting like a brave person. If you smile, everyone assumes that you are confident and are enjoying the experience. Smiling is particularly infectious, so smile away at people as they arrive. Just not in a scary way.
Some questions to reflect on:
What can you do to get set up or prepared even before you enter the room?
Have you got a backup copy of your slide deck? Could you run the session without them?
What kind of questions can you ask people as they arrive which will make them feel welcome?
What’s the worst that can happen?
If you’ve used a link shortener, have you checked it isn’t blocked on the event wifi?
During your session
Sessions at events can be a rollercoaster of emotions and energy, both for whoever is leading the session (you!) and the audience/participants.
cc-by-nd Bryan Mathers
Preparation is important, but so is delivery, so make sure people and ideas have time to ‘breathe’. Mix up the pace of the session to keep it interesting. Even if you practise, it’s easy to run over due to people arriving late, a question derailing your flow, or just you being over-enthusiastic about a particular section. Keep track of timings using your watch, phone, or presenter display. Bonus points for vibrating alarms, etc.
Questions to think about:
– Have you left enough time for questions?
– Can you think of questions you are expecting to get and can prepare for?
– Are there easy analogies you can use from everyday life?
– Did you know that audience members with a ‘concentration face’ can also look like they’re frowning?
– When people read they tend to use a monotone voice — have you practised modulating your voice (making it go up and down, faster and slower) to make it more interesting to listen to?
cc-by-nd Bryan Mathers
At the end of your session
Just because you’ve informed your audience of the link to the slide deck once doesn’t mean that you shouldn’t do it again. Especially as they’ve now seen how good it is. Ensure it’s on the screen at the end of your session for a good while.
Questions to prepare you for the end:
– Can you make the link to your slides and agenda as easy to access as possible by ensuring there are no ambiguous characters (e.g. l vs 1)?
– Do you want to add your social media details to these resources as well? Which one(s)?
– Are there particular ‘rabbit holes’ that people tend to want to go down with the topic of your session?
Recovering from sessions
Speaking and Facilitation: Tips & Tricks: Part 4 (slides)
cc-by-nd Bryan Mathers
Congratulations! Your session is over and it’s time to relax a little. Instead of flopping into a heap in the corner or hitting the bar, take steps to look after yourself. Spending time outside, following up with interesting people, and eating/drinking the right things are all ways to refresh your mind, body, and soul!
Don’t forget to take a moment to breathe and ask yourself these questions while you’re preparing:
– Are there spaces outside where you can be by yourself for a while? (especially if you’re an introvert)
– Can you take food and drink into the room where you’re running your session?
– What are you going to reward yourself with?
– How can you make it so that people who missed your session can almost feel like they participated?
– Is it worth putting your out-of-office autoresponder on during the event?
– What does a minimum viable blog post look like for you?
General Tips
Speaking and Facilitation: Tips & Tricks: Part 5
cc-by-nd Bryan Mathers
When you attend events and conferences you should work to promote engagement. It helps everyone have a better time. Ask questions! Sometimes you’ve got all of the answers and you’re bestowing your wisdom. More often, you’re exploring an area, sharing your experience, and asking questions for your colleagues to grapple with along with you.
Remember nothing is irrelevant and you should share openly. The things that you find boring and mundane such as how you set something up or the problems you had recruiting someone to do X can be fascinating and gold dust to other people. So mention them! You can go into more detail in the Q&A if asked.
People aren’t coming to your session because you’re a huge rockstar and they want to bask in your glory. Or maybe they are. Either way, your job is to help them with the things they are facing in their life, thanks to your insights.
cc-by-nd Bryan Mathers
One final question to help you deliver an epic presentation: if you step into the shoes of your audience, what could you add, remove, or modify?
We hope this pair of posts has helped you feel prepared for your next big workshop, talk, session or presentation. If you’re looking for more guidance or have any questions, do feel free to get in touch!
From Sage on the Stage to Collaborator was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
In the latest Elastos Bi-Weekly Update, significant progress has been made across different areas of the project. On the Elastos Main Chain, the core teams have completed the development and testing phases for changing the BPoS voting consensus mechanism via pledging, a development expected to bring greater flexibility and enhanced security to the voting system. Advancements in Zero-Knowledge (ZK) proof verification are enabling more secure transactions, and direct network recovery features are being refined through the BPoS consensus logic, making the system more resilient. While the Elastos BTC Layer 2 solution is still under research, the Main Chain explorer has been updated with enhanced data visualization, and a new tool for calculating staking rights and rewards has just been introduced.
On the Elastos Smart Chain (ESC), significant strides have been made in incentivising developer involvement. A new developer incentive mechanism has been implemented, and it’s currently in the testing and verification stage. In terms of transaction costs, there is an ongoing debate about adjusting the gas fees. While a consensus hasn’t been reached, it remains an important point of discussion for making Elastos more rewarding to stakeholders. A preliminary plan is in place that aims to redistribute block rewards to miners, contract developers, and the CR treasury through an adjusted gas fee mechanism.
For the ESC/EID intersection, SPV synchronisation stability has been notably improved, leading to a more consistent block generation speed for sidechains. Issues of unstable block generation due to node failures in the broadcasting mechanism have been effectively addressed, adding another layer of reliability to the network.
In the area of Decentralised Identifier (DID) Web Service and KYC-me, there are plans for significant upgrades by the end of the year. These are aimed at integrating with KYC-me, which is extending its support for multiple eKYC providers. User experience enhancements are actively being worked on, following the trend of continuous improvement seen in the DID Web Service.
This comprehensive series of updates reflects Elastos’ commitment to advancing both its core infrastructure in blockchain technology and decentralised identity solutions. Interested in staying up to date? Follow Elastos here and join our live telegram.
Main Chain
– The core teams have completed the development and testing phases of changing BPoS voting consensus mechanisms via pledging.
– Core teams assisted in ZK proof verification, enabling more secure transactions.
– Work on the BTC layer 2 solution is still in the research phase.
– Direct network recovery is undergoing enhancements through BPoS consensus logic refinement.
– The Main Chain explorer has been updated, providing more comprehensive data visualizations.
– A new tool calculating staking rights and rewards is now live.
Elastos Smart Chain (ESC)
– Implemented a new developer incentive mechanism; ongoing testing and verification are underway.
– A debate about adjusting gas fees is ongoing; consensus is yet to be reached.
– A preliminary plan aims to redistribute block rewards to miners, contract developers, and the CR treasury through an adjusted gas fee mechanism.
ESC/EID
– SPV synchronization stability has been improved, resulting in more stable block generation speed for sidechains.
– Addressed the issue of unstable block generation due to node failures in the broadcasting mechanism.
DID Web Service and KYC-me
– Significant upgrades to the DID Web Service are planned by year-end and will be integrated into KYC-me.
– A second partner for eKYC to KYC-me is in the pipeline, extending KYC-me’s support for multiple eKYC providers.
– Work is in progress to enhance the overall user experience.
The post Elastos Bi-Weekly Update – Oct 29, 2023 appeared first on Elastos.
UBL v2.4 ready for testing and implementation
OASIS is pleased to announce that Universal Business Language Version 2.4 from the OASIS Universal Business Language TC [1] has been approved as an OASIS Committee Specification.
UBL is the leading interchange format for business documents. It is designed to operate within a standard business framework such as ISO/IEC 15000 (ebXML) to provide a complete, standards-based infrastructure that can extend the benefits of existing EDI systems to businesses of all sizes. The European Commission has declared UBL officially eligible for referencing in tenders from public administrations, and in 2015 UBL was approved as ISO/IEC 19845:2015.
Specifically, UBL provides:
– A suite of structured business objects and their associated semantics expressed as reusable data components and common business documents.
– A library of schemas for reusable data components such as Address, Item, and Payment, the common data elements of everyday business documents.
– A set of schemas for common business documents such as Order, Despatch Advice, and Invoice that are constructed from the UBL library components and can be used in generic procurement and transportation contexts.
UBL v2.4 is a minor revision to v2.3 that preserves backwards compatibility with previous v2.# versions. It adds new document types, bringing the total number of UBL business documents to 93.
This Committee Specification is an OASIS deliverable, completed and approved by the TC and fully ready for testing and implementation.
The prose specifications and related files are available here:
Universal Business Language Version 2.4
Committee Specification 01
17 October 2023
Editable source (Authoritative):
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.xml
HTML:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.html
PDF:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.pdf
Code lists for constraint validation:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/cl/
Context/value Association files for constraint validation:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/cva/
Document models of information bundles:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/mod/
Default validation test environment:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/val/
XML examples:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/xml/
Annotated XSD schemas:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/xsd/
Runtime XSD schemas:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/xsdrt/
For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:
https://docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.zip
Members of the UBL TC [1] approved this specification by Special Majority Vote. The specification had been released for public review as required by the TC Process [2]. The vote to approve as a Committee Specification passed [3], and the document is now available online in the OASIS Library as referenced above.
Our congratulations to the TC on achieving this milestone and our thanks to the reviewers who provided feedback on the specification drafts to help improve the quality of the work.
========== Additional references:
[1] OASIS Universal Business Language TC
https://www.oasis-open.org/committees/ubl/
[2] History of publication, including public reviews:
https://docs.oasis-open.org/ubl/csd02-UBL-2.4/UBL-2.4-csd02-public-review-metadata.html
[3] Approval ballot:
https://www.oasis-open.org/committees/ballot.php?id=3799
The post Universal Business Language v2.4 from the UBL TC approved as a Committee Specification appeared first on OASIS Open.
In April 2023, the OpenID Foundation announced the 2023 Kim Cameron Award recipients. Today we’re pleased for the award recipients to share their experiences.
The goal of the awards is to increase representation of young people who’ve demonstrated an interest in subjects consistent with best practices and identity standards that are secure, interoperable, and privacy-preserving.
Isaac Henderson
Senior Security Researcher at the University of Stuttgart IAT/Fraunhofer IAO
I am humbled and honored to express my gratitude to the OpenID Foundation for bestowing upon me the prestigious Kim Cameron Award. Through this, I had an opportunity to attend the Identiverse Conference in Las Vegas. In this blog post, I will take you through the highlights of my Identiverse experience and share my heartfelt appreciation for the recognition that has fueled my commitment to the world of digital identity.
The Identiverse Conference has established itself as a global hub for identity professionals and technologists. The core of Identiverse lies in its power-packed keynotes and parallel sessions. Visionaries from diverse industries took the stage to discuss the rapidly evolving landscape of digital identity. Each presentation was a mosaic of ideas, woven together to form a comprehensive understanding of the challenges and opportunities that lie ahead. From discussions about decentralized identity, secured authentication, and Zero Trust Architectures to the exploration of advancements in AI-driven security solutions, the keynotes painted a vivid picture of the possibilities awaiting us. Notably, the deep-dive sessions on passkeys, Microsoft Entra ID, and SPIFFE-based distributed workloads for Zero Trust Architectures, and the OpenID sessions on OIX and GAIN, were thought-provoking. These insights were not only intellectually stimulating but also profoundly impactful in shaping my perspective on the future of identity. With so many interesting sessions offered in parallel, it was sometimes tough to decide which one to attend. I was also able to witness how ideas that originated in Identiverse sessions later turned into standards in the identity space.
The expo floor at Identiverse resembled a technological theme park. Booths showcased an array of cutting-edge solutions, each aiming to redefine how we perceive and manage digital identity. From secured authentication solutions, and passkey implementations to decentralized identity platforms, the expo was a testament to the relentless innovation in the identity space.
The Kim Cameron Award paved the way for me to grow my knowledge of the identity space and allowed me to connect with a community of identity professionals. It also enabled me to meet OpenID Foundation members and get to know the different working groups in person. This further encourages me to actively participate in the Foundation’s work and contribute to various open-source and R&D initiatives in the identity space.
Rachelle Sellung
Senior Scientist Researcher in the Identity Management Competence Team at Fraunhofer IAO
After winning the Kim Cameron Award this year, I had the honor of joining and presenting my work at EIC 2023. It was a great opportunity to present my work on User Experience Best and Worst Practices for Digital Wallets. I was pleasantly surprised at how many people were interested in hearing my talk. I received an overwhelming amount of positive and constructive feedback. It was valuable to receive this quality feedback from such a wide range of experts in the field of identity. In addition, I had the opportunity to reconnect and build new connections in the identity space.
From those connections, I am working with OIX in the Global Interoperability working group and will have the opportunity to present at their IdentityTrust Conference 2023 – Building Trust in Identity event on the 28th of September. In addition, I was able to have some interesting and rewarding conversations with the Women in Identity group and am finding more ways to get involved.
Amir Sharif
Researcher in the Security & Trust Research Unit of the Cybersecurity Center of Fondazione Bruno Kessler
When I learned that I had been chosen to be awarded the prestigious Kim Cameron Award from the OpenID Foundation, I felt a mixture of gratitude, excitement, and humility. Although I never met Kim Cameron in person, his impact and influence in the area of digital identification were well-known. The honor of being associated with his name was genuinely beyond words.
My digital identity path began with a strong interest in identity management protocols, which led me to become immersed in its subtle nuances. Over the course of more than 5 years, my passion prompted me to investigate the finer points of identity protocols with a focus on OAuth and OpenID Connect Standards, honing in on the design and security analysis of these solutions.
Stepping into the world of European Identity and Cloud Conference (EIC) 2023, facilitated by the Kim Cameron Award, was a transformative experience. The event brings together some of the brightest minds and industry experts in the field of Digital Identity and Cybersecurity. Throughout the event, I had the opportunity to attend several thought-provoking sessions and participate in enlightening discussions. Engaging with the élite of digital identity, I witnessed the pivotal shift towards a user-centric approach, ripe for exponential growth under the stimuli arriving from different perspectives including those of initiatives and foundations, such as the Open Wallet Foundation and the Kantara initiative.
One of my highlights was the session I presented with Giada Sciarretta and Francesco Antonio Marino about the “Past, Present, and Future of the Italian Digital Identity Ecosystem”. It is the output of a fantastic collaboration between the Center for Cybersecurity of Fondazione Bruno Kessler – FBK and the Italian Government Printing Office and Mint. At the time of writing, we are working on the Italian Technical Specification for Verifiable Credential Issuance and Presentation in the context of the European Digital Identity Wallet. This is exciting and there is still much to do in this fascinating journey.
Perhaps the most essential effect of attending EIC is the ability to physically reach out to many like-minded people outside of academia who are interested in the difficulties of security and privacy in IAM. It left me not only with a bounty of fresh insights, new relationships, and a renewed feeling of purpose but also inspired me to contribute even more to the development of digital identity. I am grateful to the OpenID Foundation for this fantastic opportunity and look forward to working with the identity community.
Charlie Smith
Doctoral Student at the Oxford Internet Institute
Attending Identiverse has been hugely beneficial for my research. As a political philosopher studying digital identity, a vital part of the job involves ensuring that my theorising remains grounded in the practical reality of our industry. By far the best way to do this is attending events like Identiverse—but without the OIDF’s support I would never have been able to fly to Las Vegas from the UK for a conference. I am therefore deeply grateful to the Foundation for its generous funding and mentorship via the Kim Cameron Award. Without the OIDF I would simply not have been able to broaden my network beyond the European context as effectively. The Award has added massive value to my doctoral degree.
This year I met many experts I’d only ever interviewed virtually before, allowing me to put faces to names and form far more meaningful connections. This has already resulted in several useful collaborations that will massively benefit my research and generate public value. Additionally, the access that Award winners get as a result of their association with the OIDF has opened numerous new doors to me. I hope these connections will only deepen throughout my career. Finally, the educational content of the conference has been essential for remaining up to date on developing industry trends and expert knowledge. Overall, I would encourage any academic interested in identity to make use of this fabulous resource.
The Foundation also thanks our conference partners, the European Identity and Cloud Conference (EIC) and Identiverse, as award recipients receive complimentary access to these important industry events as part of their award. Awardees participate in Foundation events at the respective conferences, are introduced to identity domain experts and industry leaders, and are welcomed by our colleagues in other industry organizations.
The post 2023 OpenID Foundation Kim Cameron Award Recipients Share Their Experiences first appeared on OpenID Foundation.
It's time for our third FIDO Alliance Authenticate 2023 conference episode of The Identity at the Center Podcast! Today we share a long-overdue conversation we had with Pamela Dingle, Director of Identity Standards at Microsoft about identity standards, accessibility, and Entra. Episode #241 is available now at idacpodcast.com and in your podcast app.
In this byline Andrew looks at the rise of generative AI and the increase in the threat of phishing, making attacks almost undetectable. For Andrew, AI-based malware tools such as FraudGPT are generating sophisticated phishing campaigns. Instead of simply detecting phishing emails, businesses need to rethink security, with a focus on eliminating passwords. He believes that passwordless authentication, such as passkeys, offers better protection, making credentials invulnerable to attack.
The post Siècle Digital: Generative AI: a revolution that is forcing businesses to rethink the fight against phishing appeared first on FIDO Alliance.
FIDO study shows: passwords to be replaced by better alternatives in the next few years
A study conducted by the FIDO Alliance (Fast IDentity Online) and LastPass shows that passwords in companies will largely be replaced by secure alternatives in the next few years. According to the survey, most IT executives believe that by 2028, less than 25 percent of logins will be through passwords. Already, 95 percent of respondents are using passwordless alternatives such as passkeys to increase security. The majority of IT managers plan to manage passkeys via third-party password managers.
The post Caschys Blog: FIDO study shows: passwords to be replaced by better alternatives in the next few years appeared first on FIDO Alliance.
Phishing attacks are rising in frequency and sophistication, with AI-driven techniques and deepfake voice and video used to trick victims, according to a FIDO Alliance survey. Passwords without two-factor authentication are still common, despite their vulnerability, while biometrics and FIDO-enabled methods like passkeys are gaining favor as more secure alternatives.
The post InfoSecurity: Rising AI-Fueled Phishing Drives Demand for Password Alternatives appeared first on FIDO Alliance.
Email teams being challenged with high cart abandonment rates should push back — the real problem is not bad personalization or copy, but passwords. People are fed up with them. And their behavior reflects it, judging by the 2023 Online Authentication Barometer, a global study by the FIDO Alliance, conducted by Sapio Research. U.S. consumers abandon a purchase and stop accessing an online service because they can’t remember their passwords 4.76 times per day on average, up from 3.71 in 2022 — a 28.30% increase.
The post Media Post: The Password Plague: Consumers Are Abandoning Purchases Out Of Frustration appeared first on FIDO Alliance.
In the fight against a growing number of text-, phone-, and email-based phishing scams, speakers at the FIDO Alliance’s Authenticate Conference in California this week made a case for passwordless security options like the passkey over the oft-hacked password. “Any enterprise that’s using passwords or legacy forms of MFA is taking a gamble they will eventually lose,” Andrew Shikiar, executive director of the industry group known as the FIDO Alliance, told attendees.
The post IT Brew: FIDO passkeys beat passwords in phishing fight appeared first on FIDO Alliance.
DIF's biggest-ever Hackathon is underway!! We've been thrilled by the enthusiastic response from the developer community during the event planning and communications. Now the real work (and fun!) begins.
Come and join us at https://difhackathon.devpost.com/?ref_feature=challenge&ref_medium=discover .
DON'T MISS OUT!
We’re excited to announce the integration of Lit Protocol and Ceramic, enabling developers to store encrypted data on ComposeDB. This integration allows developers to build applications that provide users with more control over their data and privacy.
What is Lit Protocol?
Lit Protocol is a decentralized key management system that allows users to encrypt their data and control who has access to it. Lit Protocol’s access control system gives users granular control over who can access and use their data.
Along with flexible access control features, Lit also offers tooling around multi-party computation (MPC). MPC enables secure reading and writing of data between blockchains and off-chain platforms. One example is programmable signing, which lets developers build distributed serverless functions with event-listening capabilities that trigger programmatic signing, and much more.
Why integrate Lit Protocol and Ceramic Network?
The integration between Lit Protocol and Ceramic allows developers to build applications that provide users with more control over their data and privacy.
The Ceramic protocol is built on decentralized event streams, where user accounts (enabled by decentralized identifiers, or DIDs) cryptographically sign data events and submit them to the network. These events are stored in the Interplanetary File System (IPFS) using the IPLD protocol, and organized into readable streams. Each stream is flexible enough to store many types of content. Therefore, Ceramic is home to a diversity of different data use cases such as user profiles, posts, relations to other entities and more.
Due to Ceramic's open readability, any participating node can read from any stream in the network. Therefore, encrypting data using Lit Protocol and saving it on Ceramic is a common (and necessary) integration for many teams.
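For orientation, here is a minimal TypeScript sketch of what writing a document to Ceramic via ComposeDB can look like. It assumes the @composedb/client package and a hypothetical `Message` model with a single `body` field; the model, field names, generated-definition path and GraphQL operation are illustrative only, not the tutorial’s actual schema.

```typescript
import { ComposeClient } from "@composedb/client";
// Runtime definition compiled from your deployed ComposeDB models (path is illustrative).
import { definition } from "./__generated__/definition";

const compose = new ComposeClient({
  ceramic: "http://localhost:7007", // local Ceramic node
  definition,
});

// Write one document for a hypothetical `Message` model. Before mutating, an
// authenticated DID (the user's decentralized identifier) must be attached to
// the client (e.g. via compose.setDID(...)) so the underlying Ceramic event is signed.
async function saveMessage(body: string) {
  return compose.executeQuery(
    `mutation CreateMessage($input: CreateMessageInput!) {
      createMessage(input: $input) {
        document {
          id
          body
        }
      }
    }`,
    { input: { content: { body } } }
  );
}
```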
A Peek Under the Hood
Given Ceramic’s architecture, Lit Protocol’s access control capabilities allow developers to gate access to content based on highly flexible conditions. For instance, developers may want to allow their users the ability to grant read access to all addresses that hold a specific NFT or ERC20 asset. With Lit Protocol, developers allow their users to assign access control conditions that are associated with the encrypted object when generated. The Lit nodes confirm those conditions have been satisfied using the user’s wallet signature when they request access. This offers a seamless user flow for the end user, whereas (without Lit Protocol) users might otherwise be required to manually decrypt or re-encode their encrypted data for other users they want to grant access to.
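To make that flow concrete, here is a minimal sketch assuming the v2-era Lit JS SDK surface (LitNodeClient, checkAndSignAuthMessage, encryptString, saveEncryptionKey); method names, option shapes and the "serrano" network name have changed between SDK versions, so treat this as an outline rather than copy-paste code. The NFT contract address is a placeholder.

```typescript
import * as LitJsSdk from "@lit-protocol/lit-node-client";

// Gate decryption to wallets holding at least one token from a specific ERC-721
// collection. The contract address is a placeholder, not a real deployment.
const accessControlConditions = [
  {
    contractAddress: "0x0000000000000000000000000000000000000000",
    standardContractType: "ERC721",
    chain: "ethereum",
    method: "balanceOf",
    parameters: [":userAddress"],
    returnValueTest: { comparator: ">", value: "0" },
  },
];

async function encryptForNftHolders(message: string) {
  const client = new LitJsSdk.LitNodeClient({ litNetwork: "serrano" });
  await client.connect();

  // The user signs an auth message with their wallet; Lit nodes later check this
  // signature against the access control conditions when decryption is requested.
  const authSig = await LitJsSdk.checkAndSignAuthMessage({ chain: "ethereum" });

  // Encrypt locally, then register the symmetric key with the Lit network,
  // bound to the conditions declared above.
  const { encryptedString, symmetricKey } = await LitJsSdk.encryptString(message);
  const encryptedSymmetricKey = await client.saveEncryptionKey({
    accessControlConditions,
    symmetricKey,
    authSig,
    chain: "ethereum",
  });

  // The ciphertext plus the key reference are what would be written to ComposeDB.
  return { encryptedString, encryptedSymmetricKey };
}
```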
How to Use Lit Protocol to Encrypt Messages and Save Them to Ceramic
To use Lit Protocol to encrypt messages and save them to Ceramic using ComposeDB, you can follow the tutorial here. This tutorial uses a message board example application to show you how to create encrypted messages using Lit Protocol and save message instances to the Ceramic Network using ComposeDB.
In the tutorial, you’ll learn how to:
– Create ComposeDB schemas and deploy those models on a local Ceramic node
– Authenticate users on Ceramic to allow them to author their own documents
– Encrypt data with Lit Protocol and write mutation queries to save the encrypted data to ComposeDB using GraphQL
– Decrypt data using Lit Protocol based on specific access control logic
What do you need to get started?
As outlined in the tutorial, the only dependencies you’ll need are:
– MetaMask Chrome Extension
– Node v16
Want to Learn More about Building on Ceramic?
Build an AI-powered Chatbot and save message history to ComposeDB by following this ComposeDB Tutorial.
How to Use and Store Composable Attestations with Ceramic and Ethereum Attestation Service
Walk through a tutorial on how to generate attestations (using Ethereum Attestation Service) and store them on ComposeDB.
ComposeDB API Sandbox
Use the ComposeDB API Sandbox to test example queries on a real dataset.
Create a Social App on ComposeDB
The Social App ComposeDB Starter will help you get started building your own social app.
Let us know what you think about our integration with Lit Protocol on the Forum!
This image of the New York City skyline was created with the Leica M11-P and now includes Content Credentials at the point of capture to protect the authenticity of images. At the top right of the image, preview its Content Credentials digital nutrition label including information such as name, dates, changes made and tools used.
By Santiago Lyon, Head of Advocacy and Education, CAI
We are thrilled to announce that industry-leading camera manufacturer Leica is officially launching the new M11-P camera — the world’s first camera with Content Credentials built-in.
This is a significant milestone for the Content Authenticity Initiative (CAI) and the future of photojournalism: It will usher in a powerful new way for photojournalists and creatives to combat misinformation and bring authenticity to their work and consumers, while pioneering widespread adoption of Content Credentials.
With manipulated content and misinformation more widespread than ever, trust in the digital ecosystem has never been more critical. We are entering a new era of creativity, where generative AI is expanding access to powerful new workflows and unleashing our most imaginative ideas. The Leica M11-P launch will advance the CAI’s goal of empowering photographers everywhere to attach Content Credentials to their images at the point of capture, creating a chain of authenticity from camera to cloud and enabling photographers to maintain a degree of control over their art, story and context.
A photograph created with the Leica M11-P and its Content Credential.
Leica has implemented the global Coalition for Content Provenance and Authenticity (C2PA) standard in the M11-P camera so that each image is captured with secure metadata. This means it carries information such as camera make and model, as well as content-specific information including who captured an image and when, and how they did so. Each image captured will receive a digital signature, and the authenticity of images can be easily verified by visiting contentcredentials.org/verify or in the Leica FOTOS app.
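For developers, the CAI’s open-source JavaScript SDK can surface the same provenance information programmatically. The sketch below assumes the c2pa npm package and its createC2pa/read API; asset paths, option names and manifest fields vary by version, so treat it as an illustration of reading a manifest rather than exact API usage.

```typescript
import { createC2pa } from "c2pa";
// The SDK needs its WASM and worker assets; these paths are bundler-specific placeholders.
import wasmSrc from "c2pa/dist/assets/wasm/toolkit_bg.wasm?url";
import workerSrc from "c2pa/dist/c2pa.worker.min.js?url";

async function inspectContentCredentials(imageUrl: string) {
  const c2pa = await createC2pa({ wasmSrc, workerSrc });

  // Read the C2PA manifest store embedded in the image's metadata.
  const { manifestStore } = await c2pa.read(imageUrl);
  const active = manifestStore?.activeManifest;
  if (!active) {
    console.log("No Content Credentials found.");
    return;
  }

  // Roughly the details a viewer like contentcredentials.org/verify displays.
  console.log("Claim generator:", active.claimGenerator);
  console.log("Signed by:", active.signatureInfo?.issuer);
  console.log("Ingredients:", active.ingredients.map((i) => i.title));
}
```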
This is a watershed moment for trust and transparency for photographers and creatives – its significance cannot be overstated. This is the realization of a vision the CAI and our members first set out four years ago, transforming principles of trust and provenance into consumer-ready technology.
With the integration of the CAI framework, Leica will help combat the pervasive issues of mis- and disinformation and preserve trust in digital content and sources. Further, with this integration, recent announcements at MAX, and the broad availability of our free open-source tools powering them, Content Credentials are seeing accelerated adoption around the world, including among photojournalists, news outlets, creative professionals, everyday consumers, social media influencers, artists and innovators.
Adobe co-founded the Content Authenticity Initiative (CAI) in 2019 to help combat the threat of misinformation and help creators get credit for their work. Today the CAI is a coalition of nearly 2,000 members, including Leica Camera, AFP, the Associated Press, the BBC, Getty Images, Microsoft, Reuters, The Wall Street Journal and more, all working together to add a verifiable layer of transparency and trust to content online – via secure metadata called Content Credentials.
Between the tremendous momentum in attracting new members and the growing adoption of Content Credentials by leaders spanning multiple industries, the CAI is ensuring that technological innovations are built on ethical foundations.
Snapshot: How Content Credentials Works
Transparency at the point of capture: We believe the chain of authenticity is strongest at the moment a piece of media is created — being able to verify the circumstances of an image’s origin is the foundation for knowing whether to trust it.
Get credit for your photography work: Content Credentials enable photojournalists and creatives to assert credit for their work, ensuring that wherever one of their images goes, their identity travels indelibly with it.
Bring trust to your digital content with a digital nutrition label: Content Credentials are the digital nutrition label and most widely adopted industry standard for content of all kinds, and the foundation for increased trust and transparency online.
Leica’s M11-P camera will be available globally at all Leica Stores, the Leica Online Store and authorized dealers, starting today. To learn more, please visit: https://leica-camera.com/m11-p
Content Credentials in Adobe Photoshop is enabled and the image from a Leica M11-P is imported. Here, you can preview the Content Credentials identifying an ingredient from the Leica camera signifying a Content Credential exists.
The photograph is significantly altered using the Sky Replacement tool in Adobe Photoshop. This edit becomes part of the file’s Content Credentials.
The edited image is exported from Adobe Photoshop and inspected using Verify (contentcredentials.org/verify), a CAI website that reads and surfaces Content Credentials where consumers can inspect changes made to an asset.
Receive updates and community news. Consider joining the CAI as a member.
Stay connected on LinkedIn and Twitter.
Explore the CAI’s open-source tools, powering Content Credentials, verifiable details or digital “nutrition labels” about how content was created.
Newark, NJ, October 25, 2023 – Edge is transforming the way the education community and public sector organizations procure critical services with an expanded series of offerings from an ecosystem of highly qualified service providers, made available via its EdgeMarket Cooperative Pricing System. The new suite of IT Professional Services Contracts launches this week in direct response to the growing needs expressed by Edge members and EdgeMarket participants.
Service categories under the new IT Professional Services Contract include:
– Data Center Support, Network Engineering (LAN), Network Operations (NetOps), Installation, and Configuration
– Cloud Migration Services
– Master Data Management (MDM) and Data Warehouse Services
– Physical Learning Space Installation and Configuration
– Software System Selection Services
– Technical Project Manager
– ERP and CRM Implementation, Upgrades, and Integrations
– Other Implementations, Upgrades, and Integrations
Explains Dan Miller, Associate Vice President, EdgeMarket, “The purpose of this powerful set of master agreements is to provide access to a variety of IT Professional Services on an Indefinite Delivery Indefinite Quantity (IDIQ) (or “Open Purchase Order”) basis to support the needs of Edge, Edge Members, and EdgeMarket Participants.” Continues Miller, “We look forward to some very exciting, multi-dimensional growth for Edge, EdgeMarket, and our participants around the country through this powerful vehicle.”
EdgeMarket is designed to support the economic advancement of its members via new services and solutions, best price procurements, and taking on the heavy lift of safe, simple, and smart public procurements, nationally with public and private education institutions, state and local government, healthcare providers, and nonprofit organizations.
Learn how your institution can use the ever-expanding range of IT Professional Services by visiting the EdgeMarket procurement portal where you will find access to the complete listing of provider awardees by category.
Learn About the IT Professional Services Contract
The post EdgeMarket Expands Procurement Offerings with Far-Reaching IT Professional Services Contracts appeared first on NJEdge Inc.
Passwords are inherently flawed, so tech companies are turning to more secure logins that just require your face or fingerprint.
The biggest tech companies want you to ditch passwords for passkeys. You’re probably wondering: What even is a passkey? And do I have to use it?
Read the article here.
The post The Wall Street Journal: Google and Apple Want You to Log In With Passkeys. Here’s What That Means. appeared first on FIDO Alliance.
With the DIF Hackathon starting on October 26, DIF caught up with Toby Bolton, Founder of .zkdid™ who is looking to collaborate on his innovative use case for Decentralized Identifiers (DIDs) and other DIF work items.
What is .zkdid?
.zkdid is an acronym for Zero Knowledge Decentralized Identity. It’s a Decentralized Domain Name System (dDNS) protocol intended to empower the public. It’s also a registered trademark and the name of the project, intended as a mark of trust, and it is a prospective Web3 top-level domain (like .com).
The purpose is to establish a “.zkdid” Zero Knowledge (ZK) decentralized identity protocol for Web3 that is compliant with government Know Your Customer (KYC) requirements.
Each address with the .zkdid extension is a unique identifier in the form of a Non-Fungible Token (NFT) which is resistant to duplication. The goal is to use these identifiers to create a ZK Proof of personhood registry and NFT-gated community that is Sybil resistant. Sybil resistance mechanisms aim to ensure that each participant in a network has a unique and singular identity.
This creates the basis for a secure, publicly owned/governed, decentralized, zero-knowledge Domain Name System (DNS).
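The protocol’s details are still being worked out, so the following is only a hypothetical TypeScript sketch of the one-person-one-token idea described above; the type names and registry shape are invented for illustration and do not correspond to any existing .zkdid API.

```typescript
// Hypothetical shape of a .zkdid registry entry: one NFT-backed identifier per person.
interface ZkdidEntry {
  name: string;              // e.g. "alice.zkdid"
  tokenId: string;           // the NFT that *is* the identity
  proofOfPersonhood: string; // opaque zero-knowledge proof, checked before registration
}

class ZkdidRegistry {
  private byName = new Map<string, ZkdidEntry>();
  private usedTokens = new Set<string>();

  // Sybil resistance in miniature: reject a duplicate name or a reused token.
  register(entry: ZkdidEntry): boolean {
    if (this.byName.has(entry.name) || this.usedTokens.has(entry.tokenId)) {
      return false;
    }
    // A real protocol would verify the zero-knowledge proof of personhood here,
    // on-chain or via a trust framework, before accepting the entry.
    this.byName.set(entry.name, entry);
    this.usedTokens.add(entry.tokenId);
    return true;
  }

  resolve(name: string): ZkdidEntry | undefined {
    return this.byName.get(name);
  }
}
```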
How did the .zkdid initiative come into being, and why are you doing it?
First and foremost, I worry about the future of my daughters, and how these novel identity technologies could possibly erode human rights and privacy if not deployed with due diligence and care. So I wanted to find the solution to such problems. I’ve been aware for quite a while that identity is a great use case for blockchain. Then zero-knowledge proofs (ZKPs) became a hot topic. That’s when I got onto the .zkdid path.
The idea for .zkdid initially came from an online chat where we were discussing the Ethereum Name Service (ENS). The question I had was, “what is the top-level domain of Web3?”, which sparked quite a debate.
I believed the top-level domain for Web3 would likely be an identity protocol, and possibly even .zkdid.
The aim is to connect the public to an identity layer that is not centrally controlled. The details of this decentralized governance identity protocol are still being researched.
How will .zkdid be different?
There are quite a few blockchain name services. I wanted to improve on what I had found. ENS is likely the best known dDNS protocol but uses tokenomics for its governance and can be owned anonymously which in my opinion doesn’t work for a proof of personhood protocol. In my view I’m developing an alternative. What I’m building is also an alternative to other proof-of-personhood protocols in the market like Worldcoin and Proof of Humanity.
The point of .zkdid is to create a decentralized DNS layer, one that is publicly owned by a not for profit foundation, which does not rely on tokenomics that can manipulate leadership of the protocol. These are areas I saw in other protocols that I believe need improving.
With .zkdid, the identity is the token. Your .zkdid domain name will enable you to prove who you are in the Web3 digital domain without needing to reveal any personal information about yourself. Each person is one token. You are the token.
How do you see this coexisting with current Identity and DNS systems?
DNS (Domain Name System) is one of the most widely used internet protocols, most of us use it daily and it’s possible that one day .zkdid can connect the Identity industry to a publicly owned dDNS protocol.
When I registered .zkdid, I knew it was important that it was secured from every dDNS registrar. Once I registered the domain name, I submitted a Trademark application in the United Kingdom, which was promptly published.
ICANN do not govern decentralized DNS domain names, so a strategy is needed to protect them. However, I understand ICANN are looking into blockchain technologies and it would be a pleasure to have them involved in the project. We are also in line to become a Web3 Domain Alliance (W3DA) member; it’s an important initiative that intends to act as a foundation to inhibit possible dDNS collisions. We need to bring people together. No one should be excluded; .zkdid is meant to be a public good after all.
What are the next steps for the project?
Recently, I’ve been collaborating with a team at the Fraunhofer Institute that is looking to use the traditional DNS system to create a trust registry of all humans and IoT devices. My vision is to use dDNS and for the public to be issued a .zkdid identifier through the eIDAS initiative. This means if you are a European citizen, your .zkdid identifier could be connected to your government issued identity.
This was not the initial objective but as things progressed I learned the only way to achieve our ambitious goal is to integrate with government initiatives like eIDAS.
I’ve now managed to build a great team of enthusiastic developers to work with me on this concept during the DIF Hackathon. To work within the Hackathon rules, this first step is likely going to take advantage of the open-source Iden3 toolkit. I’m going to suggest we use that, plus relevant DIF work items such as the Universal Resolver, BBS Signature Scheme, DIDComm and Sidetree Protocol. We need to focus on making it multi-chain and interoperable.
I’m really pleased that Polygon ID is one of the sponsors of the Hackathon, because it was the PolygonID/Iden3 wallet that was the original inspiration for this project. When I first encountered PolygonID I thought it was impressive, very abstract but really clever technology.
During the Hackathon we will develop this concept into an initial functioning prototype. If the people who join me for the Hackathon subsequently want to team up to form a .zkdid Working Group at DIF to develop the specification, that would be brilliant.
I’m looking for team members who are pro freedom and human rights, who understand the importance behind decentralized technologies, and have an optimistic and balanced outlook.
Why have you chosen DIF to host this initiative?
The opportunity came out of the blue. When I first joined the DIF I was introduced to Andor Kesselman, who chairs the Technical Steering Committee. Andor said “You’ve got a great vision, now let’s start from the beginning.” He asked me to strip down the concept to the basics needed to commence a DIF working group.
And stripping it back to the bare bones we find a zero-knowledge decentralized DNS system. I always intended .zkdid to be a global mark of trust and an open standard that works with any kind of wallet, identity network and protocol. Now I’m in the DIF, I’ve got an opportunity to be part of the process of defining the specification that may actually become a standard.
I feel like I've been thrown out of the frying pan into the fire, which is a bit overwhelming but exactly what I needed!
What should people do if they are interested in getting involved?
Firstly, please provide feedback on the idea by contacting Toby on Slack, or via email at id@zkdid.io .
The next step is to join forces with Toby to participate in the DIF Hackathon.
Finally, please let Toby know if you are interested in working with him to establish a new .zkdid Working Group at DIF.
By Francis Beland, Executive Director, OASIS Open
Quantum computing & technologies present unique challenges that are not addressed by classical computing standards.
The development of open standards for quantum technologies is crucial for ensuring the interoperability, security, and reliability of quantum computing systems and facilitating collaboration and innovation across the quantum computing ecosystem.
One of the key challenges in developing open standards for quantum technologies is that quantum computing fundamentally differs from classical computing. Quantum computing relies on the principles of quantum mechanics, which allow for the creation of superposition states and entanglement, enabling quantum computers to perform certain types of computations much faster than classical computers. However, this also means that quantum computing systems require different hardware, software, and communication protocols than classical computing systems.
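For readers newer to the terminology, the standard textbook notation makes the contrast with classical bits concrete (this is generic quantum-information notation, not tied to any particular standard):

```latex
\begin{aligned}
\text{A qubit in superposition:}\quad &
  |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 \\[4pt]
\text{Two entangled qubits (a Bell state):}\quad &
  |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)
\end{aligned}
```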
To address these challenges, standards bodies should be, and are, working to develop standards for quantum technologies. For example, the Quantum Industry Consortium and the Quantum Economic Development Consortium are working to develop standards and best practices for the quantum industry. But are those standards ‘open’, or should they be?
In addition to developing standards for quantum computing, it is also important to develop open standards for other quantum technologies, such as quantum cryptography and quantum sensing. These technologies also present unique challenges that require new standards and best practices to ensure their reliability and security.
Overall, the development of open standards for quantum technologies is crucial for ensuring the growth and success of this exciting and rapidly evolving field, and will play a critical role in enabling the development and adoption of quantum technologies in a wide range of industries and applications.
The post Do quantum technologies need a new set of standards? appeared first on OASIS Open.
The Identity at the Center podcast was live at the FIDO Alliance Authenticate 2023 conference last week. Our second episode is a conversation we had with Stephanie Shuckers, Professor at Clarkson University and Director of the Center for Identification Technology Research about biometrics. Episode #240 is available now at idacpodcast.com and in your podcast app.
Our recent partnership with the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) offers a lens into the workings of an organisation that is taking proactive steps to strengthen their internal practices.
Based in Uganda, CIPESA focuses on decision-making that facilitates the use of ICT in support of good governance, human rights and livelihoods.
Within the framework of our 6-month Matchbox partnership, CIPESA had three main priorities:
– Strengthening their digital communications and documentation infrastructure through guidance on implementation of self-hosted tools that enable autonomy and ownership.
– Support in platform implementation through guidance on user testing, roll-out and maintenance plan design.
– Capacity building around responsible data management and organisational security.
What we discovered
Our assessment of CIPESA’s current landscape of tools revealed a diverse technological framework consisting of a mix of tools, from conventional email and word processing to collaborative platforms, incorporating a blend of cloud-based and localised storage.
We also looked at CIPESA’s security frameworks and practices, which encompass multifaceted strategies, realised through a mix of practices and tools.
CIPESA are cognisant of research ethics and data handling and were interested in strengthening their Responsible Data Management and data handling practices to further ensure protection of data subjects and sensitive data. They were also interested in strengthening the security of their websites, email, and social media platforms.
Introducing contextual risk assessments, self-hosted tools, and Responsible Data frameworks
Recognising the importance of keeping internal communications private, we suggested a shift to a self-hosted video conferencing tool (BigBlueButton). This platform would also provide resilience against potential restrictions and avoid over-reliance on a third-party platform.
To further streamline communication with its networks, CIPESA also looked at the adoption of a tool that would foster tailored communications with its large network of stakeholders. One of the platforms we recommended was SuiteCRM, which is open source and can be self-hosted, aligning with CIPESA’s values. It also has core features that the organisation is looking for, including creating groups of key stakeholders and setting up consistent targeted communications with them through the platform.
We also recommended a thorough contextual risk assessment, to offer clarity on pressing security concerns and ensure that a consistent internal policy is adopted to mitigate these risks. This would also aid in the implementation of tools and practices that will strengthen internal security – such as the adoption of 1password for password management.
Aligning data management with organisational valuesCIPESA wanted to ensure alignment of data management practices with both organisational values and regional data protection regulations. Drawing from resources like The Engine Room’s Becoming RAD! guide, we provided guidance about data storage duration, disposal methods, and more.
CIPESA’s work environment relies on various tools and platforms, from cloud-based tools to physical files. Amidst this diversity of platforms, data responsibility – the principle that information collected and generated should be handled responsibly and with care, and that the organisation should retain ownership of its data on the platforms it uses – becomes especially important. This value highlights the importance of self-hosted platforms, and of choosing platforms aligned with this principle.
Our assessment helped identify areas and recommended steps to enhance both efficiency and security. CIPESA’s approach, which involves a mix of password protection, two-factor authentication, and data encryption, stands out as a model for organisations looking to navigate the digital space safely.
Contact us
“The Engine Room’s Matchbox program has been instrumental in helping us identify and address various questions regarding how we handle and manage the data we collect. It has also reaffirmed the need to enhance internal practices, especially in an ever-shifting digital terrain and in a growing community of practice, when it comes to the work that we do in advancing digital rights and governance in Africa”
If you have questions about integrating tech and data more efficiently and securely into your social justice work, get in touch!
Read more about our Matchbox programme
Schedule a no-fee half-hour call with our team!
Photo by Ben Allan on Unsplash
The post Strengthening foundations with CIPESA first appeared on The Engine Room.
In our previous post, we outlined a different way to do recognition. It’s a way which democratises the means of credentialing, and values each human as a part of community networks. Go and read that first, if you haven’t already.
Context
The original plan for Part 2 of this series was to show a real-world example of completing the last three steps of a 10-step process, namely:
– Come up with the metadata for Open Badges based on your skills
– Create badges using an online platform
– Ask contacts to endorse your badges
However, after several conversations with badging platforms, we’re still a little early for that. Although they have plans for adopting v3 of the Open Badges standard, most are not currently planning to use it in the way we’re discussing in this series of posts.
Workflow
We see a need for a platform that aids individuals in identifying their skills and attributes, supported by their networks and communities. Mapping these attributes against relevant skills taxonomies can make them visible and endorsable, turning them into a form of ‘currency’.
One of the people we spoke to asked how we would build the system we envisage. We’ve come up with the following workflow (v0.1) to show how that might work technically, and below that there are wireframes showing a basic user journey.
One important thing to note is that by creating a standardised workflow, even the most introverted person with the smallest network can still find value in the approach.
A default process, customisable for endorsements, can boost user confidence, as not all are comfortable seeking endorsements.
Platform
The basic wireframes below show the first part of the workflow and how the individual would get started.
(Note: in this example we’re providing basic wireframes for the simplest version of the workflow)
User onboarding
Users sign up, answer three questions, and provide contact details for feedback. Once done, the system emails the contacts for their input.
Feedback
Contacts are asked to answer three questions similar to those posed to the user.
Synthesised Results
Users review their results, synthesised by AI. They then map their attributes against skills taxonomies of their choice.
Request for Endorsement
The next step is for the user to choose which of their attributes they’d like to have endorsed. They can then personalise an endorsement request to their contacts.
The system then emails contacts again, asking if they’re willing to endorse those specific attributes.
If they consent, then they are asked which attributes they would like to endorse.
They can also optionally choose to provide written or multimedia evidence.
Verified Profile
Users complete their verified profile, which can include any evidence provided by their contacts.
This verified profile can be shared online and updated over time. This includes the attributes, endorsements, and evidence it contains.
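To make the workflow above more concrete, here is a hypothetical TypeScript sketch of the data model behind the onboarding, feedback, endorsement and verified-profile steps; it describes the system we envisage rather than any existing platform, and every name in it is illustrative.

```typescript
// Hypothetical data model for the recognition workflow described above.
interface Endorsement {
  endorser: string;   // contact's name or identifier
  consented: boolean; // the contact agreed to endorse this attribute
  evidence?: string;  // optional written or multimedia evidence (e.g. a URL)
}

interface Attribute {
  name: string;       // e.g. "facilitation"
  taxonomy?: string;  // skills taxonomy the attribute is mapped against
  endorsements: Endorsement[];
}

interface VerifiedProfile {
  owner: string;
  attributes: Attribute[];
  updatedAt: Date;
}

// The user picks which attributes to have endorsed, and the system drafts
// a personalised request for each contact.
function endorsementRequests(profile: VerifiedProfile, chosen: string[], contacts: string[]) {
  const selected = profile.attributes.filter((a) => chosen.includes(a.name));
  return contacts.map((contact) => ({
    to: contact,
    attributes: selected.map((a) => a.name),
    message: `Would you be willing to endorse these attributes: ${selected
      .map((a) => a.name)
      .join(", ")}?`,
  }));
}
```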
Conclusion
The possibilities for this kind of approach are far-reaching, as it brings a human-centric approach to credentialing. As you can see by comparing the flowchart and the wireframes, although there are some reasonably complex things happening behind the scenes, what is presented to the user is relatively simple and straightforward.
In this post, we’ve outlined the benefits to end users. But there are also benefits to endorsers (who could themselves earn badges). Perhaps that’s the subject for another blog post…
We would like someone to build this workflow. If you have the money, talent, or time, please get in touch: hello@weareopen.coop
Using Open Recognition to Map Real-World Skills and Attributes was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
The demand for recycled content and sustainable business practices continues to grow rapidly. Join us as we chat with Jeremy Douglas, Director of Partnerships at Delterra, to learn more about reducing plastic waste, creating enhanced waste streams, and utilizing standards to track plastics throughout the supply chain.
Key takeaways:
Traceability and transparency are crucial in the global effort to address plastic pollution and promote a circular economy. Delterra is focusing on establishing traceability for recycled plastic from collection to sale. By ensuring accurate measurement and preventing double counting, traceability can boost confidence in using recycled materials and support brands in tracking the origin of recycled plastic in their products.
Consumer pressure and advocacy have played a significant role in driving change in the waste and recycling systems. This has pushed brands to take action and seek alternatives, such as using more recycled materials. With consumers and stakeholders informed, companies are motivated to improve their plastic footprints and adopt more sustainable practices.
A global plastics agreement is being negotiated among 175 countries to address plastic pollution effectively. The agreement aims to create a binding document or instrument to end plastic pollution, similar to the structure of the climate change agreement. Themes such as eliminating unnecessary plastics and considering the impact on the informal sector are being discussed.
Connect with GS1 US:
Our website - www.gs1us.org
Connect with guests:
Follow Jeremy Douglas on LinkedIn
If you’re well-versed or new to Web3 and considering a valuable contribution, Elastos offers a unique opportunity. The project not only presents a novel approach to blockchain consensus but also provides strong financial incentives for participating as a validator. By coupling Bitcoin’s security, community governance, and a DAO Council’s integrity, Elastos has created a validator ecosystem ripe for efficiency and scalability. Here’s what you stand to gain by joining this innovative network as an Elastos Consensus BPOS Validator!
The Currency: ELA
$ELA is the universal asset within Elastos, underpinned by a deflationary model. The currency has a current circulating supply of 21,364,394 ELA with an issuance cap set at 28,219,999 by 2108. ELA comes from the Elastos Mainchain, a blockchain merge-mined with Bitcoin. The next issuance halving is due in December 2025, supporting an increased potential for earning more ELA now rather than later.
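From those two figures, the amount of ELA still to be issued before the cap is reached is a straightforward subtraction:

```latex
28{,}219{,}999 - 21{,}364{,}394 = 6{,}855{,}605 \ \text{ELA remaining to be issued by 2108}
```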
The Incentives for Elastos Consensus BPOS Validators
Becoming a BPOS validator means working with Bitcoin miners to validate transactions on the Elastos Mainchain, with distinct advantages. A full step-by-step guide can be found here.
“Setting it up according to the guide was quick and easy. I’m hosting it on a Contabo BOS at $6/month. It’s been running flawlessly since the beginning of BPOS. Rewards are distributed instantly, and on top of that, I regularly receive early rewards from the Elastos Foundation”. – Iggis Popis, BPOS Validator
Staking Rights
In Elastos’ BPoS system, participants stake ELA tokens for set periods using the Essentials Wallet and receive equity tokens to vote on validators. This allows stakers to also earn ELA rewards. Higher yields are found with nodes that have fewer votes, and periodic re-voting is necessary once pledge times expire to continue earning rewards, which helps rotate validator nodes. A full step-by-step guide can be found here.
Special Staking Incentive Program
The Elastos Foundation (EF) has launched a program allocating 1 million ELA for staking on community nodes, designed to aid nodes in attaining the 80,000 staking rights required for activation. This initiative further distributes staking profits to stakers and validator nodes monthly, enhancing your potential earnings.
Staking Rewards Partnership
Elastos has teamed up with Staking Rewards to highlight the advantages of our consensus system. The Elastos Growth Team will be present at the Staking Summit in Turkey on November 10th. They aim to connect with attendees and showcase the unique features of Elastos’ validator system, as well as the opportunities it offers for earning ELA. Other speakers at the event will include Justin Sun, Founder of Tron, and Anton Bukov, Co-founder of 1inch.
The Power of Elastic Consensus
So how does everything work and come together? Elastos employs a threefold consensus mechanism. A comprehensive technical overview can be found here.
Pioneering Innovation
Five years since its launch, Elastos has made remarkable strides:
Partnerships include Tencent Cloud and Alibaba Cloud, with ecosystem teams like Elacity building a digital capsule marketplace for Web3 digital asset monetisation.
Become an Elastos Consensus BPOS Validator today!
The Elastos ecosystem offers not only consensus opportunities for validators and stakers but also possibilities beyond the Mainchain. Soon, stakers will additionally be able to mint BPoS NFTs, receipts that enable secondary markets and trading opportunities through smart contracts on the Elastos Smart Chain. Follow Elastos to stay up to date!
Interested in becoming a validator? Check out our comprehensive guide for a step-by-step process on becoming a validator today.
Supporting Links:
Instruction on How to Deploy a Node
BPoS Nodes Registration Guide
Elastic Consensus Technical Overview
BPoS Staking and Voting Guide
Developer Portal
Elastos Essentials for Android
Elastos Essentials for iOS
The post Seize the Opportunity: Become an Elastos Consensus BPOS Validator Today! appeared first on Elastos.
What’s Sigs Got To Do With It?
ABSTRACT: Schnorr signatures have been a long time coming, but now that they’re finally here, they open up broad new cryptographic frontiers, including the improved privacy of signature aggregation and blind signatures, the improved power of threshold signatures and adapter signatures, and much more. This article explores some of those frontiers and also offers a simplified look at how Schnorr works.
I was introduced to prime numbers in the third grade, when I “discovered” them accidentally, and a great teacher encouraged me to explore them further. Conversely, I learned about finite fields between my Junior and Senior years of High School, when I was offered the chance to take a college course over the summer and had to find something that didn’t depend on Calculus, which I hadn’t yet taken.
I didn’t know it at the time, but I was delving into the mathematical foundations of two powerful signature systems: RSA and Schnorr.
In the world of digital communication, a pressing question often arises: how can we trust and authenticate digital messages? The answer is to use digital signatures.
Digital signatures function similarly to a seal of authenticity on a physical document, but for digital content. They assure the recipient of the message’s integrity and the sender’s authenticity. There are many methods for signatures, with RSA and Schnorr being the best known. They mostly rely on public-key cryptography: a signer uses their private key to create a signature, which can then be verified with the corresponding public key.
The Schnorr Legacy
I discovered RSA, which secures its signatures using prime numbers, in college. But it was when I later encountered Schnorr, built on finite fields, that I met the first cryptography that I truly fell in love with.
Claus-Peter Schnorr introduced Schnorr signatures in the early 1990s. They were not the first digital signature scheme, but they had a strong mathematical foundation: they were provably secure under specific assumptions. They were invented later than other systems like RSA, but they provided a number of new benefits, including:
Compactness — Schnorr signatures are small, even when there are multiple signers.
Signature Aggregation — Multiple signatures can be aggregated together and look exactly like a single signature.
Faster Verification — Because of their small size (and the fact that aggregating multisigs doesn’t increase that size), Schnorr signatures can be verified quickly.
Threshold Signatures — Multisignatures requiring a certain quorum of participants are possible.
Blind Signatures — Signatures can be made while hiding the content.
Adapter Signatures — Signatures can be hidden by other values.
Beyond the Fields We Know
Beyond all of their new benefits, I fell in love with Schnorr signatures because they were elegant. The aggregation of signatures was done with simple mathematical operations. You added two signatures together and they were aggregated! Or, you could subtract any signature from an aggregate sum! But perhaps I shouldn’t say “simple” mathematical operations: Schnorr signatures depend on finite field math.
Think of finite fields as domains where numbers play by a unique set of rules. Each field is constrained to only contain certain numbers, usually defined by the letter p. The field then defines basic mathematical operations such as addition and multiplication such that if the operands are within the finite field, then the result of an operation on those operands will also lie within the finite field.
As it happens, the elliptic curves used for most modern cryptography are defined over finite fields. The finite fields act as a modulo to the curve, ensuring that the results of all operations on the curve fall within the field.
Bitcoin’s secp256k1 works within the bounds of:
p = 2^256 - 2^32 - 977
Curve25519, popular in key exchanges, operates around:
p = 2^255 - 19
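To make the “results stay inside the field” idea concrete, here is a minimal Python sketch (not from the original article): every operation is reduced modulo a prime, so the output always lands back in the field. The prime p = 13 and the sample values are made up purely for readability; real curves use the ~256-bit primes shown above.

```python
# A toy finite field: every operation is reduced modulo a prime p, so results
# always stay inside the set {0, 1, ..., p-1}. Real curves like secp256k1 use
# the ~256-bit primes listed above; p = 13 here is just for readability.
p = 13

def fadd(a: int, b: int) -> int:
    """Field addition: ordinary addition, then reduce mod p."""
    return (a + b) % p

def fmul(a: int, b: int) -> int:
    """Field multiplication: ordinary multiplication, then reduce mod p."""
    return (a * b) % p

print(fadd(9, 7))   # 16 mod 13 = 3  -- the result wraps back into the field
print(fmul(5, 8))   # 40 mod 13 = 1
```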
Schnorr like an Eagle
When I was working as CTO of Certicom, I contacted the holders of the Schnorr patent about licensing it, but they wanted impossible terms. I assume it was the same for others who tried to use the cryptography at the time.
Those patent restrictions ultimately held back Schnorr signatures from broader acceptance until their expiration in 2008. Even afterward, there wasn’t a quick move to Schnorr, despite its advantages, because ECDSA was mature, while there wasn’t yet any good code for Schnorr. I sometimes wonder where we might be today if Bitcoin had mined its Genesis block four or five years after the expiration of the Schnorr patents, rather than a scant 11 months later.
When I was working at Blockstream, following the expiration of the Schnorr patent, I pestered our engineers about Schnorr. I was happy to then be at the conference where the MuSig multiple-signature system was first rolled out, using Schnorr and a multi-stage signing system that addressed some of the challenges of the powerful signature technology. That first scheme’s security was quickly broken, but it was just a first step before the release of next-generation Schnorr systems such as MuSig2 and FROST.
It’s taken a few decades, but due to their simplicity and robustness, we are at last in a future where Schnorr signatures are possible. We can finally implement the many possibilities that I couldn’t even imagine when I first met finite fields in that summer class years ago.
As a new technology, Schnorr has pitfalls and “footguns” that even now might not be fully understood. They need to be carefully reviewed before the technology can be safely used.
Some of the earliest challenges of Schnorr have included:
Naive Aggregation: Merging signatures without verification is risky. Combining known good signatures with an adversary’s signature can actually nullify a signature as a whole because of the additive nature of Schnorr signatures. Advanced measures can prevent such issues.
Replay Attacks: Without unique transaction data, a valid signature can be maliciously reused. The solution? Incorporate distinct transaction information within signatures.
Malleability: An adversary might slightly alter a good signature while maintaining its validity. Contemporary protocols ensure signatures are consistent and resistant to changes.
Today, two major solutions address the challenges of Schnorr signatures, allowing users to take advantage of their many features:
MuSig2: A scheme focusing on multi-signatures. It addresses the issue of naive aggregation by ensuring individual participants cannot manipulate the combined public key. It requires only two rounds of communication between the parties. However, it restricts itself to N-of-N signatures, disallowing threshold schemes (except perhaps in a variant that creates a Merkle tree of MuSigs). Major advantages include accountability and noninteractive key generation.
FROST: A versatile Schnorr threshold signature strategy, allowing a quorum to sign without revealing which members did so, increasing privacy and efficiency. However, it requires the parties to complete three rounds of communication before use. Major advantages include the ability to use Shamir protocols for refreshing shares, repairing lost shares, and enrolling and disenrolling participants. The Zcash Foundation implementation of FROST has recently received a successful security assessment. Their Understanding FROST article is also a nice, brief overview of the technology.
Schnorr in a Nutshell (8 Bits)
The following examples offer a layman’s explanation of Schnorr and its operations using an 8-bit model, which is to say an imaginary finite field that ranges from 0 to 255. This sort of finite field is easily managed by applying a modulo to all the operations. In this example, addition and subtraction operations would be finalized by applying “modulo 256”, ensuring that the result remains within the field.
Obviously, practical cryptographic applications work with much larger numbers, which helps to ensure their security, but these small numbers can make examples much easier to read and understand.
Generating Public Keys
For any signature, the first requirement is generating a public key from a secret private key. Here’s a simplified 8-bit example:
Private Key: Start with a private key; for example, Alice’s private key is 50.
Constant: Multiply it by a constant number, say 5.
Public Key: The result (50 x 5 = 250) becomes Alice’s public key.
Seems straightforward, right? The catch is that the mathematical definitions of the actual finite fields and the operations used by Schnorr ensure that division is almost impossible: if someone only has the public key (250) and the constant (5), working backwards to figure out the private key is far more challenging than going forward, because of this special math.
This difficulty of reversal is what makes this cryptography “asymmetric”, which is crucial for security. It’s like baking a cake: once you have the final product, it’s very difficult to determine the exact ingredients and their quantities.
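As a rough sketch of the toy model above (and only the toy model: real Schnorr derives public keys by multiplying a point on an elliptic curve by the private key, not by multiplying small integers, and it is anything but trivially reversible), the example can be written in a few lines of Python:

```python
# Toy 8-bit model from the article: everything is reduced mod 256, and the
# "special math" is stood in for by multiplication by the constant 5.
# This is illustrative only -- it is trivially reversible and not secure.
FIELD = 256   # the imaginary 8-bit field
G = 5         # the article's constant

def toy_public_key(private_key: int) -> int:
    """Derive the toy public key from a private key."""
    return (private_key * G) % FIELD

print(toy_public_key(50))   # Alice's example: 50 * 5 = 250
```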
Creating Signatures
Imagine Alice wants to send Bob a secure message. Here’s how she would do it using the Schnorr signature system:
Private Key: Alice has a secret number, which she never shares with anyone. Let’s say this number is 50.
Message: Alice has a message. It could be anything, from a simple greeting to a bank transaction. In the digital world, this message can be represented as a number.
Random Figure: Alice picks a random number for each new message she wants to sign. Let’s say she picks 20 for this particular message.
Signature Calculation: She then uses her private key, the random number, and her message to do some special math, which creates a unique signature for that message. In our 8-bit example, this math might lead to a signature like 150. This signature is unique to the message and Alice’s private key; even a small change in the message would produce a vastly different signature.
Sending: Alice sends her message and the signature (150) to Bob, but keeps her private key and random number secret.
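Here is one way the “special math” of signing could look in the toy model. This is a sketch, not the article’s exact calculation: it borrows the standard Schnorr shape (s = nonce + challenge × private key), uses SHA-256 to squeeze the message into a one-byte challenge, and produces a pair (R, s) rather than the single number 150 used in the prose. All names and values beyond Alice’s 50 and 20 are illustrative.

```python
import hashlib

FIELD, G = 256, 5   # toy 8-bit field and constant from the article

def challenge(R: int, pub: int, message: str) -> int:
    """Hash the nonce commitment, public key, and message down to one byte."""
    data = f"{R}|{pub}|{message}".encode()
    return hashlib.sha256(data).digest()[0]   # a value in 0..255

def toy_sign(private_key: int, nonce: int, message: str) -> tuple[int, int]:
    """Toy Schnorr-style signature: s = nonce + e * private_key (mod 256)."""
    R = (nonce * G) % FIELD                  # public commitment to the nonce
    pub = (private_key * G) % FIELD
    e = challenge(R, pub, message)           # ties the signature to the message
    s = (nonce + e * private_key) % FIELD
    return R, s                              # the signature is the pair (R, s)

print(toy_sign(50, 20, "Hello Bob"))         # Alice's key 50, random nonce 20
```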
Verifying Signatures
Now, let’s say Bob receives Alice’s message and the signature she produced. To be certain that it’s truly from Alice and hasn’t been tampered with, he needs to verify the signature:
Public Key: Everyone knows Alice’s public key (like a public email address). In our example, this is 250. Alice would’ve previously derived this public key from her private key using some more special math and made it publicly available.
Verification: Using the signature Alice sent (150), her public key (250), and the original message, Bob does his own math. (Though a private key can’t be easily calculated from a public key, it is easy to determine that a signature matches a known public key using Schnorr’s special math.) If Bob’s calculations match the signature Alice sent, then he knows two things for sure: the message truly came from Alice (because she’s the only one with the private key that matches the public key that Bob tested against); and the message hasn’t been tampered with (because the signature matched perfectly).
Outcome: Since Bob’s math using the public key and the message matched Alice’s signature, he’s assured of the message’s legitimacy.
In a nutshell, Alice uses her private key to “stamp” a message, and Bob uses her public key to verify that “stamp.” The beauty of Schnorr signatures is that these calculations are simple (compared to other digital signature methods), yet remain secure, making them a favorite in many cryptographic applications.
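Continuing the same sketch, verification checks that the signature is consistent with Alice’s public key and the message. The check below mirrors the standard Schnorr equation (s × G should equal R + e × public key); again, the construction is a stand-in for the article’s “special math”, not its literal arithmetic.

```python
import hashlib

FIELD, G = 256, 5

def challenge(R: int, pub: int, message: str) -> int:
    data = f"{R}|{pub}|{message}".encode()
    return hashlib.sha256(data).digest()[0]

def toy_sign(private_key: int, nonce: int, message: str) -> tuple[int, int]:
    R = (nonce * G) % FIELD
    e = challenge(R, (private_key * G) % FIELD, message)
    return R, (nonce + e * private_key) % FIELD

def toy_verify(pub: int, message: str, sig: tuple[int, int]) -> bool:
    """Accept only if s*G == R + e*pub (mod 256), which holds exactly when
    s was built with the private key behind pub."""
    R, s = sig
    e = challenge(R, pub, message)
    return (s * G) % FIELD == (R + e * pub) % FIELD

alice_pub = (50 * G) % FIELD                     # 250, as in the article
sig = toy_sign(50, 20, "Hello Bob")
print(toy_verify(alice_pub, "Hello Bob", sig))   # True
print(toy_verify(alice_pub, "Hello Eve", sig))   # almost certainly False
                                                 # (the 1-byte toy challenge can collide)
```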
Creating Sequential Multisigs
Imagine a scenario where endorsements need to be made in a particular order, much like a relay race where one athlete passes the baton to the next. This is a simple form of multisignature that any signature system is likely to support.
Initial Endorsement: Alice wants to sign a document, let’s say a declaration. She does her special math with her private key and the message to produce a signature, which for simplicity we’ll say is 150.
Subsequent Endorsement: Bob, having seen Alice’s endorsement, decides he wants to endorse her action. He takes Alice’s signed declaration and adds his signature to it. Bob’s endorsement, perhaps, produces a signature of 75.
Sequential Result: Now, the document carries both signatures, 150 from Alice followed by 75 from Bob, indicating a clear sequence: first Alice, then Bob.
Creating Aggregate Multisigs
However, Schnorr signatures can be much more powerful than most traditional signature systems. They allow for aggregate signatures, where parties can sign in any order they want and jointly endorse a message.
Individual Signatures: Alice and Bob both want to co-sign a document. Alice, with her special math and private key, creates a signature, say 150. Bob, independently, generates his signature, which might be 75.
Aggregation: Using Schnorr’s unique properties, these two signatures can be combined. Instead of appending one after the other, Schnorr adds the two together, possibly producing a single value of 225.
Aggregate Result: The document now has one composite signature, 225. Anyone verifying this knows both Alice and Bob endorsed it, but there’s no distinction about who signed first.
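The additive trick behind aggregation can also be sketched in the toy model. Two hedges: the partial signatures must share a challenge computed over the combined nonce commitment and combined public key for the addition to work, and this naive key combination is exactly what MuSig2’s protections against rogue-key manipulation exist to harden. Bob’s key and nonce below are invented for the example.

```python
import hashlib

FIELD, G = 256, 5

def challenge(R: int, pub: int, message: str) -> int:
    data = f"{R}|{pub}|{message}".encode()
    return hashlib.sha256(data).digest()[0]

message = "joint statement"
x_a, k_a = 50, 20            # Alice's private key and nonce (from the article)
x_b, k_b = 7, 33             # Bob's private key and nonce (made up here)

R = (k_a * G + k_b * G) % FIELD        # combined nonce commitment
pub = (x_a * G + x_b * G) % FIELD      # combined ("aggregated") public key
e = challenge(R, pub, message)         # one shared challenge for both signers

s_a = (k_a + e * x_a) % FIELD          # Alice's partial signature
s_b = (k_b + e * x_b) % FIELD          # Bob's partial signature
s = (s_a + s_b) % FIELD                # aggregation really is just addition

# The aggregate (R, s) verifies exactly like a single-signer signature.
print((s * G) % FIELD == (R + e * pub) % FIELD)   # True
```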
Creating Threshold Signatures
In scenarios where a group decision is required, threshold signatures (or quorum signature aggregation) shine. They allow a subset of a larger group to sign a document, indicating a collective endorsement.
Group Dynamics: Imagine a council of five members. For some decisions, the endorsement of all members isn’t necessary. Instead, only a majority, say three members, is sufficient.
Individual Signatures: Three members, Alice, Bob, and Carol, decide to endorse a statement. Their signatures might be 150, 75, and 30, respectively.
Aggregation: With Schnorr, these signatures can be aggregated into a single composite signature, possibly 255.
Collective Result: This single signature, 255, stands testament that a quorum of the council endorsed the statement. Verifiers don’t necessarily know which three members endorsed it, only that at least three did.
Advanced Schnorr Techniques
Because of the power of Schnorr, there are even more possibilities:
Blind Signatures: Vital for preserving privacy. Bob wants Alice’s signature without revealing the message (let’s say 50). By altering it slightly, like by adding 5, he disguises it. Alice then signs this modified message (55), and Bob later extracts the initial signature.
Key Aggregation: Combines multiple public keys into one. With Alice’s key at 250 and Bob’s at 5, the combined key might be 255.
Adapter Signatures: This combines secret communication with validation. Alice holds a confidential number, say 20. Bob communicates with her, and as she signs, she discloses her secret as part of the signature.
Even this just skims the surface of possibilities. Further complexities include the use of hashes, commitments, challenges, and more complex threshold schemes.
Conclusion
Schnorr is an incredibly powerful toolbox. Techniques such as signature aggregation, collective multisigs, blind signatures, key aggregation, and adapter signatures demonstrate the flexibility of Schnorr while maintaining its ease of use and efficiency as well as the integrity and authenticity of the messages. The result is a wide variety of endorsement dynamics that will support many different use cases.
Now that MuSig and FROST are increasingly mature, these many dynamics and use cases, which have been waiting in the wings for more than 20 years, can finally be a reality.
I believe that it’s truly a cryptographic revolution, as important as many of the ones that have come before.
Kia ora e te whānau
You would have heard digital wallets mentioned almost every week, if not every day, over the past couple of years. They formed part of the discussion on this International Panel I participated in very early one morning last week.
As conversations build on each other and as we increasingly use banking apps, Apple Pay or Google Pay when shopping, you might ask yourself: how hard can it be, when I see how many apps I already have on my smartphone? Right there is a challenge around scale.
Most phone apps have a 1:1 relationship with the job they do – an app for our bank, our supermarket, our photo ID, contact tracing and so on. We don’t carry 50 wallets, each holding a single physical credential. That’s why the digital equivalent seems a bit ridiculous! This cannot scale to deliver a great consumer experience for a future where, for many people, interactions will be mostly digital.
Another challenge is around security, data protection and privacy. How secure is the binding between you and your smartphone and the apps and wallets contained within it? That starts with high-quality digital identification but extends into managing situations like coercion, or how to nullify the features on the phone if it’s lost or stolen. These problems can be complex, expensive and time-consuming to mitigate, meaning that developing and operating a commercially viable, universally trusted digital wallet will not be for the faint-hearted (global platforms excepted, but with the related considerations they may bring).
Consequently, alongside specifically purposed private sector wallets we may see the public sector in some jurisdictions taking a role in aspects of their development – be it interoperable open standards and code libraries or a completely built digital wallet – for its people. This demo from global thought leaders the Province of British Columbia peeks into the possible future (you need to download the BC Wallet app from the App Store or Google Play and read the privacy notice and terms & conditions).
As an aside, Digital Identity NZ was amongst the first associate sponsors of the Open Wallet Foundation, the only sponsor in this part of the world, to promote OWF’s outputs in Aotearoa for adoption as we travel the world. While this is not the only approach, the state may be best placed to provision these high-trust, cryptographically protected attributes onto a smartphone’s digital wallet, whether that wallet is developed by the private or public sector: the state often holds the authoritative identity data (e.g. date and place of birth, citizenship etc.), and the commercial viability of digital wallets could be a lower priority if they are treated as a public good.
Regardless, it will have to be trusted by us and by relying parties to be useful across a wide range of online transactions transcending our daily lives. Of course to maintain trust and integrity, the system will have to be architected in a way that after the authoritative attributes are provisioned to the smartphone’s digital wallet, the issuer doesn’t see or know where they are used. So where the state is the issuer, its responsibility ends there except for the rare instances of revoking or restoring compromised attributes.
You might ask why you should care. You should care because, in return for having determination over the personal data in the digital wallet on your smartphone, you will have to use it thoughtfully. Since your smartphone’s digital wallet will hold the attributes and you decide when and to whom you release them, there will be very few other parties to hold responsible if things go wrong. (Thank you Venkat and Waylon for your technical peer reviews prior to publishing.)
Ngā Mihi
Colin Wallis
DINZ Executive Director
Read the full news here: ‘Identiful’ Digital Wallets and why you should care | October Newsletter
The post ‘Identiful’ Digital Wallets and why you should care | October Newsletter appeared first on Digital Identity New Zealand.
Amazon is rolling out passkey support on browsers and mobile shopping apps, offering customers an easier and safer way to sign in to their Amazon accounts. Customers can now set up passkeys in their Amazon settings, allowing them to easily use the same face, fingerprint, or PIN used to unlock their device. Passkey support is available today for all Amazon customers using browsers and is gradually rolling out on the iOS Amazon Shopping app with support coming soon on the Android Amazon Shopping app.
The post Amazon is making it easier and safer for you to access your account with passwordless sign-in appeared first on FIDO Alliance.
We were live at the FIDO Alliance Authenticate conference last week and we recorded 5 shows! First up is David Mahdi of Transmit Security, talking about machine identities. Episode #239 is available now at idacpodcast.com and in your pod app.
Before the Internet as we know it came into existence, there was the Advanced Research Projects Agency Network (ARPANET), a project funded by the United States Department of Defense. Conceived in the late 1960s, ARPANET was designed to provide a system for researchers and military personnel to share information easily. At this time, computing resources were expensive and siloed. Researchers often had to physically go to the machine’s location to perform computations. Therefore, a set of rules that allowed different types of computers to communicate was developed, called the Network Control Protocol (NCP); it was later superseded by the TCP/IP protocol suite in 1983. This was a key innovation for military and research institutions, as it demonstrated that a network of multiple, interconnected computers could effectively communicate, optimising resource utilisation and enhancing productivity. As technological advancements in networking continued beyond the early ’80s, various institutions and individuals worldwide played crucial roles in shaping the Internet’s development. Rong Chen’s experience provides an insightful perspective on this collaborative effort and the milestones that followed.
‘I worked as an intern with another person at the National Center for Supercomputing Applications (NCSA) in 1987 at the University of Illinois to implement data rendering algorithms on the SUN Workstations while the data came from two Cray Supercomputers a few hundred yards away. Two other fellow students worked on implementing the first TCP/IP ever for IBM PCs at the same time. It was planned to interconnect 6 supercomputers in 5 universities’ supercomputer centers in the US at the time of 1987. I had heard that the European Organization for Nuclear Research (CERN) joined the scientific research network later. And Tim Berners-Lee worked at the CERN and invented the WWW web in 1989 on top of the network that we had built. Mosaic from NCSA came later in the early 1990s as the first useable internet browser product inspired by Tim’s work.‘ – Rong Chen, Elastos Founder
Less than a decade before the Dot-Com Bubble of the ’90s, Rong Chen’s 1987 internship at the National Center for Supercomputing Applications (NCSA) offers a fascinating glimpse into a crucial period in the transformative era of the internet. Rong’s work on optimising communication between computers designed for very different kinds of tasks laid the groundwork for interoperability across divergent systems. The later standardisation of TCP/IP for IBM PCs was a seminal step in mainstreaming internet access. This development set the stage for Tim Berners-Lee’s 1989 invention of the World Wide Web, which revolutionised data organisation, allowing pages from different server locations to be linked together and explored in one universal environment.
The foundational networking by Rong Chen and colleagues enabled subsequent developments like the Mosaic browser, a user-friendly platform that extended the Internet’s reach from academic and military circles into consumer households. But Mosaic was just the start; it inspired the creation of Netscape in 1994, one of the first commercial web browsers. Netscape’s success demonstrated the internet’s economic potential and spurred the “browser wars,” in which Microsoft’s Internet Explorer became instrumental in embedding the Internet into the fabric of everyday life, driving global markets and commercial opportunities. Being inside the exponential growth phase of a company like Microsoft at this time tells an exciting story full of innovative milestones, filled with lessons and warnings in architectural principles.
‘I worked at Microsoft from June 1992 to April 2000. I was with the multimedia framework team for 6 months, then joined the multimedia operating system team around the end of 1992. The multimedia OS team merged with the MS Research OS team in 1993. The Research OS became the Interactive TV set-top-box OS in 1994, and became part of the Advanced Consumer Technology Division’. – Rong Chen
At its core, an Operating System manages hardware resources, such as CPU and memory, to maximise the utility of the physical hardware. However, it also manages internet connectivity for external communication and data exchange, disk storage for data integrity and efficient retrieval, and identity management for authentication and system security. These functionalities are integrated, not isolated. A failure or inefficiency in one could have a domino effect across the system, underscoring the essential role of the OS in harmonising hardware and software.
Early advancements in OS design have been catalysts for innovations defining subsequent generations of internet technologies. In 1995, a few months before Microsoft pivoted away from private Internet technologies, Rong Chen joined the Internet Explorer team as its 10th member. Microsoft’s embrace of open internet standards wasn’t just a policy shift; it recognised an industry trend favouring open ecosystems over exclusive standards for widespread adoption. The work Rong Chen and his teams did on OS resource management, connectivity, and security—across different contexts from multimedia frameworks to specialised consumer technologies—put Microsoft in a position to innovate rapidly. But they were not confined to one niche; they were exposed to a cross-pollination of ideas. This made them a hub of innovation.
Rong’s passion would see him transition to the OLE Automation team, which simplified the integration of different software components, such as embedding a Microsoft Excel graph into a Microsoft Word document. This technology formed half of the programming paradigm architecture that all Microsoft projects would subsequently have to follow, shaping the foundational architecture of Microsoft’s OS software. The other half of the Microsoft programming paradigm architecture was the OLE “New Technology” team, providing the essential framework to enable efficient communication within Windows OS. These two Microsoft teams merged to form the COM Core team, which became essential in developing early versions of .NET—a framework setting the stage for future Windows versions and enabling various application types.
However, a pivotal moment arose when Microsoft decided to shut down the COM Core team and narrow their focus and invest in one particular software development approach. Rong disagreed with this single-track focus. He believed that both should co-exist and be developed in parallel, as each has its unique advantages and use cases. His rationale also included security considerations; specifically arguing that the OS should ensure a secure environment to protect user data from third-party apps.
“Due to the fact that Microsoft decided to dissolve the COM Core (C/C++) team and embrace only the C# intermediate bytecode framework, I believed that both should be supported and working in parallel. 3rd-party apps must be sandboxed in their own computing execution environment so that they cannot abuse the 1st-party user’s data, the operating system, as the 2nd party, is responsible for providing such a secure computing environment. So I resigned in April 2000 to build my own C++ version network operating system Elastos.” – Rong Chen
What Rong articulated around 2000 aligned closely with what we now recognise as the principles of Web3. His move from Microsoft to found Elastos, a Network OS platform, emphasises the catalytic power of divergent visions for spawning new initiatives. But divergent visions don’t emerge in isolation; they often stem from user demands for new capabilities. In the case of Microsoft and similar entities, a centralised structure, once established, became the path of least resistance for subsequent advancements. This approach enabled effective commercialisation during the Web 2.0 era, regardless of inherent limitations. Economies of scale in data centres and cloud services not only attracted more users but also contributed to the emergence and increasing value of platforms like Facebook and Google.
However, such architecture generated a growth feedback loop while also amplifying critical setbacks. Systems like identity management and social media that initially fuel growth also introduce security vulnerabilities, censorship issues, and biases. The growth of Internet of Things (IoT) devices and complex AI algorithms intensifies these vulnerabilities, expanding the number of avenues for potential attacks. This segues naturally into the principles Rong advocated, fostering a new internet or network OS cycle that safeguards first-party user data from third-party abuse.
In Part 2, we’ll explore Elastos in-depth, the vision that prompted Rong to leave Microsoft. This decentralised network OS introduces a paradigm shift in internet ownership, addressing current Web 2.0 limitations and forming the foundation for a third-generation web, or Web3. These principles find their expression in the Elastos SmartWeb concept, which encompasses sandboxing, peer-to-peer communication between these sandboxes, and the trust-based, decentralised features intrinsic to blockchain technology.
The post Rong Chen and the Evolution of the Internet: From ARPANET to Web3 — Part 1 appeared first on Elastos.
Together, Elia Group and Energy Web have developed an application which can be seamlessly integrated into digital wallets, making it easily adoptable by a wide range of companies
Brussels, Berlin, Zug, Switzerland — October 23, 2023 — The OpenWallet Foundation (OWF) has officially adopted a digital identity application which was developed by Elia Group and Energy Web as a new project. The application, which can be used to securely transfer identity information between different parties (including details about their flexible assets), can be easily integrated into digital wallet technology. Digital wallets are set to become commonplace; indeed, the European Union aims to provide all its citizens with access to a digital ID solution by 2030.
Flexible consumption demands secure identification
Our energy systems are becoming increasingly decentralised, driven by the rise in renewable energy sources and electrical assets. Flexible assets such as heat pumps and electric vehicles will enable consumers to adapt their energy use in line with the needs of the system: consumers will be able to charge these when affordable green electricity is available, store it for later use, and (in the case of electric vehicles and home batteries) feed it back into the grid when needed. In so doing, consumers will help to keep the grid in balance and will support the integration of renewable energy into the system, so accelerating the energy transition.
Secure and efficient methods will be required to safely integrate electrical assets into the system and allow them to interact with the grid in a trusted manner. Extensive information exchange will need to take place, with data relating to personal details, technical specifications, contracts and charging tariffs being swapped between individuals, assets, and companies. Digital wallets, which are on the verge of becoming commonplace across Europe, are expected to be key enablers of this data exchange.
Elia Group and Energy Web have been collaborating on projects related to the integration of electrical assets into electricity systems for a number of years.
A digital wallet for every citizen
In line with its digital goals for Europe, the European Commission is working towards ensuring that all of its citizens will be able to use a personal digital wallet. One technology that can facilitate the design of these digital wallets is self-sovereign identity (SSI), a highly secure approach that enables identity-related information to be safely verified and exchanged between different parties.
SSI systems typically involve three core roles:
the holder, who manages data related to their identity by storing it in their digital wallet;
the issuer, who is responsible for issuing the holder with identity-related data, often in the form of ‘verifiable credentials’;
the verifier, or party which requires access to identity-related data.
Recognising the potential power of SSI, Elia Group and Energy Web began collaborating on the design of a digital identity application based on a pre-existing specification: the verifiable credential application programming interface (VC API) specification. Their software enables signed identity documents (stored on digital wallets as verifiable credentials) to be exchanged between an individual’s digital wallet and a third party. Elia Group and Energy Web chose to design their application based on a pre-existing specification that is familiar to a majority of companies. This means their solution can be easily integrated into a wide range of existing IT frameworks as companies design different digital wallets. Elia Group and Energy Web’s aim is to encourage energy companies, particularly small- and medium-sized energy companies, to adopt it.
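For readers unfamiliar with the format, a verifiable credential is essentially a signed, structured document. The sketch below shows the general shape defined by the W3C Verifiable Credentials data model; the issuer, subject, asset details, and proof values are invented for illustration and are not taken from the Elia Group / Energy Web project.

```python
import json

# Illustrative only: field names follow the W3C Verifiable Credentials data
# model, but every value here (issuer, subject, asset, proof) is made up.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "FlexibleAssetCredential"],
    "issuer": "did:example:grid-operator",
    "issuanceDate": "2023-10-23T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:asset-owner",
        "assetType": "heat-pump",
        "maxFlexibleLoadKw": 3.5,
    },
    "proof": {
        "type": "Ed25519Signature2020",
        "created": "2023-10-23T00:00:00Z",
        "verificationMethod": "did:example:grid-operator#key-1",
        "proofPurpose": "assertionMethod",
        "proofValue": "z3FakeSignatureValueForIllustration",
    },
}

# The holder stores this document in their wallet and presents it to a
# verifier, who checks the proof against the issuer's public key.
print(json.dumps(credential, indent=2))
```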
“To ensure network security and stability, the integration of renewable energy sources into our energy systems will require household and industrial flexibility to be activated. SSI will be an important tool for creating a registry of decentralised and flexible assets that will allow us to monitor the state of the network and steer these decentralised assets.” Kris Laermans, Innovation at Elia Group
OWF adopts Elia Group and Energy Web’s innovative solution as a new project
Once the application was ready, Elia Group and Energy Web approached OWF, an organisation which promotes the implementation and use of open, secure and interoperable digital wallet technology. The OWF then adopted the application as an OWF project, making it the fourth such solution accepted by the organisation since its formation in February 2023. This will help the application to be widely used by companies as they design digital wallets.
“This collaborative effort between Elia Group, Energy Web, and the OpenWallet Foundation represents a significant step toward revolutionising the digital wallet landscape. By officially accepting this project, OWF is empowering the future of secure and interoperable digital wallets. Together, we’re advancing technology that will not only benefit the energy sector but also drive innovation beyond its boundaries, setting the stage for a more interconnected and secure digital world.” John Henderson, Senior Solution Architect at Energy Web
GitHub documentation, which includes the code for the project, can be accessed here.
Elia Group
Elia Group is an international energy company that comprises two transmission system operators. Through our subsidiaries in Belgium (Elia) and the north and east of Germany (50Hertz), we operate 19,349 km of high-voltage connections, meaning that we are one of Europe’s top 5 transmission system operators. With a reliability level of 99.99%, we provide society with a robust power grid, which is important for socioeconomic prosperity. We also aspire to be a catalyst for a successful energy transition, helping to establish a reliable, sustainable and affordable energy system.
Energy Web
Energy Web is a global non-profit accelerating the clean energy transition by developing open-source technology solutions for energy systems. Our enterprise-grade solutions improve coordination across complex energy markets, unlocking the full potential of clean, distributed energy resources for businesses, grid operators, and customers. Our solutions for enterprise asset management, data exchange, and Green Proofs, our tool for registering and tracking low-carbon products, are underpinned by the Energy Web Chain, the world’s first public blockchain tailored to the energy sector. The Energy Web ecosystem comprises leading utilities, renewable energy developers, grid operators, corporate energy buyers, automotive, IoT, telecommunications leaders, and more.
More information on Energy Web can be found at www.energyweb.org or follow us on X @EnergyWebX
Digital identity application developed by Elia Group and Energy Web, which will support… was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
Join the Construction session at the GS1 in Europe Regional Forum!
It’s that time of the year again! Join us for the GS1 in Europe Regional Forum, online or in-person in Athens, Greece on Monday, 23 October from 16:15 to 17:45 (CET+1)
Learn about construction-related use cases from all around Europe! Learn about the challenges the industry has to face with the upcoming European legislations such as the Construction Products Regulation! Are you prepared?
For more details and registration, check out the event website.
Please note this is a GS1 internal event only, open to all GS1 Member Organisations!
Products in the construction sector have very specific and intricate characteristics. It is a very large sector, with highly fragmented information and, above all, little process digitalisation. But this is about to change, thanks to the first official GS1 document on Construction.
The document results from work on the GTIN Management Rules carried out under the supervision of the Swedish GS1 User Group Construction and the GS1 GSMP. It is a tool that simplifies the language and makes this topic more easily understandable for the community, through specific examples for the construction sector, under the GTIN Management Standard.
The GTIN Management Guideline for Construction Products is designed to help industry make consistent decisions on the unique identification of construction products in open supply chains and comply with the General GTIN Management Standard. In general, costs are minimised when all stakeholders throughout the life cycle of a product adhere to the GTIN Management Standard. Furthermore, this could simplify CO2 footprint calculations, trace products and their content to simplify the circular economy, reuse, recovery as well as compliance with different green building certification schemes.
The OpenID Connect Working Group recommends the approval of Errata corrections to the following specifications:
OpenID Connect Core 1.0 – Defines the core OpenID Connect functionality: authentication built on top of OAuth 2.0 and the use of Claims to communicate information about the End-User
OpenID Connect Discovery 1.0 – Defines how Relying Parties dynamically discover information about OpenID Providers
OpenID Connect Dynamic Client Registration 1.0 – Defines how Relying Parties dynamically register with OpenID Providers
OpenID Connect Back-Channel Logout 1.0 – Defines a logout mechanism that uses direct back-channel communication between the OP and RPs being logged out
An Errata version of a specification incorporates corrections identified after the Final Specification was published. This would be the second set of errata corrections for Core, Discovery, and Dynamic Client Registration, and the first for Back-Channel Logout. This note starts the 45-day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. This review period will end on Tuesday, December 5, 2023. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Errata Specifications. For the convenience of members who have completed their reviews by then, voting will actually open a week early on Wednesday, November 29, 2023, with the voting period still ending on Wednesday, December 13, 2023. The specifications incorporating the proposed Errata corrections are available at:
https://openid.net/specs/openid-connect-core-1_0-33.html
https://openid.net/specs/openid-connect-discovery-1_0-36.html
https://openid.net/specs/openid-connect-registration-1_0-38.html
https://openid.net/specs/openid-connect-backchannel-1_0-12.html
The corresponding previously approved specifications are available at:
https://openid.net/specs/openid-connect-core-1_0-errata1.html
https://openid.net/specs/openid-connect-discovery-1_0-errata1.html
https://openid.net/specs/openid-connect-registration-1_0-errata1.html
https://openid.net/specs/openid-connect-backchannel-1_0-final.html
Or see the Introduction sections of the specifications for links to previously approved versions. A description of OpenID Connect can be found at https://openid.net/connect/. The working group page is https://openid.net/wg/connect/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote. You can send feedback on the specifications in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AB+Connect” working group on your contribution agreement), (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-ab, and (3) sending your feedback to the list. See the History entries in the specifications for a summary of the errata corrections applied.
On November 2, 2023 OpenID Connect Core 1.0 draft 34 was published. It incorporates feedback received during the review period. It adds the loopback wording previously applied to the Implicit Flow to the Authorization Code Flow.
The post Review of Second Proposed Errata Set for OpenID Connect Specifications first appeared on OpenID Foundation.
Suzuki Motor Corp was founded in 1920 with loom manufacturing as its first business. In 1952, Suzuki entered the motor vehicle field with the launch of a bicycle engine. Since then, Suzuki has contributed to people’s comfort and fulfilling lives by providing user-friendly and affordable products, including motorcycles and automobiles. The company has expanded its business not just throughout Japan but also to international markets, providing people worldwide with a means of mobility. Visit the Suzuki website
The post Suzuki Motor Corp first appeared on MOBI | The New Economy of Movement.
A new working group has been accepted by the OpenID Foundation (OIDF) Specs Council that will focus on increasing interoperability of authorization systems. The new working group, called Authorization Exchange (AuthZEN), resulted from a series of conversations at the two most recent Identiverse conferences. After this year’s event, it was recognized that there was considerable interest in authorization as a topic, but that several constraints existed that were limiting widespread adoption of industry best practices.
AuthZEN will focus on specific areas of interoperability by documenting common authorization patterns, defining standard mechanisms, protocols and formats for communication between authorization components, and recommending best practices for developing secure applications.
AuthZEN will hold weekly meetings on Tuesdays at 9am PT via Zoom with the following link: https://zoom.us/j/92150123981?pwd=YnhuSXNxU2w4Z3VGc3lrUjRNSTBUZz09. The first AuthZEN Working Group meeting will be Tuesday, October 24, 2023. Co-chairs for the working group will be confirmed at the first meeting.
Those new to the OpenID Foundation and interested in participating in the AuthZEN WG will need to complete a Contribution Agreement. Please note that you do not need to be a member of the Foundation to participate in working groups, but membership is encouraged and appreciated.
You can subscribe to the AuthZEN working group mail list here: https://lists.openid.net/mailman/listinfo/openid-specs-authzen
All working group details, including the approved charter, can be found here: https://openid.net/wg/authzen
The post Announcing the Authorization Exchange (AuthZEN) Working Group first appeared on OpenID Foundation.
How to apply: Please apply via this application form (incl. CV and cover letter).
Deadline: Applications close 28 November 2023, midnight EST
Total compensation package: $120,000-$145,000 USD.
Generous benefits for leave, health, wellness, and flexibility to working hours.
At the end of this year, The Engine Room’s Executive Director, Julia Keseru, will be leaving after being at the organisation for the last eight years. The Engine Room’s Board of Directors is excited to announce that going forward The Engine Room will formally adopt a model of co-leadership based on the learnings and work of the current leadership team.
Read letters from the board and from Julia. Read more about The Engine Room’s experience with co-leadership.
The Engine Room is a non-profit organisation with a distributed global team of experienced and committed activists, researchers, technologists and community organisers. Working alongside frontline organisations, we use the power of technology, data and research to advance our vision. We build evidence, strengthen capacity, and connect movements. Learn more about us and our work at theengineroom.org.
Co-leadership looks different for every leader and every organisation. While the norm is still the single-leader model — whether in a corporate setting or small local NGOs — we believe that for The Engine Room, a shared-leadership approach can be more responsive to context, leverage the skills and capacities of individuals, and result in a more resilient organisation.
After a formal process, The Engine Room’s Board of Directors has offered one of the co-ED positions to Paola Mosso, a Deputy Director at TER. Paola has been with The Engine Room for eight years. She is embedded in various local and global activist communities and has deep expertise in digital ecosystems, online infrastructures, emerging and justice-focused technologies and feminist approaches to digital resilience. She brings a journalistic eye to how we talk about our work and is an integrator of approaches when it comes to creating local change. She is deeply committed to the co-leadership model and championed its implementation at TER.
Role Description
The Engine Room is soliciting applications for a Co-Executive Director. If you apply, we will want to engage with you as a person and hear your vision as it connects to our mission and our defining measures of success. For us this is more important than a list of characteristics and skills. In addition to our core mission and current work, a few key issues are important to our future:
The intersection of research and social justice is important to us. TER is looking to shift its approach to research so that it is tied more closely to its community work and honouring a strong local point of view.
Our latest approach to fundraising is working for us. We are committed to balancing project-specific, fee-for-service funds and grants, program funding, and unrestricted support for the organisation.
It is fundamentally important for us to align organisational development with shared leadership, and to document and share the learning that comes with this approach, for both ourselves and our partners.
To grow and thrive, it is critical that the new co-ED possess these qualities:
Brings academic-level research skills, and writes and thinks beyond disciplinary silos.
Experience and connections for business development, and capacity for fundraising.
Defining fields of interest and experience, such as environmental justice or emerging technologies. Relevant experience can be as a social activist, writer, researcher, campaigner, or networker in the given area.
Experience in any capacity with social justice and community building in the majority world.
Experience working with or for US-based nonprofits.
At The Engine Room, we prioritise care, well-being, flexible collaboration and clear communication. We look for colleagues who bring:
A constructive and opportunistic approach to confronting challenges and building strategies for the future.
A commitment to our values of making space, thinking critically and acting intentionally, embracing complexity, and holding everyone high.
A readiness to experiment, especially as we navigate journeys of shared leadership, and adapting the ways our teams collaborate and define areas of interest.
Please apply via this application form (incl. CV and cover letter). Applications close 28 November 2023, midnight EST.
Photo by Yuriy Kovalev on Unsplash
The post We’re on the search for our new Co-Executive Director first appeared on The Engine Room.
A letter from The Engine Room’s Board, and a message from our Executive Director, Julia
Letter from the TER Board
October 18, 2023
Dear Friends, Partners, and Supporters of The Engine Room,
Today we are announcing that Julia Keseru is leaving The Engine Room at the end of this year. We are deeply grateful to Julia for the work she has done and the legacy she built during her tenure. With Julia’s leadership The Engine Room expanded its services and support portfolio, attracted diverse funding and partnerships, and deepened its impact in multiple regions.
Julia joined the team in 2015 and has been our Executive Director since 2018. You can read more about the reasons for her transition and what she is going to do next in her letter below. We thank Julia for her passion and investment that helped The Engine Room grow into what it is today, and we wish her the best in her future endeavours. While she will be greatly missed, we are eager to see what she does next!
Moving forward, we are excited to announce that TER will formally adopt a model of co-leadership based on the learnings and work of the current leadership team, who have cultivated a shared leadership approach since Julia’s health crisis two years ago. This started when we brought in Gillian Williams as our Interim Executive Director during Julia’s medical leave. As Julia eased back into work, Gillian and Julia developed a model for sharing leadership. Together with co-Deputy Directors Paola Mosso and Laura Guzmán, they formed a committed, diverse, thoughtful senior leadership team who dug deep into feminist and shared leadership. They’ve written about it in a recent blog.
We have spent time reflecting on co-leadership and the future of the field, and engaged in developing the core tenets of what we are looking for in leadership for the organisation. Over the summer we created a process of candidate recruitment, interviewing, and selection. We were excited when Paola expressed interest in co-leading The Engine Room, and she formally interviewed with us through this process. We are pleased to announce that we have selected her as one of the two co-Executive Directors.
Paola has been with The Engine Room for eight years. She is embedded in various local and global activist communities and has deep expertise in digital ecosystems, online infrastructures, emerging technologies and justice-focused technologies. Please join us in congratulating her for this new role and wishing her the best!
Today we are also announcing our formal search for the co-Executive Director. You can learn more about it in the posting here. We’re excited about this search, and look forward to hearing from all of you.
Best regards,
Ivan Sigal, Chair
Elizabeth Lindsey
Maria Baron
Elizabeth Eagen
Isabela Fernandes
Message from our ED, Julia
Eight years ago I joined a team that I admired from afar but didn’t know deeply – the one thing I was certain of was that The Engine Room was doing things very differently than others. Under Alix Dunn’s visionary and compassionate leadership, I found myself in a completely distributed global team that was functional and fun to be a part of. From the first moment I knew that this was going to be a long-term relationship and that I came across something rare and magical: a team that makes me motivated and happy.
In a few years, I found myself at the wheel of the same team. Leading an organisation like TER was a dream come true for me. And little did I know how transformative the past five years were going to be.
Two years into my tenure the pandemic hit and while The Engine Room weathered the storm of the global health crisis, it was not without scars – my own personal ones included. My daughter was only one when I became ED and I learned the hard way how difficult it is to be an involved parent and a responsible leader at the same time. Then two years ago I was diagnosed with advanced breast cancer and had to step out of the work to go through a series of invasive treatments and surgeries. I am happy to report that my health is in great shape right now, but the past two years have dramatically changed my perspective on life.
Most importantly, I realised that I care deeply about how people exist in the world as physical entities – what our true bodily boundaries are, how we can meaningfully coexist with each other in ways that uphold our dignity, and how we create systems that support autonomy and integrity. As I dove deeper into corporeal philosophy, bioethics, bodily integrity and cyberfeminism, I developed a tremendous appetite to learn new things, venture into new arenas and experiment with new ideas and ways of being and working. I also learned that I have meaningful things to say about how our bodies exist in a digital era – things that significantly build on The Engine Room’s legacy but that might go beyond its mission.
During this period I also fully comprehended that time is precious and our existence on this planet is very short – which led me to my decision to transition out of The Engine Room by the end of the year, and pursue my dreams in different avenues. I feel incredibly proud of the things we have achieved in the past decade, bittersweet about saying goodbye to this team, and massively inspired by the potential and resilience of The Engine Room as an institution.
And while I know I contributed to that potential in significant ways, I am also deeply aware that individuals alone can never make long-lasting change. The Engine Room is a rare gem because of the collaboration, dedication and diverse viewpoints that make up our DNA – thinking in communities and ecosystems is ingrained in our work and makes us who we are today.
I consider myself extremely lucky to have been able to work with the smart, sweet and thoughtful individuals who made up The Engine Room, and I owe special thanks to Alix, Anneke, Zara, Laura, Paola and Gillian for their brilliance, camaraderie, compassion, ethics and comedic timing. I found tremendous intellectual and emotional inspiration in these women and I wish every leader had the opportunity to share the joys and burdens of leadership with such partners.
I feel calm, proud and relieved to be able to leave the organisation in the fantastic hands of Paola and her co-leader to come. Continuing the co-leadership model is something I believe is right for TER and for the field. Paola and I joined The Engine Room at almost the same time, and we have worked together in many different configurations. We supported organisations with their tech platforms, designed myriad projects together and raised funds successfully. We organised events and facilitated conversations and co-wrote proposals, strategies and hundreds of other documents. Most importantly, we spent quality time together in random Airbnbs, airports and trains, talking and talking and talking endlessly about the world we want to live in, the work that needs to happen to get there, and all the things in life that motivate us to get up each day with passion and diligence.
In the past eight years I got to know Paola as a thoughtful, creative, value-driven and incredibly visionary person, someone whose thinking both inspires and grounds my own work. I am beyond excited to see where she and the team are going next, and I am rooting for their success all along the way.
And what will I do next? Like I said before, I am in an experimental mode and very much enjoying it. Next year I will dedicate my time to pursuing DataBody Integrity – my research-oriented fellowship at the Mozilla Foundation that allows me to dig deeper into the politics of bodies in the digital era. Besides that, I am trying to support more organisations in my home country, Hungary, flex my coaching muscles, and grab every opportunity to have fun with my family and explore the world. Closer to the time, I’ll share more about my work!
Photo by Silas Baisch on Unsplash
The post Leadership changes at The Engine Room first appeared on The Engine Room.
In the midst of the city of Vancouver, a company stands, unwavering and bold. Lions Gate Digital. The sun rises, casting a golden light on the future they aim to shape. They have a prospectus. It’s not just any piece of paper. It’s a promise. A commitment. $2.2 million USD from 10,000 shares. Tokenized. A Security Token Offering (STO). Each share a beacon for what’s coming.
The landscape of the digital world is vast, sometimes treacherous. Yet, this company does not waver. They’ve seen the horizon, the potential of Self-Sovereign Identity (SSI). It’s the future, and they aim to harness it. An army is forming, not of soldiers, but of believers. Believers in a decentralized world. The Lions Gate Digital Army.
Self-sovereign. The word resonates. It speaks of independence, of control. In a world overflowing with data, with information, one’s identity is paramount. And to control it, to truly own it, is power.
The enlistment is open. To join is to believe. To see the potential, the vision. The company seeks pioneers, adventurers. Those willing to embark on this digital journey. For they know the path won’t always be easy, but it will be worth it.
With the dawn, there’s hope. The company’s vision, their mission, is clear. They’re not merely introducing an STO. They’re inviting the world to see what they see, to dream what they dream.
In the end, it’s more than just an investment. It’s a movement. A call to arms in a digital age. The Lions Gate Digital Army. The future is theirs to shape, and they’re inviting all to join.
To the brave, to the visionaries: the prospectus is ready. The future awaits.
The post Lions Gate Digital: A Bold Frontier in a Digital Age appeared first on Lions Gate Digital.
As we laid out in our Helping A Community Grow By Pruning Inactive Projects post, there is an important life cycle to well governed open source projects. Since our launch in 2015, Hyperledger Foundation has hosted a number of now retired projects that helped drive innovation and advanced the development of enterprise-grade blockchain technologies. This series will look back at the impact of these pioneering projects. Hyperledger Composer is the first in the series.
One of the most rewarding aspects of our work is collaborating with organisations that are passionate about making a positive impact in the world. Earlier this year, we had the opportunity to work with Change.org to support them with workshop design and presentation development. In this post we’ll share some of the techniques we used to help, so you can use them in your organisation, too!
Change.org is a global platform that empowers people to create change in their communities and beyond. Their mission is to build a world where no one is powerless and where creating change is a part of everyday life. With this mission, it’s no wonder that they wanted the workshops and presentations of their 2023 offsite to empower people.
At Change, we have a unique opportunity to give a more powerful voice to hundreds of millions of people in order to build more participatory and responsive democracies. We want to live up to our mission and incorporate these principles into everything that we do so that we can create tools and resources that are authentic for people using our platform. We know that being involved in digital activism takes courage, intention and planning, many of the soft skills that we hope to strengthen internally amongst our Product Team. — Jess Klein, Director of Design
We’ve created slide decks for creating, preparing, running, and recovering from sessions, as well as one with general tips. This post gives just a brief overview of each section and pulls out the key questions you will want to ask yourself. If you’d prefer to see these tips all at once, check out our monster slide deck!
Creating your session plan
Speaking and Facilitation: Tips & Tricks: Part 1
No one likes a boring speaker or workshop, so your aim should be to create a session that is participatory, one where you can co-design solutions and outputs with others. “Participatory” means that a workshop invites input. Instead of “presenting” information, the facilitator asks participants to help solve problems.
Using participatory practices leads to better solutions, more inspiration and more collaboration. Creation isn’t passive, and “participatory” does not mean formless. A structure we like to use in workshops can be summarised as “Future, Present, Past”. You say what you’re going to do (future), do what you’re going to do (present) and then say what you did (past).
Here’s a list of questions to ask yourself when you are creating your session plan:
How do you want people to feel in your session?
What outputs do you want and why do these outputs matter to others?
How can you ensure that everyone is able to contribute equally?
How can you use your presentation or introduction to get people thinking?
What can people DO that will help you move your work forward?
What instructions can you give that guide the conversations and help your participants produce valuable insights?
How can you time-box share outs so that each group gets a chance to share?
How will you model this reflective activity for your participants?
What can people do to stay involved with you and your work?
Create your slide deck
Speaking and Facilitation: Tips & Tricks: Part 2
Even in a participatory session, you’ll want to have some visual guidance. This will help people remember the reason they’re there and what problems they should be trying to solve. What you put on your screen isn’t there to prompt you but rather to prompt your audience.
Use stories, anecdotes, and analogies to bring your talk to life. Keep things visual as a picture paints a thousand words (and makes it easier for you to remember what you wanted to say!)
Questions to think about for a good slide deck:
What is essential for people to understand about your problem area so that they can contribute solutions?
How can your slide deck serve your colleagues after the session?
What questions do you want the audience to ask after you’re done presenting?
Promote your session
Now that you’ve spent so much time crafting your session, you need to get people to attend. The more minds, the merrier! The best collaborative sessions include diverse perspectives to ensure that solutions and ideas speak to the broadest range of people.
You’ll want to tell people about your session, so that you get lots of input and lots of output. Don’t be afraid to promote your excellent planning!
Questions to consider:
Who are your allies?
Can you summarise the context, need and outcome in just 3 bullets?
How can your entire organisation or sector benefit from the outputs of your session?
Run a mini pre-mortem
There are some things we can control and some things that we can’t. Thinking about what could go wrong in advance is a good way to prepare yourself for different eventualities. Some things we can prevent, whereas some things we can only mitigate. For example, we might be able to prevent the session running over, but we would only be able to mitigate a problem with the room we’re scheduled to be in.
Running a mini ‘pre-mortem’ creates a psychologically-safe space to take a look at your session and its inherent risks. All you do is imagine that the session is over and that it was a failure. The job of the pre-mortem is to identify in advance why that might happen.
Questions to help you create backup plans:
What will you do if your slides are corrupt or there’s no paper in the room?
Who can you call on to help remove / calm down someone who is exhibiting (sustained) problematic behaviours?
How can you structure your slide deck to make sections skippable?
Take care of yourself
Every session needs at least one leader, and that’s you! So take care of yourself so you can help lead everyone through the session.
What that looks like in practice can depend on specific circumstances, but in general it means being a calm, well-rested person with a plan and the resources to carry it out. If you’re part of a team running the session, then you should encourage them to be calm and well-rested, too.
Help yourself find calm:
Are there particular things (e.g. coffee, energy drinks) that you know you should avoid directly before the session because of the effect they have on you?
How long will each part of the session take, realistically? Do you need to factor in transition time?
Next steps…
In this post we’ve talked about planning, preparing, and promoting your session, as well as thinking about what could go wrong, and looking after yourself. What resonates with you? What do you need to work on in particular?
In our next post, we’ll talk about running and recovering from your session. We’ll also give you some general tips for a kick-ass workshop or presentation.
Radiating Confidence at Change.org was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
Elastos is partnering with Dmail, who are building an AI-powered decentralized infrastructure that provides seamless, anonymous messaging and notification services across multiple chains and applications. What sets Dmail apart is its revolutionary Subscription Hub, designed to elevate communication strategies. This hub enables personalized messaging to wallet addresses and DIDs across multiple chains, making it effortless to engage your audience with targeted content and token rewards.
By partnering with Dmail, Elastos gains exclusive access to this robust Subscription Hub, connecting them with an expansive network of over 3 million on-chain accounts. With Dmail’s intuitive console, Elastos can promptly harness the power of intelligent messaging in Web3, creating new opportunities for user engagement and growth.
Potential synergies that are currently being explored:
Identity Management: Dmail Network will almost certainly integrate Elastos’ DID for identity management, enhancing user privacy and data sovereignty.
Enterprise Solutions: Dmail can integrate Elastos’ enterprise blockchain solutions to improve processes, data provenance, and transparent solutions for supply chains and contract management.
Financial Products: Dmail can benefit from Elastos’ financial solutions for decentralized trading, staking, and lending.
Content Monetization: Dmail can use Elastos’ DRM technology to enable content creators to license and monetize their digital assets.
Subscription Hub Service: Elastos will be using Dmail’s Subscription Hub Service for its users, allowing seamless messaging to wallets and DIDs.
Technical Integration: Dmail will possibly look to integrate Elastos’ side chain EVM architecture and will explore their Hive storage solution for potential synergies.
A partnership between Dmail Network and Elastos can unlock new avenues for innovation, user empowerment, and business growth. By combining Dmail’s expertise in web3 communications with Elastos’ comprehensive Web3 ecosystem, both parties can accelerate the adoption of decentralized technologies and create a more secure, transparent, and user-centric digital world.
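To make the idea of DID- and wallet-addressed messaging more concrete, here is a minimal sketch of what a notification payload keyed by a recipient’s DID and on-chain account might look like. The field names, the chain label, and the did:elastos identifier are assumptions chosen for illustration only; they are not the documented Dmail Subscription Hub API or the Elastos DID SDK.

```python
# Minimal sketch: assembling a cross-chain notification addressed to a DID and a
# wallet address. All field names and identifiers below are hypothetical and
# serve only to illustrate DID-keyed, wallet-keyed messaging.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class Notification:
    recipient_did: str       # e.g. a W3C DID such as "did:elastos:..." (hypothetical)
    recipient_wallet: str    # on-chain account the hub could also target
    chain: str               # which chain the wallet lives on
    subject: str
    body: str
    token_reward: float = 0.0  # optional incentive attached to the message
    sent_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def build_payload(note: Notification) -> str:
    """Serialise the notification for submission to a messaging hub (JSON assumed)."""
    return json.dumps(asdict(note), indent=2)

if __name__ == "__main__":
    note = Notification(
        recipient_did="did:elastos:ExampleRecipient123",           # hypothetical DID
        recipient_wallet="0x0000000000000000000000000000000000000000",
        chain="elastos-esc",                                        # hypothetical chain label
        subject="Welcome to the integration",
        body="Personalised updates delivered to your wallet and DID.",
        token_reward=1.5,
    )
    print(build_payload(note))
```

The design point the sketch tries to capture is simply that the recipient is identified by a decentralized identifier and an on-chain account rather than an email address, which is what allows targeted content and token rewards to travel with the message.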
Follow Dmail’s progress: Beta Mainnet | Website | Twitter | Discord | Gitbook tutorial
The post Elastos and Dmail Partner for Intelligent Messaging appeared first on Elastos.
Newark, NJ, October 18, 2023 – Edge brought its popular EdgeCon series to the Delaware Valley area in conjunction with Neumann University on September 28, 2023.
With an emphasis on Excelling in a Digital Teaching & Learning Future, attendees had the opportunity to engage with and learn from a growing community of digital learning professionals while discovering innovative solutions to help institutions solve today’s biggest digital learning challenges. Conference participants enjoyed a wide range of sessions, including:
Strategies for High-Quality Digital Learning and Course Sharing
Engaging GenZ with Voice and Choice to Promote Digital Literacy
Meet Your Students’ Needs – Expanding Your Digital Catalog with a Course Sharing Network
Skip The Hidden Messages: The Importance of Feedback in Online Learning
HyFlex: Creating Opportunities for Success as a Community College
Crafting Campus Connections with Oral History and Adobe Premiere Rush
Using Bite-sized Videos to Layer Nurse Practitioner Content
Preparing to Teach Online: From Learner to Teacher
Making Art with Embedded Poetry from Human and ChatGPT Sources
Maintaining a Competitive Edge While Meeting Student and Workforce Demand
Digital Teaching and Learning Across Campus
The conference drew attendees from across the region, including participants from more than 25 institutions: Albright College, Community College of Baltimore County, Delaware College of Art & Design, Essex County College, Hudson County Community College, Kean University, La Salle University, Lancaster Bible College, Mercer County Community College, Monmouth University, Montgomery County Community College, Moravian University, Neumann University, New Jersey Institute of Technology, Ocean County College, Pennsylvania Institute of Technology, Rowan College at Burlington County, Rowan University, Rutgers University, Seton Hall University, Southeastern University, Stockton University, The College of New Jersey, University of Texas at El Paso, University of the Arts, and Wilkes University.
“Thank you for organizing and holding an EdgeCon Digital Teaching & Learning conference at Neumann University. The event was both enlightening and useful to me in several ways, and I enjoyed meeting many of the other participants. The breakout sessions had many examples of innovative ways to utilize digital teaching and learning to better serve students. I also appreciated the opportunity to present; it helped me to better connect with some of the other attendees,” shared conference attendee Michael Schutz, Math Lab Coordinator, Academic Coaching and Tutoring, Neumann University.
Sponsors Acadeum, Anthology, CBTS and Newline helped make the one-day conference a huge success.
Edge’s much-anticipated EdgeCon Autumn 2023, hosted in partnership with Kean University on November 2, 2023, will provide a wealth of opportunities to network, meet and engage with peers, and experience insightful, inspiring content. The EdgeCon program will invite attendees to reimagine how technology can transform the way institutions achieve their goals and focus on accelerating modernization efforts for cybersecurity, campus networks, cloud strategy, student support applications, and more.
To register for EdgeCon Autumn 2023, begin your registration HERE.
Agenda
8:00-8:30 am – Check-In & Networking
8:30-9:15 am – Breakfast, Networking, and Exhibitor Connections
9:15-10:15 am – EdgeCon Delaware Valley Fireside Chat: Strategies for High-Quality Digital Learning and Course Sharing
Join Joshua Gaul, AVP & Chief Digital Learning Officer at Edge, and Amanda Gould, VP of Partner Success at Acadeum, for a fireside chat focused on high-quality digital learning, the importance of instructional standards, and the benefits of inter-institutional course sharing. Our speakers will delve into strategies for maintaining excellence in online education, ensuring alignment with rigorous standards, and exploring collaborative opportunities for knowledge exchange among institutions. You’ll gain valuable insights into how the student experience can be enhanced through dynamic leadership, academic excellence, and collaboration with peer institutions.
10:30-11:10 am – Breakout Session 1Classroom 107, Mullen Communications Center
Engaging GenZ with Voice and Choice to Promote Digital Literacy
“The World Economic Forum’s Future of Jobs Report lists Technology Use as one of their top 10 skills for 2025. Across industries, skills in visual design, digital storytelling, and video production are in high demand, as digital communication dominates the mainstream.
To begin this session, we will introduce Generation Z and explain their needs and expectations for the classroom environment. Driven by a sense of purpose, Gen Z students will connect with learning experiences that emphasize creativity, voice, equity, and utility.
We will present an assignment that engages students in course content while developing their digital literacy in preparation for the professional demands that lie ahead. While traditional essays and discussion forums can be perceived as irrelevant and impersonal, this authentic assignment capitalizes on students’ impetus to create something that has real-world meaning.
All participants will receive a prompt and rubric, which includes a list of potential, student-created digital artifacts and corresponding instructions. We’ll discuss considerations for implementation, including ways to customize the assignment to meet the needs of virtually all instructional modalities and course delivery formats. Instructors can also tailor the list of options to leverage available institutional technology and resources without bearing responsibility for learning, teaching, and supporting new software.
At the heart of this assignment, we’ll discuss how giving students “voice and choice” bolsters their resilience and empowers them to persevere through challenges. Research is conclusive that student investment and performance skyrocket when they are given agency in determining how to demonstrate proficiency.
The session will conclude with a hands-on activity in which participants use the provided rubric to evaluate real student work samples. We will finish with a question-and-answer session and scannable QR code that leads to downloadable materials and a cited list of the research that informed this presentation.”
Presenter:
Lisa Bond, Instructional Designer, Seton Hall University
McNichol Art Gallery, Bruder Life Center
Meet Your Students’ Needs – Expanding Your Digital Catalog with a Course Sharing Network
Innovations like course sharing have been instrumental in meeting student needs and helping institutions achieve their strategic goals. Broadly speaking, course sharing offers institutions a low-risk way to help students access the courses they need when they need them while maintaining the integrity of learning outcomes and the student experience. Acadeum enables students and institutions to succeed with a platform and digital network that expands course catalogs, locates just-in-time solutions, and connects like-minded schools and learners to workforce-aligned and market-competitive courses. In addition, consortial course sharing makes for a more seamless student experience that bolsters retention and completion rates and better prepares learners with the skills required to succeed.
Through the Acadeum network, like-minded colleges and universities can access and share high-quality digital courses to support student success and boost academic innovation.
Attract new student populations by delivering in-demand courses and programs, conveniently and affordably.
Update or expand existing programs with new content from like-minded partners.
Meet student needs with workforce-aligned courses that offer credit as well as hands-on skills.
Ensure students have the courses they need when they need them to maintain velocity to completion.
Hear from a panel of institutional leaders on how they are using course sharing to:
Offer learners opportunities to regain academic standing and overcome barriers to completion
Eliminate transfer hassles and increase scheduling flexibility
Ensure athletic eligibility
Support instructional resourcing needs
Presenters:
JP Palmares, Senior Manager, Partner Success, Acadeum
Dr. Carol Traupman-Carr, Vice Provost, Moravian University
Classroom 107, Mullen Communications Center
Skip The Hidden Messages: The Importance of Feedback in Online Learning
As in any learning environment, feedback impacts the learning experience. Yet many times instructors and facilitators fall short on providing meaningful feedback to learners. This session will cover key points of providing effective feedback to students in an online setting by taking a closer look at feedback methods and delivery. Feedback shouldn’t be a hidden message!
Presenter:
Talia Martinez, Instructional Designer, Kean University
McNichol Art Gallery, Bruder Life Center
HyFlex: Creating Opportunities for Success as a Community College
After the pandemic, Ocean County College was looking to create more flexibility for students in how they receive their education. In the summer of 2022, the college launched its first set of fully HyFlex courses allowing students to choose on any given day whether they came to class in person or logged in remotely to the live class session. Data was collected over the course of the 2022-2023 Academic year to see how students performed and to see if there were any advantages students perceived to using the modality. Many positive outcomes were discovered. This session will go over the results from the qualitative and quantitative data as well as walk participants through the process of implementing HyFlex classes in a community college setting.
Presenters:
Catherine Mancuso, Dean of Faculty Development and Learning Innovation, Ocean County College
Dr. Amir Sadrian, Associate Vice President of Academic Affairs, Ocean County College
Classroom 107, Mullen Communications Center
Crafting Campus Connections with Oral History and Adobe Premiere Rush
This presentation discusses how student-created videos made with Adobe Premiere Rush contribute to campus connectivity. We will provide examples that demonstrate the community-building nature of the assignment in a post-pandemic era and conclude with a reflection on how the final project submissions (which were donated to the university’s archives) used a new technology to create inspiring content that benefitted not only students, but also members of the Seton Hall University community in the present moment and, as we will explain, future community members as well. In our ever-changing technological world, collaborative projects between college students, faculty, and staff must ensure individuals from different backgrounds can work together to benefit their university communities. The disconnects that exist between these groups, however, present challenges for determining what productive collaborations and connections might look like. An oral history assignment used at Seton Hall University in Spring 2023 reflects one such innovative approach. First we will discuss how oral history presents a unique and engaging way for students to learn about history from those who have lived it, and then we will outline the assignment itself. Finally, the audience will see how the creation of an oral history using Adobe Premiere Rush was the ideal way for students to practice several skills, including research techniques (reading primary and secondary sources), interpersonal communication (interviewing), and video creation (Adobe Premiere Rush).
Presenter:
Kate Sierra, Instructional Designer, Seton Hall University
McNichol Art Gallery, Bruder Life Center
Using Bite-sized Videos to Layer Nurse Practitioner Content
Educators are constantly exploring innovative strategies to assist students in retaining salient course material. Educational research has established that students need to refocus in under a minute, their attentiveness diminishes throughout a class, and they focus more effectively when student-centered pedagogies are integrated (Blake, 2023). Specifically, Gen Z nursing graduate students have shorter attention spans and, as digital natives (Prensky, 2001), expect to learn through a variety of modalities (Chicca & Shellenbarger, 2018; Singh & Dangmei, 2016). Today’s nurse practitioner students are digitally grounded and often prefer information in bite-sized chunks. To address this need, the presenters created a series of health assessment bite-sized videos highlighting important lecture content to reinforce nurse practitioner student course content. The presenters found this tailored approach can effectively engage nurse practitioner students and leverage the most important content for retention beyond the didactic classroom. This presentation will emphasize the process used in developing and refining the videos with a look forward to adapting the format to other subject areas.
Presenters:
Ellen Farr, Assistant Director, Center for Excellence in Teaching and Learning, The College of New Jersey
Mary Ann Dugan, Assistant Professor, Nursing, The College of New Jersey
Samira Adam, Undergraduate Student: Nursing, The College of New Jersey
Elias Ananiadis, Undergraduate Student: Interactive Multimedia, The College of New Jersey
Preparing to Teach Online: From Learner to Teacher
Learning and teaching in the online environment often look very different than in face-to-face classrooms. For novice instructors, understanding how students experience the online classroom can feel like a daunting task. Through the Online Teaching Certificate (OTC) Program offered by Rutgers Teaching and Learning with Technology, instructors take on the role of online students for themselves. As they learn about key principles of online pedagogy and instructional technology usage, they also build empathy for their students, reflecting on their own experiences, challenges, and discoveries as online learners.
In this workshop, we will provide an overview of the design of the OTC program, highlighting one course in detail. We will demonstrate how the structure and content of this course, Universal Design and Accessibility for Online Teaching, facilitates both skill-building and metacognitive reflection about participants’ experiences as online learners. Attendees will focus on key challenges related to online learning contexts as well as strategies supporting online student success.
Presenter:
Natalia Kouraeva, Senior Instructional Designer, Rutgers University
McNichol Art Gallery, Bruder Life Center
Making Art with Embedded Poetry from Human and ChatGPT Sources
I published a book in 2011 titled “3D Haiku and Tanka” in Apple Books, which explored how writing poetry in 3D space differs from traditional ways of writing on flat 2D surfaces. When 3D printing strategies became available, it became possible to make real-world 3D models of assembled poetic lines that also contained audio buttons that could be pressed for a line of poetry to be read aloud. The appearance of laser cutters made possible the use of multicolored acrylic sheets cut into patterns such as squares, rectangles and pentagons and held together with wood veneer and nuts and bolts. The text of the poems is laser cut into the acrylic sheets, with QR codes placed nearby for reading by patrons who may be blind or visually impaired. The appearance of ChatGPT now makes it possible to ask ChatGPT to write poetry in Haiku or Tanka format and include the selected poetry in the mixed media art piece. An example of a Tanka poem made by asking ChatGPT follows.
Silent algorithms,
AI ponders the unknown,
Unraveling truths,
The quest for understanding,
Guided by digital minds.
This raises questions about copyright, as ChatGPT utilizes what is available in its captured database to write poetry. Any art I make with ChatGPT-inspired poetry therefore pays credit to ChatGPT when its generated poetry is used in an art piece; in that way, ChatGPT can be viewed as a virtual collaborator and is listed as such in the information describing the art piece.
Presenter:
Mike Kolitsky, Ph.D., Online Adjunct Instructor, The University of Texas at El Paso
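As a small illustration of the QR-code accessibility step described in this session, here is a minimal sketch using the open-source qrcode Python package to encode a link to a poem’s text or audio reading for placement beside the physical piece. The URL and output filename are placeholders, not references to the presenter’s actual materials.

```python
# Minimal sketch: generate a QR code that links to an online copy (text or audio)
# of a poem, so it can be placed next to the physical art piece for accessibility.
# The URL below is a placeholder; substitute the real location of the poem.
import qrcode  # pip install "qrcode[pil]"

POEM_URL = "https://example.org/poems/silent-algorithms"  # placeholder link

def make_poem_qr(url: str, out_path: str = "poem_qr.png") -> str:
    """Create a QR code image pointing at the poem and return the saved path."""
    img = qrcode.make(url)  # library defaults handle version and error correction
    img.save(out_path)
    return out_path

if __name__ == "__main__":
    print("Saved QR code to", make_poem_qr(POEM_URL))
```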
Classroom 107, Mullen Communications Center
Maintaining a Competitive Edge While Meeting Student and Workforce Demand
Micro-credentialing, or stackable certificates, has been cited repeatedly as one of the biggest trends inspiring academic innovation in higher education. Workforce-aligned programs allow institutions to:
Be highly responsive to changing professional trends
Deliver work-based learning experiences that are industry relevant
Keep up with the rapidly changing nature of the world today
Institutions can now leverage the Acadeum network specifically to promote credit recognition, reach new audiences, and provide industry-aligned skills that make students workforce ready. You’ll learn how Southeastern University is approaching industry-designed content to help expedite skills-based learning that can stack into a degree program, allowing learners to continue to advance along the education-career continuum.
Presenters:
Molly Bryant, Academic Project Manager, Southeastern University
Sarah McDonald, Director, Partner Innovation, Acadeum
McNichol Art Gallery, Bruder Life Center
Digital Teaching and Learning Across Campus
This presentation gives an overview of instructional design, digital tools’ capabilities, and a campus proposal for designing and implementing a campus-wide integrated digital teaching and learning capability.
Presenter:
Michael Schutz, Math Lab Coordinator, Neumann University
The post EdgeCon Delaware Valley 2023 appeared first on NJEdge Inc.
The OpenID Foundation hosted a hybrid workshop at Cisco in San Jose, CA on Monday, October 9, 2023. We sincerely appreciate Cisco hosting a number of Foundation events including the workshop.
This workshop began with working group updates, including deeper dives into two new working groups, Digital Credentials Protocols (DCP) and AuthZEN. These were followed by FAPI adoption and certification program updates. The rest of the agenda focused on Foundation strategic initiatives. The full agenda is below.
Thank you to all presenters and participants who made this workshop a success.
View workshop presentations
Workshop Agenda
TIME – TOPIC – PRESENTER
12:30-12:35 – Welcome & Note Well Statement – Gail Hodges
12:35-12:45 – Connect WG Update – Michael Jones
12:45-12:55 – eKYC & IDA WG Update – Mark Haine
12:55-1:05 – FAPI WG Update – TBC
1:05-1:15 – MODRNA WG Update – Bjorn Hjelm
1:15-1:25 – Shared Signals WG Update – Atul Tulshibagwale
1:25-1:40 – Digital Credentials Protocols (DCP) WG Deeper Dive – Kristina Yasuda & Torsten Lodderstedt
1:40-1:55 – AuthZEN WG Deeper Dive – Atul Tulshibagwale
1:55-2:00 – BREAK
2:00-2:10 – FAPI Landscape Update – Mike Leszcz
2:10-2:25 – OIDF Certification Program Update Including Upcoming Conformance Test – Joseph Heenan
2:25-2:40 – “Human-Centric Digital Identity” Paper Update – Elizabeth Garber
2:40-3:05 – Panel Discussion: The Global Digital Identity Landscape, the Gaps, and the Role of the OIDF – Nat Sakimura, Gail Hodges, Mark Haine, Elizabeth Garber
Description: The OpenID Foundation standards are playing a progressively important role in digital identity architectures globally. If we take a step back and look at the larger picture, what is the context in which architects are choosing standards, what are the gaps in the landscape, and what is OIDF’s role? What has the Foundation learned from recent whitepapers that we can apply to the Foundation’s strategy and approach?
3:05-3:30 – OIDF Member Feedback for the Interoperability Summit – Nat Sakimura, Gail Hodges, and Mark Haine
Description: Non-profit standards bodies recognize the need for global digital identity infrastructure to better bridge the global north and the global south. At an invite-only Summit in Paris on November 28th, a selection of leading non-profits, governments and multi-lateral institutions will meet to craft a shared path forward.
3:30-3:40 – OIDF Process Document Updates + Upcoming Member Vote – Mike Jones
3:40-3:45 – Closing Remarks + Open Q&A – Gail Hodges
3:45-4:00 – Networking
The post OpenID Foundation Workshop at Cisco – Monday, October 9, 2023 first appeared on OpenID Foundation.
The OpenID Foundation is pleased to announce a new Whitepaper Process as approved by the Board of Directors on October 9, 2023.
OIDF-led and co-led whitepapers help ecosystem stakeholders understand the wider landscape and the role of OIDF standards within that wider landscape. Such whitepapers make OIDF’s global, technical expertise more accessible to ecosystem stakeholders, technical experts, and laypeople alike. The OIDF Board agrees on topics worthy of research, analysis, and recommendations to the community. Recent whitepapers have addressed topics ranging from Open Banking and Open Data (including the FAPI family of standards) to Digital Identity (including the OpenID Connect and OpenID for Verifiable Credentials families of specs) and more.
The Whitepaper Process defines a more formal and transparent approach to whitepapers. It defines the process for governance as well as the editorial tasks of scoping, drafting, publishing, and updating whitepapers. It also includes processes to engage partner organizations in OIDF-led or co-led papers and manage the distribution of comments. A thoughtful and transparent approach to whitepapers will help ensure OIDF whitepapers consistently deliver on the OIDF’s Vision and Mission while meeting the needs of whitepaper partner organizations and contributors.
The post OpenID Foundation Announces New Whitepaper Process first appeared on OpenID Foundation.
Newark, NJ, October 17, 2023 – Dr. Forough Ghahramani, Edge’s Assistant Vice President for Research, Innovation, and Sponsored Programs, will be joining fellow thought leaders at the New Jersey Equity in Commercialization Collective (NJECC) Annual Conference, Empowering Innovators: Inclusive Pathways to Commercialization, on October 25, 2023. The conference aims to create sustainable connections between various innovation ecosystem participants, targeting a diverse group of inventors and commercialization gatekeepers: investors, service providers, and tech transfer and venture development staff.
Hosted at New Jersey Institute of Technology’s Campus Center in Newark, NJ, sessions will run from 8:45 AM to 4 PM, with support from the National Science Foundation ADVANCE Partnership Grant – The New Jersey Equity in Commercialization Collective (NJECC).
Dr. Ghahramani, a co-PI for the NSF ADVANCE Partnership New Jersey Equity in Commercialization Collective (NJECC) grant and one of the conference organizers, will lead a panel dedicated to National Programs and Resources to Improve Inventor Diversity, featuring Henry Ahn, Program Director (SBIR/STTR), National Science Foundation (NSF); Almesha Campbell, Assistant Vice President for Research and Economic Development, Jackson State University; Holly Fechner, Executive Director, Invent Together; and Kirsten Leute, Partner, University Relations, Osage University Partners.
The PI for the NJECC Project is Treena Livingston Arinzeh, Ph.D., a Professor of Biomedical Engineering at Columbia University. In addition to Dr. Ghahramani, NJECC co-PIs include:
Nancy Steffen-Fluhr, Ph.D., Director of the Murray Center for Women in Technology at the New Jersey Institute of Technology (NJIT)
Judith Sheft, Executive Director of the New Jersey Commission of Science, Innovation and Technology
Jeffrey A. Robinson, Ph.D., Provost and Executive Vice Chancellor at the Newark campus of Rutgers University, who holds the Prudential Chair in Business and is a Professor of Management and Global Business at Rutgers Business School
“Building on last year’s inaugural conference, the 2023 conference will feature leaders sharing insights into their own entrepreneurial journeys and will provide information about available resources to both new and experienced innovators. Together, attendees will explore strategies to remove barriers and promote equity in New Jersey’s innovation landscape,” noted Dr. Ghahramani. She added, “We hope you’ll join us as we work to shape a brighter and more equitable future for innovation. Please share this invitation with anyone you think would be interested in and benefit from attending. We can’t wait to see you there!”
Visit the NJECC Annual Conference page to learn more about this event and this year’s speakers. To register to attend the October 25, 2023 conference, complete your registration HERE.
ABOUT NJECC:
NJECC, an NSF ADVANCE Partnership, addresses gender equity issues in academic technology commercialization (patenting, licensing, and startup creation) by focusing on the elimination of systemic institutional and entrepreneurial ecosystem barriers. NJECC partners with universities and institutions throughout New Jersey as part of its systemic change initiative to increase the diversity of STEM faculty researchers who participate in New Jersey’s entrepreneurship and innovation ecosystem.
Contact: contact@njeccadvance.com
The post Dr. Forough Ghahramani to Lead National Programs and Resources to Improve Inventor Diversity Panel at NJECC Annual Conference appeared first on NJEdge Inc.
Newark, NJ, October 17, 2023 – Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs at Edge, will co-present two sessions during the EDUCAUSE Annual Conference Online, October 18–19, 2023.
The EDUCAUSE Annual Conference connects the best thinkers in higher education technology. Considered to be THE event where professionals and technology providers from around the world gather to network, share ideas, grow professionally, and discover solutions to today’s challenges, it’s the largest gathering of peers that attendees can relate to, learn from, and stay connected to throughout the year.
The much-anticipated EDUCAUSE Annual Conference Online commences the morning of Wednesday, October 18, 2023, with Dr. Ghahramani’s first session taking place at 10:30 am ET via a Simulive Presentation with Dr. Tabbetha Dobbins, Dean, School of Graduate Studies, Rowan University, and Dr. Mira Lalovic-Hand, Senior Vice President and Chief Information Officer, Rowan University. The presentation, Advancing Cyberinfrastructure for Research at an Emerging R1 Institution, will highlight Rowan’s cyberinfrastructure strategy and implementation journey: understanding ongoing and emerging research and education environments, and learning about faculty and student needs in hardware, networking, and other adjacent services and technologies, such as integrated access to cloud and local resources, fast data transfer, and wireless access across multiple institutions.
At 12:30 pm ET the following afternoon, Dr. Ghahramani and John Hicks, Network Research Engineer at Internet2, will host a session titled CaRCC RCD Capabilities Model Focused Tools. During the Simulive Presentation, they will share work being done by the RCD (research, computing, and data) Capabilities Model Focused Tools committee to help address the RCD support needs of smaller, under-resourced institutions.
“This exciting event will encompass sessions from the in-person Chicago conference, where I have the privilege of presenting, alongside captivating new material that is simply unmissable,” exclaims Dr. Ghahramani.
Those interested in registering for the EDUCAUSE Annual Conference Online are encouraged to visit https://events.educause.edu/annual-conference-online.
The post Dr. Forough Ghahramani to Present Two Breakout Sessions at the EDUCAUSE Annual Conference Online appeared first on NJEdge Inc.
Committee Specification 03 ready for testing and implementation
OASIS is pleased to announce that OData Extension for Data Aggregation Version 4.0 from the OASIS Open Data Protocol (OData) TC [1] has been approved as an OASIS Committee Specification.
This specification adds basic grouping and aggregation functionality (e.g. sum, min, and max) to the Open Data Protocol (OData) without changing any of the base principles of OData.
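For readers unfamiliar with the extension, the sketch below illustrates the kind of request it enables; the service URL and entity set are hypothetical placeholders, while the groupby/aggregate transformation follows the $apply syntax the specification defines.

```typescript
// Illustrative (hypothetical) OData aggregation request using the $apply query option.
// "Sales", "Customer/Country", and "Amount" are made-up names; the groupby/aggregate
// transformation syntax is what this extension standardizes.

const serviceUrl = "https://example.org/odata/Sales"; // hypothetical OData service
const apply = "groupby((Customer/Country),aggregate(Amount with sum as Total))";

async function totalsByCountry(): Promise<void> {
  const response = await fetch(`${serviceUrl}?$apply=${encodeURIComponent(apply)}`, {
    headers: { Accept: "application/json" },
  });
  const body = await response.json();
  // Each result item carries the grouping property plus the computed aggregate,
  // e.g. { "Customer": { "Country": "NZ" }, "Total": 12345 }.
  console.log(body.value);
}
```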
This Committee Specification is an OASIS deliverable, completed and approved by the TC and fully ready for testing and implementation.
The documents and related files are available here:
OData Extension for Data Aggregation Version 4.0
Committee Specification 03
19 September 2023
Editable source (Authoritative):
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/odata-data-aggregation-ext-v4.0-cs03.md
HTML:
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/odata-data-aggregation-ext-v4.0-cs03.html
PDF:
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/odata-data-aggregation-ext-v4.0-cs03.pdf
ABNF components – OData Aggregation ABNF Construction Rules Version 4.0 and OData Aggregation ABNF Test Cases: https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/abnf/
OData Aggregation Vocabulary:
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/vocabularies/Org.OData.Aggregation.V1.json
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/vocabularies/Org.OData.Aggregation.V1.xml
For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at:
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/odata-data-aggregation-ext-v4.0-cs03.zip
Members of the OData TC [1] approved this specification by Special Majority Vote. The specification had been released for public review as required by the TC Process [2]. The vote to approve as a Committee Specification passed [3], and the document is now available online in the OASIS Library as referenced above.
Our congratulations to the TC on achieving this milestone and our thanks to the reviewers who provided feedback on the specification drafts to help improve the quality of the work.
========== Additional references:
[1] OASIS Open Data Protocol (OData) TC
https://www.oasis-open.org/committees/odata/
[2] Public reviews:
– Public review metadata record:
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/csd04/odata-data-aggregation-ext-v4.0-csd04-public-review-metadata.html
– Most recent comment resolution log:
https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/csd04/odata-data-aggregation-ext-v4.0-csd04-comment-resolution-log.xlsx
[3] Approval ballot:
https://www.oasis-open.org/committees/ballot.php?id=3795
The post OData Extension for Data Aggregation v4.0 from OData TC approved as a Committee Specification appeared first on OASIS Open.
Ceramic recently launched a new course on LearnWeb3, a leading platform for Web3 education. The course, ‘Build an AI Chatbot on ComposeDB and the Ceramic Network,’ will teach developers how to build powerful and scalable AI chatbots using Ceramic’s decentralized data infrastructure.
AI chatbots are becoming increasingly popular in a variety of industries, from customer service to healthcare to education. However, building and maintaining AI chatbots can be complex and expensive, especially for small businesses and startups. Ceramic's new course aims to make AI chatbots more accessible to developers of all skill levels.
If you've used AI chatbots like Agent GPT, you've probably noticed that they're more efficient and effective when they have the relevant context for your conversation. This is because the chatbot is able to factor in previous messages, as well as other relevant information, when generating its response. Ceramic gives developers a new way to build applications (in this example, chatbots) by storing data in a reusable format. Ceramic allows developers to benefit from the data that other developers have already created—overcoming the cold-start problem of populating applications with data.
What Will You Learn?
The course covers the following:
What is Ceramic and how does it work?
What is ComposeDB and how does it make it easier to build applications on Ceramic?
Setting up your node, with brief context on different node configurations
Data modeling for an AI-chatbot use case (see the sketch at the end of this post)
Authenticating users
Conversing with the chatbot
How to use filters to efficiently read data from your node
How to write mutations
How to stream AI responses to your frontend UI
What Do You Need to Get Started?
You can sign up for an account on the LearnWeb3.io website (with a blockchain wallet or email address) and follow the tutorial link to earn points for answering quiz questions and completing the tutorial.
As outlined in the tutorial, the only dependencies you’ll need are:
MetaMask Chrome Extension
Node v16
An OpenAI API key
Want to Learn More about Building on Ceramic?
While this course on LearnWeb3 describes a specific use case for ComposeDB, there are other developer examples and tools that you might find helpful as you build on Ceramic:
How to Use and Store Composable Attestations with Ceramic and EAS
Walk through a tutorial on how to generate Attestations (using Ethereum Attestation Service) and store them on ComposeDB.
ComposeDB API Sandbox
Use the ComposeDB API Sandbox to test example queries on a real dataset.
Learn How to Encrypt and Decrypt Data on ComposeDB
Visit the tutorial for one methodology you can use to encrypt data and decrypt data on ComposeDB.
Create a Social App on ComposeDB
The Social App ComposeDB Starter will help you get started building your own social app.
Let us know what you think on the Forum!
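For a feel of what the data-modeling and query steps mentioned above involve, here is a minimal, hypothetical sketch using the ComposeDB JavaScript client; the model name, fields, and file paths are illustrative and are not the tutorial's actual code.

```typescript
// Hypothetical sketch: reading recent chatbot messages from a ComposeDB node.
// Assumes a composite has already been deployed with a simple "Message" model
// (fields "text" and "createdAt"); all names and paths here are illustrative.

import { ComposeClient } from "@composedb/client";
// Runtime definition produced when you compile your composite with the composedb CLI.
import { definition } from "./__generated__/definition.js";

const compose = new ComposeClient({
  ceramic: "http://localhost:7007", // local Ceramic/ComposeDB node
  definition,
});

async function loadRecentMessages(): Promise<void> {
  // GraphQL query against the node. ComposeDB conventionally exposes an index
  // per model (here "messageIndex"), but the exact field names depend on the
  // schema you deploy.
  const result = await compose.executeQuery(`
    query {
      messageIndex(first: 10) {
        edges {
          node {
            text
            createdAt
          }
        }
      }
    }
  `);
  console.log(result.data);
}
```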
The post Course Assessments, Quality Assurance, and Maintaining High-Quality Digital Learning Programs appeared first on NJEdge Inc.
Revised OASIS Standard now available
OASIS and the OSLC Open Project are pleased to announce the approval and publication of OSLC Change Management Version 3.0 Errata 01.
This document incorporates Approved Errata for the OASIS Standard “OSLC Change Management Version 3.0.” The specific changes are listed in Appendix C of Part 1, at https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-spec.html#errata.
The nine currently published Open Services for Lifecycle Collaboration (OSLC) specifications, along with several informative Project Notes, collectively define a core set of services and domain vocabularies for lifecycle management including requirement, change and quality management.
This specification defines the OSLC Change Management domain, a RESTful web services interface for the management of product change requests, activities, tasks and relationships between those and related resources such as requirements, test cases, or architectural resources. To support these scenarios, this specification defines a set of HTTP-based RESTful interfaces in terms of HTTP methods: GET, POST, PUT and DELETE, HTTP response codes, content type handling and resource formats.
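To make the RESTful style above concrete, here is a rough, hypothetical sketch of fetching a change request resource; the resource URI is a placeholder, and the headers reflect common OSLC conventions for content negotiation and version selection.

```typescript
// Hypothetical sketch: retrieving an OSLC Change Management resource over HTTP.
// The URI below is a placeholder; real URIs are discovered via the provider's
// OSLC service documents.

async function getChangeRequest(): Promise<void> {
  const uri = "https://example.org/oslc/changerequests/123"; // placeholder resource URI

  const response = await fetch(uri, {
    headers: {
      Accept: "text/turtle",        // ask for an RDF (Turtle) representation
      "OSLC-Core-Version": "3.0",   // request OSLC 3.0 behavior from the server
    },
  });

  // The representation describes an oslc_cm:ChangeRequest, with properties such
  // as dcterms:title and links to related resources (requirements, test cases).
  console.log(response.status, await response.text());
}
```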
The documents and related files are available here:
OSLC Change Management Version 3.0
OASIS Standard with Approved Errata 01
06 July 2023
OSLC Change Management Version 3.0. Part 1: Specification
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-spec.html (Authoritative)
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-spec.pdf
OSLC Change Management Version 3.0. Part 2: Vocabulary
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-vocab.html (Authoritative)
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-vocab.pdf
OSLC Change Management Version 3.0. Part 3: Constraints
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-shapes.html (Authoritative)
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-shapes.pdf
Change Management Vocabulary definitions file: https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-vocab.ttl
Change Management Resource Shape Constraints definitions file: https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/change-mgt-shapes.ttl
For your convenience, OASIS provides a complete package of the specification documents and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/os/cm-v3.0-os.zip
Members of the Project Governing Board of the OASIS Open Services for Lifecycle Collaboration (OSLC) OP [1] approved the publication of these Errata by Full Majority Vote [2]. The Errata had been released for public review as required by the TC Process [3]. The Approved Errata are now available online in the OASIS Library as referenced above.
Our congratulations to the OSLC OP on achieving this milestone.
========== Additional references:
[1] OASIS Open Services for Lifecycle Collaboration (OSLC) OP
https://open-services.net/about/
[2] https://lists.oasis-open-projects.org/g/oslc-op-pgb/message/317
[3] Public review:
– 15-day public review, 31 May 2023: https://lists.oasis-open.org/archives/members/202305/msg00008.html
– Comment resolution log: https://docs.oasis-open-projects.org/oslc-op/cm/v3.0/errata01/psd01/cm-v3.0-psd01-comment-resolution-log.txt
The post Approved Errata for OSLC Change Management v3.0 published appeared first on OASIS Open.
Online violence against women in politics in Latin America is becoming more and more frequent, affecting women’s right to participate in democracy. In countries such as Colombia, Brazil, Bolivia, Argentina, Chile and many others, female politicians are attacked on social media platforms in attempts to undermine their political legitimacy, for reasons associated with their gender.
A growing number of initiatives have been undertaken to fight against this concerning trend. This blog post (which is by no means an exhaustive list!) lists just a few examples of the inspiring work being done in the region.
At the end, we’ve included a list of initiatives offering hands-on emergency digital security support in the region as well.
Bolivia
Cielito Saravia from Internet Bolivia wrote this guide to help women in politics identify harassment and political violence online and to strengthen their defence capabilities against digital attacks. Internet Bolivia also has an interactive guide to support women who are suffering tech-facilitated gender-based violence, and a guide focusing on digital security for women politicians in office.
Chile
In Chile, where 67% of women candidates receive violent messages during campaigns, #TomaPartido published a digital security guide, written by Paz Peña, that takes a feminist approach for people and organisations facing digital political violence. The guide offers information about some of the most common attacks as well as practical steps for improving digital security. It’s available in Spanish and Portuguese.
Colombia
In Colombia, where recent legislation was created to fight political violence against women, Fundación Karisma has been doing research to define digital violence against women politicians in the country and understand its consequences for their work and life. In this publication, they offer recommendations for political movements on how to navigate digital violence against women in politics.
Brazil
The initiative MonitorA, from Azmina and InternetLab, looks at the different forms that gendered political violence takes in the country to keep women and trans people out of politics. In its mapping of misogyny and online attacks against women in politics, MonitorA has found staggering evidence: in the first week of electoral campaigning in 2022, almost 4.5k attacks or insults were directed at women candidates on Twitter.
The project coLAB at Universidade Federal Fluminense created the Mapa da Violência Política de Gênero em Plataformas Digitais, highlighting the intensity, type and various forms of political gendered violence on Twitter, Facebook, Instagram and YouTube.
Since 2020, Instituto Marielle Franco has been researching and monitoring political violence in Brazil, and showing how gendered and racialised it is. They have mapped 8 types of political violence suffered by over 140 Black women candidates in the country, and their work has consistently shown that, whether Black women are elected or not, they remain unprotected against these forms of violence. Through “Não Seremos Interrompidas”, they’ve been advocating for local authorities to act against the political violence that has kept Black women and LGBTQIA+ people from accessing and safely occupying political power.
InternetLab and Redes Cordiais published a guide to help women candidates who face gendered political violence. Mulheres Negras Decidem, a movement working to strengthen Brazil’s democracy by supporting Black women in politics, wrote a Digital Care Guide to support activists and politicians, and Coalizão Direitos na Rede and Coding Rights published this mini digital protection guide for elections.
Central America and the Dominican Republic
Ipandetec, an organisation working in Central America and the Dominican Republic, has done research monitoring online gender-based violence against women in politics in Panama, Guatemala, Honduras and Costa Rica. They also run Seguras En Línea, a project aiming to mitigate digital gender violence in Central America and the Dominican Republic.
Fundación Acceso is working in Central America to promote digital security and holistic protection for organisations and people defending human rights. With Observatorio Centroamericano de Seguridad Digital, they’ve been analysing digital security incidents of human rights defenders and organisations in Guatemala, Honduras, El Salvador and Nicaragua, showing that violence against women human rights defenders is pervasive in the region.
Research covering Latin America
In 2022, the Alianza Regional por la libre expresión e información conducted an extensive qualitative study on online gender violence towards women with a public voice in Latin America and its impact on freedom of expression. They gathered cases from women in Argentina, Bolivia, Brazil, Colombia, Cuba, Costa Rica, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Paraguay, Uruguay and Venezuela.
Support lines for digital security emergencies
If you need hands-on, emergency digital security support, there are a number of initiatives that can help you. Here are some of them:
Acoso.Online has an online emergency repository sharing direct access to different materials with information on how to proceed in the face of gender-based violence online.
Internet Bolivia has an active support line available for women, teenage girls, journalists, women in politics, activists, and LGBTIQ+ persons who need support navigating gender-based violence in digital spaces.
Hiperderecho created Tecnoresistencias, a space for women, dissidents, diverse communities, and activists who resist gender violence on the internet.
Vita Activa is a helpline providing online support and strategic solutions for women and LGBTIQ+ journalists, activists, and defenders of gender, land and labour rights and freedom of expression.
Maria D’Ajuda is the first digital security helpline run by feminists in Brazil, aimed at women, non-binary people, LGBTQIAP+ people, and organisations in Latin America.
Luchadoras, in Mexico, has a helpline to support people experiencing gender-based violence online.
Access Now’s Digital Security Helpline works with individuals and organisations around the world to keep them safe online.
The post 10 inspiring initiatives fighting online political violence against women in Latin America first appeared on The Engine Room.
It’s time for a good old-fashioned episode of the Identity at the Center Podcast! In episode #238, we dive into our thoughts on the recent Oktane, Identity Week America, and SailPoint Navigate conferences. We also give you a sneak peek of what's to come in our Authenticate 2023 keynote live show.
Tune in to hear our discussion on these exciting topics and stick around to hear our lighthearted chat about best and worst airports and the best thing we ate this week.
You can listen to the full episode on idacpodcast.com or on your favorite podcast app. Don't forget to subscribe so you never miss an episode! 🎧🔥
Businesses are readily embracing the passwordless road ahead. Which direction is your organization going?
In research conducted alongside LastPass, we dig into understanding:
Why businesses are looking to passwordless authentication to secure their data;
How and when they plan on implementing passwordless technology within their organization;
The important role of passkeys in this passwordless future.
Download the eBook
Read the Press Release
The post The 2023 Workforce Authentication Report: Embracing the Passwordless Future appeared first on FIDO Alliance.
89% of IT leaders expect passwords will represent less than a quarter of their organization’s logins within five years or less
CARLSBAD, California and BOSTON, Massachusetts – October 16, 2023 – The FIDO Alliance and LastPass released the 2023 Workforce Authentication Report today, which gauges IT decision makers’ attitudes and plans for removing passwords in favor of easier and more secure passwordless authentication. The verdict? Businesses are actively moving to eradicate passwords from employees’ lives, with 89% of surveyed IT leaders expecting passwords to represent less than a quarter of their organization’s logins within five years or less.
Top findings from the 2023 Workforce Authentication Report:
Businesses are ready to embrace a passwordless future, with 92% having a plan to move to passwordless technology and 95% currently using a passwordless experience at their organization.
Businesses believe passkeys will help make them more secure: 92% believe passkeys will benefit their overall security posture, and 93% agree that passkeys will eventually help reduce the volume of unofficial (i.e., “Shadow IT”) applications.
However, many recognize that work still needs to be done: a majority of businesses surveyed are still using phishable authentication methods, such as passwords (76%) and multi-factor authentication (MFA) (43%), when it comes to authenticating users within their organization.
The majority recognize that this transition will take time and education: 55% of IT leaders surveyed feel they need more education on how passwordless technology works and/or how to deploy it, and 28% cited concerns that users may be resistant to change or to using a new technology.
When making this transition, businesses made it clear they want to choose where they store passkeys, with 69% of IT leaders anticipating storing them in a third-party password manager.
“The move towards passwordless authentication has gained steam over the past few years as an increasing number of organizations have moved to eliminate the risk and liability of passwords as they are the source of the vast majority of data breaches,” said Andrew Shikiar, Executive Director and CMO of the FIDO Alliance. “Today’s report backs up this trend by illustrating that global IT leaders are rapidly aiming to reduce their reliance on legacy forms of authentication in favor of passkeys for user-friendly, phishing-resistant sign-ins.”
“These survey results demonstrate that businesses are excited about the prospect of a passwordless future, and all the benefits that future will bring. And the clear majority also recognize that a password manager plays an important role in that future,” said Mike Kosak, Senior Principal Intelligence Analyst at LastPass. “While the adoption of passwordless authentication will take some time and coaching, LastPass is proud to support forward-thinking leaders like these on that journey – ushering their organizations toward security that is stronger and more effortless than ever.”
Resources:
2023 Workforce Authentication Report
LastPass Blog Post on the 2023 Workforce Authentication Findings
LastPass | FIDO Alliance LinkedIn Live: October 16, 12:30 pm PT
Research for the 2023 Workforce Authentication Report was conducted by Sapio Research through an online survey of 1,005 IT decision makers in the United States, Germany, Australia, United Kingdom, and France.
# # #
Editor’s note:
Phishable authentication methods rely on knowledge-based factors or other factors that can be intercepted by a malicious party. Phishable authentication methods include passwords, one-time passwords (OTPs), and SMS OTPs.
About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.
About LastPass
LastPass is an award-winning password manager which helps millions of registered users organize and protect their online lives. For more than 100,000 businesses of all sizes, LastPass provides password and identity management solutions that are convenient, easy to manage and effortless to use. From enterprise password management and single sign-on to adaptive multi-factor authentication, LastPass for Business gives superior control to IT and frictionless access to users. For more information, visit https://lastpass.com. LastPass is trademarked in the U.S. and other countries.
PR Contact – FIDO Alliance
press@fidoalliance.org
PR Contact – LastPass
press@lastpass.com
The post Businesses are Ready to Ditch Passwords, Says New Report from FIDO Alliance and LastPass appeared first on FIDO Alliance.
Increased desire for biometrics and awareness of passkeys increases imperative on service providers to enable stronger, more user-friendly sign-ins
Summary of key findings:
Password usage without two-factor authentication (2FA) is still dominant across use cases – consumers enter a password manually nearly 4 times a day, or 1,280 times a year
But when given the option, users want other authentication methods – biometrics is both the preferred method for consumers to log in and what they believe is most secure, while awareness of passkeys continues to grow
Online scams are becoming more frequent and more sophisticated, likely fuelled by AI – over half (54%) have seen an increase in suspicious messages and scams, while 52% believe they have become more sophisticated
The impact of legacy sign-in methods is getting worse – the majority of people are abandoning purchases and giving up accessing services online – this is 15% more likely than last year, at nearly four times per month per person
October 16, 2023 – FIDO Alliance today publishes its third annual Online Authentication Barometer, which gathers insights into the state of online authentication in ten countries across the globe. New to the Barometer this year, FIDO Alliance has also begun tracking consumer perception of threats and scams online in a bid to understand anticipated threat levels globally.
Key findings
The 2023 Online Authentication Barometer found that despite widespread usage of passwords lingering on, consumers want to use stronger, more user-friendly alternatives. Entering a password manually without any form of additional authentication was the most commonly used authentication method across the use cases tracked – including accessing work computers and accounts (37%), streaming services (25%), social media (26%), and smart home devices (17%). Consumers enter a password manually nearly four times a day on average, or around 1,280 times a year. The only exception to this trend was financial services, where biometrics (33%) narrowly beat passwords (31%)* as the most used sign-in method.
This is especially interesting considering biometrics’ rising popularity as an authentication method. When asked what authentication method people consider most secure and the method they most prefer using, biometrics ranked as favourite in both categories, rising around 5% in popularity since last year. This suggests that consumers want to use biometrics more but don’t currently have the opportunity.
“This year’s Barometer data showed promising signs of shifting consumer attitudes and desire to use stronger authentication methods, with biometrics especially proving popular. That said, high password usage without 2FA worryingly reflects how little consumers are still being offered alternatives like biometrics, resulting in lingering usage,” commented Andrew Shikiar, Executive Director and CMO at FIDO Alliance.
Scams are getting more frequent and more sophisticated – likely fuelled by AI
This year’s Barometer also unearthed consumer perception of threats and scams online. 54% of people have noticed an increase in suspicious messages and scams online, while 52% believe these have become more sophisticated.
Threats are seen to be active across several channels, but primarily email, SMS messages, social media, and fake phone or voicemails. The increased accessibility of generative AI tools is a likely driver of this rise in scams and phishing threats. Tools like FraudGPT and WormGPT, which have been created and shared on the dark web explicitly for use in cybercrime, have made crafting compelling social engineering attacks far simpler, more sophisticated, and easier to do at scale. Deepfake voice and video are also being used to bolster social engineering attacks, tricking people into thinking they are talking to a known trusted person.
Shikiar added: “Phishing is still by far the most used and effective cyberattack technique, which means passwords are vulnerable regardless of their complexity. With highly accessible generative AI tools now offering bad actors the means to make more convincing and scalable attacks, it’s imperative service providers listen to consumers and start to make non-phishable and frictionless solutions like passkeys and on-device biometrics more readily available, rather than iterating on ultimately flawed legacy authentication like passwords and OTPs.”
Passkeys, which provide secure and convenient passwordless sign-ins to online services, have grown in consumer awareness despite having been live for just over a year, rising from 39% awareness in 2022 to 52% today. The non-phishable authentication method has been publicly backed by many big players in the industry – Google recently announced that passkeys are now available for all its users as a way to move away from passwords and two-step verification, as has Apple, with other brands like PayPal also making them available to consumers in the last twelve months.
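For service providers weighing passkey support, the browser-side entry point for creating a passkey is the WebAuthn API. The sketch below is a minimal illustration only: the relying party, user record, and challenge handling are placeholders, since in practice the challenge and user data come from your server.

```typescript
// Minimal browser-side sketch of creating a passkey with the WebAuthn API.
// Relying-party details, the user record, and the challenge below are placeholders;
// a real deployment generates the challenge server-side and verifies the result there.

async function registerPasskey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // placeholder; use a server-issued challenge

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Service", id: "example.com" }, // hypothetical relying party
      user: {
        id: new TextEncoder().encode("user-123"), // hypothetical opaque user handle
        name: "alice@example.com",
        displayName: "Alice Example",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",       // discoverable credential, i.e. a passkey
        userVerification: "preferred",
      },
    },
  });

  // Send the attestation response to the server for verification and storage.
  console.log("Created credential:", credential);
}
```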
The impact of legacy sign-ins worsens for businesses and consumers
The negative impact caused by legacy user authentication was also revealed to be getting worse. 59% of people have given up accessing an online service and 43% have abandoned a purchase in the last 60 days, with the frequency of these instances rising year on year to nearly four times per month, per person, up by around 15% on last year. Poor online experiences are ultimately hitting businesses’ bottom lines and causing frustration among consumers.
70% of people have had to reset and recover passwords in the last two months because they’d forgotten them, further highlighting how inconvenient passwords are and their role as a primary barrier to a seamless online user experience.
ENDS
Notes to editors:
Research for the FIDO Alliance’s Online Authentication Barometer was conducted by Sapio Research among 10,010 consumers across the UK, France, Germany, US, Australia, Singapore, Japan, South Korea, India and China.
*The answer option “Logging in via social sign-in” has been disregarded for the question specific to social media accounts, due to the answer option being included through an error.
About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.
PR Contact
press@fidoalliance.org
The post FIDO Alliance study reveals growing demand for password alternatives as AI-fuelled phishing attacks rise appeared first on FIDO Alliance.
Joon Hyuk Lee – APAC Market Development Director, FIDO Alliance
Welcome
As we usher in the participants of Authenticate 2023, we aim to provide a snapshot of various corners of the globe. Today, we’re privileged to bring together our esteemed members—industry luminaries from Thailand, Taiwan, Vietnam, Mainland China, Korea and Japan. Together, we’ll navigate the present landscape, confronting the challenges and celebrating the opportunities inherent in adopting phishing-resistant authentication methods across APAC.
Introducing Our Experts:
Khanit Phaton, Thailand: Senior Management Officer at ETDA
Karen Chang, Taiwan: VP at Egis Technology / Chair of FIDO Taiwan Forum
Simon Trac Do, Vietnam: CEO & Founder at VinCSS
Henry Chai, Mainland China: CEO at Uni-ID Technology, Lenovo / Co-Chair of FCWG
Jaebeom Kim, South Korea: Principal Researcher at TTA / Sub-Group Leader of FKWG
Masao Kubo, Japan: Manager, Product Design Department at NTT DOCOMO
Crafting an inclusive approach to online authentication in Thailand
Joon: Given Thailand’s rich diversity in many aspects, how does this influence the approach to and adoption of new online authentication systems for its citizens?
Khanit: As online services have become primary channels and gained popularity among the Thai population, coupled with the increasing number of cybersecurity threats, it’s crucial for both the public and private sectors to address this issue. Secure authentication is a key consideration. Given our diversity in aspects like culture and socioeconomic status, it’s essential to adopt an approach that’s inclusive and accessible for all. We’re exploring various methods for authentication; for instance, the Thai government’s introduction of the ThaID digital ID system, which utilizes both facial and fingerprint recognition, ensuring robust accessibility for all citizens. Meanwhile, Fintech companies and banks are developing mobile banking apps tailored to a wide range of mobile devices, incorporating online face verification services.
Reflecting on Taiwan’s recent strides with FIDO
Joon: Taiwan has showcased impressive FIDO deployment cases in recent years. Karen, as the chair of the FIDO Taiwan Regional Engagement Forum, can you offer insights on this journey?
Karen: The FIDO Taiwan Regional Engagement Forum (FTF) was formed in 2021, with members spanning IC chip, device, software, system, and application services. As of August 2023, we boast over 25 members and 80 FIDO-certified products. The government’s role in adopting and promoting FIDO standards cannot be overstated. The Ministry of Interior joined the FIDO Alliance in 2020 and launched the Taiwan FidO (TW FidO) service. By September 2023, TW FidO was integrated into more than 170 government department systems, encompassing a wide array of services.
The Financial Supervisory Commission (FSC) also emphasized the “Research and Development of Standardized Financial Mobile Identification Mechanisms” in the Financial Technology Development Roadmap released in 2020, known as “Financial FIDO”. This allows users to bind their mobile devices with physical financial cards, eliminating the need for traditional physical cards or account/password logins. Several financial institutions are currently piloting this Financial FIDO initiative.
Established in August 2022, the Ministry of Digital Affairs (moda) joined the FIDO Alliance in January 2023. Moda has been actively promoting international digital trust standards, like FIDO User Authentication and W3C Decentralized Identifiers, to industries like e-commerce, telecom services, online gaming, semiconductors, and manufacturing, ensuring a seamless and secure authentication experience. In many Asian countries, directives or guidelines from public organizations play a pivotal role in positioning a nation at the forefront of technology adoption. Today, it’s FIDO’s moment. I believe the FTF is on the right trajectory, and FIDO’s popularity is set to soar.
Vietnam’s Path to Simpler and Stronger Online Authentication
Joon: With many members in Vietnam being relatively new to the FIDO Alliance, how do you assess Vietnam’s readiness and the challenges it faces in adopting simpler and stronger online authentication methods?
Simon: Vietnam, like other nations, grapples with an intensifying phishing crisis that poses significant risks to users, agencies, and organizations. Although there are initiatives in place, such as the Anti-Scam Center, which aims to counteract these threats promptly and take down scam sites, their effectiveness is somewhat curtailed due to manual operations and heavy reliance on user awareness. On a brighter note, an increasing number of Vietnamese entities are engaging in the FIDO Alliance’s drive to minimize password reliance. Leading the charge in this passwordless movement in Vietnam are tech frontrunners like VinCSS and MK Group.
Mainland China’s Digital Landscape: Balancing Scale and Security
Joon: Mainland China has one of the largest digital user bases in the world. What unique challenges does this present when considering the adoption of novel simpler and stronger online authentication methods?
Henry: Indeed, in Mainland China, the sheer size of our digital user base brings about unique considerations. For any new security technology to be deployed, there’s an imperative need to consider the diversity in device capabilities. This ensures an optimal user experience for all, especially during the earlier times, before 2019, when not all smartphones were FIDO-enabled. During that period, any deployment of FIDO had to ensure that every user, regardless of their device’s capabilities, had a viable authentication alternative. Additionally, while authentication is a foundational layer, its adoption must align with business returns. When weighed against traditional, albeit less robust, authentication methods such as SMS and OTP, the decision to transition to FIDO becomes multifaceted. In many cases, the end solution is a mix of methods, balancing compatibility with business benefits. Presently, over 90 banks in Mainland China have adopted FIDO technology, and we anticipate this number to grow across different sectors soon.
Discussing South Korea’s technological advancements
Joon: South Korea is renowned for its advanced technological infrastructure. Jaebeom, how does this influence the nation’s approach to adopting new online authentication methods?
Jaebeom: It’s imperative for our country to integrate new authentication methods to facilitate seamless online identity verification for the public. In this quest, the South Korean government and associated agencies prioritize two critical aspects:
Technical Standards and Service Guidelines: We aim for consistent user experiences across platforms, irrespective of the service providers involved. This demands clear technical standards and robust service operation guidelines.
Legal Framework: Many online services require a solid legal basis for identity verification. Thus, legislative amendments and continued dialogues across the private sector, government, and academia are essential to formulating appropriate legal frameworks. Even if it is time-consuming, this step is indispensable. While our focus leans towards new online authentication methods, it’s equally important to ensure stability in both legacy and new systems, guaranteeing that all citizens can access online identity verification without hitches.
Japan – On the rise and acceptance of passkeys
Joon: Given the unified efforts of the FIDO Alliance Japan Working Group and its members, Japan leads in passkey deployments. Kubo-san, can you discuss the current trend and acceptance of passkeys in Japan?
Kubo-san: This year, I’ve observed several RPs deploying synced passkeys. While some organizations have long supported FIDO technology and embraced synced passkeys, others began their FIDO journey with synced passkeys only in 2023. This dynamic suggests that the momentum for passkey deployment is only set to accelerate. From a user perspective, awareness of passkeys is gradually heightening in Japan. Tech enthusiasts frequently discuss passkeys on social media, and according to Google Trends, search queries related to passkeys have surged. We’re in the early stages of a passwordless era in Japan, and I eagerly anticipate the broader acceptance and deployment of passkeys.
Delving deeper into phishing-resistant solutions in Thailand
Joon: Khanit, how can Thailand ensure that its authentication strategy remains robust and beneficial for online users? Would adopting phishing-resistant authentication solutions be advantageous?
Khanit: To bolster online security, Thailand has undertaken multiple strategies. We’re raising awareness through collaborative efforts with global bodies like the FIDO Alliance and defining digital ID standards that embed secure identity proofing and authentication methods. This lays down a foundational benchmark for users and service providers alike. Additionally, we’ve amended the Electronic Transaction Act to clearly delineate the responsibilities of service providers in guaranteeing authentication security and quality. Undoubtedly, integrating phishing-resistant authentication solutions, which rely on cryptographic techniques rather than vulnerable methods like PINs or passwords, would be a strategic advantage. Such solutions inherently offer heightened protection against phishing threats and pose a more formidable challenge for attackers compared to conventional methods.
Discussing Taiwan’s firm stance on cybersecurity
Joon: Could you provide an overview of the cybersecurity landscape in Taiwan and identify any notable trends?
Karen: In Taiwan, the zero-trust network security approach has become a pivotal national strategy. The sixth “National Information Security Development Plan (2021-2024)” was announced in February 2021, advocating for the Zero-Trust Architecture across government agencies and industries. The Taiwanese government has mapped out a comprehensive plan for implementing the zero-trust architecture, piloting validation and deployment mechanisms