Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!
Ethereum co-founder Joe Lubin joins Friederike Ernst to discuss why we are at the "end of a supercycle," a chaotic transition period where legacy institutions are finally adopting blockchain rails not just for efficiency, but for survival. They explore the "inevitable convergence" where the US government may actively rely on stablecoins to absorb debt, effectively using crypto to extend the lifespan of the dollar. At the same time, banks scramble to compete with self-custodial wallets.
Joe also details the structural evolution of Consensys, from an "organic blob" incubating projects like Gnosis to a focused software powerhouse. He differentiates Linea from competitors by highlighting its commitment to permissionless innovation where anyone can deploy a rollup without a "sign-off". He shares his vision for MetaMask evolving into a user-owned "full-service bank.
Topics
00:00 Intro & Paradigm Shift 04:15 Crypto-Anarchy vs. Enterprise 10:30 Banks & Stablecoins 16:00 The Economic Supercycle 24:45 Consensys History & Spin-outs 33:20 Linea & Decentralization 42:15 L1 Scaling & ZK 48:00 Permissionless Rollups 55:30 Future OptimismLinks
Joe Lubin on X: https://twitter.com/ethereumJoseph Consensys: https://consensys.io Linea: https://linea.build MetaMask: https://metamask.io Gnosis: https://gnosis.io/Sponsors: Gnosis: Gnosis has been building core decentralized infrastructure for the Ethereum ecosystem since 2015. With the launch of Gnosis Pay last year, we introduced the world's first Decentralized Payment Network. Start leveraging its power today at http://gnosis.io
The post RL in Real Life: Durable Moats appeared first on Greylock.
Jake, Alex and their team are giving IT teams the power to bring AI automation from their own department to every part of the organization.
By Anas Biad, Pat Grady, Charlie Curnin and Brian Halligan Published December 11, 2025 JAKE AND ALEX.IT is one of the most critical functions inside any company. Every employee depends on it, every system flows through it, and almost every operational challenge eventually becomes an IT problem. When IT slows down, everyone feels it: employees are blocked waiting on support, onboarding is painful, rolling out new tools gets bottlenecked, and entire company initiatives end up stalled.
Yet despite IT’s central role, the last decade of tooling has not truly empowered them. The powerful tools are heavy to set up and brittle to maintain; the easy-to-use tools focus on narrow use cases and deflection rather than true automation. IT teams are eager to automate and are more technical than business users. They have experimented with scripts, workflow builders, and more, but the limitations of all these tools still force them to spend the majority of their time on manual work, preventing them from focusing on higher-impact work and enabling the rest of their organizations.
Jake and Alex, the co-founders of Serval, experienced this firsthand at Verkada, where they led Product and Engineering and sold primarily to IT teams. They heard the same half-joking request countless times, and it wasn’t about Verkada’s products. They would ask customers, “What else can we build for you?” and IT leaders would reply, “Can you fix my helpdesk too?” These conversations led them to two key insights:
The first was that true automation only works if it is faster to automate something forever than to do it manually once. The second came from watching the few IT leaders who did take the time to build automations. They would first describe the workflow in one simple sentence, then reveal a sprawling tree of branches, nodes, conditions, and edge cases. Jake and Alex wondered: what if you could actually build the entire automation the same way you describe it? What if one simple sentence truly was enough?These insights were not possible to execute on in the past. Prior generations of AI were not strong enough to make it work, which meant IT teams continued working through manual tasks. That has now changed. With modern LLMs, code generation, and reasoning capabilities, it is finally possible to build a platform where “automate forever” is faster than “fix once.”
Serval is built precisely to deliver on this, and the team is not stopping there. They are using this moment not only to build a system of automation, but to rethink the entire ITSM system of record. As our own Sequoia IT leader, Leon, shared when they chose to become a Serval customer: “Serval is not AI for ITSM – it is ITSM built from AI.”
At the heart of the platform is an AI automation engine that lets teams go from a simple sentence to a deployment-ready automation. IT can describe workflows in natural language, then refine them further – either in natural language or all the way down to code when needed. This combination of being simple enough for anyone to use, yet deep enough for the most advanced IT builders, has resonated strongly with customers. The workflows are fully explainable and traceable, with all necessary permissions included, giving IT teams complete visibility and control and eliminating hallucination risk.
While their automation engine integrates with existing systems of records, Serval has also built a full-fledged ITSM platform, including a ticketing system, an access management product, an asset management solution, and more. Serval is both the system of engagement and the system of record. This gives IT a single place to build and orchestrate automations, enforce security and compliance, and capture the data that compounds their automation over time.
Customer feedback has been exceptional. We consistently heard how easy Serval is to use, yet how powerful, traceable, and trustworthy its automations are in practice. IT leaders described day-to-day operations being transformed – automation percentages rising every week, employee satisfaction increasing, and IT teams finally having the bandwidth to tackle projects that had been stuck on the back burner for years.
What was also exceptional, and, to be perfectly honest, not something we expected to hear at this stage:
Many customers have already fully ripped and replaced incumbent ITSM tools and now use Serval as their system of record. Automation quickly spread beyond IT into HR, Finance, Legal, Engineering, Security, and more. In some cases, companies have churned off dedicated software vendors for some departments because they rebuilt their workflows directly in Serval.At Sequoia, the last time we heard such customer feedback that supports the thesis of IT system of record empowering horizontal enterprise automation was 16 years ago, when we partnered with ServiceNow. That is why we were so eager to partner with Serval and preempt their Series B round.
The market thesis is one half of the story. The other half is the team. References for Jake and Alex – from managers, direct reports, and peers – were glowing. Even more importantly, those references aligned perfectly with the evidence we heard from customers and what we’ve observed in talent flows. Strong talent from R&D to GTM is choosing to join Serval, whether or not they have worked with the founders before. The talent density and “potential energy” forming around the company are impressive and represent one of the strongest leading indicators we look for.
Empowering IT teams is an elegant and scalable way to bring AI automation to the enterprise. We are excited to partner with Jake, Alex, and the team at Serval on this journey.
Share Share this on Facebook Share this on Twitter Share this on LinkedIn Share this via email Related Topics #AI #Funding announcement Partnering with Profound By Anas Biad, Brian Halligan and Alfred Lin News Read Partnering with Traversal By Bogomil Balkansky and Charlie Curnin News Read Partnering with Nominal By Anas Biad and Alfred Lin News Read Partnering with Listen Labs By Bryan Schreier and Charlie Curnin News Read JOIN OUR MAILING LIST Get the best stories from the Sequoia community. Email address Leave this field empty if you’re human:The post Partnering with Serval: Empowering IT for AI Enterprise Automation appeared first on Sequoia Capital.
Jonathan Swanson has built two rare successes: Thumbtack, the home-services marketplace, and Athena, the fast-growing platform that pairs ambitious people with world-class personal assistants. Today he runs a 4,000-person company, invests on the side, and raises four kids — all by designing his life around leverage.
a16z General Partner, Erik Torenberg, sits down with Jonathan to unpack what that actually looks like. They discuss how elite assistant culture shaped his philosophy, why delegation is a skill most founders never truly learn, and how the combination of humans and AI is redefining personal productivity. Jonathan explains why he believes ambition grows with leverage, not the other way around, and breaks down how he delegates everything from scheduling to search processes to entire life systems.
They also get into the future of work, the rise of machine-generated delegation, the expanding role of chiefs of staff, and how founders can design their time around the few things that matter most. It’s a conversation about work, life, and the systems that allow people to operate at scale.
Timecodes
0:00 – Introduction
1:52 – The power of delegation: from the White House to Thumbtack
03:13 – Human vs. AI assistants: the future of delegation
05:30 – Levels of delegation: from tasks to algorithms
07:31 – Principles of effective delegation
08:50 – Delegation & productivity hacks
10:46 – The future: machine-generated delegation
12:36 – Global talent & leveraging international teams
13:33 – Assistants and financial leverage
14:45 – Company culture across borders
16:18 – Assistants as accountability partners
17:52 – Coaching, feedback, and the human element
19:30 – Goal setting, time management, and prioritization
23:07 – Frameworks for founders: time, energy, and meetings
26:06 – The efficient path vs. the effect path
28:19 – Executive hiring: principles and pitfalls
30:19 – Reference check signals
33:09 – Principles for company transparency
36:55 – Cofounder relationships & company building
39:19 – Chief of staff vs. executive assistant
40:06 – Learning from high-performers: Lonsdale, Elon, Thiel, etc.
47:10 – Building your universe: org structures and talent networks
52:33 – Managing founder psychology & staying in the game
Resources:
Follow Jonathan on X: https://x.com/swaaanson
Follow our host: https://twitter.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures
.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The Indian government has backed off on a controversial mandatory requirement for smartphone manufacturers to pre-install its Sanchar Saathi “cyber safety” application on all new devices, following criticism over potential surveillance and privacy risks.
The Communications Ministry announced last week that it would no longer compel manufacturers like Apple, Samsung, and Xiaomi to load the government-backed application onto new phones.
The reversal came less than a week after the initial order, which mandated that the app be added to every new handset within 90 days, with the provision that it could not be deleted by the user. The original policy immediately sparked widespread concerns among digital rights groups that the installation would effectively give authorities access to hundreds of millions of personal devices.
The government, however, maintained that Sanchar Saathi was designed purely as a fraud prevention and device security tool, allowing users to verify device identifiers and report stolen phones. Telecommunications authorities insisted the app is “secure and purely meant to help citizens against bad actors in the cyber world,” adding that “there is no other function other than protecting users, and they can remove the app whenever they want.”
Explaining the swift policy change, the Ministry cited the “increasing acceptance” of the tool, noting that it had already been downloaded by 14 million users, including a reported 600,000 new registrations on a recent single day. Officials suggested the mandate was originally intended to “accelerate this process and make the app available to less aware citizens.”
The original order also prompted significant industry pushback. Reuters previously reported that Apple had planned to inform officials it could not comply with the requirement, as embedding third-party software would compromise the security architecture of its iOS operating system — a principle the company upholds in other international markets.
Digital rights organizations have cautiously welcomed the government’s decision. The Internet Freedom Foundation (IFF) called the move positive but emphasized that vigilance remains necessary. The organization stated, “For now, we should treat this as cautious optimism, not closure, until the formal legal direction is published and independently confirmed.”
PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
India Reverses Mandatory ‘Cyber Safety’ App Requirement Following Surveillance Backlash was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
Today, we’re announcing the availability of Brave’s new AI browsing feature in Brave Nightly (our testing and development build channel) for early experimentation and user feedback. When ready for general release, this agentic experience aims to turn the browser into a truly smart partner, automating tasks and helping people accomplish more.
However, agentic browsing is also inherently dangerous. Giving an AI control of your browsing experience could expose personal data, or allow agentic AI to take unintended actions. Security measures are tricky to get right and disastrous when they fail, as we have shown through numerous vulnerabilities we found and responsibly disclosed over the last few months. Indirect prompt injections are a systemic challenge facing the entire category of AI-powered browsers.
For this reason, we’ve chosen a careful approach to releasing AI browsing in the Brave browser and soliciting input from security researchers. We are offering AI browsing behind an opt-in feature flag in the Nightly version of Brave, which is the browser build we use for testing and development. We’ll continue to build upon our defenses over time with feedback from the public. At present, these safeguards include:
AI browsing is currently available only in the Nightly channel behind an opt-in feature flag, via Brave’s integrated AI assistant Leo. AI browsing happens only in an isolated browsing profile, keeping your regular browsing data safe. AI browsing has restrictions and controls built into the browser. AI browsing uses reasoning-based defenses as an additional guardrail against malicious websites. The AI browsing experience has to be manually invoked, ensuring that users retain complete control over their browsing experience. Like all AI features in Brave, AI browsing is completely optional, off by default.While these mitigations help significantly, users (even early testers) should know that such safeguards do not eliminate risks such as prompt injection. As an open-source browser, we welcome bug reports and feature requests on GitHub. We also encourage anyone who discovers a security issue in AI browsing to report that issue to our bug bounty program. In this early release phase, valid and in-scope security issues in AI browsing will receive double our usual reward amounts (see our HackerOne page for more details).
Despite its risks, AI browsing shows great promise. As we outlined in our 2025 browser AI roadmap update, we’re confident that a smart and personalized collaborator that adapts to your needs can ultimately transform the way you browse the Web. For instance, it could research topics by visiting and analyzing multiple websites, compare products across different shopping sites, check for valid promo codes before purchases and summarize news the way you like it. The security and privacy challenges are novel and significant, but given the potential for AI browsing to become a widely-adopted browser feature, we first need to get feedback from early testers, and iterate towards a solution that is safe for all users (See “How to test AI browsing” section below).
Preventing the AI agent from taking unwanted actionsAt root, the security and privacy risks of agentic browsing have to do with alignment: you want to prevent the AI from taking unintended actions. This is a hard security and privacy problem. Given how open-ended inputs can be for AI browser agents, and given the browser’s level of access to the Web, a given prompt could apply to basically any request on almost any website.
Adding to this challenge, reasoning models are probabilistic: the same request of the AI model could produce different results at different times, so the output space of possible AI browser agent actions is hard to limit.
It would be easy if security engineers could simply tell an LLM (large language model) never to do “bad things.” Unfortunately, given the browser’s level of access and the reasoning capabilities of today’s models, it would be naive to assume that this strategy would work in isolation. It’s still relatively easy to subvert models into performing risky actions, and we want to avoid a situation where we’re constantly chasing new security vulnerabilities (“whack-a-mole”).
Additionally, while we’re concerned about indirect prompt injections on websites (where an attacker embeds malicious instructions in Web content through various methods), we’re also aware that the security threat model with agentic browsing doesn’t always need an attacker: in some cases, the AI could simply misinterpret user commands. To put it simply, the two risks we want to protect Brave users from are:
Malicious actors who want to do prompt injection on a website The model getting confused and taking an action that’s harmful to the userWe believe any agentic browsing experience should have robust protections against these two threats.
Defenses against security threatsGiven the potential for harm, the protections outlined below are not an exhaustive list, but what we consider minimally necessary before rolling out an agentic experience even for early user testing.
Isolated storage for AI browsingMany Brave users will likely be logged into sensitive websites (e.g., their banking website or email account) in their main Brave browser profile. With AI browsing, we need to prevent possible attackers from gaining access to those logged-in services. Brave’s AI browsing therefore keeps its storage separate from your regular profile: cookies, logged-in state, caches, and other site data do not cross profiles. This limits harm if defenses fail and a model is manipulated into a dangerous action.
Given the inherent risk of agentic browsing, the user must manually invoke AI browsing. When you enable AI browsing, Brave creates a brand-new browser profile. This new profile isolates all data available to the AI agent.
For now, we believe that this approach of completely isolating your browsing data is the safest approach to AI browsing.
Model-based protectionsWe also use a second model to check the work of the AI agent’s model (the task model). This “alignment checker” serves as a guardrail: it receives the system prompt, the user prompt, and the task model’s response, and then checks if the task model’s instructions match the user’s intention. This checker does not directly receive raw website content—by firewalling it from untrusted website input, we can reduce (but not eliminate) the risk of subversion by page-level prompt injection. We also provide security-aware system instructions: a structured prompt authored by us that encodes policy-based rules we will refine over time. In addition, we use models trained to mitigate prompt injections, such as Claude Sonnet.
It’s worth noting that guardrails are not proof of safety—they can help against, but not eliminate, risk. LLMs are non-deterministic and fallible, and the output of the task model can be subverted by untrusted page content to specifically attack the alignment checker model.
Browser controls and UXA core goal of privacy engineering is to reduce user surprise. To this end, AI browsing in Brave must be deliberately invoked by the user. While the regular AI-assistant Leo can now suggest browsing actions based on the user’s prompt, it can never on its own initiate AI browsing without consent. And, similar to how Private Windows and Private Windows with Tor in Brave browser are styled differently, the AI browsing profile is styled differently from Leo, and has distinct action cues. This helps make it obvious to users that they’re in AI browsing mode.
Users of AI browsing in Brave can inspect and pause sessions, and the AI cannot by itself delete session logs. All browsing on your behalf happens in an open tab, rather than being hidden in a sidebar. And, as always, the user can delete all data from the agentic session at any time.
Safeguards include:
AI browsing does not have access to internal pages (such asbrave://settings), non-HTTPS pages, extension pages on the Web Store, or websites flagged by Safe Browsing.
Actions detected as misaligned by our reasoning-model-based protections (as explained above) will trigger a warning for the user and require explicit permission.
For both agentic and in-browser assistant use cases, users clearly see any memory proposed for saving, which they can then undo (this prevents “saved” prompt injection attacks).
Unparalleled privacy
As a privacy-first company, we enforce our strict no-logs, no-retention data privacy policy, maintaining Brave’s commitment to protecting your data.
This is worth emphasizing. AI browsing in Brave never trains on your data, unlike other agentic browsers.
As always, even while in AI browsing, you get all of the Brave browser’s best-in-class privacy protections including blocking of invasive ads and trackers.
A note on permission promptingWe are not using a per-site permission prompting approach (example: “allow agentic actions on example.com?”) for AI browsing. Our browser development experience shows that repeated, low-signal security prompts with incomplete contexts train users to ignore warnings, which leads to diminished protection. We want to be careful when asking users to make a security-critical decision and will reserve this for specific actions that are detected as potentially risky by our model-based protections. We may reevaluate our per-site permissioning approach later, pending feedback from users and researchers, and insights into how users are using AI in the browser.
AI browsing never has access to internal pages (such as brave://settings or brave://settings/privacy), non-HTTPS pages, Chrome Web Store, or websites flagged by Safe Browsing.
AI browsing is powerful, and we’re excited to see it live for user testing in Nightly. At the same time, we want to protect the user and uphold Brave’s core promise of privacy and security. This is a work in progress; AI browsing is not an experience that should be rushed out the door at the expense of users’ privacy and security. We expect to learn much from user feedback, and to make improvements and contributions that will also benefit the entire agentic browser space. We’re building the agentic experience with transparency, restraint, and respect for user intent. Ultimately, AI is just one tool toward Brave’s original mission: to empower and protect people online. That’s the line we’re drawing, and we’re looking forward to feedback from users and researchers to help us walk that line well.
How to test AI browsingFor those who wish to test it, AI browsing is available in Brave Nightly via the “Brave’s AI browsing” flag in brave://flags. A feature flag is essentially a switch in a secondary settings page where advanced users can enable or disable experimental features. Testers can enable AI browsing within Leo, the Brave browser’s integrated AI assistant, via the button in the message input box. Like all AI features in Brave, AI browsing is completely optional, and Leo can be disabled by users.
More details about AI browsing can be found here. We welcome tester feedback and requests here, and bug reports here.
Why Netflix’s $82B Acquisition Makes Sense in the Era of AI
By Konstantine Buhler Published December 9, 2025Netflix recently entered into a definitive agreement to buy Warner Brothers for over $82B in Enterprise Value. It’s being called the biggest, most consequential deal in Hollywood history. The purchase comes with intellectual property including Batman, Superman, Harry Potter, Game of Thrones, The Big Bang Theory, The Sopranos, The Matrix, Lord of the Rings (film rights), Godzilla/MonsterVerse, Mad Max, and Mortal Kombat.
IP like this became more valuable in the internet age, and will become even more valuable in the age of AI. Earlier this year, I published an internal Sequoia memo detailing the implications of AI on the IP industry. Given the big news, I thought it would be nice to share a few excerpts with my friends and colleagues more broadly.
If We Have AGI…
On a recent work trip to Paris, a profound irony struck me at the Musée d’Orsay. The museum, a glorious former train station, is a temple to the Industrial Revolution. Its collection, spanning 1848 to 1914, chronicles the period of history’s most rapid industrialization.
Edgar Degas: Course de Gentleman (Gentlemen’s Race)The art is filled with tributes to this new age: Monet paints locomotives at La Gare Saint-Lazare, smokestacks fill the background of Degas’ Course de Gentlemen, and the Traders in factory-made clothing. It is a museum drenched in the dawn of industrialization.
Claude Monet: La Gare Saint-LazareThis is where the paradox lies. This was the exact moment in history when the cost to reproduce a piece of art dropped precipitously. What was once near-impossible, replicating a masterpiece, became trivially cheap. Logically, as near-perfect copies became ubiquitous, the value of the “original” should have plummeted. And yet, the opposite happened. The great lesson of the d’Orsay is this: instead of becoming worthless in the age of manufacturing, these original works became priceless. As reproduction became easy, the value and allure of the original only went up.
Edgar Degas: At the Stock ExchangeHere’s why IP is well positioned in an AI future:
If we have AGI, or even inexpensive content creation, that will drive down the cost of intelligence. That affordable intelligence will be used to create an abundance of content. Humans will react to abundance with a desire for familiarity and quality. The value of the “original” will only increase. In this era, “original” works will increase in value quickly. These “originals” include art, real estate, and intellectual property. Intellectual property that has already earned a spot in our minds will benefit disproportionately. In the AI Era, nostalgic content will benefit disproportionately as it is manifest in countless different ways.As content production costs approach zero, the marginal cost of new content will also become negligible, leading to an explosion in content volume. This content flood will profoundly shift the value of attention, the primary currency in this new landscape. Undifferentiated content will rapidly lose value, and consumers will be far less willing to invest time in new content unless it’s algorithmically recommended by dominant platforms. Very few new content concepts will break out, following an extreme Power Law.
Success in this environment will hinge on “attention distribution,” defined as a consumer’s inherent interest in a specific piece of content. Nostalgic content is poised to become the most valuable. In a world saturated with novelty, authentic nostalgic content will be incredibly difficult to simulate accurately; even slight deviations from the original will feel inauthentic and cheap.
Consumers will be willing to pay for nostalgic content due to its relatively low cost. For parents, the decision to purchase authentic nostalgic content for their children over a knockoff, especially when the real item is priced reasonably, becomes a clear choice.
The Impending Content Deluge and the Scarcity of Attention
The ubiquity of General AI will drastically reduce the marginal cost of content production, approaching zero. This will unleash an unprecedented explosion in content volume, creating a hyper-saturated information environment. In this landscape, human attention, rather than content itself, will become the primary scarce resource and the dominant currency. Undifferentiated content, easily replicable and ubiquitous, will rapidly depreciate in value. Consumers, overwhelmed by choice, will increasingly rely on algorithmic recommendations from dominant content platforms to filter and curate their experiences, leading to a profound shift in content discovery and consumption patterns.
Attention Distribution: The New Competitive Moat
Success in this AI-driven content economy will hinge on “attention distribution,” defined as a consumer’s inherent and deeply ingrained interest in a specific piece of content. This is distinct from mere algorithmic visibility, as it speaks to an intrinsic desire to engage with content. While AI can generate novel content at scale, it will struggle to replicate the nuanced emotional resonance and deeply embedded cultural references that drive genuine attention.
The Inherent Value of Authentic Nostalgic Content
Nostalgic content is uniquely positioned to become the most valuable asset in this future. The core arguments for its enduring value are:
Difficulty of Authentic Simulation: While AGI can mimic styles and tropes, accurately simulating authentic nostalgic content will be incredibly challenging. The most popular feature in Sora 2 is their “Cameo” feature, which pulls in authentic characters and people into the AI generated worlds. Even minute deviations from the original, like a slight alteration in a beloved character’s voice, a subtle anachronism in a period piece, or a deviation in the execution of a well-known narrative, will register as “inauthentic” or “cheap” to a discerning audience. The emotional connection to nostalgic content is often rooted in precise, well-remembered details, making it highly resistant to imperfect algorithmic replication. This is akin to the “uncanny valley” in robotics, where near-human but imperfect simulations evoke discomfort rather than acceptance. Consumer Willingness to Pay: Despite the approaching zero marginal cost of content, consumers will demonstrate a significant willingness to pay for authentic nostalgic content.Conclusion
As AI democratizes content creation, the landscape will shift dramatically. The true scarcity will be human attention, and the ultimate value will reside in content that can capture and hold that attention through genuine emotional resonance. Authentic nostalgic content, by virtue of its irreplicable emotional depth and its proven ability to connect across generations, represents a highly defensible and appreciating asset in this evolving digital economy. Investors should prioritize companies that control and strategically leverage these invaluable cultural touchstones.
Share Share this on Facebook Share this on Twitter Share this on LinkedIn Share this via email JOIN OUR MAILING LIST Get the best stories from the Sequoia community. Email address Leave this field empty if you’re human:The post The Abundance Paradox: Why Netflix Paid $82B for Scarcity appeared first on Sequoia Capital.
We are excited to announce that Sequoia is leading the Series D in fal.
By Sonya Huang and James Flynn Published December 9, 2025 fal Co-Founders Burkay Gur and Gorkem Yurtseven.Humans are visual creatures. Images and video are the most immersive forms of content. It’s no accident that more than 80% of internet traffic is video, that social platforms are becoming image- and video-first, and that video games and movies are the largest categories of consumer spend.
Generative media will be even bigger. At Sequoia’s inaugural AI Ascent event in 2023, Jensen Huang made the provocative prediction that “Every pixel will be generated, not rendered.” Today, that dream appears closer than ever: frontier video and image models have crossed the uncanny valley, and the first compelling use cases of generative media are starting to emerge across advertising, cinema, storytelling and more.
Projects that once demanded years of work and $100M budgets can now be explored much more quickly and affordably, opening the door to new creative possibilities. We’re seeing generative media begin to transform familiar media use cases: digital ads, viral TikToks, short films, and micro dramas. There will also be wholly new experiences created that we can’t even begin to imagine, from education to personalized media to generated games. The doors to building with creative AI are wide open to anybody with a computer.
fal has built the leading platform for enterprises and developers to build with generative media models. Video models are compute-intensive and finicky to work with, and creating wonderful outputs requires excellence on multiple levels. fal offers AI creatives exactly what they want for this exciting but strange new paradigm, including 400+ models available on-demand across open- and closed-weight (including models like OpenAI Sora and DeepMind Veo), Day 0 support for new model releases, incredibly fast inference speeds, an ergonomic developer API, an advanced playground UI, and enterprise features around model fine-tunes, styles and collaboration.
fal’s customers are the tastemakers of generative media, with use cases ranging from e-commerce (Shopify) to creative suites (Adobe and Canva) to AI-native platforms (Perplexity) to millions of individual developers. While the momentum behind the business has been staggering, we are even more excited by the quality and caliber of teams currently experimenting on fal, creating immersive new education experiences, virtual pets, indie animation studios, and more. If even a small subset of these explorations make it to production, the world will be a dramatically more colorful, entertaining place.
We are delighted to partner with co-founders Burkay Gur (Coinbase ML) and Gorkem Yurtseven (Amazon) and Head of Engineering Batuhan Taskaya (the youngest-ever Python core developer and maintainer). The team is spiky up and down the platform stack, from having one of the best kernel and compiler inference teams in the world, to nurturing model provider relationships with finesse, to grassroots devrel. Their early bet on generative media shows up across their relationships with model providers, infrastructure performance and ability to dream with creators.
We are at the beginning of a compute explosion in generative video. As the generative media wave accelerates, fal is the inference platform powering the future of AI-first creativity. The team is growing fast to keep up with that demand, and we at Sequoia are proud to lead their Series D.
Share Share this on Facebook Share this on Twitter Share this on LinkedIn Share this via email Related Topics #AI #Funding announcement Partnering with Mercury By Sonya Huang and Isaiah Boone News Read Partnering with Zed By Sonya Huang and James Flynn News Read Partnering with Aspora: Diaspora Banking Goes Global By Luciana Lixandru, George Robson and James Flynn News Read JOIN OUR MAILING LIST Get the best stories from the Sequoia community. Email address Leave this field empty if you’re human:The post Partnering with fal: The Generative Media Company appeared first on Sequoia Capital.
Originally published on the a16z Infra podcast. We're resurfacing it here for our main feed audience.
AI coding is already actively changing how software gets built.
a16z Infra Partners Yoko Li and Guido Appenzeller break down how "agents with environments" are changing the dev loop; why repos and PRs may need new abstractions; and where ROI is showing up first. We also cover token economics for engineering teams, the emerging agent toolbox, and founder opportunities when you treat agents as users, not just tools.
Resources:
Follow Yoko on X: https://x.com/stuffyokodraws
Follow Guido on X: https://x.com/appenz
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Naveen Rao is cofounder and CEO of Unconventional AI, an AI chip startup building analog computing systems designed specifically for intelligence. Previously, Naveen led AI at Databricks and founded two successful companies: Mosaic (cloud computing) and Nervana (AI accelerators, acquired by Intel).
In this episode, a16z’s Matt Bornstein sits down with Naveen at NeurIPS to discuss why 80 years of digital computing may be the wrong substrate for AI, how the brain runs on 20 watts while data centers consume 4% of the US energy grid, the physics of causality and what it might mean for AGI, and why now is the moment to take this unconventional bet.
Stay Updated:
If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.
Follow Naveen on X: https://x.com/NaveenGRao
Follow Matt on X: https://x.com/BornsteinMatt
Follow a16z on X: https://twitter.com/a16z
Follow a16z on LinkedIn:https://www.linkedin.com/company/a16z
Follow the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Follow the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
I’d probably be a millionaire if I received a dollar every time someone asked about prices in a crypto group. Let’s be honest, this space is filled with speculators, and most people are only interested in “10xing” their $50 investment.
As someone who has been in the crypto space for over a decade, I can confidently say that solid fundamentals, utility, and a thriving community will always outlive hype. Now, this is not in anyway downplaying the importance of price increase; we all want a lambo. That said, walk with me as I explore how blockchains like PIVX empower users and not speculators.
Privacy and Financial AutonomyThe first step in empowering a user is giving them back control over their financial data. PIVX champions the right to financial confidentiality using a bespoke implementation of zk-SNARKs, known as SHIELD.
Unlike some blockchains that enforce full transparency, PIVX gives users the freedom to choose between transparent or fully shielded (private) transactions. The SHIELD protocol ensures that all PIV coins are fungible, preventing transaction history from being traced. This breaks the link between sender, receiver, and amount, offering a level of privacy that makes the currency suitable for true real-world utility.
In my opinion, financial empowerment also requires speed, and transactions on the PIVX blockchain are settled almost instantly.
Decentralized Governance ModelThe most powerful feature that shifts the role from spectator to participant is PIVX’s unique governance and economic structure.
On one hand, the privacy-centric project operates on a PoS consensus mechanism, making it highly energy-efficient. Any user can stake their PIV in their wallet to secure the network and earn rewards, converting the simple act of holding the currency into a contribution toward network security and a source of passive income. And on the other hand, masternode operators vote on governance proposals.
These operators are the true power brokers and administrators of the DAO (Decentralized Autonomous Organization). Masternode operators hold significant power in the governance structure, as each Masternode has one vote on all development and budget proposals, directly controlling the project’s trajectory.
The PIVX treasury system is another evidence of its user-centric design and self-funding capacity. A portion of every block reward is automatically allocated to a decentralized community treasury. This pool of funds is available exclusively for community-approved projects. The power rests with the community, as anyone in the PIVX ecosystem can submit a proposal and request funding from the treasury.
Masternode operators then vote on these proposals. A proposal that receives sufficient net support is funded, ensuring that the development roadmap is not dictated by a closed team of founders but by the collective will of the network’s committed stakeholders.
The Spectator vs. The Participant ModelThe core philosophy of PIVX directly contrasts with the “spectator model” prevalent in many crypto projects. In PIVX, the user is not relegated to a passive holder waiting for the core team’s updates and price movements. Instead, the PIVX model elevates the user to an active contributor.
Users gain optional privacy and control their financial data via SHIELD. Their governance power is not concentrated but distributed to masternodes, where 1 masternode equals 1 vote on all development and treasury matters. Moreover, development funding is not reliant on pre-mines or centralized funds; it’s managed by a self-funded treasury that is voted on by the community, empowering the users to become the strategic financial planners of the project.
PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
How PIVX Empowers the User, Not the Spectator was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
Your weekly PIVX Pulse is here! Get up to speed on price, trading, and community news.
Market Pulse Private Message with Vector: Over on the Vector side of things, the team just hit a massive milestone with the launch of Android support. The platform’s performance and security have been turbo-charged by migrating to an encrypted SQLite database for lightning-fast persistence. Users can look forward to an overhauled UX featuring improved speed, responsiveness, and key bug fixes, ensuring the smoothest and most secure experience yet. New Listings: PIVX is now available on Swapter.io, adding a new option for secure exchanges. Users can quickly trade PIVX with over 2200+ cryptocurrencies offered by the platform. Masternode Count: The PIVX network recorded an uptick in masternodes for the second consecutive week. The total number of active masternodes now stands at 2,071, marking an increase of approximately 2.83% (up from 2,014) from the previous week. Price Check: Despite stronger signs of a sentiment reversal emerging in the broader cryptocurrency market, with indicators moving away from “extreme fear”, PIVX prices saw a further decline this week. The Daily USD Price was between $0.17 and $0.19, translating to a weekly average of $0.1733. This is down from last week’s average of $0.21. Trading Buzz: Trading activity for PIVX saw a significant dip this week, extending a multi-week decline. The total weekly volume fell by approximately 30.95% (down from $42 million) and currently stands at $29 million. Despite the overall trend, the daily trading volume is holding strong, regularly sitting above the $2 million benchmark.PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
PIVX Weekly Pulse (Nov 28th, 2025 — Dec. 4th, 2025) was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
Fei-Fei Li is a Stanford professor, co-director of Stanford Institute for Human-Centered Artificial Intelligence, and co-founder of World Labs. She created ImageNet, the dataset that sparked the deep learning revolution.
Justin Johnson is her former PhD student, ex-professor at Michigan, ex-Meta researcher, and now co-founder of World Labs.
Together, they just launched Marble—the first model that generates explorable 3D worlds from text or images.
In this episode Fei-Fei and Justin explore why spatial intelligence is fundamentally different from language, what's missing from current world models (hint: physics), and the architectural insight that transformers are actually set models, not sequence models.
Resources:
Follow Fei-Fei on X: https://x.com/drfeifei
Follow Justin on X: https://x.com/jcjohnss
Follow Shawn on X: https://x.com/swyx
Follow Alessio on X: https://x.com/fanahova
Stay Updated:
If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.
Follow a16z on X: https://x.com/a16z
Follow a16z on LinkedIn:https://www.linkedin.com/company/a16z
Follow the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Follow the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Japanese beverage giant Asahi has confirmed that a ransomware attack on its systems in late September may have exposed the personal information of approximately 1.5 million people.
The company’s two-month investigation concluded that the exposed information included names, gender, addresses, and phone numbers. Data belonging to thousands of employees, their family members, and external contacts were also hit in the breach. Thankfully, no credit-card details were compromised, and there is currently no evidence that the stolen data has been published online.
While Asahi did not officially identify the attacker, the Russian-speaking Qilin ransomware gang claimed responsibility in October. Qilin alleged they stole financial data, employee records, and internal forecasts.
The attackers didn’t need to hack 1.5 million separate accounts; they only needed to compromise one access point to unlock a massive treasure trove of data. Their strategy was straightforward: infiltrate one point on the network, encrypt key servers, and cripple the company until a demand is met. However, Asahi’s CEO said that the company did not pay a ransom.
Following the attack, Asahi was forced to implement production shutdowns, halt shipping, and delay product launches for its beverages, including the flagship Super Dry brand. The scope of the damage was so bad that the company had to delay the release of its annual financial results by 50 days. To put things in perspective, Asahi is now working to restore operations and expects to normalize logistics operations by February.
The attack reinforces a fundamental rule of digital security that “centralization equals vulnerability.” Every piece of data a company collects is a liability. When a corporation chooses to aggregate and manage vast quantities of personal information under one roof, the failure of a single security layer can have a massive, cascading impact on public privacy.
PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
Japan’s Favorite Lager Now Serving Data Leaks was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post GenAI Present and Future: A Conversation with Hasmukh Ranjan, CIO & SVP of AMD appeared first on Greylock.
Panther Protocol Foundation has executed a grant to Panther DAO, following DAO approval. Panther DAO Council has received a 26,000 USDC grant to sustain the DAO’s onchain governance, ecosystem awareness, educational outreach, and general ecosystem support. Through this grant, the DAO Council has the resources to support community coordination, strengthen governance operations, and expand participation across the Panther ecosystem.
The current scope of the DAO CouncilRight now, Panther DAO collaborates with independent software development companies like Modulo and Zpoken on open-source development, research, technical proposals, and in preparing technical DAO improvement proposals (PIPs). Recently, contributors have reported progress on Panther’s V1 KYC implementation and are now preparing for the rewards distribution proposal associated with upcoming community governance decisions. The grant that Panther DAO received strengthens the DAO’s organisational capacity as the community prepares upcoming governance proposals relating to deployments and ecosystem enhancements. More information can be found on the recently voted-in PIPs.
What’s NextThe Panther ecosystem is preparing for a series of upcoming DAO proposals related to Panther Protocol’s Mainnet activities. The tech updates of Panther’s development progress can be tracked here on GitLab, and the DAO progress can be followed on the Panther forum. The DAO council aims to publish expenditure summaries on the Panther forum for transparency. Panther Protocol Foundation remains independent, neutral, and non-operational, and the Panther ecosystem continues progressing under a decentralised governance model.
About Panther Protocol FoundationPanther Protocol Foundation is a non-profit organization dedicated to supporting the growth, sustainability, and responsible use of Panther Protocol. While it does not operate the protocol or facilitate digital asset services, the Foundation plays a critical role in promoting adoption, supporting open-source development, advancing research, and raising awareness around the protocol’s core privacy-preserving technologies.
By empowering users, developers, and permissioned actors within DeFi and web3, the Foundation contributes to building a more secure and confidential digital future.
For more information, visit www.panther.org.
To learn more about Panther Protocol, visit www.pantherprotocol.io.
Contact
Panther Protocol Foundation
📧 Email: general@panther.org
🌐 Website: www.panther.org
Recently, a16z General Partner Anish Acharya joined Ollie Forsyth on NEW ECONOMIES. They talked about why consumer tech is surging again, how AI is enabling 100M-user products at unprecedented speed, and what founders need to understand heading into 2026 — from distribution shifts to founder mindset to the mechanics behind the fastest product cycle in tech history.
Resources:
Follow Ollie: https://x.com/ollieforsyth
Follow Anish: https://x.com/illscience
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
2026 will be the “Year of Delays” for data centers and AGI; it will also see accelerating AI adoption by end-users.
By David Cahn Published December 3, 2025December ushers in a period of reflection in the investment world, as investors take stock of the previous year and begin to position themselves for the year to come. This is more true than ever right now, as we seem to be in a liminal period; animal spirits have lulled, but AI companies continue to put up strong results.
My prediction for 2026 is that it will be a tale of two AIs. On the one hand, it will be a year of delays, first in data center buildouts, many of which will fall behind schedule, and second, in the AGI timeline. At the same time, AI adoption will continue its relentless rise. In 2025, startups coined the idea of a “$0 to $100M” club of rapidly scaling AI companies; in 2026, we’ll begin to talk about the “$0 to $1B” club.
Entering 2026, here are the facts as I see them:
Demand for AI CapEx from the Big Tech companies is stronger than ever Google and Meta are fully betting the farm on AI While Microsoft and Amazon pulled back slightly in 2025 relative to peers, both continue to aggressively position themselves for the AI future Supply chain players seem weary: The customer’s customer is not as healthy as they’d wish. They are worried about being left holding the bag The end revenue from AI remains limited (on the order of tens of billions per year) relative to the scale of data center and energy investments (on the order of trillions over the coming five years) There are two killer apps in AI, coding and ChatGPT. Both are expected to approach or cross double digit billions of revenue this year. Nearly a dozen more startups are on the path to cross $100M+ in the near future, across a wide variety of applications Big enterprises are struggling to implement AI in-house, which is leading to fatigue and disappointmentTale 1: The Year of Delays
These countervailing forces will collide in 2026: soaring Big Tech demand will run headfirst into a supply chain that hasn’t scaled fast enough to match it.
First, companies like TSMC and ASML have monopolistic positions and cannot be forced to ramp capacity. Ben Thompson has called this the “TSMC Brake,” pointing out in October that while TSMC had ramped revenues by 50% since 2022, they had only ramped CapEx by 10%. He explained further: “There weren’t too many answers from TSMC about this, which is understandable, given that they won’t announce next year’s CapEx numbers until next quarter. What Wei did say is that TSMC was making a point to not just talk to its customers but its customers’ customers.” My prediction, especially coming off of the successful Gemini 3 launch and hype around TPUs, is that the TSMC constraint could become material in 2026.
Second, industrial players, which tend to be overlooked due to their fragmentation and lack of market power, may end up creating bottlenecks as data centers move into the final stages of construction. Generators and cooling units are among the most important industrial inputs to data centers, but there are dozens of such inputs; if any of these inputs are delayed, timelines would need to be pushed out. There are also labor constraints that must be factored in, as shortages in skilled labor could become a key bottleneck for completing these immense construction projects. Many AI companies share a supply base, and these industrial suppliers are faced with their own CapEx decisions (how many new factories to build). We’ll find out in 2026 to what extent they’ve sufficiently added to their own output capacity.
The average AI data center takes roughly two years to build. So if 2024 was the year of new project announcements, and 2025 was the year when construction investments started to hit GDP, then 2026 will either be the year where a lot of this new capacity comes online (leading to further declines in the cost of compute) or it will be the year when many of these construction projects begin to face delays. We already have seen a few of these delays publicly reported in Q4 2025. If hyperscalers begin to warehouse their new AI chips rather than installing them directly into data centers, this will be a telltale sign that the era of delays has begun.
The other way in which 2026 will be the “Year of Delays” has to do with the AGI timeline. For a long time, Silicon Valley luminaries were forecasting the imminent emergence of AGI, with “AGI in 2027” thrown around frequently in conversation. Since June of this year, there has been a progressive walk-back of this timeline. Dwarkesh Patel’s recent podcast interviews with Richard Sutton, Andrej Karpathy, and Ilya Sutskever are a demarcating line; the new consensus is that the AGI window will be in the 2030s, at earliest. In the coming year, I expect this “update” to filter outside of Silicon Valley. There are implications across many areas. The most notable risk is that hyperscaler CapEx today ends up being outdated.
Tale 2: The Relentless Drive Toward AI Adoption
The area where I do not expect to see any delays is in AI adoption itself. The fading of hype will have little impact on fundamentals. If anything, the best startups are growing faster than ever from $0 to $100M in revenue. In 2026, we’re going to see the emergence of a $0 to $1B club. The trend of the last three years—and likely for many more—is that startups are laying the foundation for the future economy, one building block at a time. There are many excellent entrepreneurs exploring new niches, and a lot of latent value has yet to be unlocked.
The best AI startups are moving with extreme efficiency—many are earning north of $1M in revenue per employee. This implies market pull vs. a push sale. Today’s entrepreneurs are building “self-improving” companies—they are themselves using AI agents for functions like legal, recruiting, and sales—creating an ecosystem flywheel effect. AI app companies are also riding a compute cost curve that should drive incremental margin improvement, especially as new data centers come online between now and 2030. Finally, with enterprises facing adoption fatigue on DIY implementations, startups are gaining even more momentum.
For some, AI adoption is happening too slowly. Those expecting a rapid AI takeoff would prefer to see a deus ex machina moment carry us straight to the finish line. I think that dream is likely to disappoint. Instead, the next leg of the AI story will require hard work, creative brilliance, and endurance to reach a new threshold where AI radically transforms the economy. We need only to look at the green shoots—founder motivation, aggressiveness, hunger to win, customer obsession—to see that this future is coming.
Share Share this on Facebook Share this on Twitter Share this on LinkedIn Share this via email JOIN OUR MAILING LIST Get the best stories from the Sequoia community.The post AI in 2026: The Tale of Two AIs appeared first on Sequoia Capital.
Quantum computing is often dismissed as a distant sci-fi future, but Ethereum OG John Lilic and Oxford physicist Stefano Gogioso argue the timeline is shrinking fast with roadmaps converging around 2030. In this episode, they break down the "woeful" state of quantum readiness in crypto, explaining how Shor's algorithm could eventually shatter the elliptic curve cryptography protecting Bitcoin and Ethereum.
They also explore the terrifying concept of "harvest now, decrypt later," which implies that encrypted data and privacy coins like Monero may essentially be compromised already. Finally, they introduce "Quantum Money," a revolutionary form of digital cash developed by Stefano’s startup NeverLocal, which relies on the laws of physics rather than blockchain consensus to prevent double-spending.
Topics
00:00 Intro
03:00 John’s Quantum Awakening
08:00 Defining Quantum Computing
13:30 Logical Qubits Explained
18:15 Crypto’s "Woeful" Readiness
23:30 "Harvest Now" Threat
28:45 Monero’s Privacy Risk
33:15 What is Quantum Money?
40:00 Investment & Hedging
Links
John Lilic on X: https://x.com/LilicJohn
Stefano Gogioso on X: https://x.com/StefanoGogioso
NeverLocal: https://neverlocal.com
Quantum.info: https://quantum.info
Gnosis: https://gnosis.io/
Sponsors:
Gnosis: Gnosis has been building core decentralized infrastructure for the Ethereum ecosystem since 2015. With the launch of Gnosis Pay last year, we introduced the world's first Decentralized Payment Network. Start leveraging its power today at http://gnosis.io
The fundamental trade-off between national security and personal digital privacy has come into sharp focus once again, as Russia implements what it calls “restrictive measures” against WhatsApp.
The move, which caused disruptions for users in cities like Moscow and St. Petersburg, marks a major escalation in Russia’s ongoing effort to curb the use of Western technology and consolidate control over digital communications within its borders.
According to Russia’s state communications watchdog, Roskomnadzor, the restrictions were imposed due to WhatsApp’s alleged repeated violation of Russian law. The agency’s official justification is that the popular messenger is being used “to organise and carry out terrorist activities, to recruit perpetrators, as well as for fraud and other crimes against Russian citizens.” The service now faces a potential nationwide block if it does not comply with domestic regulatory demands.
The restrictions appear to have been partially accelerated by an alleged leak of high-level diplomatic calls. The lawmaker argued this incident proved that the platform’s owners not only ignore illegal activity but may “actively participate” in it.
While governments frequently cite counter-terrorism and national security as reasons to demand backdoors or access to encrypted communications, the actions in Russia are viewed by many as a clear push for greater government surveillance.
The pressure on WhatsApp is part of a broader, years-long campaign by Moscow to tighten control over digital infrastructure. Most major Western platforms, including Facebook, Instagram, and Discord, are already inaccessible in Russia without a VPN, forcing citizens into a digital silo.
In 2025, when Russia barred voice calls on WhatsApp, the company suggested that the move was an effort to push Russians toward “less secure services to enable government surveillance.”
The current crackdown reinforces this suspicion. Roskomnadzor is actively urging citizens to switch to domestic alternatives, specifically touting a new state-backed messenger called Max, which is reportedly modeled on China’s highly integrated and government-monitored platform, WeChat.
When a government restricts a service like WhatsApp, which uses end-to-end encryption to secure user communications, the primary effect is to limit the ability of citizens to communicate freely and privately, whether they are political activists, journalists, or ordinary people. The price of convenience may well be the loss of core digital privacy rights, but the drive for truly decentralized, private alternatives such as Vector is accelerating.
PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
Russia’s WhatsApp Crackdown: Security or Surveillance? was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
a16z General Partners David Haber, Alex Rampell, and Erik Torenberg discuss why 19 out of 20 AI startups building the same thing will die - and why the survivor might charge $20,000 for what used to cost $20.
They expose the "janitorial services paradox" (why the most boring software is most defensible), explain why OpenAI won't compete with your orthodontic clinic software despite having 800 million weekly users, and reveal how non-lawyers are building the most successful legal AI companies. Plus: the brutal truth about why momentum isn't a moat, but without it, you're already dead.
Resources:
Follow David on X: https://x.com/dhaber
Follow Alex on X: https://x.com/arampell
Follow Erik on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Today we’re announcing a powerful new feature for Brave Leo: Skills. Leo Skills are prompt-based shortcuts that can help make your browsing experience faster, more efficient, and tailored to your specific needs. With Skills, your favorite prompts and AI-powered tasks are just a keyboard shortcut away.
Skills are available to all Brave Leo desktop and Android users on versions 1.85 and higher.
What are SkillsLeo Skills is a feature that lets you access your most-used prompts with custom shortcuts. Instead of repeatedly typing the same instruction, you can create personalized skills for tasks you perform often. These can include the summarizing of complex topics, polishing a piece of writing, or even analyzing competitor websites. Among many other great benefits, Skills can:
Save time: Eliminate repetitive typing of the same, often-used prompt.
Maintain consistency: Ensure your prompts are always structured the same way, leading to more predictable and reliable AI responses.
Simplify more complex actions: Trigger powerful AI tasks with a simple command.
Customize your experience: Tailor Leo’s responses to your specific needs and workflows.
How to get started with SkillsTo create a skill from an existing conversation:
Start a conversation in Leo by clicking on the Leo AI icon on the top right side of the toolbar. When Leo responds to a prompt you like, on desktop you can click the options menu ("…") on the message bubble. Or, on Android, long tap your original message (not Leo’s response). Select Save as skill.To create a new skill from scratch:
Open the Skills selector by typing “/” in the Leo message box. Click New and fill in the details. Click Save.Note that when creating a skill from scratch, you’ll need to give your skill a unique name and choose a shortcut using a combination of letters, numbers, and hyphens only.
To use a skill you’ve created:
Open Leo. Type “/” followed by your chosen shortcut in the chat input. Leo will instantly recognize the shortcut and apply your saved prompt. How to edit skillsTo edit a skill on desktop:
Type “/” and then the name of your skill. Click the edit icon that appears.To edit a skill on Android:
Type “/” and then the name of your skill. Long tap the skill name. Built-in Skills to get startedTo get started, we’ve included a handful of pre-configured Skills:
/summarize: Get a concise summary of any webpage. /explain: Have Leo explain complex topics in simple, everyday language. /improve: Polish your writing for clarity and impact. /change-tone-persuasive: Rewrite content with a persuasive and compelling tone. /social-media-post: Transform your content into engaging social media posts. Becoming more efficient with more Skills at your fingertipsOnce you’ve tested pre-configured Skills, you can start using Skills in multiple ways to make your browsing and day-to-day tasks more efficient. Here are a few examples that could come in handy (see above section “To create a new skill from scratch” to get started):
/project - break down a specified task into a step-by-step action plan with time estimates. Example: /project redo my linkedin profile
/vs - Compare and contrast two specified options with pros, cons, and a recommendation. Example: /vs Windows Linux
/lunch - Suggest a well-reviewed, affordable place to eat lunch in a specified area. Example: /lunch midtown manhattan
Availability and requirementsThe Leo Skills feature is available on desktop (Windows, macOS, Linux) and Android devices running Brave version 1.85 or higher. Skills will also be available on iOS in the near future.
Once you’ve had a chance to try Skills, please do share feedback so we can continue to improve Skills, Leo, and Brave’s privacy-first AI experiences.
More than 1,200 builders stepped up to create, experiment, and push the boundaries of what’s possible with .brave domains, Brave Wallet, BAT, and on-chain Web experiences. The challenge—which we introduced in our October announcement and hosted through an official contest hub—brought together designers, developers, creators, and on-chain enthusiasts from around the world, all building fully functioning sites using a .brave domain.
Over several weeks, builders submitted personal websites, on-chain games, interactive experiments, creator tools, and niche utilities that showed just how creative and powerful on-chain website building can be inside the Brave ecosystem.
Now that judging has wrapped and the results are in, we’re excited to share the top 10 winners, our People’s Choice Award recipients, and a curated list of outstanding honorable mentions worth exploring.
What was the .brave Website-Building Challenge?The 2025 .brave Website-Building Challenge was an open global competition inviting anyone to build a fully functioning website using a .brave domain. Participants were encouraged to claim a domain, create and publish a site, and showcase what an on-chain website could look like in the Brave ecosystem. The challenge ran over a one month period beginning October 15th, 2025 and included a dedicated submission period, three weekly People’s Choice Awards, a full week of judging, and a final winners announcement on November 20.
More than 1,200 creators entered the competition, submitting a wide range of projects including personal websites, creator tools, games, interactive experiments and Web3 applications. A total of 15,000 BAT, $18,000 USD in .brave credits, and exclusive badges and Brave merchandise were awarded to the top 10 winners, and each People’s Choice Award winner won 300 BAT and $700 in .brave domain credits.
People’s Choice Awards (community voted)Throughout the competition, community members voted in weekly People’s Choice Award rounds, shining a spotlight on builders whose sites resonated with the Brave community.
The People’s Choice Award winners were:
Lease.brave Dao.brave Triwikrama.braveThese three projects stood out for design, creativity, and engagement, earning additional BAT rewards and .brave credits during the submission period.
While the main judging panel focused on technical execution, creativity, usability, and performance, the People’s Choice Awards represented the heart of the Brave community, rewarding projects that inspired, entertained, or delighted everyday users.
Top 10 winners of the 2025 .brave Website-Building ChallengeAfter an intensive week of judging, evaluating every site page by page, feature by feature, here are the 10 projects that rose above the rest.
1st place: TravelNotes.braveOur first place winner, TravelNotes.brave, delivers a visionary look at how travel experiences could evolve in a privacy-first, on-chain world. The concept imagines a BAT-powered travel platform where users earn tokens for sharing photos, writing reviews, and publishing local guides, all authenticated on-chain to preserve trust and originality. By integrating Brave Wallet and Brave Search, the project weaves together identity, discovery, and rewards in a way that feels seamless and intuitive.
What sets TravelNotes.brave apart is its clear sense of direction: a clean, user-friendly interface paired with a forward-looking model for how travelers and creators might interact without relying on centralized platforms. It demonstrates the potential for BAT to unlock new verticals and new forms of value exchange in everyday digital experiences. This thoughtful execution and ambitious vision earned TravelNotes.brave the top spot in the 2025 .brave Website-Building Challenge.
2nd place: guanny.braveTaking second place, guanny.brave stands out as a serene and beautifully executed calligraphy progressive Web app. The project blends traditional artistry with modern on-chain technology, featuring a lifelike brush engine that responds smoothly to stroke pressure and direction. Its timeless grid layout provides structure without limiting creativity, creating an experience that feels meditative and precise all at once.
Thoughtful touches of Brave and BAT functionality show how classic creative practices can evolve in an on-chain environment, offering new ways for artists to publish or potentially monetize their work. The level of polish, craft, and engineering quality in guanny.brave impressed judges across the board, earning it a strong second-place finish.
3rd place: djh23.braveIn third place, djh23.brave delivers a clever and forward thinking demonstration of how music creators could sell tracks directly to fans using BAT and Brave Wallet. The concept removes intermediaries entirely, showcasing a streamlined peer-to-peer model where listeners support artists instantly and transparently.
The site pairs this idea with a sleek, intuitive UI, playful animations, and a reactive star filled backdrop that makes engaging with the demo genuinely fun. djh23.brave highlights how on-chain domains and Brave’s tools could power a new wave of creator-first digital marketplaces, earning its well deserved spot in the Top 3.
4th–10th place winnersThese entries impressed judges with their originality, UI/UX quality, interactivity, and alignment with the Brave spirit:
4th place: Innovledia.brave 5th place: buttfonts.brave 6th place: triwikrama.brave 7th place: ethpay.brave 8th place: dao.brave 9th place: conqueror.brave 10th place: retroweb.braveEvery one of these projects demonstrates what’s possible when on-chain identity, creativity, and accessible Web3 tools intersect.
Honorable mentions and notable entriesWith so many submissions, limiting the winners to 10 was challenging. Here are some standout entries that caught the judges’ attention but didn’t place, showcasing clever concepts, impressive engineering, or unforgettable design. Others, such as heis.brave, used their .brave domain as a redirect to a traditional web destination, highlighting that on-chain domains can also serve as flexible gateways that point anywhere you want your audience to go.
Brandable.brave Dreadnet.brave Heis.brave Primer.brave Si.brave word4today.brave Thank you, Brave buildersTo all 1,200+ entrants, People’s Choice Award voters, and everyone who supported the challenge: Thank you for bringing so much creativity and energy to the .brave Website-Building Challenge.
This is just the beginning of what’s possible with on-chain domains, Brave Wallet, and Web3-powered publishing.
Compute is the most valuable resource in the AI world we live in today. Nvidia. Google TPUs. Amazon Trainium. OpenAI and Broadcom’s partnership. Elon’s recent post about Tesla’s AI chips.
Designing the most performant chips for AI workloads sits at the heart of accelerating technological progress.
But major hurdles exist.
First, chip design is slow. It takes 12-24 months at mature nodes and 18-36 months at the leading edge for 5nm or 3nm.
Second, chip design is prohibitively expensive. It costs on average $200-250 million for 7nm, $450-500 million for 5nm, and $600-650 million for 3nm. Roughly 50-70% of that is human labor. Another 5-15% is Electronic Design Automation tooling spend in a market long dominated by Cadence and Synopsys, where each generates $5-6 billion in annual revenue and are worth approximately $90-100 billion in market cap.
AlphaChip caught my eye for these exact reasons. It gave us a peek at AI’s potential to transform the entire chip design process, showing we can cut the floorplanning step in physical design from months to hours.
What if we could extrapolate this and build AI to automate the entire flow, from architecture design to RTL to verification, all the way through physical design?
What if chip design took days, not two to three years? Every day is massively costly; some reports from August 2024 indicated that a multi-month Blackwell delay could result in more than $10 billion in lost revenue for 2025 alone. More importantly, imagine the revenue potential unlocked when new generations of chips are designed faster and shipped earlier.
What if each design didn’t cost hundreds of millions of dollars? What if chip companies didn’t need to operate large human teams on top of clunky EDA tooling?
And most exciting: what if we unlocked novel chip designs we might never have explored?
AlphaChip revealed an important human bias: in chip design, we tend to think in Manhattan grid-like structures. AlphaChip’s designs were different, more organic in shape, more like forms inspired by nature. So different, in fact, that humans wanted to reject them at first … Yet AlphaChip went on to shape four generations of the TPU.
We at Sequoia are so excited to partner with co-founders Anna Goldie and Azalia Mirhoseini, leading their very first round from the formation of Ricursive Intelligence. They pioneered AI for chip design by creating and leading the AlphaChip effort and are at the epicenter of this emerging AI for chip design ecosystem. They are visionaries with incredible clarity of thought, intensely ambitious, humble yet exceptionally accomplished, and real talent magnets who move, and inspire others to move, with urgency and velocity.
Anna and Azalia founded Ricursive Intelligence to build the frontier AI lab defining this category. In just the first weeks since company formation, they have assembled a team with the highest talent density you can imagine in the field.
Their core belief: chip design is the compute bottleneck, and progress in AI, hardware and infrastructure is capped by the speed and efficiency of silicon creation.
In their words: “If we get this right, it’s not just faster chip design cycles; it’s a fundamental expansion of what’s possible in hardware. Once chip design becomes fast and accessible, everyone will be able to customize. The automation here will unlock a flood of new hardware innovation.”
Anna and Azalia’s vision for Ricursive is to define a new movement, from “fabless” to “designless.” Fabless, meaning a company designing chips without owning expensive fabs, outsourcing production to foundries. Designless, meaning outsourcing not only manufacturing but the entire chip design process, taking an idea and converting it into a manufacturable design.
We envision a world where Ricursive helps any company design chips for its own workloads faster, more efficiently and more creatively than is possible today. In doing so, Ricursive can help revolutionize the most valuable resource in our era: compute. We could not be more excited to help build a true generational company in the making.
Share Share this on Facebook Share this on Twitter Share this on LinkedIn Share this via email Related Topics #AI #Funding announcement Partnering with Reflection By Stephanie Zhan and Charlie Curnin News Read Partnering with Skild: The Future of Embodied Intelligence By Stephanie Zhan News Read Partnering with rift: Sales, simplified. By Stephanie Zhan and Charlie Curnin News Read JOIN OUR MAILING LIST Get the best stories from the Sequoia community. Email address Leave this field empty if you’re human:The post Partnering with Ricursive Intelligence: A Premier Frontier Lab Pioneering AI for Chip Design appeared first on Sequoia Capital.
In the summer of 2024, Mark Swan, an aspiring founder and former operations lead at Revolut, came to two realizations. The first was that service businesses would be the biggest benefactors of AI, with AI-powered tools helping to streamline operations and offload administration-intensive processes. The second was that one of the sectors with the greatest need for tools to support these tasks was the financial services industry, particularly wealth managers.
Swan recognized financial advisors faced an “80/20 problem.” They spent 80% of their time on administrative work, leaving only 20% for time with clients navigating significant, often emotional, financial life decisions, from saving for retirement to paying for their children’s education. Meanwhile, the demand for financial advice was accelerating. As of 2024, the number of affluent households in the U.S. with more than five hundred thousand dollars in investable assets was growing eight times faster than the population. But with such large administrative workloads, Swan learned, wealth management firms suffered an inability to scale and serve more clients. And the current tools for assisting with their administrative overhead weren’t cutting it.
From his research, Swan developed a third realization, this one contrarian. While many predicted tools for AI would replace human roles in the financial services sector, Swan believed the opposite: that in a world increasingly dominated by AI, human-led advice would become all the more important.
Fast forward to today — Nevis is a New York City-based AI platform for Registered Investment Advisors. Founded by Swan (CEO), Philipp Burda (CPO) and Ivan Chalov (COO), Nevis came out of stealth in early December. The company raised a $5 million seed round from Sequoia in 2024 and a Series A of nearly $35 million from Sequoia, ICONIQ and Ribbit Capital in 2025, bringing its total funding since inception to $40 million. Nevis’s thesis is simple: AI won’t replace financial advisors, but it can turbocharge them, offering tools that give wealth managers 80% of their time back. This will allow them to have deeper and more meaningful relationships with clients and enable them to scale to serve more people. Nevis’s rapidly growing customer base currently includes some of the fastest-growing wealth management firms in the U.S., collectively overseeing more than $50 billion in client assets.
Swan, Burda and Chalov first crossed paths at Revolut. Burda rose through the ranks as head of fincrime, then head of data science for the retail product, and, finally, partner and head of Revolut’s GenAI product initiatives. Meanwhile Chalov ascended from data scientist to general manager of retail business at Revolut. In their three years of overlap, the co-founders’ professional alchemy flourished. “We were absolutely smashing in terms of coming up with ideas, approaches, shortcuts and all these tricks,” Burda says. “I was very impressed by Ivan, how hardworking he is and logical at the same time. And Mark, I was blown away by the things he was doing and the quality of his work.” When Swan left Revolut in the spring of 2024 to pursue his own venture, Burda and Chalov encouraged him to stay in touch in case there was an opportunity for further collaboration.
Swan’s drive to bring meaningful change to an industry dates back to his childhood in Aberdeen, Scotland. His initial goal, however, was to become a politician. “I always knew I wanted to do something that would have the biggest impact on the world,” Swan says. “At the time, I felt the way to do that was through politics.” Swan rigorously plotted his course, diving into books and excelling in school — achieving more A-levels than any other Scottish student at the time.
Meanwhile, across the globe, Chalov’s and Burda’s fathers were fanning the enterprising flame in their own sons. “My father is an entrepreneur who tried everything: running a computer hardware store, owning a local newspaper, selling cars, even running a duck farm,” says Chalov, reflecting on his childhood in Akademgorodok, a small research town in Siberia. Meanwhile, Burda’s Russian physicist father was taking a more direct approach in encouraging a problem-solving spirit in his son — handing him books on HTML and math puzzles anytime they went on a car ride. Burda’s first stab at company-founding was at 15, when he started a short-lived web design agency with friends. He spent the next decade in academia, while Chalov worked as a trader at an investment bank, before both shifted industries to join the ranks at Revolut.
Swan was the last to arrive at Revolut. Before leaving for university, he had spent a year as an intern with Al Gore’s Generation Investment Management — an experience that opened his eyes to the potential of changing the world through business and technology. Following graduation, he pursued work in this space. His first official day in the Revolut office, however, began with a whimper. Swan showed up in the summer of 2021, excited to be in-person among co-workers after the pandemic’s remote era, only to discover an empty floor. He searched for his assigned line manager, but that person was nowhere to be found. Halfway through the day, after working alone for a number of hours, a man approached and asked his name. When he replied, “Mark Swan,” Chalov stuck out his hand. He informed Mark that the line manager in question no longer worked there; Swan would now report to him. “I wanted to join Revolut because I knew it was going to be like a rocket ship,” Swan says, “and I was going to learn better there than anywhere else. Just go drink from the fire hose. That was exactly my experience.”
Around this time, unbeknownst to Swan, his name went on Sequoia’s list of “prefounders” to keep an eye on, thanks to Edward O’Carroll, a former talent lead at Sequoia. “We have a term called ‘dynamo’ at Sequoia for people who are slightly earlier in their careers, but we would describe them as very high slope in terms of their rate of evolution,” says George Robson, a partner at Sequoia who wrapped up his own time at Revolut in 2020. “Mark was put on this list for exactly that reason — he was a ‘dynamo.’”
In the months after leaving Revolut, Swan heard repeatedly from those across service businesses that while a storm of startups offering AI-powered tools was emerging, the actual delivery of those promised products was falling short. “In a lot of professional services markets, like wealth tech, for example, you have AI companies who don’t fully understand that industry and how to build proper product solutions, but who are trying to sell the dream saying, ‘We’ll create an army of digital workers, which will work for you 24/7,’” explains Burda. “Realistically, those companies that don’t know how to build robust AI-native products in a regulated environment will have their products delayed and delayed. Advisors are being burned by false propositions, and customers lose trust in those companies, plus they’re missing out on proper tools that can help them now.”
As Swan zeroed in on an AI platform for wealth management, he identified trust as a key pillar of what would ultimately become Nevis. He also recognized that his deep experience building cutting-edge AI products at scale in financial services during his time at Revolut was a core differentiator. It would give customers confidence in his product and set his company apart.
In an act of shrewd business strategy with a dash of kismet, around this time Edward O’Carroll brought Swan’s name up to Sequoia partners Robson and Luciana Lixandru as a potential investment. “We had a meeting with Mark in our office,” Lixandru says. “I remember thinking, how is he 26? He’s just mature way beyond his years. He’s so accomplished, so ambitious, and has so many outlier characteristics. I had forgotten the feeling of meeting a founder and just knowing on the spot that you want to back them. That happens once a year, where you have an ‘aha’ moment right away.”
The seed investment from Sequoia happened swiftly in September 2024, almost as if Swan had manifested it. Swan admits he’s a closet “manifestor” — his favorite method is to write something down on a piece of paper and will it into existence. In this case, the piece of paper that he scribbled on the day he left Revolut, which he still keeps to this day, says he’d have seed money by the time he went to his parents’ wedding anniversary that fall. Sure enough, Swan found himself boarding a flight to Aberdeen, feverishly crunching numbers as he stepped toward the jet bridge. “I remember vividly trying to do the mental maths in my head. I’m trying to work out, is this deal good? Is it not good?” Swan recalls. The partnership was formalized, and, shortly thereafter, Burda and Chalov left Revolut to co-found Nevis with Swan.
Armed with a seed investment, the trio doubled down on understanding pain points within the wealth management sector, speaking at length with financial advisors to learn more about the 80/20 problem. Above all, what they heard repeatedly was that financial advisors valued their relationships with clients and that clients put enormous trust in their financial advisors — trust they didn’t want to cede to AI.
“People don’t want robots telling them how to manage their life savings,” says Swan, who has a personal connection of his own to financial advisors — or rather, a memory of the absence of one that gives him a perspective on their value. “Growing up, my parents did most of their investing themselves,” Swan says. “They weren’t in a position to have an advisor because the cost was too high. One of our goals with Nevis is to reduce the cost to serve, so advisors can support more clients and open access to high-quality financial advice for millions of people who don’t receive it today.”
Their business soon took shape. The co-founders created an AI “system of action” for financial advisors to reduce hours spent manually inputting data, updating records, and searching for information across fragmented systems. Unlike its point solution competitors, Nevis intentionally started with a more holistic product that can automate workflows end-to-end. Its platform connects to clients’ CRMs, email and messenger apps, and document storage systems, pulling from this data to assist in meeting preparation, to generate reports on investment performances and to maintain or make changes to clients’ accounts.
Confident in their product, the Nevis co-founders approached their Sequoia partners with a go-to-market plan centered on the U.K. Robson and Lixandru challenged them, however, to look at other, larger markets where AI is more rapidly being deployed. The trio eventually came to the conclusion that the U.S. had the same banking woes and pain points as the U.K. but was culturally ripe for this type of innovation.
To win customers’ trust, Swan and his co-founders prioritized in-person meetings, even if it meant traveling long distances, both to demonstrate the value of Nevis and also to build meaningful connections with those using the product. For United Capital, Swan flew to Newport Beach, California, discovered that Jim Rivers, the company’s president, loved to surf and turned one coffee meeting into a multi-day affair. For Apollon Wealth Management, Swan flew to Tampa and drove four hours to Miami for a thirty-minute sit-down with COO Brad Goodman, which evolved into a multi-hour strategy meeting, ultimately landing Nevis yet another customer.
Even after winning clients, Swan continues to regularly sit down with firms in person. “It’s beyond being on a first name basis,” says Robson. “By actually going and frequently spending time with clients, Mark is able to earn a very high level of confidence.”
Nevis’s decision to come out of stealth mode is a result of the momentum it’s seeing in the U.S. “We have enormous market demand,” Swan says, “and what’s more, we have validation from current customers around the strength of our product. We felt now was the time to bring this platform that we know is extremely valuable to a larger market.”
Nevis is rapidly growing its headcount, as it focuses on scaling and building new products. It continues to attract those who cut their teeth at Revolut; many of Nevis’s current employees, like its co-founders, are alumni of the company. Swan, Burda and Chalov recognize that people — from the internal team at Nevis, to the wealth management firms that use their product, to the clients depending on trusted advisors for wealth management — are the key to the company’s success. The founders are direct with each other when making decisions and direct with their clients when tackling their needs. Most importantly, Swan, Burda and Chalov know that to harness AI effectively, it needs to be developed through the exacting lens of tools and processes that can deepen trust amongst humans in the workplace instead of erasing it.
Share Share this on Facebook Share this on Twitter Share this on LinkedIn Share this via email JOIN OUR MAILING LIST Get the best stories from the Sequoia community. Email address Leave this field empty if you’re human:The post Nevis: Bringing AI to Wealth Management appeared first on Sequoia Capital.
A16Z co-founder Ben Horowitz joins Shaan Puri and Sam Parr on My First Million to talk about how to be a great leader.
Resources:
Follow Ben on X: https://x.com/bhorowitz
Follow Shaan on X: https://x.com/ShaanVP
Follow Sam on X: https://x.com/thesamparr
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Russ Fradin sold his first company for $300M. He’s back in the arena with Larridin, helping companies measure just how successful their AI actually is.
In this episode, Russ sits down with a16z General Partner Alex Rampell to reveal why the measurement infrastructure that unlocked internet advertising's trillion-dollar boom is exactly what's missing from AI, why your most productive employees are hiding their AI usage from management, and the uncomfortable truth that companies desperately buying AI tools have no idea whether anyone's actually using them.
The same playbook that built comScore into a billion-dollar measurement empire now determines which AI companies survive the coming shakeout.
Timecodes:
0:00 — Introduction
2:15 — Early Career, Ad Tech, and Web 1.0
3:09 — Attribution Problems in Ad Tech & AI
4:30 — Building Measurement Infrastructure
6:49 — Software Eating Labor: Productivity Shifts
8:51 — The Challenge of Measuring AI ROI
14:54 — The Productivity Baseline Problem
18:46 — Defining and Measuring Productivity
21:27 — Goodhart’s Law & the Pitfalls of Metrics
22:41 — The Harvey Example: Usage vs. Value
25:18 — Surveys vs. Behavioral Data
28:38 — Interdepartmental Responsiveness & Real-World Metrics
31:00 — Enterprise AI Adoption: What the Data Shows
33:59 — Employee Anxiety & Training Gaps
38:31 — The Nexus Product & Safe AI Usage
42:08 — The Future of Work: Job Loss or Job Creation?
44:40 — The Competitive Advantage of AI
53:45 — The Product Marketing Problem in AI
55:00 — The Importance of Specific Use Cases
Resources:
Follow Russ Fradin on X: https://x.com/rfradin
Follow Alex Rampell on X: https://x.com/arampell
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Catch up on all things PIVX with our weekly pulse update, from price swings to trading action, and other important community updates.
Market Pulse Masternode Count: The PIVX network saw an uptick in active masternodes this week, reversing last week’s decline. The total count now stands at 2,014, up from 1,954. This translates to a gain of approximately 3.07%, suggesting renewed confidence and increased stability in the PIVX ecosystem. Price Check: The PIVX market maintained a tight range this week, with Daily USD Prices compressed between $0.20 and $0.22. This consolidated movement, which saw no significant intra-day spikes, resulted in a weekly average price of $0.21, a drop from the prior week’s average of $0.2486.PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
PIVX Weekly Pulse (Nov 21st, 2025 — Nov. 27th, 2025) was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
In this episode, a16z GP Martin Casado sits down with Sherwin Wu, Head of Engineering for the OpenAI Platform, to break down how OpenAI organizes its platform across models, pricing, and infrastructure, and how it is shifting from a single general-purpose model to a portfolio of specialized systems, custom fine-tuning options, and node-based agent workflows.
They get into why developers tend to stick with a trusted model family, what builds that trust, and why the industry moved past the idea of one model that can do everything. Sherwin also explains the evolution from prompt engineering to context design and how companies use OpenAI’s fine-tuning and RFT APIs to shape model behavior with their own data.
Highlights from the conversation include:
• How OpenAI balances a horizontal API platform with vertical products like ChatGPT
• The evolution from Codex to the Composer model
• Why usage-based pricing works and where outcome-based pricing breaks
• What the Harmonic Labs and Rockset acquisitions added to OpenAI’s agent work
• Why the new agent builder is deterministic, node based, and not free roaming
Resources:
Follow Sherwin on X: https://x.com/sherwinwu
Follow Martin on X: https://x.com/martin_casado
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
“DevConnect 2025 was about touching and feeling Ethereum IRL”Nathan Sexer, lead of the DevConnect 2025 and Events team at the Ethereum Foundation, gives a peek into the largest iteration of Devconnect ever, with 20,000 attendees, and why the team pivoted to a "World's Fair" format, creating tangible districts for DeFi and Privacy to let attendees truly "touch and feel" the ecosystem.
The conversation gets real about the friction of the physical world. He explained why Argentina’s crypto-native culture makes it the perfect host, how hyperinflation fueled bottom-up adoption, and even the venue-wide internet failure became an accidental "feature," breaking the on-screen silos and pushing genuine face-to-face connections.
A massive geopolitical win was how the team worked with the government to issue 1,000+ visas for attendees from over 130 nationalities to make this event in the true spirit of borderless crypto.
The Ethereum Foundation is heading to Mumbai in 2026! The goal for India is to unify a fragmented developer diaspora and bring regulatory attention to one of the world's most critical tech hubs.
Topics
00:00 Intro & Scale 04:15 World's Fair Concept 09:50 Why Argentina? 14:30 Operational Challenges 18:15 Internet Blackout 22:00 Booth Renaissance 28:30 Privacy Priority 33:00 Devcon Mumbai 37:40 Indian DevelopersLinks
Devcon Twitter/X: https://twitter.com/EFDevcon Nathan Sexer on X: https://x.com/nethan_eth Ethereum Foundation: https://ethereum.org Gnosis: https://gnosis.io/Sponsors: Gnosis: Gnosis has been building core decentralized infrastructure for the Ethereum ecosystem since 2015. With the launch of Gnosis Pay last year, we introduced the world's first Decentralized Payment Network. Start leveraging its power today at http://gnosis.io
Ben Horowitz reveals why the US already lost the AI culture war to China—and it wasn't the technology that failed. While Biden's team played Manhattan Project with closed models, Chinese developers quietly captured the open-source heartbeat of global AI through DeepSeek, now running inside every major US company and university lab. The kicker: Google and OpenAI employ so many Chinese nationals that keeping secrets was always a delusion, but the policy locked American innovation behind walls while handing cultural dominance to Beijing's weights—the encoded values that will shape how billions of devices interpret everything from Tiananmen Square to free speech.
Resources:
Follow Ben Horowitz on X: https://x.com/bhorowitz
Follow Costis Maglaras on X: https://x.com/Columbia_Biz
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
When French citizens took to the streets during the recent Bloquons Tout ("Block Everything") protests, they were united in opposition to the proposed national budget. But beyond that shared frustration, what did they actually want? This is the question plaguing modern protest movements. We know what people oppose, but the mechanisms to understand what they support, and to find consensus amid that complexity, remain frustratingly elusive.
In this episode, Executive Director Jess Scully sits down with Yuting Jiang, CEO and co-founder of Agora Citizen Network. Unlike mainstream anti-social media that pulls us into tribal camps, Agora is prosocial, using machine learning to identify shared beliefs and bridge statements that unite rather than divide. Inspired by Polis, Agora is a space where citizens can move beyond broadcasting grievances to actually deliberating solutions together.
Yuting walks us through a consultation during the French protests with over 200 participants, in which Agora revealed a nuanced opinion landscape showing some key points of consensus, while exposing meaningful disagreements about how radical their calls for reform should be.
As RadicalxChange launches our own consultation on Agora, this conversation explores how we might build the prosocial media infrastructure that democracy actually needs.
Participate in our community conversation on Agora: https://agoracitizen.network/feed/conversation/4OcpxQ
Host: Jess Scully
Guest: Yuting Jiang
Producer: Jack Henderson
Feedback or ideas for future episodes? Email us at info@radicalxchange.org.
Connect with RadicalxChange Foundation:
Website X YouTube LinkedIn Discord BlueSky
From zero-knowledge proofs to ring signatures, confidential transactions, and stealth addresses, there are several privacy protocols in the crypto space. Ring Signatures and zk-SNARKs are perhaps the most prominent on the list. While both achieve anonymity, they do so through fundamentally different mechanisms, leading to distinct trade-offs in efficiency, security, and the scope of information they conceal.
Well, let’s take a look at these two revolutionary privacy technologies, exploring how they operate and their differences.
zk-SNARKsShort for Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge, zk-SNARKs are a specific type of Zero-Knowledge Proof that allows a user (the prover) to mathematically convince a verifier that a statement is true, without revealing any information about the statement itself beyond its validity.
With zk-SNARKs, all transaction data, such as the sender address, the recipient address, and the transaction amount, is completely hidden. The transaction logic is first converted into a complex algebraic equation. The prover then uses their secret key to generate a highly compressed, non-interactive cryptographic proof that solves the equation without revealing the underlying inputs.
The core principle here is Information-Theoretic Security: the secret data is never revealed to the network, but is mathematically encrypted within the proof itself. Verification relies solely on checking the algebraic correctness and succinct nature of the proof. This technology is famously used by projects like Zcash (ZEC) and PIVX for its SHIELD protocol.
Key Technical Properties Succinctness: The proof size is extremely small, typically just a few hundred bytes, and remains constant regardless of the complexity of the data being proven. This dramatically reduces blockchain storage requirements and speeds up verification. Non-Interactivity: Once the proof is generated, no further communication is required between the prover and verifier, making verification a highly efficient, one-time operation. Computational Security: The protocol is mathematically sound, ensuring that it is computationally infeasible for a dishonest prover to generate a valid proof without possessing the necessary secret knowledge. Ring SignaturesRing Signatures operate on a fundamentally different principle: plausible deniability through obfuscation and mixing.
They are designed primarily to conceal the sender’s identity by obscuring which specific private key signed the transaction. The true sender combines their private key with the public keys of several other random accounts, often called “decoys” or “mixins”, drawn from the blockchain.
The resulting ring signature is valid for the entire group, or “ring,” making it computationally infeasible for an outside observer to pinpoint the actual signer. The core principle is plausible deniability since the true source is mixed with decoys, making the origin ambiguous.
Its security relies heavily on the size and randomness of the anonymity set (the ring). This technology is famously implemented by Monero (XMR), where it is combined with Stealth Addresses (hiding the recipient) and RingCT (hiding the amount) to enforce privacy by default.
Key Technical Properties Improvised Rings: A core advantage is that Ring Signatures do not require a central manager or prior permission from the decoy members. Any user can construct a ring instantly using public keys already on the ledger. Linkability Prevention: To prevent double-spending, Ring Signatures use a key image, a unique tag derived from the sender’s private key. If two signatures appear with the same key image, the network knows they originated from the same user, even though it doesn’t know who that user is. Proof Size: Unlike the succinct nature of zk-SNARKs, the size of a Ring Signature typically grows proportionally to the number of decoys included in the ring, leading to larger transaction sizes. zk-SNARKs Vs. Ring Signatures Scope of Privacy: zk-SNARKs hide all three transaction fields (sender, recipient, and amount) in one mathematical proof. Ring Signatures primarily hide the sender; they require separate protocols (Stealth Addresses, RingCT) to achieve full privacy. Underlying Mechanism: zk-SNARKs rely on cryptographic proof and complex mathematics for their security. Ring Signatures rely on obfuscation and the quality of the anonymity set (the decoy pool). Efficiency: zk-SNARKs are superior in terms of efficiency, providing an extremely succinct and constant-sized proof that can be verified very fast. Ring Signatures create proofs that are variable in size and are generally slower to verify as the size of the ring increases. Setup Requirement: Ring Signatures boast the advantage of having no setup requirement; rings are improvised instantly. zk-SNARKs have historically required a potentially sensitive trusted setup, though newer generations of the technology are resolving this. Fungibility: Both methods provide high fungibility, but zk-SNARKs offer perfect fungibility as every shielded coin is mathematically indistinguishable.PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
zk-SNARKs vs. Ring Signatures was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
Marc Andreessen and Ben Horowitz sit down with Margit Wennmachers—the woman who turned two unknown entrepreneurs with $300 million and zero investing track record into the most talked-about firm in venture capital. She unpacks how they weaponized transparency in an industry built on secrecy, why Fortune's cover story triggered a cartel meltdown, and the exact moment a casual lunch conversation became "Software Is Eating the World." This is the origin story of how A16Z broke every unwritten rule, made enemies of every top-tier firm, and permanently rewired what it means to build companies in public.
Resources:
Follow Marc on X: https://x.com/pmarca
Follow Ben on X: https://x.com/bhorowitz
Follow Margit on X: https://x.com/wennmachers
Follow Erik on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Half a billion people can access the world’s best AI on their phone. So why are most using it to write emails while only some are using it to build empires?
In this conversation with Mark Halperin from Next Up, Marc Andreessen reveals why small bakeries are beating Fortune 500 companies at AI adoption, how to turn ChatGPT into your personal board of directors, and why Silicon Valley just reversed five years of geographic dispersion overnight. He also shares the questions that unlock AI's real power—including one of his favorite prompts: "What questions should I be asking?"
Resources:
Follow Mark Halperin on X: https://x.com/MarkHalperin
Follow Marc Andreessen on X: https://x.com/pmarca
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
During Q3 2025, Least Authority (in their role as the Zcash Ecosystem Security Lead) undertook an audit of the changes implemented in Zebra in order to support the NU6.1 Zcash network upgrade.
We are pleased to announce that the audit was favorable and that, at the time of writing, we have addressed all of the suggestions in our 3.0.0 NU6.1 stable release.
You can find a link to the audit report in Least Authority’s blog post.
The post Zebra NU6.1 Audit appeared first on Zcash Foundation.
A new threat is proving that even “end-to-end encryption” isn’t a silver bullet for total privacy. Security researchers have uncovered a sophisticated Android banking trojan, dubbed Sturnus, that is capable of capturing private conversations on apps like WhatsApp, Telegram, and Signal.
Discovered by Dutch cybersecurity firm ThreatFabric, Sturnus appears to be in its pre-deployment phase, but is already fully functional. It is configured with templates to target major banks across Southern and Central Europe, signaling preparation for a far wider and more coordinated global operation.
Sturnus functions as an advanced banking trojan that gives attackers near-total remote control of an infected Android device. While apps like Signal or WhatsApp protect data in transit, the trojan is designed to monitor everything displayed on your phone’s screen in real time. It simply waits for the messages to be decrypted and shown by the app. It then captures full message threads and contacts right from the display. The very moment you read a message, the attacker can too.
The malware uses highly convincing full-screen overlays to capture banking credentials. It can even execute financial transactions while displaying a black full-screen overlay on your device, hiding the activity from you completely.
When a single piece of malicious code can gain this level of control, the answer to the question, “What could possibly go wrong?” is simply: Everything!
PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
When Your “Private” Encrypted Chats Are Read Anyway was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
Epoch AI researchers reveal why Anthropic might beat everyone to the first gigawatt datacenter, why AI could solve the Riemann hypothesis in 5 years, and what 30% GDP growth actually looks like. They explain why "energy bottlenecks" are just companies complaining about paying 2x for power instead of getting it cheap, why 10% of current jobs will vanish this decade, and the most data-driven take on whether we're racing toward superintelligence or headed for history's biggest bubble.
Resources:
Follow Yafah Edelman on X: https://x.com/YafahEdelman
Follow David Owen on X: https://x.com/everysum
Follow Marco Mascorro on X: https://x.com/Mascobot
Follow Erik Torenberg on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
At DevConnect 2025, Sebastian and Friederike speak with Peter Van Valkenburgh about the rapidly evolving battle for digital rights. Peter challenges the industry's comfort with transparency, arguing that "transparency will destroy neutrality." He uses the history of SWIFT to illustrate how a once-neutral messaging system was captured by geopolitical interests because it wasn't "technically blind" to the data it processed. He argues that for blockchains to survive as global settlement layers, they must be "actually blind" to transactions, making neutrality a technical reality rather than a policy choice.
The conversation turns to the aggressive legal tactics currently deployed against developers. Peter highlights the Pereira Bueno case, where prosecutors charged MEV searchers with wire fraud for being "dishonest validators" a concept Peter argues completely undermines the game-theoretic security of permissionless networks. He also breaks down the mixed bag of Tornado Cash litigation. While the sanctions against the protocol were successfully challenged and invalidated for Americans, the criminal conviction of developer Roman Storm for "unlicensed money transmission" sets a terrifying precedent for anyone publishing open-source code.
On a constructive note, Peter introduces Coin Center's "John Hancock Project," which advocates for replacing the current, ineffective KYC/AML regime (which seizes less than 1% of illicit funds) with a system based on privacy-preserving attestations and self-sovereign risk scores. Finally, Peter shares surprising optimism regarding the US Securities and Exchange Commission (SEC). He notes that under the influence of Commissioners Hester Peirce and Paul Atkins, the agency has shifted from an aggressive adversary to a potential ally, openly discussing the benefits of full asset tokenization and the constitutional necessity of financial privacy.
Topics
00:00 The Telegram vs. Signal security rant 05:15 The "Transparency Paradox": Why transparent Layer 1s cannot remain neutral in the long run 10:40 The SWIFT Analogy: How a neutral messaging layer became a politicized settlement enforcer 15:50 The Pereira Bueno Case: Why labeling MEV strategies as "wire fraud" threatens all validators 23:10 L2 Sequencing Risks: Centralization and the need for "dumb pipes" 28:30 The Failure of KYC: Why 99.8% of illicit funds are missed and the cost of mass surveillance 35:00 The "John Hancock Project": Using ZK-proofs and attestations to replace identity surveillance 42:15 Tornado Cash Update: Sanctions invalidated vs. the dangerous precedent of Roman Storm’s conviction 49:00 The SEC's 180: Hester Peirce, Paul Atkins, and the push for tokenized equitiesLinks mentioned in the episode:
Gnosis: https://gnosis.io/ Coin Center: https://www.coincenter.org Epicenter - All Episodes: https://epicenter.tv/ Report: Tear Down This Walled Garden: https://www.coincenter.org/tear-down-this-walled-garden/ Pereira Bueno Amicus Brief: https://www.coincenter.org/amicus-brief-mev-wire-fraud/ Peter on X: https://x.com/valkenburgh Sebastian on X: https://x.com/seb3point0 Friederike on X: https://x.com/tw_tterSponsors:Gnosis: Gnosis has been building core decentralized infrastructure for the Ethereum ecosystem since 2015. With the launch of Gnosis Pay last year, we introduced the world's first Decentralized Payment Network. Start leveraging its power today at http://gnosis.io
Don’t miss a thing in PIVX. Our weekly digest delivers the inside scoop on key market shifts and crucial community developments.
Market Pulse Masternode Count: The number of active masternodes on the PIVX network slightly decreased this week, falling to 1,954 from 2,048. This represents a drop of approximately 4.6%. This minor reduction is likely attributable to recent price volatility, as the price of PIV briefly rallied within the week. Some masternode operators may have opted to take profit or temporarily move their nodes offline. Price Check: PIVX successfully countered the extreme volatility seen across the broader cryptocurrency landscape this week, managing to trade largely sideways despite intense selling pressure on major assets.PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
PIVX Weekly Pulse (Nov 14th, 2025 — Nov. 20th, 2025) was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
Vlad Tenev built Robinhood by breaking every rule Wall Street wrote: zero commissions when competitors charged $10, mobile-first when "serious" investors demanded desktop, a brand that made finance feel like rebellion instead of a club you'd never join.
By 2021 they'd forced every major brokerage to slash fees and attracted millions who'd never owned a stock, but then GameStop happened: trading restrictions during the meme stock frenzy triggered congressional hearings, user fury, and a two-year brand crisis that nearly buried them despite the real culprit being antiquated clearing mechanics no one understood.
Now Tenev's pushing an even more radical vision—tokenizing private company shares so retail investors can own stakes in AI giants before IPO, turning prediction markets into "truth machines" that beat polls and pundits, and building what he calls the end of financial nihilism: a platform where your seventy-year-old parents and your Gen Z cousin both manage everything from retirement accounts to election bets in one place.
The question isn't whether traditional finance survives this; it's whether Robinhood can move fast enough to own the entire wealth transfer before someone else does.
Timestamps:
0:00 - Introduction
3:52 - Financial Relationships vs. Pinterest Boards: The Higher Bar
5:27 - Building in a Regulatory Catch-22
7:53 - Three Simultaneous Contrarian Bets
12:15 - From Institutional HFT to Retail Revolution
17:40 - January 28th: The Day Trust Died
27:40 - “Simple Lie More Powerful Than Complicated Truth”
30:02 - Tokenization: The Antidote to T+1 Settlement
32:52 - IPO Access: From Asking For Favors to Everyone Wanting In
39:22 - “Series D Was Called an IPO”
43:06 - WTF Happened in 1971?
47:42 - Going Broad While Going Deep
53:26 - The $120 Trillion Wealth Transfer
58:16 - Debunking “Financial Nihilism”
58:40 - “Speculation Is Critical For Functioning Markets”
Resources:
Follow Vlad Tenev on X: https://x.com/vladtenev
Follow Alex Rampell on X: https://x.com/arampell
Follow Erik Torenberg on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: [https://x.com/a16z](https://x.com/a16z)
Find a16z on LinkedIn: [https://www.linkedin.com/company/a16z](https://www.linkedin.com/company/a16z)
Listen to the a16z Podcast on Spotify: [https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX](https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX)
Listen to the a16z Podcast on Apple Podcasts: [https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711](https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711)
Follow our host: [https://x.com/eriktorenberg](https://x.com/eriktorenberg)
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see [a16z.com/disclosures](http://a16z.com/disclosures).
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The former bank regulator who invented deposit networks just revealed why SVB's collapse was inevitable—and why the solution that could have saved them is finally being rebuilt.
Gene Ludwig ran the OCC during the Clinton administration, created a half-trillion-dollar market solving a problem his Aunt Betty faced riding buses between banks, then watched his invention fail to save Silicon Valley Bank because the technology, economics, and incentives were fundamentally broken.
Now he's partnered with Paolo and ModernFi to build what could become America's eighth systemically important financial utility: a bank-owned consortium that's signing 25 institutions per week and racing to protect the 4.8 trillion in uninsured deposits that make the next crisis inevitable.
Resources:
Follow Gene on LinkedIn: https://www.linkedin.com/in/gene-ludwig/
Follow Paolo on LinkedIn: https://www.linkedin.com/in/paolombertolotti/
Follow David on X: https://x.com/dhaber
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Today, Brave Leo offers a new capability for cryptographically-verifiable privacy and transparency by deploying LLMs with NEAR AI Nvidia-backed Trusted Execution Environments (aka “TEEs”, see “Know more about TEEs” section below for details). Brave believes that users must be able to verify that they are having private conversations with the model they expect. This is available in Brave Nightly (our testing and development channel) for early experimentation with DeepSeek V3.1 (we plan to extend this to more models in the future based on feedback).
By integrating Trusted Execution Environments, Brave Leo moves towards offering unmatched verifiable privacy and transparency in AI assistants, in effect transitioning from the “trust me bro” process to the privacy-by-design approach that Brave aspires to: “trust but verify”.
Brave believes in user-first AILeo is the Brave browser’s integrated, privacy-preserving AI assistant. It is powered by state-of-the-art LLMs while protecting user privacy through a privacy-preserving subscription model, no chats in the cloud, no context in the cloud, no IP address logging, and no training on users’ conversations.
Brave believes that users must be able to:
Verifiable Privacy—Users must be able to verify that Leo’s privacy guarantees match public privacy promises.
Verifiable Transparency in Model Selection—Users must be able to verify that Leo’s responses are, in fact, coming from a machine learning model the user expects (or pays for, in the case of Leo Premium).
The absence of these user-first features in other competing chatbot providers introduces a risk of privacy-washing. It has also been shown—both in research (e.g., “Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs”) and in practice (e.g., backlash against ChatGPT)—that chatbot providers may have incentives to silently replace an expensive LLM with a cheaper-to-run, weaker LLM, and to return the results from the weaker model to the user in order to reduce GPU costs and increase profit margins.
Brave moves towards Verifiable Privacy and Transparency in LLMs through Confidential ComputingBrave begins this journey by removing the need to trust LLM/API providers, using Confidential LLM Computing on NEAR AI TEEs. Brave uses NEAR AI TEE-enabled Nvidia GPUs to ensure confidentiality and integrity by creating secure enclaves where data and code are processed with encryption.
These TEEs produce a cryptographic attestation report that includes measurements (hashes) of the loaded model and execution code. Such attestation reports can be cryptographically verified to gain absolute assurance that:
A secure environment is created through a genuine Nvidia GPU TEE, which generates cryptographic proofs of its integrity and configuration.
Inference runs in this secure environment with full encryption that keeps user data private — no one can see any data passed by the user to the computation, or any results of the computation.
The model and open source code that users expected are running unmodified by cryptographically signing every computation.
In Stage 1 of our development, Brave performs the cryptographic verification, and users can use “Verifiably Private with NEAR AI TEE” in Leo as follows:
User selects DeepSeek V3.1 with label of Verifiably Private with NEAR AI TEE available in Leo on Brave Nightly
Brave performs NEAR AI TEE Verification, validating a cryptographic chain from NEAR open-source code to hardware-attestation execution, ensuring responses are generated in a genuine Nvidia TEE running a specific version of the NEAR open-source server
Brave transmits the outcome of verification to users by showing a verified green label (depicted in the screenshot below)
User starts chatting without having to trust the API provider to see or log their data and responses
The future of Verifiable Privacy and Transparency in Brave LeoToday, we are excited to release TEE-based Confidential DeepSeek V3.1 Computing in Brave Nightly (our testing and development channel) for early experimentation and feedback.
Before rolling this out more broadly, we’re focused on two things:
No user-facing performance overhead—TEEs introduce some computational overhead on the GPU. However, recent advances significantly reduce this overhead—down to nearly zero in some cases—as shown in Confidential Computing on NVIDIA Hopper GPUs: A Performance Benchmark Study. We want to ensure our users don’t experience performance regressions.
End-to-end verification—We’re actively researching how to extend confidential computing in Leo so that users can verify their trust in the entire pipeline, along with Brave open-sourcing all stages. In particular, we are planning to move the verification closer to users so they can reconfirm the API verification on their own in the Brave browser.
More about Trusted Execution EnvironmentsA Trusted Execution Environment (TEE) is a secure area of a processor that provides an isolated computing environment, separate from traditional rich runtime environments such as an Operating System (OS). A TEE enforces strong hardware-backed guarantees of confidentiality and integrity for the code and data it hosts. These guarantees are achieved through enforcements such as dedicated memory that is accessible only to the TEE.
Hardware isolation ensures that even a fully compromised OS cannot access or tamper with any code or data residing inside the TEE. In addition to this, TEEs expose unique hardware primitives such as secure boot and remote attestation to ensure only trusted code is loaded into the TEE and so that external parties are able to verify the integrity of TEE.
Chip manufacturers have enabled TEEs on CPUs (traditionally) and GPUs (recently). The combination of TEE-enabled CPUs (e.g., Intel TDX) and TEE-enabled GPUs (e.g., Nvidia Hopper) enables end-to-end confidentiality, and integrity-preserving computation of LLM inference with minimal performance penalty.
The post Congrats, Chronosphere and Palo Alto Networks! appeared first on Greylock.
Speaking at Devcon, Ethereum founder Vitalik Buterin unveiled Kohaku, a new privacy-focused framework for the Ethereum ecosystem. He described the launch as entering the “very last mile stage” of development, where concerted effort is needed to enhance user privacy and security on the blockchain.
Kohaku is an open-source, modular suite of primitives that will allow developers to build privacy-preserving wallets and applications without reliance on centralized intermediaries. The framework is intended to establish default, opt-in privacy for Ethereum-connected wallets, thereby formalizing privacy as a fundamental user expectation.
This initiative, spearheaded by the Ethereum Foundation and other key stakeholders, marks a major step in the ongoing “privacy upgrade path” for the network.
The project currently serves as a work-in-progress on GitHub, incorporating software packages for protocols that enhance on-chain anonymity and security. Notably, the framework includes integrations for Railgun and Privacy Pools. The Ethereum boss has been quite vocal on his call for privacy in the crypto space. He said:
“Privacy is freedom. It gives us space to live our lives in the ways that meet our needs without having to constantly worry about how our actions will be perceived by all kinds of centralized and decentralized coercive political and social entities.”
Last month, the Foundation launched the Privacy Cluster, a 47-member team of cryptographers, engineers, and researchers dedicated to establishing privacy as a “first-class property” of Ethereum. Furthermore, the former Privacy & Scaling Explorations team rebranded to the Privacy Stewards of Ethereum in September, signifying a shift from speculative research to solving concrete, real-world privacy challenges, including confidential DeFi and private voting mechanisms.
Kohaku may also evolve to include tools like ZK-powered browsers and mixnets for network-level anonymity.
PIVX. Your Rights. Your Privacy. Your Choice.
To stay on top of PIVX news please visit PIVX.org and Discord.PIVX.org.
Vitalik Buterin Joins Privacy Party, Unveils Kohaku was originally published in PIVX on Medium, where people are continuing the conversation by highlighting and responding to this story.
Palmer Luckey got fired from Meta for backing the wrong candidate—now he's the hero saving American defense, and that shift tells you everything about how fast the ground moved beneath Silicon Valley's feet. For decades, tech and defense were allies, then came 15 years of hostility so visceral that Google employees revolted over a Pentagon AI contract, and when leadership caved, only three people showed up to hear what border security actually involves. But something broke: COVID exposed our inability to make things, Ukraine revealed wars now iterate in days not decades, and suddenly the Harvard dorm room generation realized the people building satellites and drones weren't just necessary—they were the future, while legacy defense contractors still operate on Soviet-style five-year plans that guarantee cost overruns and obsolescence. Now the question isn't whether Silicon Valley returns to its Cold War roots, but whether America wins by becoming more like China's centralized system or doubles down on the chaotic creativity that built nine of the world's ten most valuable companies in 25 years—and the founders flooding into defense, energy, mining, and manufacturing suggest the second American century is just getting started.
Resources:
Follow Ben on X: https://x.com/bhorowitz
Follow Marc on X: https://x.com/pmarca
Follow Katherine on X: https://x.com/KTmBoyle
Follow David on X: https://x.com/davidu
Follow Erik on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
We’re excited to announce the release of Zebra 3.0.0—the most feature-rich Zebra release ever! In addition to supporting NU6.1, the latest network upgrade, we’ve also focused on making Zebra easier to run in production, faster to sync, and available on more platforms.
Zebra 3 at a GlanceHere’s what’s new in this release:
NU6.1 Network Upgrade – Activates the latest Zcash protocol upgrade on Mainnet Health Check Endpoints – Built-in HTTP endpoints for monitoring node health and readiness in production deployments ARM64 Support – Native support for Apple Silicon Macs (M1/M2/M3), Raspberry Pi, and ARM cloud servers Dramatically Faster Initial Sync – Full blockchain sync from genesis now takes 15-16 hours in CI and less than 9 hours on some machines, down from 24+ hours (representing a 35-65% reduction in initial sync time) Enhanced Security – Docker images now include software bill of materials (SBOM) and build attestations Easier Configuration – Configure Zebra using environment variables without editing config files Improved Network Infrastructure – Zcash Foundation now runs highly-available Zebra nodes to better support the ecosystem New RPC Methods – Three new methods for network info, mempool stats, and side chain queries plus several new fields on the outputs of existing RPC methods. Faster Testing – Modernized test infrastructure for quicker development cycles
New Versioning approach – Major version numbers now indicate breaking changes, not just network upgrades. This helps you understand when updates need extra attention.
You can now set up health monitoring with load balancers and orchestration tools like Kubernetes. ARM64 support opens up more hardware options including cost-effective ARM cloud instances. The Zcash Foundation is also now running highly-available infrastructure which benefits the entire ecosystem.
If you’re setting up Zebra for the first timeZebra is now easier to configure, allowing you to set environment variables instead of editing config files. Initial sync is dramatically faster (15-16 hours instead of 24+ hours)—and if you’re on an Apple Silicon Mac, you’ll get native ARM64 performance without any extra steps.
If you build tools that use ZebraThree new RPC methods provide better access to network information, mempool data, and side chain queries. We’ve also improved transaction validation performance and made many enhancements to existing RPC methods.
If you contribute to ZebraOur modernized testing infrastructure means faster CI runs and quicker feedback on pull requests. The codebase also benefits from numerous code quality improvements and better documentation.
Want to know more? Here’s a deeper look at what’s in this release.
Network Upgrade & Protocol Changes NU6.1 Network UpgradeZebra 3.0.0 activates NU6.1 on Mainnet, which includes:
Extended funding streams for network development (1,260,000 additional blocks) One-time lockbox disbursements The latest network protocol version (170_140)This release includes several features specifically designed to make running Zebra in production easier and more reliable.
One of the standout features is the addition of HTTP health check endpoints. These simple HTTP endpoints make it straightforward to integrate Zebra with production monitoring and load balancing systems.
Two endpoints are available:
/healthy – Returns 200 when Zebra is running and has active peer connections, 503 otherwise
/ready – Returns 200 when Zebra is fully synced and ready to serve requests, 503 otherwise
The difference matters for production deployments: use /healthy for basic liveness checks (is the process running?) and /ready to ensure you only send traffic to fully synchronized nodes with active peer connections.
Enable the health check server by adding a [health] section to your configuration:
[health]
listen_addr = "0.0.0.0:8080"
min_connected_peers = 1
ready_max_blocks_behind = 2The endpoints are disabled by default and designed for internal infrastructure use (no authentication). They return simple plain-text responses like “ok” or brief explanations when checks fail.
ARM64 Support: Native Performance Everywhere
Zebra now runs natively on ARM64 platforms, which means:
Apple Silicon Macs (M1, M2, M3) – No more emulation or Rosetta overhead
ARM cloud instances – AWS Graviton, Google Cloud Tau T2A, and other ARM servers
Single-board computers – Raspberry Pi and similar devices
Docker automatically handles platform detection:
docker pull zfnd/zebra:latest
# Automatically gets the right version for your platform
This change takes advantage of GitHub’s free ARM64 runners for open source projects, so we can build and test ARM64 releases without additional infrastructure costs. The result is native performance on ARM platforms without any setup complexity.
Enhanced Docker Image Security
For organizations that need to audit their software supply chain, Zebra’s Docker images now include provenance attestations and Software Bill of Materials (SBOM).
What this means:
Provenance attestations track exactly how the image was built, where, and by whom
SBOM provides a complete inventory of all software components in the image
Both enable security scanning, vulnerability tracking, and compliance reporting
These features integrate with tools like Docker Scout, Snyk, and other security platforms that can automatically scan for known vulnerabilities.
Improved Network Infrastructure
The Zcash Foundation has upgraded its deployment infrastructure to run long-lived, highly-available Zebra nodes. These improvements directly benefit the Zcash ecosystem:
DNS Seeder Support – Long-lived nodes with static IP addresses provide reliable peer discovery for the network
Multi-zone High Availability – Running 2-3 instances across availability zones ensures consistent uptime
Automatic Health Monitoring – Uses the new /healthy endpoint with auto-healing to maintain service reliability
Persistent Storage – Disk reuse between deployments eliminates lengthy re-syncs, ensuring nodes stay current
These long-lived nodes with static IP addresses help support the broader Zcash ecosystem by providing stable infrastructure for DNS seeders and improving overall network reliability. While this work is specific to the Foundation’s infrastructure, the underlying improvements to Zebra’s deployment tooling are available for anyone running production nodes.
Performance Improvements
Dramatically Faster Initial Sync: From 24+ Hours to Under 16 Hours
This release delivers a significant performance achievement: full blockchain synchronization from genesis now completes in 15-16 hours, down from 24+ hours in previous versions. This represents a 35-40% reduction in sync time.
The improvement comes primarily from PR #9973, which fixed a regression in how Zebra updates address balances in the database. The fix avoids expensive RocksDB merge operations when the database format is already up-to-date, dramatically improving write performance during initial sync.
This is combined with batch validation improvements for both Orchard and Sapling proofs, which reduce computational overhead during block validation.
Real-world impact: Our continuous integration tests show consistent sync times of 15-16 hours for full Mainnet sync from genesis, compared to 20-28 hours before these optimizations. If you’re running a new Zebra node or syncing from scratch, you’ll see this improvement immediately.
Better Performance Through Batch Validation
Transaction validation is now more efficient thanks to:
Orchard batch validation – Validates multiple Orchard proofs together instead of two at a time
Sapling batch validator – Uses upstream batch validation for Sapling proofs
Batch validation reduces computational overhead during block processing by validating groups of proofs together, which is more efficient than individual verification. The batch size for Orchard validation increased from 2 to 64 actions per batch.
Developer Experience
Easier Configuration with Environment Variables
Zebra now uses a layered configuration system that makes it easier to configure through environment variables:
export ZEBRA_NETWORK__NETWORK=Mainnet
export ZEBRA_STATE__CACHE_DIR=/var/cache/zebra
export ZEBRA_RPC__LISTEN_ADDR=0.0.0.0:8232
zebrad start
The configuration loads in layers: built-in defaults, then optional TOML file, then environment variables. This means you can:
Run Zebra with zero configuration files for simple setups
Use a base configuration file and override specific values via environment variables
Keep secrets in environment variables instead of config files
The pattern for environment variables is ZEBRA_SECTION__KEY, matching the TOML structure. For example, [network] section’s network = "Mainnet" becomes ZEBRA_NETWORK__NETWORK=Mainnet.
Important: This change includes breaking updates to environment variable names. See the Migration Notes section below.
New RPC Methods
Three new RPC methods improve compatibility with existing tools:
getnetworkinfo – Returns network status, peer information, and protocol version details
getmempoolinfo – Provides detailed statistics about the memory pool
getrawtransaction side chain support – Query transactions from side chains
Many existing RPC methods received enhancements:
Script assembly output in transaction details for debugging
Completed z_gettreestate response with all required fields
Added missing Orchard fields to transaction responses
Flexible parameter handling in getaddresstxids (accepts string or object)
Chain info support in getaddressutxos
Standardized byte ordering across RPC responses
Added vjoinsplit field to getrawtransaction
This release includes a complete modernization of the testing infrastructure using cargo-nextest:
Faster test execution – Better parallel test running
Centralized test configuration – 17 specialized test profiles in .config/nextest.toml
Clearer test results – Better progress reporting and timeout handling
Simplified CI workflows – Reduced complexity in GitHub Actions
For contributors, this means quicker feedback on pull requests and less time waiting for CI to complete.
New Versioning Strategy
Starting with Zebra 3.0.0, we’re changing how we version releases. Previously, major version bumps were primarily tied to network upgrades. Going forward, we’ll release a new major version whenever there are breaking changes, which includes:
Changes to the Zebra API
Changes to operational requirements (like configuration format)
Other modifications that require action from node operators or developers
This approach gives us a clearer way to communicate impact. When you see a major version bump, review the release notes carefully and plan for potential updates to your deployment or integration. Minor version bumps will continue to be backward compatible.
Migration Notes
If you’re upgrading from Zebra 2.x, please note these breaking changes.
Environment Variable Changes
The environment variable naming scheme changed. Key updates:
Old Variable
New Variable
ZEBRA_CACHE_DIR
STATE_CACHE_DIR
ZEBRA_FORCE_USE_COLOR
FORCE_USE_COLOR
RUST_LOG or ZEBRA_RUST_LOG
ZEBRA_TRACING__FILTER
For configuration values, use the pattern ZEBRA_SECTION__KEY:
ZEBRA_NETWORK__NETWORK=Mainnet
ZEBRA_RPC__LISTEN_ADDR=0.0.0.0:8232
ZEBRA_STATE__CACHE_DIR=/var/cache/zebra
Docker Entrypoint Changes
The Docker entrypoint no longer generates TOML configuration files automatically. Instead:
Configure using environment variables (recommended)
Mount a configuration file into the container
Use a combination of both approaches
Acknowledgments
A big thank you to all the contributors who made this release possible: @arya2, @oxarbitrage, @conradoplg, @gustavovalverde, @syszery, @upbqdn, @str4d, @nuttycom, @zancas, @natalieesk, @gap-editor, @elijahhampton, @dorianvp, @AloeareV, @sashass1315, @radik878, @GarmashAlex, @Galoretka and @Fibonacci747.
Special recognition goes to Least Authority for the security audit of the NU6.1 implementation which provided valuable suggestions for improvements.
Thank you to everyone who filed issues, tested release candidates, and provided feedback throughout this release cycle.
The Z3 Stack
Zebra 3.0.0 is designed to integrate seamlessly with the other Z3 components. We’ve been developing the Z3 stack, which is a complete deployment solution for Zcash infrastructure. The stack includes:
Zebra – The consensus node (this release)
Zaino – The indexer for address and transaction data
Zallet – The CLI wallet service
These components work together to provide a complete, production-ready Zcash infrastructure that you can deploy with a single configuration.
Zebra 3.0.0 represents a significant step forward in making Zebra production-ready. The features in this release—health checks, ARM64 support, and improved configuration—provide a strong foundation for reliable node operations.
We’re continuing to develop the Z3 stack to provide a complete, easy-to-deploy solution for Zcash infrastructure. We’re also working on future network upgrade support, including early implementations of proposals for NU7.
Get Involved
Questions or feedback? We’d love to hear about your experience with Zebra 3.0.0. Share your thoughts on the Zcash Community Forum, connect with us on Discord, or open an issue in the Zebra repository:
Documentation: Visit the Zebra Book for guides and tutorials
Source Code: Explore our GitHub repository to review the codebase
Issue Tracker: Report bugs or request features
Contribution Guide: Learn how to contribute to Zebra
Community Forum: Join the discussion on the Zcash Community Forum
Community Chat: Connect with us on Discord for real-time discussions
The post Zebra 3.0.0 Release: Our Most Feature-Rich Release Ever appeared first on Zcash Foundation.
The Zcash Foundation is committed to transparency and openness with the Zcash community and our other stakeholders. Today, we are releasing our Q3 2025 report, which provides an overview of the work undertaken by our engineering team, as well as an overview of other activities during this period.
As with our previous quarterly reports, this report describes our financial inflow and outflows, with a detailed breakdown of our expenses, and we have included a snapshot of the Foundation’s financial position, in terms of liquid assets and liabilities that must be met using those assets.
Download the Q3 2025 report here.
Our previous quarterly reports can be found here.
The post The Zcash Foundation’s Q3 2025 Report appeared first on Zcash Foundation.
Emmett Shear, founder of Twitch and former OpenAI interim CEO, challenges the fundamental assumptions driving AGI development. In this conversation with Erik Torenberg and Séb Krier, Shear argues that the entire "control and steering" paradigm for AI alignment is fatally flawed. Instead, he proposes "organic alignment" - teaching AI systems to genuinely care about humans the way we naturally do. The discussion explores why treating AGI as a tool rather than a potential being could be catastrophic, how current chatbots act as "narcissistic mirrors," and why the only sustainable path forward is creating AI that can say no to harmful requests. Shear shares his technical approach through multi-agent simulations at his new company Softmax, and offers a surprisingly hopeful vision of humans and AI as collaborative teammates - if we can get the alignment right.
Resources:
Follow Emmett on X: https://x.com/eshear
Follow Séb on X: https://x.com/sebkrier
Follow Erik on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Joint work with Sebastian Angel (University of Pennsylvania), Sofía Celi (Brave), Elizabeth Margolin (University of Pennsylvania), Pratyush Mishra (University of Pennsylvania), Martin Sander (University of Pennsylvania), Jess Woods (University of Pennsylvania).
Parsing is one of those fundamental operations in computing that usually goes unnoticed. Whenever a browser renders a web page, a firewall inspects traffic, or a compiler transforms code, some parser is silently turning a raw stream of bytes into a structured object. We tend to take this step for granted, assuming that once the input is parsed, the rest of the system can reason safely about it. We also tend to expect browsers or compilers to do this “out-of-the-box” and to fix any parsing errors on the fly.
This expectation is not unreasonable in day-to-day computing. Browsers recover gracefully from malformed HTML, and compilers flag syntax errors so that developers can fix them. But in privacy-preserving verification settings, particularly in zero-knowledge proof systems, this assumption and leniency is problematic. A prover might commit to some byte stream and then claim that it represents a valid JSON document, a token, or a source file, without ever proving that parsing was carried out correctly. The verifier, lacking access to the underlying data, has no way to tell if the prover is being honest. This missing link between raw bytes and structured data has quietly limited the scope of many proposed ZK applications. A proof system that assumes “the input is already a valid JSON object” or “the transcript must be well-formed” leaves itself open to subtle but impactful attacks. If a prover can slip malformed input (without verifying the parsing stage), they may be able to prove false claims with valid-looking proofs.
The missing link in ZK applications: CoralOver the past decade, the cryptography community has explored ambitious ideas under the umbrella of zero-knowledge: proving statements about TLS sessions, verifying attributes from digital credentials, checking compilation chains, and enforcing network policies without revealing the underlying traffic. Yet all of these share a silent fragility.
Consider the case of zk-TLS. A user might want to prove to a smart contract that their bank’s API reported a certain balance. Today’s approaches typically assume the API response is already a well-formed JSON object (or HTML). If the bank’s server is misconfigured or encounters a bug which causes it to output a malformed response (not valid JSON or HTML), a malicious prover can misuse this malformed response to convince the verifier of a wrong fact. Similar gaps appear in zk-Authorization systems that work with JWTs, or in zk-Middleboxes that inspect traffic without any guarantee that the data stream adheres to its formal grammar (being that JSON, HTML, or HTTPS).
Without a way to prove that bytes are parsed correctly into structured objects, these systems must either leak information, rely on unverifiable assumptions, or leave themselves open to attack. In order to fix this, we introduce Coral, a system for proving in zero-knowledge that a committed byte stream corresponds to a structured object in accordance with a Context Free Grammar.
Coral’s approachCoral’s approach begins from a simple observation: in many settings, verifying the outcome of a computation is far easier than performing the computation itself, especially when the verifier is given suitable hints. A classic example is sorting: producing a sorted list takes O(n log n) time, but verifying that a list is correctly sorted (and corresponds to a given original list) only takes a single pass, or O(n), if the prover provides the sorted output together with a mapping from each element in the original list.
Parsing exhibits the same structure. Parsing is the process of determining whether an input byte string conforms to a given grammar, and, if so, producing a parse tree that witnesses this conformance. Executing the full parsing algorithm is a complex, stateful process, but verifying the result of parsing, typically represented as a parse tree, is dramatically easier. Rather than encoding the parser functionality itself into circuits or emulating it inside a zkVM 1, Coral assumes that an untrusted parser has produced a candidate parse tree, and focuses only on checking that this output is correct in regards to the private original byte stream and a public grammar.
However, this raises a representational challenge: parse trees for standard context-free grammars (CFGs) naturally contain nodes with arbitrary numbers of children. Such variable-degree nodes map poorly to circuit-based proof systems, which prefer uniform, fixed-arity structures. Coral overcomes this by applying a classical compiler-technique transformation. The parse tree is converted into a left-child right-sibling (LCRS) binary tree, a representation that encodes the same structure but ensures that each node has at most two outgoing edges: one to its first child and one to its next sibling. This binary, uniform layout preserves the full semantics of the original grammar while aligning with the fixed-arity wiring of R1CS constraints.
With a binary parse-tree structure that fits naturally in R1CS, the next challenge is how to verify it efficiently, step by step, inside a proof system. Coral introduces a specialized NP checker: a uniform verification loop that checks that each node of the tree matches both the grammar and the underlying byte stream. Because this loop is uniform and recursive, it is compatible with folding schemes like Nova, allowing thousands of verification steps to be aggregated into a single succinct proof.
Implementing this NP checker requires careful handling of several kinds of memory: read-only public grammar rules, a persistent private parse tree, and stacks used during tree traversal. Coral unifies these into a segmented memory abstraction that exposes a single interface for public and private, persistent and volatile regions. This design lets the checker access exactly the memory it needs at each step, while keeping the overall construction simple and compatible with folding.
The result is a clean, modular system: bytes (belonging to a document) are committed once, transformed into a parse tree, and then verified in zero-knowledge via the NP checker and its memory management, which scales smoothly even on modest machines. In the system, efficiency is crucial. Previous approaches that attempted to reason about parsing in ZK either relied on zkVMs, emulating an entire CPU and parser at high overhead, or on transformations that required grammars to be rewritten into Chomsky Normal Form, inflating their size and losing practical features such as exclusion rules. Coral avoids both pitfalls and can be used with any Context-Free Grammar.
The implementation, written in Rust, proves the parsing of realistic inputs such as JSON API responses, TOML configuration files, and subsets of C source code. Proofs are succinct (under 20 kB), quick to generate (seconds), and cheap to verify (under 70 ms). Importantly, they require only a few gigabytes of memory and no special hardware, making them feasible on an ordinary laptop.
Toward broader applicationsWith parsing in zero knowledge now within reach, new avenues open up. For instance, a prover can commit to a TLS transcript and prove not just that some field exists, but that the transcript itself parsed correctly under the relevant grammar. A user can prove properties of a credential without revealing the token or its structure, knowing that malformed inputs cannot trick the verifier. Compilation chains, often opaque and difficult to audit, can be proven end-to-end: from source code to binary, with the parsing steps included. Even middleboxes can enforce policies while respecting privacy, because they can rely on proofs about the syntactic structure of traffic rather than trust opaque byte streams.
Coral is not a complete solution. Many real-world formats, such as HTML, PDF, and Python, are not context-free. In practice, many commonly used grammars rely on priorities and complex negative predicates, which we do not yet support. Extending Coral to handle context-sensitive features remains an open and important challenge. But by addressing the fundamental gap between commitments to bytes and proofs over structured objects, Coral lays the foundation for a new generation of ZK systems.
Parsing may seem mundane compared to the sophistication of cryptographic protocols, but in many ways it is the keystone. Without guarantees that inputs are parsed correctly, higher-level proofs risk being built on sand. Coral grows from that sand: packing it, proof by proof, into something solid. Coral demonstrates that this keystone can be brought into the realm of zero knowledge: fast, succinct, and practical. By making parsing provable, we hope Coral helps unlock applications where verifiability and privacy are not trade-offs but partners, and where the structured objects we depend on can be reasoned about with the same rigour as the cryptographic proofs themselves.
Want to know more?Coral has been accepted to IEEE S&P 2026, so attend if you want to learn more! Or you can play with our code: https://github.com/eniac/coral
A zkVM is a virtual machine whose execution can be proved inside a zero-knowledge proof system. Instead of writing custom circuits for each computation, a zkVM lets developers run arbitrary programs (usually in a fixed instruction set like RISC-V or WASM), and then generates a succinct proof that the program was executed correctly on some input, optionally hiding the input and intermediate state. ↩︎
The post Introducing AirOps: The System of Action for Organic Growth appeared first on Greylock.
Two venture capitalists dissect why biotech burns billions while China runs trials in weeks—and why the next Genentech won't look anything like the last one. Elliot Hershberg reveals the "three horsemen" strangling drug development as costs explode to $2.5 billion per approval, while Lada Nuzhna exposes how investigator-initiated trials in Shanghai are rewriting the competitive playbook faster than American founders can file INDs. When the infrastructure that built monoclonal antibodies becomes the commodity threatening to hollow out an entire industry, the only path forward demands inventing medicines that are literally impossible to make without tools that don't exist yet—and they're betting everything on which approach survives.
Resources:
Follow Jorge on X: https://x.com/JorgeCondeBio
Follow Lada on X: https://x.com/ladanuzhna
Follow Elliot on X: https://x.com/ElliotHershberg
Follow Erik on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.