bci/acc: A Pragmatic Path to Compete with Artificial Superintelligence

An e/acc zoom-in on brain interfaces, towards human superintelligence

Summary
Artificial superintelligence (ASI) is perhaps 3–10 years away. Humanity needs a competitive substrate. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses: bci/acc. How do we make it happen? We'll need BCI killer apps like silent messaging to create market demand, which in turn drives BCI device evolution. The net result is human superintelligence (HSI).
bci/acc draws on today’s technologies without requiring big scientific breakthroughs. It’s e/acc zoomed-in for BCI. It’s solarpunk: optimistic and perhaps a little gonzo. And it could be a grand adventure for Humanity.
Based on talks at Foresight Institute in Dec 2023 & Nov 2023 [video], and at NASA in Oct 2023. These talks extend this 2016 blog post and this 2012 talk at BrainTalks@UBC.
=== Contents ===

1. Introduction
2. Artificial Superintelligence
   2.1 How market forces drive ASI
   2.2 The journey to artificial superintelligence
   2.3 ASI Risk
   2.4 Approaches to ASI Risk
       - Decelerate -> let evolution happen -> speed it up (e/acc)
       - Cage -> fancier cage
       - Align post-hoc -> dumb-to-smart chain -> during training
       - Get competitive (bci/acc)
3. Human Superintelligence, via bci/acc
   3.1 Introduction
       - High-bandwidth BCI challenges
       - Implants-first vs masses-first
   3.2 Baseline tech for bci/acc
       - EEG for typing; for focus, more
       - Glasses with subtitles; with voice interface
       - AR Goggles + hand gestures: Meta Quest 3
       - AR Goggles + eye-tracking: Apple Vision Pro
       - Eye-tracking is BCI
   3.3 BCI killer apps
       - Silent messaging; internal dialog
       - Perfect memory; share visual memories
       - Talk in pictures; talk in brain signals
   3.4 The journey to high-bandwidth BCI
       - Bandwidth++ via implants; via optogenetics. Bike-shedding.
       - Invasive BCI into mainstream -> growth
       - Your BCI will be part of *you* -> hyper-local alignment
   3.5 The journey to human superintelligence
   3.6 Cognitive liberty
4. Conclusion
5. Appendix

1. Introduction
It was summer 1995. In the pages of Wired magazine, I read about a new product called MindDrive: “The first computer product operated by human thought”. I was skeptical. But I had to try it! So I dropped $150 and got one.
I’d slip the MindDrive on my index finger, and boot up into the game “MindSkier”. I’d ski downhill in first-person view, and try to steer between the 30 or so pairs of gates. I’d steer by “thinking”. It was actually an echo of my thoughts: the device’s gold-plated sensor tracked my skin conductivity (GSR). I would miss about 30% of the gates, compared to missing 80% of them if the device wasn’t on my finger at all. It worked, barely. A starting point for what came next!
Left: the MindDrive. Right: In Rosie Revere, Engineer, Rosie’s great aunt teaches her a brilliant lesson.
At a giant engineering science fair, I set up the MindDrive for anyone to try. There was a line around the block [Spec1999]. There was a latent interest in BCI.
In 2001, I splurged $2K and bought an “Interactive Brainwave Visual Analyser” (IBVA). I’d wear a blue headband holding sticky electrodes to sense electrical signals on my forehead, i.e. an electroencephalogram (EEG). It sent the EEG signals to my computer, which displayed them as animated 3D graphics. More usefully, I could access the signals directly with my own software, so I did. I could hack BCI! Alas, it was hard to get good signals. I also tried OCZ NIA and Emotiv EPOC later on, but they weren’t qualitatively better.
From these limited experiments, and adjacent work in AI and analog circuits, I had a feeling that BCI bandwidth could be optimized a lot. This 2012 work from Tsinghua University confirmed my hunch, achieving moderate typing speeds [Tsh2012]. A decade of optimizing later, we’re now at 62 words per minute (albeit with implanted electrodes; very good).
High-bandwidth BCI is not a scientific mystery; it’s an engineering problem.
Why might we be interested in high bandwidth BCI?
The answer is artificial superintelligence (ASI): AI machines with 1000x+ the cognitive ability of humans. ASI may happen as soon as 3–10 years from now. Market forces are pushing it into existence because there’s a lot of money at stake.
How do we, as humans, have a role in a world of AI machines with 1000x our cognitive abilities?
Humans need a substrate that’s actually competitive with ASI: silicon. The best way to get there is brain-computer interfaces (BCIs). We’ve got to do this soon enough for ASI time scales; therefore, we need to accelerate BCI and get mass adoption. The net result will be human superintelligence (HSI).
The rest of this article has two sections:
- Artificial superintelligence (ASI): what’s driving ASI, ASI risk, and approaches to address that risk.
- Human superintelligence: how to accelerate BCI and achieve mass adoption, to get Humanity competitive with ASI.

2. Artificial Superintelligence (ASI)

2.1 How market forces drive ASI
Market forces have been driving AI compute up. The plot below shows how the compute for AI training has risen, from about 1950 until now (2024). The y-axis is logarithmic: each tick is another order of magnitude. Therefore, while the drawn curve looks roughly linear, the underlying trend is exponential.
Market forces are driving AI compute up. [Graph from LessWrong.com, with my 20 PFLOPS overlay]
The compute has grown quickly: from about 100 (10²) floating point operations per second (FLOPS) in 1950, to 10²⁴ now. That’s 22 orders of magnitude of compute power in three-quarters of a century. To intuit just how much growth this is: it’s the difference between 1 mm and flying to Alpha Centauri and back over 100 times.
From this growth, we now have a lot of compute. To help intuition: George Hotz frames 20 PetaFLOP/s as “1 person” worth of brainpower (compute). This is akin to 746 Watts being “1 horse” worth of power (1 hp). Just as it’s easier to reason about horses worth of power, it’s easier to reason about persons worth of compute. We surpassed “1 person” worth of compute in about 2012. Now we’re 10 million times beyond; it’s like all the brainpower of NYC rolled into one compute system.
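A back-of-the-envelope sketch of this “persons of compute” framing (the inputs are the rough figures quoted above, not measurements):

```python
# "Persons of compute", back-of-the-envelope. All inputs are the rough
# figures quoted in the text, not measurements.
import math

PERSON_FLOPS = 20e15   # George Hotz: 20 PFLOPS ~ "1 person" of compute
FLOPS_1950 = 1e2       # ~100 FLOPS at the dawn of computing
FLOPS_NOW = 1e24       # frontier AI training compute, per the chart

orders = math.log10(FLOPS_NOW / FLOPS_1950)
persons = FLOPS_NOW / PERSON_FLOPS

print(f"{orders:.0f} orders of magnitude of growth")  # 22
print(f"{persons:.0e} persons of compute today")      # 5e+07: tens of millions
```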
Market forces have driven compute up because it meant more money. More compute unlocked more markets, each of which was highly lucrative: from space & radio to TV, from the PC to the cellphone, from the smartphone to AI now and AR/VR soon. AI has a voracious appetite for compute, with $ benefits to match. That’s why there’s so much money flowing into AI right now, and no sign of abating.
2.2 Path to ASI
For decades, we’ve had AIs that can do tasks that previously only a human could do. That is, narrow AI. Examples are antenna design and analog circuit synthesis. For almost as long, we’ve had AIs that can do a task at a level far exceeding any human. These too are called narrow AI. Examples are digital circuit synthesis and software compilers.
We’re about to get AI that can do all tasks that previously only a human could do. That is, artificial general intelligence (AGI). To riff on Paul Graham, AIs will have progressed from “smart” (good at one thing) to “wise” (decent at everything).
Market forces will drive AGI from 1x smarter than humans, to 2x, to 10x, then 100x, then 1000x. It will happen quickly: there is $ to be made. We’ll arrive at AI that can do all tasks at a level far exceeding any human. That is, artificial superintelligence (ASI).
ASIs will be wildly smarter than humans. If, on some scale of human intelligence, 2 is an idiot and 6 is an Einstein, what is 1000, or 1,000,000? [Rutt2024a] It’s such a difference that it’s hard to imagine as a possibility; this cognitive dissonance will prevent most people from truly registering it until it’s right upon them.
ASIs will be wildly smarter than humans [From @AiSafetyMemes]

2.3 ASI Risk
Humans are 1000x+ smarter than ants. As humans, we don’t respect the rights of ants or “what the ants have to say”. We are their gods.
ASIs will be 1000x+ smarter than humans. We are now the ants. There is little guarantee that ASIs will respect our rights. This is ASI risk.
What will it feel like? God-like intelligence will beget god-like power: the ASIs will become our gods. In the Hyperion sci-fi series, ASIs exist, yet humans still barely comprehend them, except to know that ASI power is unimaginably vast [Hyp1989].
What can we do about ASI risk? Section 2.4 reviews various ideas.
2.4 Approaches to ASI Risk

2.4.1 Idea: decelerate
Yudkowsky and others advocate slowing down or pausing AI progress, then figuring out how to solve ASI risk. It’s highly appealing at first glance. As with all such ideas, one must be careful: wishing doesn’t make it true.
Alas, there is a problem: for such a deceleration to work, all deceleration efforts would need to be successful. If even just one entity defects, they could dominate the others. And that’s why this route likely won’t happen. There’s an AI race; at the core, it’s China vs USA, and there’s too much at stake for one side to cede speed to the other. So the race will go on. It’s like nuclear: for all the disarmament theatre, we still have the nukes.
Alas, what might happen is deceleration for all players except the US government and Chinese government (plus their proxies), and organized criminals. This hurts human freedom because it diminishes “voice” and “exit” for individuals, not to mention freedom to work on solving ASI risk 🤦 [Verd2023]. It’s a common trick for governments to use the banner of safety to take further control [Snow2013].
“They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.” — Benjamin Franklin
Perhaps most importantly, this “approach” doesn’t actually address the problem of ASI risk. That is, if AI was decelerated, then we’d still have to solve the core problem of ASI risk! That’s what most other ASI-risk approaches aim to do.
2.4.2 Idea: let evolution happen
“Let evolution happen” is the framing of Google founder Larry Page, and many others. They see humans as simply one step in the tree of evolution; that ASI is the next step; that we should be proud that we made the next step happen; and that if our biological bodies can’t compete (they can’t) then we should let go and get over it; that this is evolution.
From my work on evolutionary computation, I’ve seen how powerful evolution can be. Whether or not we like this framing, it really could be the scenario that happens.
However, letting go is not a solution to ASI risk. Personally, I’d love to keep building and playing for as long as I can, in a grand adventure, until I opt-in to end that adventure. While people have invented a thousand rationalizations for death, I choose life until further notice. Humanity should be the same. We have a potential grand adventure in front of us! So we should rage, rage against the dying of the light. Humanity should choose life until further notice.
2.4.3 Idea: speed it up (e/acc)
“Effective accelerationism” (e/acc) is a movement sparked by @BasedBeffJezos and @BayesLord, and extended & promoted by technologist / VC Marc Andreessen, among others. I find myself aligned with most of e/acc philosophy: grounded in physics, optimistic, build-up not tear-down, and more.
e/acc’s approach to AI is “let everyone have at it, speed it up”. It aims for a multi-polar AI world: thousands (or millions or billions) of superintelligent AIs, or entities with superintelligent AIs, keeping each other in check. It’s a bit like the USA, which balances power among three branches (legislative, executive, judicial). Or like blockchains, which balance power among thousands of nodes.
Therefore, perhaps surprisingly, e/acc is likely safer than the “deceleration” approach (which leaves a balance among only two powers) 😲!
e/acc is also open to human superintelligence (HSI), but with no special emphasis. It’s meant to be an umbrella idea, for others to add detail with zoom-ins.
Vitalik Buterin’s “decentralized accelerationism” (d/acc) is a zoom-in on e/acc that emphasizes decentralized technologies, with a bit more bias towards safety. Like e/acc, it’s open to HSI, though with no special emphasis.
Among those thousand or billion+ superintelligent AI entities, e/acc assumes that at least some of them will be friendly to humans; and that they will help humans have a role in the future. But what if the friendly ones are overruled by the unfriendly ones? And as the ASI risk introduction covered, why would gods bother treating ants well?
Fortunately, e/acc is sufficiently broad that it allows for variants not needing this assumption. The most promising variant is: use BCI to get a competitive substrate, with mass adoption. That’s bci/acc! This post will elaborate below.
2.4.4 Idea: put it in a cage, unplug if things go awry
First, some background. You can think of Bitcoin as a really dumb robot that does just one thing: maintain a ledger of transactions. Yet it’s also sovereign: it answers to no one, it is its own independent entity, you can’t unplug it. Similarly, Ethereum is sovereign. The Uniswap V2 decentralized exchange contracts running on Ethereum are sovereign too: they answer to no one. Arweave permanent data storage is sovereign. Ocean Predictoor AI-powered data feeds are sovereign. Every smart contract that doesn’t have governance is sovereign. Finally, the internet itself is sovereign. Building sovereign software systems is a solved problem. Appendix 5.1 elaborates.
With that background in place, let’s review the idea: “put the ASI in a cage, and unplug if things go awry”.
Here’s one problem: you can’t unplug it. The ASI is smart, so it’s already made itself decentralized, therefore sovereign, therefore un-unpluggable. Just like Bitcoin.
Some observers see this idea and other similarly glib “takes” as a waste of energy. The “AI Alignment Bingo” in Appendix 5.2 offers a concise (and hilarious) summary of many takes & responses.
2.4.5 Idea: fancier cage
The idea is to use advances in cryptography, blockchain, and more to make the cage “hack proof”. Sergey Nazarov of Chainlink is a proponent, among others.
The problem: humans are the weak link in computer systems. Hackers like Kevin Mitnick have made Swiss cheese of computer systems by tricking gullible humans into giving them access, not by attacking the software or hardware directly. Therefore, the “fancier cage” idea is not feasible unless we 100% solve human gullibility (not going to happen).
Loki after tricking Thor to escape a fancy cage: “Are you ever not going to fall for that?” [Avengers 1]

2.4.6 Idea: align via a post-hoc wrapper
This is the approach that OpenAI took for GPT. The idea is akin to installing an aftermarket exhaust system on a new car, to tune behavior in a particular direction. For example: train an unconstrained LLM first, then tack on RLHF training to align with human values. If all goes well, scale up this approach as we get to AGI and ASI.
Alas, it has been shown to be easy to jailbreak, with holes everywhere, as the world has witnessed on ChatGPT running GPT-4. Finding issues and adding more constraints will end up as endless whack-a-mole. I’ve been there, for other AI problems. The root problem is that tacking a band-aid onto such a core problem will (likely) never be enough.
Main: aligning an AI via a post-hoc wrapper is like adding an aftermarket exhaust system to your car. Bottom right: endless jailbreaks are like whack-a-mole: as soon as you whack one issue, another pops up.

2.4.7 Idea: dumber AIs aligning smarter ones
This is the approach published by OpenAI in December 2023. The idea is to have a chain of AIs from dumb → smart, where each link is a dumber AI aligning the next-smarter AI.
Alas, this is only as strong as its weakest link (and links can be weak), there is risk of over-leverage (think 2008 financial crisis), and the ASI at the end of the chain might disagree or change the rules. Appendix 5.3 elaborates.
2.4.8 Idea: align the AI while training
Can we ever align something 1000x smarter than us? This idea sidesteps that concern in the near term, in two complementary ways:

1. Diligently choose a training set based on strongly-held human values [Weng2023].
2. Start at 1x-level or even 0.1x-level, as in a human baby. Then grow it to a child, then a teenager, then an adult, and beyond, keeping it aligned the whole time [Goer2013].
This is akin to growing square watermelons, which grow subject to human-imposed constraints 🍉 🤖. The hope (but not guarantee) is that as it goes from 1x to 10x and beyond, it remains aligned to human values.
This approach also assumes that data-centric learning will be the trick to get to ASI. It may be one of the most important, but maybe not the most important [Rutt2024b].
There’s promise to this idea; it’s worth trying.
2.4.9 Idea: bci/acc: get a competitive substrate via BCI
Silicon is a wildly powerful substrate: it already has amazing compute, storage and bandwidth and it keeps improving exponentially. It’s what’s powering AI, and soon, AGI and ASI.
This idea is: our current meatbag brains just can’t compete against silicon for processing power. It’s “1 person” of processing power vs 10 million.
Everything that silicon touches goes exponential: the “Silicon Midas Touch”. For our brains to compete with silicon, they must touch silicon. The higher bandwidth the connection, the more that our brains can unlock the power of silicon for our selves.
Therefore we need to swallow our pride, stop treating carbon like a deity, and get a competitive substrate: silicon. The specific “how” is brain-computer interfaces (BCIs), or uploading. The target is human superintelligence. Some call this “the merge”. Others, “intelligence amplification” (e/ai).
Given ASI timelines of 3–10 years, simply hoping for “the merge” means it likely won’t happen fast enough. We need to accelerate it somehow. The options are BCIs or uploading. Uploading is still mostly a scientific problem, way too far out to be relevant to ASI risk. In contrast, BCI has already matured past the science into engineering problems. Of the two, BCI is the more pragmatic.
We can’t just invent an amazing BCI technology. To truly counter ASI, we need to get it in the hands of the mainstream billions.
In short, we need to accelerate BCI and get it to mass adoption. This is what bci/acc is all about.
Another framing of bci/acc (in one variant) is: “align the AI at the core, as you train”, but in the most hyper-localized way imaginable: train one AI for every single human, where each human constrains the AI in real time, and the AI starts small and grows iteratively. It’s a square-watermelon AI as a co-processor to your brain 🍉 🧠.
bci/acc: accelerate BCI and take it to the masses. It uses BCI killer apps like silent messaging (SMs) to create market demand, which in turn drives BCI device evolution to a substrate competitive with ASI.

3. Human Superintelligence via bci/acc

3.1 Introduction
Accelerating BCI (bci/acc) is among the least-discussed approaches, yet it may have the best chance of success to address ASI. So it’s imperative that we explore bci/acc more deeply.
3.1.1 High-Bandwidth BCI challenges
To go all the way to human superintelligence, non-invasive BCI likely won’t have enough bandwidth. We will need super-high bandwidth BCI, via neural implants (invasive) or optogenetics (semi-invasive) or other such brain technologies.
Alas, going invasive or semi-invasive has its own challenges, on engineering, regulatory, and societal fronts:
- Engineering. The main goal is to increase bandwidth, a hard enough thing on its own. Yet engineering must also solve critical privacy risks, lest we lose cognitive liberty.
- Regulatory. Getting approval for human trials of (semi-)invasive brain technologies is currently a long, high-friction process in the name of test-subject safety. Alas, the current regulatory structure ignores the much larger risk that ASI poses to Humanity: a bike-shedding problem. How can we speed this up?
- Societal acceptance. Even if the devices existed and regulators approved, invasive BCI currently feels icky to most people. This will affect Humanity’s ability to manage ASI risk. The Overton Window will likely need to shift so that mass society is more open to such technologies.
There are two different routes to solving challenges (1)(2)(3): implants-first and masses-first. Let’s explore each.
3.1.2 Implants-First Route
Elon Musk’s Neuralink has made great progress over the past decade.
Tansu Yegen on Twitter: “🧠 Elon Musk announced the first successful Neuralink brain chip implant in a human. Think about telling someone 10 years ago that by 2024, we’d be on the brink of unlocking telepathy...”
Neuralink is perhaps the furthest along on engineering (1). Its path to regulatory (2) is to focus on healing people, which limits its speed. Societal acceptance (3) is on ice until regulatory (2) is much farther along. In short, its route is (full 1) → (full 2) → (full 3). While I’m a Neuralink fan, to maximize chance of success, I’d love to see more companies chase this route.
3.1.3 Masses-First Route
Given ASI timescales, the Neuralink route to (1)(2)(3) may not be fast enough. There’s another path: route (partial 1, full 2, full 3) → (better 1, full 2, full 3) → (full 1, full 2, full 3). That is: start with non-invasive BCI tech that has no regulatory issues, and get mass adoption. Use this mass adoption to grow societal openness and open up regulations towards (semi) invasive BCI.
The starting point is killer apps for healthy people with non-invasive tech.
Killer app. To hit the masses, BCI needs a killer app. We need to “make something people want”. Silent messaging (SMs), aka pragmatic telepathy, is one candidate; perfect memory is another; there are more. Below, I explore candidate killer apps like silent messaging. Once we have that first killer app, we can expand to adjacent functionalities.

Healthy people. To hit the masses, BCI needs to be optimizing healthy humans, not merely fixing human ailments. Otherwise it’s not mass-market enough.

Non-invasive first. To hit the masses, the BCI needs to be non-invasive to start. Invasive won’t get enough takers at the beginning, and regulatory is a bottleneck. But to truly leverage the Silicon Midas Touch, we must get to invasive. How? Pressure from market forces and ASI risk will take us over the hump.

3.1.4 Discussion & Outline
bci/acc allows for an implants-first route, a masses-first route, and other routes. We don’t know which will be best; we should explore all of them aggressively. Since Neuralink’s actions elaborate the implants-first route, much of this post will focus on the masses-first route. (To be clear, bci/acc includes all routes.)
The next sections elaborate on masses-first bci/acc as follows. First, I will briefly review some emerging technologies that will help. Then, I survey some candidate BCI killer apps. Then, I describe how demands from market forces and ASI risk will drive BCI performance up. Finally, I describe how many iterations take us to human superintelligence (HSI), a new phase for Humanity.
3.2 Baseline Tech for bci/acc
Humanity’s technology capability frontier keeps expanding. This section explores technologies on the market that are adjacent to bci/acc. They can be used as lego blocks towards launching the first BCI killer apps.
3.2.1 EEG for Typing (“Silent Messaging”)
EEG for typing keeps improving. As mentioned earlier, as of 2023 researchers could type via BCI at 62 words per minute (that record used implanted electrodes; pure EEG is slower, but improving steadily). How do you think Stephen Hawking wrote his books? (Not EEG, in fact, but a cheek-muscle switch: proof of how far even a trickle of bandwidth can go.)
3.2.2 EEG for Focus
Other companies are targeting the mainstream with EEG. For example, Neurable is making consumer BCI headphones to help people focus. You put on their headphones, which detect electrical signals on the skin around your ears, and they ping you when you fall out of focus. There’s also EEG to track emotions, alertness, arousal, meditation effectiveness, and more [Ref].
3.2.3 Subtitles on Glasses
XRAI, Vuzix and others offer glasses with subtitles for the deaf: “hear with your eyes”. The glasses have a microphone to capture audio; AI-based voice recognition transcribes it; the text is rendered on the subtitle display. The tech can be inexpensive, since the subtitles can use 1970s-era LCD displays and 99% of the rest can run on a smartphone.
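A minimal sketch of that pipeline, assuming the open-source openai-whisper package for transcription; capture_audio_chunk() and render_subtitle() are hypothetical stand-ins for the device’s microphone and display drivers:

```python
# Sketch: glasses-subtitles pipeline (mic -> speech-to-text -> display).
# Assumes the `openai-whisper` package; capture_audio_chunk() and
# render_subtitle() are hypothetical stand-ins for device drivers.
import whisper

model = whisper.load_model("base")   # small model; phone-class hardware

def capture_audio_chunk() -> str:
    """Hypothetical: record a few seconds of mic audio, return a WAV path."""
    return "chunk.wav"

def render_subtitle(text: str) -> None:
    """Hypothetical: push text to the glasses' subtitle display."""
    print(f"[subtitle] {text}")

while True:
    result = model.transcribe(capture_audio_chunk())
    render_subtitle(result["text"])
```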
3.2.4 AI-powered glasses with voice interface
Eleven years ago, we had Google Glass doing this. It was officially scrapped due to privacy concerns, and unofficially because society just wasn’t ready for it. Since then, we’ve had another decade of smartphone evolution and adoption. We’re in an Instagram x TikTok era where privacy matters less, for better and for worse.
In October 2023, the Ray-Ban | Meta smart glasses shipped. The device records and stores video directly from the glasses. You can tap it to send photos or videos to friends. There was no privacy or weirdness pushback. The Overton Window had shifted: 11 years was more than enough for society to be ready. From personal experience: they’re lightweight to wear, and according to Ray-Ban employees they’re selling briskly.
3.2.5 AR Goggles + Hand Gestures: Meta Quest 3
The Meta Quest 3 was released in October 2023. Whereas its predecessors were Virtual Reality (VR) goggles, it brings in the real world: Augmented Reality (AR), aka mixed reality or spatial computing. It scans your room, and renders real-plus-overlay into your headset’s display. It tricked my brain into “being there”. You can control it with hand gestures, but these are still unreliable; the Quest still supports handheld controllers.
3.2.6 AR Goggles + Eye Tracking: Apple Vision Pro
For any given device idea, Apple may iterate for years or decades before they release it, if ever. Why? Because they only release when the device not only “doesn’t suck”, but is actually pleasant or delightful to use. This was the case for phones, for tablets, and for cars (still a WIP).
It’s also the case for AR goggles. They have patents on AR going back two decades. Yet they finally put a device up for presale on Jan 19, 2024: The Apple Vision Pro. From Apple’s perspective, they’ve cracked AR well enough to release something pleasant or delightful.
What’s changed? Eye-tracking-based input. Eye tracking has been used for medical research for decades, and for more widespread things like consumer marketing for 10+ years. You can use eye tracking to type, move a cursor, click a button, and more.
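As a concrete illustration of how gaze becomes a “click”, here’s a minimal dwell-time selection sketch; the gaze-sample source, sample rate, and key layout are all hypothetical:

```python
# Sketch: dwell-time "click" for eye-tracking input.
# A key is selected when gaze stays inside its box for DWELL_SECONDS.
from dataclasses import dataclass

DWELL_SECONDS = 0.8   # typical dwell thresholds are a fraction of a second

@dataclass
class Key:
    label: str
    x0: float
    y0: float
    x1: float
    y1: float   # bounding box corners

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def dwell_select(gaze_samples, keys, hz=60):
    """gaze_samples: iterable of (x, y) at `hz` samples/sec (hypothetical)."""
    needed = int(DWELL_SECONDS * hz)
    current, count = None, 0
    for x, y in gaze_samples:
        hit = next((k for k in keys if k.contains(x, y)), None)
        if hit is current and hit is not None:
            count += 1
            if count >= needed:
                yield hit.label          # the "click"
                current, count = None, 0
        else:
            current, count = hit, (1 if hit else 0)
```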
Apple Vision Pro has eye tracking. Knowing Apple’s approach to new devices, they probably already have interfaces to type, move a cursor, and click buttons — all hands free, accurate, and pleasant.
As it rolls out, there’s a good chance people will find it as magical as multi-touch on phones. Eye-tracking is to AR control what multi-touch is to phones. It may be the remaining piece to take AR beyond video games and truly mainstream. And it will become table stakes for AR; expect Quest 4 to have it.
I can’t emphasize this enough: eye-tracking may be the “unlock” that makes these head-mounted glasses or goggles actually useful.
3.2.7 Eye-Tracking is BCI (!)
Eye tracking offers the hands-free benefits of BCI with the accuracy of moving your hands. Eye tracking feels like BCI: moving your eyes doesn’t really feel like movement. Yet it’s nearly as accurate as moving your hands, because ultimately eye tracking is motor control.
If a 20-year-old university student’s eyes are bloodshot, there’s a good chance they are hungover, got little sleep, or both. To generalize this, our eyes tell a lot about our health. In the last few years, there’s been an explosion of research using HD images or videos for medical diagnosis or treatment. A recent “Frontiers in Neuroscience” edition had 23 articles dedicated to this topic, including this intro.
So: (1) eye tracking takes HD videos of eyes; (2) HD videos of eyes are sensors for brain activity; (3) therefore, HD video of eyes is a BCI sensor. Modern eye tracking takes HD videos of your eyes. Thus, modern eye tracking is BCI.

[Quote from Frontiers in Neuroscience]

3.3 Candidate BCI Killer Apps
We’ve covered how ASI is coming, and how Humanity’s best chance to stay competitive is to accelerate BCI and take it to the masses (bci/acc). To get BCI to mass adoption, we need an application of BCI that the masses really want to use — a killer app.
We don’t know which killer app might take off first. However, we can explore possibilities. This section reviews some of those.
3.3.1 Candidate killer app: Silent Messaging
Just as Neal Stephenson’s 1992 novel “Snow Crash” is the archetypical vision for Virtual Reality, Vernor Vinge’s 2006 novel “Rainbows End” is the archetype for Augmented Reality.
Infused throughout Rainbows End, there’s a special <sm> tag for when the characters are messaging each other with “silent messages” (SMs):
Vinge leaves the reader to infer what specifically SMs are. But one soon realizes that it’s messaging each other simply by thinking about it. Yes, telepathy, but presented as just part of the furniture, and it just works, therefore “pragmatic telepathy”.
SMing = silent messaging = sending text or voice by thinking about it. Send = eye-tracking / EEG / etc. Receive = subtitles on glasses.
How would we do this? One inputs messages via EEG BCI, eye-tracking, or subvocalization. One receives messages via subtitles on glasses or goggles, or via audio in the ear.
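Here’s a minimal sketch of the pieces; every function is a hypothetical stand-in, since the point is the shape of the pipeline rather than any real driver API:

```python
# Sketch: one half of a silent-messaging (SM) session. All functions are
# hypothetical stand-ins; the transport could be any real network layer.
def read_silent_input() -> str:
    """Hypothetical: decode a short message from EEG / gaze typing /
    subvocalization."""
    return "meet at the cafe in 10?"

def render_subtitle(text: str) -> None:
    """Hypothetical: show text on the glasses' subtitle display."""
    print(f"[SM] {text}")

def send(peer_id: str, text: str) -> None:
    """Hypothetical: hand the message to the network layer."""
    print(f"-> {peer_id}: {text}")

# Sending: "think" a message, and it goes out silently.
send(peer_id="alice", text=read_silent_input())

# Receiving: a peer's message arrives and appears as subtitles.
render_subtitle("on my way")
```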
Specific implementations are any combination of the above. Examples:
- Glasses with subtitles, plus EEG BCI sensors on the top of the glasses, touching your forehead inconspicuously.
- An Apple Earbud-like device that captures subvocalizations, then synthesizes speech and outputs it to others as audio.
- Apple Vision Pro, using eye-tracking input and subtitles-based output. Therefore society may get (pragmatic) telepathy upon the release of Apple Vision Pro (!).

3.3.2 Candidate Killer App: Internal Dialog
Imagine Jiminy Cricket on your shoulder, sharing advice or facts when you call upon him. Without having to pull out your phone and type; without having to read results on your screen. “What’s the capital of Portugal?” “Is this person lying to me?” “What’s next on my TO-DO list today?”
Achieving this is straightforward: type with BCI / eye-tracking / subvocalization; send it to a ChatGPT-style bot; render the output visually in the glasses / goggles, or as audio.
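A minimal sketch of that loop, assuming the OpenAI Python client (`pip install openai`); the decode/render functions are hypothetical device stubs, and the model name is just a placeholder:

```python
# Sketch: internal-dialog loop. Requires `pip install openai` and an API
# key; decode_query() / render_answer() are hypothetical device stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def decode_query() -> str:
    """Hypothetical: get the user's silent question (EEG / gaze / subvocal)."""
    return "What's the capital of Portugal?"

def render_answer(text: str) -> None:
    """Hypothetical: show on the glasses' display, or speak in the earpiece."""
    print(f"[Jiminy] {text}")

resp = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model choice
    messages=[{"role": "user", "content": decode_query()}],
)
render_answer(resp.choices[0].message.content)
```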
3.3.3 Candidate killer app: Perfect Memory
Here, you record images / audio / video with glasses, goggles, or a necklace-style device like Rewind Pendant. This gets stored locally or globally.
You search for the recordings via EEG BCI, eye tracking, or sub-vocalization. Or, use near-infrared non-invasive BCI on the back of your scalp to see what’s going on in your visual cortex. It doesn’t need to be perfect; it just needs to be good enough to serve as a query across video feeds. Even ten years ago, research results were extremely promising.
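One plausible shape for that search, as a sketch: embed each stored clip and the (noisy) brain-derived query into a shared vector space, then rank by cosine similarity. The encoders are hypothetical; only the ranking logic is shown:

```python
# Sketch: "perfect memory" retrieval as nearest-neighbor search.
# Hypothetical encoders would map video clips and decoded brain signals
# into one shared vector space; here we only show the ranking step.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_memories(query_vec: np.ndarray,
                    clip_vecs: list[np.ndarray],
                    top_k: int = 5) -> list[int]:
    """Return indices of the top_k stored clips most similar to the query."""
    scores = [cosine(query_vec, v) for v in clip_vecs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:top_k]

# The query doesn't need to be perfect; it just needs to rank the right
# clips near the top, a much lower bar than full decoding.
```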
Once you’ve found the memory, it gets rendered in the glasses or goggles.
You won’t have to retrieve anything by moving your fingers around; you’ll just move your eyes, or think with the EEG, and the videos will appear. Everything you ever saw, you’ll have perfect memory of. It will feel magical.
Perfect memory. (1) Record via glasses cam, then store. (2) Retrieve via eye-tracking / EEG / etc. (3) Project the result on the glasses’ display.

3.3.4 Candidate killer app: Share Visual Memories
Here, you search & retrieve videos as in “perfect memory” above.
Then, you click “share” and choose “to whom” via BCI / eye-tracking / sub-vocalization.
A picture’s worth a thousand words: we’ll be able to communicate with others at higher bandwidth than ever before.
3.3.5 Candidate Killer App: Talk in Pictures
Here, you share video with others, but you’re no longer bound by what you’ve seen or found. Rather, you type (via BCI etc) to prompt a generative AI art system. You do this in real time, and send the images / videos in real time to someone else. They see it and respond, in images / video.
Now. You’re. Talking. In. Pictures.
3.3.6 Candidate Killer App: Talk in Brain Signals
We can go further than talking in pictures. If the devices always displayed raw brain signals alongside text or images, then over time our brains would learn the mapping. It wouldn’t be much different than learning Spanish, sign language, or Morse code. Our brains can handle unusual inputs, like learning to see with your tongue. The net result: we could communicate directly in raw brain signals. AI research often finds “direct” to be better than using intermediate features, given enough data. It’s a brain-brain interface.
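To make “learning the mapping” concrete, here’s a toy sketch on synthetic data: given paired examples of (brain-signal features, message embedding), fit a simple decoder. Ridge regression is a deliberately simple stand-in for whatever model one would actually use:

```python
# Sketch: learn a mapping from raw brain-signal features to message
# embeddings from paired examples. Ridge regression is a deliberately
# simple stand-in; the data here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_pairs, signal_dim, embed_dim = 500, 64, 16

X = rng.normal(size=(n_pairs, signal_dim))           # brain-signal features
true_map = rng.normal(size=(signal_dim, embed_dim))  # unknown ground truth
Y = X @ true_map + 0.1 * rng.normal(size=(n_pairs, embed_dim))  # embeddings

decoder = Ridge(alpha=1.0).fit(X, Y)
print("R^2 on training pairs:", decoder.score(X, Y))  # near 1.0 on toy data
```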
From this, a new kind of language (a neural language) could emerge, which will chunk lower-level abstractions into higher-level ones for yet higher bandwidth [Rutt2024c]. We’ll have transitioned from skeuomorphic languages for our brain (text/images as a bridge to the past, tuned for the outer world) to brain-native languages (tuned to our inner world).
This approaches the long-held science-fiction dream of the “mind meld”: “a telepathic union between two beings; in general use, a deep understanding.” We can start building primitive versions now.
Mind-meld: talk in pictures, raw brain signals, or a new neural language

3.4 The Journey to High-Bandwidth BCI
We’ve discussed the risk from ASI, how BCI is the most pragmatic path, BCI challenges (engineering, regulatory, societal), and possible BCI killer apps to kick-start usage by the masses. What then? This section explores how market forces and ASI risk will drive further evolution and adoption of BCI, including a transition to more invasive technologies.
3.4.1 Introduction
A silicon-stack co-brain offers 100x+ more storage and 100x+ more compute, compared to bio-stack brains (our current brains). Alas, these gains are held back by the low bandwidth between the bio-stack brain and the silicon-stack co-brain.
Non-invasive techniques like EEG, eye-tracking and subvocalization can only take us so far [BciTech]. There’s an upper bound to their bitrates; it’s not very high; and we’ll probably squeeze every last bit from them.
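A rough sense of scale, with back-of-the-envelope numbers (assumed, not measured):

```python
# Back-of-the-envelope BCI bitrate. All inputs are rough assumptions.
wpm = 62               # best reported decoding rate, per the text
chars_per_word = 5     # common rule of thumb
bits_per_char = 1.3    # Shannon-style estimate of English entropy

bits_per_sec = wpm * chars_per_word * bits_per_char / 60
print(f"~{bits_per_sec:.0f} bits/s")   # single-digit bits per second

# Compare: consumer wireless links run at ~1e8 bits/s. Even 100x better
# decoding leaves the brain<->silicon link many orders of magnitude behind.
```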
And. There are invasive techniques that promise 100x+ more bandwidth. Most promising are chip implants, and optogenetics. Let’s review those, then see how those might enter mainstream usage by healthy humans.
3.4.2 Bandwidth++ via Implants
Here, a doctor or machine opens up a portion of your skull, slips in a chip, and seals it back up. That chip then talks to your brain, and wirelessly to computers. 100x+ the bandwidth compared to EEGs, boom.
Research has happened for decades. Neuralink is a leading example. It’s in early stages of human trials.
Implants (conceptual)

3.4.3 Bandwidth++ via Optogenetics
Optogenetics enables reading & writing on the brain. One gets an injection containing a “useful virus” that changes specifically targeted neurons to fire when light is shined on them; and more. Put precisely:
“Optogenetics is a technique to control or to monitor neural activity with light which is achieved by the genetic introduction of light-sensitive proteins. Optogenetic activators [“opsins”] are used to control neurons, whereas monitoring of neuronal activity can be performed with genetically encoded sensors for ions (e.g. calcium) or membrane voltage. The effector in this system is light that has the advantage to operate with high spatial and temporal resolution at multiple wavelengths and locations”.
Optogenetics research is proceeding. As of 2021, there were four clinical trials involving optogenetics (on humans).
Optogenetics is promising for mass BCI because it’s less invasive than chip implants (injection vs surgery) and maybe more bandwidth (across the whole brain, yet fine-grained).
However, due to genetic manipulation and coaxing neurons to respond to (and report via) light, many side effects are possible. For example, what if the brain fires too much and causes a seizure? Nonetheless, given ASI risk, research needs to proceed with even more urgency than before. It will need to get past a bike-shedding problem, as the next section elaborates.
Optogenetics (conceptual)

3.4.4 Invasive BCI regulation has a bike-shedding problem
None of the research on implants or optogenetics is (officially) aimed at healthy humans; it’s all for fixing human ailments.
Why? Because it’s already super-hard to get regulatory approval for human trials even to fix ailments; approval for enhancing healthy humans has seemed unattainable.
Why? Put yourself in the shoes of a regulator. You’re used to balancing risk vs reward for a narrowly-scoped problem to fix a specific human medical ailment. You’re not used to balancing risk vs reward for a civilization-scoped issue, to avoid a non-medical existential risk for all Humanity. (Despite being the gatekeeper for that.)
So what do you do? You focus on what you know, and dismiss the existential risk. This has a term: bike-shedding. It’s when a safety committee for a nuclear power plant spends 95% of its time discussing the bike shed, because it isn’t equipped to do anything about the big hairy nuclear-risk issue.
BCI research is being bike-shedded right now. I’m hopeful that this will change as regulators and their higher-ups recognize the issue.
3.4.5 What will tip invasive BCI into the mainstream?
Given the current regulatory constraints, how can invasive BCI accelerate into the mainstream? I see two main forces driving demand to make this happen: fixing ASI risk, and market forces.
ASI Risk. Ideally the regulators of large nations recognize the bike-shedding bias and reduce BCI restrictions, perhaps being super-aggressive to accelerate BCI via a “BCI Manhattan Project”. This could build on existing BCI-for-defense research like DARPA’s decades-long program.
Smaller hungry nations may take the lead, for the $ and the PR. There’s $ and PR incentive to nations that loosen rules to meet market demand, such as Estonia’s E-residency, China’s Shenzhen special economic zone, and Singapore’s crypto regulations. There’s a growing trickle in the medical domain. For starters, via the Zuzalu project, Montenegro recently lightened rules to catalyze longevity research. Most interestingly, Honduras already has very light rules for medical testing: Bryan Johnson recently leveraged it to get a novel gene therapy there; there’s nothing stopping aggressive BCI testing in Honduras. Growing movements like Network State and Blueprint will further catalyze this jurisdictional arbitrage for invasive BCI.
Market forces. Consumers who start with non-invasive BCI will demand more performance, therefore more bandwidth, which means invasive BCI. Thus, there’ll be a bottom-up consumer push for invasive BCI.
When consumers see others who use BCI for medical treatment receiving benefits far beyond getting healthy again, they’ll get particularly insistent.
People with the $ who are ready to accept the high risk and high reward will fly to Honduras for medical invasive-BCI tourism. Or they’ll build their own, just as Professor Xavier built Cerebro BCI in X-Men. Military BCI will leak into criminals and the black market, then into mainstream to satiate demand, like in Strange Days and Cyberpunk: Edgerunners. Businesses will sprout up to get “medical BCI” in the hands of anyone who asks, like we saw for medical marijuana in California.
Those with the $ and risk tolerance to get high-bandwidth BCI first will enjoy a significant advantage. This will raise legitimate questions about fairness. Ideally, cost and risk will come down quickly, to make it broadly accessible. Let’s see.
Left: Professor Xavier using Cerebro in X-Men. Middle: spinal-implant BCI in Cyberpunk: Edgerunners.

3.4.6 Mainstream invasive BCI will grow, a lot
We just covered how invasive BCI will tip into the mainstream. What happens next? It’s the economics, silly. There was great demand even before invasive BCI, despite limited bandwidth. Invasive BCI will unlock massive bandwidth, critically, to a market demanding it. So BCI growth rate will steepen.
The BCI market will merge with the $500B smartphone market, if it hasn’t already merged pre-invasive. iPhone 20 or 25 will be BCI-based, perhaps via a merge with Apple Vision Pro. Meta Quest 7 or 10 will get invasive BCI to complement eye-tracking and other non-invasive BCI. Neuralink will launch their “phone”. Expect Samsung, Microsoft, OpenAI and others to get in the game too. A lot of $ is up for grabs.
Shmoji on Twitter: “whenever people ask Elon why none of his companies have made a phone yet, he responds ‘Neuralink’. You won’t need a phone.”
What then? The devices will evolve and improve, subject to intense competition, one generation to the next, like mobile phones did over the past 40+ years.
Market forces and the Silicon Midas Touch drive performance. We’ll see 10x in bandwidth, which unlocks 10x+ more storage and compute. Then 100x in bandwidth, unlocking more storage and compute yet. Then, especially once we go semi-invasive/invasive with optogenetics or implanted chips, we’ll see 100x+ bandwidth, and corresponding 100x+ in storage and compute.
Moore’s Law and AI improvements within BCI apps will further catalyze usefulness and demand. As a recent example, “BrainGPT” uses LLMs to interpret brain data for significant error reductions.
3.4.7 Your BCI will be part of *you*
We are all natural born cyborgs: when you ride a bike, it becomes part of you, as far as your brain is concerned. Same for keyboards.
The same will be true for BCI.
As far as your brain is concerned, your BCI — and the computers that you access — will be part of you.
3.4.8 Hyper-localized AI Alignment
You’ll be a cyborg with a bio-stack meatbag brain and a silicon-stack brain working in unison. It will feel as natural as using a keyboard or riding a bicycle.
The compute & storage of the silicon-stack will have its own AI-based automation, to abstract complexity from the bio-stack side. As its compute & storage grows, we can expect emergent intelligence (in the John Holland sense). Then a concern arises: could the silicon stack AI take over the bio stack? The alignment problem rears its head here too! Fortunately, there’s a natural solution.
To maximize the chance that the silicon stack stays aligned with us, we ensure that processing and storage do not outpace bandwidth, at each evolutionary step along the way. This is no guarantee, however: what if the silicon side starts accessing way more compute from the internet?
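A toy sketch of that invariant, with made-up units and an arbitrary threshold, just to show the shape of the rule:

```python
# Sketch: "don't let silicon-side capability outpace the link to the
# bio-side." Units and the threshold are illustrative, not real numbers.
MAX_COMPUTE_PER_BANDWIDTH = 1e6   # arbitrary illustrative bound

def upgrade_allowed(new_compute: float, new_bandwidth: float) -> bool:
    """Gate each evolutionary step: capability may grow only as fast as
    the human<->silicon bandwidth that keeps it aligned with its human."""
    return new_compute / new_bandwidth <= MAX_COMPUTE_PER_BANDWIDTH

assert upgrade_allowed(new_compute=1e9, new_bandwidth=1e4)       # ratio 1e5: ok
assert not upgrade_allowed(new_compute=1e12, new_bandwidth=1e4)  # too fast
```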
This is different from traditional AI alignment approaches: here we are aligning the AI in real time, aligning it with our selves. It’s hyper-localized to each of us. It’s one aligned AI per human, rather than 1 or 10 for the human race. Therefore it’s 10 billion times more fine-grained and personalized. It’s AI alignment taken to the limit (in the calculus sense). There’s no guarantee that this will work. But it’s highly promising.
3.5 The Journey to Human Superintelligence
This essay started with ASI risk, and has shown a path to accelerate BCI and take it to the masses. The previous section showed how market- and risk-driven evolution takes us to high-bandwidth BCIs. This section picks up from there.
At first, the silicon-stack brain’s power will be much weaker than the bio-stack one, bottlenecked by bandwidth. Then, we’ll increase bandwidth iteratively, with corresponding unlocks in compute and storage.
The silicon-stack side will reach parity with the bio-stack side.
Then it will start to surpass it.
We’ll keep going, as the market will demand it. [Rutt2024d]
The silicon-stack side will become radically more powerful than the bio-stack side.
And that will be fine with us! We’ll have gone through each BCI generation iteratively, rather than having it sprung on us all at once. Our worries will abate, because the silicon-stack AI will be aligned.
In fact, the silicon-stack will feel like part of *you* as far as you’re concerned, because you’re a natural-born cyborg along with the rest of us. The emergent patterns of intelligence on the silicon-stack side will be wholly our own.
It will feel like the most natural thing in the world.
Each of us will grow our compute & storage by 10x, 100x, 1000x, more. The silicon-stack emergent patterns of intelligence — part of us — will grow 1000x too. Yet we will still be humans.
We will have grown to achieve human superintelligence.
There’s more. Let’s say you’ve got to 1000x storage & compute, 1000x intelligence via the silicon-stack side of your self. Let’s say you’re now 90 years old and on your deathbed. Your bio-stack body and brain is dying.
Yet your bio-stack brain is now only 1/1000 the intelligence of the silicon-stack side. It’s probably been annoying you for a while, perhaps holding you back. And now it’s really holding you back, lying there.
What do you do?
You clip it like a fingernail.
And now you are on a pure silicon stack.
There’s more. Consider the possibility that in 100 years (or 20) that the majority of intelligences will be on a silicon or post-silicon substrate. Some will have human origin, some will have pure AI origin, and some will have a mix. They will all be general; they will all be sovereign; they will all be superintelligent. They are Sovereign General Intelligences (SGIs).
What will the landscape look like? Hyperion Cantos provides inspiration. SGIs will inhabit the datumplane: “common ground for man, machine, and AI.”
The datumplane: common ground for man, machine and AI
So we have a path to unbind ourselves from biological constraints while retaining our humanity. Which makes it a great time to ask: how big can you dream? What’s the biggest thing that civilization could possibly achieve? What do we want, as Humanity?
I mean Humanity in the broadest sense of the word: not just humans, but the multiple layers of civilization that encompass humans. Our thoughts and dreams, our patterns of intelligence, and how we want to self-actualize as a civilization.
As a baseline, we definitely know we don’t want to die, whether from asteroid strikes, nuclear holocaust or AIs terminating us all. “Not die” is a starting point. Accelerating BCI helps address all of those, because it allows us to easily be multi-planetary and be competitive with pure AIs.
“Not die” is an “away” viewpoint. Can we be more positive than that, with a “towards” perspective? Several steps more optimistic is: explore the cosmos, Star Trek style. That would be a grand adventure for Humanity on its own.
We can do better yet: let’s reshape the cosmos! Build Dyson Spheres to harness the power of stars directly — Kardashev Scale Type II. Reshape the cosmos at the scale of galaxies (Type III). Master energy at the level of the universe (Type IV). Even perhaps attain the knowledge to manipulate the universe at will (Type V). Now that would be an adventure for Humanity! Count me in [Kard].
A grand adventure for Humanity: explore and reshape the cosmos!

3.6 Cognitive Liberty
It’s one thing for the surveillance state or surveillance capitalism to monitor our electronic lives, as they do now. We’ve almost come to terms with it, whether we like it or not. But what about monitoring our thoughts? This will be a red line for most people, and rightly so. If our thoughts cannot be private, we risk freedom and personal sovereignty.
This is the concept of “cognitive liberty” or the “right to mental self-determination”: the freedom of an individual to control their own mental processes, cognition, and consciousness. I was happily surprised to discover a book-length exposé of the issue in “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology” by Nita A. Farahany.
A Web3 framing is: “Your keys, your thoughts. Not your keys, not your thoughts”. Web3 points to a potential starting point too: you hold the keys to your data, and use infrastructure like Arweave to store your brain data and Ocean Protocol to manage it. Appendix 5.1 elaborates. But this is only a partial solution; there will be many devilish challenges to work out. For example: if you hold the keys in your head, will the BCIs see those thoughts too? There are dozens to thousands of person-years’ worth of R&D needed here.
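A tiny sketch of the “your keys, your thoughts” starting point, using the `cryptography` package: brain data gets encrypted client-side before it ever touches storage, so only the key-holder can read it. (Where the keys live is, as noted, itself an open problem.)

```python
# Sketch: client-side encryption of brain data before off-device storage.
# Requires `pip install cryptography`. Key management is the hard,
# unsolved part; this only shows that storage never sees plaintext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # "your keys": held by the user alone
cipher = Fernet(key)

brain_data = b"raw EEG frame 000193 ..."    # stand-in for a real recording
ciphertext = cipher.encrypt(brain_data)     # what Arweave etc. would store

assert cipher.decrypt(ciphertext) == brain_data  # only the key-holder reads
```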
It’s hard to overstate the importance of cognitive liberty. We need more work on this, asap. I’d love to see more funding for research efforts here, not to mention BCI-acceleration efforts in general.
In an age of BCI, how do we protect our thoughts and retain cognitive liberty?

4. Conclusion
ASI is coming, perhaps in 3–10 years. Humanity needs a competitive substrate, in time for this. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses. That is bci/acc.
The “masses-first” bci/acc variant is to bring non-invasive BCI with killer apps like silent messaging to healthy people first; then to use market momentum to get over the invasive-BCI hump; and finally, to keep growing the power of each human’s bio-stack and silicon-stack brains. Looping this repeatedly, the net result is human superintelligence (HSI).
In perhaps 100 years (or 20) the majority of intelligences will be on a silicon or post-silicon substrate. Some will have human origin, some will have pure AI origin, and some will have a mix. They will all be general; they will all be sovereign; they will all be superintelligent: they are Sovereign General Intelligences (SGIs). They’ll be reshaping and exploring the cosmos, climbing the Kardashev scales.
bci/acc is solarpunk: optimistic and perhaps a little gonzo. It’s e/acc zoomed-in for BCI. And it could be a grand adventure for Humanity.
5. Appendices

5.1 Appendix: Sovereign Web3 Software
This section describes how many state-of-the-art Web3 systems are already sovereign — beholden to no one — and how Web3 capabilities will keep expanding for powerful sovereign agents.
Decentralized Elements of Computing. A blockchain is a network of machines (nodes) that maintains a list of transactions. It’s decentralized: no single entity owns or controls the network. With a decentralized list of transactions, we can then decentralize the elements of computing: storage, compute, and communications. “Web3” is a more accessible term than “blockchain”, but they basically mean the same thing.
Storage of Value. In blockchains, storage comes in two parts: storage of value, and storage of data. We already have a wonderful decentralized store of value, i.e. “digital gold”: Bitcoin, which stores BTC tokens. Released in 2009, it has a market cap > $700B and tens of millions of users. Just as Bitcoin has BTC as its native token, Ethereum has ETH, and so on; each is a store of value. Finally, there are tokens as decentralized scripts on top of a chain (e.g. ERC20).

Storage of Data. Smallish amounts of data can live on a chain itself; that’s how value is stored. We also have larger-scale decentralized data storage: Arweave and Filecoin are the leading projects. We have decentralized access control to that data via Ocean Protocol, and decentralized data feeds via Chainlink [LINK].

Compute (Processing). The first really great decentralized compute system was Ethereum, which came out in 2015. It runs smart contracts, which are simply small scripts running on decentralized compute hardware. It’s pretty expensive to do compute directly on Ethereum smart contracts, so there are ways that it’s scaling up. These include (a) more powerful “Layer 1” chains like Solana, (b) “Layer 2” chains, especially “Zero-Knowledge Rollup” L2s which enable compute to be off-chain with provable compute results stored on-chain, and (c) decentralized compute markets like iExec and Golem, and many more recent ones including AI-specialized ones.

Communications. Being decentralized networks, all blockchains have an element of communications built in. And, there are multi-chain protocols like Cosmos or Polkadot, and cross-chain protocols like THORchain, CCIP, and Chainflip.
Smart Contracts & Chains are Sovereign. Perhaps surprisingly, every smart contract running on a chain is sovereign 🤯 [Gov]. For example, Uniswap V2 is a popular decentralized exchange. Each Uniswap pool — say ETH/USDT — has its own smart contract. Each of those pools is a robot that just “does its thing”: holding liquidity, giving some USDT for people who bring it ETH, and giving some ETH for people who bring it USDT. There are no humans helping it, it “just runs”, you can’t turn it off, it’s not beholden to any specific individual, organization or jurisdiction. It is sovereign.
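To make “a robot that just does its thing” concrete, here’s the core constant-product math of a Uniswap-V2-style pool, sketched in Python (the real pool is an immutable Solidity contract on Ethereum; this only shows the rule it follows):

```python
# Sketch: constant-product AMM math (Uniswap V2 style), in Python for
# illustration only; the real pool is an immutable Solidity contract.
class Pool:
    FEE = 0.003   # Uniswap V2 charges 0.3% per swap

    def __init__(self, eth: float, usdt: float):
        self.eth, self.usdt = eth, usdt   # pool reserves

    def swap_eth_for_usdt(self, eth_in: float) -> float:
        """Bring ETH, get USDT, preserving (roughly) eth * usdt = k."""
        eth_in_after_fee = eth_in * (1 - self.FEE)
        k = self.eth * self.usdt
        new_eth = self.eth + eth_in_after_fee
        usdt_out = self.usdt - k / new_eth
        self.eth += eth_in
        self.usdt -= usdt_out
        return usdt_out

pool = Pool(eth=1_000, usdt=3_000_000)   # implies a price of 3000 USDT/ETH
print(pool.swap_eth_for_usdt(10))        # ~29,600 USDT, price impact included
```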
Each chain is sovereign too. Each chain is beholden to no one. It’s why Bitcoin can be framed as a life form.
These sovereign smart contracts and chains can do all the usual things: store & manage wealth, store & manage data, do compute, and communicate. Uniswap and Bitcoin answer to no one.
And, they have rights! While no one lobbied for these robots’ rights, and no law was created for these robots’ rights, they have rights nonetheless, because they can manipulate resources without asking. How? because the technology itself allows for it: it’s a dry code, not wet code approach to rights. It’s “your keys, your Bitcoin” for robots themselves. Do you get it yet anon?
AI & Agents for Web3. So far, chain-based robots haven’t been very smart. But this is changing as Web3 capabilities grow. Some examples:

Prediction is the essence of intelligence. Ocean Predictoor is a recent system for prediction feeds, powered by AI prediction bots and consumed by AI trading bots. The feeds themselves are sovereign; the bots can be too.

Fetch.ai and SingularityNET are Web3 systems to run decentralized agents (bots). These agents can be sovereign: no one owns or controls them.
The above AI * Web3 projects are by OGs in both AI & Web3. Advances in Web3 storage, processing, and communication have helped their capabilities. And the recent explosion in AI interest has brought a large new wave of AI * Web3 projects.
5.2 Appendix: AI Alignment Bingo
In 2022, Rob Bensinger tweeted the following text and image. It’s become a useful (and hilarious) reference in many AI & alignment crowds.
“Bad take” bingo cards are terrible, because they never actually say what’s wrong with any of the arguments they’re making fun of. So here’s the “bad AI alignment take bingo” meme that’s been going around… but with actual responses to the “bad takes”!
5.3 Appendix: Issues on ASI Risk Idea “dumber AIs aligning smarter ones”
This is the approach published by OpenAI in December 2023. The idea is to have a chain of AIs from dumb → smart, where each link is a dumber AI aligning the next-smarter AI.
Here, we elaborate on the issues.
- “Chain risk” framing. Each link needs to have crazy-high reliability, which likely isn’t achievable. Assuming independent failures, P(chain fails) = 1 − (1 − p1)(1 − p2)···(1 − pn) × (reliability of non-link components), where pi is the probability that link i fails. E.g. with 5 links, perfectly reliable non-link components, and a target of <1% chance of failure, each link must have roughly <0.2% (2e-3) chance of failure. (A quick check of this arithmetic follows below.)
- “Over-leverage risk” framing. This can be seen as over-leverage risk too. The 2008 financial crisis illustrates how over-leverage can go badly wrong. In 2008, there was a chain of derivatives on housing mortgages, like credit default swaps, which amplified billions into tens of trillions: home mortgage → 10x derivative → 100x derivative → 1000x derivative. Any fluctuation in home mortgages, such as slight changes to interest rates, rippled to 1000x effects downstream.
- Smartest entity might disagree. Rhetorically: could an ant align a bee, which aligns a mouse, which aligns a dog, which aligns a chimpanzee, which aligns a human? If you’re the human, would you let this happen?
- Smartest entity could change rules of weaker layers. This is the smartest entity not just disagreeing, but actively pushing the other layers to its own benefit. In the 2008 financial crisis, to make more $, the bankers at the top (final link) were highly incentivized to grow the $ volume of mortgages (first link). This resulted in craziness.
For example, a strawberry picker husband & wife with < $15,000 combined annual income obtained a loan for a $720,000 house, with no money down. They had no hope of paying back the loan; the chain couldn’t last; the chain broke; the 2008 financial meltdown happened.
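A quick check of the chain arithmetic above, assuming independent link failures:

```python
# Chain-of-aligners failure probability, assuming independent links.
def chain_failure_prob(link_fail_probs):
    p_all_succeed = 1.0
    for p in link_fail_probs:
        p_all_succeed *= (1.0 - p)
    return 1.0 - p_all_succeed

# 5 links, each with 0.2% failure probability -> ~1% chance the chain fails.
print(f"{chain_failure_prob([0.002] * 5):.4f}")   # ~0.0100

# Even fairly reliable links compound: 5 links at 5% each -> ~23% failure.
print(f"{chain_failure_prob([0.05] * 5):.4f}")    # ~0.2262
```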
To summarize, solving ASI risk with a chain of AIs has great risk on its own.
Acknowledgements
Thank you very much to the following people for review, discussion, and feedback on these ideas and this essay in particular: Jim Rutt, Mark Meadows, Albert Wenger, Lou de Kerhuelvez, Jeff Wilser, Kalin Harvey, Bruce Pon, Joel Dietz, Shan Sundaramahalingam, and Jeremy Sim.
And, thank you to Mark Meadows for the opportunity to share the ideas with NASA, and Lou de Kerhuelvez & Allison Duettman for the opportunity to share with Foresight Institute. Finally, thanks to the e/acc movement for the courage and optimism. (And for inspiring the “bci/acc” label; it’s an improvement on “Bandwidth++”.)
Notes
[Spec1999] The fair was Spectrum 1999, held every four years at the University of Saskatchewan’s College of Engineering. Some people skied no better than random; others had 0% error. This was a useful lesson on the high variability among people in BCI accuracy. I found the same thing in experiments on other BCI devices too.
[Tsh2012] The researchers’ tricks included: more sensors; active, not passive, sensing (visual evoked potentials); maximizing the rate of neural firing; error-correction codes; and AI.
[Rutt2024a] Thanks to Jim Rutt for this specific framing. To expand on Jim’s words, lightly edited: “it’s not just 1000x more powerful but qualitatively different. The ASI could actually deeply understand in total detail even the most complex book. That’s very different from how humans create a rough highly compressed representation. Human memories are faulty and low fidelity; machines are not. Clock speed is 1 ms (1e-3 s) for neurons, and sub-nanoseconds (<1e-9 s) for chips. Today meat minds have a huge advantage in parallelization, but that will eventually be solved in silicon.”
[Hyp1989] Many sci-fi novels explore potential relations between humans and ASIs, where they act as gods to humans. The Hyperion Cantos and A Fire Upon the Deep are two prominent examples; there are more.
[Verd2023] “Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI”, Lex Fridman Podcast #407, Dec 29, 2023 https://www.youtube.com/watch?v=8fEEbKJoNbU
The e/acc movement has a second argument: the general idea of deceleration runs against the physics tendency of entropy to grow over time. On earth, this manifests as evolution in biology (ability to acquire resources and reproduce), and as evolution in human organization (ability to acquire resources and reproduce: capitalism). Even in the highly unlikely event that somehow all the above deceleration efforts were successful, in the medium term entropy and evolution will route around them anyway.
[Snow2013] Remember, Edward Snowden’s 2013 revelations didn’t stop the goals of PRISM surveillance. Now, the USA and its allies simply get the data via big tech companies rather than directly.
[Weng2023] From private conversation with Albert Wenger in late 2023, soon to be public.
[Goer2013] This is an oft-repeated example by Ben Goertzel, from 2013 and likely earlier.
[Rutt2024b] Thanks to Jim Rutt for this idea, and inspiration for the wording. (Private correspondence.)
[Rutt2024c] Thanks to Jim Rutt to help develop this idea.
[BciTech] These include functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), transcranial stimulation like TMS (magnetic) and tFUS (focused ultrasound), and more. Each has its own strengths and weaknesses. bci/acc may use any of these. Endovascular BCI has a particularly promising tradeoff of minimally-invasive yet high-signal.
[Rutt2024d] A great question from Jim Rutt, lightly edited: “While the market is an excellent hill climber, there is no guarantee at all that it finds global maxima. Maybe there ought to be a political/social layer. Is this what humanity wants?”
[Kard] Once bci/acc unbounds humanity 100% from biological constraints, the “bci” part is less important. bci/acc generalizes back into e/acc. These Kardashev-scale goals are 100% in-line with the goals of e/acc.
Also: if we were bound by our bio stack, there would have been a hitch. The nearest star beyond our sun is Proxima Centauri. It takes 4.3 years to get there traveling at light speed. OK, doable. However, at Voyager speed (the fastest man-made device in space so far) it would take 73,000 years. That’s a time scale roughly 30x larger than the time since the ancient Greeks. Less practically doable. As sci-fi author Charlie Stross has said, “Sending canned primates was never going to end happily”. Good thing we’ve unbound ourselves from our bio stack!
[LINK] Chainlink got its start doing decentralized data feeds. As it’s grown, its scope has expanded to much more.
[Gov] Assuming no governance, which is true for a lot of smart contracts. Uniswap V2 has no governance, though V3 does.