Cortical Labs just wired approximately 200,000 living human neurons into a 1993 video game. The neurons started learning to play it.
Not a simulation. Not a digital model inspired by biology. Actual human brain cells, taken from adult donors, reprogrammed into stem cells, differentiated into neurons, mounted on a chip, and connected to Doom in real time.
The platform making this possible is called the CL1, Cortical Labs' commercially available biological computer, and the successor to their earlier research prototype, DishBrain. It is one of the more consequential developments in computing hardware that most developers have not looked at closely enough.
Here is what it actually is, how it works at a technical level, why it matters for the future of AI hardware, and why the ethics of this technology are already lagging behind the science.
DishBrain, CL1, and How We Got Here
Cortical Labs is a neuroscience and biotech startup based in Melbourne, Australia. Their work sits at the intersection of wet biology and digital computing.
The original DishBrain prototype was a research platform that grew neurons directly onto a high-density microelectrode array (HD-MEA). That array served as a two-way interface: delivering electrical stimuli into the cultured tissue and reading the neurons' electrical output back out. In 2022, lead researcher Brett Kagan and his team published results in Neuron showing that DishBrain could learn to play Pong in under five minutes, a task that takes standard deep reinforcement learning algorithms roughly 90 minutes.
That paper, peer-reviewed and open access, is the evidentiary foundation of everything that follows.
The CL1 is the commercial evolution of that research. It uses a 59-electrode array operating at sub-millisecond latency, an internal life-support system handling temperature control, gas mixing, waste filtration, and circulation, and a biological operating system called biOS that lets developers deploy code directly to the living neural network via a Python API. The neurons inside a CL1 unit routinely survive for six months or more. The first 115 units shipped in 2025 at $35,000 each.
The neurons themselves are derived from skin or blood samples taken from adult volunteers, reprogrammed into induced pluripotent stem cells (iPSCs), and then differentiated into cortical neurons before being cultured on the chip.
How the Doom Interface Works
Running Doom on the CL1 is more complex than Pong, and the implementation reflects that complexity honestly.
Doom presents a three-dimensional environment with enemies, weapons, spatial navigation, and real-time threat response. To connect this to neurons that have no visual system, engineers had to convert the game's visual and state data into patterns of electrical stimulation across the 59 electrodes. Neurons fire in response. Those firing patterns get translated back into in-game controls: movement, turning, and shooting.
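That closed loop can be sketched in a few lines of Python, with a mock culture standing in for the live tissue. Everything here is illustrative: the encoding scheme, function names, and decoding rule are assumptions for the sketch, not the actual biOS API.

```python
import numpy as np

N_ELECTRODES = 59  # the CL1's reported electrode count

def encode_state(enemy_angle_deg):
    """Map an in-game enemy bearing onto a spatial stimulation pattern
    across the electrode array (an illustrative scheme, not the real one)."""
    centre = (enemy_angle_deg % 360) / 360 * N_ELECTRODES
    idx = np.arange(N_ELECTRODES)
    # A Gaussian bump of stimulation centred on the electrode
    # corresponding to the enemy's bearing.
    return np.exp(-0.5 * ((idx - centre) / 2.0) ** 2)

def mock_neural_response(stim, rng):
    """Stand-in for the culture's firing pattern: a noisy echo of the
    stimulus. On real hardware this would be a read from the array."""
    return stim + 0.1 * rng.standard_normal(N_ELECTRODES)

def decode_action(firing):
    """Translate firing back into a game command by comparing total
    activity in the left versus right halves of the array."""
    left = firing[: N_ELECTRODES // 2].sum()
    right = firing[N_ELECTRODES // 2 :].sum()
    if abs(left - right) < 0.5:
        return "shoot"  # roughly centred activity: fire
    return "turn_left" if left > right else "turn_right"

rng = np.random.default_rng(0)
for angle in (90, 270):
    stim = encode_state(angle)
    firing = mock_neural_response(stim, rng)
    print(angle, decode_action(firing))
```

The real system does the same three things, stimulate, read, decode, at sub-millisecond cadence against living tissue rather than a mock response function.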
Critically, this was not a large internal research effort. The working Doom implementation was built by Sean Cole, an independent developer with no prior background in biological computing, in under a week using the CL1's Python API. That is a significant signal about the accessibility of the platform, and it is also a reason to be precise about what the Doom demo represents: a developer proof-of-concept, not a peer-reviewed research result.
The learning mechanism in CL1 is also more layered than pure predictive coding. For DishBrain's original Pong experiment, the theoretical framework was active inference via the Free Energy Principle, developed by co-author Professor Karl Friston at University College London. The neurons received unpredictable stimulation when they failed and predictable stimulation when they succeeded, driving adaptation toward stable firing patterns.
For Doom, the team added a second layer: an AI system that continuously refines how game information gets encoded into electrical signals sent to the neurons, optimizing the input to better shape the cells' behavior. The neurons are adapting. The AI is adapting the signal the neurons receive. It is a hybrid loop, not a purely biological one.
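The exact stimulation protocol is not public, but the published reward structure from the Pong work is simple enough to sketch: success earns a repeatable stimulus, failure earns noise the culture cannot predict. This is a minimal illustration of that asymmetry, not the team's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
N_ELECTRODES = 59

# A fixed, repeatable waveform across the array: the "predictable" reward.
PREDICTABLE = np.sin(np.linspace(0, 2 * np.pi, N_ELECTRODES))

def feedback_stimulus(hit_target):
    """Free-energy-style feedback: success earns the same predictable
    stimulus every time; failure earns an unstructured random burst."""
    if hit_target:
        return PREDICTABLE.copy()
    return rng.uniform(-1, 1, N_ELECTRODES)  # unpredictable burst

# Two failures produce different stimuli; two successes produce identical ones.
a, b = feedback_stimulus(False), feedback_stimulus(False)
c, d = feedback_stimulus(True), feedback_stimulus(True)
print(np.allclose(a, b), np.allclose(c, d))  # False True
```

Under the Free Energy Principle, a system that adapts to minimise surprise will drift toward the behaviour that keeps earning the predictable stimulus, which is what makes this a training signal rather than a mere reward label.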
Performance expectations should be calibrated accordingly. The neurons show evidence of seeking out enemies, shooting, and reorienting, but they lose frequently. Cortical Labs reports the neurons reached their current performance level faster than silicon-based machine learning baselines, which is the more meaningful claim.
Why This Is Different From Conventional AI
The distinction matters more than it might appear on the surface.
Energy efficiency is the most immediate engineering argument. Training GPT-3 consumed approximately 1,287 MWh of electricity, according to research published by Patterson et al. in 2021. The human brain runs on roughly 20 watts. The gap between those two numbers is not a rounding error. It is a fundamental constraint on where silicon-based AI can go.
Biological neurons close that gap by orders of magnitude. They learn with extreme energy efficiency, adapt their physical structure in response to experience, and do not require gradient checkpointing or distributed training infrastructure. A full 30-unit CL1 server rack draws only 850 to 1,000 watts total.
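To make the scale of that gap concrete, here is a back-of-envelope calculation using only the figures quoted above:

```python
GPT3_TRAINING_MWH = 1_287   # Patterson et al. (2021) estimate
BRAIN_WATTS = 20            # typical human brain power draw
CL1_RACK_WATTS = 1_000      # upper bound for a 30-unit CL1 rack

gpt3_wh = GPT3_TRAINING_MWH * 1_000_000        # MWh -> Wh
brain_years = gpt3_wh / BRAIN_WATTS / (24 * 365)
rack_years = gpt3_wh / CL1_RACK_WATTS / (24 * 365)

print(f"GPT-3's training budget would power a brain for ~{brain_years:,.0f} years")
print(f"...or a full 30-unit CL1 rack for ~{rack_years:,.0f} years")
```

The arithmetic works out to roughly seven thousand brain-years, or well over a century of continuous operation for an entire rack, from one training run's energy budget.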
Here is how the three main computing paradigms compare at a high level:
| Approach | Hardware | Energy Use | Adaptability | Maturity |
|---|---|---|---|---|
| Silicon AI (GPU/TPU) | NVIDIA H100, Google TPU | Very high | Requires retraining | Production-ready |
| Neuromorphic (silicon) | Intel Loihi 2, IBM NorthPole | Low | Event-driven, limited | Research / early deployment |
| Biological (wetware) | Cortical Labs CL1 | Extremely low | Structural, continuous | Early commercial / research |
Neuromorphic chips like Intel's Loihi 2 and IBM's NorthPole pursue energy efficiency using silicon architectures inspired by biological neurons. The CL1 takes a different path: instead of mimicking biology in silicon, it uses actual biology.
That is not a subtle distinction. Neuromorphic chips approximate the behavior of neurons. The CL1's neurons are neurons.
The Consciousness Problem Nobody Wants to Name
This is where most mainstream coverage quietly looks away, and where the conversation gets genuinely important for anyone building in the AI space. The neurons used in the CL1 are derived from human genetic material. They demonstrate measurable behavioral adaptation in response to feedback. Their activity patterns are structurally similar to those in developing biological brains. Consciousness researchers cannot currently agree on a universal definition or reliable test for detecting consciousness. The two dominant frameworks, Integrated Information Theory (IIT) and Global Workspace Theory (GWT), make different and sometimes conflicting predictions about what systems might warrant moral consideration.
And here's an uncomfortable parallel: When a large language model demonstrates unexpected emergent capability, the field responds with safety frameworks, alignment research, and governance proposals. When living human neurons demonstrate emergent adaptive behavior on a chip, most people make a Doom joke and move on. That asymmetry is worth sitting with, even if the current answer to the consciousness question is clearly negative.
The more the system scales, the more that question deserves a revisit. Cortical Labs is already thinking about it. The broader AI research community is mostly not. Those building applied AI systems will likely encounter the downstream implications first.
Where This Fits in the AI Hardware Landscape
The CL1 is not going to replace a GPU cluster next year. But framing it as a research curiosity misses what is actually happening.
The U.S. DARPA Biological Technologies Office is actively funding research in this space. European programs have been exploring hybrid biological-silicon systems for years. The current AI hardware bottleneck, driven by memory constraints and energy ceilings, makes biological alternatives more strategically interesting, not less.
The timeline for practical wetware computing is genuinely uncertain. But the proof-of-concept phase that the CL1 represents is the same phase that transformer architectures occupied around 2017. What GPT-class models became in five years is a useful reminder of how quickly a validated research direction can accelerate once the platform is accessible to outside developers.
The AI applications currently bottlenecked by energy and hardware limits, including real-time edge inference, continuous learning systems, and adaptive robotics, are exactly the use cases where biological efficiency would matter most. That is not a coincidence. It is the research agenda.
What the CL1 Doom Demo Actually Proves (and Does Not)
Precision matters here. These are the facts:
- Demonstrated: Living neurons on a chip show adaptive, goal-directed behavior in response to a complex 3D environment
- Demonstrated: The CL1 platform is accessible enough for an outside developer to build a working Doom interface in under a week using a Python API
- Demonstrated: The neurons reached current performance levels faster than silicon-based machine learning baselines
- Demonstrated: Biological tissue can serve as a functional computing element in a bidirectional digital interface at commercial scale
- Not demonstrated: Competent Doom gameplay. The neurons currently perform at beginner level and lose frequently.
- Not demonstrated: Consciousness, sentience, or subjective experience. Researchers explicitly state the system lacks the architecture for this.
- Not peer-reviewed: The Doom result. The 2022 Pong result is. The Doom demo is a product announcement with published source code on GitHub.
- Not demonstrated: Scalability to general-purpose AI workloads
The Doom demo is significant because it validates the platform's accessibility and shows adaptation in a complex environment. It is not a scientific paper. Treat the two accordingly.
A Timeline Worth Tracking
- 2021: DishBrain prototype debuts with approximately 800,000 neurons on an 8-electrode array
- 2022: DishBrain paper published in Neuron. Neurons learn Pong in under five minutes. Peer-reviewed and open access. Ethics paper published alongside.
- 2023 to 2024: CL1 development. 59-electrode array, biOS platform, Python API published on GitHub. Research into more complex environments begins.
- 2025: First 115 CL1 units ship commercially at $35,000 each.
- 2026: Independent developer Sean Cole builds working Doom implementation on CL1 in under a week. Cortical Cloud launched for remote developer access. Doom demo published on YouTube with source code on GitHub.
Each step has compressed the gap between proof-of-concept and a deployable platform. The pace matters because it determines how much runway the field has to build governance, ethics, and regulatory frameworks before the technology advances further.
The Practical Takeaway for Developers
You do not need to rewrite your ML pipelines today. But you do need to track this, and the CL1's open Python API means you can actually interact with it now if you want to.
The two constraints that will define the next decade of AI development are energy and adaptability. Biological computing directly addresses both. The foundational research is published in top-tier peer-reviewed journals. The commercial platform is shipping. The API is open. DARPA is funding it.
That combination does not describe a curiosity. It describes an early-stage paradigm shift at the moment it becomes accessible to outside developers.
The neurons are learning. The API is open. The question is whether the people building AI systems are paying close enough attention to notice what that means.
Frequently Asked Questions
What is the difference between DishBrain and the CL1?
DishBrain was Cortical Labs' original research prototype, built on an 8-electrode HD-MEA array and used in the 2022 Pong experiments published in Neuron. The CL1 is the commercial biological computer that succeeded it. The CL1 uses a 59-electrode array at sub-millisecond speeds, an internal life-support system keeping neurons alive for six months or more, a biological operating system called biOS, and a Python API that allows external developers to deploy code directly to the living neural network. The Doom demo was run on a CL1, not on the original DishBrain prototype.
How does the CL1 connect neurons to a video game?
Game state data, such as visual information and enemy positions, gets converted into patterns of electrical stimulation delivered to the neurons through the electrode array. The neurons fire in response, and those firing patterns get decoded into in-game actions like movement and shooting. For Doom, an AI layer also continuously refines how game information is encoded into electrical signals to better shape the neurons' responses. The neurons adapt to the stimuli, and the AI adapts the stimuli to guide the neurons. It is a hybrid biological-digital learning loop.
How well do the neurons actually play Doom?
Not well by gaming standards. Neurons show evidence of seeking enemies, shooting, and reorienting, but they lose frequently. The more meaningful claim is that they reached their current performance level faster than silicon-based machine learning baselines. The Doom demo is a proof-of-concept and platform showcase, not a claim of gaming proficiency.
Is the Doom experiment peer-reviewed?
No. The Doom demo is a product announcement video posted to YouTube, with source code available on GitHub. It was built by an independent developer named Sean Cole in under a week using the CL1's Python API. The foundational science behind the platform is peer-reviewed: the 2022 DishBrain Pong paper authored by Brett Kagan, Karl Friston, and colleagues was published in Neuron and is open access. Those are two distinct things and should be evaluated accordingly.
Where do the neurons in the CL1 come from?
The neurons are derived from voluntary adult donors. Skin or blood samples are taken from the donors, reprogrammed into induced pluripotent stem cells (iPSCs), and then differentiated into cortical neurons before being cultured on the chip. They are not taken directly from human brains. The process uses human genetic material as a starting point, which is part of why Cortical Labs maintains active collaboration with bioethicists and regulatory experts.
How is biological computing different from neuromorphic computing?
Neuromorphic computing uses silicon chips engineered to mimic the architecture and behavior of biological neurons. Systems like Intel's Loihi 2 and IBM's NorthPole use event-driven, spike-based signaling to approximate how neurons communicate, achieving much better energy efficiency than conventional GPUs. Biological computing, as in the CL1, uses actual living neurons as the computational substrate. The distinction is fundamental: neuromorphic chips model biological behavior in silicon, while biological systems use the real thing. Both pursue energy efficiency and adaptability, but through completely different approaches.
How energy-efficient is the CL1 compared to GPU-based AI?
Dramatically more efficient. Training GPT-3 consumed approximately 1,287 MWh of electricity, according to the 2021 Patterson et al. study on AI carbon footprints. The human brain runs on roughly 20 watts continuously. A full 30-unit CL1 server rack draws only 850 to 1,000 watts total while running experiments. Biological systems inherit this efficiency advantage because they use the same underlying substrate as the brain rather than approximating it in power-hungry silicon. As AI training demands scale, this gap becomes an increasingly serious engineering constraint on conventional approaches.