How to Upload a Mind (In Three Not-So-Easy Steps)
Cross-posted to the EA forum
This Rational Animations video is about the research and practical challenges of “whole brain emulation” or “mind uploading”, presented as a step-by-step guide. We primarily follow the roadmap of Sandberg and Bostrom’s 2008 report, linked in the notes. The primary scriptwriter was Allen Liu (the first author of this post), with feedback from the second author (Writer), other members of the Rational Animations team, and outside reviewers, including several authors of the cited sources. Production credits are at the end of the video. You can find the script of the video below.
So you want to run a brain on a computer. Luckily, researchers have already mapped out a trail for you, but this won’t be an easy task. We can break it down into three main steps: First, getting all the necessary information out of a brain; Second, converting it into a computer program; and third, actually running that program. So, let’s get going!
Our goal is to build a computer system that acts the same way a brain does, which we call a “whole brain emulation”. Emulation is when one computer is programmed to behave exactly like another, even if it’s using different hardware. For instance, you can emulate a handheld game console on your computer, and play games made for the real console on the emulated version. Similarly, an emulation of a human brain—or maybe the whole central nervous system—would be able to think and act exactly like a physical person. Alan Turing showed in the 1930s that any computer that meets certain requirements, including the one you’re using to watch this video, can in principle emulate any other computer and run any algorithm, given enough time and memory.[1] If the brain fundamentally performs computations, then our goal is at least theoretically achievable. To actually emulate a human brain, we’ll follow the roadmap given by Anders Sandberg and Nick Bostrom in 2008.[2] Crucially, we don’t need to fully understand every aspect of the brain in order to emulate it, especially hard philosophical problems like consciousness.
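The idea of emulation can be made concrete with a toy example. Below is a minimal, hypothetical sketch: Python (the “host”) interpreting a program written for an invented two-register “guest” machine. The instruction set is made up purely for illustration; it is not from any of the cited sources.

```python
# Toy illustration of emulation: a "host" (Python) running a program
# written for a hypothetical two-register "guest" machine.

def emulate(program, max_steps=1000):
    """Run guest instructions on host hardware, one step at a time."""
    regs = {"a": 0, "b": 0}
    pc = 0  # program counter
    while pc < len(program) and max_steps > 0:
        max_steps -= 1
        op, *args = program[pc]
        if op == "inc":                    # increment a register
            regs[args[0]] += 1
        elif op == "add":                  # regs[x] += regs[y]
            regs[args[0]] += regs[args[1]]
        pc += 1
    return regs

# Compute 3 + 4 on the emulated machine.
prog = [("inc", "a")] * 3 + [("inc", "b")] * 4 + [("add", "a", "b")]
print(emulate(prog)["a"])  # 7
```

The guest program neither knows nor cares that it is running on different hardware, which is exactly the property a whole brain emulation would need.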
But knowing it’s possible is one thing—implementation is another. Our first challenge will be to get the information we need from a human brain. Researchers aren’t yet sure what level of detail we’ll need, but research on small animals suggests we’ll at least need to map all the brain’s nerve cells, called neurons; the connections between them, called synapses; and model how each pair of connected neurons influences each other. We’re currently working on getting this information for C. elegans, a tiny transparent worm with just 302 neurons. We’ve found all the worm’s neurons and synapses, which are the same from worm to worm. Figuring out how they behave has proven more difficult, though we’re making some progress.
By observing the flow of calcium ions in living worms under a microscope, researchers are slowly developing statistical models that mimic the worm’s nervous system.[3] We can use this knowledge to determine how physical features of the worm’s synapses influence their behavior, one major tool for scaling our work up to human brains.
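A hypothetical toy version of this fitting problem is sketched below: we simulate activity traces from a small linear “nervous system” with known connection weights, then recover those weights from the recorded traces alone. Real calcium-imaging data is far messier, and real models are not linear; this only illustrates the spirit of fitting a statistical model to observed dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, noise = 5, 2000, 0.05  # toy sizes, not biological values

# Invented ground-truth connection weights, scaled so dynamics stay stable.
W_true = rng.normal(size=(n, n))
W_true *= 0.8 / np.max(np.abs(np.linalg.eigvals(W_true)))

# Simulate "recorded" activity traces: x[t+1] = W_true @ x[t] + noise.
x = np.zeros((T, n))
x[0] = rng.normal(size=n)
for t in range(T - 1):
    x[t + 1] = W_true @ x[t] + noise * rng.normal(size=n)

# Fit x[t+1] ≈ W x[t] by least squares on the traces alone.
W_fit = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T

print(np.max(np.abs(W_fit - W_true)))  # small: weights recovered from activity
```

The point is that connection strengths never observed directly can be inferred from enough recorded activity, which is what the worm experiments aim at.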
But human brains are much larger and noticeably not transparent, so we’ll need additional techniques. One option might be to work on preserved human brains. If we can preserve all of a brain’s relevant structures, we can catalogue them at our leisure. And we’ve made progress on this front, too. For example, neuroscience research company Nectome has successfully preserved animal brains[4] by filling them with preservative chemicals called aldehydes and cooling them down close to absolute zero. Techniques like these preserve not just the connections between neurons, but also biomolecules like proteins and mRNA within the neurons themselves, including the molecular changes associated with gene expression. However, we haven’t tested these techniques on human brains yet. And the more information we need to preserve to run our emulation, the harder the task of preservation becomes.
If we want to scan a particular living person’s brain instead of a preserved one, we may need to use advanced technologies like nanotechnology.[5] Nanotechnology is often treated like magic in science fiction, but we already know of real, natural nanomachines, such as viruses and mitochondria. If we can learn to make our own mitochondria-sized nano workers, a future brain scan might be performed by sending genetically engineered microorganisms into the brain. The microorganisms could then store the necessary information in their DNA to be extracted later. But that’s just one extremely speculative possibility. A less dramatic but more realistic possibility is that scanning brains in detail will simply get easier with incremental improvements in existing techniques like ultrasound, as we’ve seen with other technologies.
So let’s start scanning! Let’s assume we’ve solved scanning with one of these techniques, or something else entirely. What’s important is that we now have the data we need, and it’s time to turn our scan into a computer emulation. We’ll first need to take the raw brain-scan data and convert it into a form we can use: perhaps a big list of neurons and synapses, along with an accurate model of how each connection behaves. Given that there are around 100 trillion synapses in the brain, there’s no way we can do this manually. It will have to be automated one way or another, and AI will almost certainly be involved. We won’t necessarily need human-level AI; specialized systems based on today’s neural nets may be able to do the job. Suppose, for example, that the raw data from our brain scans is a colossal number of similar images. Then neural nets could help process those images into 3-dimensional maps of the brain regions we’ve scanned.
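The overall shape of that pipeline can be sketched with a toy stand-in. A real system would run trained neural nets over electron-microscope image stacks; here we just threshold a synthetic 3-D volume containing two invented “cell bodies” and catalogue the connected components. Everything below is illustrative, not an actual scan-processing method from the sources.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic 20x20x20 "scan volume" with two bright blobs as stand-in cells.
volume = np.zeros((20, 20, 20))
volume[3:7, 3:7, 3:7] = 1.0
volume[12:16, 12:16, 12:16] = 1.0
volume += 0.1 * rng.random(volume.shape)  # imaging noise

mask = volume > 0.5                       # segmentation step
labels, n_cells = ndimage.label(mask)     # group voxels into cells

# Build the "big list of neurons": an id and centroid for each cell found.
neurons = [
    {"id": i, "centroid": ndimage.center_of_mass(mask, labels, i)}
    for i in range(1, n_cells + 1)
]
print(n_cells)  # 2
```

The output, a labeled volume plus a catalogue of cells and positions, is the kind of structured intermediate a real interpretation pipeline would hand to the next stage.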
Now comes the hard part: determining how the brain’s fundamental structures that we’ve scanned, such as the synapses, actually operate. Hard, but not impossible. For example, by studying the synapses of smaller organisms we might be able to deduce how a synapse behaves from information we can easily gather, like its shape and position, perhaps using AI again. We also want our emulated brain to be able to learn and remember information, so we’ll need to understand how neurons and synapses grow and change over time. We’ll also need data on the timing of neurons firing, on how different incoming signals interact within a neuron,[6] and on the behavior of neurotransmitters, the biochemicals that allow signals to cross between neurons. And there may be challenges even beyond these; we just don’t know enough to say for sure right now. However we approach it, this is another area in which we’ll need automation and AI to do the bulk of the work, simply because of how much data we’ll need to analyze. The good news is that once we’ve constructed the first whole brain emulation, the task should get easier with every future attempt.
So we’ve processed our scan and our emulation is ready to go! The final piece of the puzzle is running our emulation on an actual computer. Of all the steps, this seems like the most straightforward, but it still might pose a challenge.
How much computing power do we need? As a first reference point, how much computing power does a human brain have? Sandberg and Bostrom found that other researchers’ best estimates put this at around 1 quadrillion (10^15) operations per second, comparable to a single high-end computer graphics processor in 2023.[7] The estimates in this range assume that most of the brain’s computation happens at the scale of synapses. If more computation is done at an even smaller scale, the true number could be much higher. On the other hand, if we can effectively abstract the behavior of groups of neurons, we might need much less processing power. As a high estimate, we can look at simulations of individual neurons. A 2021 paper[8] showed that the firing behavior of a single biological neuron can be modeled with more than 99% accuracy by an artificial neural net of around a thousand artificial neurons in 5 to 8 layers, at a cost of about 10 million operations for every millisecond of simulation time.[9] If we were to run this model for all 100 billion (10^11) or so neurons in an entire brain, we’d require about 1 sextillion (10^21) operations per second, a little less than a thousand times the power of the world’s top supercomputer in early 2023.[10] Computers’ processing power has been growing exponentially for decades, with the top supercomputer of 2023 being a thousand times more powerful than the top machine of 2008, 15 years earlier.
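The high estimate above is just multiplication, which we can make explicit:

```python
# Reproducing the back-of-the-envelope arithmetic from the text.
ops_per_ms_per_neuron = 1e7                          # ~10 million ops per simulated ms
ops_per_s_per_neuron = ops_per_ms_per_neuron * 1000  # real time: 1e10 ops/s per neuron
neurons = 1e11                                       # ~100 billion neurons in a brain
total_ops_per_s = ops_per_s_per_neuron * neurons
print(f"{total_ops_per_s:.0e}")  # 1e+21
```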
There are conflicting opinions on how long this trend can continue, but if progress doesn’t slow down too much, we should expect to reach 10^21 operations per second on a single supercomputer sometime in the late 2030s.[11] There are other challenges beyond processing power, such as getting enough high-speed computer memory to store our emulation’s data and moving that data to the processors quickly enough to run the emulation at full speed, but Sandberg and Bostrom conclude that those factors are likely to be solved before the processing power itself is available.
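The late-2030s figure follows from extrapolating the trend in the text: roughly 1000x growth per 15 years, starting from about 10^18 operations per second in 2023. The starting values here are the rounded ones from the text, so this is an order-of-magnitude sketch, not a forecast.

```python
import math

# Extrapolating ~1000x supercomputer growth per 15 years.
start_year, start_ops = 2023, 1e18   # top machine ~1 exaflop in early 2023
target_ops = 1e21                    # high estimate for a whole brain
growth_per_year = 1000 ** (1 / 15)   # ~1.58x per year

years_needed = math.log(target_ops / start_ops, growth_per_year)
print(round(start_year + years_needed))  # 2038
```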
Any one of the three main steps—the scanning, the interpretation, or the computing power—could turn out to be the most difficult piece of the puzzle.
If scanning is the hardest challenge, then soon after the first person’s brain is scanned we may have numerous emulations of that one person running around in the world.
If the most difficult step is converting our scan into an emulation, then by the time we figure that out we may already have full brain scans of a number of individuals ready to go. That could happen if, for example, interpreting a scan takes more computing power than running the resulting emulation.
If computer power is the limiting factor, either for running the emulation itself or to run our scan conversion algorithms, we might see steady progress as brain emulations of larger and more complex animals or regions of the brain are slowly developed on the most advanced supercomputers.
However we’ve arrived here, it’s been a difficult path. We’ve developed and refined new methods of neural scanning, advanced our understanding of the brain’s structure by leaps and bounds, and taken advantage of decades of progress in computing hardware. Now we’re finally ready to turn on our first whole brain emulation. It’s time to flip the switch and say hello to a whole new kind of world.
Notes
Turing, A.M. (1937), On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42: 230-265. https://doi.org/10.1112/plms/s2-42.1.230 ↩︎
Sandberg, A. & Bostrom, N. (2008): Whole Brain Emulation: A Roadmap, Technical Report #2008‐3, Future of Humanity Institute, Oxford University https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf ↩︎
Francesco Randi, Anuj K Sharma, Sophie Dvali, and Andrew M Leifer (2022): Neural signal propagation atlas of C. elegans, arXiv:2208.04790 [q-bio.NC] ↩︎
Rafi Letzter, “After Break with MIT, Nectome clarifies it has no immediate plans to upload brains” https://www.livescience.com/62212-nectome-grant-mit-founder.html ↩︎
Eth, D., Foust, J., & Whale, B. (2013). The Prospects of Whole Brain Emulation within the next Half-Century. Journal of Artificial General Intelligence, 4(3) 130-152. DOI: 10.2478/jagi-2013-0008 ↩︎
Songting Li et al. (2019): “Dendritic computations captured by an effective point neuron model”, https://doi.org/10.1073/pnas.1904463116 ↩︎
NVIDIA Ada GPU Architecture whitepaper, https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf ↩︎
Beniaguev, D., Segev, I., & London, M. (2021). Single cortical neurons as deep artificial neural networks. Neuron, 109(17), 2727-2739.e3. ↩︎
Joseph Carlsmith, 2020. “How Much Computational Power Does It Take to Match the Human Brain?” https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/ ↩︎
TOP500 supercomputer list, June 2022: https://www.top500.org/lists/top500/2022/06/ ↩︎
TOP500 performance development statistics: https://www.top500.org/statistics/perfdevel/ ↩︎