I largely agree with the main thrust of the argument. What would this line of thought imply for the possibility of mind-uploading? Do we need to simulate every synapse to recreate a person, or might there be a way to take advantage of certain regularities in the computational structure of the brain to convert someone’s memories/behavioral policies/personality/etc. into some standard format that could be imprinted on a more generic architecture?
A couple of quibbles, though:
Side note 1: I use “brain-inspired AGI” in the sense of copying (or reinventing) high-level data structures and algorithms, not in the sense of copying low-level implementation details, e.g. neurons that spike. “Neuromorphic hardware” is a thing, but I see no sign that neuromorphic hardware will be relevant for AGI. Most neuromorphic hardware researchers are focused on low-power sensors, as far as I understand.
Depending on what exactly you mean by “neuromorphic”, I take issue with this. If you want to use traditional CPU/GPU technology, I imagine that you could simulate an AGI on a small server farm and use that to control a robot body (physically or virtually embedded). However, if you want to have anywhere near human-level power/space efficiency, I think that something like neuromorphic hardware will be essential.
You can run a large neural network in software using continuous values for neuron activations, but the hardware it’s running on is only optimized for generic computations. “Neurons that spike” offer many advantages like power efficiency and event-based Monte Carlo sampling. Dedicated hardware that runs on spiking neuron analogs could implement brain-like AGI models far better than existing CPUs/GPUs in terms of efficiency, at the cost of generality of computation (no free lunch).
Does AGI itself require neuromorphic hardware *per se*? No. Will the first implementation of scalable AGI algorithms and data structures be done in software running on non-AGI-dedicated hardware? Probably. Will those algorithms involve simulating Na/K/Ca currents, gene regulation, etc. directly? Probably not. But will it be necessary to convert those algorithms and data structures into something that could be run on spiking/event-based neuromorphic hardware to make it competitive, affordable, and scalable? I think so. Eventually. At least if you want to have robots with human-level intelligence running on human-brain-sized computers.
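To make the efficiency intuition concrete, here is a toy sketch (entirely illustrative; the network size, dynamics, and sparsity level are made up). A rate-based layer pays for a full dense matrix-vector product on every step, whereas an event-based update only pays for the neurons that actually spiked, so the work scales with activity rather than with network size:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
W = rng.normal(0, 0.1, size=(N, N))    # toy synaptic weight matrix

# Rate-based update: every neuron contributes on every step,
# so the cost is always a full dense matrix-vector product.
def rate_step(x):
    return np.maximum(0.0, W @ x)       # ReLU "firing rates"

# Event-based update: only the neurons that spiked this step propagate
# anything, so the cost scales with the number of spike events.
def spiking_step(v, spikes, threshold=1.0, leak=0.9):
    v = leak * v + W[:, spikes].sum(axis=1)   # input from spiking neurons only
    new_spikes = np.flatnonzero(v >= threshold)
    v[new_spikes] = 0.0                        # reset the neurons that fired
    return v, new_spikes

x = rng.random(N)
v = np.zeros(N)
spikes = np.flatnonzero(rng.random(N) < 0.02)  # ~2% of neurons active

x = rate_step(x)                     # touches all N*N weights
v, spikes = spiking_step(v, spikes)  # touches only N*len(spikes) weights
print(f"rate step work ~ {N*N} ops; spiking step work ~ {N*len(spikes)} ops")
```

Dedicated event-driven hardware takes this one step further by not even clocking the idle neurons, which is roughly where the power savings come from.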
By the same token, in this sense, I expect that understanding the key operating principles of human intelligence will be dramatically easier than understanding the key operating principles of the nervous system of a 100-neuron microscopic worm!! Weird thought, right?!
This is wrong unless “key operating principles” means something different each time you say it (i.e. it refers to the algorithms and data structures running on the human brain, but then it refers to the molecular-level causal graph describing the worm’s nervous system). Which is what I assume you meant.
What would this line of thought imply for the possibility of mind-uploading?
In my mind it implies that we’ll invent human-level AGIs before we invent mind-uploading technology. And therefore we should all be working on the problem of creating safe and beneficial AGI! And then they can help us figure out mind-uploading :-P
But since you ask… I guess I’m intimidated by the difficulty of uploading a mind at sufficiently high fidelity that when you turn it on the “person” reports feeling the same and maintains the same personality and inclinations. I don’t think we would reach that even with a scan that measured every neuron and every synapse, because I suspect that there are enough sorta quasi-analog and/or glia-involving circuits or whatever, especially in the brainstem, to mess things up at that level of precision.
if you want to have anywhere near human-level power/space efficiency, I think that something like neuromorphic hardware will be essential.
I think a computer can be ~10,000× less energy-efficient than a human brain before the electricity costs reach my local minimum wage, right? So I don’t see near-human-level energy efficiency as a requirement for practical transformative AGI. Ditto space efficiency. If we make an AI that could automate any remote-work job, and one instantiation of the model occupies one server rack, well that would be maybe 1000× less space-efficient than a human brain, but I think it would hardly matter for the majority of applications, including the most important applications. (And it would probably still be more space-efficient than “a human in a cubicle”!)
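Spelling out that back-of-the-envelope arithmetic, with inputs that are my own rough assumptions rather than measurements:

```python
# Back-of-the-envelope; every input here is a rough assumption.
brain_power_watts = 20           # human brain runs on roughly 20 W
inefficiency_factor = 10_000     # hypothetical AGI computer, 10,000x less efficient
electricity_usd_per_kwh = 0.10   # ballpark retail electricity price

agi_power_kw = brain_power_watts * inefficiency_factor / 1000   # = 200 kW
cost_per_hour = agi_power_kw * electricity_usd_per_kwh          # = $20/hour
print(f"~{agi_power_kw:.0f} kW, i.e. roughly ${cost_per_hour:.0f} of electricity per hour")
# ~$20/hour of electricity -- same ballpark as a high local minimum wage
```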
Dedicated hardware that runs on spiking neuron analogs could implement brain-like AGI models far better than existing CPUs/GPUs in terms of efficiency, at the cost of generality of computation (no free lunch).
That’s possible, though in my mind it’s not certain. The other possibility in my mind is that the algorithms underlying human intelligence are just fundamentally not very well suited to implementation via spiking neurons!! But spiking neurons are the only thing that biology has to work with! So evolution found a way to shoehorn these algorithms to run on spiking neurons. :-P
I’m not trying to troll here—I don’t have a good sense for how probable that is, but I do see that as one legitimate possibility. To take an example, a faster more-serial processor can emulate a slower more-parallel processor but not vice-versa. We engineers can build either style of processor, but biology is stuck with the latter. The algorithms of human intelligence could have massive computational shortcuts that involve spawning a fast serial subroutine, and we would never know it just by looking at biology, because biology has never had that as an option!
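To make that emulation asymmetry concrete, here’s a toy sketch (my own illustration; the update rules are arbitrary, only the structure matters). A single serial loop with double-buffering reproduces one synchronous “parallel” step exactly, whereas a chain of operations in which each step depends on the previous one gives a slow parallel machine nothing to parallelize:

```python
# Toy illustration; the update rules are arbitrary, the structure is the point.
def parallel_step_emulated_serially(state, update):
    """One synchronous 'parallel' update, computed by a single serial loop.
    Every unit reads the *old* state, exactly as if all units had updated
    simultaneously on parallel hardware (double-buffering)."""
    return [update(i, state) for i in range(len(state))]

def serial_chain(x, step, n_steps):
    """A dependency chain: step k needs step k-1's result, so a slow
    parallel machine has nothing to hand out to its many units."""
    for _ in range(n_steps):
        x = step(x)
    return x

state = [0, 1, 2, 3]
print(parallel_step_emulated_serially(state, lambda i, s: s[i] + sum(s)))  # [6, 7, 8, 9]
print(serial_chain(1, lambda x: 3 * x + 1, 5))                             # 364
```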
I agree that “literally existing CPUs/GPUs” are going to work slower and less scalably than an ASIC tailor-made to the algorithms that we have in mind. And I do assume that people will start making and using such ASICs very quickly. I guess I’d just be surprised if those ASICs involve spikes. Instead I’d expect the ASIC to look more like a typical digital ASIC, with a clock and flip-flops and registers and whatnot. I mean, I could be wrong, that’s just what I would guess, because I figure it would probably be adequate, and that’s what people are currently really good at designing. When we’re many years into superhuman AGIs designing improved chips for even-more-superhuman AGIs, I have no clue what those chips would look like. But I also don’t think it’s useful to think that far ahead. :-P
This is wrong unless “key operating principles” means something different each time you say it (i.e. it refers to the algorithms and data structures running on the human brain, but then it refers to the molecular-level causal graph describing the worm’s nervous system). Which is what I assume you meant.
Sorry, I guess that was a bit unclear. I meant “key operating principles” as something like “a description that is sufficiently detailed to understand how the system meets a design spec”. Then the trick is that I was comparing two very different types of design specs. One side of the comparison was “human intelligence”, which (in my mind) is one particular class of human capabilities. So the “design spec” would be things like “it can learn to use language and program computers and write poetry and tell jokes etc. etc.” Can we give a sufficiently detailed description to understand how the human brain does those things? Not yet, but I think eventually.
Then the other side of my comparison was “nervous system of the worm”. The “design spec” there was (implicitly) “maximize inclusive genetic fitness”, i.e. it includes the entire set of evolutionarily-adaptive behaviors that the worm does. And that’s really hard because we don’t even know what those behaviors are! There are astronomically many quirks of the worm’s nervous system, and we have basically no way to figure out which of those quirks are related to evolutionarily-adaptive behaviors, because maybe it’s adaptive only in some exotic situation that comes up once every 12 generations, or it’s ever-so-slightly adaptive 50.1% of the time and ever-so-slightly maladaptive 49.9% of the time, etc.
Y’know, some neuron sends out a molecule that incidentally makes some vesicle slightly bigger, which infinitesimally changes how the worm twitches, which might infinitesimally change how noticeable the worm is to predators when it’s in a certain type of light. So maybe sending out that molecule is an adaptive behavior—a computational output of the nervous system, and we need to include it in our high-level algorithm description. …Or maybe not! That same molecule is also kinda a necessary waste product. So it’s also possibly just an “implementation detail”. And then there are millions more things just like that. How are you ever going to sort it out? It seems hopeless to me.
If instead you name a specific adaptive behavior that the worm does (say, when it sees a predator it runs away), then I would certainly agree with you that understanding the key operating principles of that specific worm behavior will probably be much much much easier than understanding the key operating principles of human intelligence.
Thanks for the feedback. To be clear, I also have trouble thinking of how one might implement certain key brain algorithms (e.g., hierarchical free-energy minimization) using spiking neurons. We might even see the first “neuromorphic AGIs” using analog chips that simulate neural networks with ReLU and sigmoid activation functions rather than spiking events. And these would probably not come until well after the first “software AGIs” have been built and trained. However, I still think it’s way too early to be ruling out neuromorphic hardware, spiking or not. Eventually energy efficiency will become a big enough deal that someone (maybe an AGI?) whose headspace is saturated with thinking about event-based neuromorphic algorithms will create something that outcompetes other forms of AGI. And all the work being done with neuromorphic hardware today will feed into the inspiration for that future design. /speculation
As far as understanding worm vs. human brain key operating principles goes, it’s important to remember that the human brain is hundreds of millions of times larger and more complex than the worm’s whole nervous system. It’s easy to think about human intelligence as a bunch of abstract data structures and algorithms, rather than as an astronomically complex causal web of biological implementation details, in part because we are humans: we spend our whole lives using our intelligence and, as social animals, inferring the internal mental processes of other humans. Treating low-level implementation details as the “key operating principles” of either the human brain or the worm brain makes the investigation vastly more complex and hopeless than approaching either from a more abstract cognitive/behavioral level. And within each perspective, the human is vastly more complicated to figure out than the worm. Just to illustrate my point:
Sorry, I guess that was a bit unclear. I meant “key operating principles” as something like “a description that is sufficiently detailed to understand how the system meets a design spec”. Then the trick is that I was comparing two very different types of design specs. One side of the comparison was “worm intelligence”, which (in my mind) is one particular class of worm capabilities. So the “design spec” would be things like “it can learn to modify its rate of reversals and omega and delta turns in response to a conditioned stimulus and eat food and poop and evade predators etc. etc.” Can we give a sufficiently detailed description to understand how the worm brain does those things? Not yet, but I think eventually.
Then the other side of my comparison was “nervous system of the human”. The “design spec” there was (implicitly) “maximize inclusive genetic fitness”, i.e. it includes the entire set of evolutionarily-adaptive behaviors that the human does. And that’s really hard because we don’t even know what those behaviors are! There are astronomically many quirks of the human’s nervous system, and we have basically no way to figure out which of those quirks are related to evolutionarily-adaptive behaviors, because maybe it’s adaptive only in some exotic situation that comes up once every 12 generations, or it’s ever-so-slightly adaptive 50.1% of the time and ever-so-slightly maladaptive 49.9% of the time, etc.
Y’know, some neuron sends out a molecule that incidentally makes some vesicle slightly bigger, which infinitesimally changes the human’s facial expression, which might infinitesimally change how noticeable the human’s cognitive/emotional state is to other humans in a particular social context. So maybe sending out that molecule is an adaptive behavior—a computational output of the nervous system, and we need to include it in our high-level algorithm description. …Or maybe not! That same molecule is also kinda a necessary waste product. So it’s also possibly just an “implementation detail”. And then there are millions more things just like that. How are you ever going to sort it out? It seems hopeless to me.
My point was simply to draw attention to the need to compare apples to apples. It’s more about deconfusing things for future readers of this post than about correcting your actual understanding of the situation.
I still think it’s way too early to be ruling out neuromorphic hardware, spiking or not.
Sure, I wouldn’t say “rule out”, it’s certainly a possibility, especially if we’re talking about the N’th generation of ASICs. I guess I’d assign <10% probability that the first-generation ASIC that can run a “human-level AGI algorithm” is based on spikes. (Well, depending on the exact definitions I guess.) But I wouldn’t feel comfortable saying <1%. Of course that probability is not really based on much, I’m just trying to communicate what I currently think.
draw attention to the need to compare apples to apples
In an apples-to-apples comparison, it’s super duper ridiculously blindingly obvious that a human nervous system is harder to understand than a worm nervous system. In fact I’m somewhat distressed that you thought I was disagreeing with that!!!
I added a paragraph to the article to try to make it more clear—if you found it confusing then it’s a safe bet that other people did too. Thanks!