My impression is that the linked blog post is claiming we haven’t even been able to get things that are qualitatively as impressive as a worm. So why would we get things that are qualitatively as impressive as a human? I’m not claiming it has to be an upload.
This is because, counterintuitively, worms being small makes them a lot harder to simulate per neuron.
I could believe this (based on the argument you mentioned) but it really feels like “maybe this could be true but I’m not that swayed from my default prior of ‘it’s probably as easy to simulate per neuron’”.
Also if it were 100x harder, it would cost… $300. Still super cheap.
Have we actually tried the massive search method I recommended in Neuromorph?
That’s what the genetic algorithm is? It probably wasn’t run with as many as 3e17 flops, since compute was way more expensive then, but that’s at least evidence that researchers do in fact consider this approach.
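Concretely, the “massive search” approach is something like the following minimal sketch. None of the constants or the fitness function come from the actual worm work; they’re placeholders to show the shape of the method:

```python
import numpy as np

N_NEURONS = 302        # C. elegans neuron count
POP_SIZE = 100         # placeholder population size
N_GENERATIONS = 1000   # placeholder; the real number is set by the flop budget
MUTATION_SCALE = 0.01  # placeholder mutation step size

rng = np.random.default_rng(0)

def rollout_fitness(weights):
    """Placeholder: run the candidate network in a simulated body/environment
    and score worm-like behavior (wriggling, moving toward food). Stubbed here
    so the sketch runs end to end."""
    return -float(np.sum(weights ** 2))

# Population of candidate "connectomes": dense recurrent weight matrices.
population = [rng.normal(0.0, 0.1, (N_NEURONS, N_NEURONS)) for _ in range(POP_SIZE)]

for gen in range(N_GENERATIONS):
    scores = np.array([rollout_fitness(w) for w in population])
    # Keep the best half, refill with mutated copies of the survivors.
    survivors = [population[i] for i in np.argsort(scores)[POP_SIZE // 2:]]
    children = [w + rng.normal(0.0, MUTATION_SCALE, w.shape) for w in survivors]
    population = survivors + children
```

The point of writing it out is that the total cost is just POP_SIZE × N_GENERATIONS rollouts, so whether anyone has run it with 3e17 flops is a question of how big those two numbers were.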
At this point I guess I just say I haven’t looked into the worm literature enough to say. I can’t tell from the post alone whether we’ve neuromorphed the worm yet or not.
“Qualitatively as impressive as a worm” is a pretty low bar, I think. We have plenty of artificial neural nets that are much more impressive than worms already, so I guess the question is whether we can make one with only 302 neurons that is as impressive as a worm… e.g. can it wriggle in a way that moves it around, can it move away from sources of damage and towards sources of food, etc. idk, I feel like maybe at this point we should make bets or something, and then go read the literature and see who is right? I don’t find this prospect appealing but it seems like the epistemically virtuous thing to do.
I do feel fairly confident that on a per-neuron basis worms are much harder than humans to simulate. My argument seems solid enough for that conclusion, I think. It’s not solid enough to mean that you are wrong though—like you said, a 100x difference is still basically nothing. And to be honest I agree that the difference probably isn’t much more than that; maybe 1000x or something. That’s computational expense, though; qualitative difficulty is another matter. If you recall from my post about birds & planes, my position is not that simulating/copying nature is easy; rather it’s that producing something that gets the job done is easy, or at least in expectation easier than a lot of people seem to think, because the common argument that it’s hard is bogus etc. etc. This whole worm-uploading project seems more like “simulating/copying nature” to me, whereas the point of Neuromorph was to try to lazily/cheaply copy some things from nature and then make up the rest in whatever way gets the job done.
What do you imagine happening, in the hypothetical, when we run the Neuromorph project? Do you imagine it producing gibberish eternally? If so, why—wouldn’t you at least expect it to do about as well as a transformer or regular RL agent or whatever of comparable size? Do you imagine that it does about that well, but not significantly better, even after all the artificial evolution? I guess that seems somewhat plausible to me, but less than 50% likely. I’m very unsure of this of course, and am updating downwards in light of your pushback.
idk, I feel like maybe at this point we should make bets or something, and then go read the literature and see who is right? I don’t find this prospect appealing but it seems like the epistemically virtuous thing to do.
Meh, I don’t think it’s a worthwhile use of my time to read that literature, but I’d make a bet if we could settle on an operationalization and I didn’t have to settle it.
What do you imagine happening, in the hypothetical, when we run the Neuromorph project?
I mostly expect that you realize that there were a bunch of things that were super underspecified and they don’t have obvious resolutions, and if you just pick a few things then nothing happens and you get gibberish eternally, and if you search over all the underspecified things you run out of your compute budget very quickly. Some things that might end up being underspecified:
How should neurons be connected to each other? Do we just have a random graph with some average degree of connections, or do we need something more precise? (For one arbitrary way of resolving this and the next question, see the sketch after this list.)
How are inputs connected to the brain? Do we just simulate some signals to some input neurons, that are then propagated according to the physics of neurons? How many neurons take input? How are they connected to the “computation” neurons?
To what extent do we need to simulate other aspects of the human body that affect brain function? Which hormone receptors do we / don’t we simulate? For the ones we do simulate, how do we determine what their inputs are? Or do we have to simulate an entire human body (would be way, way more flops)?
How do we take “random draws” of a new brain? Do we need to simulate the way that DNA builds up the brain during development?
Should we build brains that are like that of a human baby, or a human adult, given that the brain structure seems to change between these?
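To make the arbitrariness concrete, here is a minimal sketch of one way to resolve just the first two questions. Every constant in it (neuron count, average degree, number of input neurons, the update rule) is a made-up placeholder, which is rather the point:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1_000         # placeholder neuron count
AVG_DEGREE = 100  # placeholder: why 100 rather than 10 or 1,000?
N_INPUT = 32      # placeholder: how many neurons take input?

# Question 1: a random directed graph with some average degree.
p_connect = AVG_DEGREE / N
mask = rng.random((N, N)) < p_connect
weights = np.where(mask, rng.normal(0.0, 1.0 / np.sqrt(AVG_DEGREE), (N, N)), 0.0)

# Question 2: wire external signals into an arbitrary subset of "input" neurons.
input_idx = rng.choice(N, size=N_INPUT, replace=False)

def step(state, external_signal):
    """One update step. Leaky-tanh dynamics are a placeholder; real neurons
    are not this, and nothing here tells you they can be approximated by it."""
    drive = weights @ state
    drive[input_idx] += external_signal
    return np.tanh(0.9 * state + drive)
```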
I’m not saying any of these things will be the problem. I’m saying that there will be some sort of problem like this (probably many such problems), that I’m probably not going to find by reasoning from my armchair. I also wouldn’t really change my mind if you had convincing rebuttals to each of them, because the underlying generator is “there are lots of details; some will be devilishly difficult to handle; you only find those by actually trying to solve the problem and running headfirst into those details”. You’d either have to argue against the underlying generator, or actually do the thing and demonstrate it was feasible.
(Btw, I have similar feelings about the non-Neuromorph answers too; but “idk I’m not really compelled by this” didn’t seem like a particularly constructive comment.)
I do feel fairly confident that on a per-neuron basis worms are much harder than humans to simulate. My argument seems solid enough for that conclusion, I think.
Idk, here are a few ways it could be wrong:
Most of the effect of DNA is on things that are not the brain, so the effect you mention is tiny. (This doesn’t make it wrong, just uninteresting.)
Most of the effect of DNA in worms is in deciding the number + structure of the neurons, which we already know and can hardcode in our simulation. However, in humans, DNA doesn’t determine number + structure of neurons, and instead it somehow encodes priors about the world, which we don’t know and can’t hardcode in our simulation.
Worms are small enough and simple enough that it’s fine to just have one type of neuron (e.g. perhaps all neurons are the same length, which affects their function), whereas this isn’t true for human brains (e.g. we need to try multiple variations on the lengths of neurons).
Again, I don’t particularly think any of these are true, it’s more like I start with massive uncertainty when attempting to reason about such a complex system, and an armchair argument is usually only going to slightly shift me within that massive uncertainty. (And as a general heuristic, the cleverer it is, the less it shifts me.)
(Btw, I have similar feelings about the non-Neuromorph answers too; but “idk I’m not really compelled by this” didn’t seem like a particularly constructive comment.)
On the contrary, I’ve been very (80%?) surprised by the responses so far—in the Elicit poll, everyone agrees with me! I expected there to be a bunch of people with answers like “10%” and “20%” and then an even larger bunch of people with answers like “50%” (that’s what I expected you, Ajeya, etc. to chime in and say). Instead, well, just look at the poll results! So, even a mere registering of disagreement is helpful.
That said, I’d be interested to hear why you have similar feelings about the non-Neuromorph answers, considering that you agreed with the point I was making in the birds/brains/etc. post. If we aren’t trying to replicate the brain, but just to do something that works, yes there will be lots of details to work out, but what positive reason do you have to think that the amount of special sauce / details is so high that 12 OOMs and a few years isn’t enough to find it?
I mostly expect that you realize that there were a bunch of things that were super underspecified and they don’t have obvious resolutions, and if you just pick a few things then nothing happens and you get gibberish eternally, and if you search over all the underspecified things you run out of your compute budget very quickly.
Interesting. This conflicts with something I’ve been told about neural networks, which is that they “want to work.” Seems to me that more likely than eternal gibberish is something that works but not substantially better than regular ANNs of similar size. So, still better than GPT-3, AlphaStar, etc. After all, those architectures are simple enough that surely something similar is in the space of things that would be tried out by the Neuromorph search process?
I think the three specific ways my claim about worms could be wrong are not very plausible:
Sure, most of the genes don’t code for neuron stuff. So what? Sure, maybe the DNA mostly contents itself with specifying the number + structure of neurons, but that’s just a rejection of my claim, not an argument against it. Sure, maybe it’s fine to have just one type of neuron if you are that simple; but the relevant metric is not “number of types” but “number of types / number of neurons.” And the size of the human genome limits that fraction to “something astronomically tiny” for humans, whereas for worms it could in principle go all the way up to 1.
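Putting rough standard figures on that (roughly 2e4 human protein-coding genes, roughly 8.6e10 neurons in a human brain, 302 in the worm; these are textbook ballpark numbers, not anything computed in this thread):

```python
genes = 2e4             # rough count of human protein-coding genes
human_neurons = 8.6e10  # rough count of neurons in a human brain
worm_neurons = 302      # C. elegans

# Upper bound on "number of types / number of neurons" if every gene
# somehow specified its own neuron type.
print(genes / human_neurons)           # ~2.3e-07: astronomically tiny
print(min(genes / worm_neurons, 1.0))  # 1.0: every worm neuron could be its own type
```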
I’m more sympathetic to your meta-level skepticism though.
That said, I’d be interested to hear why you have similar feelings about the non-Neuromorph answers, considering that you agreed with the point I was making in the birds/brains/etc. post. If we aren’t trying to replicate the brain, but just to do something that works, yes there will be lots of details to work out, but what positive reason do you have to think that the amount of special sauce / details is so high that 12 OOMs and a few years isn’t enough to find it?
The positive reason is basically all the reasons given in Ajeya’s report? Like, we don’t tend to design much better artifacts than evolution (currently), the evolution-designed artifact is expensive, and reproducing it using today’s technology looks like it could need more than 12 OOMs.
I don’t think the birds/brains/etc post contradicts this reason, as I said before (and you seemed to agree).
Ok, as a former neuroscientist who has spent a lot of years (albeit not recent ones) geeking out about, downloading, and playing with various neural models, I’d like to add to this discussion. First, the worm stuff seems overly detailed, focused on recreating the exact behavior rather than ‘sorta kinda working like a brain should’. A closer, more interesting project to look at (though still overly specific) is the Blue Brain project [ https://www.epfl.ch/research/domains/bluebrain/ ]. Could that work with 12 more OOMs of compute? I feel quite confident it could, with no additional info. But I think you could get there with a lot less than 12 OOMs if you took a less realistic, more functional project like Nengo [ https://www.nengo.ai/ ]. Nengo is a brain simulation that can already do somewhat interesting stuff at boring 2019 levels of compute. If you gave it GPT-3 levels of compute, I bet it would be pretty awesome.
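For flavor, here is roughly the canonical Nengo starter example (written from memory, so treat the details as approximate): a few hundred spiking neurons representing a signal and computing a function of it, which runs in real time on an ordinary 2019 laptop.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # driving input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking population
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    # Decoders are solved for so that b represents the square of a's value.
    nengo.Connection(a, b, function=lambda x: x ** 2)
    probe = nengo.Probe(b, synapse=0.01)                # filtered readout

with nengo.Simulator(model) as sim:
    sim.run(1.0)
# sim.data[probe] now holds the decoded estimate of sin(2*pi*t)**2.
```

(100 neurons per ensemble is just the tutorial default; nothing above is tuned.)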
And beyond that, neuroscientists have been obsessively making separate little detailed computer models of their tiny pieces of specialized knowledge about the brain since the 1980s at least; here are some links [ https://compneuroweb.com/database.html ]. There are archives of hundreds of such models, open source and available for download, each one with a subtly different focus, painstakingly confirmed to be accurate about the tiny piece of the picture it was designed to model. With enough compute it would be easy to brute-force search over these and over BlueBrain to see if anything was useful to the base Nengo model. I played with a bunch of such models back around 2012-2015.
And of course, there’s Numenta [ https://numenta.com ]. A bit more abstract than Nengo, a bit more efficient. The researchers have put more work into distilling out just what they believe to be the critical components of the human cortex. I’ve been following their work for over 20 years now, since Jeff’s first book got me into neuroscience in the first place. I agree with some of their distillations, not all of them, but generally feel like they’ve got some workable ideas that just haven’t been given the oomph they need to really shine.
If I had just 4 OOMs to work with, I’d start with Numenta, and bring in a few more details from Nengo. If I had 9 OOMs, I’d go straight to Nengo. If I had 12 OOMs, I’d go full BlueBrain.

Edit: I no longer endorse these super rough estimates of compute. I did some research and estimation and put the results here: https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to
(I know ~nothing about any of this, so might be misunderstanding things greatly)
12 OOMs is supposed to get us human-level AGI, but BlueBrain seems to be aiming at a mouse brain? “It takes 12 OOMs to get to mouse-level AGI” seems like it’s probably consistent with my positions? (I don’t remember the numbers well enough to say off the top of my head.) But more fundamentally, why 12 OOMs? Where does that number come from?
From a brief look at the website, I didn’t immediately see what cool stuff Nengo could do with 2019 levels of compute, that neural networks can’t do. Same for Numenta.
Blue Brain does actually have a human brain model waiting in the wings; it just tries to avoid mentioning that. A media-image management thing. I spent the day digging into your question about OOMs, and now have much more refined estimates. Here’s my post: https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to

Hmmm, it seems we aren’t on the same page. (The argument sketch you just made sounds to me like a collection of claims which are either true but irrelevant, or false, depending on how I interpret them.) I’ll go back and reread Ajeya’s report (or maybe talk to her?) and then maybe we’ll be able to get to the bottom of this. Maybe my birds/brains/etc. post directly contradicts something in her report after all.