(Btw, I have similar feelings about the non-Neuromorph answers too; but “idk I’m not really compelled by this” didn’t seem like a particularly constructive comment.)
On the contrary, I’ve been very (80%?) surprised by the responses so far—in the Elicit poll, everyone agrees with me! I expected there to be a bunch of people with answers like “10%” and “20%” and then an even larger bunch of people with answers like “50%” (that’s what I expected you, Ajeya, etc. to chime in and say). Instead, well, just look at the poll results! So, even a mere registering of disagreement is helpful.
That said, I’d be interested to hear why you have similar feelings about the non-Neuromorph answers, considering that you agreed with the point I was making in the birds/brains/etc. post. If we aren’t trying to replicate the brain, but just to do something that works, yes there will be lots of details to work out, but what positive reason do you have to think that the amount of special sauce / details is so high that 12 OOMs and a few years isn’t enough to find it?
I mostly expect that you'd realize there were a bunch of things that were super underspecified, with no obvious resolutions; that if you just pick a few things, nothing happens and you get gibberish eternally; and that if you search over all the underspecified things, you run out of your compute budget very quickly.
Interesting. This conflicts with something I’ve been told about neural networks, which is that they “want to work.” It seems to me that, more likely than eternal gibberish, you’d get something that works but isn’t substantially better than regular ANNs of similar size. So, still better than GPT-3, AlphaStar, etc. After all, those architectures are simple enough that surely something similar is in the space of things that would be tried out by the Neuromorph search process?
I think the three specific ways my claim about worms could be wrong are not very plausible:
Sure, most of the genes don’t code for neuron stuff. So what? Sure, maybe the DNA mostly contents itself with specifying number + structure of neurons, but that’s just a rejection of my claim, not an argument against it. Sure, maybe it’s fine to have just one type of neuron if you are so simple—but the relevant metric is not “number of types” but “number of types / number of neurons.” And the size of the human genome limits that fraction to “something astronomically tiny” for humans, whereas for worms it could in principle go all the way up to 1.
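(A rough back-of-envelope version of that fraction, using round figures I'm supplying myself rather than numbers from this thread: roughly 20,000 protein-coding genes and 86 billion neurons for humans, and 302 neurons with a comparable gene count for C. elegans.)

```python
# Back-of-envelope upper bound on (neuron types) / (neurons), granting the
# generous assumption that every protein-coding gene could specify its own
# neuron type. All counts are approximate round figures, not from the thread.

human_genes   = 2.0e4     # ~20,000 protein-coding genes
human_neurons = 8.6e10    # ~86 billion neurons

worm_genes   = 2.0e4      # C. elegans has a comparable gene count
worm_neurons = 302        # and only 302 neurons, each individually identified

human_fraction = human_genes / human_neurons        # ~2e-7: astronomically tiny
worm_fraction  = min(worm_genes / worm_neurons, 1)  # capped at 1: every neuron its own "type"

print(f"human: {human_fraction:.1e}, worm: {worm_fraction:.0f}")
```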
I’m more sympathetic to your meta-level skepticism though.
That said, I’d be interested to hear why you have similar feelings about the non-Neuromorph answers, considering that you agreed with the point I was making in the birds/brains/etc. post. If we aren’t trying to replicate the brain, but just to do something that works, yes there will be lots of details to work out, but what positive reason do you have to think that the amount of special sauce / details is so high that 12 OOMs and a few years isn’t enough to find it?
The positive reason is basically all the reasons given in Ajeya’s report? Like, we don’t tend to design much better artifacts than evolution (currently), the evolution-designed artifact is expensive, and reproducing it using today’s technology looks like it could need more than 12 OOMs.
I don’t think the birds/brains/etc post contradicts this reason, as I said before (and you seemed to agree).
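(For a sense of scale on "more than 12 OOMs", here is a crude sketch using round figures that are commonly cited around the report rather than anything stated in this thread: the report's most compute-hungry anchor, re-running evolution, is on the order of 1e41 FLOP, and GPT-3's training run was on the order of 3e23 FLOP.)

```python
import math

# Crude order-of-magnitude comparison. Both figures are assumed round numbers
# (commonly cited ballparks), not quotes from this discussion.
evolution_anchor_flop = 1e41   # rough compute to "re-run evolution"
gpt3_training_flop    = 3e23   # rough compute for GPT-3's training run

gap_in_ooms = math.log10(evolution_anchor_flop / gpt3_training_flop)
print(f"gap: ~{gap_in_ooms:.0f} OOMs")   # ~18 OOMs, i.e. comfortably more than 12
```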
Ok, as a former neuroscientist who has spent a lot of years (albeit not recent ones) geeking out about, downloading, and playing with various neural models, I’d like to add to this discussion. First, the worm stuff seems overly detailed and focused on recreating the exact behavior rather than ‘sorta kinda working like a brain should’. A closer, more interesting project to look at (but still overly specific) is the Blue Brain project [ https://www.epfl.ch/research/domains/bluebrain/ ]. Could that work with 12 more OOMs of compute? I feel quite confident it could, with no additional info. But I think you could get there with a lot less than 12 OOMs if you took a less realistic, more functional project like Nengo [ https://www.nengo.ai/ ]. Nengo is a brain simulation that can already do somewhat interesting stuff at boring 2019 levels of compute. If you gave it GPT-3 levels of compute, I bet it would be pretty awesome.
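(For readers who haven't seen Nengo: here is a minimal sketch of what a model looks like there, essentially the standard introductory example from the Nengo documentation, nothing specific to the projects being discussed. It builds two small spiking populations and decodes the square of a sine wave out of the second one.)

```python
import numpy as np
import nengo  # pip install nengo

# Minimal Nengo model: feed a sine wave into a spiking population, then
# decode the square of that signal from a second population.
model = nengo.Network(label="tiny demo")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # time-varying input
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # spiking neurons representing x
    b = nengo.Ensemble(n_neurons=100, dimensions=1)      # spiking neurons representing x**2
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)    # decoders trained to compute the square
    probe = nengo.Probe(b, synapse=0.01)                 # low-pass filtered readout

with nengo.Simulator(model) as sim:   # reference CPU backend
    sim.run(1.0)                      # simulate one second
decoded = sim.data[probe]             # ~ sin(2*pi*t)**2, reconstructed from spikes
```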
And beyond that, neuroscientists have been obsessively making separate little detailed computer models of their tiny pieces of specialized knowledge about the brain since the 1980s at least; here are some links [ https://compneuroweb.com/database.html ]. There are archives of hundreds of such models, open source and available for download, each one with a subtly different focus, painstakingly confirmed to be accurate about the tiny piece of the picture it was designed to model. With enough compute it would be easy to brute-force search over these and over BlueBrain to see if anything was useful to the base Nengo model. I played with a bunch of such models back around 2012-2015.
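(The "brute force search" in the paragraph above, sketched as a loop. Everything in this snippet is a hypothetical placeholder: the component list and the train_and_evaluate scorer are stand-ins invented to show the shape of the search, not an interface any of these projects actually exposes.)

```python
from itertools import combinations

# Hypothetical sketch of a brute-force search over optional model components.
# `candidate_components` and `train_and_evaluate` are invented placeholders.
candidate_components = ["dendritic_nonlinearity", "neuromodulation",
                        "stdp_variant", "gap_junctions", "adaptation_current"]

def train_and_evaluate(components):
    """Placeholder scorer: in reality, build the base model plus `components`,
    train it, and measure task performance; that is where the compute goes.
    Here it returns a dummy value so the loop runs end to end."""
    return -len(components)

best_score, best_combo = float("-inf"), ()
for k in range(len(candidate_components) + 1):
    for combo in combinations(candidate_components, k):
        score = train_and_evaluate(combo)   # one full training run per combination
        if score > best_score:
            best_score, best_combo = score, combo

print(best_combo, best_score)
```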
And of course, there’s Numenta [ https://numenta.com ]. A bit more abstract than Nengo, a bit more efficient. The researchers have put more work into distilling out just what they believe to be the critical components of the human cortex. I’ve been following their work for over 20 years now, since Jeff Hawkins’s first book got me into neuroscience in the first place. I agree with some of their distillations, not all of them, but generally feel like they’ve got some workable ideas that just haven’t been given the oomph they need to really shine.
If I had just 4 OOMs to work with, I’d start with Numenta, and bring in a few more details from Nengo. If I had 9 OOMs, I’d go straight to Nengo. If I had 12 OOMs, I’d go full BlueBrain.

Edit: I no longer endorse these super rough estimates of compute. I did some research and estimation and put the results here: https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to
(I know ~nothing about any of this, so might be misunderstanding things greatly)
12 OOMs is supposed to get us human-level AGI, but BlueBrain seems to be aiming at a mouse brain? “It takes 12 OOMs to get to mouse-level AGI” seems like it’s probably consistent with my positions? (I don’t remember the numbers well enough to say off the top of my head.) But more fundamentally, why 12 OOMs? Where does that number come from?
From a brief look at the website, I didn’t immediately see what cool stuff Nengo could do with 2019 levels of compute that neural networks can’t do. Same for Numenta.
Blue Brain does actually have a human brain model waiting in the wings; it just tries to avoid mentioning that. A media-image management thing. I spent the day digging into your question about OOMs, and now have much more refined estimates. Here’s my post: https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to
Hmmm, it seems we aren’t on the same page. (The argument sketch you just made sounds to me like a collection of claims which are either true but irrelevant, or false, depending on how I interpret them.) I’ll go back and reread Ajeya’s report (or maybe talk to her?) and then maybe we’ll be able to get to the bottom of this. Maybe my birds/brains/etc. post directly contradicts something in her report after all.