That said, I’d be interested to hear why you have similar feelings about the non-Neuromorph answers, considering that you agreed with the point I was making in the birds/brains/etc. post. If we aren’t trying to replicate the brain, but just to do something that works, then yes, there will be lots of details to work out; but what positive reason do you have to think that the amount of special sauce / details is so high that 12 OOMs and a few years aren’t enough to find it?
The positive reason is basically all the reasons given in Ajeya’s report? Like, we don’t tend to design much better artifacts than evolution (currently), the evolution-designed artifact is expensive, and reproducing it using today’s technology looks like it could need more than 12 OOMs.
I don’t think the birds/brains/etc post contradicts this reason, as I said before (and you seemed to agree).
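(For scale, since OOMs will keep coming up: a quick back-of-envelope of what +12 OOMs means in raw FLOP. The GPT-3 baseline figure below is a commonly cited public estimate, an assumption on my part rather than a number taken from the report.)

```python
# Back-of-envelope: an OOM is a factor of 10, so "+12 OOMs" multiplies the
# budget by 10**12. Baseline: ~3.1e23 FLOP for GPT-3's training run, a
# commonly cited public estimate (an assumption here, not a figure from the report).
baseline_flop = 3.1e23
print(f"+12 OOMs: {baseline_flop * 1e12:.1e} FLOP")  # ~3.1e+35 FLOP
```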
Ok, as a former neuroscientist who has spent a lot of years (albeit not recent ones) geeking out about, downloading, and playing with various neural models, I’d like to add to this discussion. First, the worm stuff seems overly detailed and focused on recreating the exact behavior rather than on ‘sorta kinda working like a brain should’. A closer, more interesting project to look at (though still overly specific) is the Blue Brain project [ https://www.epfl.ch/research/domains/bluebrain/ ]. Could that work with 12 more OOMs of compute? I feel quite confident it could, with no additional info. But I think you could get there with a lot less than 12 OOMs if you took a less realistic, more functional project like Nengo [ https://www.nengo.ai/ ]. Nengo is a brain simulation that can already do somewhat interesting stuff at boring 2019 levels of compute. If you gave it GPT-3 levels of compute, I bet it would be pretty awesome.
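To give a flavor of what working at 2019 levels of compute looks like, here’s roughly the shape of a minimal Nengo model, written from memory following the library’s standard tutorial pattern (treat it as a sketch, not canonical usage):

```python
import numpy as np
import nengo

# A tiny Nengo model: feed a sine wave into one population of spiking
# neurons, and have a second population compute its square.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)  # decode x^2 from a's activity
    probe = nengo.Probe(b, synapse=0.01)  # record b's decoded output, filtered

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # simulate one second of biological time

print(sim.data[probe][-5:])  # last few decoded values, tracking sin^2
```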
And beyond that, neuroscientists have been obsessively making separate little detailed computer models of their tiny pieces of specialized knowledge about the brain since the 1980s at least; here are some links [ https://compneuroweb.com/database.html ]. There are archives of hundreds of such models, open source and available for download, each with a subtly different focus and painstakingly confirmed to be accurate about the tiny piece of the picture it was designed to model. With enough compute it would be easy to brute-force search over these and over Blue Brain to see if anything was useful to the base Nengo model. I played with a bunch of such models back around 2012-2015.
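To be concrete about what I mean by brute-force search, the shape of it is something like the loop below. Every name in it is an invented placeholder (these archives don’t share a common API, and the glue code is the hard part); it’s just the skeleton of the search:

```python
# Hypothetical skeleton of the brute-force search described above. All names
# are invented placeholders for illustration; real archived models (e.g. on
# ModelDB) would each need custom glue code.

def evaluate(base_model, extra_mechanism=None):
    """Placeholder for 'build the combined model, train it, score it on a task'."""
    return 0.0  # a real version would return task performance

archive = ["hippocampal_ca3", "cerebellar_purkinje", "cortical_microcircuit"]
base = "base_nengo_model"

baseline = evaluate(base)
keepers = [m for m in archive if evaluate(base, extra_mechanism=m) > baseline]
print("mechanisms that beat the baseline:", keepers)
```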
And of course, there’s Numenta [ https://numenta.com ]. A bit more abstract than Nengo, a bit more efficient. The researchers have put more work into distilling out just what they believe to be the critical components of the human cortex. I’ve been following their work for over 20 years now, since Jeff Hawkins’s first book got me into neuroscience in the first place. I agree with some of their distillations, not all of them, but generally feel like they’ve got some workable ideas that just haven’t been given the oomph they need to really shine.
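For anyone unfamiliar, the core Numenta idea of sparse distributed representations looks something like this toy NumPy illustration (not Numenta’s actual code; the sizes are ballpark HTM defaults):

```python
import numpy as np

# Toy sparse distributed representations (SDRs): long binary vectors with
# ~2% of bits active, compared by counting overlapping active bits.
rng = np.random.default_rng(0)
n_bits, n_active = 2048, 40  # ballpark HTM defaults

def random_sdr():
    v = np.zeros(n_bits, dtype=bool)
    v[rng.choice(n_bits, size=n_active, replace=False)] = True
    return v

a, b = random_sdr(), random_sdr()
# Two unrelated SDRs overlap in ~1 bit by chance (40 * 40 / 2048), so any
# sizable overlap is strong evidence of shared meaning.
print("chance overlap:", int(np.sum(a & b)))
```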
If I had just 4 OOMs to work with, I’d start with Numenta, and bring in a few more details from Nengo. If I had 9 OOMs, I’d go straight to Nengo. If I had 12 OOMs, I’d go full Blue Brain.

Edit: I no longer endorse these super rough estimates of compute. I did some research and estimation and put the results here: https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to
(I know ~nothing about any of this, so might be misunderstanding things greatly)
12 OOMs is supposed to get us human-level AGI, but Blue Brain seems to be aiming at a mouse brain? “It takes 12 OOMs to get to mouse-level AGI” seems like it’s probably consistent with my positions? (I don’t remember the numbers well enough to say off the top of my head.) But more fundamentally, why 12 OOMs? Where does that number come from?
From a brief look at the website, I didn’t immediately see what cool stuff Nengo could do with 2019 levels of compute that neural networks can’t do. Same for Numenta.
Blue Brain does actually have a human brain model waiting in the wings; it just tries to avoid mentioning that (a media-image management thing). I spent the day digging into your question about OOMs, and now have much more refined estimates. Here’s my post: https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to
Hmmm, it seems we aren’t on the same page. (The argument sketch you just made sounds to me like a collection of claims which are either true but irrelevant, or false, depending on how I interpret them.) I’ll go back and reread Ajeya’s report (or maybe talk to her?) and then maybe we’ll be able to get to the bottom of this. Maybe my birds/brains/etc. post directly contradicts something in her report after all.