(I know ~nothing about any of this, so might be misunderstanding things greatly)
12 OOMs is supposed to get us human-level AGI, but Blue Brain seems to be aiming at a mouse brain? “It takes 12 OOMs to get to mouse-level AGI” seems like it’s probably consistent with my positions? (I don’t remember the numbers well enough to say off the top of my head.) But more fundamentally, why 12 OOMs? Where does that number come from?
From a brief look at the website, I didn’t immediately see what cool stuff Nengo could do with 2019 levels of compute that neural networks can’t. Same for Numenta.
Blue Brain does actually have a human brain model waiting in the wings; it just tries to avoid mentioning that. A media-image management thing. I spent the day digging into your question about OOMs, and now have much more refined estimates. Here’s my post: https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to