FWIW, I spent a few months researching and thinking specifically about the brain FLOP/s question, and carefully compared my conclusions to Joe Carlsmith’s. With a different set of sources, and different reasoning paths, I also came to an estimate approximately centered on 1e15 FLOP/s. If I were to try to be more specific than nearest OOM, I’d move that slightly upward, but still below 5e15 FLOP/s.
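For concreteness, here’s a minimal sketch of the kind of back-of-envelope arithmetic that lands in this range (every parameter below is an illustrative order-of-magnitude assumption, not a figure from my actual research):

```python
# Toy back-of-envelope for brain FLOP/s. All parameters are illustrative
# order-of-magnitude assumptions, not measured or endorsed values.

num_synapses = 1e14           # ~1e11 neurons x ~1e3 synapses each (rough)
mean_firing_rate_hz = 1.0     # average spikes per neuron per second (rough)
flop_per_synaptic_event = 10  # assumed cost to model one spike at one synapse

brain_flops = num_synapses * mean_firing_rate_hz * flop_per_synaptic_event
print(f"~{brain_flops:.0e} FLOP/s")  # -> ~1e+15 FLOP/s under these assumptions
```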
This is just one more reasonably-well-researched opinion though. I’d be fascinated by having a conversation about why 1e14 FLOP/s might be a better estimate.
A further consideration: this is an estimate for the compute that occurs over the course of seconds, which, for simplicity’s sake, can focus on action potentials. For longer-term brain processes, you need to take into account fractional shares of relatively-slow-but-high-complexity processes like protein signalling cascades and persistent changes in gene expression, protein structures in the cytoplasm, and membrane receptor populations.
I mention this not because it changes the FLOP/s estimate much (since these processes are relatively slow), but because keeping them in mind should shape one’s intuition about the complexity of the computation and learning processes that are occurring. I feel like one set of people greatly underestimates this complexity, while a different set overestimates it.
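To illustrate why the slowness matters, here’s a toy calculation (all numbers are made-up placeholders, chosen only to show how amortizing a costly event over a long timescale shrinks its per-second share):

```python
# Toy calculation: a complex-but-slow process contributes little per second,
# because its per-event cost is amortized over a long interval. All numbers
# below are made-up placeholders, not measured values.

num_synapses = 1e14
cascade_flop_per_event = 1e3  # assumed cost of one signalling-cascade update
cascade_rate_hz = 1e-4        # assumed: ~one event per synapse per ~3 hours

slow_flops = num_synapses * cascade_flop_per_event * cascade_rate_hz
print(f"~{slow_flops:.0e} FLOP/s")  # -> ~1e+13 FLOP/s, ~1% of a 1e15 baseline
```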
Relevant further thoughts from me on this: https://www.lesswrong.com/posts/uPi2YppTEnzKG3nXD/nathan-helm-burger-s-shortform?commentId=qCSJ2nPsNXC2PFvBW
> I’d be fascinated by having a conversation about why 1e14 FLOP/s might be a better estimate.
I think I don’t want to share anything publicly beyond what I wrote in Section 3 here. ¯\_(ツ)_/¯
> For longer-term brain processes, you need to take into account fractional shares of relatively-slow-but-high-complexity processes
Yeah I’ve written about that too (here). :) I think that’s much more relevant to how hard it is to create AGI than to how hard it is to run AGI.
But also, I think it’s easy to intuitively mix up “complexity” with “not-knowing-what’s-going-on”. Like, check out this code, part of an AlphaZero-chess clone project. Imagine knowing nothing about chess, and just looking at a minified (or compiled) version of that code. It would feel like an extraordinarily complex, inscrutable mess. But if you do know how chess works and you’re trying to write that code in the first place, no problem, it’s a few days of work to get it basically up and running. And it would no longer feel very complex to you, because you would have a framework for understanding it.
By analogy, if we don’t know what all the protein cascades etc. are doing in the brain, then they feel like an extraordinarily complex, inscrutable mess. But if you have a framework for understanding them, and you’re writing code that does the same thing (e.g. sets certain types of long-term memory traces in certain conditions, or increments a counter variable, or whatever) in your AGI, then that code-writing task might feel pretty straightforward.
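To make that concrete, here’s the kind of thing I mean as a toy sketch (the mechanism and all the names are hypothetical, just something in the genre of “set a long-term trace under certain conditions”):

```python
# Hypothetical toy: a "protein-cascade-like" rule written as ordinary code.
# A slow eligibility trace is consolidated into a persistent weight change
# only when a gating signal arrives. Illustrative only; not a brain model.

def update_synapse(weight, eligibility, gate, decay=0.9, lr=0.01):
    """Consolidate a long-term change only when the slow gating signal fires."""
    if gate > 0:
        weight += lr * eligibility * gate  # persistent change, like a memory trace
    eligibility *= decay                   # trace slowly decays between events
    return weight, eligibility
```

Met as compiled output with no context, that logic would look inscrutable; met with the framework in hand, it’s a few lines.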