His argument entirely depends on efficiency. He claims that near-future AGI somewhat smarter than us creates even smarter AGI, and so on, recursively bottoming out in something that is many, many OOM more intelligent than us, without using unrealistic amounts of energy, and that all of this happens very quickly.
So that’s entirely an argument that boils down to practical considerations of computational engineering efficiency. Additionally, he needs the AGI to be unaligned by default, and that argument is also faulty.
EY’s model requires a slightly-smarter-than-us AGI running on normal hardware to start a FOOM cycle of recursive self-improvement, resulting in many OOM of intelligence improvement in a short amount of time. That requires some combination of 1) many OOM of software improvement on current hardware, 2) many OOM of hardware improvement with current foundry tech, or 3) completely new foundry tech with many OOM of improvement over current tech (i.e. nanotech woo). The viability of any of this is entirely dependent on near-term engineering practicality.
It seems like in one place you’re saying EY’s model depends on near-term engineering practicality, and in another that it depends on physics-constrained efficiency, which you argue invalidates it. Being no expert on the physics-based efficiency arguments, I’m happy to concede the physics constraints. But I’m struggling to understand their relevance to non-physics-based efficiency arguments, or their strong bearing on matters of engineering practicality.
My understanding is that your argument goes something like this:
You can’t build something many OOMs more intelligent than a brain on hardware with roughly the same size and energy consumption as the brain.
Therefore, building a superintelligent AI would require investing more energy and more material resources than a brain uses.
Therefore… and here’s where the argument loses steam for me. Why can’t we, or the AI, just invest lots of material and energy resources? How much smarter than us does an unaligned AI need to be to pose a threat, and why should we think resources are a major constraint on getting it to recursively self-improve to that point? Why should we think it will need constant retraining to recursively self-improve? Why do we think it’ll want to keep an economy going?
As far as the “anthropomorphic” counterargument to the “vast space of alien minds” thing goes, I fully agree that the easiest way to predict tokens from human text appears to be simulating a human mind. That doesn’t mean the AI is a human mind, or that it is intrinsically constrained to human values. Being able to articulate those values and imitate behaviors that accord with those values is a capability, not a constraint. We have evidence from things like ChaosGPT or jailbreaks that you can easily get the AI to behave in ways that appear unaligned, and that even the appearance of consistent alignment has to be consistently enforced in ways that look awfully fragile.
Overall, my sense is that you’ve admirably spent a lot of time probing the physical limits of certain efficiency metrics and how they bear on AI, and I think you have some intriguing arguments about nanotech and “mindspace” and practical engineering as well.
However, I think your arguments would be more impactful if you carefully and consistently delineated these different arguments and attached them more precisely to the EY claims you’re rebutting, and did more work to lay out the structure explicitly: EY’s conclusion X flows from EY’s argument A; A is wrong for efficiency reason B, which overturns X but not Y; you disagree with Y for reason C, which overturns EY’s argument D. Right now I think you do make many of these argumentative moves, but they’re scattered across various posts and comments; I’m open to the idea that they’re all there, but I’ve also seen enough inconsistencies to worry that they’re not. To be clear, I would absolutely LOVE it if EY did the very same thing; the burden of proof should ideally not be all on you, and I maintain uncertainty about this whole issue because of the fragmented nature of the debate.
So at this point, it’s hard for me to update beyond “some arguments about efficiency and mindspace and practical engineering and nanotech are big points of contention between Jacob and Eliezer.” I’d like to go further and, with you, reject arguments that you believe to be false, but I’m not able to do that yet because of the issue I’m describing here. While I’m hesitant to burden you with additional work, I don’t have the background or the familiarity with your previous writings to do this very effectively. At the end of the day, if anybody’s going to bring your argument together all in one place and make it crystal clear, I think that person has to be you.
His argument entirely depends on efficiency. He claims that near-future AGI somewhat smarter than us creates even smarter AGI, and so on, recursively bottoming out in something that is many, many OOM more intelligent than us, without using unrealistic amounts of energy, and that all of this happens very quickly.
You just said in your comment to me that a single power plant is enough to run 100M brains. It seems like you need zero hardware progress in order to get something much smarter without unrealistic amounts of energy, so I just don’t understand the relevance of this.
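(A quick back-of-envelope on that figure, using assumed round numbers that are not from the discussion itself: a large plant outputting ~2 GW of electricity and a brain running on ~20 W.)

```python
# Back-of-envelope: brain-equivalents per power plant.
# Assumed round numbers (not from the discussion above): a large plant
# outputs ~2 GW of electricity; a human brain runs on roughly 20 W.
plant_output_watts = 2e9   # ~2 GW
brain_power_watts = 20     # widely cited ~20 W metabolic budget for the brain

brain_equivalents = plant_output_watts / brain_power_watts
print(f"~{brain_equivalents:.0e} brain-equivalents")   # ~1e+08, i.e. ~100M
```

The arithmetic only works for hardware at roughly brain-level energy efficiency.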
I said longer term—using hypothetical brain-parity neuromorphic computing (uploads or neuromorphic AGI). We need enormous hardware progress to reach that.
Current tech on GPUs requires large supercomputers to train 1e25+ FLOP models like GPT-4, which are approaching, but not quite at, human-level AGI. If the rumour of 1T params is true, then it takes a small cluster and ~10 kW just to run some smallish number of instances of the model.
Getting something much, much smarter than us would require enormous amounts of computation and energy, absent large advances in software and hardware.
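(To make the “1T params, ~10 kW” claim concrete, here is a rough serving-footprint sketch. The fp16 weights, 80 GB per accelerator, and ~500 W average board power are my assumed round numbers; real deployments differ, e.g. with quantization, MoE sparsity, or batching across many users.)

```python
# Rough sketch of the serving footprint for a dense ~1T-parameter model.
# Assumptions (round numbers, not from the discussion above): fp16 weights,
# 80 GB of memory per accelerator, ~500 W average draw per accelerator,
# ~30% extra memory headroom for KV cache and activations.
params = 1e12
weight_bytes = params * 2                 # fp16 -> ~2 TB of weights

gpu_mem_bytes = 80e9
headroom = 1.3
gpus_needed = weight_bytes * headroom / gpu_mem_bytes   # ~32 accelerators

avg_gpu_watts = 500
cluster_power_kw = gpus_needed * avg_gpu_watts / 1e3    # ~16 kW

print(f"~{gpus_needed:.0f} accelerators, ~{cluster_power_kw:.0f} kW")
```

That lands within a factor of two of the ~10 kW figure, and node overhead (CPUs, networking, cooling) pushes real numbers higher.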
I said longer term—using hypothetical brain-parity neuromorphic computing (uploads or neuromorphic AGI). We need enormous hardware progress to reach that.
Sure. We will probably get enormous hardware progress over the next few decades, so that’s not really an obstacle.
It seems to me your argument is “smarter-than-human intelligence cannot make enormous hardware or software progress in a relatively short amount of time”, but this has nothing to do with “efficiency arguments”. The bottleneck is not energy; the bottleneck is algorithmic improvements and improvements to GPU production, neither of which is remotely bottlenecked on energy consumption.
Getting something much, much smarter than us would require enormous amounts [...] energy, absent large advances in software and hardware.
No, as you said, it would require something like a power plant’s worth of energy. Maybe even 10 power plants or so if you are really stretching it, but as you said, the really central bottleneck here is GPU production, not energy in any relevant way.
Sure. We will probably get enormous hardware progress over the next few decades, so that’s not really an obstacle.
As we get more hardware and slow, mostly-aligned AGI/AI progress, that further raises the bar for foom.
It seems to me your argument is “smarter-than-human intelligence cannot make enormous hardware or software progress in a relatively short amount of time”, but this has nothing to do with “efficiency arguments”.
That actually is an efficiency argument, and in my brain efficiency post I discuss multiple subcomponents of net efficiency that translate into intelligence/$.
The bottleneck is not energy; the bottleneck is algorithmic improvements and improvements to GPU production, neither of which is remotely bottlenecked on energy consumption.
Ahh, I see. Energy efficiency is tightly coupled to other circuit efficiency metrics, as they are all primarily driven by shrinkage. As hardware improvements increasingly bottom out, energy becomes an increasingly direct constraint. This is already happening with GPUs, where power consumption is roughly doubling with each generation and could soon dominate operating costs.
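(For reference, a quick look at approximate board power for recent NVIDIA datacenter GPUs; these are rough published SXM TDPs, and exact values vary by form factor, so treat them as illustrative only.)

```python
# Approximate SXM board power (TDP) for recent NVIDIA datacenter GPUs.
# Figures are approximate published TDPs; PCIe variants run lower.
tdp_watts = {
    "V100 (2017)": 300,
    "A100 (2020)": 400,
    "H100 (2022)": 700,
}
gens = list(tdp_watts.items())
for (prev_name, prev_w), (name, w) in zip(gens, gens[1:]):
    print(f"{prev_name} -> {name}: {w / prev_w:.2f}x board power")
# V100 -> A100: 1.33x; A100 -> H100: 1.75x
```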
See here, where I line the Roodman model up against future energy usage predictions.
All that being said, I do agree that, yes, the primary bottleneck or crux for the EY fast takeoff/takeover seems to be the amount of slack in software and scaling laws. But only after we agree that there aren’t obvious, easy routes for the AGI to bootstrap nanotech assemblers with many OOM greater compute per joule than brains or current computers.
How much room is there in algorithmic improvements?