I think it does matter how efficient AI can get at turning energy into computation, because it provides an important answer to the question of whether AI will be distributed widely, and whether we can realistically control AI creation and distribution.
If it turns out that individuals can create superhuman AI in their basements without much energy, then in the long term controlling AI becomes essentially impossible, and we will have to confront a world where governments won't reliably control AI by default. Essentially, Eliezer's early ideas about being able to create very strong technology in your basement may eventually become reality, just with a time delay.
If it turns out that any AI must use a minimum of, say, 10,000 watts or more, then there is hope for controlling AI creation and distribution long term.
And this matters both in scenarios where existential risk mostly comes from individuals, and in scenarios where existential risk isn't the concern but we still care about what happens in a world where superhuman AI is created.
Note: 1 kW (50-100x the human brain's roughly 10-20 W) is roughly the power consumption of a very beefy desktop PC, and 10 kW is roughly the power consumption of a single datacenter rack. Even ~megawatt-scale AI (~100 racks) could fit fairly easily within many existing datacenters, or within a single entity's mid-size industrial-scale basement, at only moderate cost.
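To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. All the wattage figures are the rough assumptions from the note above (a ~20 W brain is a common estimate, not a measurement), so the outputs are order-of-magnitude only:

```python
# Back-of-the-envelope power comparison. All figures are rough
# assumptions from the discussion above, not measurements.
BRAIN_W = 20          # common estimate for human brain power draw
DESKTOP_W = 1_000     # very beefy desktop PC (~1 kW)
RACK_W = 10_000       # single datacenter rack (~10 kW)
MEGAWATT = 1_000_000  # ~megawatt-scale AI

print(f"desktop / brain: {DESKTOP_W / BRAIN_W:.0f}x")  # ~50x
print(f"rack / brain:    {RACK_W / BRAIN_W:.0f}x")     # ~500x
print(f"racks per MW:    {MEGAWATT // RACK_W}")        # ~100 racks
```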
Granted, this isn't enough to stop companies from producing useful AI, but it does mean we can hope to avoid scenarios where single individuals can reliably build AI, so controlling AI is possible in worlds where individuals, not companies, are the main source of existential risk. It's also relevant to other questions beyond existential risk.
My own view is that you only need something a little bit smarter than the smartest humans, in some absolute sense, in order to re-arrange most of the matter and energy in the universe (almost) arbitrarily.
Does this imply that a weakly superhuman AGI can solve alignment?