Consuming free energy means things like: taking the jobs that unaligned AI systems could have done, making it really hard to hack into computers (either by improving defenses or, worst case, just having an ecosystem where any vulnerable machine is going to be compromised quickly by an aligned AI), improving the physical technology of militaries or law enforcement so that a misaligned AI does not have a significant advantage.
I also imagine AI systems doing things like helping negotiate and enforce agreements to reduce access to destructive technologies or manage their consequences (including, in particular, powerful AI systems themselves). And of course I imagine AI systems doing alignment research, generating new technological solutions and a clearer understanding of how to deploy AI systems, improving implementation quality at relevant labs, helping identify key risks and improve people’s thinking about those risks, etc.
(I don’t think an AI developer is likely to be in a position to achieve a decisive strategic advantage, but I’d stand by this point regardless and think it still reflects an important disagreement about what the situation is likely to look like.)
I’ll note that I’m pretty enthusiastic about attempts to increase the security / sophistication of our civilization, for basically these reasons (the more efficient the stock market, the less money an unaligned AGI can make; the better computer security is, the fewer computers an unaligned AGI can steal; and so on). I’m nevertheless pretty worried about:
the ‘intelligent adversary’ part (the chain’s weakest link is the one that gets attacked, rather than a random link, so given the number of attack surfaces you need to do a ton of ‘increasing sophistication’ work for each unit of additional defense you get; a stylized version of this is sketched after this list)
the ‘different payoff profile’ part (great powers might be very interested in screwing with each other, and a world with great-power spy conflict probably has much better security setups than one without, but none of them are interested in releasing a superplague that kills all humans, so that world won’t necessarily have better biodefense; i.e., AI may reveal lots of novel attack surfaces)
the ‘fragile centralization / supply chain’ part (a more sophisticated economy is probably less hardened against disruption than a less sophisticated economy, because the sophistication was in large part about getting ‘better returns in peacetime’ rather than about optimizing for survival, thriving broadly speaking, or following traditions that had been optimized for that)
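To put the ‘intelligent adversary’ point in symbols, here is a stylized sketch (my own toy illustration, not anything from the discussion above; it assumes n independent attack surfaces, where surface i falls with probability p_i if targeted):

```latex
% Toy model of the weakest-link asymmetry between random failures and an
% intelligent adversary. Assumes n independent, equally valuable surfaces.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With a \emph{random} failure (an accident or untargeted worm hitting a
uniformly chosen surface), the chance of a breach tracks the \emph{average}
weakness:
\[
  \Pr[\text{breach}] \;=\; \frac{1}{n}\sum_{i=1}^{n} p_i .
\]
An \emph{intelligent adversary} instead probes for whichever surface is
weakest, so the chance of a breach tracks the \emph{worst} weakness:
\[
  \Pr[\text{breach}] \;\approx\; \max_i \, p_i .
\]
Hardening 9 out of 10 surfaces cuts the average by roughly 90\%, but can
leave $\max_i p_i$ unchanged; each unit of effective defense therefore
requires ``increasing sophistication'' work across essentially every
attack surface.
\end{document}
```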