I’ll note that I’m pretty enthusiastic about attempts to increase the security / sophistication of our civilization, for basically these reasons (the more efficient the stock market, the less money an unaligned AGI can make; the better computer security is, the fewer computers an unaligned AGI can steal, and so on). I’m nevertheless pretty worried about:
the ‘intelligent adversary’ part (the chain’s weakest link is the one that gets attacked, not a random link, so given the number of attack surfaces you have to do a ton of ‘increasing sophistication’ work to buy each unit of additional defense; see the sketch after this list)
the ‘different payoff profile’ part (great powers are very interested in screwing with each other, and a world with great-power spy conflict probably has much better security setups than one without; but none of them is interested in releasing a superplague that kills all humans, so that world won’t necessarily have better biodefense, and AI may reveal lots of novel attack surfaces of this kind)
the ‘fragile centralization / supply chain’ part (a more sophisticated economy is probably less hardened against disruption than a less sophisticated one, because the sophistication was largely about getting ‘better returns in peacetime’ rather than about optimizing for survival / thriving broadly speaking / following traditions that had been optimized for that)
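To make the ‘weakest link’ dynamic concrete, here’s a minimal Python sketch (my own illustration, not from the original comment; the link strengths are made-up numbers): against a random attacker, survival probability tracks the average link strength, while against an intelligent adversary it tracks the minimum, so ‘increasing sophistication’ work that misses the weakest link buys essentially no defense.

```python
# Illustrative sketch, not from the original comment: model a system as a
# list of "link strengths", each the probability that an attack on that
# link fails. All the numbers below are made up for illustration.

def p_survive_random_attacker(links):
    # A random attacker picks a link uniformly, so expected survival
    # tracks the *average* link strength.
    return sum(links) / len(links)

def p_survive_intelligent_adversary(links):
    # An intelligent adversary always attacks the weakest link, so
    # survival tracks the *minimum* link strength.
    return min(links)

base = [0.99, 0.95, 0.90, 0.50]           # one weak link among strong ones
hardened = [0.999, 0.999, 0.999, 0.50]    # lots of hardening, wrong links

for name, links in [("base", base), ("hardened", hardened)]:
    print(f"{name}: random attacker {p_survive_random_attacker(links):.3f}, "
          f"intelligent adversary {p_survive_intelligent_adversary(links):.3f}")

# Hardening three of the four links moves the random-attacker number a lot,
# but the intelligent-adversary number not at all: the defense buys nothing
# until the weakest link itself improves.
```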