Automated weapons can be answered with nukes if they're at the same scale, and the US has already demonstrated drones amplifying fighter pilots, so I'm actually slightly less worried about that. As much as I hate their inefficiency and want them to get the fuck out of Yemen, I'm not worried about the US losing air superiority. I'm pretty sure the main weapons risk from AI is superpathogens designed to kill all humans. Sure, humans wouldn't use them, but it's been imaginable how to build them for a while; it would only take an AI that thought it could live without us.
I think your model of safety doesn't match mine much at all. What's your timeline until there's an AI that is stronger than every individual human at every competitive game?
No, automated weapons cannot be countered with nukes. I specifically meant the scenario of:
1. General-purpose robotics-controlling models. This appears extremely feasible because the generality hypothesis turns out to be correct, meaning it's actually easier to solve all robotics tasks at once than to solve individual tasks to human level. (GPT-3's source is no more complex than EfficientZero's.)
2. Self-replication, which is an obvious property of (1).
3. The mining and manufacturing equivalent of having 10 billion or 100 billion workers (rough doubling arithmetic in the first sketch after this list).
4. Enough automated weapons to create an impervious defense against nuclear attack by parties with current or near-future human-built technology. 1,000 ICBMs is scary when you have 10 ABMs and don't have defenses at each target or thousands of backup radars. It's an annoyance when you have overwhelming numbers of defensive weapons and can actually afford to build enough bunkers for every living citizen (see the second sketch after this list).
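To make item 3 concrete, here is a toy doubling calculation. All the numbers (seed fleet size, doubling time, one-robot-equals-one-worker) are assumptions I'm picking purely for illustration, not anything established in this thread:

```python
import math

def doublings_needed(seed_units: int, target_units: int) -> int:
    """How many doublings a self-replicating fleet needs to grow from
    seed_units to at least target_units."""
    return math.ceil(math.log2(target_units / seed_units))

# Made-up illustrative numbers: a 10,000-robot seed fleet, each robot
# counted as one human worker-equivalent, doubling every 6 months.
seed = 10_000
doubling_time_years = 0.5

for target in (10_000_000_000, 100_000_000_000):  # 10B and 100B workers
    n = doublings_needed(seed, target)
    print(f"{target:>15,} worker-equivalents: {n} doublings "
          f"~= {n * doubling_time_years:.1f} years")
```

The point isn't the specific numbers; it's that once replication closes the loop, the gap between "a pilot factory" and "more workers than humanity has" is only a couple dozen doublings.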
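And here is the arithmetic behind item 4. The per-interceptor kill probability and the even allocation of shots are assumptions chosen for illustration (no decoys, no radar attrition); the structural point is just that defense quality scales with interceptors per incoming warhead:

```python
def expected_leakers(warheads: int, interceptors: int, p_kill: float = 0.7) -> float:
    """Expected number of warheads that get through, assuming interceptors
    are spread evenly across warheads and each shot kills independently
    with probability p_kill. (Toy model: no decoys, no radar loss.)"""
    shots_per_warhead, extra = divmod(interceptors, warheads)
    # 'extra' warheads get one additional shot each
    leak_low = (1 - p_kill) ** shots_per_warhead
    leak_high = (1 - p_kill) ** (shots_per_warhead + 1)
    return (warheads - extra) * leak_low + extra * leak_high

# Today-ish: 1,000 ICBMs vs 10 interceptors -> essentially everything leaks.
print(expected_leakers(1_000, 10))       # ~993 expected leakers
# Automated-industry case: 1,000 ICBMs vs 10,000 interceptors.
print(expected_leakers(1_000, 10_000))   # ~0.006 expected leakers
```

With ten interceptors the attack is an extinction-level event; with ten shots allocated per warhead the expected leakage drops below one warhead, which is the "annoyance plus bunkers" regime described above.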
I don't think being stronger at every game necessarily makes AI uncontrollable. I think the open agency model allows for competitive AGI and ASI that will potentially be more effective than the global stateful RL agent model. (More effective because, as humans, we care about task performance and reliability, and a stateless system will be many times more reliable.)
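Roughly the distinction I mean, sketched in Python. The names and structure here are mine, not from any particular framework, and `run_model` is a hypothetical stand-in for a model call: the open-agency style decomposes work into stateless, independently checkable task calls, versus one long-lived agent that accumulates private state.

```python
from dataclasses import dataclass

def run_model(prompt: str, context: str) -> str:
    """Hypothetical stand-in for an LLM/planner call, just for the sketch."""
    return f"result({prompt!r}, {context!r})"

@dataclass(frozen=True)
class Task:
    spec: str      # full task description; nothing hidden from reviewers
    context: str   # everything the worker is allowed to see

def stateless_worker(task: Task) -> str:
    """Open-agency style: each call is a pure function of its inputs, so any
    single output can be re-run, audited, or handed to a different model."""
    return run_model(task.spec, task.context)

class StatefulAgent:
    """Global-RL-agent style: behavior depends on internal memory that grows
    over time, so identical inputs can give different, harder-to-audit outputs."""
    def __init__(self) -> None:
        self.memory: list[tuple[str, str]] = []

    def act(self, observation: str) -> str:
        history = "; ".join(action for _, action in self.memory)
        action = run_model("act given " + history, observation)
        self.memory.append((observation, action))
        return action

# Same request twice: the stateless worker is reproducible by construction,
# while the stateful agent's answer depends on whatever it has accumulated.
t = Task(spec="route the delivery trucks", context="today's manifest")
assert stateless_worker(t) == stateless_worker(t)
agent = StatefulAgent()
print(agent.act("today's manifest"))
print(agent.act("today's manifest"))  # different internal state now
```

That reproducibility is what I mean by reliability: every output of the stateless system can be checked in isolation, which is most of what we actually want from task performance.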
interesting...
Yeah. The planet is too small. Geopolitical stalemates are only possible when someone doesn’t have a big enough weapon.
The endgame will converge to one winner. Winning is not guaranteed but you can always choose to lose.