No, automated weapons cannot be countered with nukes. I specifically meant this scenario:

1. General-purpose robotics-controlling models. This appears extremely feasible because the generality hypothesis turns out to be correct: it is actually easier to solve all robotics tasks at once than to solve individual tasks to human level. (The GPT-3 source is no more complex than EfficientZero's.)
2. Self-replication, which is an obvious consequence of (1).
3. The mining and manufacturing equivalent of 10 billion or 100 billion workers.
4. Enough automated weapons to create an impervious defense against nuclear attack by parties with current or near-future human-built technology. 1,000 ICBMs are scary when you have 10 ABMs and lack defenses at every target or thousands of backup radars. They are an annoyance when you have overwhelming numbers of defensive weapons and can actually afford to build enough bunkers for every living citizen.
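The intuition behind that last point can be put in back-of-envelope terms. The sketch below uses assumed numbers (an 80% per-shot kill probability and the engagement counts are illustrative, not real performance data): with only 10 interceptors, 990 of 1,000 warheads are never even engaged; with enough automated production to fire several independent shots at every warhead, almost nothing gets through.

```python
def expected_leakers(n_warheads: int, shots_per_warhead: int, p_kill: float) -> float:
    """Expected warheads that survive a defense firing independent shots,
    each killing its target with probability p_kill."""
    return n_warheads * (1.0 - p_kill) ** shots_per_warhead

# Sparse defense: only 10 of 1,000 warheads get engaged, one shot each.
sparse = 990 + expected_leakers(10, 1, 0.8)   # ~992 warheads arrive
# Overwhelming automated defense: 4 independent shots at every warhead.
dense = expected_leakers(1000, 4, 0.8)        # ~1.6 warheads arrive
```

The point is not the specific numbers but the exponential: each additional independent shot per warhead multiplies the leakage by (1 - p_kill), so defense scales with manufacturing capacity.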
I don't think being stronger at every game necessarily makes AI uncontrollable. I think the open agency model allows for competitive AGI and ASI that will potentially be more effective than the global stateful RL agent model. (More effective because, as humans, we care about task performance and reliability, and a stateless system will be many times more reliable.)
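The reliability claim can be sketched numerically. The per-step and retry figures below are assumptions for illustration, not measurements: a stateful agent carries errors forward, so a long session succeeds only if every step does, while a stateless system handles each task fresh, and each call can be independently verified and retried.

```python
def stateful_session_reliability(per_step: float, steps: int) -> float:
    """A stateful agent must get every step right; errors compound."""
    return per_step ** steps

def stateless_task_reliability(per_call: float, attempts: int) -> float:
    """A stateless call can be retried; only all attempts failing loses the task."""
    return 1.0 - (1.0 - per_call) ** attempts

stateful = stateful_session_reliability(0.99, 100)   # ~0.37 for a 100-step session
stateless = stateless_task_reliability(0.99, 2)      # ~0.9999 per task with one retry
```

Even with 99% per-step reliability, a 100-step stateful session succeeds only about a third of the time, whereas independent stateless calls with a single retry are wrong roughly once in ten thousand tasks.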
interesting...
Yeah. The planet is too small. Geopolitical stalemates are only possible when no side has a big enough weapon.
The endgame will converge to one winner. Winning is not guaranteed, but you can always choose to lose.