Not “win everything”: if you win, you gain 1 unit of utility while everyone else gains (1 − e); if anyone loses, everyone gets zero utility.
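A minimal sketch of that payoff structure (the function name and the choice e = 0.25 are illustrative, not from the original model):

```python
def payoff(you_win: bool, disaster: bool, e: float = 0.25) -> float:
    """Utility for one player in the race described above.

    - disaster (anyone 'loses'): everyone gets 0
    - you win: you get 1
    - a rival wins: you get (1 - e)
    """
    if disaster:
        return 0.0
    return 1.0 if you_win else 1.0 - e

# The three outcomes for one player:
payoff(you_win=True, disaster=False)    # 1.0
payoff(you_win=False, disaster=False)   # 0.75
payoff(you_win=False, disaster=True)    # 0.0
```

So the stakes of losing the race outright are small (just e), while the stakes of a disaster are total.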
That’s quite consistent with bioengineering (win = you get healthy and wealthy; others win = you may get some part of that health and wealth; lose = a plague wipes out everyone) and with superweapons (win = you get to rule the world; others win = you get to live as a second-class citizen in a peaceful Empire; lose = the weapon gets used and everyone dies).
In fact your race looks quite like the race for nuclear weapons.
I don’t see the similarity with nuclear weapons; after all, we had the arms race without destruction, and it’s not clear what “safety” the racers would have been skimping on.
Coming second in a nuclear arms race is not so bad, for example.
I mostly had in mind this little anecdote.
I wonder if you would feel the same way had Hitler been a bit more focused on a nuclear program and less prejudiced against Jewish nuclear scientists...
Ok, I’ll admit the model can be fitted to many different problems, but I still suspect that AI would fit it more naturally than most.
The main difference I see with nuclear weapons is that if neither side pursues them, you end up in much the same place as when the race is very close, except without having spent a lot on it.
With AI, by contrast, the benefits of winning would be huge, and a failure equally drastic.