Isn’t that begging the question? If the goal is to teach why being optimistic is dangerous, declaring by fiat that an unaligned AI ends the world skips the whole “teaching” part of a game.
Yes, it doesn’t establish why unaligned AI is inherently dangerous, but it does help explain a key challenge in coordinating to reduce the danger.