This exact point is what has kept me from adopting the orthodox LessWrong position. If I knew that in the future Clippy was going to kill me and everyone else, I would consider that a neutral outcome. If, however, I knew that in the future some group of humans was going to successfully align an AGI to their own interests, I would be far more worried.
If anyone knows of an Eliezer or SSC-level rebuttal to this, please let me know so that I can read it.