These questions are equivalent only in the sense that “how about just not setting X equal to pi” and “how about just setting X equal to e” are equivalent. Assuming you can do the latter is a prediction; assuming you can do the former is an antiprediction.
Consider my parallel changed to “How about, you know, just not building an Unfriendly AI? Uhm… could the solution to the safe AI problem really be so easy?”
There are many possible Unfriendly AIs, and most of them don’t base their decision to torture you on whether you gave them all your money.
Therefore, you can use your reason to try building a Friendly AI… and either succeed or fail, depending on the complexity of the problem and your ability to solve it.
But not depending on blackmail.
This is the difference between “you should be very careful to avoid building any Unfriendly AI, which may be a task beyond your skills” and “you should build this specific Unfriendly AI, because if you don’t, but someone else does, then it will torture you for an eternity”. In the former case, your intelligence is used to generate a good outcome, and yes, you may fail. In the latter case, your intelligence is used to fight against itself; you are forcing yourself to work towards an outcome that you actually don’t want.
That’s not the same thing. Building a Friendly AI is insanely difficult. Building a Torture AI is insane and difficult.