These questions are equivalent in the same sense that “how about just not setting X equal to pi” and “how about just setting X equal to e” are equivalent. Assuming you can do the latter is a prediction; assuming you can do the former is an antiprediction.
To the contrary, it is “just building the [very specific sort of] whole monster” that is the closer parallel to “just building a [very specific definition of] Friendly AI”, an a priori improbable task.
Worse for the basilisk: at least in the case of Friendly AI you might end up stuck with nothing better to do but throw a dart and hope for a bull’s-eye. But in the case of the basilisk, the acausal trade is only rational if you expect a high likelihood of the trade being carried out. And if that likelihood is low, then you’re just being nutty, which means it’s unlikely the other side of the trade will be upheld in any case (acausally trying to influence Omega’s prediction of you may work if Omega is omniscient, but not so well if Omega is irrational). That lowers the likelihood still further… until the only remaining question is simply: what’s the fixed point of x_{n+1} = x_n/2?
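For concreteness, here is the arithmetic that remark gestures at, written out as a worked equation; the factor of 1/2 is just the illustrative halving from the quote, not anyone’s actual credence:

```latex
% The only fixed point of the halving update x_{n+1} = x_n / 2 is zero,
% and the iterates converge to it geometrically from any starting credence x_0:
\[
x^{*} = \tfrac{1}{2}\,x^{*} \;\Longrightarrow\; x^{*} = 0,
\qquad
x_n = \frac{x_0}{2^{n}} \longrightarrow 0 \quad (n \to \infty).
\]
```

So the self-undermining loop bottoms out at probability zero, which is the point of the remark.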
Consider my parallel changed to: “How about, you know, just not building an Unfriendly AI? Um… could the solution to the safe AI problem really be so easy?”
There are many possible Unfriendly AIs, and most of them don’t base their decision to torture you on whether you gave them all your money.
Therefore, you can use your reason to try building a Friendly AI… and either succeed or fail, depending on the complexity of the problem and your ability to solve it.
But not depending on blackmail.
This is the difference between “you should be very careful to avoid building any Unfriendly AI, which may be a task beyond your skills” and “you should build this specific Unfriendly AI, because if you don’t, but someone else does, then it will torture you for an eternity”. In the former case, your intelligence is used to generate a good outcome, and yes, you may fail. In the latter case, your intelligence is used to fight against itself; you are forcing yourself to work towards an outcome that you actually don’t want.
That’s not the same thing. Building a Friendly AI is insanely difficult. Building a Torture AI is insane and difficult.