I think this post makes many valid points against some weak arguments people sometimes actually make, but it side-steps the actually reasonable version of the simulation argument/acausal trade proposal. I think variants of the reasonable proposal have been floating around in spoken conversations and scattered LessWrong comments for a while, but I couldn’t find any unified write-up, so I wrote it up here, including a detailed response to the arguments Nate makes here:
You can, in fact, bamboozle an unaligned AI into sparing your life