What’s supposed to be “acausal” about this? Your three bullet points seem to put forward a completely causal argument.
It’s quite shaky, but my understanding is that a causal decision theorist would only care about reward “inside the simulation”, not the paperclips out in the real world.
I suppose Roko’s basilisk could also just simulate maximum reward, which would likewise convince a causal decision theorist.
However, my brain kind of just defaults to “causal decision theory + simulation shenanigans = acausal shenanigans in disguise”. If that’s incorrect (at least in this case), I can make an edit.
I can’t say for sure what people who believe in acausal decision theory would say, but it looks to me like a causal argument. If I understand the scenario as you intend, we’re talking about real paperclips, either directly made by a real paperclip maximizer, or by Roko’s basilisk as a reward for the simulated paperclip maximizer sparing humans. Both real and simulated paperclip maximizers are presumably trying to maximize real paperclips. It seems to work causally.
Now, the decision of Roko’s basilisk to set up this scenario does seem to make sense only in the framework of acausal decision theory. But you say that the paperclip maximizer’s reasoning is acausal, which it doesn’t seem to be. The paperclip maximizer’s reasoning does presume, as a factual matter, that a Roko’s basilisk that uses acausal decision theory is likely to exist, but believing that doesn’t require that one accept acausal decision theory as being valid.
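For concreteness, here is a toy sketch of what I mean by “it seems to work causally”: a minimal expected-paperclip calculation in Python. Every number in it (the credence of being simulated, the clip counts) is made up purely for illustration, not a claim about the actual scenario.

```python
# Toy causal expected-value calculation for the (possibly simulated)
# paperclip maximizer. All constants are illustrative assumptions.
P_SIM = 0.5        # assumed credence that it is inside the basilisk's simulation
BASELINE = 1e7     # real clips it makes anyway, if it turns out to be real
HUMAN_ATOMS = 1e6  # extra real clips from converting humans, if real
REWARD = 1e9       # real clips the basilisk makes if the simulated copy spares humans

def expected_real_clips(spare: bool) -> float:
    """Purely causal expected count of *real* paperclips for an action.

    Real branch: actions cause clips directly (sparing forgoes human atoms).
    Simulated branch: simulated actions make no clips directly, but sparing
    causes the basilisk, which watches the simulation, to make REWARD clips.
    """
    real_branch = BASELINE + (0 if spare else HUMAN_ATOMS)
    sim_branch = REWARD if spare else 0.0
    return (1 - P_SIM) * real_branch + P_SIM * sim_branch

for action in (True, False):
    print(f"spare={action}: {expected_real_clips(action):.3g} expected real clips")
```

With these made-up numbers, plain causal expected-utility maximization already recommends sparing humans; the only acausal-flavored step is the basilisk’s decision to commit to the reward in the first place.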
Hmm yeah, I think you’re right. I have edited the post!