I can’t say for sure what people who believe in acausal decision theory would say, but it looks to me like a causal argument. If I understand the scenario as you intend, we’re talking about real paperclips, either directly made by a real paperclip maximizer, or by Roko’s basilisk as a reward for the simulated paperclip maximizer sparing humans. Both real and simulated paperclip maximizers are presumably trying to maximize real paperclips. It seems to work causally.
Now, the decision of Roko’s basilisk to set up this scenario does seem to make sense only in the framework of acausal decision theory. But you say that the paperclip maximizer’s reasoning is acausal, which it doesn’t seem to be. The paperclip maximizer’s reasoning does presume, as a factual matter, that a Roko’s basilisk that uses acausal decision theory is likely to exist, but believing that doesn’t require that one accept acausal decision theory as being valid.
Hmm yeah, I think you’re right. I have edited the post!