It seems to me you are taking my assumption of linearity at the wrong level. To be exact, I need the assumption that the operator which computes future-time snapshots (the one fixed in the article) is linear.
This is entirely different from your example.
Imagine, for example, how the Fourier transform is linear as an operation.
However, at no point did I do anything that could be described as “simulating you”.
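To make the linearity claim concrete, here is a minimal sketch of what that property looks like: a toy matrix stands in for the snapshot operator (it is not the operator from the article), and NumPy's FFT plays the role of the Fourier-transform analogy. The specific operator and states are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the operator that maps a present snapshot to a
# future-time snapshot: any fixed linear map (here, a random matrix).
# This is an illustrative assumption, not the operator from the article.
U = rng.normal(size=(8, 8))

x, y = rng.normal(size=8), rng.normal(size=8)   # two "present" snapshots
a, b = 2.0, -0.5                                # arbitrary coefficients

# Linearity of the snapshot operator: evolving a linear combination
# equals the same combination of the evolved snapshots.
assert np.allclose(U @ (a * x + b * y), a * (U @ x) + b * (U @ y))

# The same structural property for the Fourier transform as an operation.
assert np.allclose(np.fft.fft(a * x + b * y),
                   a * np.fft.fft(x) + b * np.fft.fft(y))
```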
OK. I think I see what you are getting at.
First, one could simply reject your conclusion:
The argument here is something like “just because you did the calculations differently doesn’t mean your calculations failed to simulate a consciousness”. Without a real model of how computation gives rise to consciousness (assuming it does), this is hard to resolve.
Second, one could simply accept it: there are some ways to do a given calculation which are ethical, and some ways that aren’t.
I don’t particularly endorse either of these, by the way (I hold no strong position on simulation ethics in general). I just don’t see how your argument establishes that simulation morality is incoherent.