Is more ‘harm’ done in actually carrying out the computation of the torture simulation on our one-tape Turing machine than in simply writing out the initial state of the torture simulation on the Turing machine’s tape?
The tape does not contain all the information necessary. You also need the machine that interprets the tape. A different machine would perform a different computation.
Loading the tape into the machine, but not powering the machine, is also insufficient. You don’t then have a hypothetical Turing machine; rather, you have a machine that performs the function of doing nothing with the tape. Reality doesn’t care about human-intuitive hypotheticals and near-misses.
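To make both points concrete, here is a minimal sketch in Python (the tape contents and transition tables are illustrative assumptions, not anything from the thread): the same tape fed to two different transition tables yields two different computations, and the “unpowered” machine is just the machine whose transition table never fires, which computes the do-nothing function on the tape.

```python
# Toy one-tape Turing machine. The tape alone underdetermines the
# computation; the transition table is the rest of the information.

def run(transitions, tape_str, state="A", head=0, max_steps=100):
    """Run until no rule applies (halt) or max_steps is exceeded."""
    tape = dict(enumerate(tape_str))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        if (state, symbol) not in transitions:
            break                             # no applicable rule: halt
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))

tape = "1101"

flipper = {("A", "0"): ("1", "R", "A"),       # machine 1: invert each bit
           ("A", "1"): ("0", "R", "A")}
eraser  = {("A", "0"): ("_", "R", "A"),       # machine 2: blank the tape
           ("A", "1"): ("_", "R", "A")}
unpowered = {}                                # machine 3: no rule ever fires

print(run(flipper, tape))    # 0010 -- one computation
print(run(eraser, tape))     # ____ -- a different computation, same tape
print(run(unpowered, tape))  # 1101 -- "doing nothing with the tape"
```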
Representation is irrelevant; the seashells are torture because working out the right arrangement of seashells necessarily involves actually performing the computation.
Why privilege the physical movement of the seashells? What if I move the seashells into position for timestep 35469391 and then mentally imagine the position of the seashells at timestep 35469392? You could say I am “performing the calculation,” but you could also say I am “discovering the result of propagating forward the initial conditions.”
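One way to see why the two descriptions are interchangeable (a sketch; Rule 110 is merely a stand-in assumption for whatever update rule the seashell arrangement follows): the configuration at timestep t+1 is a pure function of the configuration at timestep t, fixed by the rule whether anyone physically lays it out or merely infers it.

```python
# The next configuration is a pure function of the current one, so there is
# no formal difference between "performing the calculation" and "discovering
# the result of propagating forward the initial conditions".

RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    """Advance one timestep; cells is a tuple of 0/1 with fixed-0 boundaries."""
    padded = (0,) + cells + (0,)
    return tuple(RULE_110[padded[i-1], padded[i], padded[i+1]]
                 for i in range(1, len(padded) - 1))

state = (0, 0, 0, 1, 0, 0, 0)
# Whether you move shells into this arrangement or only imagine it, the
# successor is the same tuple; the rule fixes it either way.
print(step(state))  # (0, 0, 1, 1, 0, 0, 0)
```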
I don’t think our intuitions about what “really happens” are useful. I think we have to zoom out at least one level and realize that our moral and ethical intuitions only mean anything within our particular instantiation of our causal framework. We can’t be morally responsible for the notional space of computable torture simulations because they exist whether or not we “carry them out.” But perhaps we are morally responsible for particular instantiations of those algorithms.
I don’t know the answer—but I don’t think the answer is that performing mechanical operations with seashells reifies torture but writing down the algorithm does not.
Mental imagery also suffices to perform the computation.
I don’t have coherent thoughts about why knowing the algorithm is morally distinct from knowing its output.