I find it hard to fully develop a theory of morality consistent with your point of view. For example, would it be wrong to (given a computer simulation of a human mind) run that simulation through a given painful experience over and over again? [...]
I agree that such moral questions are difficult, but I don’t see how the difficulty of such questions could constitute evidence about whether a program can “be conscious” or “have a soul” (whatever those mean) or be morally relevant (which has the advantage of being a less abstract concept).
You can ask those same questions without mentioning Turing Machines: what if we have a device capable of making a perfect copy of any physical object, down to each individual quark? Is it morally wrong to kill such a copy of a human? Does the answer to that question have any relevance to the question of whether building such a device is physically possible?
To me, it sounds a bit like saying that since our protocols for seating people around a table are meaningless in zero gravity, people cannot possibly live in zero gravity.