Well, how about this: physics as we know it can be approximated arbitrarily closely by a computable algorithm (and possibly computed directly as well, although I’m less sure about that; certainly every calculation we can do by manipulating symbols is computable). Physics as we know it also seems to be accurate to an extremely high degree of precision everywhere except inside a black hole.
Brains are physical things. And given that thermal noise should have more of an influence on a brain than the slight inaccuracy of any sufficiently close computable approximation, what are the chances a brain does anything non-computable that could have any relevance to consciousness? I don’t expect to see black holes inside brains, at least.
In any case, your original question was about the moral worth of Turing machines, was it not? We can’t use “Turing machines can’t be conscious” as an excuse not to worry about those moral questions, because we aren’t sure whether Turing machines can be conscious. “It doesn’t feel like they should be” isn’t a strong enough argument to justify doing something that would result in, for example, the torture of conscious entities if we were incorrect.
So here’s my actual answer to your question: as a rule of thumb, act as if any simulation of “sufficient fidelity” is as real as you or I (well, multiplied by your probability that such a simulation would be conscious, maybe 0.5, for expected utilities). This means no killing, no torture, etc.
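To make that weighting concrete, here’s a minimal sketch in Python of the expected-utility discounting I have in mind; the function name and the 0.5 credence are just illustrative assumptions on my part, not settled numbers.

```python
# Illustrative sketch only: weight a potential harm to a simulation by the
# probability that the simulation is conscious. The 0.5 credence and the
# harm scale are assumptions for the example, not established values.

def expected_moral_weight(p_conscious: float, harm_if_conscious: float) -> float:
    """Expected disutility of harming a simulation, given uncertainty
    about whether it is conscious."""
    return p_conscious * harm_if_conscious

# Example: with a 0.5 credence that a sufficient-fidelity simulation is
# conscious, a given harm to it counts half as much as the same harm to
# a person we are certain is conscious.
print(expected_moral_weight(p_conscious=0.5, harm_if_conscious=1.0))  # 0.5
```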
’Course, this shouldn’t be a practical problem for a while yet, and we may have learned more by the time we’re creating simulations of “sufficient fidelity”.