You wrote:
“This is the part where you’re going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines.”
At this stage, you’ve just assumed the conclusion; you’ve assumed what you want to prove.
“Therefore, consciousness—whatever we mean when we say that—is indeed possible for Turing machines.”
Having assumed that A is true, it is easy to prove that A is true. You haven’t given an argument.
“To refute this proposition, you’d need to present evidence of a human being performing an operation that can’t be done by a Turing machine.”
It’s not my job to refute the proposition. Currently, as far as I can tell, the question is open. If I did refute it, then my (and several other people’s) conjecture would be proven. But if I don’t refute it, that doesn’t mean your proposition is true; it just means it hasn’t yet been proven false. Those are quite different things, you know.
Well, how about this: physics as we know it can be approximated arbitrarily closely by a computable algorithm (and possibly computed directly as well, although I’m less sure about that; certainly all calculations we can do by manipulating symbols are computable). Physics as we know it also seems to be accurate to an extremely high degree of precision everywhere apart from inside a black hole.
Brains are physical things. Now, considering that thermal noise should have more of an influence than the slight inaccuracy in any such computation, what are the chances that a brain does anything non-computable that could have any relevance to consciousness? I don’t expect to see black holes inside brains, at least.
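To make “approximated arbitrarily closely by a computable algorithm” concrete, here is a toy sketch (the system and all names are my own, purely illustrative, and assume nothing beyond standard Python): a harmonic oscillator integrated by a finite, step-by-step procedure whose error shrinks as the step size shrinks.

```python
import math

def simulate_oscillator(t_end: float, dt: float) -> float:
    """Integrate x'' = -x (a unit-mass spring) with semi-implicit Euler,
    a finite symbolic procedure a Turing machine could carry out.
    Returns the position x at time t_end, starting from x=1, v=0."""
    x, v = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        v -= x * dt   # update velocity from the force -x
        x += v * dt   # update position from the new velocity
    return x

# The exact solution is x(t) = cos(t); halving the step size shrinks the error,
# which is the sense of "arbitrarily close" for this toy system.
for dt in (0.1, 0.05, 0.025, 0.0125):
    approx = simulate_oscillator(10.0, dt)
    print(f"dt={dt:<7} x(10)={approx:+.6f} error={abs(approx - math.cos(10.0)):.2e}")
```

None of this bears on whether real physics is exactly computable; it only illustrates what a computable approximation means for a simple dynamical system.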
In any case, your original question was about the moral worth of Turing machines, was it not? We can’t use “Turing machines can’t be conscious” as an excuse not to worry about those moral questions, because we aren’t sure whether Turing machines can be conscious. “It doesn’t feel like they should be” isn’t really a strong enough argument to justify doing something that would result in, for example, the torture of conscious entities if we were incorrect.
So here’s my actual answer to your question: as a rule of thumb, act as if any simulation of “sufficient fidelity” is as real as you or I (well, multiplied by your probability that such a simulation would be conscious, maybe 0.5, for expected utilities). This means no killing, no torture, etc.
’Course, this shouldn’t be a practical problem for a while yet, and we may have learned more by the time we’re creating simulations of “sufficient fidelity”.
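To spell out the expected-utility arithmetic behind the rule of thumb above, here is a minimal sketch; every number in it is made up for illustration, and only the structure of the calculation matters.

```python
# Toy expected-utility check for running a high-fidelity simulation in a way
# that would constitute torture if the simulation turned out to be conscious.
p_conscious = 0.5            # your probability that such a simulation is conscious
harm_if_conscious = -1000.0  # disutility if it is conscious and you mistreat it
harm_if_not = 0.0            # disutility if it is not conscious
benefit = 10.0               # whatever you hoped to gain by doing it anyway

expected_harm = p_conscious * harm_if_conscious + (1 - p_conscious) * harm_if_not
expected_total = benefit + expected_harm

print(f"expected harm:  {expected_harm}")   # -500.0
print(f"expected total: {expected_total}")  # -490.0, so the rule says: don't
```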
“At this stage, you’ve just assumed the conclusion; you’ve assumed what you want to prove.”
No—what I’m pointing out is that the question “what are the ethical implications for Turing machines” is the same question as “what are the ethical implications for human beings” in that case.
“It’s not my job to refute the proposition. Currently, as far as I can tell, the question is open.”
Not on Less Wrong, it isn’t. But I think I may have misunderstood your situation as being one of somebody coming to Less Wrong to learn about rationality of the “Extreme Bayesian” variety; if you just dropped in here to debate the consciousness question, you probably won’t find the experience much fun. ;-)
“If I did refute it, then my (and several other people’s) conjecture would be proven. But if I don’t refute it, that doesn’t mean your proposition is true; it just means it hasn’t yet been proven false. Those are quite different things, you know.”
Less Wrong has different—and far stricter—rules of evidence than just about any other venue for such a discussion.
In particular, to meaningfully partake in this discussion, the minimum requirement is to understand the Mind Projection Fallacy at an intuitive level, or else you’ll just be arguing about your own intuitions… and everybody will just tune you out.
Without that understanding, you’re in exactly the same place as a creationist wandering into an evolutionary biology forum, without understanding what “theory” and “evidence” mean, and expecting everyone to disprove creationism without making you read any introductory material on the subject.
In this case, the introductory material is the Sequences—especially the ones that debunk supernaturalism, zombies, definitional arguments, and the mind projection fallacy.
When you’ve absorbed those concepts, you’ll understand why the things you’re calling open questions are not even real questions to begin with, let alone propositions to be proved or disproved! (They’re actually on a par with creationists’ notions of “missing links”—a confusion about language and categories, rather than an argument about reality.)
I only replied to you because I thought perhaps you had read the Sequences (or some portion thereof) and had overlooked their application in this context (something many people do for a while until it clicks that, oh yeah, rationality applies to everything).
So, at this point I’ll bow out, as there is little to be gained by discussing something when we can’t even be sure we agree on the proper usage of words.
“At this stage, you’ve just assumed the conclusion; you’ve assumed what you want to prove.
No—what I’m pointing out is that the question “what are the ethical implications for Turing machines” is the same question as “what are the ethical implications for human beings” in that case.”
Yeah, look, I’m not stupid. If someone assumes A, then actually bothers to write out the modus ponens (A, and A->B where A->B is an obvious statement, therefore B), and then wants to point out, ‘hey look, I didn’t assume B, I proved it!’, that really doesn’t mean they proved anything deep. They still just assumed the conclusion they wanted, since they assumed a statement that trivially implies it. But I’ll bow out now too... I only followed a link from a different forum, and indeed my fears were confirmed that this is a group of people who don’t have anything meaningful or rational to say about certain concepts. I mean, you don’t even realize that certain things are in principle open to physical test! And you drew an analogy to creationism vs. evolution without realizing that evolution had, and has, many positive pieces of observable, physical evidence in its favor, while your position at present has at best very minimal observable, tangible evidence going for it (certain recent experiments in neuroscience can be charitably interpreted as supporting your argument, but on their own they are certainly not enough).
If you’re looking for a clear, coherent and true explanation of consciousness, you aren’t going to find that anywhere today, especially not in off-the-cuff replies; and if someone does eventually figure it out, you ought to expect it to have a book’s worth of prerequisites and not be something that can be explained in a thousand words of comment replies. Consciousness is an extraordinarily difficult and confusing topic, and, generally speaking, the only way people come up with explanations that seem simple is by making simplifying assumptions that are wrong.
As for the more specific question of whether humans are Turing computable, this follows if (a) the laws of physics in a finite volume are Turing computable, and (b) human minds run entirely on physics. Both of these are believed to be true: (a) based on what we know about the laws themselves, and (b) based on neuroscience, which shows that physics is necessary and sufficient to produce humans, combined with Occam’s Razor, which says that we shouldn’t posit anything extra that we don’t need. If you’d like to zoom in on the explanation of one of these points, I’d be happy to provide references.
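To be explicit about the shape of that argument, here is a minimal sketch in Lean; the proposition names are mine, and the bridging premise is simply assumed rather than argued for. The deduction itself is a one-step modus ponens, so all of the real content sits in premises (a), (b), and the bridge connecting them to the conclusion.

```lean
-- Abstract placeholders; nothing here formalizes physics or minds themselves.
variable (PhysicsComputable MindsRunOnPhysics MindsComputable : Prop)

example
    (bridge : PhysicsComputable ∧ MindsRunOnPhysics → MindsComputable)
    (ha : PhysicsComputable)      -- premise (a)
    (hb : MindsRunOnPhysics)      -- premise (b)
    : MindsComputable :=
  bridge ⟨ha, hb⟩
```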
This conversation is much too advanced for you.
Please read and understand the plethora of material that has been linked for you. This community does not dwell on solved problems of philosophy, and many have already taken a great deal of time to provide you with the relevant information.